Updates from: 05/27/2022 01:15:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
Microsoft provides direct support for the latest agent version and one version b
### Download link

You can download the latest version of the agent using [this link](https://aka.ms/onpremprovisioningagent).
+### 1.1.892.0
+
+May 20th, 2022 - released for download
+
+#### Fixed issues
+
+- We added support for exporting changes to integer attributes, which benefits customers using the generic LDAP connector.
+### 1.1.846.0

April 11th, 2022 - released for download
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 04/13/2022 Last updated : 05/25/2022
The SCIM spec doesn't define a SCIM-specific scheme for authentication and autho
|Username and password (not recommended or supported by Azure AD)|Easy to implement|Insecure - [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984)|Not supported for new gallery or non-gallery apps.|
|Long-lived bearer token|Long-lived tokens do not require a user to be present. They are easy for admins to use when setting up provisioning.|Long-lived tokens can be hard to share with an admin without using insecure methods such as email. |Supported for gallery and non-gallery apps. |
|OAuth authorization code grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. A real user must be present during initial authorization, adding a level of accountability. |Requires a user to be present. If the user leaves the organization, the token is invalid and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog, in addition to support for configurable auth / token URLs on the gallery app.|
-|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be completely automated, and new tokens can be silently requested without user interaction. ||Not supported for gallery and non-gallery apps. Support is in our backlog.|
+|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be completely automated, and new tokens can be silently requested without user interaction. ||Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth client credentials grant on non-gallery is in our backlog.|
> [!NOTE]
> It's not recommended to leave the token field blank in the Azure AD provisioning configuration custom app UI. The token generated is primarily available for testing purposes.
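To make the token options above concrete, here's a minimal, hypothetical sketch of how a SCIM client presents a bearer token (long-lived or OAuth-issued) to an endpoint; the base URL and token value are placeholders, not values from this article.

```csharp
// Minimal sketch: call a SCIM endpoint with a bearer token (placeholder URL and token).
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ScimClientSketch
{
    static async Task Main()
    {
        using var client = new HttpClient { BaseAddress = new Uri("https://example.com/scim/") };

        // The secret token configured in the provisioning UI is sent as a bearer token.
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access-or-secret-token>");

        // List users, roughly as the Azure AD provisioning service would when querying the endpoint.
        HttpResponseMessage response = await client.GetAsync("v2/Users?startIndex=1&count=10");
        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}
```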
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
To enable number matching in the Azure AD portal, complete the following steps:
![Screenshot of enabling number match.](media/howto-authentication-passwordless-phone/enable-number-matching.png)

>[!NOTE]
->[Least privilege role in Azure Active Directory - Multi-factor Authentication](https://docs.microsoft.com/azure/active-directory/roles/delegate-by-task#multi-factor-authentication)
+>[Least privilege role in Azure Active Directory - Multi-factor Authentication](../roles/delegate-by-task.md#multi-factor-authentication)
Number matching is not supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.

## Next steps
-[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
-
+[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
There are multiple scenarios that organizations can now enable using filter for
- **Restrict access to privileged resources**. For this example, let's say you want to allow access to Microsoft Azure Management from a user who is assigned the privileged role of Global Administrator, has satisfied multifactor authentication, and is accessing from a device that is one of the [privileged or secure admin workstations](/security/compass/privileged-access-devices) and is attested as compliant. For this scenario, organizations would create two Conditional Access policies:
 - Policy 1: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
- - Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block. Learn how to [update extensionAttributes on an Azure AD device object](https://docs.microsoft.com/graph/api/device-update?view=graph-rest-1.0&tabs=http).
+ - Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding a filter for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block. Learn how to [update extensionAttributes on an Azure AD device object](/graph/api/device-update?tabs=http&view=graph-rest-1.0).
- **Block access to organization resources from devices running an unsupported operating system**. For this example, let's say you want to block access to resources from devices running a Windows OS version older than Windows 10. For this scenario, organizations would create the following Conditional Access policy:
 - All users, accessing all cloud apps, excluding a filter for devices using rule expression device.operatingSystem equals Windows and device.operatingSystemVersion startsWith "10.0" and for Access controls, Block.
- **Do not require multifactor authentication for specific accounts on specific devices**. For this example, let's say you don't want to require multifactor authentication when service accounts are used on specific devices like Teams phones or Surface Hub devices. For this scenario, organizations would create the following two Conditional Access policies:
The filter for devices condition in Conditional Access evaluates policy based on
- [Update device Graph API](/graph/api/device-update?tabs=http)
- [Conditional Access: Conditions](concept-conditional-access-conditions.md)
- [Common Conditional Access policies](concept-conditional-access-policy-common.md)
-- [Securing devices as part of the privileged access story](/security/compass/privileged-access-devices)
+- [Securing devices as part of the privileged access story](/security/compass/privileged-access-devices)
active-directory Msal Net Migration Confidential Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-confidential-client.md
description: Learn how to migrate a confidential client application from Azure A
-
Last updated 06/08/2021 -+ #Customer intent: As an application developer, I want to migrate my confidential client app from ADAL.NET to MSAL.NET. # Migrate confidential client applications from ADAL.NET to MSAL.NET
-This article describes how to migrate a confidential client application from Azure Active Directory Authentication Library for .NET (ADAL.NET) to Microsoft Authentication Library for .NET (MSAL.NET). Confidential client applications are web apps, web APIs, and daemon applications that call another service on their own behalf. For more information about confidential applications, see [Authentication flows and application scenarios](authentication-flows-app-scenarios.md). If your app is based on ASP.NET Core, use [Microsoft.Identity.Web](microsoft-identity-web.md).
+In this how-to guide, you'll migrate a confidential client application from Azure Active Directory Authentication Library for .NET (ADAL.NET) to Microsoft Authentication Library for .NET (MSAL.NET). Confidential client applications include web apps, web APIs, and daemon applications that call another service on their own behalf. For more information about confidential apps, see [Authentication flows and application scenarios](authentication-flows-app-scenarios.md). If your app is based on ASP.NET Core, see [Microsoft.Identity.Web](microsoft-identity-web.md).
For app registrations:
For app registrations:
## Migration steps
-1. Find the code by using ADAL.NET in your app.
+1. Find the code that uses ADAL.NET in your app.
- The code that uses ADAL in a confidential client application instantiates `AuthenticationContext` and calls either `AcquireTokenByAuthorizationCode` or one override of `AcquireTokenAsync` with the following parameters:
+ The code that uses ADAL in a confidential client app instantiates `AuthenticationContext` and calls either `AcquireTokenByAuthorizationCode` or one override of `AcquireTokenAsync` with the following parameters:
 - A `resourceId` string. This variable is the app ID URI of the web API that you want to call.
 - An instance of `IClientAssertionCertificate` or `ClientAssertion`. This instance provides the client credentials for your app to prove its identity.
-1. After you've identified that you have apps that are using ADAL.NET, install the MSAL.NET NuGet package [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) and update your project library references. For more information, see [Install a NuGet package](https://www.bing.com/search?q=install+nuget+package). If you want to use token cache serializers, also install [Microsoft.Identity.Web.TokenCache](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenCache).
+1. After you've identified that you have apps that are using ADAL.NET, install the MSAL.NET NuGet package [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) and update your project library references. For more information, see [Install a NuGet package](https://www.bing.com/search?q=install+nuget+package). To use token cache serializers, install [Microsoft.Identity.Web.TokenCache](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenCache).
1. Update the code according to the confidential client scenario. Some steps are common and apply across all the confidential client scenarios. Other steps are unique to each scenario.
- The confidential client scenarios are:
+ Confidential client scenarios:
   - [Daemon scenarios](?tabs=daemon#migrate-daemon-apps) supported by web apps, web APIs, and daemon console applications.
   - [Web API calling downstream web APIs](?tabs=obo#migrate-a-web-api-that-calls-downstream-web-apis) supported by web APIs calling downstream web APIs on behalf of the user.
   - [Web app calling web APIs](?tabs=authcode#migrate-a-web-api-that-calls-downstream-web-apis) supported by web apps that sign in users and call a downstream web API.
-You might have provided a wrapper around ADAL.NET to handle certificates and caching. This article uses the same approach to illustrate the process of migrating from ADAL.NET to MSAL.NET. However, this code is only for demonstration purposes. Don't copy/paste these wrappers or integrate them in your code as they are.
+You might have provided a wrapper around ADAL.NET to handle certificates and caching. This guide uses the same approach to illustrate the process of migrating from ADAL.NET to MSAL.NET. However, this code is only for demonstration purposes. Don't copy/paste these wrappers or integrate them in your code as they are.
## [Daemon](#tab/daemon)
The ADAL code for your app uses daemon scenarios if it contains a call to `Authe
- A resource (app ID URI) as the first parameter
- `IClientAssertionCertificate` or `ClientAssertion` as the second parameter
-`AuthenticationContext.AcquireTokenAsync` doesn't have a parameter of type `UserAssertion`. If it does, then your app is a web API, and it's using the [web API calling downstream web APIs](?tabs=obo#migrate-a-web-api-that-calls-downstream-web-apis) scenario.
+`AuthenticationContext.AcquireTokenAsync` doesn't have a parameter of type `UserAssertion`. If it does, then your app is a web API, and it uses the [web API calling downstream web APIs](?tabs=obo#migrate-a-web-api-that-calls-downstream-web-apis) scenario.
#### Update the code of daemon scenarios

[!INCLUDE [Common steps](includes/msal-net-adoption-steps-confidential-clients.md)]
-In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` with a call to `IConfidentialClientApplication.AcquireTokenForClient`.
+In this case, replace the call to `AuthenticationContext.AcquireTokenAsync` with a call to `IConfidentialClientApplication.AcquireTokenForClient`.
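Before the `AuthWrapper` comparison that follows, here's a minimal standalone sketch of the MSAL.NET side of a daemon call; the client ID, tenant ID, secret, and scope are placeholders, and keeping the application in a static field is what lets the token cache be reused.

```csharp
// Minimal daemon sketch (placeholder IDs, secret, and scope); not the article's AuthWrapper.
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class DaemonTokenSketch
{
    // Keep one IConfidentialClientApplication instance so its token cache is reused across calls.
    private static readonly IConfidentialClientApplication app =
        ConfidentialClientApplicationBuilder.Create("your-client-id")
            .WithClientSecret("your-client-secret") // or .WithCertificate(yourX509Certificate)
            .WithAuthority("https://login.microsoftonline.com/your-tenant-id")
            .Build();

    public static async Task<string> GetAppTokenAsync()
    {
        // MSAL uses scopes instead of a resource ID: request "<app ID URI>/.default".
        string[] scopes = { "https://graph.microsoft.com/.default" };
        AuthenticationResult result = await app.AcquireTokenForClient(scopes).ExecuteAsync();
        return result.AccessToken;
    }
}
```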
Here's a comparison of ADAL.NET and MSAL.NET code for daemon scenarios:
public partial class AuthWrapper
#### Benefit from token caching
-To benefit from the in-memory cache, the instance of `IConfidentialClientApplication` needs to be kept in a member variable. If you re-create the confidential client application each time you request a token, you won't benefit from the token cache.
+To benefit from the in-memory cache, the instance of `IConfidentialClientApplication` must be kept in a member variable. If you re-create the confidential client app each time you request a token, you won't benefit from the token cache.
-You'll need to serialize `AppTokenCache` if you choose not to use the default in-memory app token cache. Similarly, If you want to implement a distributed token cache, you'll need to serialize `AppTokenCache`. For details, see [Token cache for a web app or web API (confidential client application)](msal-net-token-cache-serialization.md?tabs=aspnet) and the sample [active-directory-dotnet-v1-to-v2/ConfidentialClientTokenCache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
+You'll need to serialize `AppTokenCache` if you don't use the default in-memory app token cache. Similarly, if you want to implement a distributed token cache, serialize `AppTokenCache`. For details, see [Token cache for a web app or web API (confidential client application)](msal-net-token-cache-serialization.md?tabs=aspnet) and the sample [active-directory-dotnet-v1-to-v2/ConfidentialClientTokenCache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
[Learn more about the daemon scenario](scenario-daemon-overview.md) and how it's implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
public partial class AuthWrapper
#### Benefit from token caching
-For token caching in OBOs, you need to use a distributed token cache. For details, see [Token cache for a web app or web API (confidential client application)](msal-net-token-cache-serialization.md?tabs=aspnet) and read through [sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
+For token caching in OBOs, use a distributed token cache. For details, see [Token cache for a web app or web API (confidential client app)](msal-net-token-cache-serialization.md?tabs=aspnet) and read through [sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
```CSharp
app.UseInMemoryTokenCaches(); // or a distributed token cache.
```
-[Learn more about web APIs calling downstream web APIs](scenario-web-api-call-api-overview.md) and how they're implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
+[Learn more about web APIs calling downstream web APIs](scenario-web-api-call-api-overview.md) and how they're implemented with MSAL.NET or Microsoft.Identity.Web in new apps.
## [Web app calling web APIs](#tab/authcode)

### Migrate a web app that calls web APIs
-If your app uses ASP.NET Core, we strongly recommend that you update to Microsoft.Identity.Web, which processes everything for you. For a quick presentation, see the [Microsoft.Identity.Web announcement of general availability](https://github.com/AzureAD/microsoft-identity-web/wiki/1.0.0). For details about how to use it in a web app, see [Why use Microsoft.Identity.Web in web apps?](https://aka.ms/ms-id-web/webapp).
+If your app uses ASP.NET Core, we strongly recommend that you update to Microsoft.Identity.Web because it processes everything for you. For a quick presentation, see the [Microsoft.Identity.Web announcement of general availability](https://github.com/AzureAD/microsoft-identity-web/wiki/1.0.0). For details about how to use it in a web app, see [Why use Microsoft.Identity.Web in web apps?](https://aka.ms/ms-id-web/webapp).
-Web apps that sign in users and call web APIs on behalf of users use the OAuth2.0 [authorization code flow](v2-oauth2-auth-code-flow.md). Typically:
+Web apps that sign in users and call web APIs on behalf of users employ the OAuth2.0 [authorization code flow](v2-oauth2-auth-code-flow.md). Typically:
-1. The web app signs in a user by executing a first leg of the authorization code flow. It does this by going to the Microosft identity platform authorize endpoint. The user signs in and performs multifactor authentications if needed. As an outcome of this operation, the app receives the authorization code. The authentication library is not used at this stage.
+1. The app signs in a user by executing the first leg of the authorization code flow by going to the Microsoft identity platform authorize endpoint. The user signs in and performs multi-factor authentication if needed. As an outcome of this operation, the app receives the authorization code. The authentication library isn't used at this stage.
1. The app executes the second leg of the authorization code flow. It uses the authorization code to get an access token, an ID token, and a refresh token. Your application needs to provide the `redirectUri` value, which is the URI where the Microsoft identity platform endpoint will provide the security tokens. After the app receives that URI, it typically calls `AcquireTokenByAuthorizationCode` for ADAL or MSAL to redeem the code and to get a token that will be stored in the token cache.
-1. The app uses ADAL or MSAL to call `AcquireTokenSilent` so that it can get tokens for calling the necessary web APIs. This is done from the web app controllers.
+1. The app uses ADAL or MSAL to call `AcquireTokenSilent` to get tokens for calling the necessary web APIs from the web app controllers.
#### Find out if your code uses the auth code flow
The ADAL code for your app uses auth code flow if it contains a call to `Authent
[!INCLUDE [Common steps](includes/msal-net-adoption-steps-confidential-clients.md)]
-In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` with a call to `IConfidentialClientApplication.AcquireTokenByAuthorizationCode`.
+In this case, replace the call to `AuthenticationContext.AcquireTokenAsync` with a call to `IConfidentialClientApplication.AcquireTokenByAuthorizationCode`.
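Before the side-by-side comparison, here's a minimal sketch of the MSAL.NET calls in this scenario; the IDs, redirect URI, and scope are placeholders, and the controller plumbing is omitted.

```csharp
// Minimal auth-code sketch (placeholder values); redeem the code once, then rely on the cache.
using System.Threading.Tasks;
using Microsoft.Identity.Client;

class WebAppTokenSketch
{
    private static readonly IConfidentialClientApplication app =
        ConfidentialClientApplicationBuilder.Create("your-client-id")
            .WithClientSecret("your-client-secret")
            .WithAuthority("https://login.microsoftonline.com/your-tenant-id")
            .WithRedirectUri("https://localhost:5001/signin-oidc")
            .Build();

    // Second leg of the flow: redeem the authorization code received on the redirect URI.
    public static Task<AuthenticationResult> RedeemCodeAsync(string authorizationCode) =>
        app.AcquireTokenByAuthorizationCode(new[] { "user.read" }, authorizationCode).ExecuteAsync();

    // Later, from controllers: get a token for the signed-in user from the token cache.
    public static async Task<AuthenticationResult> GetTokenSilentlyAsync(string accountIdentifier)
    {
        IAccount account = await app.GetAccountAsync(accountIdentifier);
        return await app.AcquireTokenSilent(new[] { "user.read" }, account).ExecuteAsync();
    }
}
```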
Here's a comparison of sample authorization code flows for ADAL.NET and MSAL.NET:
public partial class AuthWrapper
#### Benefit from token caching
-Because your web app uses `AcquireTokenByAuthorizationCode`, your app needs to use a distributed token cache for token caching. For details, see [Token cache for a web app or web API](msal-net-token-cache-serialization.md?tabs=aspnet) and read through [sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
+Because your web app uses `AcquireTokenByAuthorizationCode`, it needs to use a distributed token cache for token caching. For details, see [Token cache for a web app or web API](msal-net-token-cache-serialization.md?tabs=aspnet) and read through [sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
```CSharp
app.UseInMemoryTokenCaches(); // or a distributed token cache.
```

#### Handling MsalUiRequiredException

When your controller attempts to acquire a token silently for different
-scopes/resources, MSAL.NET might throw an `MsalUiRequiredException`. This is expected if, for instance, the user needs to re-sign-in, or if the
+scopes/resources, MSAL.NET might throw an `MsalUiRequiredException` as expected if the user needs to re-sign-in, or if the
access to the resource requires more claims (because of a conditional access
-policy for instance). For details on mitigation see how to [Handle errors and exceptions in MSAL.NET](msal-error-handling-dotnet.md).
+policy). For details on mitigation see how to [Handle errors and exceptions in MSAL.NET](msal-error-handling-dotnet.md).
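A minimal sketch of that mitigation, assuming `app`, `scopes`, and `account` are set up as in the earlier snippets: catch the exception and let the controller start an interactive sign-in or consent challenge (how that challenge is issued depends on your web framework and isn't shown here).

```csharp
using System.Threading.Tasks;
using Microsoft.Identity.Client;

static class TokenAcquisitionSketch
{
    // Try silent acquisition; signal the caller to trigger an interactive challenge when required.
    public static async Task<string> TryGetTokenAsync(
        IConfidentialClientApplication app, string[] scopes, IAccount account)
    {
        try
        {
            AuthenticationResult result = await app.AcquireTokenSilent(scopes, account).ExecuteAsync();
            return result.AccessToken;
        }
        catch (MsalUiRequiredException)
        {
            // The user must sign in again or satisfy more claims (for example, a Conditional Access
            // policy). Returning null lets the controller redirect to an interactive flow.
            return null;
        }
    }
}
```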
[Learn more about web apps calling web APIs](scenario-web-app-call-api-overview.md) and how they're implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
policy for instance). For details on mitigation see how to [Handle errors and ex
Key benefits of MSAL.NET for your app include:
-- **Resilience**. MSAL.NET helps make your app resilient through the following:
+- **Resilience**. MSAL.NET helps make your app resilient through:
- - Azure AD Cached Credential Service (CCS) benefits. CCS operates as an Azure AD backup.
- - Proactive renewal of tokens if the API that you call enables long-lived tokens through [continuous access evaluation](app-resilience-continuous-access-evaluation.md).
+ - Azure AD Cached Credential Service (CCS) benefits. CCS operates as an Azure AD backup.
+ - Proactive renewal of tokens if the API that you call enables long-lived tokens through [continuous access evaluation](app-resilience-continuous-access-evaluation.md).
- **Security**. You can acquire Proof of Possession (PoP) tokens if the web API that you want to call requires it. For details, see [Proof Of Possession tokens in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Proof-Of-Possession-(PoP)-tokens).
-- **Performance and scalability**. If you don't need to share your cache with ADAL.NET, disable the legacy cache compatibility when you're creating the confidential client application (`.WithLegacyCacheCompatibility(false)`). This increases the performance significantly.
+- **Performance and scalability**. If you don't need to share your cache with ADAL.NET, disable the legacy cache compatibility when you're creating the confidential client application (`.WithLegacyCacheCompatibility(false)`) to significantly increase performance.
```csharp
app = ConfidentialClientApplicationBuilder.Create(ClientId)
```
If you get an exception with either of the following messages:
> `subscriptions for the tenant. Check to make sure you have the correct tenant ID. Check with your subscription`
> `administrator.`
-You can troubleshoot the exception by using these steps:
+Troubleshoot the exception using these steps:
1. Confirm that you're using the latest version of [MSAL.NET](https://www.nuget.org/packages/Microsoft.Identity.Client/).
-1. Confirm that the authority host that you set when building the confidential client application and the authority host that you used with ADAL are similar. In particular, is it the same [cloud](msal-national-cloud.md) (Azure Government, Azure China 21Vianet, or Azure Germany)?
+1. Confirm that the authority host that you set when building the confidential client app and the authority host that you used with ADAL are similar. In particular, is it the same [cloud](msal-national-cloud.md) (Azure Government, Azure China 21Vianet, or Azure Germany)?
### MsalClientException
-In multi-tenant applications, you can have scenarios where you specify a common authority when building the application, but then want to target a specific tenant (for instance the tenant of the user) when calling a web API. Since MSAL.NET 4.37.0, when you specify `.WithAzureRegion` at the application creation, you can no longer specify the Authority using `.WithAuthority` during the token requests. If you do, you'll get the following error when updating from previous versions of MSAL.NET:
+In multi-tenant apps, you might specify a common authority when building the app but then want to target a specific tenant (for instance, the tenant of the user) when calling a web API. Since MSAL.NET 4.37.0, when you specify `.WithAzureRegion` at the app creation, you can no longer specify the Authority using `.WithAuthority` during the token requests. If you do, you'll get the following error when updating from previous versions of MSAL.NET:
`MsalClientException - "You configured WithAuthority at the request level, and also WithAzureRegion. This is not supported when the environment changes from application to request. Use WithTenantId at the request level instead."`
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
You can also specify options to limit the size of the in-memory token cache:
#### Distributed caches
-If you use `app.AddDistributedTokenCache`, the token cache is an adapter against the .NET `IDistributedCache` implementation. So you can choose between a SQL Server cache, a Redis cache, an Azure Cosmos DB cache, or any other cache implementing the [IDistributedCache](https://docs.microsoft.com/dotnet/api/microsoft.extensions.caching.distributed.idistributedcache?view=dotnet-plat-ext-6.0) interface.
+If you use `app.AddDistributedTokenCache`, the token cache is an adapter against the .NET `IDistributedCache` implementation. So you can choose between a SQL Server cache, a Redis cache, an Azure Cosmos DB cache, or any other cache implementing the [IDistributedCache](/dotnet/api/microsoft.extensions.caching.distributed.idistributedcache?view=dotnet-plat-ext-6.0) interface.
For testing purposes only, you may want to use `services.AddDistributedMemoryCache()`, an in-memory implementation of `IDistributedCache`.
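As a hedged sketch of that wiring (the `AddDistributedTokenCache` extension comes from the Microsoft.Identity.Web.TokenCache package mentioned earlier; IDs and the Redis connection string are placeholders):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Client;
using Microsoft.Identity.Web; // from the Microsoft.Identity.Web.TokenCache package

IConfidentialClientApplication app = ConfidentialClientApplicationBuilder.Create("your-client-id")
    .WithClientSecret("your-client-secret")
    .WithAuthority("https://login.microsoftonline.com/your-tenant-id")
    .Build();

// Plug an IDistributedCache implementation into the MSAL token cache.
app.AddDistributedTokenCache(services =>
{
    // In-memory IDistributedCache: convenient for testing, but not truly distributed.
    services.AddDistributedMemoryCache();

    // Or Redis (requires the Microsoft.Extensions.Caching.StackExchangeRedis package):
    // services.AddStackExchangeRedisCache(options => options.Configuration = "localhost:6379");
});
```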
The following samples illustrate token cache serialization.
| | -- | -- |
|[active-directory-dotnet-desktop-msgraph-v2](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | Desktop (WPF) | Windows Desktop .NET (WPF) application that calls the Microsoft Graph API. ![Diagram that shows a topology with a desktop app client flowing to Azure Active Directory by acquiring a token interactively and to Microsoft Graph.](media/msal-net-token-cache-serialization/topology.png)|
|[active-directory-dotnet-v1-to-v2](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2) | Desktop (console) | Set of Visual Studio solutions that illustrate the migration of Azure AD v1.0 applications (using ADAL.NET) to Microsoft identity platform applications (using MSAL.NET). In particular, see [Token cache migration](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/TokenCacheMigration/README.md) and [Confidential client token cache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache). |
-[ms-identity-aspnet-webapp-openidconnect](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | ASP.NET (net472) | Example of token cache serialization in an ASP.NET MVC application (using MSAL.NET). In particular, see [MsalAppBuilder](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/Utils/MsalAppBuilder.cs).
+[ms-identity-aspnet-webapp-openidconnect](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | ASP.NET (net472) | Example of token cache serialization in an ASP.NET MVC application (using MSAL.NET). In particular, see [MsalAppBuilder](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/Utils/MsalAppBuilder.cs).
active-directory Multi Service Web App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-user.md
Using the [Microsoft.Identity.Web library](https://github.com/AzureAD/microsoft-
To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf). > [!NOTE]
-> The Microsoft.Identity.Web library isn't required in your web app for basic authentication/authorization or to authenticate requests with Microsoft Graph. It's possible to [securely call downstream APIs](/azure/app-service/tutorial-auth-aad#call-api-securely-from-server-code) with only the App Service authentication/authorization module enabled.
+> The Microsoft.Identity.Web library isn't required in your web app for basic authentication/authorization or to authenticate requests with Microsoft Graph. It's possible to [securely call downstream APIs](../../app-service/tutorial-auth-aad.md#call-api-securely-from-server-code) with only the App Service authentication/authorization module enabled.
> > However, the App Service authentication/authorization is designed for more basic authentication scenarios. For more complex scenarios (handling custom claims, for example), you need the Microsoft.Identity.Web library or [Microsoft Authentication Library](msal-overview.md). There's a little more setup and configuration work in the beginning, but the Microsoft.Identity.Web library can run alongside the App Service authentication/authorization module. Later, when your web app needs to handle more complex scenarios, you can disable the App Service authentication/authorization module and Microsoft.Identity.Web will already be a part of your app.
active-directory Multi Service Web App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-authentication-app-service.md
Learn how to enable authentication for your web app running on Azure App Service
App Service provides built-in authentication and authorization support, so you can sign in users and access data by writing minimal or no code in your web app. Using the App Service authentication/authorization module isn't required, but helps simplify authentication and authorization for your app. This article shows how to secure your web app with the App Service authentication/authorization module by using Azure Active Directory (Azure AD) as the identity provider.
-The authentication/authorization module is enabled and configured through the Azure portal and app settings. No SDKs, specific languages, or changes to application code are required. A variety of identity providers are supported, which includes Azure AD, Microsoft Account, Facebook, Google, and Twitter. When the authentication/authorization module is enabled, every incoming HTTP request passes through it before being handled by app code. To learn more, see [Authentication and authorization in Azure App Service](/azure/app-service/overview-authentication-authorization.md).
+The authentication/authorization module is enabled and configured through the Azure portal and app settings. No SDKs, specific languages, or changes to application code are required. A variety of identity providers are supported, which includes Azure AD, Microsoft Account, Facebook, Google, and Twitter. When the authentication/authorization module is enabled, every incoming HTTP request passes through it before being handled by app code. To learn more, see [Authentication and authorization in Azure App Service](../../app-service/overview-authentication-authorization.md).
In this tutorial, you learn how to:
In this tutorial, you learn how to:
## Create and publish a web app on App Service
-For this tutorial, you need a web app deployed to App Service. You can use an existing web app, or you can follow one of the [ASP.NET Core](/azure/app-service/quickstart-dotnetcore), [Node.js](/azure/app-service/quickstart-nodejs), [Python](/azure/app-service/quickstart-python), or [Java](/azure/app-service/quickstart-java) quickstarts to create and publish a new web app to App Service.
+For this tutorial, you need a web app deployed to App Service. You can use an existing web app, or you can follow one of the [ASP.NET Core](../../app-service/quickstart-dotnetcore.md), [Node.js](../../app-service/quickstart-nodejs.md), [Python](../../app-service/quickstart-python.md), or [Java](../../app-service/quickstart-java.md) quickstarts to create and publish a new web app to App Service.
Whether you use an existing web app or create a new one, take note of the following:
You need these names throughout this tutorial.
## Configure authentication and authorization
-You now have a web app running on App Service. Next, you enable authentication and authorization for the web app. You use Azure AD as the identity provider. For more information, see [Configure Azure AD authentication for your App Service application](/azure/app-service/configure-authentication-provider-aad.md).
+You now have a web app running on App Service. Next, you enable authentication and authorization for the web app. You use Azure AD as the identity provider. For more information, see [Configure Azure AD authentication for your App Service application](../../app-service/configure-authentication-provider-aad.md).
In the [Azure portal](https://portal.azure.com) menu, select **Resource groups**, or search for and select **Resource groups** from any page.
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
You can use an allowlist or blocklist to [restrict invitations to B2B users](../
> Limiting to a predefined domain may inadvertently prevent authorized collaboration with organizations that have other domains for their users. For example, if doing business with an organization Contoso, the initial point of contact with Contoso might be one of their US-based employees who has an email with a ".com" domain. However, if you only allow the ".com" domain you may inadvertently omit their Canadian employees who have the ".ca" domain.

> [!IMPORTANT]
-> These lists do not apply to users who are already in your directory. By default, they also do not apply to OneDrive for Business and SharePoint allow/blocklists which are separate unless you enable the [SharePoint/OneDrive B2B integration](https://docs.microsoft.com/sharepoint/sharepoint-azureb2b-integration).
+> These lists do not apply to users who are already in your directory. By default, they also do not apply to OneDrive for Business and SharePoint allow/blocklists which are separate unless you enable the [SharePoint/OneDrive B2B integration](/sharepoint/sharepoint-azureb2b-integration).
Some organizations use a list of known 'bad actor' domains provided by their managed security provider for their blocklist. For example, if the organization is legitimately doing business with Contoso and using a .com domain, there may be an unrelated organization that has been using the Contoso .org domain and attempting a phishing attack to impersonate Contoso employees.
See the following articles on securing external access to resources. We recommen
8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure sub
## View events for an access package
-To view events for an access package, you must have access to the underlying Azure monitor workspace (see [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md#manage-access-using-azure-permissions) for information) and in one of the following roles:
+To view events for an access package, you must have access to the underlying Azure monitor workspace (see [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md#azure-rbac) for information) and in one of the following roles:
- Global administrator
- Security administrator
active-directory How To Assign Managed Identity Via Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md
+
+ Title: Use Azure Policy to assign managed identities (preview)
+description: Documentation for the Azure Policy that can be used to assign managed identities to Azure resources.
+++
+editor: barclayn
++++ Last updated : 05/23/2022++++
+# [Preview] Use Azure Policy to assign managed identities
++
+[Azure Policy](../../governance/policy/overview.md) helps enforce organizational standards and assess compliance at scale. Through its compliance dashboard, Azure Policy provides an aggregated view that helps administrators evaluate the overall state of the environment. You have the ability to drill down to the per-resource, per-policy granularity. It also helps bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. Common use cases for Azure Policy include implementing governance for:
+
+- Resource consistency
+- Regulatory compliance
+- Security
+- Cost
+- Management
++
+Policy definitions for these common use cases are already available in your Azure environment to help you get started.
+
+Azure Monitoring Agents require a [managed identity](overview.md) on the monitored Azure Virtual Machines (VMs). This document describes the behavior of a built-in Azure Policy provided by Microsoft that helps ensure a managed identity, needed for these scenarios, is assigned to VMs at scale.
+
+While using system-assigned managed identity is possible, when used at scale (for example, for all VMs in a subscription) it results in a substantial number of identities created (and deleted) in Azure AD (Azure Active Directory). To avoid this churn of identities, it is recommended to use user-assigned managed identities, which can be created once and shared across multiple VMs.
+
+> [!NOTE]
+> We recommend using a user-assigned managed identity per Azure subscription per Azure region.
+
+The policy is designed to implement this recommendation.
+
+## Policy definition and details
+
+- [Policy for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd367bd60-64ca-4364-98ea-276775bddd94)
+- [Policy for Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F516187d4-ef64-4a1b-ad6b-a7348502976c)
+++
+When executed, the policy takes the following actions:
+
+1. Creates, if one doesn't exist, a new built-in user-assigned managed identity in the subscription and each Azure region, based on the VMs that are in scope of the policy.
+2. Once created, puts a lock on the user-assigned managed identity so that it will not be accidentally deleted.
+3. Assigns the built-in user-assigned managed identity to Virtual Machines from the subscription and region, based on the VMs that are in scope of the policy.
+> [!NOTE]
+> If the Virtual Machine has exactly 1 user-assigned managed identity already assigned, then the policy skips this VM to assign the built-in identity. This is to make sure assignment of the policy does not break applications that take a dependency on [the default behavior of the token endpoint on IMDS.](managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)
++
+There are two scenarios to use the policy:
+
+- Let the policy create and use a "built-in" user-assigned managed identity.
+- Bring your own user-assigned managed identity.
+
+The policy takes the following input parameters:
+
+- Bring-Your-Own-UAMI? - Should the policy create, if one doesn't exist, a new user-assigned managed identity?
+- If set to true, then you must specify:
+ - Name of the managed identity
+ - Resource group in which the managed identity should be created.
+- If set to false, then no additional input is needed.
+ - The policy will create the required user-assigned managed identity called "built-in-identity" in a resource group called "built-in-identity-rg".
+
+## Using the policy
+### Creating the policy assignment
+
+The policy definition can be assigned to different scopes in Azure: at the management group, subscription, or a specific resource group. As policies need to be enforced all the time, the assignment operation is performed using a managed identity associated with the policy-assignment object. The policy assignment object supports both system-assigned and user-assigned managed identity.
+For example, Joe can create a user-assigned managed identity called PolicyAssignmentMI. The built-in policy creates a user-assigned managed identity in each subscription and in each region with resources that are in scope of the policy assignment. The user-assigned managed identities created by the policy have the following resourceId format:
+
+> /subscriptions/your-subscription-id/resourceGroups/built-in-identity-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/built-in-identity-{location}
+
+For example:
+> /subscriptions/aaaabbbb-aaaa-bbbb-1111-111122223333/resourceGroups/built-in-identity-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/built-in-identity-eastus
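As a small illustration only (not part of the policy itself), a helper like this computes the resource ID the built-in identity gets for a given subscription and region, which can be handy when pre-authorizing the role assignments described in the next section:

```csharp
// Illustrative helper: resource ID of the policy's built-in user-assigned managed identity.
static string BuiltInIdentityResourceId(string subscriptionId, string location) =>
    $"/subscriptions/{subscriptionId}/resourceGroups/built-in-identity-rg" +
    $"/providers/Microsoft.ManagedIdentity/userAssignedIdentities/built-in-identity-{location}";

// Example: BuiltInIdentityResourceId("aaaabbbb-aaaa-bbbb-1111-111122223333", "eastus")
```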
+
+### Required authorization
+
+For the PolicyAssignmentMI managed identity to be able to assign the built-in policy across the specified scope, it needs the following permissions, expressed as an Azure RBAC (Azure role-based access control) role assignment:
+
+| Principal| Role / Action | Scope | Purpose |
+|-|-|-|-|
+|PolicyAssignmentMI |Managed Identity Operator | /subscriptions/subscription-id/resourceGroups/built-in-identity <br> OR <br>Bring-your-own user-assigned managed identity |Required to assign the built-in identity to VMs.|
+|PolicyAssignmentMI |Contributor | /subscriptions/subscription-id |Required to create the resource group that holds the built-in managed identity in the subscription. |
+|PolicyAssignmentMI |Managed Identity Contributor | /subscriptions/subscription-id/resourceGroups/built-in-identity |Required to create a new user-assigned managed identity.|
+|PolicyAssignmentMI |User Access Administrator | /subscriptions/subscription-id/resourceGroups/built-in-identity <br> OR <br>Bring-your-own user-assigned managed identity |Required to set a lock on the user-assigned managed identity created by the policy.|
++
+As the policy assignment object must have this permission ahead of time, PolicyAssignmentMI cannot be a system-assigned managed identity for this scenario. The user performing the policy assignment task must pre-authorize PolicyAssignmentMI with the above role assignments.
+
+As you can see, the resultant least-privilege role required is "Contributor" at the subscription scope.
+++
+## Known issues
+
+A race condition with another deployment that changes the identities assigned to a VM can lead to unexpected results.
+
+If there are two or more parallel deployments updating the same virtual machine and they all change the identity configuration of the virtual machine, then it is possible, under specific race conditions, that all expected identities will NOT be assigned to the machines.
+For example, if the policy in this document is updating the managed identities of a VM and at the same time another process is also making changes to the managed identities section, then it is not guaranteed that all the expected identities are properly assigned to the VM.
++
+## Next steps
+
+- [Deploy Azure Monitoring Agent](../../azure-monitor/overview.md)
active-directory Howto Analyze Activity Logs Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md
In this article, you learn how to analyze the Azure AD activity logs in your Log
To follow along, you need:
-* A Log Analytics workspace in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+* A [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md) in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
* First, complete the steps to [route the Azure AD activity logs to your Log Analytics workspace](howto-integrate-activity-logs-with-log-analytics.md).
-* [Access](../../azure-monitor/logs/manage-access.md#manage-access-using-workspace-permissions) to the log analytics workspace
+* [Access](../../azure-monitor/logs/manage-access.md#azure-rbac) to the log analytics workspace
* The following roles in Azure Active Directory (if you are accessing Log Analytics through the Azure Active Directory portal):
  - Security Admin
  - Security Reader
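The routed logs can also be queried outside the portal. Here's a hedged sketch using the Azure.Monitor.Query SDK, assuming sign-in logs are already routed to the workspace; the workspace ID is a placeholder.

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

var client = new LogsQueryClient(new DefaultAzureCredential());

// Count sign-ins per application over the last day (workspace ID is a placeholder).
var response = await client.QueryWorkspaceAsync(
    "<log-analytics-workspace-id>",
    "SigninLogs | summarize SignIns = count() by AppDisplayName | top 10 by SignIns",
    new QueryTimeRange(TimeSpan.FromDays(1)));

foreach (LogsTableRow row in response.Value.Table.Rows)
{
    Console.WriteLine($"{row["AppDisplayName"]}: {row["SignIns"]}");
}
```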
active-directory Howto Use Azure Monitor Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md
To use Monitor workbooks, you need:
- A [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
-- [Access](../../azure-monitor/logs/manage-access.md#manage-access-using-workspace-permissions) to the log analytics workspace
+- [Access](../../azure-monitor/logs/manage-access.md#azure-rbac) to the log analytics workspace
- The following roles in Azure Active Directory (if you are accessing Log Analytics through the Azure Active Directory portal):
  - Security administrator
  - Security reader
To use Monitor workbooks, you need:
## Roles
-To access workbooks in Azure Active Directory, you must have access to the underlying [Log Analytics](../../azure-monitor/logs/manage-access.md#manage-access-using-azure-permissions) workspace and be assigned to one of the following roles:
+To access workbooks in Azure Active Directory, you must have access to the underlying [Log Analytics workspace](../../azure-monitor/logs/manage-access.md#azure-rbac) and be assigned to one of the following roles:
- Global Reader
active-directory Reference Basic Info Sign In Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-basic-info-sign-in-logs.md
This attribute describes the type of cross-tenant access used by the actor to ac
- `b2bDirectConnect` - A cross-tenant sign-in performed by a B2B user.
- `microsoftSupport` - A cross-tenant sign-in performed by a Microsoft support agent in a Microsoft customer tenant.
- `serviceProvider` - A cross-tenant sign-in performed by a Cloud Service Provider (CSP) or similar admin on behalf of that CSP's customer in a tenant.
-- `unknownFutureValue` - A sentinel value used by MS Graph to help clients handle changes in enum lists. For more information, see [Best practices for working with Microsoft Graph](https://docs.microsoft.com/graph/best-practices-concept).
+- `unknownFutureValue` - A sentinel value used by MS Graph to help clients handle changes in enum lists. For more information, see [Best practices for working with Microsoft Graph](/graph/best-practices-concept).
If the sign-in did not pass the boundaries of a tenant, the value is `none`.
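As a small defensive-handling illustration (the `b2bCollaboration` value is assumed from the wider enum and isn't listed above), map the known strings and fall back for anything unrecognized, including `unknownFutureValue`:

```csharp
// Sketch: map crossTenantAccessType values defensively; unknown or future values get a fallback.
static string DescribeCrossTenantAccessType(string value) => value switch
{
    "none" => "Sign-in stayed within the tenant boundary.",
    "b2bCollaboration" => "B2B collaboration sign-in.",      // assumed value, not listed in this article
    "b2bDirectConnect" => "B2B direct connect sign-in.",
    "microsoftSupport" => "Microsoft support agent sign-in.",
    "serviceProvider" => "Cloud Service Provider (CSP) sign-in.",
    _ => "Unrecognized or future cross-tenant access type."  // covers unknownFutureValue and new values
};
```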
This value shows whether continuous access evaluation (CAE) was applied to the s
## Next steps

* [Sign-in logs in Azure Active Directory](concept-sign-ins.md)
-* [What is the sign-in diagnostic in Azure AD?](overview-sign-in-diagnostics.md)
+* [What is the sign-in diagnostic in Azure AD?](overview-sign-in-diagnostics.md)
active-directory Manage Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/manage-roles-portal.md
If PIM is enabled, you have additional capabilities, such as making a user eligi
$roleAssignmentEligible = Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId $aadTenant.Id -RoleDefinitionId $roleDefinition.Id -SubjectId $user.objectId -Type 'AdminAdd' -AssignmentState 'Eligible' -schedule $schedule -reason "Review billing info" ```
-## Microsoft Graph PIM API
+## Microsoft Graph API
-Follow these instructions to assign a role using the Microsoft Graph PIM API.
+Follow these instructions to assign a role using the Microsoft Graph API.
### Assign a role
active-directory Empactis Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/empactis-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Empactis | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Empactis'
description: Learn how to configure single sign-on between Azure Active Directory and Empactis.
Previously updated : 03/13/2019 Last updated : 05/26/2022
-# Tutorial: Azure Active Directory integration with Empactis
+# Tutorial: Azure AD SSO integration with Empactis
-In this tutorial, you learn how to integrate Empactis with Azure Active Directory (Azure AD).
-Integrating Empactis with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Empactis with Azure Active Directory (Azure AD). When you integrate Empactis with Azure AD, you can:
-* You can control in Azure AD who has access to Empactis.
-* You can enable your users to be automatically signed-in to Empactis (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Empactis.
+* Enable your users to be automatically signed-in to Empactis with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Empactis, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Empactis single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Empactis single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Empactis supports **IDP** initiated SSO
+* Empactis supports **IDP** initiated SSO.
-## Adding Empactis from the gallery
+## Add Empactis from the gallery
To configure the integration of Empactis into Azure AD, you need to add Empactis from the gallery to your list of managed SaaS apps.
-**To add Empactis from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Empactis**, select **Empactis** from result panel then click **Add** button to add the application.
-
- ![Empactis in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Empactis** in the search box.
+1. Select **Empactis** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Empactis based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Empactis needs to be established.
+## Configure and test Azure AD SSO for Empactis
-To configure and test Azure AD single sign-on with Empactis, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Empactis using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Empactis.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Empactis Single Sign-On](#configure-empactis-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Empactis test user](#create-empactis-test-user)** - to have a counterpart of Britta Simon in Empactis that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Empactis, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Empactis SSO](#configure-empactis-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Empactis test user](#create-empactis-test-user)** - to have a counterpart of B.Simon in Empactis that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Empactis, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Empactis** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Empactis** application integration page, find the **Manage** section and select **Single sign-on**.
+1. On the **Select a Single sign-on method** page, select **SAML**.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
4. In the **Basic SAML Configuration** section, you don't have to perform any steps because the app is already pre-integrated with Azure.
- ![Empactis Domain and URLs single sign-on information](common/preintegrated.png)
- 5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
6. On the **Set up Empactis** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Empactis Single Sign-On
-
-To configure single sign-on on **Empactis** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Empactis support team](mailto:support@empactis.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Configuration")
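
After you download the **Certificate (Base64)** in the procedure above, you can optionally confirm its subject and validity window before sending it on. The snippet below is only a sketch using the Python `cryptography` package; the file name is a placeholder for wherever you saved the certificate.

```python
from cryptography import x509

# Placeholder path - use the location where you saved the downloaded Certificate (Base64).
with open("Empactis.cer", "rb") as cert_file:
    cert = x509.load_pem_x509_certificate(cert_file.read())

# Print basic details so you can confirm you are sending the right certificate.
print("Subject:   ", cert.subject.rfc4514_string())
print("Valid from:", cert.not_valid_before)
print("Valid to:  ", cert.not_valid_after)
```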
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field, enter **BrittaSimon**.
-
- b. In the **User name** field, type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
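
If you'd rather script the test-user creation than use the portal, a minimal sketch against the Microsoft Graph `users` endpoint is shown below. It assumes the `requests` library and an access token with the `User.ReadWrite.All` permission; the domain and password values are placeholders.

```python
import requests

# Placeholders - supply a real access token and your tenant's domain.
ACCESS_TOKEN = "<access token with User.ReadWrite.All>"
DOMAIN = "contoso.com"

user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "BSimon",
    "userPrincipalName": f"B.Simon@{DOMAIN}",
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<temporary password>",
    },
}

# Create the user through Microsoft Graph.
response = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=user,
    timeout=30,
)
response.raise_for_status()
print("Created user with object ID:", response.json()["id"])
```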
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Empactis.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Empactis**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Empactis.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Empactis**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
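
The same assignment can also be scripted by creating an app role assignment on the user through Microsoft Graph. This is only a sketch: the token, object IDs, and role ID below are placeholders, and the all-zeros `appRoleId` is the commonly used default-access role, so confirm the right values in your own tenant.

```python
import requests

# Placeholders - look these values up in your tenant before running.
ACCESS_TOKEN = "<access token with AppRoleAssignment.ReadWrite.All>"
USER_OBJECT_ID = "<object ID of B.Simon>"
SERVICE_PRINCIPAL_ID = "<object ID of the Empactis service principal>"
APP_ROLE_ID = "00000000-0000-0000-0000-000000000000"  # default access role (confirm in your tenant)

assignment = {
    "principalId": USER_OBJECT_ID,
    "resourceId": SERVICE_PRINCIPAL_ID,
    "appRoleId": APP_ROLE_ID,
}

# Create the app role assignment for the user.
response = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{USER_OBJECT_ID}/appRoleAssignments",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=assignment,
    timeout=30,
)
response.raise_for_status()
print("Assignment created:", response.json()["id"])
```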
-2. In the applications list, select **Empactis**.
+## Configure Empactis SSO
- ![The Empactis link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog, select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog, click the **Assign** button.
+To configure single sign-on on **Empactis** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Empactis support team](mailto:support@empactis.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Empactis test user In this section, you create a user called Britta Simon in Empactis. Work with [Empactis support team](mailto:support@empactis.com) to add the users in the Empactis platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the Empactis tile in the Access Panel, you should be automatically signed in to the Empactis for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Empactis for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Empactis tile in My Apps, you should be automatically signed in to the Empactis for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Empactis, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Iwellnessnow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iwellnessnow-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with iWellnessNow | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with iWellnessNow'
description: Learn how to configure single sign-on between Azure Active Directory and iWellnessNow.
Previously updated : 08/07/2019 Last updated : 05/26/2022
-# Tutorial: Integrate iWellnessNow with Azure Active Directory
+# Tutorial: Azure AD SSO integration with iWellnessNow
In this tutorial, you'll learn how to integrate iWellnessNow with Azure Active Directory (Azure AD). When you integrate iWellnessNow with Azure AD, you can:
In this tutorial, you'll learn how to integrate iWellnessNow with Azure Active D
* Enable your users to be automatically signed-in to iWellnessNow with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * iWellnessNow single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* iWellnessNow supports **SP and IDP** initiated SSO
+* iWellnessNow supports **SP and IDP** initiated SSO.
-## Adding iWellnessNow from the gallery
+## Add iWellnessNow from the gallery
To configure the integration of iWellnessNow into Azure AD, you need to add iWellnessNow from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **iWellnessNow** in the search box. 1. Select **iWellnessNow** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for iWellnessNow
Configure and test Azure AD SSO with iWellnessNow using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in iWellnessNow.
-To configure and test Azure AD SSO with iWellnessNow, complete the following building blocks:
+To configure and test Azure AD SSO with iWellnessNow, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
-2. **[Configure iWellnessNow SSO](#configure-iwellnessnow-sso)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-5. **[Create iWellnessNow test user](#create-iwellnessnow-test-user)** - to have a counterpart of B.Simon in iWellnessNow that is linked to the Azure AD representation of user.
-6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure iWellnessNow SSO](#configure-iwellnessnow-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create iWellnessNow test user](#create-iwellnessnow-test-user)** - to have a counterpart of B.Simon in iWellnessNow that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **iWellnessNow** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **iWellnessNow** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file** and wish to configure in **IDP** initiated mode, perform the following steps: a. Click **Upload metadata file**.
- ![Upload metadata file](common/upload-metadata.png)
+ ![Screenshot shows to upload metadata file.](common/upload-metadata.png "Metadata")
b. Click on **folder logo** to select the metadata file and click **Upload**.
- ![choose metadata file](common/browse-upload-metadata.png)
+ ![Screenshot shows to choose metadata file.](common/browse-upload-metadata.png "Folder")
c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in Basic SAML Configuration section.
- ![Screenshot shows the Basic SAML Configuration, where you can enter Reply U R L, and select Save.](common/idp-intiated.png)
- > [!Note]
- > If the **Identifier** and **Reply URL** values do not get auto polulated, then fill in the values manually according to your requirement.
+ > If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.
1. If you don't have **Service Provider metadata file** and wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![iWellnessNow Domain and URLs single sign-on information](common/idp-intiated.png)
-
- a. In the **Identifier** textbox, type a URL using the following pattern: `http://<CustomerName>.iwellnessnow.com`
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `http://<CustomerName>.iwellnessnow.com`
- b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<CustomerName>.iwellnessnow.com/ssologin`
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.iwellnessnow.com/ssologin`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<CustomerName>.iwellnessnow.com/` > [!NOTE]
- > These values are not real. Update these values with the actual Sign-on URL, Identifier and Reply URL. Contact [iWellnessNow Client support team](mailto:info@iwellnessnow.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [iWellnessNow Client support team](mailto:info@iwellnessnow.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up iWellnessNow** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
-### Configure iWellnessNow SSO
-
-To configure single sign-on on **iWellnessNow** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [iWellnessNow support team](mailto:info@iwellnessnow.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Configuration")
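
As an optional check before handing these values to the iWellnessNow support team, you can download and parse the app's federation metadata yourself. The sketch below assumes the **App Federation Metadata Url** format shown in the **SAML Signing Certificate** section; the tenant ID and application ID are placeholders.

```python
import requests
import xml.etree.ElementTree as ET

# Placeholders - use your own tenant ID and the application ID shown in the portal.
TENANT_ID = "<tenant-id>"
APP_ID = "<application-id>"

metadata_url = (
    f"https://login.microsoftonline.com/{TENANT_ID}/"
    f"federationmetadata/2007-06/federationmetadata.xml?appid={APP_ID}"
)

ns = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

# Parse from bytes so the XML encoding declaration is handled correctly.
root = ET.fromstring(requests.get(metadata_url, timeout=30).content)

# The entityID attribute is the Azure AD Identifier for your tenant.
print("Azure AD Identifier:", root.get("entityID"))

# The signing certificate is embedded as a Base64-encoded X509Certificate element.
cert = root.find(".//ds:X509Certificate", ns)
print("Signing certificate (first 60 chars):", cert.text[:60])
```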
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **iWellnessNow**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
+## Configure iWellnessNow SSO
+
+To configure single sign-on on **iWellnessNow** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [iWellnessNow support team](mailto:info@iwellnessnow.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+ ### Create iWellnessNow test user In this section, you create a user called Britta Simon in iWellnessNow. Work with [iWellnessNow support team](mailto:info@iwellnessnow.com) to add the users in the iWellnessNow platform. Users must be created and activated before you use single sign-on.
-### Test SSO
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to iWellnessNow Sign on URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to iWellnessNow Sign-on URL directly and initiate the login flow from there.
-When you click the iWellnessNow tile in the Access Panel, you should be automatically signed in to the iWellnessNow for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the iWellnessNow for which you set up the SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the iWellnessNow tile in My Apps, if the app is configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the iWellnessNow for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure iWellnessNow, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Jobbadmin Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jobbadmin-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Jobbadmin | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Jobbadmin'
description: Learn how to configure single sign-on between Azure Active Directory and Jobbadmin.
Previously updated : 02/25/2019 Last updated : 02/25/2022
-# Tutorial: Azure Active Directory integration with Jobbadmin
+# Tutorial: Azure AD SSO integration with Jobbadmin
-In this tutorial, you learn how to integrate Jobbadmin with Azure Active Directory (Azure AD).
-Integrating Jobbadmin with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Jobbadmin with Azure Active Directory (Azure AD). When you integrate Jobbadmin with Azure AD, you can:
-* You can control in Azure AD who has access to Jobbadmin.
-* You can enable your users to be automatically signed-in to Jobbadmin (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Jobbadmin.
+* Enable your users to be automatically signed-in to Jobbadmin with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Jobbadmin, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Jobbadmin single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Jobbadmin single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Jobbadmin supports **SP** initiated SSO
+* Jobbadmin supports **SP** initiated SSO.
-## Adding Jobbadmin from the gallery
+## Add Jobbadmin from the gallery
To configure the integration of Jobbadmin into Azure AD, you need to add Jobbadmin from the gallery to your list of managed SaaS apps.
-**To add Jobbadmin from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Jobbadmin**, select **Jobbadmin** from result panel then click **Add** button to add the application.
-
- ![Jobbadmin in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Jobbadmin based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Jobbadmin needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Jobbadmin** in the search box.
+1. Select **Jobbadmin** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with Jobbadmin, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for Jobbadmin
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Jobbadmin Single Sign-On](#configure-jobbadmin-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Jobbadmin test user](#create-jobbadmin-test-user)** - to have a counterpart of Britta Simon in Jobbadmin that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with Jobbadmin using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Jobbadmin.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Jobbadmin, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Jobbadmin SSO](#configure-jobbadmin-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Jobbadmin test user](#create-jobbadmin-test-user)** - to have a counterpart of B.Simon in Jobbadmin that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Jobbadmin, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Jobbadmin** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Jobbadmin** application integration page, find the **Manage** section and select **single sign-on**.
+2. On the **Select a single sign-on method** page, select **SAML**.
+3. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Jobbadmin Domain and URLs single sign-on information](common/sp-identifier-reply.png)
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<instancename>.jobbnorge.no/auth/saml2/login.ashx`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<instancename>.jobnorge.no`
- c. In the **Reply URL** textbox, type a URL using the following pattern: `https://<instancename>.jobbnorge.no/auth/saml2/login.ashx`
+ b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<instancename>.jobbnorge.no/auth/saml2/login.ashx`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<instancename>.jobbnorge.no/auth/saml2/login.ashx`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL, Identifier and Reply URL. Contact [Jobbadmin Client support team](https://www.jobbnorge.no/om-oss/kontakt-oss) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Jobbadmin Client support team](https://www.jobbnorge.no/om-oss/kontakt-oss) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
6. On the **Set up Jobbadmin** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Jobbadmin Single Sign-On
-
-To configure single sign-on on **Jobbadmin** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Jobbadmin support team](https://www.jobbnorge.no/om-oss/kontakt-oss). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Configuration")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Jobbadmin.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Jobbadmin**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Jobbadmin**.
-
- ![The Jobbadmin link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Jobbadmin.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Jobbadmin**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure Jobbadmin SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **Jobbadmin** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Jobbadmin support team](https://www.jobbnorge.no/om-oss/kontakt-oss). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Jobbadmin test user In this section, you create a user called Britta Simon in Jobbadmin. Work with [Jobbadmin support team](https://www.jobbnorge.no/om-oss/kontakt-oss) to add the users in the Jobbadmin platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options. A short troubleshooting sketch follows the options.
-When you click the Jobbadmin tile in the Access Panel, you should be automatically signed in to the Jobbadmin for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to Jobbadmin Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Jobbadmin Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Jobbadmin tile in My Apps, you're redirected to the Jobbadmin Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
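
If sign-in doesn't behave as expected, one way to troubleshoot is to decode the Base64 `SAMLResponse` value captured from your browser's developer tools and confirm the issuer and NameID. The following is a minimal sketch, assuming you paste the captured value in yourself.

```python
import base64
import xml.etree.ElementTree as ET

# Paste the Base64-encoded SAMLResponse form value captured from the browser here.
saml_response_b64 = "<paste SAMLResponse value here>"

ns = {
    "saml": "urn:oasis:names:tc:SAML:2.0:assertion",
    "samlp": "urn:oasis:names:tc:SAML:2.0:protocol",
}

root = ET.fromstring(base64.b64decode(saml_response_b64))

# The issuer should match the Azure AD Identifier you copied earlier;
# the NameID is the identifier sent for the signed-in user.
print("Issuer:", root.find(".//saml:Issuer", ns).text)
print("NameID:", root.find(".//saml:NameID", ns).text)
```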
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Jobbadmin, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Jobscore Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jobscore-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with JobScore | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with JobScore'
description: Learn how to configure single sign-on between Azure Active Directory and JobScore.
Previously updated : 02/25/2019 Last updated : 05/25/2022
-# Tutorial: Azure Active Directory integration with JobScore
+# Tutorial: Azure AD SSO integration with JobScore
-In this tutorial, you learn how to integrate JobScore with Azure Active Directory (Azure AD).
-Integrating JobScore with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate JobScore with Azure Active Directory (Azure AD). When you integrate JobScore with Azure AD, you can:
-* You can control in Azure AD who has access to JobScore.
-* You can enable your users to be automatically signed-in to JobScore (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to JobScore.
+* Enable your users to be automatically signed-in to JobScore with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with JobScore, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* JobScore single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* JobScore single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* JobScore supports **SP** initiated SSO
-
-## Adding JobScore from the gallery
-
-To configure the integration of JobScore into Azure AD, you need to add JobScore from the gallery to your list of managed SaaS apps.
-
-**To add JobScore from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* JobScore supports **SP** initiated SSO.
- ![The New application button](common/add-new-app.png)
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-4. In the search box, type **JobScore**, select **JobScore** from result panel then click **Add** button to add the application.
+## Add JobScore from the gallery
- ![JobScore in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with JobScore based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in JobScore needs to be established.
-
-To configure and test Azure AD single sign-on with JobScore, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure JobScore Single Sign-On](#configure-jobscore-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create JobScore test user](#create-jobscore-test-user)** - to have a counterpart of Britta Simon in JobScore that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
-
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure the integration of JobScore into Azure AD, you need to add JobScore from the gallery to your list of managed SaaS apps.
-To configure Azure AD single sign-on with JobScore, perform the following steps:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **JobScore** in the search box.
+1. Select **JobScore** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. In the [Azure portal](https://portal.azure.com/), on the **JobScore** application integration page, select **Single sign-on**.
+## Configure and test Azure AD SSO for JobScore
- ![Configure single sign-on link](common/select-sso.png)
+Configure and test Azure AD SSO with JobScore using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in JobScore.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+To configure and test Azure AD SSO with JobScore, perform the following steps:
- ![Single sign-on select mode](common/select-saml-option.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure JobScore SSO](#configure-jobscore-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create JobScore test user](#create-jobscore-test-user)** - to have a counterpart of B.Simon in JobScore that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+## Configure Azure AD SSO
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+1. In the Azure portal, on the **JobScore** application integration page, find the **Manage** section and select **single sign-on**.
+2. On the **Select a single sign-on method** page, select **SAML**.
+3. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![JobScore Domain and URLs single sign-on information](common/sp-signonurl.png)
+4. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://hire.jobscore.com/auth/adfs/<company id>`
To configure Azure AD single sign-on with JobScore, perform the following steps:
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
6. On the **Set up JobScore** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure JobScore Single Sign-On
-
-To configure single sign-on on **JobScore** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [JobScore support team](mailto:support@jobscore.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Configuration")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to JobScore.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **JobScore**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **JobScore**.
-
- ![The JobScore link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to JobScore.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **JobScore**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure JobScore SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **JobScore** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [JobScore support team](mailto:support@jobscore.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create JobScore test user In this section, you create a user called Britta Simon in JobScore. Work with [JobScore support team](mailto:support@jobscore.com) to add the users in the JobScore platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the JobScore tile in the Access Panel, you should be automatically signed in to the JobScore for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to JobScore Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to JobScore Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the JobScore tile in My Apps, you're redirected to the JobScore Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure JobScore, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Memo 22 09 Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-multi-factor-authentication.md
U.S. Federal agencies will be approaching this guidance from different starting
- **[Azure AD certificate-based authentication](../authentication/concept-certificate-based-authentication.md)** offers cloud native certificate based authentication (without dependency on a federated identity provider). This includes smart card implementations such as Common Access Card (CAC) & Personal Identity Verification (PIV) as well as derived PIV credentials deployed to mobile devices or security keys -- **[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview)** offers passwordless multifactor authentication that is phishing-resistant. For more information, see the [Windows Hello for Business Deployment Overview](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/hello-deployment-guide)
+- **[Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview)** offers passwordless multifactor authentication that is phishing-resistant. For more information, see the [Windows Hello for Business Deployment Overview](/windows/security/identity-protection/hello-for-business/hello-deployment-guide)
### Protection from external phishing
For more information on deploying this method, see the following resources:
For more information on deploying this method, see the following resources: -- [Deploying Active Directory Federation Services in Azure](https://docs.microsoft.com/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs)-- [Configuring AD FS for user certificate authentication](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)
+- [Deploying Active Directory Federation Services in Azure](/windows-server/identity/ad-fs/deployment/how-to-connect-fed-azure-adfs)
+- [Configuring AD FS for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)
### Additional phishing-resistant method considerations
The following articles are part of this documentation set:
For more information about Zero Trust, see:
-[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
+[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
Previously updated : 10/08/2021 Last updated : 05/26/2022 #Customer intent: As an administrator, I am trying to learn the process of revoking verifiable credentials that I have issued.
The callback endpoint is called when a user scans the QR code, uses the deep lin
| `code` |string |The code returned when the request was retrieved by the authenticator app. Possible values: <ul><li>`request_retrieved`: The user scanned the QR code or selected the link that starts the presentation flow.</li><li>`presentation_verified`: The verifiable credential validation completed successfully.</li></ul> | | `state` |string| Returns the state value that you passed in the original payload. | | `subject`|string | The verifiable credential user DID.|
-| `issuers`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: </li><li>The verifiable credential type.</li><li>The claims retrieved.</li><li>The verifiable credential issuerΓÇÖs domain. </li><li>The verifiable credential issuerΓÇÖs domain validation status. </li></ul> |
+| `issuers`| array |Returns an array of verifiable credentials requested. For each verifiable credential, it provides: <ul><li>The verifiable credential type(s).</li><li>The issuer's DID.</li><li>The claims retrieved.</li><li>The verifiable credential issuer's domain.</li><li>The verifiable credential issuer's domain validation status.</li></ul> |
| `receipt`| string | Optional. The receipt contains the original payload sent from the wallet to the Verifiable Credentials service. The receipt should be used for troubleshooting/debugging only. The format in the receipt is not fixed and can change based on the wallet and version used.| The following example demonstrates a callback payload when the authenticator app starts the presentation request:
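
On the receiving side, your web app exposes an endpoint for these callbacks. The sketch below is only illustrative and assumes Flask; the route path is a placeholder, and the field names follow the table above.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# The route path is a placeholder - use whatever URL you registered in the
# callback section of your presentation request payload.
@app.route("/api/verifier/presentation-callback", methods=["POST"])
def presentation_callback():
    # If you registered an api-key header with the callback, validate it here.
    payload = request.get_json()

    state = payload.get("state")  # correlates the callback with your original request
    code = payload.get("code")

    if code == "request_retrieved":
        # The user scanned the QR code or followed the deep link.
        print(f"Request {state}: presentation flow started")
    elif code == "presentation_verified":
        # Validation succeeded; claims arrive inside the issuers array.
        for issuer in payload.get("issuers", []):
            print(f"Request {state}: claims received: {issuer.get('claims')}")

    return jsonify({"message": "callback received"}), 200
```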
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md
Stateless application migration is the most straightforward case:
Carefully plan your migration of stateful applications to avoid data loss or unexpected downtime. * If you use Azure Files, you can mount the file share as a volume into the new cluster. See [Mount Static Azure Files as a Volume](./azure-files-volume.md#mount-file-share-as-a-persistent-volume).
-* If you use Azure Managed Disks, you can only mount the disk if unattached to any VM. See [Mount Static Azure Disk as a Volume](./azure-disk-volume.md#mount-disk-as-volume).
+* If you use Azure Managed Disks, you can only mount the disk if unattached to any VM. See [Mount Static Azure Disk as a Volume](./azure-disk-volume.md#mount-disk-as-a-volume).
* If neither of those approaches work, you can use a backup and restore options. See [Velero on Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/README.md). #### Azure Files
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) drivers for Azure Disks on Azure Kubernetes Service (AKS)
+ Title: Use Container Storage Interface (CSI) driver for Azure Disk in Azure Kubernetes Service (AKS)
description: Learn how to use the Container Storage Interface (CSI) drivers for Azure disks in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/06/2022 Last updated : 05/23/2022
-# Use the Azure disk Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)
+# Use the Azure disk Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
+ The Azure disk Container Storage Interface (CSI) driver is a [CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md)-compliant driver used by Azure Kubernetes Service (AKS) to manage the lifecycle of Azure disks. The CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, AKS can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes without having to touch the core Kubernetes code and wait for its release cycles.
-To create an AKS cluster with CSI driver support, see [Enable CSI drivers for Azure disks and Azure Files on AKS](csi-storage-drivers.md).
+To create an AKS cluster with CSI driver support, see [Enable CSI driver on AKS](csi-storage-drivers.md). This article describes how to use the Azure disk CSI driver version 1.
+
+> [!NOTE]
+> Azure disk CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure disk CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
> [!NOTE] > *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins.
-## Azure Disk CSI driver new features
-Besides original in-tree driver features, Azure Disk CSI driver already provides following new features:
-- performance improvement when attach or detach disks in parallel
- - in-tree driver attaches or detaches disks in serial while CSI driver would attach or detach disks in batch, there would be significant improvement when there are multiple disks attaching to one node.
-- ZRS disk support
+## Azure disk CSI driver features
+
+In addition to in-tree driver features, the Azure disk CSI driver supports the following features:
+
+- Performance improvements during concurrent disk attach and detach
+ - In-tree drivers attach or detach disks in serial, while CSI drivers attach or detach disks in batches. This provides a significant improvement when multiple disks are attached to a single node.
+- Zone-redundant storage (ZRS) disk support
 - `Premium_ZRS` and `StandardSSD_ZRS` disk types are supported. For more information, see [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md). - [Snapshot](#volume-snapshots) - [Volume clone](#clone-volumes) - [Resize disk PV without downtime](#resize-a-persistent-volume-without-downtime)
+## Storage class driver dynamic disk parameters
+
+|Name | Meaning | Available Value | Mandatory | Default value |
+| --- | --- | --- | --- | --- |
+|skuName | Azure disk storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
+|kind | Managed or unmanaged (blob based) disk | `managed` (`dedicated` and `shared` are deprecated) | No | `managed`|
+|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows|
+|cachingMode | [Azure Data Disk Host Cache Setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching) | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
+|location | Specify Azure region where Azure disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
+|resourceGroup | Specify the resource group where the Azure disk will be created | Existing resource group name | No | If empty, driver will use the same resource group name as current AKS cluster|
+|DiskIOPSReadWrite | [UltraSSD disk](../virtual-machines/linux/disks-ultra-ssd.md) IOPS Capability (minimum: 2 IOPS/GiB) | 100~160000 | No | `500`|
+|DiskMBpsReadWrite | [UltraSSD disk](../virtual-machines/linux/disks-ultra-ssd.md) Throughput Capability (minimum: 0.032/GiB) | 1~2000 | No | `100`|
+|LogicalSectorSize | Logical sector size in bytes for Ultra disk. Supported values are 512 and 4096. 4096 is the default. | `512`, `4096` | No | `4096`|
+|tags | Azure disk [tags](../azure-resource-manager/management/tag-resources.md) | Tag format: `key1=val1,key2=val2` | No | ""|
+|diskEncryptionSetID | ResourceId of the disk encryption set to use for [enabling encryption at rest](../virtual-machines/windows/disk-encryption.md) | format: `/subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name}` | No | ""|
+|diskEncryptionType | Encryption type of the disk encryption set | `EncryptionAtRestWithCustomerKey`(by default), `EncryptionAtRestWithPlatformAndCustomerKeys` | No | ""|
+|writeAcceleratorEnabled | [Write Accelerator on Azure Disks](../virtual-machines/windows/how-to-enable-write-accelerator.md) | `true`, `false` | No | ""|
+|networkAccessPolicy | NetworkAccessPolicy property to prevent generation of the SAS URI for a disk or a snapshot | `AllowAll`, `DenyAll`, `AllowPrivate` | No | `AllowAll`|
+|diskAccessID | ARM ID of the DiskAccess resource to use private endpoints on disks | | No | ``|
+|enableBursting | [Enable on-demand bursting](../virtual-machines/disk-bursting.md) beyond the provisioned performance target of the disk. On-demand bursting should only be applied to Premium disks larger than 512 GB. Ultra and shared disks are not supported. Bursting is disabled by default. | `true`, `false` | No | `false`|
+|useragent | User agent used for [customer usage attribution](../marketplace/azure-partner-customer-usage-attribution.md)| | No | Generated Useragent formatted `driverName/driverVersion compiler/version (OS-ARCH)`|
+|enableAsyncAttach | Allow multiple disk attach operations (in batch) on one node in parallel.<br> While this can speed up disk attachment, you may encounter the Azure API throttling limit when there is a large number of volume attachments. | `true`, `false` | No | `false`|
+|subscriptionID | Specify Azure subscription ID where the Azure disk will be created | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
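
For example, the following is a minimal sketch of a custom storage class that sets a few of these parameters. The class name `managed-csi-zrs-example` and the chosen parameter values are illustrative only; see the [Create a custom storage class](#create-a-custom-storage-class) section later in this article for how a custom class is applied.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-zrs-example   # illustrative name
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_ZRS        # zone-redundant standard SSD
  cachingMode: ReadOnly           # host caching mode (the default)
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```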
+ ## Use CSI persistent volumes with Azure disks
-A [persistent volume](concepts-storage.md#persistent-volumes) (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disks for use by a single pod in an AKS cluster. For static provisioning, see [Manually create and use a volume with Azure disks](azure-disk-volume.md).
+A [persistent volume](concepts-storage.md#persistent-volumes) (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disks for use by a single pod in an AKS cluster. For static provisioning, see [Create a static volume with Azure disks](azure-disk-volume.md).
For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage]. ## Dynamically create Azure disk PVs by using the built-in storage classes
-A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes storage classes][kubernetes-storage-classes].
-When you use storage CSI drivers on AKS, there are two additional built-in `StorageClasses` that use the Azure disk CSI storage drivers. The additional CSI storage classes are created with the cluster alongside the in-tree default storage classes.
+A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes storage classes][kubernetes-storage-classes].
+
+When you use the Azure disk storage CSI driver on AKS, there are two additional built-in `StorageClasses` that use the Azure disk CSI storage driver. The additional CSI storage classes are created with the cluster alongside the in-tree default storage classes.
- `managed-csi`: Uses Azure Standard SSD locally redundant storage (LRS) to create a managed disk. - `managed-csi-premium`: Uses Azure Premium LRS to create a managed disk.
The reclaim policy in both storage classes ensures that the underlying Azure dis
To leverage these storage classes, create a [PVC](concepts-storage.md#persistent-volume-claims) and respective pod that references and uses them. A PVC is used to automatically provision storage based on a storage class. A PVC can use one of the pre-created storage classes or a user-defined storage class to create an Azure-managed disk for the desired SKU and size. When you create a pod definition, the PVC is specified to request the desired storage.
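
As a sketch of what such a claim looks like, the following PVC requests a 10-Gi volume from the built-in `managed-csi` class. The example manifest applied below defines an equivalent claim named `pvc-azuredisk`, so there's no need to create this one separately.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi   # built-in CSI storage class
  resources:
    requests:
      storage: 10Gi
```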
-Create an example pod and respective PVC with the [kubectl apply][kubectl-apply] command:
+Create an example pod and respective PVC by running the [kubectl apply][kubectl-apply] command:
```console $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/pvc-azuredisk-csi.yaml
persistentvolumeclaim/pvc-azuredisk created
pod/nginx-azuredisk created ```
-After the pod is in the running state, create a new file called `test.txt`.
+After the pod is in the running state, run the following command to create a new file called `test.txt`.
```bash $ kubectl exec nginx-azuredisk -- touch /mnt/azuredisk/test.txt ```
-You can now validate that the disk is correctly mounted by running the following command and verifying you see the `test.txt` file in the output:
+To validate the disk is correctly mounted, run the following command and verify you see the `test.txt` file in the output:
```console $ kubectl exec nginx-azuredisk -- ls /mnt/azuredisk
test.txt
## Create a custom storage class
-The default storage classes suit the most common scenarios, but not all. For some cases, you might want to have your own storage class customized with your own parameters. For example, we have a scenario where you might want to change the `volumeBindingMode` class.
+The default storage classes are suitable for most common scenarios. For some cases, you might want to have your own storage class customized with your own parameters. For example, you might want to change the `volumeBindingMode` class.
-You can use a `volumeBindingMode: Immediate` class that guarantees that occurs immediately once the PVC is created. In cases where your node pools are topology constrained, for example, using availability zones, PVs would be bound or provisioned without knowledge of the pod's scheduling requirements (in this case to be in a specific zone).
+You can use a `volumeBindingMode: Immediate` class that guarantees volume binding and provisioning occur immediately after the PVC is created. In cases where your node pools are topology constrained, for example when using availability zones, PVs would be bound or provisioned without knowledge of the pod's scheduling requirements (in this case to be in a specific zone).
-To address this scenario, you can use `volumeBindingMode: WaitForFirstConsumer`, which delays the binding and provisioning of a PV until a pod that uses the PVC is created. In this way, the PV will conform and be provisioned in the availability zone (or other topology) that's specified by the pod's scheduling constraints. The default storage classes use `volumeBindingMode: WaitForFirstConsumer` class.
+To address this scenario, you can use `volumeBindingMode: WaitForFirstConsumer`, which delays the binding and provisioning of a PV until a pod that uses the PVC is created. This way, the PV conforms and is provisioned in the availability zone (or other topology) that's specified by the pod's scheduling constraints. The default storage classes use `volumeBindingMode: WaitForFirstConsumer` class.
-Create a file named `sc-azuredisk-csi-waitforfirstconsumer.yaml`, and paste the following manifest.
-The storage class is the same as our `managed-csi` storage class but with a different `volumeBindingMode` class.
+Create a file named `sc-azuredisk-csi-waitforfirstconsumer.yaml`, and then paste the following manifest. The storage class is the same as our `managed-csi` storage class, but with a different `volumeBindingMode` class.
```yaml kind: StorageClass
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer ```
-Create the storage class with the [kubectl apply][kubectl-apply] command, and specify your `sc-azuredisk-csi-waitforfirstconsumer.yaml` file:
+Create the storage class by running the [kubectl apply][kubectl-apply] command and specify your `sc-azuredisk-csi-waitforfirstconsumer.yaml` file:
```console $ kubectl apply -f sc-azuredisk-csi-waitforfirstconsumer.yaml
storageclass.storage.k8s.io/azuredisk-csi-waitforfirstconsumer created
The Azure disk CSI driver supports creating [snapshots of persistent volumes](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html). As part of this capability, the driver can perform either *full* or [*incremental* snapshots](../virtual-machines/disks-incremental-snapshots.md) depending on the value set in the `incremental` parameter (by default, it's true).
-For details on all the parameters, see [volume snapshot class parameters](https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/docs/driver-parameters.md#volumesnapshotclass).
+The following table provides details for all of the parameters.
+
+|Name | Meaning | Available Value | Mandatory | Default value |
+| --- | --- | --- | --- | --- |
+|resourceGroup | Resource group for storing snapshots | EXISTING RESOURCE GROUP | No | If not specified, snapshot will be stored in the same resource group as source Azure disk
+|incremental | Take [full or incremental snapshot](../virtual-machines/windows/incremental-snapshots.md) | `true`, `false` | No | `true`
+|tags | Azure disk [tags](../azure-resource-manager/management/tag-resources.md) | Tag format: `key1=val1,key2=val2` | No | ""
+|userAgent | User agent used for [customer usage attribution](../marketplace/azure-partner-customer-usage-attribution.md) | | No | Generated Useragent formatted `driverName/driverVersion compiler/version (OS-ARCH)`
+|subscriptionID | Specify Azure subscription ID in which Azure disk will be created | Azure subscription ID | No | If not empty, `resourceGroup` must be provided, and `incremental` must be set to `false`
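
For example, a minimal sketch of a volume snapshot class that uses these parameters (the class name is illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-azuredisk-vsc-example   # illustrative name
driver: disk.csi.azure.com
deletionPolicy: Delete
parameters:
  incremental: "true"   # take incremental snapshots (the default)
```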
### Create a volume snapshot
persistentvolumeclaim/pvc-azuredisk-cloning created
pod/nginx-restored-cloning created ```
-We can now check the content of the cloned volume by running the following command and confirming we still see our `test.txt` created file.
+You can verify the content of the cloned volume by running the following command and confirming that the `test.txt` file is still present.
```console $ kubectl exec nginx-restored-cloning -- ls /mnt/azuredisk
You can request a larger volume for a PVC. Edit the PVC object, and specify a la
> [!NOTE] > A new PV is never created to satisfy the claim. Instead, an existing volume is resized.
-In AKS, the built-in `managed-csi` storage class already allows for expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-disk-pvs-by-using-the-built-in-storage-classes). The PVC requested a 10-Gi persistent volume. We can confirm that by running:
+In AKS, the built-in `managed-csi` storage class already supports expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-disk-pvs-by-using-the-built-in-storage-classes). The PVC requested a 10-Gi persistent volume. You can confirm by running the following command:
```console $ kubectl exec -it nginx-azuredisk -- df -h /mnt/azuredisk
Filesystem Size Used Avail Use% Mounted on
``` > [!IMPORTANT]
-> Currently, Azure disk CSI driver supports resizing PVCs without downtime on specific regions.
+> Azure disk CSI driver supports resizing PVCs without downtime in specific regions.
> Follow this [link][expand-an-azure-managed-disk] to register the disk online resize feature. > If your cluster is not in the supported region list, you need to delete the application first to detach the disk from the node before expanding the PVC.
-Let's expand the PVC by increasing the `spec.resources.requests.storage` field:
+Expand the PVC by increasing the `spec.resources.requests.storage` field by running the following command:
```console $ kubectl patch pvc pvc-azuredisk --type merge --patch '{"spec": {"resources": {"requests": {"storage": "15Gi"}}}}'
$ kubectl patch pvc pvc-azuredisk --type merge --patch '{"spec": {"resources": {
persistentvolumeclaim/pvc-azuredisk patched ```
-Let's confirm the volume is now larger:
+Run the following command to confirm the volume size has increased:
```console $ kubectl get pv
pvc-391ea1a6-0191-4022-b915-c8dc4216174a 15Gi RWO Delete
(...) ```
-And after a few minutes, confirm the size of the PVC and inside the pod:
+And after a few minutes, run the following commands to confirm the size of the PVC and inside the pod:
```console $ kubectl get pvc pvc-azuredisk
Filesystem Size Used Avail Use% Mounted on
## Windows containers
-The Azure disk CSI driver also supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers quickstart][aks-quickstart-cli] to add a Windows node pool.
+The Azure disk CSI driver supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers quickstart][aks-quickstart-cli] to add a Windows node pool.
-After you have a Windows node pool, you can now use the built-in storage classes like `managed-csi`. You can deploy an example [Windows-based stateful set](https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/deploy/example/windows/statefulset.yaml) that saves timestamps into the file `data.txt` by deploying the following command with the [kubectl apply][kubectl-apply] command:
+After you have a Windows node pool, you can now use the built-in storage classes like `managed-csi`. You can deploy an example [Windows-based stateful set](https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/deploy/example/windows/statefulset.yaml) that saves timestamps into the file `data.txt` by running the following [kubectl apply][kubectl-apply] command:
```console $ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/windows/statefulset.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-c
statefulset.apps/busybox-azuredisk created ```
-You can now validate the contents of the volume by running:
+To validate the content of the volume, run the following command:
```console $ kubectl exec -it busybox-azuredisk-0 -- cat c:\\mnt\\azuredisk\\data.txt # on Linux/MacOS Bash
$ kubectl exec -it busybox-azuredisk-0 -- cat c:\mnt\azuredisk\data.txt # on Win
## Next steps -- To learn how to use CSI drivers for Azure Files, see [Use Azure Files with CSI drivers](azure-files-csi.md).
+- To learn how to use CSI driver for Azure Files, see [Use Azure Files with CSI driver](azure-files-csi.md).
- For more information about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage]. <!-- LINKS - external -->
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
Title: Create a static volume for pods in Azure Kubernetes Service (AKS)
description: Learn how to manually create a volume with Azure disks for use with a pod in Azure Kubernetes Service (AKS) Previously updated : 05/09/2019 Last updated : 05/17/2022 #Customer intent: As a developer, I want to learn how to manually create and attach storage to a specific pod in AKS.
-# Manually create and use a volume with Azure disks in Azure Kubernetes Service (AKS)
+# Create a static volume with Azure disks in Azure Kubernetes Service (AKS)
Container-based applications often need to access and persist data in an external data volume. If a single pod needs access to storage, you can use Azure disks to present a native volume for application use. This article shows you how to manually create an Azure disk and attach it to a pod in AKS.
For more information on Kubernetes volumes, see [Storage options for application
This article assumes that you have an existing AKS cluster with 1.21 or later version. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-If you want to interact with Azure Disks on an AKS cluster with 1.20 or previous version, see the [Kubernetes plugin for Azure Disks][kubernetes-disks].
+If you want to interact with Azure disks on an AKS cluster with 1.20 or previous version, see the [Kubernetes plugin for Azure disks][kubernetes-disks].
-## Create an Azure disk
-
-When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. If you instead create the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the `Contributor` role to the disk's resource group.
-
-For this article, create the disk in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster name *myAKSCluster* in the resource group name *myResourceGroup*:
+## Storage class static provisioning
-```azurecli-interactive
-$ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+The following table describes the Storage Class parameters for the Azure disk CSI driver static provisioning:
-MC_myResourceGroup_myAKSCluster_eastus
-```
+|Name | Meaning | Available Value | Mandatory | Default value|
+| --- | --- | --- | --- | --- |
+|volumeHandle| Azure disk URI | `/subscriptions/{sub-id}/resourcegroups/{group-name}/providers/microsoft.compute/disks/{disk-id}` | Yes | N/A|
+|volumeAttributes.fsType | File system type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows |
+|volumeAttributes.partition | Partition number of the existing disk (only supported on Linux) | `1`, `2`, `3` | No | Empty (no partition) <br>- Make sure partition format is like `-part1` |
+|volumeAttributes.cachingMode | [Disk host cache setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching)| `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
-Now create a disk using the [az disk create][az-disk-create] command. Specify the node resource group name obtained in the previous command, and then a name for the disk resource, such as *myAKSDisk*. The following example creates a *20*GiB disk, and outputs the ID of the disk once created. If you need to create a disk for use with Windows Server containers, add the `--os-type windows` parameter to correctly format the disk.
-
-```azurecli-interactive
-az disk create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
- --name myAKSDisk \
- --size-gb 20 \
- --query id --output tsv
-```
+## Create an Azure disk
-> [!NOTE]
-> Azure disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. See [Pricing and Performance of Managed Disks][managed-disk-pricing-performance].
-
-The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. This disk ID is used to mount the disk in the next step.
-
-```console
-/subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
-```
-
-## Mount disk as volume
-Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with disk resource ID. For example:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolume
-metadata:
- name: pv-azuredisk
-spec:
- capacity:
- storage: 20Gi
- accessModes:
- - ReadWriteOnce
- persistentVolumeReclaimPolicy: Retain
- storageClassName: managed-csi
- csi:
- driver: disk.csi.azure.com
- readOnly: false
- volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
- volumeAttributes:
- fsType: ext4
-```
-
-Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: pvc-azuredisk
-spec:
- accessModes:
- - ReadWriteOnce
- resources:
- requests:
- storage: 20Gi
- volumeName: pv-azuredisk
- storageClassName: managed-csi
-```
-
-Use the `kubectl` commands to create the *PersistentVolume* and *PersistentVolumeClaim*.
-
-```console
-kubectl apply -f pv-azuredisk.yaml
-kubectl apply -f pvc-azuredisk.yaml
-```
-
-Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*.
-
-```console
-$ kubectl get pvc pvc-azuredisk
-
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-pvc-azuredisk Bound pv-azuredisk 20Gi RWO 5s
-```
-
-Create a *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: mypod
-spec:
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- name: mypod
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - name: azure
- mountPath: /mnt/azure
- volumes:
- - name: azure
- persistentVolumeClaim:
- claimName: pvc-azuredisk
-```
-
-```console
-kubectl apply -f azure-disk-pod.yaml
-```
+When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. If you instead create the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the `Contributor` role on the disk's resource group. In this exercise, you're going to create the disk in the node resource group.
+
+1. Identify the resource group name using the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` parameter. The following example gets the node resource group for the AKS cluster name *myAKSCluster* in the resource group name *myResourceGroup*:
+
+ ```azurecli-interactive
+ $ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+
+ MC_myResourceGroup_myAKSCluster_eastus
+ ```
+
+2. Create a disk using the [az disk create][az-disk-create] command. Specify the node resource group name obtained in the previous command, and then a name for the disk resource, such as *myAKSDisk*. The following example creates a 20-GiB disk, and outputs the ID of the disk after it's created. If you need to create a disk for use with Windows Server containers, add the `--os-type windows` parameter to correctly format the disk.
+
+ ```azurecli-interactive
+ az disk create \
+ --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --name myAKSDisk \
+ --size-gb 20 \
+ --query id --output tsv
+ ```
+
+ > [!NOTE]
+ > Azure disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. See [Pricing and Performance of Managed Disks][managed-disk-pricing-performance].
+
+ The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. This disk ID is used to mount the disk in the next section.
+
+ ```console
+ /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+ ```
+
+## Mount disk as a volume
+
+1. Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with the disk resource ID from the previous step. For example:
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: pv-azuredisk
+ spec:
+ capacity:
+ storage: 20Gi
+ accessModes:
+ - ReadWriteOnce
+ persistentVolumeReclaimPolicy: Retain
+ storageClassName: managed-csi
+ csi:
+ driver: disk.csi.azure.com
+ readOnly: false
+ volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+ volumeAttributes:
+ fsType: ext4
+ ```
+
+2. Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: pvc-azuredisk
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 20Gi
+ volumeName: pv-azuredisk
+ storageClassName: managed-csi
+ ```
+
+3. Use the `kubectl` commands to create the *PersistentVolume* and *PersistentVolumeClaim*, referencing the two YAML files created earlier:
+
+ ```console
+ kubectl apply -f pv-azuredisk.yaml
+ kubectl apply -f pvc-azuredisk.yaml
+ ```
+
+4. To verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*, run the
+following command:
+
+ ```console
+ $ kubectl get pvc pvc-azuredisk
+
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ pvc-azuredisk Bound pv-azuredisk 20Gi RWO 5s
+ ```
+
+5. Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: mypod
+ spec:
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ name: mypod
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - name: azure
+ mountPath: /mnt/azure
+ volumes:
+ - name: azure
+ persistentVolumeClaim:
+ claimName: pvc-azuredisk
+ ```
+
+6. Run the following command to apply the configuration and mount the volume, referencing the YAML
+configuration file created in the previous steps:
+
+ ```console
+ kubectl apply -f azure-disk-pod.yaml
+ ```
## Next steps
-For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
+To learn about our recommended storage and backup practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
<!-- LINKS - external --> [kubernetes-disks]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_disk/README.md
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS)
+ Title: Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)
description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 05/06/2022 Last updated : 05/23/2022
-# Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS)
+# Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, Azure Kubernetes Service (AKS) can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes without having to touch the core Kubernetes code and wait for its release cycles.
The CSI storage driver support on AKS allows you to natively use:
> > *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code opposed to the new CSI drivers, which are plug-ins.
+> [!NOTE]
+> Azure disk CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure disk CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ## Migrate custom in-tree storage classes to CSI If you created in-tree driver storage classes, those storage classes continue to work since CSI migration is turned on after upgrading your cluster to 1.21.x. If you want to use CSI features, you'll need to perform the migration.
parameters:
## Migrate in-tree persistent volumes > [!IMPORTANT]
-> If your in-tree persistent volume `reclaimPolicy` is set to **Delete**, you need to change its policy to **Retain** to persist your data. This can be achieved using a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
+> If your in-tree persistent volume `reclaimPolicy` is set to **Delete**, you need to change its policy to **Retain** to persist your data. This can be achieved using a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
> > ```console > $ kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
If you have in-tree Azure File persistent volumes, get `secretName`, `shareName`
<!-- LINKS - internal --> [azure-disk-volume]: azure-disk-volume.md
-[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-volume
+[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-a-volume
[azure-file-static-mount]: azure-files-volume.md#mount-file-share-as-a-persistent-volume [azure-files-pvc]: azure-files-dynamic-pv.md [premium-storage]: ../virtual-machines/disks-types.md
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[k8s-version-support-policy]: ./supported-kubernetes-versions.md?tabs=azure-cli#kubernetes-version-support-policy [arc-k8s-cluster]: ../azure-arc/kubernetes/quickstart-connect-cluster.md [update-extension]: ./cluster-extensions.md#update-extension-instance
-[install-cli]: https://docs.microsoft.com/cli/azure/install-azure-cli
+[install-cli]: /cli/azure/install-azure-cli
<!-- LINKS EXTERNAL --> [kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[dapr-oss-support]: https://docs.dapr.io/operations/support/support-release-policy/ [dapr-supported-version]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions [dapr-troubleshooting]: https://docs.dapr.io/operations/troubleshooting/common_issues/
-[supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
+[supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
aks Draft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft.md
You can also run the command on a specific directory using the `--destination` f
az aks draft up --destination /Workspaces/ContosoAir ```
+## Use Web Application Routing with Draft to make your application accessible over the internet
+
+[Web Application Routing][web-app-routing] is the easiest way to get your web application up and running in Kubernetes securely. It removes the complexity of ingress controllers and of certificate and DNS management, while offering configuration options for enterprises that want to bring their own. Web Application Routing offers a managed ingress controller based on nginx that you can use without restrictions, and it integrates out of the box with Open Service Mesh to secure intra-cluster communications.
+
+To set up Draft with Web Application Routing, use `az aks draft update` and pass in the DNS name and Azure Key Vault-stored certificate when prompted:
+
+```azure-cli-interactive
+az aks draft update
+```
+
+You can also run the command on a specific directory using the `--destination` flag:
+
+```azure-cli-interactive
+az aks draft update --destination /Workspaces/ContosoAir
+```
+ <!-- LINKS INTERNAL --> [deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md [az-feature-register]: /cli/azure/feature#az-feature-register
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
The below table shows the available add-ons.
| ingress-appgw | Use Application Gateway Ingress Controller with your AKS cluster. | [What is Application Gateway Ingress Controller?][agic] | | open-service-mesh | Use Open Service Mesh with your AKS cluster. | [Open Service Mesh AKS add-on][osm] | | azure-keyvault-secrets-provider | Use Azure Keyvault Secrets Provider addon.| [Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster][keyvault-secret-provider] |
+| web_application_routing | Use a managed NGINX ingress controller with your AKS cluster.| [Web Application Routing Overview][web-app-routing] |
## Extensions
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
You require at least one Log Analytics workspace to support Container insights a
If you're just getting started with Azure Monitor, then start with a single workspace and consider creating additional workspaces as your requirements evolve. Many environments will use a single workspace for all the Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](../azure-monitor/vm/monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data.
-See [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md) for details on logic that you should consider for designing a workspace configuration.
+See [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/workspace-design.md) for details on logic that you should consider for designing a workspace configuration.
### Enable container insights When you enable Container insights for your AKS cluster, it deploys a containerized version of the [Log Analytics agent](../agents/../azure-monitor/agents/log-analytics-agent.md) that sends data to Azure Monitor. There are multiple methods to enable it depending whether you're working with a new or existing AKS cluster. See [Enable Container insights](../azure-monitor/containers/container-insights-onboard.md) for prerequisites and configuration options.
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
A successful cluster creation using your own managed identities contains this us
A Kubelet identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as connection to ACR with a pre-created managed identity.
+> [!WARNING]
+> Updating the kubelet managed identity upgrades the node pool, which causes downtime for your AKS cluster because the nodes in the node pools are cordoned, drained, and reimaged.
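
As a minimal sketch (the identity resource IDs are placeholders, not values from this article), a pre-created kubelet identity is supplied at cluster creation time together with a control plane identity:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-managed-identity \
    --assign-identity <control-plane-identity-resource-id> \
    --assign-kubelet-identity <kubelet-identity-resource-id>
```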
+ ### Prerequisites - You must have the Azure CLI, version 2.26.0 or later installed.
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
description: Learn how to secure traffic that flows in and out of pods by using Kubernetes network policies in Azure Kubernetes Service (AKS) Previously updated : 03/16/2021 Last updated : 03/29/2022
az network vnet create \
--subnet-prefix 10.240.0.0/16 # Create a service principal and read in the application ID
-SP=$(az ad sp create-for-rbac --role Contributor --output json)
+SP=$(az ad sp create-for-rbac --output json)
SP_ID=$(echo $SP | jq -r .appId) SP_PASSWORD=$(echo $SP | jq -r .password)
kubectl run backend --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine --la
Create another pod and attach a terminal session to test that you can successfully reach the default NGINX webpage: ```console
-kubectl run --rm -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 network-policy --namespace development
+kubectl run --rm -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 network-policy --namespace development
+```
+
+Install `wget`:
+
+```console
+apt-get update && apt-get install -y wget
``` At the shell prompt, use `wget` to confirm that you can access the default NGINX webpage:
kubectl apply -f backend-policy.yaml
Let's see if you can use the NGINX webpage on the back-end pod again. Create another test pod and attach a terminal session: ```console
-kubectl run --rm -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 network-policy --namespace development
+kubectl run --rm -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 network-policy --namespace development
+```
+
+Install `wget`:
+
+```console
+apt-get update && apt-get install -y wget
``` At the shell prompt, use `wget` to see if you can access the default NGINX webpage. This time, set a timeout value to *2* seconds. The network policy now blocks all inbound traffic, so the page can't be loaded, as shown in the following example:
kubectl apply -f backend-policy.yaml
Schedule a pod that is labeled as *app=webapp,role=frontend* and attach a terminal session: ```console
-kubectl run --rm -it frontend --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --labels app=webapp,role=frontend --namespace development
+kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace development
+```
+
+Install `wget`:
+
+```console
+apt-get update && apt-get install -y wget
``` At the shell prompt, use `wget` to see if you can access the default NGINX webpage:
exit
The network policy allows traffic from pods labeled *app: webapp,role: frontend*, but should deny all other traffic. Let's test to see whether another pod without those labels can access the back-end NGINX pod. Create another test pod and attach a terminal session: ```console
-kubectl run --rm -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 network-policy --namespace development
+kubectl run --rm -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 network-policy --namespace development
+```
+
+Install `wget`:
+
+```console
+apt-get update && apt-get install -y wget
``` At the shell prompt, use `wget` to see if you can access the default NGINX webpage. The network policy blocks the inbound traffic, so the page can't be loaded, as shown in the following example:
kubectl label namespace/production purpose=production
Schedule a test pod in the *production* namespace that is labeled as *app=webapp,role=frontend*. Attach a terminal session: ```console
-kubectl run --rm -it frontend --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --labels app=webapp,role=frontend --namespace production
+kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace production
+```
+
+Install `wget`:
+
+```console
+apt-get update && apt-get install -y wget
``` At the shell prompt, use `wget` to confirm that you can access the default NGINX webpage:
kubectl apply -f backend-policy.yaml
Schedule another pod in the *production* namespace and attach a terminal session: ```console
-kubectl run --rm -it frontend --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --labels app=webapp,role=frontend --namespace production
+kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace production
+```
+
+Install `wget`:
+
+```console
+apt-get update && apt-get install -y wget
``` At the shell prompt, use `wget` to see that the network policy now denies traffic:
exit
With traffic denied from the *production* namespace, schedule a test pod back in the *development* namespace and attach a terminal session: ```console
-kubectl run --rm -it frontend --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --labels app=webapp,role=frontend --namespace development
+kubectl run --rm -it frontend --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --labels app=webapp,role=frontend --namespace development
+```
+
+Install `wget`:
+
+```console
+apt-get update && apt-get install -y wget
``` At the shell prompt, use `wget` to see that the network policy allows the traffic:
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
+
+ Title: Web Application Routing add-on on Azure Kubernetes Service (AKS) (Preview)
+description: Use the Web Application Routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS).
+++ Last updated : 05/13/2021+++
+# Web Application Routing (Preview)
+
+The Web Application Routing solution makes it easy to access applications that are deployed to your Azure Kubernetes Service (AKS) cluster. When the solution is enabled, it configures an [Ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your AKS cluster, along with SSL termination and Open Service Mesh (OSM) for end-to-end encryption of intra-cluster communication. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
++
+## Limitations
+
+- Web Application Routing currently doesn't support named ports in ingress backend.
+
+## Web Application Routing solution overview
+
+The add-on deploys four components: an [nginx ingress controller][nginx], [Secrets Store CSI Driver][csi-driver], [Open Service Mesh (OSM)][osm], and [External-DNS][external-dns] controller.
+
+- **Nginx ingress controller**: The ingress controller exposed to the internet.
+- **External-DNS controller**: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone.
+- **CSI driver**: Connector used to communicate with the key vault to retrieve SSL certificates for the ingress controller.
+- **OSM**: A lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Install the `aks-preview` Azure CLI extension
+
+You also need the *aks-preview* Azure CLI extension version `0.5.75` or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Install the `osm` CLI
+
+Since Web Application Routing uses OSM internally to secure intra-cluster communication, you need to set up the `osm` CLI. This command-line tool contains everything needed to install and configure Open Service Mesh. The binary is available on the [OSM GitHub releases page][osm-release].
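
The following is a minimal sketch for downloading the `osm` binary on Linux amd64; the release version shown is illustrative, so pick the version and platform that match your environment from the releases page.

```bash
OSM_VERSION=v1.0.0   # illustrative; use the release you need
curl -sL "https://github.com/openservicemesh/osm/releases/download/${OSM_VERSION}/osm-${OSM_VERSION}-linux-amd64.tar.gz" | tar -xzf -
sudo mv ./linux-amd64/osm /usr/local/bin/osm
osm version
```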
+
+## Deploy Web Application Routing with the Azure CLI
+
+The Web Application Routing add-on can be enabled with the Azure CLI when deploying an AKS cluster. To do so, use the [az aks create][az-aks-create] command with the `--enable-addons` argument.
+
+```azurecli
+az aks create --resource-group myResourceGroup --name myAKSCluster --enable-addons web_application_routing
+```
+
+> [!TIP]
+> If you want to enable multiple add-ons, provide them as a comma-separated list. For example, to enable Web Application Routing and monitoring, use the format `--enable-addons web_application_routing,monitoring`.
+
+You can also enable Web Application Routing on an existing AKS cluster using the [az aks enable-addons][az-aks-enable-addons] command. To enable Web Application Routing on an existing cluster, add the `--addons` parameter and specify *web_application_routing* as shown in the following example:
+
+```azurecli
+az aks enable-addons --resource-group myResourceGroup --name myAKSCluster --addons web_application_routing
+```
+
+After the cluster is deployed or updated, use the [az aks show][az-aks-show] command to retrieve the DNS zone name.
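
A minimal sketch, assuming the DNS zone details are surfaced under the cluster's add-on profiles (the exact property path may vary by CLI and add-on version):

```azurecli
az aks show --resource-group myResourceGroup --name myAKSCluster --query addonProfiles
```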
+
+## Connect to your AKS cluster
+
+To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client.
+
+If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the `az aks install-cli` command:
+
+```azurecli
+az aks install-cli
+```
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials][az-aks-get-credentials] command. The following example gets credentials for the AKS cluster named *MyAKSCluster* in the *MyResourceGroup*:
+
+```azurecli
+az aks get-credentials --resource-group MyResourceGroup --name MyAKSCluster
+```
+
+## Create the application namespace
+
+For the sample application environment, let's first create a namespace called `hello-web-app-routing` to run the example pods:
+
+```bash
+kubectl create namespace hello-web-app-routing
+```
+
+We also need to add the application namespace to the OSM control plane:
+
+```bash
+osm namespace add hello-web-app-routing
+```
+
+## Grant permissions for Web Application Routing
+
+Within the cluster resource group, identify the Web Application Routing-associated managed identity, which is named `webapprouting-<CLUSTER_NAME>`. In this walkthrough, the identity is named `webapprouting-myakscluster`.
++
+Copy the identity's object ID:
++
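For example, a sketch that reads the object (principal) ID with the Azure CLI, assuming the identity lives in the resource group described above (substitute `<CLUSTER_RESOURCE_GROUP>` with the resource group where the identity was created):

```azurecli
az identity show --resource-group <CLUSTER_RESOURCE_GROUP> --name webapprouting-myakscluster --query principalId --output tsv
```
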
+### Grant access to Azure Key Vault
+
+Grant `GET` permissions for Web Application Routing to retrieve certificates from Azure Key Vault:
+
+```azurecli
+az keyvault set-policy --name myapp-contoso --object-id <WEB_APP_ROUTING_MSI_OBJECT_ID> --secret-permissions get --certificate-permissions get
+```
+
+## Use Web Application Routing
+
+The Web Application Routing solution may only be triggered on service resources that are annotated as follows:
+
+```yaml
+annotations:
+ kubernetes.azure.com/ingress-host: myapp.contoso.com
+ kubernetes.azure.com/tls-cert-keyvault-uri: myapp-contoso.vault.azure.net
+```
+
+These annotations in the service manifest direct Web Application Routing to create an ingress serving `myapp.contoso.com`, connected to the key vault `myapp-contoso`.
+
+Create a file named **samples-web-app-routing.yaml** and copy in the following YAML. In the service annotations, update `<MY_HOSTNAME>` with the DNS zone name collected in the previous step of this article and `<MY_KEYVAULT_URI>` with your key vault URI.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: aks-helloworld
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aks-helloworld
+ template:
+ metadata:
+ labels:
+ app: aks-helloworld
+ spec:
+ containers:
+ - name: aks-helloworld
+ image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: TITLE
+ value: "Welcome to Azure Kubernetes Service (AKS)"
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: aks-helloworld
+  annotations:
+    kubernetes.azure.com/ingress-host: <MY_HOSTNAME>
+    kubernetes.azure.com/tls-cert-keyvault-uri: <MY_KEYVAULT_URI>
+spec:
+ type: ClusterIP
+ ports:
+ - port: 80
+ selector:
+ app: aks-helloworld
+```
+
+Use the [kubectl apply][kubectl-apply] command to create the resources.
+
+```bash
+kubectl apply -f samples-web-app-routing.yaml -n hello-web-app-routing
+```
+
+The following example shows the created resources:
+
+```bash
+$ kubectl apply -f samples-web-app-routing.yaml -n hello-web-app-routing
+
+deployment.apps/aks-helloworld created
+service/aks-helloworld created
+```
+
+## Verify the managed ingress was created
+
+```bash
+$ kubectl get ingress -n hello-web-app-routing
+```
+
+Open a web browser to *<MY_HOSTNAME>*, for example *myapp.contoso.com*, and verify that you see the demo application. The application may take a few minutes to appear.
+
+## Remove Web Application Routing
+
+First, remove the associated namespace:
+
+```bash
+kubectl delete namespace hello-web-app-routing
+```
+
+The Web Application Routing add-on can be removed using the Azure CLI. To do so, run the following command, substituting your AKS cluster and resource group names.
+
+```azurecli
+az aks disable-addons --addons web_application_routing --name myAKSCluster --resource-group myResourceGroup --no-wait
+```
+
+When the Web Application Routing add-on is disabled, some Kubernetes resources may remain in the cluster. These resources include *configMaps* and *secrets*, and are created in the *app-routing-system* namespace. To maintain a clean cluster, you may want to remove these resources.
+
+Look for *addon-web-application-routing* resources using the following [kubectl get][kubectl-get] commands:
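
For example, a minimal sketch that lists the resources remaining in the *app-routing-system* namespace:

```bash
kubectl get configmaps -n app-routing-system
kubectl get secrets -n app-routing-system
```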
+
+## Clean up
+
+Remove the associated Kubernetes objects created in this article using `kubectl delete`.
+
+```bash
+kubectl delete -f samples-web-app-routing.yaml
+```
+
+The example output shows Kubernetes objects have been removed.
+
+```bash
+$ kubectl delete -f samples-web-app-routing.yaml
+
+deployment "aks-helloworld" deleted
+service "aks-helloworld" deleted
+```
+
+<!-- LINKS - internal -->
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[ingress-https]: ./ingress-tls.md
+[az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons
+[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[csi-driver]: https://github.com/Azure/secrets-store-csi-driver-provider-azure
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+
+<!-- LINKS - external -->
+[osm-release]: https://github.com/openservicemesh/osm/releases/
+[nginx]: https://kubernetes.github.io/ingress-nginx/
+[osm]: https://openservicemesh.io/
+[external-dns]: https://github.com/kubernetes-incubator/external-dns
+[kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
+[kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
+[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
+[ingress-resource]: https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
This policy can be used in the following policy [sections](./api-management-howt
- **Policy sections:** inbound - **Policy scopes:** all scopes
+> [!NOTE]
+> If you configure this policy at more than one scope, IP filtering is applied in the order of [policy evaluation](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order) in your policy definition.
+ ## <a name="SetUsageQuota"></a> Set usage quota by subscription The `quota` policy enforces a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis.
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
Title: Authorize developer accounts by using Azure Active Directory
+ Title: Authorize access to API Management developer portal by using Azure AD
-description: Learn how to authorize users by using Azure Active Directory in API Management.
---
+description: Learn how to enable user sign-in to the API Management developer portal by using Azure Active Directory.
+ - Previously updated : 09/20/2021 Last updated : 05/20/2022
In this article, you'll learn how to:
- Complete the [Create an Azure API Management instance](get-started-create-service-instance.md) quickstart. -- [Import and publish](import-and-publish.md) an Azure API Management instance.
+- [Import and publish](import-and-publish.md) an API in the Azure API Management instance.
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)] [!INCLUDE [premium-dev-standard.md](../../includes/api-management-availability-premium-dev-standard.md)]
-## Authorize developer accounts by using Azure AD
++
+## Enable user sign-in using Azure AD - portal
+
+To simplify the configuration, API Management can automatically enable an Azure AD application and identity provider for users of the developer portal. Alternatively, you can manually enable the Azure AD application and identity provider.
+
+### Automatically enable Azure AD application and identity provider
+
+1. In the left menu of your API Management instance, under **Developer portal**, select **Portal overview**.
+1. On the **Portal overview** page, scroll down to **Enable user sign-in with Azure Active Directory**.
+1. Select **Enable Azure AD**.
+1. On the **Enable Azure AD** page, select **Enable Azure AD**.
+1. Select **Close**.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select ![Arrow icon.](./media/api-management-howto-aad/arrow.png).
-1. Search for and select **API Management services**.
-1. Select your API Management service instance.
-1. Under **Developer portal**, select **Identities**.
+ :::image type="content" source="media/api-management-howto-aad/enable-azure-ad-portal.png" alt-text="Screenshot of enabling Azure AD in the developer portal overview page.":::
+
+After the Azure AD provider is enabled:
+
+* Users in the specified Azure AD instance can [sign into the developer portal by using an Azure AD account](#log_in_to_dev_portal).
+* You can manage the Azure AD configuration on the **Developer portal** > **Identities** page in the portal.
+* Optionally configure other sign-in settings by selecting **Identities** > **Settings**. For example, you might want to redirect anonymous users to the sign-in page.
+* Republish the developer portal after any configuration change.
+
+### Manually enable Azure AD application and identity provider
+
+1. In the left menu of your API Management instance, under **Developer portal**, select **Identities**.
1. Select **+Add** from the top to open the **Add identity provider** pane to the right. 1. Under **Type**, select **Azure Active Directory** from the drop-down menu. * Once selected, you'll be able to enter other necessary information.
In this article, you'll learn how to:
* See more information about these controls later in the article. 1. Save the **Redirect URL** for later.
- :::image type="content" source="media/api-management-howto-aad/api-management-with-aad001.png" alt-text="Add identity provider in Azure portal":::
+ :::image type="content" source="media/api-management-howto-aad/api-management-with-aad001.png" alt-text="Screenshot of adding identity provider in Azure portal.":::
> [!NOTE] > There are two redirect URLs:<br/>
In this article, you'll learn how to:
1. Navigate to [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) to register an app in Active Directory. 1. Select **New registration**. On the **Register an application** page, set the values as follows:
- * Set **Name** to a meaningful name. e.g., *developer-portal*
+ * Set **Name** to a meaningful name such as *developer-portal*
* Set **Supported account types** to **Accounts in this organizational directory only**.
- * Set **Redirect URI** to the value you saved from step 9.
+ * In **Redirect URI**, select **Web** and paste the redirect URL you saved from a previous step.
* Select **Register**. 1. After you've registered the application, copy the **Application (client) ID** from the **Overview** page.
In this article, you'll learn how to:
* Choose **Add**. 1. Copy the client **Secret value** before leaving the page. You will need it later. 1. Under **Manage** in the side menu, select **Authentication**.
-1. Under the **Implicit grant and hybrid flows** sections, select the **ID tokens** checkbox.
+ 1. Under the **Implicit grant and hybrid flows** section, select the **ID tokens** checkbox.
+ 1. Select **Save**.
+1. Under **Manage** in the side menu, select **Token configuration** > **+ Add optional claim**.
+ 1. In **Token type**, select **ID**.
+ 1. Select (check) the following claims: **email**, **family_name**, **given_name**.
+ 1. Select **Add**. If prompted, select **Turn on the Microsoft Graph email, profile permission**.
1. Switch to the browser tab with your API Management instance. 1. Paste the secret into the **Client secret** field in the **Add identity provider** pane. > [!IMPORTANT] > Update the **Client secret** before the key expires.
-1. In the **Add identity provider** pane's **Allowed Tenants** field, specify the Azure AD instances' domains to which you want to grant access to the API Management service instance APIs.
+1. In the **Add identity provider** pane's **Allowed tenants** field, specify the Azure AD instance's domains to which you want to grant access to the API Management service instance APIs.
* You can separate multiple domains with newlines, spaces, or commas. > [!NOTE]
In this article, you'll learn how to:
> 1. Enter the domain name of the Azure AD tenant to which they want to grant access. > 1. Select **Submit**.
-1. After you specify the desired configuration, select **Add**.
+1. After you specify the desired configuration, select **Add**.
+1. Republish the developer portal for the Azure AD configuration to take effect. In the left menu, under **Developer portal**, select **Portal overview** > **Publish**.
-Once changes are saved, users in the specified Azure AD instance can [sign into the developer portal by using an Azure AD account](#log_in_to_dev_portal).
+After the Azure AD provider is enabled:
+
+* Users in the specified Azure AD instance can [sign into the developer portal by using an Azure AD account](#log_in_to_dev_portal).
+* You can manage the Azure AD configuration on the **Developer portal** > **Identities** page in the portal.
+* Optionally configure other sign-in settings by selecting **Identities** > **Settings**. For example, you might want to redirect anonymous users to the sign-in page.
+* Republish the developer portal after any configuration change.
## Add an external Azure AD group
Follow these steps to grant:
az rest --method PATCH --uri "https://graph.microsoft.com/v1.0/$($tenantId)/applications/$($appObjectID)" --body "{'requiredResourceAccess':[{'resourceAccess': [{'id': 'e1fe6dd8-ba31-4d61-89e7-88639da4683d','type': 'Scope'},{'id': '7ab1d382-f21e-4acd-a863-ba3e13f7da61','type': 'Role'}],'resourceAppId': '00000003-0000-0000-c000-000000000000'}]}" ```
-2. Log out and log back in to the Azure portal.
-3. Navigate to the App Registration page for the application you registered in [the previous section](#authorize-developer-accounts-by-using-azure-ad).
-4. Click **API Permissions**. You should see the permissions granted by the Azure CLI script in step 1.
-5. Select **Grant admin consent for {tenantname}** so that you grant access for all users in this directory.
+1. Sign out and sign back in to the Azure portal.
+1. Navigate to the App Registration page for the application you registered in [the previous section](#enable-user-sign-in-using-azure-adportal).
+1. Select **API Permissions**. You should see the permissions granted by the Azure CLI script in step 1.
+1. Select **Grant admin consent for {tenantname}** so that you grant access for all users in this directory.
Now you can add external Azure AD groups from the **Groups** tab of your API Management instance. 1. Under **Developer portal** in the side menu, select **Groups**.
-2. Select the **Add Azure AD group** button.
+1. Select the **Add Azure AD group** button.
- !["Add A A D group" button](./media/api-management-howto-aad/api-management-with-aad008.png)
+ !["Screenshot showing Add Azure AD group button.](./media/api-management-howto-aad/api-management-with-aad008.png)
1. Select the **Tenant** from the drop-down.
-2. Search for and select the group that you want to add.
-3. Press the **Select** button.
+1. Search for and select the group that you want to add.
+1. Press the **Select** button.
Once you add an external Azure AD group, you can review and configure its properties: 1. Select the name of the group from the **Groups** tab.
Users from the configured Azure AD instance can now:
* View and subscribe to any groups for which they have visibility. > [!NOTE]
-> Learn more about the difference between **Delegated** and **Application** permissions types in [Permissions and consent in the Microsoft identity platform](../active-directory/develop/v2-permissions-and-consent.md#permission-types) article.
+> Learn more about the difference between the **Delegated** and **Application** permission types in the [Permissions and consent in the Microsoft identity platform](../active-directory/develop/v2-permissions-and-consent.md#permission-types) article.
## <a id="log_in_to_dev_portal"></a> Developer portal: Add Azure AD account authentication In the developer portal, you can sign in with Azure AD using the **Sign-in button: OAuth** widget included on the sign-in page of the default developer portal content. ++ Although a new account will automatically be created when a new user signs in with Azure AD, consider adding the same widget to the sign-up page. The **Sign-up form: OAuth** widget represents a form used for signing up with OAuth. > [!IMPORTANT]
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
All of the tasks that you do on resources using the Azure Resource Manager must
Before calling the APIs that generate the backup and restore, you need to get a token. The following example uses the [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package to retrieve the token. > [!IMPORTANT]
-> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
```csharp using Microsoft.IdentityModel.Clients.ActiveDirectory;
API Management **Premium** tier also supports [zone redundancy](zone-redundancy.
[api-management-arm-token]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-arm-token.png [api-management-endpoint]: ./media/api-management-howto-disaster-recovery-backup-restore/api-management-endpoint.png [control-plane-ip-address]: virtual-network-reference.md#control-plane-ip-addresses
-[azure-storage-ip-firewall]: ../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range
+[azure-storage-ip-firewall]: ../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
The `context` variable is implicitly available in every policy [expression](api-
|-|-| |context|[Api](#ref-context-api): [IApi](#ref-iapi)<br /><br /> [Deployment](#ref-context-deployment)<br /><br /> Elapsed: TimeSpan - time interval between the value of Timestamp and current time<br /><br /> [LastError](#ref-context-lasterror)<br /><br /> [Operation](#ref-context-operation)<br /><br /> [Product](#ref-context-product)<br /><br /> [Request](#ref-context-request)<br /><br /> RequestId: Guid - unique request identifier<br /><br /> [Response](#ref-context-response)<br /><br /> [Subscription](#ref-context-subscription)<br /><br /> Timestamp: DateTime - point in time when request was received<br /><br /> Tracing: bool - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [Variables](#ref-context-variables): IReadOnlyDictionary<string, object><br /><br /> void Trace(message: string)| |<a id="ref-context-api"></a>context.Api|Id: string<br /><br /> IsCurrentRevision: bool<br /><br /> Name: string<br /><br /> Path: string<br /><br /> Revision: string<br /><br /> ServiceUrl: [IUrl](#ref-iurl)<br /><br /> Version: string |
-|<a id="ref-context-deployment"></a>context.Deployment|GatewayId: string (returns 'managed' for managed gateways)<br /><br /> Region: string<br /><br /> ServiceName: string<br /><br /> Certificates: IReadOnlyDictionary<string, X509Certificate2>|
+|<a id="ref-context-deployment"></a>context.Deployment|GatewayId: string (returns 'managed' for managed gateways)<br /><br /> Region: string<br /><br /> ServiceId: string<br /><br /> ServiceName: string<br /><br /> Certificates: IReadOnlyDictionary<string, X509Certificate2>|
|<a id="ref-context-lasterror"></a>context.LastError|Source: string<br /><br /> Reason: string<br /><br /> Message: string<br /><br /> Scope: string<br /><br /> Section: string<br /><br /> Path: string<br /><br /> PolicyId: string<br /><br /> For more information about context.LastError, see [Error handling](api-management-error-handling-policies.md).| |<a id="ref-context-operation"></a>context.Operation|Id: string<br /><br /> Method: string<br /><br /> Name: string<br /><br /> UrlTemplate: string| |<a id="ref-context-product"></a>context.Product|Apis: IEnumerable<[IApi](#ref-iapi)\><br /><br /> ApprovalRequired: bool<br /><br /> Groups: IEnumerable<[IGroup](#ref-igroup)\><br /><br /> Id: string<br /><br /> Name: string<br /><br /> State: enum ProductState {NotPublished, Published}<br /><br /> SubscriptionLimit: int?<br /><br /> SubscriptionRequired: bool|
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
For example, insert the policy fragment named *ForwardContext* in the inbound po
``` > [!TIP]
-> To see the content of an included fragment displayed in the policy definition, select **Recalculate effective policy** in the policy editor.
+> To see the content of an included fragment displayed in the policy definition, select **Calculate effective policy** in the policy editor.
## Manage policy fragments
api-management Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/private-endpoint.md
Previously updated : 02/23/2022 Last updated : 03/31/2022
You can configure a [private endpoint](../private-link/private-endpoint-overview
* Configure custom DNS settings or an Azure DNS private zone to map the API Management hostname to the endpoint's private IP address. + With a private endpoint and Private Link, you can: - Create multiple Private Link connections to an API Management instance.
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
Title: Azure API Management with an Azure virtual network
-description: Learn about scenarios and requirements to connect your API Management instance to an Azure virtual network.
+description: Learn about scenarios and requirements to secure your API Management instance using an Azure virtual network.
Previously updated : 01/14/2022 Last updated : 05/26/2022 # Use a virtual network with Azure API Management
-With Azure virtual networks (VNets), you can place ("inject") your API Management instance in a non-internet-routable network to which you control access. In a virtual network, your API Management instance can securely access other networked Azure resources and also connect to on-premises networks using various VPN technologies. To learn more about Azure VNets, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
+API Management provides several options to secure access to your API Management instance and APIs using an Azure virtual network. API Management supports the following options, which are mutually exclusive:
+
+* **Integration (injection)** of the API Management instance into the virtual network, enabling the gateway to access resources in the network.
+
+ You can choose one of two integration modes: *external* or *internal*. They differ in whether inbound connectivity to the gateway and other API Management endpoints is allowed from the internet or only from within the virtual network.
+
+* **Enabling secure and private connectivity** to the API Management gateway using a *private endpoint* (preview).
-> [!TIP]
-> API Management also supports [private endpoints](../private-link/private-endpoint-overview.md). A private endpoint enables secure client connectivity to your API Management instance using a private IP address from your virtual network and Azure Private Link. [Learn more](private-endpoint.md) about using private endpoints with API Management.
+The following table compares virtual networking options. For more information, see later sections of this article and links to detailed guidance.
+
+|Networking model |Supported tiers |Supported components |Supported traffic |Usage scenario |
+|||||-|
+|**[Virtual network - external](#virtual-network-integration)** | Developer, Premium | Azure portal, gateway, management plane, and Git repository | Inbound and outbound traffic can be allowed to internet, peered virtual networks, Express Route, and S2S VPN connections. | External access to private and on-premises backends
+|**[Virtual network - internal](#virtual-network-integration)** | Developer, Premium | Developer portal, gateway, management plane, and Git repository. | Inbound and outbound traffic can be allowed to peered virtual networks, Express Route, and S2S VPN connections. | Internal access to private and on-premises backends
+|**[Private endpoint (preview)](#private-endpoint)** | Developer, Basic, Standard, Premium | Gateway only (managed gateway supported, self-hosted gateway not supported). | Only inbound traffic can be allowed from internet, peered virtual networks, Express Route, and S2S VPN connections. | Secure client connection to API Management gateway |
+
+## Virtual network integration
+With Azure virtual networks (VNets), you can place ("inject") your API Management instance in a non-internet-routable network to which you control access. In a virtual network, your API Management instance can securely access other networked Azure resources and also connect to on-premises networks using various VPN technologies. To learn more about Azure VNets, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
-This article explains VNet connectivity options, requirements, and considerations for your API Management instance. You can use the Azure portal, Azure CLI, Azure Resource Manager templates, or other tools for the configuration. You control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security groups][NetworkSecurityGroups].
+ You can use the Azure portal, Azure CLI, Azure Resource Manager templates, or other tools for the configuration. You control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security groups](../virtual-network/network-security-groups-overview.md).
For detailed deployment steps and network configuration, see: * [Connect to an external virtual network using Azure API Management](./api-management-using-with-vnet.md). * [Connect to an internal virtual network using Azure API Management](./api-management-using-with-internal-vnet.md). -
-## Access options
-
-When created, an API Management instance must be accessible from the internet. Using a virtual network, you can configure the developer portal, API gateway, and other API Management endpoints to be accessible either from the internet (external mode) or only within the VNet (internal mode).
+### Access options
+Using a virtual network, you can configure the developer portal, API gateway, and other API Management endpoints to be accessible either from the internet (external mode) or only within the VNet (internal mode).
* **External** - The API Management endpoints are accessible from the public internet via an external load balancer. The gateway can access resources within the VNet.
- :::image type="content" source="media/virtual-network-concepts/api-management-vnet-external.png" alt-text="Connect to external VNet":::
+ :::image type="content" source="media/virtual-network-concepts/api-management-vnet-external.png" alt-text="Diagram showing a connection to external VNet." lightbox="media/virtual-network-concepts/api-management-vnet-external.png":::
Use API Management in external mode to access backend services deployed in the virtual network. * **Internal** - The API Management endpoints are accessible only from within the VNet via an internal load balancer. The gateway can access resources within the VNet.
- :::image type="content" source="media/virtual-network-concepts/api-management-vnet-internal.png" alt-text="Connect to internal VNet":::
+ :::image type="content" source="media/virtual-network-concepts/api-management-vnet-internal.png" alt-text="Diagram showing a connection to internal VNet." lightbox="media/virtual-network-concepts/api-management-vnet-internal.png":::
Use API Management in internal mode to:
When created, an API Management instance must be accessible from the internet. U
* Manage your APIs hosted in multiple geographic locations, using a single gateway endpoint.
-## Network resource requirements
+### Network resource requirements
The following are virtual network resource requirements for API Management. Some requirements differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
-### [stv2](#tab/stv2)
+#### [stv2](#tab/stv2)
* An Azure Resource Manager virtual network is required. * You must provide a Standard SKU [public IPv4 address](../virtual-network/ip-services/public-ip-addresses.md#sku) in addition to specifying a virtual network and subnet.
The following are virtual network resource requirements for API Management. Some
* The API Management service, virtual network and subnet, and public IP address resource must be in the same region and subscription. * For multi-region API Management deployments, configure virtual network resources separately for each location.
-### [stv1](#tab/stv1)
+#### [stv1](#tab/stv1)
* An Azure Resource Manager virtual network is required.
-* The subnet used to connect to the API Management instance must be dedicated to API Management. It cannot contain other Azure resource types.
+* The subnet used to connect to the API Management instance must be dedicated to API Management. It can't contain other Azure resource types.
* The API Management service, virtual network, and subnet resources must be in the same region and subscription.
-* For multi-region API Management deployments, you configure virtual network resources separately for each location.
+* For multi-region API Management deployments, configure virtual network resources separately for each location.
-## Subnet size
+### Subnet size
The minimum size of the subnet in which API Management can be deployed is /29, which gives three usable IP addresses. Each extra scale [unit](api-management-capacity.md) of API Management requires two more IP addresses. The minimum size requirement is based on the following considerations:
The minimum size of the subnet in which API Management can be deployed is /29, w
* When deploying into an [internal VNet](./api-management-using-with-internal-vnet.md), the instance requires an extra IP address for the internal load balancer.
-## Routing
+### Routing
See the Routing guidance when deploying your API Management instance into an [external VNet](./api-management-using-with-vnet.md#routing) or [internal VNet](./api-management-using-with-internal-vnet.md#routing). Learn more about the [IP addresses of API Management](api-management-howto-ip-addresses.md).
-## DNS
+### DNS
-* In external mode, the VNet enables [Azure-provided name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution) by default for your API Management endpoints and other Azure resources. It does not provide name resolution for on-premises resources. Optionally, configure your own DNS solution.
+* In external mode, the VNet enables [Azure-provided name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#azure-provided-name-resolution) by default for your API Management endpoints and other Azure resources. It doesn't provide name resolution for on-premises resources. Optionally, configure your own DNS solution.
* In internal mode, you must provide your own DNS solution to ensure name resolution for API Management endpoints and other required Azure resources. We recommend configuring an Azure [private DNS zone](../dns/private-dns-overview.md). For more information, see the DNS guidance when deploying your API Management instance into an [external VNet](./api-management-using-with-vnet.md#routing) or [internal VNet](./api-management-using-with-internal-vnet.md#routing).
-For more information, see:
+Related information:
* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). * [Create an Azure private DNS zone](../dns/private-dns-getstarted-portal.md) > [!IMPORTANT] > If you plan to use a custom DNS solution for the VNet, set it up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/current-ga/api-management-service/apply-network-configuration-updates), or by selecting **Apply network configuration** in the service instance's network configuration window in the Azure portal.
-## Limitations
+### Limitations
-Some limitations differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
+Some virtual network limitations differ depending on the version (`stv2` or `stv1`) of the [compute platform](compute-infrastructure.md) hosting your API Management instance.
-### [stv2](#tab/stv2)
+#### [stv2](#tab/stv2)
* A subnet containing API Management instances can't be moved across subscriptions. * For multi-region API Management deployments configured in internal VNet mode, users own the routing and are responsible for managing the load balancing across multiple regions. * To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address.
-### [stv1](#tab/stv1)
+#### [stv1](#tab/stv1)
-* A subnet containing API Management instances can't be movacross subscriptions.
+* A subnet containing API Management instances can't be moved across subscriptions.
* For multi-region API Management deployments configured in internal VNet mode, users own the routing and are responsible for managing the load balancing across multiple regions. * To import an API to API Management from an [OpenAPI specification](import-and-publish.md), the specification URL must be hosted at a publicly accessible internet address.
-* Due to platform limitations, connectivity between a resource in a globally peered VNet in another region and an API Management service in internal mode will not work. For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
+* Due to platform limitations, connectivity between a resource in a globally peered VNet in another region and an API Management service in internal mode won't work. For more information, see the [virtual network documentation](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
+## Private endpoint
+
+API Management supports [private endpoints](../private-link/private-endpoint-overview.md). A private endpoint enables secure client connectivity to your API Management instance using a private IP address from your virtual network and Azure Private Link.
++
+With a private endpoint and Private Link, you can:
+
+* Create multiple Private Link connections to an API Management instance.
+* Use the private endpoint to send inbound traffic on a secure connection.
+* Use policy to distinguish traffic that comes from the private endpoint.
+* Limit incoming traffic only to private endpoints, preventing data exfiltration.
+
+> [!IMPORTANT]
+> * API Management support for private endpoints is currently in preview.
+> * During the preview period, a private endpoint connection supports only incoming traffic to the API Management managed gateway.
+
+For more information, see [Connect privately to API Management using a private endpoint](private-endpoint.md).
+
+## Advanced networking configurations
+
+### Secure API Management endpoints with a web application firewall
+
+You may have scenarios where you need both secure external and internal access to your API Management instance, and flexibility to reach private and on-premises backends. For these scenarios, you may choose to manage external access to the endpoints of an API Management instance with a web application firewall (WAF).
+
+One example is to deploy an API Management instance in an internal virtual network, and route public access to it using an internet-facing Azure Application Gateway:
++
+For more information, see [Integrate API Management in an internal virtual network with Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md).
++ ## Next steps Learn more about:
Learn more about:
* [Connecting a virtual network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md) * [Virtual network frequently asked questions](../virtual-network/virtual-networks-faq.md)
-Connect to a virtual network:
+Virtual network configuration with API Management:
* [Connect to an external virtual network using Azure API Management](./api-management-using-with-vnet.md). * [Connect to an internal virtual network using Azure API Management](./api-management-using-with-internal-vnet.md).
+* [Connect privately to API Management using a private endpoint](private-endpoint.md)
+
-Review the following topics
+Related articles:
* [Connecting a Virtual Network to backend using Vpn Gateway](../vpn-gateway/design.md#s2smulti) * [Connecting a Virtual Network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md)
Review the following topics
* [Virtual Network Frequently asked Questions](../virtual-network/virtual-networks-faq.md) * [Service tags](../virtual-network/network-security-groups-overview.md#service-tags)
-[api-management-using-vnet-menu]: ./media/api-management-using-with-vnet/api-management-menu-vnet.png
-[api-management-setup-vpn-select]: ./media/api-management-using-with-vnet/api-management-using-vnet-select.png
-[api-management-setup-vpn-add-api]: ./media/api-management-using-with-vnet/api-management-using-vnet-add-api.png
-[api-management-vnet-private]: ./media/virtual-network-concepts/api-management-vnet-internal.png
-[api-management-vnet-public]: ./media/virtual-network-concepts/api-management-vnet-external.png
-[Enable VPN connections]: #enable-vpn
-[Connect to a web service behind VPN]: #connect-vpn
-[Related content]: #related-content
-[UDRs]: ../virtual-network/virtual-networks-udr-overview.md
-[NetworkSecurityGroups]: ../virtual-network/network-security-groups-overview.md
-[ServiceEndpoints]: ../virtual-network/virtual-network-service-endpoints-overview.md
-[ServiceTags]: ../virtual-network/network-security-groups-overview.md#service-tags
+
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
The only exception is the `C:\home\LogFiles` directory, which is used to store t
::: zone pivot="container-linux"
-You can use the */home* directory in your custom container file system to persist files across restarts and share them across instances. The `/home` directory is provided to enable your custom container to access persistent storage. Saving data within `/home` will contribute to the [storage space quota](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#app-service-limits) included with your App Service Plan.
+You can use the */home* directory in your custom container file system to persist files across restarts and share them across instances. The `/home` directory is provided to enable your custom container to access persistent storage. Saving data within `/home` will contribute to the [storage space quota](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) included with your App Service Plan.
When persistent storage is disabled, then writes to the `/home` directory are not persisted across app restarts or across multiple instances. When persistent storage is enabled, all writes to the `/home` directory are persisted and can be accessed by all instances of a scaled-out app. Additionally, any contents inside the `/home` directory of the container are overwritten by any existing files already present on the persistent storage when the container starts.
The following lists show supported and unsupported Docker Compose configuration
Or, see additional resources: - [Environment variables and app settings reference](reference-app-settings.md)-- [Load certificate in Windows/Linux containers](configure-ssl-certificate-in-code.md#load-certificate-in-linuxwindows-containers)
+- [Load certificate in Windows/Linux containers](configure-ssl-certificate-in-code.md#load-certificate-in-linuxwindows-containers)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
If your App Service Environment doesn't pass the validation checks or you try to
|Migrate is not available for this kind|App Service Environment v1 can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. | |Full migration cannot be called before IP addresses are generated|You'll see this error if you attempt to migrate before finishing the pre-migration steps. |Ensure you've completed all pre-migration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). | |Migration to ASEv3 is not allowed for this ASE|You won't be able to migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
-|Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](/azure/azure-resource-manager/management/azure-subscription-service-limits#app-service-limits) has been met. |Remove unneeded environments or contact support to review your options. |
+|Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) has been met. |Remove unneeded environments or contact support to review your options. |
|`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location|You'll see this error if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. | |Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](using-an-ase.md#upgrade-preference) from the Azure portal. |Wait until the upgrade finishes and then migrate. |
There's no cost to migrate your App Service Environment. You'll stop being charg
> [App Service Environment v3 Networking](networking.md) > [!div class="nextstepaction"]
-> [Using an App Service Environment v3](using.md)
+> [Using an App Service Environment v3](using.md)
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
[App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to Azure databases, including: - [Azure SQL Database](/azure/azure-sql/database/)-- [Azure Database for MySQL](/azure/mysql/)-- [Azure Database for PostgreSQL](/azure/postgresql/)
+- [Azure Database for MySQL](../mysql/index.yml)
+- [Azure Database for PostgreSQL](../postgresql/index.yml)
> [!NOTE]
-> This tutorial doesn't include guidance for [Azure Cosmos DB](/azure/cosmos-db/), which supports Azure Active Directory authentication differently. For information, see Cosmos DB documentation. For example: [Use system-assigned managed identities to access Azure Cosmos DB data](../cosmos-db/managed-identity-based-authentication.md).
+> This tutorial doesn't include guidance for [Azure Cosmos DB](../cosmos-db/index.yml), which supports Azure Active Directory authentication differently. For information, see Cosmos DB documentation. For example: [Use system-assigned managed identities to access Azure Cosmos DB data](../cosmos-db/managed-identity-based-authentication.md).
Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. This tutorial shows you how to connect to the above-mentioned databases from App Service using managed identities.
What you learned:
> [Tutorial: Connect to Azure services that don't support managed identities (using Key Vault)](tutorial-connect-msi-key-vault.md) > [!div class="nextstepaction"]
-> [Tutorial: Isolate back-end communication with Virtual Network integration](tutorial-networking-isolate-vnet.md)
+> [Tutorial: Isolate back-end communication with Virtual Network integration](tutorial-networking-isolate-vnet.md)
app-service Webjobs Dotnet Deploy Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-dotnet-deploy-vs.md
# Develop and deploy WebJobs using Visual Studio
-This article explains how to use Visual Studio to deploy a console app project to a web app in [Azure App Service](overview.md) as an [Azure WebJob](/azure/app-service/webjobs-create). For information about how to deploy WebJobs by using the [Azure portal](https://portal.azure.com), see [Run background tasks with WebJobs in Azure App Service](webjobs-create.md).
+This article explains how to use Visual Studio to deploy a console app project to a web app in [Azure App Service](overview.md) as an [Azure WebJob](./webjobs-create.md). For information about how to deploy WebJobs by using the [Azure portal](https://portal.azure.com), see [Run background tasks with WebJobs in Azure App Service](webjobs-create.md).
You can choose to develop a WebJob that runs as either a [.NET Core app](#webjobs-as-net-core-console-apps) or a [.NET Framework app](#webjobs-as-net-framework-console-apps). Version 3.x of the [Azure WebJobs SDK](webjobs-sdk-how-to.md) lets you develop WebJobs that run as either .NET Core apps or .NET Framework apps, while version 2.x supports only the .NET Framework. The way that you deploy a WebJobs project is different for .NET Core projects than for .NET Framework projects.
To create a new WebJobs-enabled project, use the console app project template an
Create a project that is configured to deploy automatically as a WebJob when you deploy a web project in the same solution. Use this option when you want to run your WebJob in the same web app in which you run the related web application. > [!NOTE]
-> The WebJobs new-project template automatically installs NuGet packages and includes code in *Program.cs* for the [WebJobs SDK](/azure/app-service/webjobs-sdk-get-started). If you don't want to use the WebJobs SDK, remove or change the `host.RunAndBlock` statement in *Program.cs*.
+> The WebJobs new-project template automatically installs NuGet packages and includes code in *Program.cs* for the [WebJobs SDK](./webjobs-sdk-get-started.md). If you don't want to use the WebJobs SDK, remove or change the `host.RunAndBlock` statement in *Program.cs*.
> >
If you enable **Always on** in Azure, you can use Visual Studio to change the We
## Next steps > [!div class="nextstepaction"]
-> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md)
+> [Learn more about the WebJobs SDK](webjobs-sdk-how-to.md)
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
An HTTP 499 response is presented if a client request that is sent to applicatio
#### 500 ΓÇô Internal Server Error
-Azure Application Gateway shouldn't exhibit 500 response codes. Please open a support request if you see this code, because this issue is an internal error to the service. For information on how to open a support case, see [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+Azure Application Gateway shouldn't exhibit 500 response codes. Please open a support request if you see this code, because this issue is an internal error to the service. For information on how to open a support case, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
#### 502 ΓÇô Bad Gateway HTTP 502 errors can have several root causes, for example: - NSG, UDR, or custom DNS is blocking access to backend pool members.-- Back-end VMs or instances of [virtual machine scale sets](/azure/virtual-machine-scale-sets/overview) aren't responding to the default health probe.
+- Back-end VMs or instances of [virtual machine scale sets](../virtual-machine-scale-sets/overview.md) aren't responding to the default health probe.
- Invalid or improper configuration of custom health probes. - Azure Application Gateway's [back-end pool isn't configured or empty](application-gateway-troubleshooting-502.md#empty-backendaddresspool). - None of the VMs or instances in [virtual machine scale set are healthy](application-gateway-troubleshooting-502.md#unhealthy-instances-in-backendaddresspool).
HTTP 504 errors are presented if a request is sent to application gateways using
## Next steps
-If the information in this article doesn't help to resolve the issue, [submit a support ticket](https://azure.microsoft.com/support/options/).
+If the information in this article doesn't help to resolve the issue, [submit a support ticket](https://azure.microsoft.com/support/options/).
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
# Form Recognizer read model
-The Form Recognizer v3.0 preview includes the new Read OCR model. Form Recognizer Read builds on the success of COmputer Vision Read and optimizes even more for analyzing documents, including new document formats in the future. It extracts printed and handwritten text from documents and images and can handle mixed languages in the documents and text line. The read model can detect lines, words, locations, and additionally detect languages. It is the foundational technology powering the text extraction in Form Recognizer Layout, prebuilt, general document, and custom models.
+The Form Recognizer v3.0 preview includes the new Read OCR model. Form Recognizer Read builds on the success of Computer Vision Read and optimizes even more for analyzing documents, including new document formats in the future. It extracts printed and handwritten text from documents and images and can handle mixed languages in the documents and text line. The read model can detect lines, words, locations, and additionally detect languages. It is the foundational technology powering the text extraction in Form Recognizer Layout, prebuilt, general document, and custom models.
## Development options
Form Recognizer preview version supports several languages for the read model. *
### Text lines and words
-Read API extracts text from documents and images. It accepts PDFs and images of documents and handles printed and/or handwritten text, and supports mixed languages. Text is extracted as text lnes, words, bounding boxes, confidence scores, and style, whether handwritten or not, supported for Latin languages only.
+Read API extracts text from documents and images. It accepts PDFs and images of documents and handles printed and/or handwritten text, and supports mixed languages. Text is extracted as text lines, words, bounding boxes, confidence scores, and style, whether handwritten or not, supported for Latin languages only.
### Language detection
attestation Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/audit-logs.md
Individual blobs are stored as text, formatted as a JSON blob. LetΓÇÖs look at a
} ```
-Most of these fields are documented in the [Top-level common schema](/azure/azure-monitor/essentials/resource-logs-schema#top-level-common-schema). The following table lists the field names and descriptions for the entries not included in the top-level common schema:
+Most of these fields are documented in the [Top-level common schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). The following table lists the field names and descriptions for the entries not included in the top-level common schema:
| Field Name | Description | ||--|
The properties contain additional Azure attestation specific context:
| infoDataReceived | Information about the request received from the client. Includes some HTTP headers, the number of headers received, the content type and content length | ## Next steps-- [How to enable Microsoft Azure Attestation logging ](azure-diagnostic-monitoring.md)
+- [How to enable Microsoft Azure Attestation logging ](azure-diagnostic-monitoring.md)
automanage Automanage Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-linux.md
Automanage supports the following Linux distributions and versions:
|[Guest configuration](../governance/policy/concepts/guest-configuration.md) | Guest configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the Azure Linux baseline using the guest configuration extension. For Linux machines, the guest configuration service will install the baseline in audit-only mode. You will be able to see where your VM is out of compliance with the baseline, but noncompliance won't be automatically remediated. Learn [more](../governance/policy/concepts/guest-configuration.md). |Production, Dev/Test | |[Boot Diagnostics](../virtual-machines/boot-diagnostics.md) | Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures. Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log information and screenshots. This will only be enabled for machines that are using managed disks. |Production, Dev/Test | |[Azure Automation Account](../automation/automation-create-standalone-account.md) |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Production, Dev/Test |
-|[Log Analytics Workspace](../azure-monitor/logs/log-analytics-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Production, Dev/Test |
+|[Log Analytics Workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/workspace-design.md). |Production, Dev/Test |
<sup>1</sup> The configuration profile selection is available when you are enabling Automanage. Learn [more](automanage-virtual-machines.md#configuration-profile). You can also create your own custom profile with the set of Azure services and settings that you need.
automanage Virtual Machines Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-best-practices.md
For all of these services, we will auto-onboard, auto-configure, monitor for dri
|Change Tracking & Inventory |Change Tracking and Inventory combines change tracking and inventory functions to allow you to track virtual machine and server infrastructure changes. The service supports change tracking across services, daemons software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Learn [more](../automation/change-tracking/overview.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No | |Guest configuration | Guest configuration is used to monitor the configuration and report on the compliance of the machine. The Automanage service will install the [Windows security baselines](/windows/security/threat-protection/windows-security-baselines) using the guest configuration extension. Learn [more](../governance/policy/concepts/guest-configuration.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No | |Azure Automation Account |Azure Automation supports management throughout the lifecycle of your infrastructure and applications. Learn [more](../automation/automation-intro.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No |
-|Log Analytics Workspace |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/design-logs-deployment.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No |
+|Log Analytics Workspace |Azure Monitor stores log data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. Learn [more](../azure-monitor/logs/log-analytics-workspace-overview.md). |Azure VM Best Practices ΓÇô Production, Azure VM Best Practices ΓÇô Dev/Test |No |
<sup>1</sup> Configuration profiles are available when you are enabling Automanage. Learn [more](automanage-virtual-machines.md). You can also adjust the default settings of the configuration profile and set your own preferences within the best practices constraints.
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
Azure Automation provides native integration of the Hybrid Runbook Worker role t
| Platform | Description | ||| |**Extension-based (V2)** |Installed using the [Hybrid Runbook Worker VM extension](./extension-based-hybrid-runbook-worker-install.md), without any dependency on the Log Analytics agent reporting to an Azure Monitor Log Analytics workspace. **This is the recommended platform**.|
-|**Agent-based (V1)** |Installed after the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/logs/design-logs-deployment.md) is completed.|
+|**Agent-based (V1)** |Installed after the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) is completed.|
:::image type="content" source="./media/automation-hybrid-runbook-worker/hybrid-worker-group-platform.png" alt-text="Hybrid worker group showing platform field":::
There are two types of Runbook Workers - system and user. The following table de
|**System** |Supports a set of hidden runbooks used by the Update Management feature that are designed to install user-specified updates on Windows and Linux machines.<br> This type of Hybrid Runbook Worker isn't a member of a Hybrid Runbook Worker group, and therefore doesn't run runbooks that target a Runbook Worker group. | |**User** |Supports user-defined runbooks intended to run directly on the Windows and Linux machine that are members of one or more Runbook Worker groups. |
-Agent-based (V1) Hybrid Runbook Workers rely on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/logs/design-logs-deployment.md). The workspace isn't only to collect monitoring data from the machine, but also to download the components required to install the agent-based Hybrid Runbook Worker.
+Agent-based (V1) Hybrid Runbook Workers rely on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md). The workspace isn't only to collect monitoring data from the machine, but also to download the components required to install the agent-based Hybrid Runbook Worker.
When Azure Automation [Update Management](./update-management/overview.md) is enabled, any machine connected to your Log Analytics workspace is automatically configured as a system Hybrid Runbook Worker. To configure it as a user Windows Hybrid Runbook Worker, see [Deploy an agent-based Windows Hybrid Runbook Worker in Automation](automation-windows-hrw-install.md) and for Linux, see [Deploy an agent-based Linux Hybrid Runbook Worker in Automation](./automation-linux-hrw-install.md).
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
Before you start, make sure that you have the following.
The Hybrid Runbook Worker role depends on an Azure Monitor Log Analytics workspace to install and configure the role. You can create it through [Azure Resource Manager](../azure-monitor/logs/resource-manager-workspace.md#create-a-log-analytics-workspace), through [PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../azure-monitor/logs/quick-create-workspace.md).
-If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Monitor Log design guidance](../azure-monitor/logs/design-logs-deployment.md) before you create the workspace.
+If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Monitor Log design guidance](../azure-monitor/logs/workspace-design.md) before you create the workspace.
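If you prefer the command line, a Log Analytics workspace can also be created with the Azure CLI. The following is a minimal sketch with placeholder resource group, workspace, and region names; confirm that the region you pick is supported for linking to your Automation account.

```azurecli-interactive
# Minimal sketch: create a resource group and a Log Analytics workspace (placeholder names).
az group create --name MyResourceGroup --location eastus
az monitor log-analytics workspace create \
  --resource-group MyResourceGroup \
  --workspace-name MyWorkspace \
  --location eastus
```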
### Log Analytics agent
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
The same Azure sandbox and Hybrid Runbook Worker can execute **PowerShell 5.1**
Ensure that you select the right Runtime Version for modules.
-For example : if you are executing a runbook for a Sharepoint automation scenario in **Runtime version** *7.1 (preview)*, then import the module in **Runtime version** **7.1 (preview)**; if you are executing a runbook for a Sharepoint automation scenario in **Runtime version** **5.1**, then import the module in **Runtime version** *5.1*. In this case, you would see two entries for the module, one for **Runtime Version** **7.1(preview)** and other for **5.1**.
+For example: if you are executing a runbook for a SharePoint automation scenario in **Runtime version** *7.1 (preview)*, then import the module in **Runtime version** *7.1 (preview)*; if you are executing a runbook for a SharePoint automation scenario in **Runtime version** *5.1*, then import the module in **Runtime version** *5.1*. In this case, you would see two entries for the module: one for **Runtime version** *7.1 (preview)* and the other for *5.1*.
:::image type="content" source="./media/automation-runbook-types/runbook-types.png" alt-text="runbook Types.":::
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
The following are limitations with the current feature:
- The runbooks for the Start/Stop VMs during off hours feature work with an [Azure Run As account](./automation-security-overview.md#run-as-accounts). The Run As account is the preferred authentication method because it uses certificate authentication instead of a password that might expire or change frequently. -- An [Azure Monitor Log Analytics workspace](../azure-monitor/logs/design-logs-deployment.md) that stores the runbook job logs and job stream results in a workspace to query and analyze. The Automation account and Log Analytics workspace need to be in the same subscription and supported region. The workspace needs to already exist, you cannot create a new workspace during deployment of this feature.
+- An [Azure Monitor Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) that stores the runbook job logs and job stream results in a workspace to query and analyze. The Automation account and Log Analytics workspace need to be in the same subscription and supported region. The workspace needs to already exist, you cannot create a new workspace during deployment of this feature.
We recommend that you use a separate Automation account for working with VMs enabled for the Start/Stop VMs during off-hours feature. Azure module versions are frequently upgraded, and their parameters might change. The feature isn't upgraded on the same cadence and it might not work with newer versions of the cmdlets that it uses. Before importing the updated modules into your production Automation account(s), we recommend you import them into a test Automation account to verify there aren't any compatibility issues.
automation Automation Update Azure Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-update-azure-modules.md
If you develop your scripts locally, it's recommended to have the same module ve
## Update Az modules
-You can update Az modules through the portal **(recommended)** or through the runbook.
+The following sections explain how to update Az modules either through the **portal** (recommended) or through a runbook.
### Update Az modules through portal
The Azure team will regularly update the module version and provide an option to
### Update Az modules through runbook
-To update the Azure modules in your Automation account, you must use the [Update-AutomationAzureModulesForAccount](https://github.com/Microsoft/AzureAutomation-Account-Modules-Update) runbook, available as open source. To start using this runbook to update your Azure modules, download it from the GitHub repository. You can then import it into your Automation account or run it as a script. To learn how to import a runbook in your Automation account, see [Import a runbook](manage-runbooks.md#import-a-runbook). In case of any runbook failure, we recommend that you modify the parameters in the runbook according to your specific needs, as the runbook is available as open-source and provided as a reference.
+To update the Azure modules in your Automation account:
+
+1. Use the [Update-AutomationAzureModulesForAccount](https://github.com/Microsoft/AzureAutomation-Account-Modules-Update) runbook, available as open source.
+1. Download the runbook from the GitHub repository (a download sketch follows this list).
+1. Import it into your Automation account or run it as a script. To learn how to import a runbook in your Automation account, see [Import a runbook](manage-runbooks.md#import-a-runbook).
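As a sketch of the download step, you can fetch the runbook script directly from the repository. The raw file path below is an assumption; verify it against the repository before using it.

```bash
# Sketch only: download the open-source runbook locally before importing it.
# The exact raw file path is an assumption; confirm it in the GitHub repository.
curl -L -o Update-AutomationAzureModulesForAccount.ps1 \
  https://raw.githubusercontent.com/microsoft/AzureAutomation-Account-Modules-Update/master/Update-AutomationAzureModulesForAccount.ps1
```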
+
+>[!NOTE]
+> We recommend that you update Az modules through the Azure portal. You can also use the `Update-AutomationAzureModulesForAccount` script, which is available as open source and provided as a reference. If the runbook fails, modify its parameters as required or debug the script for your scenario.
The **Update-AutomationAzureModulesForAccount** runbook supports updating the Azure, AzureRM, and Az modules by default. Review the [Update Azure modules runbook README](https://github.com/microsoft/AzureAutomation-Account-Modules-Update/blob/master/README.md) for more information on updating Az.Automation modules with this runbook. There are additional important factors that you need to take into account when using the Az modules in your Automation account. To learn more, see [Manage modules in Azure Automation](shared-resources/modules.md).
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
Before you start, make sure that you have the following.
The Hybrid Runbook Worker role depends on an Azure Monitor Log Analytics workspace to install and configure the role. You can create it through [Azure Resource Manager](../azure-monitor/logs/resource-manager-workspace.md#create-a-log-analytics-workspace), through [PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or in the [Azure portal](../azure-monitor/logs/quick-create-workspace.md).
-If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Monitor Log design guidance](../azure-monitor/logs/design-logs-deployment.md) before you create the workspace.
+If you don't have an Azure Monitor Log Analytics workspace, review the [Azure Monitor Log design guidance](../azure-monitor/logs/workspace-design.md) before you create the workspace.
### Log Analytics agent
automation Enable From Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-from-runbook.md
This method uses two runbooks:
* Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * [Automation account](../automation-security-overview.md) to manage machines.
-* [Log Analytics workspace](../../azure-monitor/logs/design-logs-deployment.md)
+* [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md)
* A [virtual machine](../../virtual-machines/windows/quick-create-portal.md). * Two Automation assets, which are used by the **Enable-AutomationSolution** runbook. This runbook, if it doesn't already exist in your Automation account, is automatically imported by the **Enable-MultipleSolution** runbook during its first run. * *LASolutionSubscriptionId*: Subscription ID of where the Log Analytics workspace is located.
automation Disable Local Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/disable-local-authentication.md
Disabling local authentication doesn't take effect immediately. Allow a few minu
>[!NOTE] > Currently, PowerShell support for the new API version (2021-06-22) or the `DisableLocalAuth` flag is not available. However, you can use the REST API with this API version to update the flag.
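As a rough sketch, the flag can be set with an `az rest` call against the Automation account resource. The property name, its casing, and the request shape below are assumptions; check the REST API reference for the 2021-06-22 version before relying on them.

```azurecli-interactive
# Sketch: disable local authentication via the REST API (property name is an assumption).
az rest --method PATCH \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<automation-account>?api-version=2021-06-22" \
  --body '{"properties": {"disableLocalAuth": true}}'
```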
-To allow list and enroll your subscription for this feature in your respective regions, follow the steps in [how to create an Azure support request - Azure supportability | Microsoft Docs](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+To allow list and enroll your subscription for this feature in your respective regions, follow the steps in [how to create an Azure support request - Azure supportability | Microsoft Docs](../azure-portal/supportability/how-to-create-azure-support-request.md).
## Re-enable local authentication
Update Management patching will not work when local authentication is disabled.
## Next steps-- [Azure Automation account authentication overview](./automation-security-overview.md)
+- [Azure Automation account authentication overview](./automation-security-overview.md)
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
These Azure services can work with Automation job and runbook resources using an
## Pricing for Azure Automation
-Process automation includes runbook jobs and watchers. Billing for jobs is based on the number of job run time minutes used in the month, and for watchers, it is on the number of hours used in a month. The charges for process automation are incurred whenever a [job](/azure/automation/start-runbooks) or [watcher](/azure/automation/automation-scenario-using-watcher-task) runs.
+Process automation includes runbook jobs and watchers. Billing for jobs is based on the number of job run time minutes used in the month, and for watchers, it is on the number of hours used in a month. The charges for process automation are incurred whenever a [job](./start-runbooks.md) or [watcher](./automation-scenario-using-watcher-task.md) runs.
You create Automation accounts with a Basic SKU, wherein the first 500 job run time minutes are free per subscription. You are billed only for minutes/hours that exceed the 500 mins free included units. You can review the prices associated with Azure Automation on the [pricing](https://azure.microsoft.com/pricing/details/automation/) page.
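As a hypothetical illustration, if your runbooks consume 2,000 job run time minutes in a month, the first 500 minutes are covered by the free included units and you are billed only for the remaining 1,500 minutes at the per-minute rate listed on the pricing page.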
You can review the prices associated with Azure Automation on the [pricing](http
## Next steps > [!div class="nextstepaction"]
-> [Create an Automation account](./quickstarts/create-account-portal.md)
+> [Create an Automation account](./quickstarts/create-account-portal.md)
automation Quickstart Create Automation Account Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstart-create-automation-account-template.md
If you're new to Azure Automation and Azure Monitor, it's important that you und
* Review [workspace mappings](how-to/region-mappings.md) to specify the supported regions inline or in a parameter file. Only certain regions are supported for linking a Log Analytics workspace and an Automation account in your subscription.
-* If you're new to Azure Monitor Logs and haven't deployed a workspace already, review the [workspace design guidance](../azure-monitor/logs/design-logs-deployment.md). This document will help you learn about access control, and help you understand the recommended design implementation strategies for your organization.
+* If you're new to Azure Monitor Logs and haven't deployed a workspace already, review the [workspace design guidance](../azure-monitor/logs/workspace-design.md). This document will help you learn about access control, and help you understand the recommended design implementation strategies for your organization.
## Review the template
automation Update Agent Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues.md
Results are shown on the page when they're ready. The checks sections show what'
### Operating system
-The operating system check verifies whether the Hybrid Runbook Worker is running [one of the supported operating systems.](/azure/automation/update-management/operating-system-requirements.md#windows-operating-system)
+The operating system check verifies whether the Hybrid Runbook Worker is running [one of the supported operating systems](../update-management/operating-system-requirements.md).
one of the supported operating systems ### .NET 4.6.2
automation Enable From Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-runbook.md
This method uses two runbooks:
* Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * [Automation account](../automation-security-overview.md) to manage machines.
-* [Log Analytics workspace](../../azure-monitor/logs/design-logs-deployment.md)
+* [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md)
* A [virtual machine](../../virtual-machines/windows/quick-create-portal.md). * Two Automation assets, which are used by the **Enable-AutomationSolution** runbook. This runbook, if it doesn't already exist in your Automation account, is automatically imported by the **Enable-MultipleSolution** runbook during its first run. * *LASolutionSubscriptionId*: Subscription ID of where the Log Analytics workspace is located.
automation Enable From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-template.md
If you're new to Azure Automation and Azure Monitor, it's important that you und
* Review [workspace mappings](../how-to/region-mappings.md) to specify the supported regions inline or in a parameter file. Only certain regions are supported for linking a Log Analytics workspace and an Automation account in your subscription.
-* If you're new to Azure Monitor logs and have not deployed a workspace already, you should review the [workspace design guidance](../../azure-monitor/logs/design-logs-deployment.md). It will help you to learn about access control, and understand the design implementation strategies we recommend for your organization.
+* If you're new to Azure Monitor logs and have not deployed a workspace already, you should review the [workspace design guidance](../../azure-monitor/logs/workspace-design.md). It will help you to learn about access control, and understand the design implementation strategies we recommend for your organization.
## Deploy template
automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/plan-deployment.md
Update Management is an Azure Automation feature, and therefore requires an Auto
Update Management depends on a Log Analytics workspace in Azure Monitor to store assessment and update status log data collected from managed machines. Integration with Log Analytics also enables detailed analysis and alerting in Azure Monitor. You can use an existing workspace in your subscription, or create a new one dedicated only for Update Management.
-If you are new to Azure Monitor Logs and the Log Analytics workspace, you should review the [Design a Log Analytics workspace](../../azure-monitor/logs/design-logs-deployment.md) deployment guide.
+If you are new to Azure Monitor Logs and the Log Analytics workspace, you should review the [Design a Log Analytics workspace](../../azure-monitor/logs/workspace-design.md) deployment guide.
## Step 3 - Supported operating systems
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
For complete release version information, see [Version log](version-log.md).
- Set `--readable-secondaries` to any value between 0 and the number of replicas minus 1. - `--readable-secondaries` only applies to Business Critical tier. - Automatic backups are taken on the primary instance in a Business Critical service tier when there are multiple replicas. When a failover happens, backups move to the new primary. -- [ReadWriteMany (RWX) capable storage class](/azure/aks/concepts-storage#azure-disks) is required for backups, for both General Purpose and Business Critical service tiers. Specifying a non-ReadWriteMany storage class will cause the SQL Managed Instance to be stuck in "Pending" status during deployment.
+- [ReadWriteMany (RWX) capable storage class](../../aks/concepts-storage.md#azure-disks) is required for backups, for both General Purpose and Business Critical service tiers. Specifying a non-ReadWriteMany storage class will cause the SQL Managed Instance to be stuck in "Pending" status during deployment.
- Billing support when using multiple read replicas. For additional information about service tiers, see [High Availability with Azure Arc-enabled SQL Managed Instance (preview)](managed-instance-high-availability.md).
For instructions see [What are Azure Arc-enabled data services?](overview.md)
- [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) (requires installing the client tools first) - [Create an Azure SQL Managed Instance on Azure Arc](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first) - [Create an Azure Database for PostgreSQL Hyperscale server group on Azure Arc](create-postgresql-hyperscale-server-group.md) (requires creation of an Azure Arc data controller first)-- [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md)
+- [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md)
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
A conceptual overview of this feature is available in [Cluster extensions - Azur
| [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md) | Deploy and manage API Management gateway on Azure Arc-enabled Kubernetes clusters. | | [Azure Arc-enabled Machine Learning](../../machine-learning/how-to-attach-kubernetes-anywhere.md) | Deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters. | | [Flux (GitOps)](./conceptual-gitops-flux2.md) | Use GitOps with Flux to manage cluster configuration and application deployment. |
-| [Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes](/azure/aks/dapr)| Eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters. |
+| [Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes](../../aks/dapr.md)| Eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters. |
## Usage of cluster extensions
Learn more about the cluster extensions currently available for Azure Arc-enable
> [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) > > [!div class="nextstepaction"]
-> [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md)
+> [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md)
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
Title: Azure Key Vault Secrets Provider extension
-description: Tutorial for setting up Azure Key Vault provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster
+ Title: Use Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters
+description: Learn how to set up the Azure Key Vault Provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster
Previously updated : 5/13/2022 Last updated : 5/26/2022
-# Using Azure Key Vault Secrets Provider extension to fetch secrets into Arc clusters
+# Use the Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters
-The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a [CSI volume](https://kubernetes-csi.github.io/docs/).
+The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a [CSI volume](https://kubernetes-csi.github.io/docs/). For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets.
-## Prerequisites
-1. Ensure you have met all the common prerequisites for cluster extensions listed [here](extensions.md#prerequisites).
-2. Use az k8s-extension CLI version >= v0.4.0
-
-### Support limitations for Azure Key Vault (AKV) secrets provider extension
-- Following Kubernetes distributions are currently supported
- - Cluster API Azure
- - Azure Kubernetes Service on Azure Stack HCI (AKS-HCI)
- - Google Kubernetes Engine
- - OpenShift Kubernetes Distribution
- - Canonical Kubernetes Distribution
- - Elastic Kubernetes Service
- - Tanzu Kubernetes Grid
--
-## Features
+Benefits of the Azure Key Vault Secrets Provider extension include the following:
- Mounts secrets/keys/certs to pod using a CSI Inline volume - Supports pod portability with the SecretProviderClass CRD - Supports Linux and Windows containers - Supports sync with Kubernetes Secrets - Supports auto rotation of secrets
+- Extension components are deployed to availability zones, making them zone redundant
+## Prerequisites
-## Install AKV secrets provider extension on an Arc enabled Kubernetes cluster
+- A cluster with a supported Kubernetes distribution that has already been [connected to Azure Arc](quickstart-connect-cluster.md). The following Kubernetes distributions are currently supported for this scenario:
+ - Cluster API Azure
+ - Azure Kubernetes Service on Azure Stack HCI (AKS-HCI)
+ - Google Kubernetes Engine
+ - OpenShift Kubernetes Distribution
+ - Canonical Kubernetes Distribution
+ - Elastic Kubernetes Service
+ - Tanzu Kubernetes Grid
+- Ensure you have met the [general prerequisites for cluster extensions](extensions.md#prerequisites). You must use version 0.4.0 or newer of the `k8s-extension` Azure CLI extension (a version check sketch follows this list).
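To confirm the version prerequisite, here is a quick sketch using the Azure CLI. The extension name comes from the prerequisite above; the `--query` expression is just one way to read the installed version.

```azurecli-interactive
# Install the k8s-extension CLI extension if needed, then confirm its version is 0.4.0 or newer.
az extension add --name k8s-extension
az extension show --name k8s-extension --query version --output tsv
```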
-The following steps assume that you already have a cluster with supported Kubernetes distribution connected to Azure Arc.
+## Install the Azure Key Vault Secrets Provider extension on an Arc-enabled Kubernetes cluster
-To deploy using Azure portal, go to the cluster's **Extensions** blade under **Settings**. Click on **+Add** button.
+You can install the Azure Key Vault Secrets Provider extension on your connected cluster in the Azure portal, by using the Azure CLI, or by deploying an ARM template.
-[![Extensions located under Settings for Arc enabled Kubernetes cluster](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg)](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg#lightbox)
+> [!TIP]
+> Only one instance of the extension can be deployed on each Azure Arc-enabled Kubernetes cluster.
-From the list of available extensions, select the **Azure Key Vault Secrets Provider** to deploy the latest version of the extension. You can also choose to customize the installation through the portal by changing the defaults on **Configuration** tab.
+### Azure portal
-[![AKV Secrets Provider available as an extension by clicking on Add button on Extensions blade](media/tutorial-akv-secrets-provider/extension-install-new-resource.jpg)](media/tutorial-akv-secrets-provider/extension-install-new-resource.jpg#lightbox)
+1. In the [Azure portal](https://portal.azure.com), navigate to **Kubernetes - Azure Arc** and select your cluster.
+1. Select **Extensions** (under **Settings**), and then select **+ Add**.
-Alternatively, you can use the CLI experience captured below.
+ [![Screenshot showing the Extensions page for an Arc-enabled Kubernetes cluster in the Azure portal.](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg)](media/tutorial-akv-secrets-provider/extension-install-add-button.jpg#lightbox)
-Set the environment variables:
-```azurecli-interactive
-export CLUSTER_NAME=<arc-cluster-name>
-export RESOURCE_GROUP=<resource-group-name>
-```
+1. From the list of available extensions, select **Azure Key Vault Secrets Provider** to deploy the latest version of the extension.
-```azurecli-interactive
-az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider
-```
+ [![Screenshot of the Azure Key Vault Secrets Provider extension in the Azure portal.](media/tutorial-akv-secrets-provider/extension-install-new-resource.jpg)](media/tutorial-akv-secrets-provider/extension-install-new-resource.jpg)
+
+1. Follow the prompts to deploy the extension. If needed, you can customize the installation by changing the default options on the **Configuration** tab.
+
+### Azure CLI
+
+1. Set the environment variables:
+
+ ```azurecli-interactive
+ export CLUSTER_NAME=<arc-cluster-name>
+ export RESOURCE_GROUP=<resource-group-name>
+ ```
+
+2. Install the Secrets Store CSI Driver and the Azure Key Vault Secrets Provider extension by running the following command:
-The above will install the Secrets Store CSI Driver and the Azure Key Vault Provider on your cluster nodes. You should see output similar to the output shown below. It may take 3-5 minutes for the actual AKV secrets provider helm chart to get deployed to the cluster.
+ ```azurecli-interactive
+ az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider
+ ```
-Note that only one instance of AKV secrets provider extension can be deployed on an Arc connected Kubernetes cluster.
+You should see output similar to the example below. Note that it may take several minutes before the secrets provider Helm chart is deployed to the cluster.
```json {
Note that only one instance of AKV secrets provider extension can be deployed on
} ```
-### Install AKV secrets provider extension using ARM template
-After connecting your cluster to Azure Arc, create a json file with the following format, making sure to update the \<cluster-name\> value:
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "ConnectedClusterName": {
- "defaultValue": "<cluster-name>",
- "type": "String",
- "metadata": {
- "description": "The Connected Cluster name."
- }
- },
- "ExtensionInstanceName": {
- "defaultValue": "akvsecretsprovider",
- "type": "String",
- "metadata": {
- "description": "The extension instance name."
- }
- },
- "ExtensionVersion": {
- "defaultValue": "",
- "type": "String",
- "metadata": {
- "description": "The version of the extension type."
- }
- },
- "ExtensionType": {
- "defaultValue": "Microsoft.AzureKeyVaultSecretsProvider",
- "type": "String",
- "metadata": {
- "description": "The extension type."
- }
- },
- "ReleaseTrain": {
- "defaultValue": "stable",
- "type": "String",
- "metadata": {
- "description": "The release train."
- }
- }
- },
- "functions": [],
- "resources": [
- {
- "type": "Microsoft.KubernetesConfiguration/extensions",
- "apiVersion": "2021-09-01",
- "name": "[parameters('ExtensionInstanceName')]",
- "properties": {
- "extensionType": "[parameters('ExtensionType')]",
- "releaseTrain": "[parameters('ReleaseTrain')]",
- "version": "[parameters('ExtensionVersion')]"
- },
- "scope": "[concat('Microsoft.Kubernetes/connectedClusters/', parameters('ConnectedClusterName'))]"
- }
- ]
-}
-```
-Now set the environment variables:
-```azurecli-interactive
-export TEMPLATE_FILE_NAME=<template-file-path>
-export DEPLOYMENT_NAME=<desired-deployment-name>
-```
-
-Finally, run this command to install the AKV secrets provider extension through az CLI:
-
-```azurecli-interactive
-az deployment group create --name $DEPLOYMENT_NAME --resource-group $RESOURCE_GROUP --template-file $TEMPLATE_FILE_NAME
-```
-Now, you should be able to view the AKV provider resources and use the extension in your cluster.
+### ARM template
+
+1. Create a .json file using the following format. Be sure to update the \<cluster-name\> value to refer to your cluster.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "ConnectedClusterName": {
+ "defaultValue": "<cluster-name>",
+ "type": "String",
+ "metadata": {
+ "description": "The Connected Cluster name."
+ }
+ },
+ "ExtensionInstanceName": {
+ "defaultValue": "akvsecretsprovider",
+ "type": "String",
+ "metadata": {
+ "description": "The extension instance name."
+ }
+ },
+ "ExtensionVersion": {
+ "defaultValue": "",
+ "type": "String",
+ "metadata": {
+ "description": "The version of the extension type."
+ }
+ },
+ "ExtensionType": {
+ "defaultValue": "Microsoft.AzureKeyVaultSecretsProvider",
+ "type": "String",
+ "metadata": {
+ "description": "The extension type."
+ }
+ },
+ "ReleaseTrain": {
+ "defaultValue": "stable",
+ "type": "String",
+ "metadata": {
+ "description": "The release train."
+ }
+ }
+ },
+ "functions": [],
+ "resources": [
+ {
+ "type": "Microsoft.KubernetesConfiguration/extensions",
+ "apiVersion": "2021-09-01",
+ "name": "[parameters('ExtensionInstanceName')]",
+ "properties": {
+ "extensionType": "[parameters('ExtensionType')]",
+ "releaseTrain": "[parameters('ReleaseTrain')]",
+ "version": "[parameters('ExtensionVersion')]"
+ },
+ "scope": "[concat('Microsoft.Kubernetes/connectedClusters/', parameters('ConnectedClusterName'))]"
+ }
+ ]
+ }
+ ```
+
+1. Now set the environment variables by using the following Azure CLI command:
+
+ ```azurecli-interactive
+ export TEMPLATE_FILE_NAME=<template-file-path>
+ export DEPLOYMENT_NAME=<desired-deployment-name>
+ ```
+
+1. Finally, run this Azure CLI command to install the Azure Key Vault Secrets Provider extension:
+
+ ```azurecli-interactive
+ az deployment group create --name $DEPLOYMENT_NAME --resource-group $RESOURCE_GROUP --template-file $TEMPLATE_FILE_NAME
+ ```
+
+You should now be able to view the secret provider resources and use the extension in your cluster.
## Validate the extension installation
-Run the following command.
+To confirm successful installation of the Azure Key Vault Secrets Provider extension, run the following command.
```azurecli-interactive az k8s-extension show --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name akvsecretsprovider ```
-You should see a JSON output similar to the output below:
+You should see output similar to the example below.
+ ```json { "aksAssignedIdentity": null,
You should see a JSON output similar to the output below:
} ```
-## Create or use an existing Azure Key Vault
+## Create or select an Azure Key Vault
+
+Next, specify the Azure Key Vault to use with your connected cluster. If you don't already have one, create a new Key Vault by using the following commands. Keep in mind that the name of your Key Vault must be globally unique.
+
+```azurecli
+az keyvault create -n $AZUREKEYVAULT_NAME -g $AKV_RESOURCE_GROUP -l $AZUREKEYVAULT_LOCATION
+```
+
+This command uses the following environment variables, so set them before you run it:
-Set the environment variables:
```azurecli-interactive export AKV_RESOURCE_GROUP=<resource-group-name> export AZUREKEYVAULT_NAME=<AKV-name> export AZUREKEYVAULT_LOCATION=<AKV-location> ```
-You will need an Azure Key Vault resource containing the secret content. Keep in mind that the Key Vault's name must be globally unique.
-
-```azurecli
-az keyvault create -n $AZUREKEYVAULT_NAME -g $AKV_RESOURCE_GROUP -l $AZUREKEYVAULT_LOCATION
-```
-
-Azure Key Vault can store keys, secrets, and certificates. In this example, we'll set a plain text secret called `DemoSecret`:
+Azure Key Vault can store keys, secrets, and certificates. For this example, you can set a plain text secret called `DemoSecret` by using the following command:
```azurecli az keyvault secret set --vault-name $AZUREKEYVAULT_NAME -n DemoSecret --value MyExampleSecret ```
-Take note of the following properties for use in the next section:
+Before you move on to the next section, take note of the following properties:
-- Name of secret object in Key Vault
+- Name of the secret object in Key Vault
- Object type (secret, key, or certificate)-- Name of your Azure Key Vault resource-- Azure Tenant ID the Subscription belongs to
+- Name of your Key Vault resource
+- The Azure Tenant ID for the subscription to which the Key Vault belongs
## Provide identity to access Azure Key Vault
-The Secrets Store CSI Driver on Arc connected clusters currently allows for the following methods to access an Azure Key Vault instance:
-- Service Principal-
-Follow the steps below to provide identity to access Azure Key Vault
+Currently, the Secrets Store CSI Driver on Arc-enabled clusters supports access to Azure Key Vault through a service principal. Follow the steps below to provide an identity that can access your Key Vault.
1. Follow the steps [here](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) to create a service principal in Azure. Take note of the Client ID and Client Secret generated in this step.
-2. Provide Azure Key Vault GET permission to the created service principal by following the steps [here](../../key-vault/general/assign-access-policy.md).
-3. Use the client ID and Client Secret from step 1 to create a Kubernetes secret on the Arc connected cluster:
-```bash
-kubectl create secret generic secrets-store-creds --from-literal clientid="<client-id>" --from-literal clientsecret="<client-secret>"
-```
-4. Label the created secret:
-```bash
-kubectl label secret secrets-store-creds secrets-store.csi.k8s.io/used=true
-```
-5. Create a SecretProviderClass with the following YAML, filling in your values for key vault name, tenant ID, and objects to retrieve from your AKV instance:
-```yml
-# This is a SecretProviderClass example using service principal to access Keyvault
-apiVersion: secrets-store.csi.x-k8s.io/v1
-kind: SecretProviderClass
-metadata:
- name: akvprovider-demo
-spec:
- provider: azure
- parameters:
- usePodIdentity: "false"
- keyvaultName: <key-vault-name>
- objects: |
- array:
- - |
- objectName: DemoSecret
- objectType: secret # object types: secret, key or cert
- objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
- tenantId: <tenant-Id> # The tenant ID of the Azure Key Vault instance
-```
-6. Apply the SecretProviderClass to your cluster:
-
-```bash
-kubectl apply -f secretproviderclass.yaml
-```
-7. Create a pod with the following YAML, filling in the name of your identity:
-
-```yml
-# This is a sample pod definition for using SecretProviderClass and service principal to access Keyvault
-kind: Pod
-apiVersion: v1
-metadata:
- name: busybox-secrets-store-inline
-spec:
- containers:
- - name: busybox
- image: k8s.gcr.io/e2e-test-images/busybox:1.29
- command:
- - "/bin/sleep"
- - "10000"
- volumeMounts:
- - name: secrets-store-inline
- mountPath: "/mnt/secrets-store"
- readOnly: true
- volumes:
- - name: secrets-store-inline
- csi:
- driver: secrets-store.csi.k8s.io
- readOnly: true
- volumeAttributes:
- secretProviderClass: "akvprovider-demo"
- nodePublishSecretRef:
- name: secrets-store-creds
-```
-8. Apply the pod to your cluster:
-
-```bash
-kubectl apply -f pod.yaml
-```
+1. Provide Azure Key Vault GET permission to the created service principal by following the steps [here](../../key-vault/general/assign-access-policy.md).
+1. Use the client ID and Client Secret from step 1 to create a Kubernetes secret on the Arc connected cluster:
+
+ ```bash
+ kubectl create secret generic secrets-store-creds --from-literal clientid="<client-id>" --from-literal clientsecret="<client-secret>"
+ ```
+
+1. Label the created secret:
+
+ ```bash
+ kubectl label secret secrets-store-creds secrets-store.csi.k8s.io/used=true
+ ```
+
+1. Create a SecretProviderClass with the following YAML, filling in your values for key vault name, tenant ID, and objects to retrieve from your AKV instance:
+
+ ```yml
+ # This is a SecretProviderClass example using service principal to access Keyvault
+ apiVersion: secrets-store.csi.x-k8s.io/v1
+ kind: SecretProviderClass
+ metadata:
+ name: akvprovider-demo
+ spec:
+ provider: azure
+ parameters:
+ usePodIdentity: "false"
+ keyvaultName: <key-vault-name>
+ objects: |
+ array:
+ - |
+ objectName: DemoSecret
+ objectType: secret # object types: secret, key or cert
+ objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
+ tenantId: <tenant-Id> # The tenant ID of the Azure Key Vault instance
+ ```
+
+1. Apply the SecretProviderClass to your cluster:
+
+ ```bash
+ kubectl apply -f secretproviderclass.yaml
+ ```
+
+1. Create a pod with the following YAML, filling in the name of your identity:
+
+ ```yml
+ # This is a sample pod definition for using SecretProviderClass and service principal to access Keyvault
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: busybox-secrets-store-inline
+ spec:
+ containers:
+ - name: busybox
+ image: k8s.gcr.io/e2e-test-images/busybox:1.29
+ command:
+ - "/bin/sleep"
+ - "10000"
+ volumeMounts:
+ - name: secrets-store-inline
+ mountPath: "/mnt/secrets-store"
+ readOnly: true
+ volumes:
+ - name: secrets-store-inline
+ csi:
+ driver: secrets-store.csi.k8s.io
+ readOnly: true
+ volumeAttributes:
+ secretProviderClass: "akvprovider-demo"
+ nodePublishSecretRef:
+ name: secrets-store-creds
+ ```
+
+1. Apply the pod to your cluster:
+
+ ```bash
+ kubectl apply -f pod.yaml
+ ```
## Validate the secrets+ After the pod starts, the mounted content at the volume path specified in your deployment YAML is available.+ ```Bash ## show secrets held in secrets-store kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/DemoSecret
``` ## Additional configuration options
-Following configuration settings are available for Azure Key Vault secrets provider extension:
+
+The following configuration settings are available for the Azure Key Vault Secrets Provider extension:
| Configuration Setting | Default | Description | | | -- | -- |
-| enableSecretRotation | false | Boolean type; Periodically update the pod mount and Kubernetes Secret with the latest content from external secrets store |
-| rotationPollInterval | 2m | Secret rotation poll interval duration if `enableSecretRotation` is `true`. This can be tuned based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest |
-| syncSecret.enabled | false | Boolean input; In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. This configuration setting allows SecretProviderClass to allow secretObjects field to define the desired state of the synced Kubernetes secret objects |
+| enableSecretRotation | false | Boolean type. If `true`, periodically updates the pod mount and Kubernetes Secret with the latest content from external secrets store |
+| rotationPollInterval | 2m | Specifies the secret rotation poll interval duration if `enableSecretRotation` is `true`. This duration can be adjusted based on how frequently the mounted contents for all pods and Kubernetes secrets need to be resynced to the latest. |
+| syncSecret.enabled | false | Boolean input. In some cases, you may want to create a Kubernetes Secret to mirror the mounted content. If `true`, `SecretProviderClass` allows the `secretObjects` field to define the desired state of the synced Kubernetes Secret objects. |
-These settings can be changed either at the time of extension installation using `az k8s-extension create` command or post installation using `az k8s-extension update` command.
+These settings can be specified when the extension is installed by using the `az k8s-extension create` command:
-Use following command to add configuration settings while creating extension instance:
```azurecli-interactive az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m secrets-store-csi-driver.syncSecret.enabled=true ```
-Use following command to update configuration settings of existing extension instance:
+You can also change the settings after installation by using the `az k8s-extension update` command:
+ ```azurecli-interactive az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m secrets-store-csi-driver.syncSecret.enabled=true ```
-## Uninstall Azure Key Vault secrets provider extension
-Use the below command:
+## Uninstall the Azure Key Vault Secrets Provider extension
+
+To uninstall the extension, run the following command:
+ ```azurecli-interactive az k8s-extension delete --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name akvsecretsprovider ```
-Note that the uninstallation does not delete the CRDs that are created at the time of extension installation.
-Verify that the extension instance has been deleted.
+> [!NOTE]
+> Uninstalling the extension doesn't delete the Custom Resource Definitions (CRDs) that were created when the extension was installed.
+
+To confirm that the extension instance has been deleted, run the following command:
+ ```azurecli-interactive az k8s-extension list --cluster-type connectedClusters --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP ```
-This output should not include AKV secrets provider. If you don't have any other extensions installed on your cluster, it will just be an empty array.
-
-## Reconciliation and Troubleshooting
-Azure Key Vault secrets provider extension is self-healing. All extension components that are deployed on the cluster at the time of extension installation are reconciled to their original state in case somebody tries to intentionally or unintentionally change or delete them. The only exception to that is CRDs. In case the CRDs are deleted, they are not reconciled. You can bring them back by using the 'az k8s-exstension create' command again and providing the existing extension instance name.
-
-Some common issues and troubleshooting steps for Azure Key Vault secrets provider are captured in the open source documentation [here](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/) for your reference.
-Additional troubleshooting steps that are specific to the Secrets Store CSI Driver Interface can be referenced [here](https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html).
+If the extension was successfully removed, you won't see the Azure Key Vault Secrets Provider extension listed in the output. If you don't have any other extensions installed on your cluster, you'll see an empty array.
-## Frequently asked questions
+## Reconciliation and troubleshooting
-### Is the extension of Azure Key Vault Secrets Provider zone redundant?
+The Azure Key Vault Secrets Provider extension is self-healing. If somebody tries to change or delete an extension component that was deployed when the extension was installed, that component will be reconciled to its original state. The only exceptions are for Custom Resource Definitions (CRDs). If CRDs are deleted, they won't be reconciled. To restore deleted CRDs, use the `az k8s-extension create` command again with the existing extension instance name.
-Yes, all components of Azure Key Vault Secrets Provider are deployed on availability zones and are hence zone redundant.
+For more information about resolving common issues, see the open source troubleshooting guides for [Azure Key Vault provider for Secrets Store CSI driver](https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/troubleshooting/) and [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html).
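When investigating issues yourself, a common first step is to inspect the provider and driver pod logs. This is only a sketch: the namespace and label selector below are assumptions that may differ on your cluster.

```bash
# Sketch: locate the secrets-store pods, then read recent provider logs.
# The label selector and namespace are assumptions; adjust them to match your cluster.
kubectl get pods --all-namespaces | grep -i secrets-store
kubectl logs -l app=csi-secrets-store-provider-azure -n kube-system --tail=50
```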
## Next steps
-> **Just want to try things out?**
-> Get started quickly with an [Azure Arc Jumpstart scenario](https://aka.ms/arc-jumpstart-akv-secrets-provider) using Cluster API.
+- Want to try things out? Get started quickly with an [Azure Arc Jumpstart scenario](https://aka.ms/arc-jumpstart-akv-secrets-provider) using Cluster API.
+- Learn more about [Azure Key Vault](/azure/key-vault/general/overview).
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Title: Azure Arc-enabled Open Service Mesh description: Open Service Mesh (OSM) extension on Azure Arc-enabled Kubernetes cluster Previously updated : 05/02/2022 Last updated : 05/25/2022
Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure
### Current support limitations - Only one instance of Open Service Mesh can be deployed on an Azure Arc-connected Kubernetes cluster.-- Support is available for Azure Arc-enabled Open Service Mesh version v1.0.0-1 and above. Find the latest version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are appended with notes. Ignore the tags associated with intermediate releases.
+- Support is available for the two most recently released minor versions of Arc-enabled Open Service Mesh. Find the latest version [here](https://github.com/Azure/osm-azure/releases). Supported release versions are appended with notes. Ignore the tags associated with intermediate releases.
- The following Kubernetes distributions are currently supported: - AKS Engine - AKS on HCI
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
When you connect your machine to Azure Arc-enabled servers, you can perform many
* Perform post-deployment configuration and automation tasks using supported [Arc-enabled servers VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. * **Monitor**: * Monitor operating system performance and discover application components to monitor processes and dependencies with other resources using [VM insights](../../azure-monitor/vm/vminsights-overview.md).
- * Collect other log data, such as performance data and events, from the operating system or workloads running on the machine with the [Log Analytics agent](../../azure-monitor/agents/agents-overview.md#log-analytics-agent). This data is stored in a [Log Analytics workspace](../../azure-monitor/logs/design-logs-deployment.md).
+ * Collect other log data, such as performance data and events, from the operating system or workloads running on the machine with the [Log Analytics agent](../../azure-monitor/agents/agents-overview.md#log-analytics-agent). This data is stored in a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
> [!NOTE] > At this time, enabling Azure Automation Update Management directly from an Azure Arc-enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand requirements and [how to enable Update Management for non-Azure VMs](../../automation/update-management/enable-from-automation-account.md#enable-non-azure-vms).
-Log data collected and stored in a Log Analytics workspace from the hybrid machine contains properties specific to the machine, such as a Resource ID, to support [resource-context](../../azure-monitor/logs/design-logs-deployment.md#access-mode) log access.
+Log data collected and stored in a Log Analytics workspace from the hybrid machine contains properties specific to the machine, such as a Resource ID, to support [resource-context](../../azure-monitor/logs/manage-access.md#access-mode) log access.
Watch this video to learn more about Azure monitoring, security, and update services across hybrid and multicloud environments.
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-at-scale-deployment.md
In this phase, system engineers or administrators enable the core features in th
|--|-|| | [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) | A dedicated resource group to include only Azure Arc-enabled servers and centralize management and monitoring of these resources. | One hour | | Apply [Tags](../../azure-resource-manager/management/tag-resources.md) to help organize machines. | Evaluate and develop an IT-aligned [tagging strategy](/azure/cloud-adoption-framework/decision-guides/resource-tagging/) that can help reduce the complexity of managing your Azure Arc-enabled servers and simplify making management decisions. | One day |
-| Design and deploy [Azure Monitor Logs](../../azure-monitor/logs/data-platform-logs.md) | Evaluate [design and deployment considerations](../../azure-monitor/logs/design-logs-deployment.md) to determine if your organization should use an existing or implement another Log Analytics workspace to store collected log data from hybrid servers and machines.<sup>1</sup> | One day |
+| Design and deploy [Azure Monitor Logs](../../azure-monitor/logs/data-platform-logs.md) | Evaluate [design and deployment considerations](../../azure-monitor/logs/workspace-design.md) to determine if your organization should use an existing or implement another Log Analytics workspace to store collected log data from hybrid servers and machines.<sup>1</sup> | One day |
| [Develop an Azure Policy](../../governance/policy/overview.md) governance plan | Determine how you will implement governance of hybrid servers and machines at the subscription or resource group scope with Azure Policy. | One day | | Configure [Role based access control](../../role-based-access-control/overview.md) (RBAC) | Develop an access plan to control who has access to manage Azure Arc-enabled servers and ability to view their data from other Azure services and solutions. | One day | | Identify machines with Log Analytics agent already installed | Run the following log query in [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) to support conversion of existing Log Analytics agent deployments to extension-managed agent:<br> Heartbeat <br> &#124; summarize arg_max(TimeGenerated, OSType, ResourceId, ComputerEnvironment) by Computer <br> &#124; where ComputerEnvironment == "Non-Azure" and isempty(ResourceId) <br> &#124; project Computer, OSType | One hour |
azure-arc Scenario Onboard Azure Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/scenario-onboard-azure-sentinel.md
This article is intended to help you onboard your Azure Arc-enabled server to [M
Before you start, make sure that you've met the following requirements: -- A [Log Analytics workspace](../../azure-monitor/logs/data-platform-logs.md). For more information about Log Analytics workspaces, see [Designing your Azure Monitor Logs deployment](../../azure-monitor/logs/design-logs-deployment.md).
+- A [Log Analytics workspace](../../azure-monitor/logs/data-platform-logs.md). For more information about Log Analytics workspaces, see [Designing your Azure Monitor Logs deployment](../../azure-monitor/logs/workspace-design.md).
- Microsoft Sentinel [enabled in your subscription](../../sentinel/quickstart-onboard.md).
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
description: Learn how to replicate your Azure Cache for Redis Premium instances
Previously updated : 02/08/2021 Last updated : 05/24/2022 # Configure geo-replication for Premium Azure Cache for Redis instances
-In this article, you'll learn how to configure a geo-replicated Azure Cache using the Azure portal.
+In this article, you learn how to configure a geo-replicated Azure Cache using the Azure portal.
Geo-replication links together two Premium Azure Cache for Redis instances and creates a data replication relationship. These cache instances are typically located in different Azure regions, though that isn't required. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagates changes to the secondary. This process continues until the link between the two instances is removed.
Yes, geo-replication of caches in VNets is supported with caveats:
- Geo-replication between caches in different VNets is also supported. - If the VNets are in the same region, you can connect them using [VNet peering](../virtual-network/virtual-network-peering-overview.md) or a [VPN Gateway VNet-to-VNet connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md). - If the VNets are in different regions, geo-replication using VNet peering is supported. A client VM in VNet 1 (region 1) isn't able to access the cache in VNet 2 (region 2) using its DNS name because of a constraint with Basic internal load balancers. For more information about VNet peering constraints, see [Virtual Network - Peering - Requirements and constraints](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). We recommend using a VPN Gateway VNet-to-VNet connection.+
+To configure your VNet effectively and avoid geo-replication issues, you must configure both the inbound and outbound ports correctly. For more information on avoiding the most common VNet misconfiguration issues, see [Geo-replication peer port requirements](cache-how-to-premium-vnet.md#geo-replication-peer-port-requirements).
Using [this Azure template](https://azure.microsoft.com/resources/templates/redis-vnet-geo-replication/), you can quickly deploy two geo-replicated caches into a VNet connected with a VPN Gateway VNet-to-VNet connection.
azure-maps Authentication Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md
The single most important part of your application is its security. No matter how good the user experience might be, if your application isn't secure, a hacker can ruin it.
-The following are some tips to keep your Azure Maps application secure. When using Azure, be sure to familiarize yourself with the security tools available to you. For more information, See the [introduction to Azure security](/azure/security/fundamentals/overview).
+The following are some tips to keep your Azure Maps application secure. When using Azure, be sure to familiarize yourself with the security tools available to you. For more information, see the [introduction to Azure security](../security/fundamentals/overview.md).
## Understanding security threats
When creating a publicly facing client application with Azure Maps using any of
Subscription key-based authentication (Shared Key) can be used in either client-side applications or web services; however, it is the least secure approach to securing your application or web service. This is because the key grants access to all Azure Maps REST APIs available in the SKU (Pricing Tier) selected when creating the Azure Maps account, and the key can be easily obtained from an HTTP request. If you do use subscription keys, be sure to [rotate them regularly](how-to-manage-authentication.md#manage-and-rotate-shared-keys), and keep in mind that Shared Key doesn't allow for a configurable lifetime; rotation must be done manually. You should also consider using [Shared Key authentication with Azure Key Vault](how-to-secure-daemon-app.md#scenario-shared-key-authentication-with-azure-key-vault), which enables you to securely store your secret in Azure.
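As an illustration of the Key Vault recommendation above, the following is a minimal sketch of a web service retrieving its Azure Maps subscription key from Azure Key Vault at runtime instead of storing it in code or configuration. The vault URL and secret name are hypothetical placeholders, and the Search Address call is only an example request.

```python
# Minimal sketch: fetch the Azure Maps subscription key from Azure Key Vault at
# runtime rather than hard-coding it. Vault URL and secret name are placeholders.
import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<your-key-vault-name>.vault.azure.net"
SECRET_NAME = "azure-maps-subscription-key"  # hypothetical secret name

credential = DefaultAzureCredential()
secret_client = SecretClient(vault_url=VAULT_URL, credential=credential)
subscription_key = secret_client.get_secret(SECRET_NAME).value

# Use the key server-side only; never embed it in a public client application.
response = requests.get(
    "https://atlas.microsoft.com/search/address/json",
    params={
        "api-version": "1.0",
        "subscription-key": subscription_key,
        "query": "1 Microsoft Way, Redmond, WA",
    },
)
print(response.status_code)
```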
-If using [Azure Active Directory (Azure AD) authentication](/azure/active-directory/fundamentals/active-directory-whatis) or [Shared Access Signature (SAS) Token authentication](azure-maps-authentication.md#shared-access-signature-token-authentication) (preview), access to Azure Maps REST APIs is authorized using [role-based access control (RBAC)](azure-maps-authentication.md#authorization-with-role-based-access-control). RBAC enables you to control what access is given to the issued tokens. You should consider how long access should be granted for the tokens. Unlike Shared Key authentication, the lifetime of these tokens is configurable.
+If using [Azure Active Directory (Azure AD) authentication](../active-directory/fundamentals/active-directory-whatis.md) or [Shared Access Signature (SAS) Token authentication](azure-maps-authentication.md#shared-access-signature-token-authentication) (preview), access to Azure Maps REST APIs is authorized using [role-based access control (RBAC)](azure-maps-authentication.md#authorization-with-role-based-access-control). RBAC enables you to control what access is given to the issued tokens. You should consider how long access should be granted for the tokens. Unlike Shared Key authentication, the lifetime of these tokens is configurable.
> [!TIP] > > For more information on configuring token lifetimes, see:
-> - [Configurable token lifetimes in the Microsoft identity platform (preview)](/azure/active-directory/develop/active-directory-configurable-token-lifetimes)
+> - [Configurable token lifetimes in the Microsoft identity platform (preview)](../active-directory/develop/active-directory-configurable-token-lifetimes.md)
> - [Create SAS tokens](azure-maps-authentication.md#create-sas-tokens) ### Public client and confidential client applications
-There are different security concerns between public and confidential client applications. See [Public client and confidential client applications](/azure/active-directory/develop/msal-client-applications) in the Microsoft identity platform documentation for more information about what is considered a *public* versus *confidential* client application.
+There are different security concerns between public and confidential client applications. See [Public client and confidential client applications](../active-directory/develop/msal-client-applications.md) in the Microsoft identity platform documentation for more information about what is considered a *public* versus *confidential* client application.
### Public client applications
For apps that run on devices or desktop computers or in a web browser, you shoul
### Confidential client applications
-For apps that run on servers (such as web services and service/daemon apps), if you prefer to avoid the overhead and complexity of managing secrets, consider [Managed Identities](/azure/active-directory/managed-identities-azure-resources/overview). Managed identities can provide an identity for your web service to use when connecting to Azure Maps using Azure Active Directory (Azure AD) authentication. In this case, your web service will use that identity to obtain the required Azure AD tokens. You should use Azure RBAC to configure what access the web service is given, using the [Least privileged roles](/azure/active-directory/roles/delegate-by-task) possible.
+For apps that run on servers (such as web services and service/daemon apps), if you prefer to avoid the overhead and complexity of managing secrets, consider [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md). Managed identities can provide an identity for your web service to use when connecting to Azure Maps using Azure Active Directory (Azure AD) authentication. In this case, your web service will use that identity to obtain the required Azure AD tokens. You should use Azure RBAC to configure what access the web service is given, using the [Least privileged roles](../active-directory/roles/delegate-by-task.md) possible.
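To make the managed identity guidance concrete, the following is a minimal sketch of a confidential client obtaining an Azure AD access token and calling an Azure Maps REST API with it. The `https://atlas.microsoft.com/.default` scope and the example request are assumptions to verify against the Azure Maps authentication documentation; the client ID placeholder comes from the Azure Maps account. `DefaultAzureCredential` falls back to developer credentials when running locally, so the same code works during development without storing secrets.

```python
# Minimal sketch: a confidential client (for example, a web service with a managed
# identity) obtains an Azure AD token and calls an Azure Maps REST API. The scope
# below and the x-ms-client-id value are assumptions to verify for your account.
import requests
from azure.identity import DefaultAzureCredential

AZURE_MAPS_SCOPE = "https://atlas.microsoft.com/.default"   # assumed Azure Maps scope
AZURE_MAPS_CLIENT_ID = "<azure-maps-account-client-id>"     # placeholder

credential = DefaultAzureCredential()  # uses the managed identity when running in Azure
token = credential.get_token(AZURE_MAPS_SCOPE)

response = requests.get(
    "https://atlas.microsoft.com/search/address/json",
    params={"api-version": "1.0", "query": "1 Microsoft Way, Redmond, WA"},
    headers={
        "Authorization": f"Bearer {token.token}",
        "x-ms-client-id": AZURE_MAPS_CLIENT_ID,  # identifies the Azure Maps account
    },
)
print(response.status_code)
```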
## Next steps
For apps that run on servers (such as web services and service/daemon apps), if
> [Manage authentication in Azure Maps](how-to-manage-authentication.md) > [!div class="nextstepaction"]
-> [Tutorial: Add app authentication to your web app running on Azure App Service](../app-service/scenario-secure-app-authentication-app-service.md)
+> [Tutorial: Add app authentication to your web app running on Azure App Service](../app-service/scenario-secure-app-authentication-app-service.md)
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 5/19/2022 Last updated : 5/25/2022
We strongly recommend updating to the latest version at all times, or opt in
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| April 2022 | <ul><li>Private IP information added in Log Analytics <i>Heartbeat</i> table for Windows</li><li>Fixed bugs in Windows IIS log collection (preview) <ul><li>Updated IIS site column name to match backend KQL transform</li><li>Added delay to IIS upload task to account for IIS buffering</li></ul></li></ul> | 1.4.1.0<sup>Hotfix</sup> | Coming soon |
+| April 2022 | <ul><li>Private IP information added in Log Analytics <i>Heartbeat</i> table for Windows and Linux</li><li>Fixed bugs in Windows IIS log collection (preview) <ul><li>Updated IIS site column name to match backend KQL transform</li><li>Added delay to IIS upload task to account for IIS buffering</li></ul></li><li>Fixed Linux CEF syslog forwarding for Sentinel</li><li>Removed 'error' message for Azure MSI token retrieval failure on Arc to show as 'Info' instead</li><li>Support added for Ubuntu 22.04, AlmaLinux and RockyLinux distros</li></ul> | 1.4.1.0<sup>Hotfix</sup> | 1.19.3 |
| March 2022 | <ul><li>Fixed timestamp and XML format bugs in Windows Event logs</li><li>Full Windows OS information in Log Analytics Heartbeat table</li><li>Fixed Linux performance counters to collect instance values instead of 'total' only</li></ul> | 1.3.0.0 | 1.17.5.0 | | February 2022 | <ul><li>Bugfixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li><li>Internal test improvement on Linux</li></ul> | 1.2.0.0 | 1.15.3 | | January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The Azure Monitor agent can coexist (run side by side on the same machine) with
| Resource type | Installation method | Additional information | |:|:|:| | Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using Azure extension framework |
-| On-premise servers (Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing [Arc agent](/azure/azure-arc/servers/deployment-options)) | Installs the agent using Azure extension framework, provided for on-premise by first installing [Arc agent](/azure/azure-arc/servers/deployment-options) |
+| On-premises servers (Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing [Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent using the Azure extension framework, provided for on-premises servers by first installing the [Arc agent](../../azure-arc/servers/deployment-options.md) |
| Windows 10, 11 desktops, workstations | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer | | Windows 10, 11 laptops | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer. The installer works on laptops, but the agent is **not optimized yet** for battery or network consumption |
To configure the agent to use private links for network communications with Azur
## Next steps - [Install the Azure Monitor agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
This article describes how to configure the collection of file-based text logs,
## Prerequisites To complete this procedure, you need the following: -- Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#manage-access-using-azure-permissions) .
+- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. - An agent with supported log file as described in the next section.
The [data collection rule (DCR)](../essentials/data-collection-rule-overview.md)
"Microsoft-W3CIISLog" ], "logDirectories": [
- "C:\\inetpub\\logs\\LogFiles\\*.log"
+ "C:\\inetpub\\logs\\LogFiles\\"
], "name": "myIisLogsDataSource" }
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
Before starting, review the following requirements.
* Azure Monitor only supports System Center Operations Manager 2016 or later, Operations Manager 2012 SP1 UR6 or later, and Operations Manager 2012 R2 UR2 or later. Proxy support was added in Operations Manager 2012 SP1 UR7 and Operations Manager 2012 R2 UR3. * Integrating System Center Operations Manager 2016 with US Government cloud requires an updated Advisor management pack included with Update Rollup 2 or later. System Center Operations Manager 2012 R2 requires an updated Advisor management pack included with Update Rollup 3 or later. * All Operations Manager agents must meet minimum support requirements. Ensure that agents are at the minimum update, otherwise Windows agent communication may fail and generate errors in the Operations Manager event log.
-* A Log Analytics workspace. For further information, review [Log Analytics workspace overview](../logs/design-logs-deployment.md).
-* You authenticate to Azure with an account that is a member of the [Log Analytics Contributor role](../logs/manage-access.md#manage-access-using-azure-permissions).
+* A Log Analytics workspace. For further information, review [Log Analytics workspace overview](../logs/workspace-design.md).
+* You authenticate to Azure with an account that is a member of the [Log Analytics Contributor role](../logs/manage-access.md#azure-rbac).
* Supported Regions - Only the following Azure regions are supported by System Center Operations Manager to connect to a Log Analytics workspace: - West Central US
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
Therefore, to create a metric alert rule, all involved subscriptions must be reg
- The subscription containing the action groups associated with the alert rule (if defined) - The subscription in which the alert rule is saved
-Learn more about [registering resource providers](https://docs.microsoft.com/azure/azure-resource-manager/management/resource-providers-and-types).
+Learn more about [registering resource providers](../../azure-resource-manager/management/resource-providers-and-types.md).
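As a hedged illustration of the registration step, the following sketch uses the `azure-mgmt-resource` Python SDK to register the **Microsoft.Insights** resource provider on a subscription; the subscription ID is a placeholder, and the assumption that Microsoft.Insights is the provider required for metric alert rules should be confirmed against the linked article.

```python
# Minimal sketch: register the Microsoft.Insights resource provider on a subscription
# using the azure-mgmt-resource SDK. The subscription ID is a placeholder, and the
# provider namespace is an assumption based on the guidance above.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<subscription-id>"

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Start registration, then check the current state (registration can take a few minutes).
client.providers.register("Microsoft.Insights")
provider = client.providers.get("Microsoft.Insights")
print(provider.registration_state)
```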
## Naming restrictions for metric alert rules
The table below lists the metrics that aren't supported by dynamic thresholds.
## Next steps -- For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
+- For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
If you don't need to migrate an existing resource, and instead want to create a
- A Log Analytics workspace with the access control mode set to the **`use resource or workspace permissions`** setting.
- - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **`workspace based permissions`** setting. To learn more about Log Analytics workspace access control, consult the [Log Analytics configure access control mode guidance](../logs/manage-access.md#configure-access-control-mode)
+ - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **`workspace based permissions`** setting. To learn more about Log Analytics workspace access control, consult the [access control mode guidance](../logs/manage-access.md#access-control-mode)
- If you don't already have an existing Log Analytics Workspace, [consult the Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
From within the Application Insights resource pane, select **Properties** > **Ch
**Error message:** *The selected workspace is configured with workspace-based access mode. Some APM features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI.*
-In order for your workspace-based Application Insights resource to operate properly you need to change the access control mode of your target Log Analytics workspace to the **resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For detailed instructions, consult the [Log Analytics configure access control mode guidance](../logs/manage-access.md#configure-access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience will remain blocked.
+For your workspace-based Application Insights resource to operate properly, you need to change the access control mode of your target Log Analytics workspace to the **resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For detailed instructions, consult the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience will remain blocked.
If you can't change the access control mode for security reasons for your current target workspace, we recommend creating a new Log Analytics workspace to use for the migration.
azure-monitor Data Model Pageview Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-pageview-telemetry.md
# PageView telemetry: Application Insights data model
-PageView telemetry (in [Application Insights](./app-insights-overview.md)) is logged when an application user opens a new page of a monitored application. The `Page` in this context is a logical unit that is defined by the developer to be an application tab or a screen and is not necessarily correlated to a browser webpage load or refresh action. This distinction can be further understood in the context of single-page applications (SPA) where the switch between pages is not tied to browser page actions. [`pageViews.duration`](https://docs.microsoft.com/azure/azure-monitor/reference/tables/pageviews) is the time it takes for the application to present the page to the user.
+PageView telemetry (in [Application Insights](./app-insights-overview.md)) is logged when an application user opens a new page of a monitored application. The `Page` in this context is a logical unit that is defined by the developer to be an application tab or a screen and is not necessarily correlated to a browser webpage load or refresh action. This distinction can be further understood in the context of single-page applications (SPA) where the switch between pages is not tied to browser page actions. [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) is the time it takes for the application to present the page to the user.
> [!NOTE]
-> By default, Application Insights SDKs log single PageView events on each browser webpage load action, with [`pageViews.duration`](https://docs.microsoft.com/azure/azure-monitor/reference/tables/pageviews) populated by [browser timing](#measuring-browsertiming-in-application-insights). Developers can extend additional tracking of PageView events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
+> By default, Application Insights SDKs log single PageView events on each browser webpage load action, with [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) populated by [browser timing](#measuring-browsertiming-in-application-insights). Developers can extend additional tracking of PageView events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
## Measuring browserTiming in Application Insights
Modern browsers expose measurements for page load actions with the [Performance
* If it's not, then the *deprecated* [`PerformanceTiming`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming) interface is used and the delta between [`NavigationStart`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/navigationStart) and [`LoadEventEnd`](https://developer.mozilla.org/en-US/docs/Web/API/PerformanceTiming/loadEventEnd) is calculated. * The developer specifies a duration value when logging custom PageView events using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
-![Screenshot of the Metrics page in Application Insights showing graphic displays of metrics data for a web application.](./media/javascript/page-view-load-time.png)
+![Screenshot of the Metrics page in Application Insights showing graphic displays of metrics data for a web application.](./media/javascript/page-view-load-time.png)
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-best-practices.md
You can also use an Activity Log alert to monitor the health of the autoscale en
In addition to using activity log alerts, you can also configure email or webhook notifications to get notified for scale actions via the notifications tab on the autoscale setting.
+## Send data securely using TLS 1.2
+To ensure the security of data in transit to Azure Monitor, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable. While they currently still work to allow backward compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
+
+The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a deadline of [June 30th, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents cannot communicate over at least TLS 1.2, you will not be able to send data to Azure Monitor Logs.
+
+We recommend you do NOT explicitly set your agent to only use TLS 1.2 unless absolutely necessary. Allowing the agent to automatically detect, negotiate, and take advantage of future security standards is preferable. Otherwise, you may miss the added security of the newer standards and possibly experience problems if TLS 1.2 is ever deprecated in favor of those newer standards.
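If you also send data from your own code, the following minimal Python sketch reflects the same guidance: set TLS 1.2 as a floor while leaving newer protocol versions available, rather than pinning the client to TLS 1.2 only. The host shown is a placeholder endpoint.

```python
# Minimal sketch: enforce TLS 1.2 as a *minimum* for a custom client while still
# allowing newer protocol versions to be negotiated. The host is a placeholder.
import socket
import ssl

HOST = "www.microsoft.com"  # placeholder endpoint
PORT = 443

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older than TLS 1.2

with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
```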
++ ## Next Steps - [Create an Activity Log Alert to monitor all autoscale engine operations on your subscription.](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert) - [Create an Activity Log Alert to monitor all failed autoscale scale in/scale out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
azure-monitor Azure Monitor Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-monitoring-reference.md
The following schemas are relevant to action groups, which are part of the notif
## See Also - See [Monitoring Azure Azure Monitor](monitor-azure-monitor.md) for a description of what Azure Monitor monitors in itself. -- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
This article is part of the scenario [Recommendations for configuring Azure Moni
> [!IMPORTANT] > The features of Azure Monitor and their configuration will vary depending on your business requirements balanced with the cost of the enabled features. Each step below will identify whether there is potential cost, and you should assess these costs before proceeding. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for complete pricing details.
-## Create Log Analytics workspace
-You require at least one Log Analytics workspace to enable [Azure Monitor Logs](logs/data-platform-logs.md), which is required for collecting such data as logs from Azure resources, collecting data from the guest operating system of Azure virtual machines, and for most Azure Monitor insights. Other services such as Microsoft Sentinel and Microsoft Defender for Cloud also use a Log Analytics workspace and can share the same one that you use for Azure Monitor. You can start with a single workspace to support this monitoring, but see [Designing your Azure Monitor Logs deployment](logs/design-logs-deployment.md) for guidance on when to use multiple workspaces.
+## Design Log Analytics workspace architecture
+You require at least one Log Analytics workspace to enable [Azure Monitor Logs](logs/data-platform-logs.md), which is required for collecting such data as logs from Azure resources, collecting data from the guest operating system of Azure virtual machines, and for most Azure Monitor insights. Other services such as Microsoft Sentinel and Microsoft Defender for Cloud also use a Log Analytics workspace and can share the same one that you use for Azure Monitor.
-There is no cost for creating a Log Analytics workspace, but there is a potential charge once you configure data to be collected into it. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details.
+There is no cost for creating a Log Analytics workspace, but there is a potential charge once you configure data to be collected into it. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how log data is charged.
+
+See [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md) to create an initial Log Analytics workspace and [Manage access to Log Analytics workspaces](logs/manage-access.md) to configure access. You can use scalable methods such as Resource Manager templates to configure workspaces, though this is often not required since most environments will require a minimal number.
+
+Start with a single workspace to support initial monitoring, but see [Design a Log Analytics workspace configuration](logs/workspace-design.md) for guidance on when to use multiple workspaces and how to locate and configure them.
-See [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md) to create an initial Log Analytics workspace. See [Manage access to log data and workspaces in Azure Monitor](logs/manage-access.md) to configure access. You can use scalable methods such as Resource Manager templates to configure workspaces though, this is often not required since most environments will require a minimal number.
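As a companion to the portal and Resource Manager options mentioned above, the following is a minimal sketch of creating a workspace with the Python management SDK. The resource group, workspace name, region, and retention value are placeholders, and the `begin_create_or_update` call assumes a recent (track 2) `azure-mgmt-loganalytics` package.

```python
# Minimal sketch: create an initial Log Analytics workspace programmatically.
# Assumes the track 2 azure-mgmt-loganalytics and azure-identity packages; the
# subscription, resource group, workspace name, region, and retention are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-monitoring"      # placeholder
WORKSPACE_NAME = "law-contoso-prod"   # placeholder
LOCATION = "eastus"                   # placeholder

client = LogAnalyticsManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = client.workspaces.begin_create_or_update(
    RESOURCE_GROUP,
    WORKSPACE_NAME,
    {
        "location": LOCATION,
        "sku": {"name": "PerGB2018"},   # pay-as-you-go pricing tier
        "retention_in_days": 30,
    },
)
workspace = poller.result()
print(workspace.id)
```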
## Collect data from Azure resources Some monitoring of Azure resources is available automatically with no configuration required, while you must perform configuration steps to collect additional monitoring data. The following table illustrates the configuration steps required to collect all available data from your Azure resources, including at which step data is sent to Azure Monitor Metrics and Azure Monitor Logs. The sections below describe each step in further detail.
azure-monitor Container Insights Azure Redhat Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-azure-redhat-setup.md
Container insights supports monitoring Azure Red Hat OpenShift as described in t
## Prerequisites -- A [Log Analytics workspace](../logs/design-logs-deployment.md).
+- A [Log Analytics workspace](../logs/workspace-design.md).
Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md). -- To enable and access the features in Container insights, at a minimum you need to be a member of the Azure *Contributor* role in the Azure subscription, and a member of the [*Log Analytics Contributor*](../logs/manage-access.md#manage-access-using-azure-permissions) role of the Log Analytics workspace configured with Container insights.
+- To enable and access the features in Container insights, at a minimum you need to be a member of the Azure *Contributor* role in the Azure subscription, and a member of the [*Log Analytics Contributor*](../logs/manage-access.md#azure-rbac) role of the Log Analytics workspace configured with Container insights.
-- To view the monitoring data, you are a member of the [*Log Analytics reader*](../logs/manage-access.md#manage-access-using-azure-permissions) role permission with the Log Analytics workspace configured with Container insights.
+- To view the monitoring data, you must have the [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace configured with Container insights.
## Identify your Log Analytics workspace ID
azure-monitor Container Insights Azure Redhat4 Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-azure-redhat4-setup.md
Container insights supports monitoring Azure Red Hat OpenShift v4.x as described
- The [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command-line tool -- A [Log Analytics workspace](../logs/design-logs-deployment.md).
+- A [Log Analytics workspace](../logs/workspace-design.md).
Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md). -- To enable and access the features in Container insights, you need to have, at minimum, an Azure *Contributor* role in the Azure subscription and a [*Log Analytics Contributor*](../logs/manage-access.md#manage-access-using-azure-permissions) role in the Log Analytics workspace, configured with Container insights.
+- To enable and access the features in Container insights, you need to have, at minimum, an Azure *Contributor* role in the Azure subscription and a [*Log Analytics Contributor*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace configured with Container insights.
-- To view the monitoring data, you need to have [*Log Analytics reader*](../logs/manage-access.md#manage-access-using-azure-permissions) role in the Log Analytics workspace, configured with Container insights.
+- To view the monitoring data, you need to have the [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace configured with Container insights.
## Enable monitoring for an existing cluster
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
description: "Collect metrics and logs of Azure Arc-enabled Kubernetes clusters
- You've met the pre-requisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites). - A Log Analytics workspace: Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed under Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md), or [Azure portal](../logs/quick-create-workspace.md).-- You need to have [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then [Log Analytics Contributor](../logs/manage-access.md#manage-access-using-azure-permissions) role assignment is needed on the Log Analytics workspace.-- To view the monitoring data, you need to have [Log Analytics Reader](../logs/manage-access.md#manage-access-using-azure-permissions) role assignment on the Log Analytics workspace.
+- You need to have [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the Log Analytics workspace.
+- To view the monitoring data, you need to have [Log Analytics Reader](../logs/manage-access.md#azure-rbac) role assignment on the Log Analytics workspace.
- The following endpoints need to be enabled for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements). | Endpoint | Port |
azure-monitor Container Insights Hybrid Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md
The following configurations are officially supported with Container insights. I
Before you start, make sure that you have the following: -- A [Log Analytics workspace](../logs/design-logs-deployment.md).
+- A [Log Analytics workspace](../logs/workspace-design.md).
Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md).
Before you start, make sure that you have the following:
- You are a member of the **Log Analytics contributor role** to enable container monitoring. For more information about how to control access to a Log Analytics workspace, see [Manage access to workspace and log data](../logs/manage-access.md). -- To view the monitoring data, you need to have [*Log Analytics reader*](../logs/manage-access.md#manage-access-using-azure-permissions) role in the Log Analytics workspace, configured with Container insights.
+- To view the monitoring data, you need to have the [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace configured with Container insights.
- [HELM client](https://helm.sh/docs/using_helm/) to onboard the Container insights chart for the specified Kubernetes cluster.
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
Kubelet secure port (:10250) should be opened in the cluster's virtual network f
[!INCLUDE [log-analytics-agent-note](../../../includes/log-analytics-agent-note.md)] -- To view the monitoring data, you need to have [*Log Analytics reader*](../logs/manage-access.md#manage-access-using-azure-permissions) role in the Log Analytics workspace, configured with Container insights.
+- To view the monitoring data, you need to have the [*Log Analytics reader*](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace configured with Container insights.
- Prometheus metrics aren't collected by default. Before you [configure the agent](container-insights-prometheus-integration.md) to collect the metrics, it's important to review the [Prometheus documentation](https://prometheus.io/) to understand what data can be scraped and what methods are supported. - An AKS cluster can be attached to a Log Analytics workspace in a different Azure subscription in the same Azure AD Tenant. This cannot currently be done with the Azure Portal, but can be done with Azure CLI or Resource Manager template.
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Platform logs and metrics can be sent to the destinations in the following table
| Destination | Description | |:|:|
-| [Log Analytics workspace](../logs/design-logs-deployment.md) | Metrics are converted to log form. This option may not be available for all resource types. Sending them to the Azure Monitor Logs store (which is searchable via Log Analytics) helps you to integrate them into queries, alerts, and visualizations with existing log data.
+| [Log Analytics workspace](../logs/workspace-design.md) | Metrics are converted to log form. This option may not be available for all resource types. Sending them to the Azure Monitor Logs store (which is searchable via Log Analytics) helps you to integrate them into queries, alerts, and visualizations with existing log data.
| [Azure storage account](../../storage/blobs/index.yml) | Archiving logs and metrics to an Azure storage account is useful for audit, static analysis, or backup. Compared to Azure Monitor Logs and a Log Analytics workspace, Azure storage is less expensive and logs can be kept there indefinitely. | | [Event Hubs](../../event-hubs/index.yml) | Sending logs and metrics to Event Hubs allows you to stream data to external systems such as third-party SIEMs and other Log Analytics solutions. | | [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you are already using one of the partners. |
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Storage | [Blobs](../../storage/blobs/monitor-blob-storage-reference.md#resource-logs-preview), [Files](../../storage/files/storage-files-monitoring-reference.md#resource-logs-preview), [Queues](../../storage/queues/monitor-queue-storage-reference.md#resource-logs-preview), [Tables](../../storage/tables/monitor-table-storage-reference.md#resource-logs-preview) | | Azure Stream Analytics |[Job logs](../../stream-analytics/stream-analytics-job-diagnostic-logs.md) | | Azure Traffic Manager | [Traffic Manager log schema](../../traffic-manager/traffic-manager-diagnostic-logs.md) |
-| Azure Video Indexer|[Monitor Azure Video Indexer data reference](/azure/azure-video-indexer/monitor-video-indexer-data-reference)|
+| Azure Video Indexer|[Monitor Azure Video Indexer data reference](../../azure-video-indexer/monitor-video-indexer-data-reference.md)|
| Azure Virtual Network | Schema not available | | Virtual network gateways | [Logging for Virtual Network Gateways](../../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md)|
The schema for resource logs varies depending on the resource and log category.
* [Learn more about resource logs](../essentials/platform-logs-overview.md) * [Stream resource logs to Event Hubs](./resource-logs.md#send-to-azure-event-hubs) * [Change resource log diagnostic settings by using the Azure Monitor REST API](/rest/api/monitor/diagnosticsettings)
-* [Analyze logs from Azure Storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
+* [Analyze logs from Azure Storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
If you manage subscriptions in other Azure Active Directory (Azure AD) tenants t
There are two methods to query data that is stored in multiple workspaces and apps: 1. Explicitly by specifying the workspace and app details. This technique is detailed in this article.
-2. Implicitly using [resource-context queries](./design-logs-deployment.md#access-mode). When you query in the context of a specific resource, resource group or a subscription, the relevant data will be fetched from all workspaces that contains data for these resources. Application Insights data that is stored in apps, will not be fetched.
+2. Implicitly using [resource-context queries](manage-access.md#access-mode). When you query in the context of a specific resource, resource group, or subscription, the relevant data will be fetched from all workspaces that contain data for these resources. Application Insights data that is stored in apps will not be fetched.
> [!IMPORTANT] > If you are using a [workspace-based Application Insights resource](../app/create-workspace-resource.md), telemetry is stored in a Log Analytics workspace with all other log data. Use the workspace() expression to write a query that includes applications in multiple workspaces. For multiple applications in the same workspace, you don't need a cross workspace query.
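The following is a minimal sketch of the explicit method, using the `workspace()` expression to union the same table across two workspaces and running the query with the `azure-monitor-query` Python SDK. The workspace IDs are placeholders; `workspace()` also accepts a workspace name or a full Azure resource ID.

```python
# Minimal sketch: explicitly query two workspaces with the workspace() expression.
# Workspace IDs are placeholders; workspace() also accepts names and resource IDs.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

PRIMARY_WORKSPACE_ID = "<workspace-id-1>"
SECOND_WORKSPACE_ID = "<workspace-id-2>"

QUERY = f"""
union Heartbeat, workspace("{SECOND_WORKSPACE_ID}").Heartbeat
| where TimeGenerated > ago(1h)
| summarize heartbeats = count() by Computer, TenantId
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(PRIMARY_WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))

for table in response.tables:
    for row in table.rows:
        print(row)
```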
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Customer-Managed key is provided on dedicated cluster and these operations are r
## Next steps - Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters)-- Learn about [proper design of Log Analytics workspaces](./design-logs-deployment.md)
+- Learn about [proper design of Log Analytics workspaces](./workspace-design.md)
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
To use the HTTP Data Collector API, you create a POST request that includes the
| Authorization |The authorization signature. Later in the article, you can read about how to create an HMAC-SHA256 header. | | Log-Type |Specify the record type of the data that's being submitted. It can contain only letters, numbers, and the underscore (_) character, and it can't exceed 100 characters. | | x-ms-date |The date that the request was processed, in RFC 7234 format. |
-| x-ms-AzureResourceId | The resource ID of the Azure resource that the data should be associated with. It populates the [_ResourceId](./log-standard-columns.md#_resourceid) property and allows the data to be included in [resource-context](./design-logs-deployment.md#access-mode) queries. If this field isn't specified, the data won't be included in resource-context queries. |
+| x-ms-AzureResourceId | The resource ID of the Azure resource that the data should be associated with. It populates the [_ResourceId](./log-standard-columns.md#_resourceid) property and allows the data to be included in [resource-context](manage-access.md#access-mode) queries. If this field isn't specified, the data won't be included in resource-context queries. |
| time-generated-field | The name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for **TimeGenerated**. If you don't specify this field, the default for **TimeGenerated** is the time that the message is ingested. The contents of the message field should follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. Note: the Time Generated value cannot be older than 3 days before received time or the row will be dropped.| | | |
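The following is a minimal Python sketch that assembles the headers described in the table above and posts a custom JSON record. The workspace ID and shared key are placeholders, and the string-to-sign layout and ingestion endpoint follow the commonly documented pattern for this API, so verify them against the full reference before relying on this sketch.

```python
# Minimal sketch: build the Authorization, Log-Type, and x-ms-date headers described
# above and post one custom JSON record. Workspace ID and shared key are placeholders;
# verify the string-to-sign layout and endpoint against the full API reference.
import base64
import datetime
import hashlib
import hmac
import json

import requests

WORKSPACE_ID = "<workspace-id>"
SHARED_KEY = "<primary-or-secondary-key>"
LOG_TYPE = "MyCustomLog"  # records land in the MyCustomLog_CL table

body = json.dumps([{"Computer": "server01", "Status": "healthy"}]).encode("utf-8")
rfc1123_date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")

string_to_sign = f"POST\n{len(body)}\napplication/json\nx-ms-date:{rfc1123_date}\n/api/logs"
signature = base64.b64encode(
    hmac.new(base64.b64decode(SHARED_KEY), string_to_sign.encode("utf-8"), hashlib.sha256).digest()
).decode("utf-8")

response = requests.post(
    f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"SharedKey {WORKSPACE_ID}:{signature}",
        "Log-Type": LOG_TYPE,
        "x-ms-date": rfc1123_date,
    },
)
print(response.status_code)  # 200 indicates the record was accepted
```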
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
This configuration will be different depending on the data source. For example:
For a complete list of data sources that you can configure to send data to Azure Monitor Logs, see [What is monitored by Azure Monitor?](../monitor-reference.md). ## Log Analytics workspaces
-Azure Monitor Logs stores the data that it collects in one or more [Log Analytics workspaces](./design-logs-deployment.md). You must create at least one workspace to use Azure Monitor Logs. See [Log Analytics workspace overview](log-analytics-workspace-overview.md) For a description of Log Analytics workspaces.
+Azure Monitor Logs stores the data that it collects in one or more [Log Analytics workspaces](./workspace-design.md). You must create at least one workspace to use Azure Monitor Logs. See [Log Analytics workspace overview](log-analytics-workspace-overview.md) for a description of Log Analytics workspaces.
## Log Analytics Log Analytics is a tool in the Azure portal. Use it to edit and run log queries and interactively analyze their results. You can then use those queries to support other features in Azure Monitor, such as log query alerts and workbooks. Access Log Analytics from the **Logs** option on the Azure Monitor menu or from most other services in the Azure portal.
azure-monitor Design Logs Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/design-logs-deployment.md
- Title: Designing your Azure Monitor Logs deployment | Microsoft Docs
-description: This article describes the considerations and recommendations for customers preparing to deploy a workspace in Azure Monitor.
---- Previously updated : 05/04/2022---
-# Designing your Azure Monitor Logs deployment
-
-Azure Monitor stores [log](data-platform-logs.md) data in a Log Analytics workspace, which is an Azure resource and a container where data is collected, aggregated, and serves as an administrative boundary. While you can deploy one or more workspaces in your Azure subscription, there are several considerations you should understand in order to ensure your initial deployment is following our guidelines to provide you with a cost effective, manageable, and scalable deployment meeting your organization's needs.
-
-Data in a workspace is organized into tables, each of which stores different kinds of data and has its own unique set of properties based on the resource generating the data. Most data sources will write to their own tables in a Log Analytics workspace.
-
-![Example workspace data model](./media/design-logs-deployment/logs-data-model-01.png)
-
-A Log Analytics workspace provides:
-
-* A geographic location for data storage.
-* Data isolation by granting different users access rights following one of our recommended design strategies.
-* Scope for configuration of settings like [pricing tier](cost-logs.md#commitment-tiers), [retention](data-retention-archive.md), and [data capping](daily-cap.md).
-
-Workspaces are hosted on physical clusters. By default, the system creates and manages these clusters. Customers that ingest more than 4 TB/day are expected to create their own dedicated clusters for their workspaces, which gives them better control and a higher ingestion rate.
-
-This article provides a detailed overview of the design and migration considerations, access control overview, and an understanding of the design implementations we recommend for your IT organization.
---
-## Important considerations for an access control strategy
-
-Identifying the number of workspaces you need is influenced by one or more of the following requirements:
-
-* You are a global company and you need log data stored in specific regions for data sovereignty or compliance reasons.
-* You are using Azure and you want to avoid outbound data transfer charges by having a workspace in the same region as the Azure resources it manages.
-* You manage multiple departments or business groups, and you want each to see their own data, but not data from others. Also, there is no business requirement for a consolidated cross department or business group view.
-
-IT organizations today are modeled following either a centralized, decentralized, or an in-between hybrid of both structures. As a result, the following workspace deployment models have been commonly used to map to one of these organizational structures:
-
-* **Centralized**: All logs are stored in a central workspace and administered by a single team, with Azure Monitor providing differentiated access per-team. In this scenario, it is easy to manage, search across resources, and cross-correlate logs. The workspace can grow significantly depending on the amount of data collected from multiple resources in your subscription, with additional administrative overhead to maintain access control to different users. This model is known as "hub and spoke".
-* **Decentralized**: Each team has their own workspace created in a resource group they own and manage, and log data is segregated per resource. In this scenario, the workspace can be kept secure and access control is consistent with resource access, but it's difficult to cross-correlate logs. Users who need a broad view of many resources cannot analyze the data in a meaningful way.
-* **Hybrid**: Security audit compliance requirements further complicate this scenario because many organizations implement both deployment models in parallel. This commonly results in a complex, expensive, and hard-to-maintain configuration with gaps in logs coverage.
-
-When using the Log Analytics agents to collect data, you need to understand the following in order to plan your agent deployment:
-
-* To collect data from Windows agents, you can [configure each agent to report to one or more workspaces](./../agents/agent-windows.md), even while it is reporting to a System Center Operations Manager management group. The Windows agent can report up to four workspaces.
-* The Linux agent does not support multi-homing and can only report to a single workspace.
-
-If you are using System Center Operations Manager 2012 R2 or later:
-
-* Each Operations Manager management group can be [connected to only one workspace](../agents/om-agents.md).
-* Linux computers reporting to a management group must be configured to report directly to a Log Analytics workspace. If your Linux computers are already reporting directly to a workspace and you want to monitor them with Operations Manager, follow these steps to [report to an Operations Manager management group](../agents/agent-manage.md#configure-agent-to-report-to-an-operations-manager-management-group).
-* You can install the Log Analytics Windows agent on the Windows computer and have it report to both Operations Manager integrated with a workspace, and a different workspace.
-
-## Access control overview
-
-With Azure role-based access control (Azure RBAC), you can grant users and groups only the amount of access they need to work with monitoring data in a workspace. This allows you to align with your IT organization operating model using a single workspace to store collected data enabled on all your resources. For example, you grant access to your team responsible for infrastructure services hosted on Azure virtual machines (VMs), and as a result they'll have access to only the logs generated by the VMs. This is following our new resource-context log model. The basis for this model is for every log record emitted by an Azure resource, it is automatically associated with this resource. Logs are forwarded to a central workspace that respects scoping and Azure RBAC based on the resources.
-
-The data a user has access to is determined by a combination of factors that are listed in the following table. Each is described in the sections below.
-
-| Factor | Description |
-|:|:|
-| [Access mode](#access-mode) | Method the user uses to access the workspace. Defines the scope of the data available and the access control mode that's applied. |
-| [Access control mode](#access-control-mode) | Setting on the workspace that defines whether permissions are applied at the workspace or resource level. |
-| [Permissions](./manage-access.md) | Permissions applied to individual or groups of users for the workspace or resource. Defines what data the user will have access to. |
-| [Table level Azure RBAC](./manage-access.md#table-level-azure-rbac) | Optional granular permissions that apply to all users regardless of their access mode or access control mode. Defines which data types a user can access. |
-
-## Access mode
-
-The *access mode* refers to how a user accesses a Log Analytics workspace and defines the scope of data they can access.
-
-Users have two options for accessing the data:
-
-* **Workspace-context**: You can view all logs in the workspace you have permission to. Queries in this mode are scoped to all data in all tables in the workspace. This is the access mode used when logs are accessed with the workspace as the scope, such as when you select **Logs** from the **Azure Monitor** menu in the Azure portal.
-
- ![Log Analytics context from workspace](./media/design-logs-deployment/query-from-workspace.png)
-
-* **Resource-context**: When you access the workspace for a particular resource, resource group, or subscription, such as when you select **Logs** from a resource menu in the Azure portal, you can view logs for only resources in all tables that you have access to. Queries in this mode are scoped to only data associated with that resource. This mode also enables granular Azure RBAC.
-
- ![Log Analytics context from resource](./media/design-logs-deployment/query-from-resource.png)
-
- > [!NOTE]
- > Logs are available for resource-context queries only if they were properly associated with the relevant resource. Currently, the following resources have limitations:
- > - Computers outside of Azure - Supported for resource-context only via [Azure Arc for Servers](../../azure-arc/servers/index.yml)
- > - Service Fabric
- > - Application Insights - Supported for resource-context only when using [Workspace-based Application Insights resource](../app/create-workspace-resource.md)
- >
- > You can test if logs are properly associated with their resource by running a query and inspecting the records you're interested in. If the correct resource ID is in the [_ResourceId](./log-standard-columns.md#_resourceid) property, then data is available to resource-centric queries.
-
-Azure Monitor automatically determines the right mode depending on the context you perform the log search from. The scope is always presented in the top-left section of Log Analytics.
-
-### Comparing access modes
-
-The following table summarizes the access modes:
-
-| Issue | Workspace-context | Resource-context |
-|:|:|:|
-| Who is each model intended for? | Central administration. Administrators who need to configure data collection and users who need access to a wide variety of resources. Also currently required for users who need to access logs for resources outside of Azure. | Application teams. Administrators of Azure resources being monitored. |
-| What does a user require to view logs? | Permissions to the workspace. See **Workspace permissions** in [Manage access using workspace permissions](./manage-access.md#manage-access-using-workspace-permissions). | Read access to the resource. See **Resource permissions** in [Manage access using Azure permissions](./manage-access.md#manage-access-using-azure-permissions). Permissions can be inherited (such as from the containing resource group) or directly assigned to the resource. Permission to the logs for the resource will be automatically assigned. |
-| What is the scope of permissions? | Workspace. Users with access to the workspace can query all logs in the workspace from tables that they have permissions to. See [Table access control](./manage-access.md#table-level-azure-rbac) | Azure resource. User can query logs for specific resources, resource groups, or subscription they have access to from any workspace but can't query logs for other resources. |
-| How can user access logs? | <ul><li>Start **Logs** from **Azure Monitor** menu.</li></ul> <ul><li>Start **Logs** from **Log Analytics workspaces**.</li></ul> <ul><li>From Azure Monitor [Workbooks](../best-practices-analysis.md#workbooks).</li></ul> | <ul><li>Start **Logs** from the menu for the Azure resource</li></ul> <ul><li>Start **Logs** from **Azure Monitor** menu.</li></ul> <ul><li>Start **Logs** from **Log Analytics workspaces**.</li></ul> <ul><li>From Azure Monitor [Workbooks](../best-practices-analysis.md#workbooks).</li></ul> |
-
-## Access control mode
-
-The *Access control mode* is a setting on each workspace that defines how permissions are determined for the workspace.
-
-* **Require workspace permissions**: This control mode does not allow granular Azure RBAC. For a user to access the workspace, they must be granted permissions to the workspace or to specific tables.
-
- If a user accesses the workspace following the workspace-context mode, they have access to all data in any table they've been granted access to. If a user accesses the workspace following the resource-context mode, they have access to only data for that resource in any table they've been granted access to.
-
- This is the default setting for all workspaces created before March 2019.
-
-* **Use resource or workspace permissions**: This control mode allows granular Azure RBAC. Users can be granted access to only data associated with resources they can view by assigning Azure `read` permission.
-
- When a user accesses the workspace in workspace-context mode, workspace permissions apply. When a user accesses the workspace in resource-context mode, only resource permissions are verified, and workspace permissions are ignored. Enable Azure RBAC for a user by removing them from workspace permissions and allowing their resource permissions to be recognized.
-
- This is the default setting for all workspaces created after March 2019.
-
- > [!NOTE]
- > If a user has only resource permissions to the workspace, they are only able to access the workspace using resource-context mode assuming the workspace access mode is set to **Use resource or workspace permissions**.
-
-To learn how to change the access control mode in the portal, with PowerShell, or using a Resource Manager template, see [Configure access control mode](./manage-access.md#configure-access-control-mode).
-
-## Scale and ingestion volume rate limit
-
-Azure Monitor is a high scale data service that serves thousands of customers sending petabytes of data each month at a growing pace. Workspaces are not limited in their storage space and can grow to petabytes of data. There is no need to split workspaces due to scale.
-
-To protect and isolate Azure Monitor customers and its backend infrastructure, there is a default ingestion rate limit that is designed to protect from spikes and floods situations. The rate limit default is about **6 GB/minute** and is designed to enable normal ingestion. For more details on ingestion volume limit measurement, see [Azure Monitor service limits](../service-limits.md#data-ingestion-volume-rate).
-
-Customers that ingest less than 4TB/day will usually not meet these limits. Customers that ingest higher volumes or that have spikes as part of their normal operations shall consider moving to [dedicated clusters](./logs-dedicated-clusters.md) where the ingestion rate limit could be raised.
-
-When the ingestion rate limit is activated or get to 80% of the threshold, an event is added to the *Operation* table in your workspace. It is recommended to monitor it and create an alert. See more details in [data ingestion volume rate](../service-limits.md#data-ingestion-volume-rate).
--
-## Recommendations
-
-![Resource-context design example](./media/design-logs-deployment/workspace-design-resource-context-01.png)
-
-This scenario covers a single workspace design in your IT organization's subscription that is not constrained by data sovereignty or regulatory compliance, or needs to map to the regions your resources are deployed within. It allows your organization's security and IT admin teams the ability to leverage the improved integration with Azure access management and more secure access control.
-
-All resources, monitoring solutions, and Insights such as Application Insights and VM insights, supporting infrastructure and applications maintained by the different teams are configured to forward their collected log data to the IT organization's centralized shared workspace. Users on each team are granted access to logs for resources they have been given access to.
-
-Once you have deployed your workspace architecture, you can enforce this on Azure resources with [Azure Policy](../../governance/policy/overview.md). It provides a way to define policies and ensure compliance with your Azure resources so they send all their resource logs to a particular workspace. For example, with Azure virtual machines or virtual machine scale sets, you can use existing policies that evaluate workspace compliance and report results, or customize to remediate if non-compliant.
-
-## Workspace consolidation migration strategy
-
-For customers who have already deployed multiple workspaces and are interested in consolidating to the resource-context access model, we recommend you take an incremental approach to migrate to the recommended access model, and you don't attempt to achieve this quickly or aggressively. Following a phased approach to plan, migrate, validate, and retire following a reasonable timeline will help avoid any unplanned incidents or unexpected impact to your cloud operations. If you do not have a data retention policy for compliance or business reasons, you need to assess the appropriate length of time to retain data in the workspace you are migrating from during the process. While you are reconfiguring resources to report to the shared workspace, you can still analyze the data in the original workspace as necessary. Once the migration is complete, if you're governed to retain data in the original workspace before the end of the retention period, don't delete it.
-
-While planning your migration to this model, consider the following:
-
-* Understand what industry regulations and internal policies regarding data retention you must comply with.
-* Make sure that your application teams can work within the existing resource-context functionality.
-* Identify the access granted to resources for your application teams and test in a development environment before implementing in production.
-* Configure the workspace to enable **Use resource or workspace permissions**.
-* Remove application teams permission to read and query the workspace.
-* Enable and configure any monitoring solutions, Insights such as Container insights and/or Azure Monitor for VMs, your Automation account(s), and management solutions such as Update Management, Start/Stop VMs, etc., that were deployed in the original workspace.
-
-## Next steps
-
-To implement the security permissions and controls recommended in this guide, review [manage access to logs](./manage-access.md).
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
A Log Analytics workspace is a unique environment for log data from Azure Monito
You can use a single workspace for all your data collection, or you may create multiple workspaces based on a variety of requirements such as the geographic location of the data, access rights that define which users can access data, and configuration settings such as the pricing tier and data retention.
-To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Designing your Azure Monitor Logs deployment](design-logs-deployment.md).
+To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Design a Log Analytics workspace configuration](workspace-design.md).
## Data structure
To access archived data, you must first retrieve data from it in an Analytics Lo
## Permissions
-Permission to data in a Log Analytics workspace is defined by the [access control mode](design-logs-deployment.md#access-control-mode), which is a setting on each workspace. Users can either be given explicit access to the workspace using a [built-in or custom role](../roles-permissions-security.md), or you can allow access to data collected for Azure resources to users with access to those resources.
+Permission to data in a Log Analytics workspace is defined by the [access control mode](manage-access.md#access-control-mode), which is a setting on each workspace. Users can either be given explicit access to the workspace using a [built-in or custom role](../roles-permissions-security.md), or you can allow access to data collected for Azure resources to users with access to those resources.
See [Manage access to log data and workspaces in Azure Monitor](manage-access.md) for details on the different permission options and on configuring permissions. ## Next steps - [Create a new Log Analytics workspace](quick-create-workspace.md)-- See [Designing your Azure Monitor Logs deployment](design-logs-deployment.md) for considerations on creating multiple workspaces.
+- See [Design a Log Analytics workspace configuration](workspace-design.md) for considerations on creating multiple workspaces.
- [Learn about log queries to retrieve and analyze data from a Log Analytics workspace.](./log-query-overview.md)
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Log Analytics Dedicated Clusters use a commitment tier pricing model of at least
Provide the following properties when creating new dedicated cluster: -- **ClusterName**--must be unique per resource group-- **ResourceGroupName**--use central IT resource group since clusters are usually shared by many teams in the organization. For more design considerations, review [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
+- **ClusterName**: Must be unique for the resource group.
+- **ResourceGroupName**: You should use a central IT resource group because clusters are usually shared by many teams in the organization. For more design considerations, review [Design a Log Analytics workspace configuration](../logs/workspace-design.md).
- **Location**-- **SkuCapacity**--the Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicate clusters](./cost-logs.md#dedicated-clusters).
+- **SkuCapacity**: The Commitment Tier (formerly called capacity reservations) can be set to 500, 1000, 2000 or 5000 GB/day. For more information on cluster costs, see [Dedicate clusters](./cost-logs.md#dedicated-clusters).
The user account that creates the clusters must have the standard Azure resource creation permission `Microsoft.Resources/deployments/*` and the cluster write permission `Microsoft.OperationalInsights/clusters/write`, granted through a role assignment that includes this specific action, `Microsoft.OperationalInsights/*`, or `*/write`. After you create your cluster resource, you can edit additional properties such as *sku*, *keyVaultProperties*, or *billingType*. See more details below.
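As a rough illustration of the properties above, the following is a minimal sketch that creates a cluster by calling the Azure Resource Manager REST API with `Invoke-AzRestMethod` from the Az.Accounts module. The subscription ID, resource group, cluster name, location, and `api-version` value shown are placeholders and assumptions, not values taken from this article.

```powershell
# Hypothetical sketch: create a dedicated cluster with a 500 GB/day commitment tier.
# Requires the Az.Accounts module and an authenticated session (Connect-AzAccount).
$subscriptionId = "00000000-0000-0000-0000-000000000000"   # placeholder
$resourceGroup  = "central-it-monitoring-rg"               # placeholder
$clusterName    = "contoso-la-cluster"                     # placeholder

$body = @{
    location = "eastus2"                                            # placeholder
    identity = @{ type = "SystemAssigned" }
    sku      = @{ name = "CapacityReservation"; capacity = 500 }    # SkuCapacity in GB/day
} | ConvertTo-Json -Depth 5

# PUT the cluster resource; the api-version shown here is an assumption.
Invoke-AzRestMethod `
    -Path "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.OperationalInsights/clusters/${clusterName}?api-version=2021-06-01" `
    -Method PUT `
    -Payload $body
```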
-You can have up to five active clusters per subscription per region. If the cluster is deleted, it is still reserved for 14 days. You can have up to four reserved clusters per subscription per region (active or recently deleted).
+You can have up to five active clusters per subscription per region. If a cluster is deleted, it is still reserved for 14 days. You can have up to seven reserved clusters per subscription per region (active or recently deleted).
> [!NOTE] > Cluster creation triggers resource allocation and provisioning. This operation can take a few hours to complete.
Authorization: Bearer <token>
- A maximum of five active clusters can be created in each region and subscription. -- A maximum number of four reserved clusters (active or recently deleted) can be created in each region and subscription.
+- A maximum of seven reserved clusters (active or recently deleted) can exist in each region and subscription.
- A maximum of 1,000 Log Analytics workspaces can be linked to a cluster.
Authorization: Bearer <token>
## Next steps - Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters)-- Learn about [proper design of Log Analytics workspaces](../logs/design-logs-deployment.md)
+- Learn about [proper design of Log Analytics workspaces](../logs/workspace-design.md)
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
Title: Manage Log Analytics workspaces in Azure Monitor | Microsoft Docs
+ Title: Manage access to Log Analytics workspaces
description: You can manage access to data stored in a Log Analytics workspace in Azure Monitor using resource, workspace, or table-level permissions. This article details how to configure these permissions.
-# Manage access to log data and workspaces in Azure Monitor
+# Manage access to Log Analytics workspaces
+ The data in a Log Analytics workspace that a user can access is determined by a combination of factors, including settings on the workspace itself, the user's access to resources sending data to the workspace, and the method that the user uses to access the workspace. This article describes how access is managed and how to perform any required configuration.
-Azure Monitor stores [log](../logs/data-platform-logs.md) data in a Log Analytics workspace. A workspace is a container that includes data and configuration information. To manage access to log data, you perform various administrative tasks related to your workspace.
+## Overview
+The factors that define the data a user can access are briefly described in the following table. Each is further described in the sections below.
-This article explains how to manage access to logs and to administer the workspaces that contain them, including how to grant access to:
+| Factor | Description |
+|:|:|
+| [Access mode](#access-mode) | Method the user uses to access the workspace. Defines the scope of the data available and the access control mode that's applied. |
+| [Access control mode](#access-control-mode) | Setting on the workspace that defines whether permissions are applied at the workspace or resource level. |
+| [Azure RBAC](#azure-rbac) | Permissions applied to individual users or groups of users for the workspace or for a resource sending data to the workspace. Defines what data the user will have access to. |
+| [Table level Azure RBAC](#table-level-azure-rbac) | Optional permissions that define the specific data types in the workspace that a user can access. These permissions apply to all users regardless of their access mode or access control mode. |
-* The workspace using workspace permissions.
-* Users who need access to log data from specific resources using Azure role-based access control (Azure RBAC) - also known as [resource-context](../logs/design-logs-deployment.md#access-mode)
-* Users who need access to log data in a specific table in the workspace using Azure RBAC.
-To understand the Logs concepts around Azure RBAC and access strategies, read [designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md)
+## Access mode
+The *access mode* refers to how a user accesses a Log Analytics workspace and defines the data they can access during the current session. The mode is determined according to the [scope](scope.md) you select in Log Analytics.
-## Configure access control mode
+There are two access modes:
-You can view the [access control mode](../logs/design-logs-deployment.md) configured on a workspace from the Azure portal or with Azure PowerShell. You can change this setting using one of the following supported methods:
+- **Workspace-context**: You can view all logs in the workspace that you have permission to. Queries in this mode are scoped to all data in all tables in the workspace. This is the access mode used when logs are accessed with the workspace as the scope, such as when you select **Logs** from the **Azure Monitor** menu in the Azure portal.
-* Azure portal
+- **Resource-context**: When you access the workspace for a particular resource, resource group, or subscription, such as when you select **Logs** from a resource menu in the Azure portal, you can view logs only for the resources you have access to in all tables. Queries in this mode are scoped to only data associated with that resource. This mode also enables granular Azure RBAC. Workspaces use a resource-context log model, where every log record emitted by an Azure resource is automatically associated with that resource.
-* Azure PowerShell
+
+Records are only available in resource-context queries if they are associated with the relevant resource. You can check this association by running a query and verifying that the [_ResourceId](./log-standard-columns.md#_resourceid) column is populated.
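As a minimal sketch of that check, assuming you run it from PowerShell with the Az.OperationalInsights module, the following query inspects recent records for a populated `_ResourceId` value. The workspace GUID is a placeholder, and `AzureActivity` is only an example table.

```powershell
# Minimal sketch: confirm that recent records carry a populated _ResourceId column.
# Requires the Az.OperationalInsights module and an authenticated session (Connect-AzAccount).
$workspaceId = "00000000-0000-0000-0000-000000000000"   # workspace ID (GUID) - placeholder

$query = @"
AzureActivity
| where TimeGenerated > ago(1h)
| project TimeGenerated, OperationNameValue, _ResourceId
| take 10
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table TimeGenerated, _ResourceId
```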
-* Azure Resource Manager template
+There are known limitations with the following resources:
-### From the Azure portal
+- Computers outside of Azure. Resource-context is only supported with [Azure Arc for Servers](../../azure-arc/servers/index.yml).
+- Application Insights. Supported for resource-context only when using [Workspace-based Application Insights resource](../app/create-workspace-resource.md)
+- Service Fabric
-You can view the current workspace access control mode on the **Overview** page for the workspace in the **Log Analytics workspace** menu.
-![View workspace access control mode](media/manage-access/view-access-control-mode.png)
+### Comparing access modes
+
+The following table summarizes the access modes:
+
+| Issue | Workspace-context | Resource-context |
+|:|:|:|
+| Who is each model intended for? | Central administration.<br>Administrators who need to configure data collection and users who need access to a wide variety of resources. Also currently required for users who need to access logs for resources outside of Azure. | Application teams.<br>Administrators of Azure resources being monitored. Allows them to focus on their resource without filtering. |
+| What does a user require to view logs? | Permissions to the workspace.<br>See **Workspace permissions** in [Manage access using workspace permissions](./manage-access.md#azure-rbac). | Read access to the resource.<br>See **Resource permissions** in [Manage access using Azure permissions](./manage-access.md#azure-rbac). Permissions can be inherited from the resource group or subscription or directly assigned to the resource. Permission to the logs for the resource will be automatically assigned. The user doesn't require access to the workspace.|
+| What is the scope of permissions? | Workspace.<br>Users with access to the workspace can query all logs in the workspace from tables that they have permissions to. See [Table access control](./manage-access.md#table-level-azure-rbac) | Azure resource.<br>User can query logs for specific resources, resource groups, or subscription they have access to in any workspace but can't query logs for other resources. |
+| How can user access logs? | Start **Logs** from **Azure Monitor** menu.<br><br>Start **Logs** from **Log Analytics workspaces**.<br><br>From Azure Monitor [Workbooks](../best-practices-analysis.md#workbooks). | Start **Logs** from the menu for the Azure resource. User will have access to data for that resource.<br><br>Start **Logs** from **Azure Monitor** menu. User will have access to data for all resources they have access to.<br><br>Start **Logs** from **Log Analytics workspaces**. User will have access to data for all resources they have access to.<br><br>From Azure Monitor [Workbooks](../best-practices-analysis.md#workbooks). |
+
+## Access control mode
+
+The *Access control mode* is a setting on each workspace that defines how permissions are determined for the workspace.
+
+* **Require workspace permissions**. This control mode does not allow granular Azure RBAC. For a user to access the workspace, they must be [granted permissions to the workspace](#azure-rbac) or to [specific tables](#table-level-azure-rbac).
+
+ If a user accesses the workspace in [workspace-context mode](#access-mode), they have access to all data in any table they've been granted access to. If a user accesses the workspace in [resource-context mode](#access-mode), they have access to only data for that resource in any table they've been granted access to.
+
+ This is the default setting for all workspaces created before March 2019.
+
+* **Use resource or workspace permissions**. This control mode allows granular Azure RBAC. Users can be granted access to only data associated with resources they can view by assigning Azure `read` permission.
+
+ When a user accesses the workspace in [workspace-context mode](#access-mode), workspace permissions apply. When a user accesses the workspace in [resource-context mode](#access-mode), only resource permissions are verified, and workspace permissions are ignored. Enable Azure RBAC for a user by removing them from workspace permissions and allowing their resource permissions to be recognized.
+
+ This is the default setting for all workspaces created after March 2019.
+
+ > [!NOTE]
+ > If a user has only resource permissions to the workspace, they are only able to access the workspace using resource-context mode, assuming the workspace access control mode is set to **Use resource or workspace permissions**.
+
+### Configure access control mode for a workspace
+
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-1. In the Azure portal, select Log Analytics workspaces > your workspace.
+# [Azure portal](#tab/portal)
+
+View the current workspace access control mode on the **Overview** page for the workspace in the **Log Analytics workspace** menu.
+
+![View workspace access control mode](media/manage-access/view-access-control-mode.png)
-You can change this setting from the **Properties** page of the workspace. Changing the setting will be disabled if you don't have permissions to configure the workspace.
+Change this setting from the **Properties** page of the workspace. Changing the setting will be disabled if you don't have permissions to configure the workspace.
![Change workspace access mode](media/manage-access/change-access-control-mode.png)
-### Using PowerShell
+# [PowerShell](#tab/powershell)
-Use the following command to examine the access control mode for all workspaces in the subscription:
+Use the following command to view the access control mode for all workspaces in the subscription:
```powershell Get-AzResource -ResourceType Microsoft.OperationalInsights/workspaces -ExpandProperties | foreach {$_.Name + ": " + $_.Properties.features.enableLogAccessUsingOnlyResourcePermissions}
DefaultWorkspace38917: True
DefaultWorkspace21532: False ```
-A value of `False` means the workspace is configured with the workspace-context access mode. A value of `True` means the workspace is configured with the resource-context access mode.
+A value of `False` means the workspace is configured with *workspace-context* access mode. A value of `True` means the workspace is configured with *resource-context* access mode.
> [!NOTE] > If a workspace is returned without a boolean value and is blank, this also matches the results of a `False` value. >
-Use the following script to set the access control mode for a specific workspace to the resource-context permission:
+Use the following script to set the access control mode for a specific workspace to *resource-context* permission:
```powershell $WSName = "my-workspace"
else
Set-AzResource -ResourceId $Workspace.ResourceId -Properties $Workspace.Properties -Force ```
-Use the following script to set the access control mode for all workspaces in the subscription to the resource-context permission:
+Use the following script to set the access control mode for all workspaces in the subscription to *resource-context* permission:
```powershell Get-AzResource -ResourceType Microsoft.OperationalInsights/workspaces -ExpandProperties | foreach {
Set-AzResource -ResourceId $_.ResourceId -Properties $_.Properties -Force
} ```
-### Using a Resource Manager template
+# [Resource Manager](#tab/arm)
To configure the access mode in an Azure Resource Manager template, set the **enableLogAccessUsingOnlyResourcePermissions** feature flag on the workspace to one of the following values.
-* **false**: Set the workspace to workspace-context permissions. This is the default setting if the flag isn't set.
-* **true**: Set the workspace to resource-context permissions.
+* **false**: Set the workspace to *workspace-context* permissions. This is the default setting if the flag isn't set.
+* **true**: Set the workspace to *resource-context* permissions.
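The following is a minimal sketch, not a complete production template: it sets the flag to **true** by deploying an inline template object with `New-AzResourceGroupDeployment`. The workspace name, location, resource group, and `apiVersion` are assumptions.

```powershell
# Hypothetical sketch: set enableLogAccessUsingOnlyResourcePermissions to true on a workspace
# by deploying an inline Resource Manager template. Names and apiVersion are placeholders.
$template = @{
    '$schema'      = 'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#'
    contentVersion = '1.0.0.0'
    resources      = @(
        @{
            type       = 'Microsoft.OperationalInsights/workspaces'
            apiVersion = '2021-06-01'          # assumption
            name       = 'contoso-workspace'   # placeholder
            location   = 'eastus2'             # placeholder
            properties = @{
                features = @{ enableLogAccessUsingOnlyResourcePermissions = $true }
            }
        }
    )
}

New-AzResourceGroupDeployment -ResourceGroupName 'contoso-monitoring-rg' -TemplateObject $template
```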
-## Manage access using workspace permissions
-
-Each workspace can have multiple accounts associated with it, and each account can have access to multiple workspaces. Access is managed using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
+
-The following activities also require Azure permissions:
+## Azure RBAC
+Access to a workspace is managed using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md). To grant access to the Log Analytics workspace using Azure permissions, follow the steps in [assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+### Workspace permissions
+Each workspace can have multiple accounts associated with it, and each account can have access to multiple workspaces. The following table lists the Azure permissions for different workspace actions:
|Action |Azure Permissions Needed |Notes | |-|-||
-| Adding and removing monitoring solutions | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/*` <br> `Microsoft.OperationsManagement/*` <br> `Microsoft.Automation/*` <br> `Microsoft.Resources/deployments/*/write` | These permissions need to be granted at resource group or subscription level. |
-| Changing the pricing tier | `Microsoft.OperationalInsights/workspaces/*/write` | |
-| Viewing data in the *Backup* and *Site Recovery* solution tiles | Administrator / Co-administrator | Accesses resources deployed using the classic deployment model |
-| Creating a workspace in the Azure portal | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/workspaces/*` ||
-| View workspace basic properties and enter the workspace blade in the portal | `Microsoft.OperationalInsights/workspaces/read` ||
-| Query logs using any interface | `Microsoft.OperationalInsights/workspaces/query/read` ||
-| Access all log types using queries | `Microsoft.OperationalInsights/workspaces/query/*/read` ||
-| Access a specific log table | `Microsoft.OperationalInsights/workspaces/query/<table_name>/read` ||
-| Read the workspace keys to allow sending logs to this workspace | `Microsoft.OperationalInsights/workspaces/sharedKeys/action` ||
+| Change the pricing tier | `Microsoft.OperationalInsights/workspaces/*/write` |
+| Create a workspace in the Azure portal | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/workspaces/*` |
+| View workspace basic properties and enter the workspace blade in the portal | `Microsoft.OperationalInsights/workspaces/read` |
+| Query logs using any interface | `Microsoft.OperationalInsights/workspaces/query/read` |
+| Access all log types using queries | `Microsoft.OperationalInsights/workspaces/query/*/read` |
+| Access a specific log table | `Microsoft.OperationalInsights/workspaces/query/<table_name>/read` |
+| Read the workspace keys to allow sending logs to this workspace | `Microsoft.OperationalInsights/workspaces/sharedKeys/action` |
+| Add and remove monitoring solutions | `Microsoft.Resources/deployments/*` <br> `Microsoft.OperationalInsights/*` <br> `Microsoft.OperationsManagement/*` <br> `Microsoft.Automation/*` <br> `Microsoft.Resources/deployments/*/write`<br><br>These permissions need to be granted at resource group or subscription level. |
+| View data in the *Backup* and *Site Recovery* solution tiles | Administrator / Co-administrator<br><br>Accesses resources deployed using the classic deployment model |
+
+### Built-in roles
+Assign users to these roles to give them access at different scopes:
-## Manage access using Azure permissions
+* Subscription - Access to all workspaces in the subscription
+* Resource Group - Access to all workspaces in the resource group
+* Resource - Access to only the specified workspace
-To grant access to the Log Analytics workspace using Azure permissions, follow the steps in [assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md). For example custom roles, see [Example custom roles](#custom-role-examples)
+Create assignments at the resource level (workspace) to ensure accurate access control. Use [custom roles](../../role-based-access-control/custom-roles.md) to create roles with the specific permissions needed.
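For example, the following sketch assigns the *Log Analytics Reader* built-in role at workspace (resource) scope with `New-AzRoleAssignment`. The user, subscription, resource group, and workspace names are placeholders.

```powershell
# Minimal sketch: assign the Log Analytics Reader built-in role scoped to a single workspace.
# Requires the Az.Resources module and permission to create role assignments (see the note that follows).
$workspaceScope = "/subscriptions/00000000-0000-0000-0000-000000000000" +
                  "/resourceGroups/contoso-monitoring-rg" +
                  "/providers/Microsoft.OperationalInsights/workspaces/contoso-workspace"

New-AzRoleAssignment `
    -SignInName "user@contoso.com" `
    -RoleDefinitionName "Log Analytics Reader" `
    -Scope $workspaceScope
```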
-Azure has two built-in user roles for Log Analytics workspaces:
+> [!NOTE]
+> To add or remove users from a user role, you must have `Microsoft.Authorization/*/Delete` and `Microsoft.Authorization/*/Write` permissions.
-* Log Analytics Reader
-* Log Analytics Contributor
+
+#### Log Analytics Reader
+Members of the *Log Analytics Reader* role can view all monitoring data and monitoring settings, including the configuration of Azure diagnostics on all Azure resources.
Members of the *Log Analytics Reader* role can:
-* View and search all monitoring data
-* View monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources.
+- View and search all monitoring data
+- View monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources.
-The Log Analytics Reader role includes the following Azure actions:
+*Log Analytics Reader* includes the following Azure actions:
| Type | Permission | Description | | - | - | -- |
-| Action | `*/read` | Ability to view all Azure resources and resource configuration. Includes viewing: <br> Virtual machine extension status <br> Configuration of Azure diagnostics on resources <br> All properties and settings of all resources. <br> For workspaces, it allows full unrestricted permissions to read the workspace settings and perform query on the data. See more granular options above. |
-| Action | `Microsoft.OperationalInsights/workspaces/analytics/query/action` | Deprecated, no need to assign them to users. |
-| Action | `Microsoft.OperationalInsights/workspaces/search/action` | Deprecated, no need to assign them to users. |
+| Action | `*/read` | Ability to view all Azure resources and resource configuration.<br>Includes viewing:<br>- Virtual machine extension status<br>- Configuration of Azure diagnostics on resources<br>- All properties and settings of all resources.<br><br>For workspaces, allows full unrestricted permissions to read the workspace settings and query data. See more granular options above. |
| Action | `Microsoft.Support/*` | Ability to open support cases | |Not Action | `Microsoft.OperationalInsights/workspaces/sharedKeys/read` | Prevents reading of workspace key required to use the data collection API and to install agents. This prevents the user from adding new resources to the workspace |
+| Action | `Microsoft.OperationalInsights/workspaces/analytics/query/action` | Deprecated. |
+| Action | `Microsoft.OperationalInsights/workspaces/search/action` | Deprecated. |
+#### Log Analytics Contributor
Members of the *Log Analytics Contributor* role can:
-* Includes all the privileges of the *Log Analytics Reader role*, allowing the user to read all monitoring data
-* Create and configure Automation accounts
-* Add and remove management solutions
-
- > [!NOTE]
- > In order to successfully perform the last two actions, this permission needs to be granted at the resource group or subscription level.
+- Read all monitoring data granted by the *Log Analytics Reader role*.
+- Edit monitoring settings for Azure resources, including
+ - Adding the VM extension to VMs
+ - Configuring Azure diagnostics on all Azure resources
+- Create and configure Automation accounts. Permission needs to be granted at the resource group or subscription level.
+- Add and remove management solutions. Permission needs to be granted at the resource group or subscription level.
+- Read storage account keys
+- Configure the collection of logs from Azure Storage
-* Read storage account keys
-* Configure the collection of logs from Azure Storage
-* Edit monitoring settings for Azure resources, including
- * Adding the VM extension to VMs
- * Configuring Azure diagnostics on all Azure resources
-> [!NOTE]
-> You can use the ability to add a virtual machine extension to a virtual machine to gain full control over a virtual machine.
+> [!WARNING]
+> The permission to add a virtual machine extension to a virtual machine can be used to gain full control over that virtual machine.
The Log Analytics Contributor role includes the following Azure actions: | Permission | Description | | - | -- |
-| `*/read` | Ability to view all resources and resource configuration. Includes viewing: <br> Virtual machine extension status <br> Configuration of Azure diagnostics on resources <br> All properties and settings of all resources. <br> For workspaces, it allows full unrestricted permissions to read the workspace setting and perform query on the data. See more granular options above. |
+| `*/read` | Ability to view all Azure resources and resource configuration.<br><br>Includes viewing:<br>- Virtual machine extension status<br>- Configuration of Azure diagnostics on resources<br>- All properties and settings of all resources.<br><br>For workspaces, allows full unrestricted permissions to read the workspace settings and query data. See more granular options above. |
| `Microsoft.Automation/automationAccounts/*` | Ability to create and configure Azure Automation accounts, including adding and editing runbooks | | `Microsoft.ClassicCompute/virtualMachines/extensions/*` <br> `Microsoft.Compute/virtualMachines/extensions/*` | Add, update and remove virtual machine extensions, including the Microsoft Monitoring Agent extension and the OMS Agent for Linux extension | | `Microsoft.ClassicStorage/storageAccounts/listKeys/action` <br> `Microsoft.Storage/storageAccounts/listKeys/action` | View the storage account key. Required to configure Log Analytics to read logs from Azure storage accounts |
The Log Analytics Contributor role includes the following Azure actions:
| `Microsoft.Resources/deployments/*` | Create and delete deployments. Required for adding and removing solutions, workspaces, and automation accounts | | `Microsoft.Resources/subscriptions/resourcegroups/deployments/*` | Create and delete deployments. Required for adding and removing solutions, workspaces, and automation accounts |
-To add and remove users to a user role, it is necessary to have `Microsoft.Authorization/*/Delete` and `Microsoft.Authorization/*/Write` permission.
-
-Use these roles to give users access at different scopes:
-
-* Subscription - Access to all workspaces in the subscription
-* Resource Group - Access to all workspace in the resource group
-* Resource - Access to only the specified workspace
-We recommend performing assignments at the resource level (workspace) to assure accurate access control. Use [custom roles](../../role-based-access-control/custom-roles.md) to create roles with the specific permissions needed.
### Resource permissions
-When users query logs from a workspace using resource-context access, they'll have the following permissions on the resource:
+When users query logs from a workspace using [resource-context access](#access-mode), they'll have the following permissions on the resource:
| Permission | Description | | - | -- |
When users query logs from a workspace using resource-context access, they'll ha
`/read` permission is usually granted from a role that includes _\*/read_ or _\*_ permissions such as the built-in [Reader](../../role-based-access-control/built-in-roles.md#reader) and [Contributor](../../role-based-access-control/built-in-roles.md#contributor) roles. Custom roles that include specific actions or dedicated built-in roles might not include this permission.
-See [Defining per-table access control](#table-level-azure-rbac) below if you want to create different access control for different tables.
-
-## Custom role examples
-
-1. To grant a user access to log data from their resources, perform the following:
-
- * Configure the workspace access control mode to **use workspace or resource permissions**
- * Grant users `*/read` or `Microsoft.Insights/logs/*/read` permissions to their resources. If they are already assigned the [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#reader) role on the workspace, it is sufficient.
+### Custom role examples
+In addition to using the built-in roles for a Log Analytics workspace, you can create custom roles to assign more granular permissions. The following are some common examples.
-2. To grant a user access to log data from their resources and configure their resources to send logs to the workspace, perform the following:
+**Grant a user access to log data from their resources.**
- * Configure the workspace access control mode to **use workspace or resource permissions**
+- Configure the workspace access control mode to **use workspace or resource permissions**
+- Grant users `*/read` or `Microsoft.Insights/logs/*/read` permissions to their resources. If they are already assigned the [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#reader) role on the workspace, it is sufficient.
- * Grant users the following permissions on the workspace: `Microsoft.OperationalInsights/workspaces/read` and `Microsoft.OperationalInsights/workspaces/sharedKeys/action`. With these permissions, users cannot perform any workspace-level queries. They can only enumerate the workspace and use it as a destination for diagnostic settings or agent configuration.
+**Grant a user access to log data from their resources and configure their resources to send logs to the workspace.**
- * Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read` and `Microsoft.Insights/diagnosticSettings/write`. If they are already assigned the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, assigned the Reader role, or granted `*/read` permissions on this resource, it is sufficient.
+- Configure the workspace access control mode to **use workspace or resource permissions**
+- Grant users the following permissions on the workspace: `Microsoft.OperationalInsights/workspaces/read` and `Microsoft.OperationalInsights/workspaces/sharedKeys/action`. With these permissions, users cannot perform any workspace-level queries. They can only enumerate the workspace and use it as a destination for diagnostic settings or agent configuration.
+- Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read` and `Microsoft.Insights/diagnosticSettings/write`. If they are already assigned the [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#contributor) role, assigned the Reader role, or granted `*/read` permissions on this resource, it is sufficient.
-3. To grant a user access to log data from their resources without being able to read security events and send data, perform the following:
+**Grant a user access to log data from their resources without being able to read security events and send data.**
- * Configure the workspace access control mode to **use workspace or resource permissions**
+- Configure the workspace access control mode to **use workspace or resource permissions**
+- Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read`.
+- Add the following NonAction to block users from reading the SecurityEvent type: `Microsoft.Insights/logs/SecurityEvent/read`. The NonAction must be in the same custom role as the action that provides the read permission (`Microsoft.Insights/logs/*/read`). If the user inherits the read action from another role that is assigned to this resource or to the subscription or resource group, they would be able to read all log types. This is also true if they inherit `*/read`, which exists, for example, in the Reader or Contributor role. A PowerShell sketch of creating this role follows.
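The sketch below shows one way to build this role by cloning the built-in *Reader* role definition with the Az.Resources module. The role name, description, and subscription ID are placeholders, not values defined by this article.

```powershell
# Hypothetical sketch: custom role that reads resource logs but blocks the SecurityEvent table.
# Requires the Az.Resources module; names and the subscription ID are placeholders.
$role = Get-AzRoleDefinition -Name "Reader"   # clone a built-in role as a starting point
$role.Id = $null
$role.Name = "Resource Log Reader (no SecurityEvent)"
$role.Description = "Read logs for assigned resources, except the SecurityEvent table."
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Insights/logs/*/read")
$role.NotActions.Clear()
$role.NotActions.Add("Microsoft.Insights/logs/SecurityEvent/read")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/00000000-0000-0000-0000-000000000000")
New-AzRoleDefinition -Role $role
```

After the role is created, assign it to users or groups on their resources, for example with `New-AzRoleAssignment` as shown earlier.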
- * Grant users the following permissions to their resources: `Microsoft.Insights/logs/*/read`.
+**Grant a user access to log data from their resources and to read all Azure AD sign-in and Update Management solution log data from the workspace.** A sketch of this role definition follows the list below.
- * Add the following NonAction to block users from reading the SecurityEvent type: `Microsoft.Insights/logs/SecurityEvent/read`. The NonAction shall be in the same custom role as the action that provides the read permission (`Microsoft.Insights/logs/*/read`). If the user inherent the read action from another role that is assigned to this resource or to the subscription or resource group, they would be able to read all log types. This is also true if they inherit `*/read`, that exist for example, with the Reader or Contributor role.
-
-4. To grant a user access to log data from their resources and read all Azure AD sign-in and read Update Management solution log data from the workspace, perform the following:
-
- * Configure the workspace access control mode to **use workspace or resource permissions**
-
- * Grant users the following permissions on the workspace:
-
- * `Microsoft.OperationalInsights/workspaces/read` ΓÇô required so the user can enumerate the workspace and open the workspace blade in the Azure portal
- * `Microsoft.OperationalInsights/workspaces/query/read` ΓÇô required for every user that can execute queries
- * `Microsoft.OperationalInsights/workspaces/query/SigninLogs/read` ΓÇô to be able to read Azure AD sign-in logs
- * `Microsoft.OperationalInsights/workspaces/query/Update/read` ΓÇô to be able to read Update Management solution logs
- * `Microsoft.OperationalInsights/workspaces/query/UpdateRunProgress/read` ΓÇô to be able to read Update Management solution logs
- * `Microsoft.OperationalInsights/workspaces/query/UpdateSummary/read` ΓÇô to be able to read Update management logs
- * `Microsoft.OperationalInsights/workspaces/query/Heartbeat/read` ΓÇô required to be able to use Update Management solution
- * `Microsoft.OperationalInsights/workspaces/query/ComputerGroup/read` ΓÇô required to be able to use Update Management solution
-
- * Grant users the following permissions to their resources: `*/read`, assigned to the Reader role, or `Microsoft.Insights/logs/*/read`.
+- Configure the workspace access control mode to **use workspace or resource permissions**
+- Grant users the following permissions on the workspace:
+ - `Microsoft.OperationalInsights/workspaces/read` – required so the user can enumerate the workspace and open the workspace blade in the Azure portal
+ - `Microsoft.OperationalInsights/workspaces/query/read` – required for every user that can execute queries
+ - `Microsoft.OperationalInsights/workspaces/query/SigninLogs/read` – to be able to read Azure AD sign-in logs
+ - `Microsoft.OperationalInsights/workspaces/query/Update/read` – to be able to read Update Management solution logs
+ - `Microsoft.OperationalInsights/workspaces/query/UpdateRunProgress/read` – to be able to read Update Management solution logs
+ - `Microsoft.OperationalInsights/workspaces/query/UpdateSummary/read` – to be able to read Update Management logs
+ - `Microsoft.OperationalInsights/workspaces/query/Heartbeat/read` – required to be able to use the Update Management solution
+ - `Microsoft.OperationalInsights/workspaces/query/ComputerGroup/read` – required to be able to use the Update Management solution
+- Grant users the following permissions to their resources: `*/read`, assigned to the Reader role, or `Microsoft.Insights/logs/*/read`.
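As a sketch of this example, the role can be written as a role definition file and created with `New-AzRoleDefinition -InputFile`. The role name, file path, and subscription ID are placeholders.

```powershell
# Hypothetical sketch: workspace-scoped custom role for Azure AD sign-in and Update Management tables.
# Requires the Az.Resources module; the role name, file path, and subscription ID are placeholders.
@'
{
  "Name": "Workspace Sign-in and Update Logs Reader",
  "Id": null,
  "IsCustom": true,
  "Description": "Enumerate the workspace and read sign-in and Update Management tables.",
  "Actions": [
    "Microsoft.OperationalInsights/workspaces/read",
    "Microsoft.OperationalInsights/workspaces/query/read",
    "Microsoft.OperationalInsights/workspaces/query/SigninLogs/read",
    "Microsoft.OperationalInsights/workspaces/query/Update/read",
    "Microsoft.OperationalInsights/workspaces/query/UpdateRunProgress/read",
    "Microsoft.OperationalInsights/workspaces/query/UpdateSummary/read",
    "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read",
    "Microsoft.OperationalInsights/workspaces/query/ComputerGroup/read"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/00000000-0000-0000-0000-000000000000" ]
}
'@ | Set-Content -Path ".\workspace-logs-reader.json"

New-AzRoleDefinition -InputFile ".\workspace-logs-reader.json"
```

Assign this role to users at the workspace scope, and grant read access on their resources separately, as described in the preceding bullets.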
## Table level Azure RBAC
+Table level Azure RBAC allows you more granular control over data in a Log Analytics workspace by defining specific data types that are accessible only to a specific set of users.
+
+Implement table access control with [Azure custom roles](../../role-based-access-control/custom-roles.md) to grant access to specific [tables](../logs/data-platform-logs.md) in the workspace. These roles are applied to workspaces with either workspace-context or resource-context [access control modes](#access-control-mode) regardless of the user's [access mode](#access-mode).
-**Table level Azure RBAC** allows you to define more granular control to data in a Log Analytics workspace in addition to the other permissions. This control allows you to define specific data types that are accessible only to a specific set of users.
+Create a [custom role](../../role-based-access-control/custom-roles.md) with the following actions to define access to a particular table.
-You implement table access control with [Azure custom roles](../../role-based-access-control/custom-roles.md) to either grant access to specific [tables](../logs/data-platform-logs.md) in the workspace. These roles are applied to workspaces with either workspace-context or resource-context [access control modes](../logs/design-logs-deployment.md#access-control-mode) regardless of the user's [access mode](../logs/design-logs-deployment.md#access-mode).
+* To grant access to a table, include it in the **Actions** section of the role definition. To subtract access from the allowed **Actions**, include it in the **NotActions** section.
+* Use `Microsoft.OperationalInsights/workspaces/query/*` to specify all tables.
-Create a [custom role](../../role-based-access-control/custom-roles.md) with the following actions to define access to table access control.
-* To grant access to a table, include it in the **Actions** section of the role definition. To subtract access from the allowed **Actions**, include it in the **NotActions** section.
-* Use Microsoft.OperationalInsights/workspaces/query/* to specify all tables.
+### Examples
+The following are examples of custom role actions that grant and deny access to specific tables.
-For example, to create a role with access to the _Heartbeat_ and _AzureActivity_ tables, create a custom role using the following actions:
+**Grant access to the _Heartbeat_ and _AzureActivity_ tables.**
``` "Actions": [
For example, to create a role with access to the _Heartbeat_ and _AzureActivity_
], ```
-To create a role with access to only the _SecurityBaseline_ table, create a custom role using the following actions:
+**Grant access to only the _SecurityBaseline_ table.**
``` "Actions": [
To create a role with access to only the _SecurityBaseline_ table, create a cust
"Microsoft.OperationalInsights/workspaces/query/SecurityBaseline/read" ], ```
-The examples above define a list of tables that are allowed. This example shows blocked list definition when a user can access all tables but the _SecurityAlert_ table:
++
+**Grant access to all tables except the _SecurityAlert_ table.**
``` "Actions": [
The examples above define a list of tables that are allowed. This example shows
### Custom logs
- Custom logs are created from data sources such as custom logs and HTTP Data Collector API. The easiest way to identify the type of log is by checking the tables listed under [Custom Logs in the log schema](./log-analytics-tutorial.md#view-table-information).
+ Custom logs are tables created from data sources such as [text logs](../agents/data-sources-custom-logs.md) and the [HTTP Data Collector API](data-collector-api.md). The easiest way to identify the type of log is by checking the tables listed under [Custom Logs in the log schema](./log-analytics-tutorial.md#view-table-information).
- You can't grant access to individual custom logs, but you can grant access to all custom logs. To create a role with access to all custom logs, create a custom role using the following actions:
+> [!NOTE]
+> Tables created by the [custom logs API](../logs/custom-logs-overview.md) do not yet support table level Azure RBAC.
+
+ You can't grant access to individual custom log tables, but you can grant access to all custom logs. To create a role with access to all custom log tables, create a custom role using the following actions:
``` "Actions": [
The examples above define a list of tables that are allowed. This example shows
"Microsoft.OperationalInsights/workspaces/query/Tables.Custom/read" ], ```
-An alternative approach to manage access to custom logs is to assign them to an Azure resource and manage access using the resource-context paradigm. To use this method, you must include the resource ID by specifying it in the [x-ms-AzureResourceId](../logs/data-collector-api.md#request-headers) header when data is ingested to Log Analytics via the [HTTP Data Collector API](../logs/data-collector-api.md). The resource ID must be valid and have access rules applied to it. After the logs are ingested, they are accessible to those with read access to the resource, as explained here.
-Sometimes custom logs come from sources that are not directly associated to a specific resource. In this case, create a resource group just to manage access to these logs. The resource group does not incur any cost, but gives you a valid resource ID to control access to the custom logs. For example, if a specific firewall is sending custom logs, create a resource group called "MyFireWallLogs" and make sure that the API requests contain the resource ID of "MyFireWallLogs". The firewall log records are then accessible only to users that were granted access to either MyFireWallLogs or those with full workspace access.
+An alternative approach to manage access to custom logs is to assign them to an Azure resource and manage access using resource-context access control. Include the resource ID by specifying it in the [x-ms-AzureResourceId](../logs/data-collector-api.md#request-headers) header when data is ingested to Log Analytics via the [HTTP Data Collector API](../logs/data-collector-api.md). The resource ID must be valid and have access rules applied to it. After the logs are ingested, they are accessible to users with read access to the resource.
+
+Some custom logs come from sources that are not directly associated with a specific resource. In this case, create a resource group to manage access to these logs. The resource group does not incur any cost, but gives you a valid resource ID to control access to the custom logs. For example, if a specific firewall is sending custom logs, create a resource group called *MyFireWallLogs* and make sure that the API requests contain the resource ID of *MyFireWallLogs*. The firewall log records are then accessible only to users that were granted access to either *MyFireWallLogs* or those with full workspace access.
### Considerations
-* If a user is granted global read permission with the standard Reader or Contributor roles that include the _\*/read_ action, it will override the per-table access control and give them access to all log data.
-* If a user is granted per-table access but no other permissions, they would be able to access log data from the API but not from the Azure portal. To provide access from the Azure portal, use Log Analytics Reader as its base role.
-* Administrators and owners of the subscription will have access to all data types regardless of any other permission settings.
-* Workspace owners are treated like any other user for per-table access control.
-* We recommend assigning roles to security groups instead of individual users to reduce the number of assignments. This will also help you use existing group management tools to configure and verify access.
+- If a user is granted global read permission with the standard Reader or Contributor roles that include the _\*/read_ action, it will override the per-table access control and give them access to all log data.
+- If a user is granted per-table access but no other permissions, they would be able to access log data from the API but not from the Azure portal. To provide access from the Azure portal, use Log Analytics Reader as its base role.
+- Administrators and owners of the subscription will have access to all data types regardless of any other permission settings.
+- Workspace owners are treated like any other user for per-table access control.
+- Assign roles to security groups instead of individual users to reduce the number of assignments. This will also help you use existing group management tools to configure and verify access.
## Next steps
azure-monitor Oms Portal Transition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/oms-portal-transition.md
While most features will continue to work without performing any migration, you
Refer to [Common questions for transition from OMS portal to Azure portal for Log Analytics users](../overview.md) for information about how to transition to the Azure portal. ## User access and role migration
-Azure portal access management is richer and more powerful than the access management in the OMS Portal. See [Designing your Azure Monitor Logs workspace](../logs/design-logs-deployment.md) for details of access management in Log Analytics.
+Azure portal access management is richer and more powerful than the access management in the OMS Portal. See [Design a Log Analytics workspace configuration](../logs/workspace-design.md) for details of access management in Log Analytics.
> [!NOTE] > Previous versions of this article stated that the permissions would automatically be converted from the OMS portal to the Azure portal. This automatic conversion is no longer planned, and you must perform the conversion yourself.
azure-monitor Tutorial Custom Logs Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs-api.md
In this tutorial, you learn to:
## Prerequisites To complete this tutorial, you need the following: -- Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions) .
+- Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. ## Collect workspace details
azure-monitor Tutorial Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs.md
In this tutorial, you learn to:
## Prerequisites To complete this tutorial, you need the following: -- Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions) .
+- Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
azure-monitor Tutorial Ingestion Time Transformations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations-api.md
In this tutorial, you learn to:
## Prerequisites To complete this tutorial, you need the following: -- Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions).-
-
- To configure this table for ingestion-time transformations, the table must already have some data.
-
- The table can't be linked to the workspaceΓÇÖs default DCR.
--- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- The table must already have some data.
+- The table can't be linked to the [workspace's transformation DCR](../essentials/data-collection-rule-overview.md#types-of-data-collection-rules).
## Overview of tutorial
azure-monitor Tutorial Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations.md
In this tutorial, you learn to:
## Prerequisites To complete this tutorial, you need the following: -- A Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions).-- A [supported Azure table](../logs/tables-feature-support.md) in the workspace.
-
- To configure this table for ingestion-time transformations, the table must already have some data.
-
- The table can't be linked to the workspace's default DCR.
-
-- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- Log Analytics workspace where you have at least [contributor rights](manage-access.md#azure-rbac).
+- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- The table must already have some data.
+- The table can't be linked to the [workspace's transformation DCR](../essentials/data-collection-rule-overview.md#types-of-data-collection-rules).
## Overview of tutorial
azure-monitor Workspace Design Service Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design-service-providers.md
+
+ Title: Azure Monitor Logs for Service Providers | Microsoft Docs
+description: Azure Monitor Logs can help Managed Service Providers (MSPs), large enterprises, Independent Software Vendors (ISVs) and hosting service providers manage and monitor servers in customer's on-premises or cloud infrastructure.
+++ Last updated : 02/03/2020+++
+# Log Analytics workspace design for service providers
+
+Log Analytics workspaces in Azure Monitor can help managed service providers (MSPs), large enterprises, independent software vendors (ISVs), and hosting service providers manage and monitor servers in customer's on-premises or cloud infrastructure.
+
+Large enterprises share many similarities with service providers, particularly when there is a centralized IT team that is responsible for managing IT for many different business units. For simplicity, this document uses the term *service provider* but the same functionality is also available for enterprises and other customers.
+
+For partners and service providers who are part of the [Cloud Solution Provider (CSP)](https://partner.microsoft.com/membership/cloud-solution-provider) program, Log Analytics in Azure Monitor is one of the Azure services available in Azure CSP subscriptions.
+
+Log Analytics in Azure Monitor can also be used by a service provider managing customer resources through the Azure delegated resource management capability in [Azure Lighthouse](../../lighthouse/overview.md).
+
+## Architectures for Service Providers
+
+Log Analytics workspaces provide a method for the administrator to control the flow and isolation of [log](../logs/data-platform-logs.md) data and create an architecture that addresses its specific business needs. [This article](../logs/workspace-design.md) explains the design, deployment, and migration considerations for a workspace, and the [manage access](../logs/manage-access.md) article discusses how to apply and manage permissions to log data. Service providers have additional considerations.
+
+There are three possible architectures for service providers regarding Log Analytics workspaces:
+
+### 1. Distributed - Logs are stored in workspaces located in the customer's tenant
+
+In this architecture, a workspace is deployed in the customer's tenant that is used for all the logs of that customer.
+
+There are two ways that service provider administrators can gain access to a Log Analytics workspace in a customer tenant:
+
+- A customer can add individual users from the service provider as [Azure Active Directory guest users (B2B)](../../active-directory/external-identities/what-is-b2b.md). The service provider administrators will have to sign in to each customer's directory in the Azure portal to be able to access these workspaces. This also requires the customers to manage individual access for each service provider administrator.
+- For greater scalability and flexibility, service providers can use [Azure Lighthouse](../../lighthouse/overview.md) to access the customer's tenant. With this method, the service provider administrators are included in an Azure AD user group in the service provider's tenant, and this group is granted access during the onboarding process for each customer. These administrators can then access each customer's workspaces from within their own service provider tenant, rather than having to log into each customer's tenant individually. Accessing your customers' Log Analytics workspaces resources in this way reduces the work required on the customer side, and can make it easier to gather and analyze data across multiple customers managed by the same service provider via tools such as [Azure Monitor Workbooks](../visualize/workbooks-overview.md). For more info, see [Monitor customer resources at scale](../../lighthouse/how-to/monitor-at-scale.md).
+
+The advantages of the distributed architecture are:
+
+* The customer can confirm specific levels of permissions via [Azure delegated resource management](../../lighthouse/concepts/architecture.md), or can manage access to the logs using their own [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+* Logs can be collected from all types of resources, not just agent-based VM data. For example, Azure Audit Logs.
+* Each customer can have different settings for their workspace such as retention and data capping.
+* Isolation between customers for regulatory and compliance purposes.
+* The charge for each workspace will be rolled into the customer's subscription.
+
+The disadvantages of the distributed architecture are:
+
+* Centrally visualizing and analyzing data [across customer tenants](cross-workspace-query.md) with tools such as Azure Monitor Workbooks can result in slower experiences, especially when analyzing data across more than 50 workspaces.
+* If customers are not onboarded for Azure delegated resource management, service provider administrators must be provisioned in the customer directory, and it is harder for the service provider to manage a large number of customer tenants at once.
+
+### 2. Central - Logs are stored in a workspace located in the service provider tenant
+
+In this architecture, the logs are not stored in the customer's tenants but only in a central location within one of the service provider's subscriptions. The agents that are installed on the customer's VMs are configured to send their logs to this workspace using the workspace ID and secret key.
+
+The advantages of the centralized architecture are:
+
+* It is easy to manage a large number of customers and integrate them with various backend systems.
+* The service provider has full ownership over the logs and the various artifacts such as functions and saved queries.
+* The service provider can perform analytics across all of its customers.
+
+The disadvantages of the centralized architecture are:
+
+* This architecture is applicable only to agent-based VM data; it will not cover PaaS, SaaS, or Azure fabric data sources.
+* It might be hard to separate the data between customers when it is merged into a single workspace. The only reliable way to do so is by using the computer's fully qualified domain name (FQDN) or the Azure subscription ID.
+* All data from all customers will be stored in the same region with a single bill and same retention and configuration settings.
+* Azure fabric and PaaS services such as Azure Diagnostics and Azure Audit Logs require the workspace to be in the same tenant as the resource, so they cannot send their logs to the central workspace.
+* All VM agents from all customers will be authenticated to the central workspace using the same workspace ID and key. There is no method to block logs from a specific customer without interrupting other customers.
+
+### 3. Hybrid - Logs are stored in workspaces located in the customer's tenant and some of them are pulled to a central location
+
+The third architecture is a mix of the two options. It is based on the distributed architecture, where the logs are local to each customer, but uses some mechanism to create a central repository of logs. A portion of the logs is pulled into a central location for reporting and analytics. This portion could be a small number of data types or a summary of the activity, such as daily statistics.
+
+There are two options to implement logs in a central location:
+
+1. Central workspace: The service provider can create a workspace in its tenant and use a script that utilizes the [Query API](https://dev.loganalytics.io/) with the [Data Collection API](../logs/data-collector-api.md) to bring the data from the various workspaces to this central location. Another option, other than a script, is to use [Azure Logic Apps](../../logic-apps/logic-apps-overview.md).
+
+2. Power BI as a central location: Power BI can act as the central location when the various workspaces export data to it using the integration between the Log Analytics workspace and [Power BI](./log-powerbi.md).
+
+## Next steps
+
+* Automate creation and configuration of workspaces using [Resource Manager templates](../logs/resource-manager-workspace.md)
+
+* Automate creation of workspaces using [PowerShell](../logs/powershell-workspace-configuration.md)
+
+* Use [Alerts](../alerts/alerts-overview.md) to integrate with existing systems
+
+* Generate summary reports using [Power BI](./log-powerbi.md)
+
+* Onboard customers to [Azure delegated resource management](../../lighthouse/concepts/architecture.md).
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
+
+ Title: Design a Log Analytics workspace architecture
+description: Describes the considerations and recommendations for customers preparing to deploy a workspace in Azure Monitor.
+ Last updated : 05/25/2022+++
+# Design a Log Analytics workspace architecture
+While a single [Log Analytics workspace](log-analytics-workspace-overview.md) may be sufficient for many environments using Azure Monitor and Microsoft Sentinel, many organizations will create multiple workspaces to optimize costs and better meet different business requirements. This article presents a set of criteria for determining whether to use a single workspace or multiple workspaces, and the configuration and placement of those workspaces to meet your requirements while optimizing your costs.
+
+> [!NOTE]
+> This article includes both Azure Monitor and Microsoft Sentinel since many customers need to consider both in their design, and most of the decision criteria applies to both. If you only use one of these services, then you can simply ignore the other in your evaluation.
+
+## Design strategy
+Your design should always start with a single workspace, since this reduces the complexity of managing multiple workspaces and of querying data from them. There are no performance limitations from the amount of data in your workspace, and multiple services and data sources can send data to the same workspace. As you identify criteria to create additional workspaces, your design should use the fewest number that will match your particular requirements.
+
+Designing a workspace configuration includes evaluation of multiple criteria, some of which may be in conflict. For example, you may be able to reduce egress charges by creating a separate workspace in each Azure region, but consolidating into a single workspace might allow you to reduce charges even more with a commitment tier. Evaluate each of the criteria below independently, and consider your requirements and priorities in determining which design will be most effective for your environment.
++
+## Design criteria
+The following table briefly presents the criteria that you should consider in designing your workspace architecture. The sections below describe each of these criteria in full detail.
+
+| Criteria | Description |
+|:|:|
+| [Segregate operational and security data](#segregate-operational-and-security-data) | Many customers will create separate workspaces for their operational and security data for data ownership and the additional cost from Microsoft Sentinel. In some cases though, you may be able to save cost by consolidating into a single workspace to qualify for a commitment tier. |
+| [Azure tenants](#azure-tenants) | If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant. |
+| [Azure regions](#azure-regions) | Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations. |
+| [Data ownership](#data-ownership) | You may choose to create separate workspaces to define data ownership, for example by subsidiaries or affiliated companies. |
+| [Split billing](#split-billing) | By placing workspaces in separate subscriptions, they can be billed to different parties. |
+| [Data retention and archive](#data-retention-and-archive) | You can set different retention settings for each table in a workspace, but you need a separate workspace if you require different retention settings for different resources that send data to the same tables. |
+| [Commitment tiers](#commitment-tiers) | Commitment tiers allow you to reduce your ingestion cost by committing to a minimum amount of daily data in a single workspace. |
+| [Legacy agent limitations](#legacy-agent-limitations) | Legacy virtual machine agents have limitations on the number of workspaces they can connect to. |
+| [Data access control](#data-access-control) | Configure access to the workspace and to different tables and data from different resources. |
+
+### Segregate operational and security data
+Most customers who use both Azure Monitor and Microsoft Sentinel will create a dedicated workspace for each to segregate ownership of data between your operational and security teams and also to optimize costs. If Microsoft Sentinel is enabled in a workspace, then all data in that workspace is subject to Sentinel pricing, even if it's operational data collected by Azure Monitor. While a workspace with Sentinel gets 3 months of free data retention instead of 31 days, this will typically result in higher cost for operational data in a workspace without Sentinel. See [Azure Monitor Logs pricing details](cost-logs.md#workspaces-with-microsoft-sentinel).
+
+The exception is if combining data in the same workspace helps you reach a [commitment tier](#commitment-tiers), which provides a discount to your ingestion charges. For example, consider an organization that has operational data and security data each ingesting about 50 GB per day. Combining the data in the same workspace would allow a commitment tier at 100 GB per day that would provide a 15% discount for Azure Monitor and 50% discount for Sentinel.
+
+If you create separate workspaces for other criteria then you'll usually create additional workspace pairs. For example, if you have two Azure tenants, you may create four workspaces - an operational and security workspace in each tenant.
++
+- **If you use both Azure Monitor and Microsoft Sentinel**, create a separate workspace for each. Consider combining the two if it helps you reach a commitment tier.
++
+### Azure tenants
+Most resources can only send monitoring data to a workspace in the same Azure tenant. Virtual machines using the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or the [Log Analytics agents](../agents/log-analytics-agent.md) can send data to workspaces in separate Azure tenants, which may be a scenario that you consider as a [service provider](#multiple-tenant-strategies).
+
+- **If you have a single Azure tenant**, then create a single workspace for that tenant.
+- **If you have multiple Azure tenants**, then create a workspace for each tenant. See [Multiple tenant strategies](#multiple-tenant-strategies) for other options including strategies for service providers.
+
+### Azure regions
+Log Analytics workspaces each reside in a [particular Azure region](https://azure.microsoft.com/global-infrastructure/geographies/), and you may have regulatory or compliance purposes for keeping data in a particular region. For example, an international company might locate a workspace in each major geographical region, such as United States and Europe.
+
+- **If you have requirements for keeping data in a particular geography**, create a separate workspace for each region with such requirements.
+- **If you do not have requirements for keeping data in a particular geography**, use a single workspace for all regions.
+
+You should also consider potential [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) that may apply when sending data to a workspace from a resource in another region, although these charges are usually minor relative to data ingestion costs for most customers. These charges will typically result from sending data to the workspace from a virtual machine. Monitoring data from other Azure resources using [diagnostic settings](../essentials/diagnostic-settings.md) does not [incur egress charges](../usage-estimated-costs.md#data-transfer-charges).
+
+Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator) to estimate the cost and determine which regions you actually need. Consider workspaces in multiple regions if bandwidth charges are significant.
++
+- **If bandwidth charges are significant enough to justify the additional complexity**, create a separate workspace for each region with virtual machines.
+- **If bandwidth charges are not significant enough to justify the additional complexity**, use a single workspace for all regions.
++
+### Data ownership
+You may have a requirement to segregate data or define boundaries based on ownership. For example, you may have different subsidiaries or affiliated companies that require delineation of their monitoring data.
+
+- **If you require data segregation**, use a separate workspace for each data owner.
+- **If you do not require data segregation**, use a single workspace for all data owners.
+
+### Split billing
+You may need to split billing between different parties or perform charge back to a customer or internal business unit. [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) allows you to view charges by workspace. You can also use a log query to view [billable data volume by Azure resource, resource group, or subscription](analyze-usage.md#data-volume-by-azure-resource-resource-group-or-subscription), which may be sufficient for your billing requirements.
+
+- **If you do not need to split billing or perform charge back**, use a single workspace for all cost owners.
+- **If you need to split billing or perform charge back**, consider whether [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) or a log query provides granular enough cost reporting for your requirements. If not, use a separate workspace for each cost owner.
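If a log query is enough for charge back, a minimal sketch of that kind of query is shown below, run with the `azure-monitor-query` Python SDK. The workspace ID is a placeholder, and the query relies on the `_IsBillable`, `_BilledSize`, and `_ResourceId` system columns; treat it as an illustrative starting point rather than the exact query from the linked article.

```python
# Hypothetical sketch: estimate billable volume per Azure resource so charges
# can be attributed for charge back. Assumes the azure-identity and
# azure-monitor-query packages and the _IsBillable/_BilledSize/_ResourceId
# system columns.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-guid>"  # placeholder

QUERY = """
find where TimeGenerated > ago(1d) project _ResourceId, _IsBillable, _BilledSize
| where _IsBillable == true
| summarize BillableBytes = sum(_BilledSize) by _ResourceId
| sort by BillableBytes desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

# Print one dict per resource so the output can be fed into a charge-back report.
for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```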
+
+### Data retention and archive
+You can configure default [data retention and archive settings](data-retention-archive.md) for a workspace or [configure different settings for each table](data-retention-archive.md#set-retention-and-archive-policy-by-table). You may require different settings for different sets of data in a particular table. If this is the case, then you would need to separate that data into different workspaces, each with unique retention settings.
+
+- **If you can use the same retention and archive settings for all data in each table**, use a single workspace for all resources.
+- **If you require different retention and archive settings for different resources in the same table**, use a separate workspace for different resources.
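Where per-table settings are enough, retention and archive can be adjusted table by table. The following is a hedged sketch that patches a single table through the workspace Tables ARM API; the api-version and the `retentionInDays`/`totalRetentionInDays` property names are assumptions to verify against the current REST reference, and all resource names are placeholders.

```python
# Hypothetical sketch: set interactive retention and total (archive) retention
# for one table in a Log Analytics workspace via the ARM Tables API.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION = "<subscription-id>"   # placeholders
RESOURCE_GROUP = "<resource-group>"
WORKSPACE = "<workspace-name>"
TABLE = "AppTraces"                  # example table name

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.OperationalInsights/workspaces/{WORKSPACE}"
    f"/tables/{TABLE}?api-version=2021-12-01-preview"  # assumed api-version
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
body = {"properties": {"retentionInDays": 30, "totalRetentionInDays": 365}}

# PATCH only the retention properties; other table settings are left unchanged.
resp = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())
```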
+++
+### Commitment tiers
+[Commitment tiers](../logs/cost-logs.md#commitment-tiers) provide a discount to your workspace ingestion costs when you commit to a particular amount of daily data. You may choose to consolidate data in a single workspace in order to reach the level of a particular tier. This same volume of data spread across multiple workspaces would not be eligible for the same tier, unless you have a dedicated cluster.
+
+If you can commit to daily ingestion of at least 500 GB/day, then you should implement a [dedicated cluster](../logs/cost-logs.md#dedicated-clusters) which provides additional functionality and performance. Dedicated clusters also allow you to combine the data from multiple workspaces in the cluster to reach the level of a commitment tier.
+
+- **If you will ingest at least 500 GB/day across all resources**, create a dedicated cluster and set the appropriate commitment tier.
+- **If you will ingest at least 100 GB/day across resources**, consider combining them into a single workspace to take advantage of a commitment tier.
+++
+### Legacy agent limitations
+While you should avoid sending duplicate data to multiple workspaces because of the additional charges, you may have virtual machines connected to multiple workspaces. The most common scenario is an agent connected to separate workspaces for Azure Monitor and Microsoft Sentinel.
+
+ The [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and [Log Analytics agent for Windows](../agents/log-analytics-agent.md) can connect to multiple workspaces. The [Log Analytics agent for Linux](../agents/log-analytics-agent.md), however, can only connect to a single workspace.
+
+- **If you use the Log Analytics agent for Linux**, migrate to the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or ensure that your Linux machines only require access to a single workspace.
++
+### Data access control
+When you grant a user [access to a workspace](manage-access.md#azure-rbac), they have access to all data in that workspace. This is appropriate for a member of a central administration or security team who must access data for all resources. Access to the workspace is also determined by resource-context RBAC and table-level RBAC.
+
+[Resource-context RBAC](manage-access.md#access-mode)
+By default, if a user has read access to an Azure resource, they inherit permissions to any of that resource's monitoring data sent to the workspace. This allows users to access information about resources they manage without being granted explicit access to the workspace. If you need to block this access, you can change the [access control mode](manage-access.md#access-control-mode) to require explicit workspace permissions.
+
+- **If you want users to be able to access data for their resources**, keep the default access control mode of *Use resource or workspace permissions*.
+- **If you want to explicitly assign permissions for all users**, change the access control mode to *Require workspace permissions*.
++
+[Table-level RBAC](manage-access.md#table-level-azure-rbac)
+With table-level RBAC, you can grant or deny access to specific tables in the workspace. This allows you to implement granular permissions required for specific situations in your environment.
+
+For example, you might grant access to only specific tables collected by Sentinel to an internal auditing team. Or you might deny access to security related tables to resource owners who need operational data related to their resources.
+
+- **If you don't require granular access control by table**, grant the operations and security team access to their resources and allow resource owners to use resource-context RBAC for their resources.
+- **If you require granular access control by table**, grant or deny access to specific tables using table-level RBAC.
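As an illustration of table-level RBAC, the sketch below defines a custom role that can query only the `Heartbeat` table in one workspace. The action strings and the `azure-mgmt-authorization` model names are assumptions based on the manage access article and recent SDK versions; verify them before assigning the role.

```python
# Hypothetical sketch: a custom role granting query access to a single table.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import Permission, RoleDefinition

SUBSCRIPTION = "<subscription-id>"  # placeholders
WORKSPACE_SCOPE = (
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION)

role = RoleDefinition(
    role_name="Read Heartbeat table only",
    description="Grants query access to the Heartbeat table in one workspace.",
    permissions=[
        Permission(
            actions=[
                "Microsoft.OperationalInsights/workspaces/read",
                "Microsoft.OperationalInsights/workspaces/query/read",
                # Per-table action string assumed from the manage-access article.
                "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read",
            ]
        )
    ],
    assignable_scopes=[WORKSPACE_SCOPE],
)

# Create the role definition at the workspace scope with a new GUID as its ID.
client.role_definitions.create_or_update(WORKSPACE_SCOPE, str(uuid.uuid4()), role)
```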
++
+## Working with multiple workspaces
+Since many designs will include multiple workspaces, Azure Monitor and Microsoft Sentinel include features to assist you in analyzing this data across workspaces. For details, see the following:
+
+- [Create a log query across multiple workspaces and apps in Azure Monitor](cross-workspace-query.md)
+- [Extend Microsoft Sentinel across workspaces and tenants](../../sentinel/extend-sentinel-across-workspaces-tenants.md).
+## Multiple tenant strategies
+Environments with multiple Azure tenants, including service providers (MSPs), independent software vendors (ISVs), and large enterprises, often require a strategy where a central administration team has access to administer workspaces located in other tenants. Each of the tenants may represent separate customers or different business units.
+
+> [!NOTE]
+> For partners and service providers who are part of the [Cloud Solution Provider (CSP) program](https://partner.microsoft.com/membership/cloud-solution-provider), Log Analytics in Azure Monitor is one of the Azure services available in Azure CSP subscriptions.
+
+There are two basic strategies for this functionality as described below.
+
+### Distributed architecture
+In a distributed architecture, a Log Analytics workspace is created in each Azure tenant. This is the only option you can use if you're monitoring Azure services other than virtual machines.
+
+There are two options to allow service provider administrators to access the workspaces in the customer tenants.
++
+- Use [Azure Lighthouse](../../lighthouse/overview.md) to access each customer tenant. The service provider administrators are included in an Azure AD user group in the service provider's tenant, and this group is granted access during the onboarding process for each customer. The administrators can then access each customer's workspaces from within their own service provider tenant, rather than having to log into each customer's tenant individually. For more information, see [Monitor customer resources at scale](../../lighthouse/how-to/monitor-at-scale.md).
+
+- Add individual users from the service provider as [Azure Active Directory guest users (B2B)](../../active-directory/external-identities/what-is-b2b.md). The customer tenant administrators manage individual access for each service provider administrator, and the service provider administrators must log in to the directory for each tenant in the Azure portal to be able to access these workspaces.
++
+Advantages to this strategy are:
+
+- Logs can be collected from all types of resources.
+- The customer can confirm specific levels of permissions with [Azure delegated resource management](../../lighthouse/concepts/architecture.md), or can manage access to the logs using their own [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+- Each customer can have different settings for their workspace such as retention and data cap.
+- Isolation between customers for regulatory and compliance purposes.
+- The charge for each workspace is included in the bill for the customer's subscription.
+
+Disadvantages to this strategy are:
+
+- Centrally visualizing and analyzing data across customer tenants with tools such as Azure Monitor Workbooks can result in slower experiences, especially when analyzing data across more than 50 workspaces.
+- If customers are not onboarded for Azure delegated resource management, service provider administrators must be provisioned in the customer directory. This makes it more difficult for the service provider to manage a large number of customer tenants at once.
+### Centralized
+A single workspace is created in the service provider's subscription. This option can only collect data from customer virtual machines. Agents installed on the virtual machines are configured to send their logs to this central workspace.
+
+Advantages to this strategy are:
+
+- Easy to manage a large number of customers.
+- Service provider has full ownership over the logs and the various artifacts such as functions and saved queries.
+- Service provider can perform analytics across all of its customers.
+
+Disadvantages to this strategy are:
+
+- Logs can only be collected from virtual machines with an agent. It will not work with PaaS, SaaS and Azure fabric data sources.
+- It may be difficult to separate data between customers, since their data shares a single workspace. Queries need to use the computer's fully qualified domain name (FQDN) or the Azure subscription ID.
+- All data from all customers will be stored in the same region with a single bill and same retention and configuration settings.
++
+### Hybrid
+In a hybrid model, each tenant has its own workspace, and some mechanism is used to pull data into a central location for reporting and analytics. This data could include a small number of data types or a summary of the activity such as daily statistics.
+
+There are two options to implement logs in a central location:
+
+- Central workspace. The service provider creates a workspace in its tenant and uses a script that utilizes the [Query API](api/overview.md) with the [custom logs API](custom-logs-overview.md) to bring the data from the tenant workspaces to this central location (a minimal sketch follows this list). Another option is to use [Azure Logic Apps](../../logic-apps/logic-apps-overview.md) to copy data to the central workspace.
+
+- Power BI. The tenant workspaces export data to Power BI using the integration between the [Log Analytics workspace and Power BI](log-powerbi.md).
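A minimal sketch of the central workspace option above, assuming the `azure-monitor-query` and `azure-monitor-ingestion` SDKs and a data collection endpoint and DCR that you have already created (all names below are placeholders): it queries a daily summary from a tenant workspace and uploads it to the central workspace through the custom logs API.

```python
# Hypothetical sketch: pull a daily summary from a tenant workspace and push it
# to a central workspace via the Logs Ingestion (custom logs) API.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient
from azure.monitor.query import LogsQueryClient

TENANT_WORKSPACE_ID = "<customer-workspace-guid>"                  # placeholders
DCE_ENDPOINT = "https://<data-collection-endpoint>.ingest.monitor.azure.com"
DCR_IMMUTABLE_ID = "dcr-<immutable-id>"
STREAM_NAME = "Custom-DailySummary_CL"

SUMMARY_QUERY = """
Heartbeat
| where TimeGenerated > ago(1d)
| summarize Heartbeats = count() by Computer
"""

credential = DefaultAzureCredential()
query_client = LogsQueryClient(credential)
ingest_client = LogsIngestionClient(endpoint=DCE_ENDPOINT, credential=credential)

# Query the tenant workspace for the summary rows.
result = query_client.query_workspace(
    TENANT_WORKSPACE_ID, SUMMARY_QUERY, timespan=timedelta(days=1)
)

# Convert each row to a dict and upload it through the DCR stream.
rows = [dict(zip(table.columns, row)) for table in result.tables for row in table.rows]
ingest_client.upload(rule_id=DCR_IMMUTABLE_ID, stream_name=STREAM_NAME, logs=rows)
```

In practice you would run one query per tenant workspace on a schedule (for example, from an Azure Function) and keep the summary small so the central workspace ingests only what reporting needs.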
++
+## Next steps
+
+- [Learn more about designing and configuring data access in a workspace.](manage-access.md)
+- [Get sample workspace architectures for Microsoft Sentinel.](../../sentinel/sample-workspace-designs.md)
azure-monitor View Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/view-designer.md
The views that you create with View Designer contain the elements that are descr
| Visualization parts | Present a visualization of data in the Log Analytics workspace based on one or more [log queries](../logs/log-query-overview.md). Most parts include a header, which provides a high-level visualization, and a list, which displays the top results. Each part type provides a different visualization of the records in the Log Analytics workspace. You select elements in the part to perform a log query that provides detailed records. | ## Required permissions
-You require at least [contributor level permissions](../logs/manage-access.md#manage-access-using-azure-permissions) in the Log Analytics workspace to create or modify views. If you don't have this permission, then the View Designer option won't be displayed in the menu.
+You require at least [contributor level permissions](../logs/manage-access.md#azure-rbac) in the Log Analytics workspace to create or modify views. If you don't have this permission, then the View Designer option won't be displayed in the menu.
## Work with an existing view
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
You require at least one Log Analytics workspace to support VM insights and to c
Many environments use a single workspace for all their virtual machines and other Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data. If you're getting started with Azure Monitor, start with a single workspace and consider creating more workspaces as your requirements evolve.
-For complete details on logic that you should consider for designing a workspace configuration, see [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
+For complete details on logic that you should consider for designing a workspace configuration, see [Design a Log Analytics workspace configuration](../logs/workspace-design.md).
### Multihoming agents Multihoming refers to a virtual machine that connects to multiple workspaces. Typically, there's little reason to multihome agents for Azure Monitor alone. Having an agent send data to multiple workspaces most likely creates duplicate data in each workspace, which increases your overall cost. You can combine data from multiple workspaces by using [cross-workspace queries](../logs/cross-workspace-query.md) and [workbooks](../visualizations/../visualize/workbooks-overview.md).
azure-monitor Vminsights Configure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-configure-workspace.md
Access Log Analytics workspaces in the Azure portal from the **Log Analytics wor
[![Log Anlytics workspaces](media/vminsights-configure-workspace/log-analytics-workspaces.png)](media/vminsights-configure-workspace/log-analytics-workspaces.png#lightbox)
-You can create a new Log Analytics workspace using any of the following methods. See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for guidance on determining the number of workspaces you should use in your environment and how to design their access strategy.
+You can create a new Log Analytics workspace using any of the following methods. See [Design a Log Analytics workspace configuration](../logs/workspace-design.md) for guidance on determining the number of workspaces you should use in your environment and how to design their access strategy.
* [Azure portal](../logs/quick-create-workspace.md)
VM insights supports a Log Analytics workspace in any of the [regions supported
>You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace. ## Azure role-based access control
-To enable and access the features in VM insights, you must have the [Log Analytics contributor role](../logs/manage-access.md#manage-access-using-azure-permissions) in the workspace. To view performance, health, and map data, you must have the [monitoring reader role](../roles-permissions-security.md#built-in-monitoring-roles) for the Azure VM. For more information about how to control access to a Log Analytics workspace, see [Manage workspaces](../logs/manage-access.md).
+To enable and access the features in VM insights, you must have the [Log Analytics contributor role](../logs/manage-access.md#azure-rbac) in the workspace. To view performance, health, and map data, you must have the [monitoring reader role](../roles-permissions-security.md#built-in-monitoring-roles) for the Azure VM. For more information about how to control access to a Log Analytics workspace, see [Manage workspaces](../logs/manage-access.md).
## Add VMInsights solution to workspace Before a Log Analytics workspace can be used with VM insights, it must have the *VMInsights* solution installed. The methods for configuring the workspace are described in the following sections.
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md
Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 02/03/2022 Last updated : 05/26/2022 # What is Azure Resource Manager?
To learn about Azure Resource Manager templates (ARM templates), see the [ARM te
## Consistent management layer
-When a user sends a request from any of the Azure tools, APIs, or SDKs, Resource Manager receives the request. It authenticates and authorizes the request. Resource Manager sends the request to the Azure service, which takes the requested action. Because all requests are handled through the same API, you see consistent results and capabilities in all the different tools.
+When you send a request through any of the Azure APIs, tools, or SDKs, Resource Manager receives the request. It authenticates and authorizes the request before forwarding it to the appropriate Azure service. Because all requests are handled through the same API, you see consistent results and capabilities in all the different tools.
The following image shows the role Azure Resource Manager plays in handling Azure requests.
If you're new to Azure Resource Manager, there are some terms you might not be f
* **ARM template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, management group, or tenant. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../templates/overview.md). * **Bicep file** - A file for declaratively deploying Azure resources. Bicep is a language that's been designed to provide the best authoring experience for infrastructure as code solutions in Azure. See [Bicep overview](../bicep/overview.md).
+For more definitions of Azure terminology, see [Azure fundamental concepts](/azure/cloud-adoption-framework/ready/considerations/fundamental-concepts).
+ ## The benefits of using Resource Manager With Resource Manager, you can:
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/best-practices.md
Title: Best practices for templates description: Describes recommended approaches for authoring Azure Resource Manager templates (ARM templates). Offers suggestions to avoid common problems when using templates. Previously updated : 04/23/2021 Last updated : 05/26/2022 # ARM template best practices
This article shows you how to use recommended practices when constructing your A
## Template limits
-Limit the size of your template to 4 MB. The 4-MB limit applies to the final state of the template after it has been expanded with iterative resource definitions, and values for variables and parameters. The parameter file is also limited to 4 MB. You may get an error with a template or parameter file of less than 4 MB, if the total size of the request is too large. For more information about how to simplify your template to avoid a large request, see [Resolve errors for job size exceeded](error-job-size-exceeded.md).
+Limit the size of your template to 4 MB. The 4-MB limit applies to the final state of the template after it has been expanded with iterative resource definitions, and values for variables and parameters. The parameter file is also limited to 4 MB. You may get an error with a template or parameter file of less than 4 MB if the total size of the request is too large. For more information about how to simplify your template to avoid a large request, see [Resolve errors for job size exceeded](error-job-size-exceeded.md).
You're also limited to: * 256 parameters * 256 variables
-* 800 resources (including copy count)
+* 800 resources (including [copy count](copy-resources.md))
* 64 output values * 24,576 characters in a template expression
When deciding what [dependencies](./resource-dependency.md) to set, use the foll
* Set a child resource as dependent on its parent resource.
-* Resources with the [condition element](conditional-resource-deployment.md) set to false are automatically removed from the dependency order. Set the dependencies as if the resource is always deployed.
+* Resources with the [condition element](conditional-resource-deployment.md) set to `false` are automatically removed from the dependency order. Set the dependencies as if the resource is always deployed.
* Let dependencies cascade without setting them explicitly. For example, your virtual machine depends on a virtual network interface, and the virtual network interface depends on a virtual network and public IP addresses. Therefore, the virtual machine is deployed after all three resources, but don't explicitly set the virtual machine as dependent on all three resources. This approach clarifies the dependency order and makes it easier to change the template later.
The following information can be helpful when you work with [resources](./syntax
} ```
-* Assign public IP addresses to a virtual machine only when an application requires it. To connect to a virtual machine (VM) for debugging, or for management or administrative purposes, use inbound NAT rules, a virtual network gateway, or a jumpbox.
+* Assign public IP addresses to a virtual machine only when an application requires it. To connect to a virtual machine for administrative purposes, use inbound NAT rules, a virtual network gateway, or a jumpbox.
For more information about connecting to virtual machines, see:
- * [Run VMs for an N-tier architecture in Azure](/azure/architecture/reference-architectures/n-tier/n-tier-sql-server)
- * [Set up WinRM access for VMs in Azure Resource Manager](../../virtual-machines/windows/winrm.md)
- * [Allow external access to your VM by using the Azure portal](../../virtual-machines/windows/nsg-quickstart-portal.md)
- * [Allow external access to your VM by using PowerShell](../../virtual-machines/windows/nsg-quickstart-powershell.md)
- * [Allow external access to your Linux VM by using Azure CLI](../../virtual-machines/linux/nsg-quickstart.md)
+ * [What is Azure Bastion?](../../bastion/bastion-overview.md)
+ * [How to connect and sign on to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md)
+ * [Setting up WinRM access for Virtual Machines in Azure Resource Manager](../../virtual-machines/windows/winrm.md)
+ * [Connect to a Linux VM](../../virtual-machines/linux-vm-connect.md)
* The `domainNameLabel` property for public IP addresses must be unique. The `domainNameLabel` value must be between 3 and 63 characters long, and follow the rules specified by this regular expression: `^[a-z][a-z0-9-]{1,61}[a-z0-9]$`. Because the `uniqueString` function generates a string that is 13 characters long, the `dnsPrefixString` parameter is limited to 50 characters.
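As a quick illustration of these rules, the following snippet validates a candidate label against the pattern and length limits quoted above; the helper name is illustrative and not part of any SDK.

```python
# Validate a domainNameLabel against the documented pattern and length limits.
import re

_LABEL_PATTERN = re.compile(r"^[a-z][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_domain_name_label(label: str) -> bool:
    """Return True if the label is 3-63 characters and matches the pattern."""
    return 3 <= len(label) <= 63 and bool(_LABEL_PATTERN.match(label))

assert is_valid_domain_name_label("contoso-web-01")
assert not is_valid_domain_name_label("1starts-with-digit")  # must start with a letter
assert not is_valid_domain_name_label("ab")                  # too short
```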
azure-resource-manager Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-what-if.md
You can use the what-if operation through the Azure SDKs.
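For example, a hedged sketch with the `azure-mgmt-resource` Python SDK might look like the following; the `begin_what_if` method and `DeploymentWhatIf` models reflect recent SDK versions and should be verified against the version you install, and the template file name is a placeholder.

```python
# Hypothetical sketch: run what-if on a resource group deployment and print the
# predicted changes.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import (
    DeploymentWhatIf,
    DeploymentWhatIfProperties,
)

SUBSCRIPTION = "<subscription-id>"   # placeholders
RESOURCE_GROUP = "<resource-group>"

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION)

with open("azuredeploy.json") as f:   # placeholder template file
    template = json.load(f)

poller = client.deployments.begin_what_if(
    RESOURCE_GROUP,
    "whatif-example",
    DeploymentWhatIf(
        properties=DeploymentWhatIfProperties(mode="Incremental", template=template)
    ),
)

# Each change lists the resource and the predicted change type (Create, Modify, ...).
for change in poller.result().changes:
    print(change.change_type, change.resource_id)
```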
## Next steps
+- [ARM Deployment Insights](https://marketplace.visualstudio.com/items?itemName=AuthorityPartnersInc.arm-deployment-insights) extension provides an easy way to integrate the what-if operation in your Azure DevOps pipeline.
- To use the what-if operation in a pipeline, see [Test ARM templates with What-If in a pipeline](https://4bes.nl/2021/03/06/test-arm-templates-with-what-if/). - If you notice incorrect results from the what-if operation, please report the issues at [https://aka.ms/whatifissues](https://aka.ms/whatifissues). - For a Microsoft Learn module that covers using what if, see [Preview changes and validate Azure resources by using what-if and the ARM template test toolkit](/learn/modules/arm-template-test/).
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/overview.md
Title: Templates overview description: Describes the benefits using Azure Resource Manager templates (ARM templates) for deployment of resources. Previously updated : 12/01/2021 Last updated : 05/26/2022 # What are ARM templates?
azure-signalr Signalr Howto Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-diagnostic-logs.md
If you find that you can't establish SignalR client connections to Azure SignalR
When you encounter a message-related problem, you can take advantage of messaging logs to troubleshoot. First, [enable resource logs](#enable-resource-logs) in the service, and enable logging for the server and client. > [!NOTE]
-> For ASP.NET Core, see [here](https://docs.microsoft.com/aspnet/core/signalr/diagnostics) to enable logging in server and client.
+> For ASP.NET Core, see [here](/aspnet/core/signalr/diagnostics) to enable logging in server and client.
>
-> For ASP.NET, see [here](https://docs.microsoft.com/aspnet/signalr/overview/testing-and-debugging/enabling-signalr-tracing) to enable logging in server and client.
+> For ASP.NET, see [here](/aspnet/signalr/overview/testing-and-debugging/enabling-signalr-tracing) to enable logging in server and client.
If you don't mind potential performance impact and no client-to-server direction message, check the `Messaging` in `Log Source Settings/Types` to enable *collect-all* log collecting behavior. For more information about this behavior, see [collect all section](#collect-all).
For **collect all** collecting behavior:
SignalR service only traces messages in the direction **from server to client via SignalR service**. The tracing ID will be generated in the server, and the message will carry the tracing ID to SignalR service. > [!NOTE]
-> If you want to trace message and [send messages from outside a hub](https://docs.microsoft.com/aspnet/core/signalr/hubcontext) in your app server, you need to enable **collect all** collecting behavior to collect message logs for the messages which are not originated from diagnostic clients.
+> If you want to trace message and [send messages from outside a hub](/aspnet/core/signalr/hubcontext) in your app server, you need to enable **collect all** collecting behavior to collect message logs for the messages which are not originated from diagnostic clients.
> Diagnostic clients work for both **collect all** and **collect partially** collecting behaviors. They have higher priority for collecting logs. For more information, see [diagnostic client section](#diagnostic-client). By checking the logs on the server and service side, you can easily find out whether the message is sent from the server, arrives at SignalR service, and leaves SignalR service. Basically, by checking whether the *received* and *sent* messages match based on the message tracing ID, you can tell whether the message loss issue is in the server or in SignalR service in this direction. For more information, see the [details](#message-flow-detail-for-path3) below.
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
The resource will be deployed to your subscription and will create the Azure Vid
If you're new to Azure Video Indexer, see:
-* [Azure Video Indexer Documentation](/azure/azure-video-indexer)
+* [Azure Video Indexer Documentation](./index.yml)
* [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/) * After completing this tutorial, head to other Azure Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md)
If you're new to template deployment, see:
## Next steps
-[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
+[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
azure-video-indexer Monitor Video Indexer Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer-data-reference.md
Azure Video Indexer currently does not support any monitoring on metrics.
--**OPTION 2 EXAMPLE** -
-<!-- OPTION 2 - Link to the metrics as above, but work in extra information not found in the automated metric-supported reference article. NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the metrics-supported link. For highly customized example, see [CosmosDB](https://docs.microsoft.com/azure/cosmos-db/monitor-cosmos-db-reference#metrics). They even regroup the metrics into usage type vs. resource provider and type.
+<!-- OPTION 2 - Link to the metrics as above, but work in extra information not found in the automated metric-supported reference article. NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the metrics-supported link. For highly customized example, see [CosmosDB](../cosmos-db/monitor-cosmos-db-reference.md#metrics). They even regroup the metrics into usage type vs. resource provider and type.
--> <!-- Example format. Mimic the setup of metrics supported, but add extra information -->
This section refers to all of the Azure Monitor Logs Kusto tables relevant to Az
<!-**OPTION 2 EXAMPLE** -
-<!-- OPTION 2 - List out your tables adding additional information on what each table is for. Individually link to each table using the table name. For example, link to [AzureMetrics](https://docs.microsoft.com/azure/azure-monitor/reference/tables/azuremetrics).
+<!-- OPTION 2 - List out your tables adding additional information on what each table is for. Individually link to each table using the table name. For example, link to [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics).
NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the automatically generated list. You can group these sections however you want provided you include the proper links back to the proper tables. -->
The following table lists the operations related to Azure Video Indexer that may
<!-- NOTE: This information may be hard to find or not listed anywhere. Please ask your PM for at least an incomplete list of what type of messages could be written here. If you can't locate this, contact azmondocs@microsoft.com for help -->
-For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
## Schemas <!-- REQUIRED. Please keep heading in this order -->
The following schemas are in use by Azure Video Indexer
<!-- replace below with the proper link to your main monitoring service article --> - See [Monitoring Azure Azure Video Indexer](monitor-video-indexer.md) for a description of monitoring Azure Video Indexer.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Monitor Video Indexer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/monitor-video-indexer.md
Keep the headings in this order.
When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure Video Indexer. Azure Video Indexer uses [Azure Monitor](/azure/azure-monitor/overview). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+This article describes the monitoring data generated by Azure Video Indexer. Azure Video Indexer uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
<!-- Optional diagram showing monitoring for your service. -->
Some services in Azure have a special focused pre-built monitoring dashboard in
## Monitoring data <!-- REQUIRED. Please keep headings in this order -->
-Azure Video Indexer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-Azure-resources).
+Azure Video Indexer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
See [Monitoring *Azure Video Indexer* data reference](monitor-video-indexer-data-reference.md) for detailed information on the metrics and logs metrics created by Azure Video Indexer.
Currently Azure Video Indexer does not support monitoring of metrics.
<!-- REQUIRED. Please keep headings in this order If you don't support metrics, say so. Some services may be only onboarded to logs -->
-<!--You can analyze metrics for *Azure Video Indexer* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+<!--You can analyze metrics for *Azure Video Indexer* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
<!-- Point to the list of metrics available in your monitor-service-reference article. --> <!--For a list of the platform metrics collected for Azure Video Indexer, see [Monitoring *Azure Video Indexer* data reference metrics](monitor-service-reference.md#metrics)
If you don't support metrics, say so. Some services may be only onboarded to log
<!--Guest OS metrics must be collected by agents running on the virtual machines hosting your service. <!-- Add additional information as appropriate --> <!--For more information, see [Overview of Azure Monitor agents](/azure/azure-monitor/platform/agents-overview)
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported).
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about how to use metrics explorer specifically for your service. Remember that the UI is subject to change quite often so you will need to maintain these screenshots yourself if you add them in. -->
If you don't support resource logs, say so. Some services may be only onboarded
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema) The schema for Azure Video Indexer resource logs is found in the [Azure Video Indexer Data Reference](monitor-video-indexer-data-reference.md#schemas)
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md) The schema for Azure Video Indexer resource logs is found in the [Azure Video Indexer Data Reference](monitor-video-indexer-data-reference.md#schemas)
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform sign-in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform sign-in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
For a list of the types of resource logs collected for Azure Video Indexer, see [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md#resource-logs)
For a list of the tables used by Azure Monitor Logs and queryable by Log Analyti
<!-- Add sample Log Analytics Kusto queries for your service. --> > [!IMPORTANT]
-> When you select **Logs** from the Azure Video Indexer account menu, Log Analytics is opened with the query scope set to the current Azure Video Indexer account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure Video Indexer account or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
+> When you select **Logs** from the Azure Video Indexer account menu, Log Analytics is opened with the query scope set to the current Azure Video Indexer account. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Azure Video Indexer accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
<!-- REQUIRED: Include queries that are helpful for figuring out the health and state of your service. Ideally, use some of these queries in the alerts section. It's possible that some of your queries may be in the Log Analytics UI (sample or example queries). Check if so. -->
VIAudit
This information is the BIGGEST request we get in Azure Monitor so do not avoid it long term. People don't know what to monitor for best results. Be prescriptive -->
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
<!-- only include next line if applications run on your service and work with App Insights. -->
-<!-- If you are creating or running an application which run on <*service*> [Azure Monitor Application Insights](/azure/azure-monitor/overview#application-insights) may offer additional types of alerts.
+<!-- If you are creating or running an application which runs on <*service*>, [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts.
<!-- end --> The following table lists common and recommended alert rules for Azure Video Indexer.
VIAudit
<!-- Add additional links. You can change the wording of these and add more if useful. --> - See [Monitoring Azure Video Indexer data reference](monitor-video-indexer-data-reference.md) for a reference of the metrics, logs, and other important values created by Azure Video Indexer account.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
# NSG service tags for Azure Video Indexer
-Azure Video Indexer (formerly Video Analyzer for Media) is a service hosted on Azure. In some architecture cases the service needs to interact with other services in order to index video files (that is, a Storage Account) or when a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (i.e AKS, Web Apps, Logic Apps, Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](https://docs.microsoft.com/azure/virtual-network/service-tags-overview). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
+Azure Video Indexer (formerly Video Analyzer for Media) is a service hosted on Azure. In some architectures, the service needs to interact with other services to index video files (for example, a Storage Account), or a customer orchestrates indexing jobs against our API endpoint using their own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, or Functions). Customers who would like to limit access to their resources on a network level can use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md). A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
## Get started with service tags
This tag contains the IP addresses of Azure Video Indexer services for all regio
## Using Azure CLI
-You can also use Azure CLI to create a new or update an existing NSG rule and add the **AzureVideoAnalyzerForMedia** service tag using the `--source-address-prefixes`. For a full list of CLI commands and parameters see [az network nsg](https://docs.microsoft.com/cli/azure/network/nsg/rule?view=azure-cli-latest)
+You can also use Azure CLI to create a new or update an existing NSG rule and add the **AzureVideoAnalyzerForMedia** service tag using the `--source-address-prefixes` parameter. For a full list of CLI commands and parameters, see [az network nsg rule](/cli/azure/network/nsg/rule?view=azure-cli-latest).
Example of a security rule using service tags. For more details, visit https://aka.ms/servicetags
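A minimal sketch of such a rule, assuming an existing network security group and illustrative names for the resource group, NSG, rule, and port; the service tag and `--source-address-prefixes` parameter are the ones described above:

```azurecli
# Allow inbound HTTPS traffic from the Azure Video Indexer service tag (illustrative names and port).
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNsg \
  --name AllowVideoIndexerInbound \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureVideoAnalyzerForMedia \
  --source-port-ranges "*" \
  --destination-address-prefixes "*" \
  --destination-port-ranges 443
```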
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Last updated 12/22/2021
Azure VMware Solution will apply important updates starting in March 2021. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
+## May 23, 2022
+
+All new Azure VMware Solution private clouds in the Germany West Central, Australia East, Central US, and UK West regions are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+
+Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+
+You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
+ ## May 9, 2022 All new Azure VMware Solution private clouds in the France Central, Brazil South, Japan West, Australia Southeast, Canada East, East Asia, and Southeast Asia regions are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
az feature show --name AzureArcForAVS --namespace Microsoft.AVS
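If that `az feature show` check doesn't report the feature as registered, the standard preview-feature registration flow is a reasonable first step. This is a hedged sketch that reuses the feature and namespace names from the command above:

```azurecli
# Register the preview feature (names taken from the check above).
az feature register --namespace Microsoft.AVS --name AzureArcForAVS

# Registration can take several minutes; re-run the show command until "state" reads "Registered".
az feature show --namespace Microsoft.AVS --name AzureArcForAVS --query "properties.state"

# Propagate the change to the resource provider.
az provider register --namespace Microsoft.AVS
```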
Use the following steps to guide you through the process to onboard in Arc for Azure VMware Solution (Preview).
-1. Sign into the jumpbox VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/tag/v2.0.0). The extracted file contains the scripts to install the preview software.
+1. Sign into the jumpbox VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/latest). The extracted file contains the scripts to install the preview software.
1. Open the 'config_avs.json' file and populate all the variables. **Config JSON**
Appendix 1 shows proxy URLs required by the Azure Arc-enabled private cloud. The
**Additional URL resources** - [Google Container Registry](http://gcr.io/)-- [Red Hat Quay.io](http://quay.io/)
+- [Red Hat Quay.io](http://quay.io/)
azure-vmware Integrate Azure Native Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/integrate-azure-native-services.md
You can configure the Log Analytics workspace with Microsoft Sentinel for alert
If you are new to Azure or unfamiliar with any of the services previously mentioned, review the following articles: - [Automation account authentication overview](../automation/automation-security-overview.md)-- [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md) and [Azure Monitor](../azure-monitor/overview.md)
+- [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/workspace-design.md) and [Azure Monitor](../azure-monitor/overview.md)
- [Planning](../security-center/security-center-planning-and-operations-guide.md) and [Supported platforms](../security-center/security-center-os-coverage.md) for Microsoft Defender for Cloud - [Enable Azure Monitor for VMs overview](../azure-monitor/vm/vminsights-enable-overview.md) - [What is Azure Arc enabled servers?](../azure-arc/servers/overview.md) and [What is Azure Arc enabled Kubernetes?](../azure-arc/kubernetes/overview.md)
Can collect data from different [sources to monitor and analyze](../azure-monito
Monitor guest operating system performance to discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions.
-1. [Design your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md)
+1. [Design your Azure Monitor Logs deployment](../azure-monitor/logs/workspace-design.md)
1. [Enable Azure Monitor for VMs overview](../azure-monitor/vm/vminsights-enable-overview.md)
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql.md
# Azure Database for PostgreSQL backup with long-term retention
-This article describes how to back up Azure Database for PostgreSQL server. Before you begin, review the [supported configurations, feature considerations and known limitations](https://docs.microsoft.com/azure/backup/backup-azure-database-postgresql-support-matrix)
+This article describes how to back up an Azure Database for PostgreSQL server. Before you begin, review the [supported configurations, feature considerations, and known limitations](./backup-azure-database-postgresql-support-matrix.md).
## Configure backup on Azure PostgreSQL databases
Azure Backup service creates a job for scheduled backups or if you trigger on-de
## Next steps
-[Troubleshoot PostgreSQL database backup by using Azure Backup](backup-azure-database-postgresql-troubleshoot.md)
+[Troubleshoot PostgreSQL database backup by using Azure Backup](backup-azure-database-postgresql-troubleshoot.md)
backup Backup Instant Restore Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-instant-restore-capability.md
In a scenario where a retention policy is set as "1", you can find two snaps
- You clean up snapshots, which are past retention. - The garbage collector (GC) in the backend is under heavy load.
+> [!NOTE]
+> Azure Backup manages backups automatically. Azure Backup retains old snapshots because they're needed to maintain backup consistency. If you delete a snapshot manually, you might encounter problems with backup consistency.
+> If there are errors in your backup history, you need to stop backup with the retain data option and then resume the backup.
+> Consider creating a **backup strategy** if you have a particular scenario (for example, a virtual machine with multiple disks that requires oversized space). You need to create a separate backup for the **VM with the OS disk** and a different backup for **the other disks**.
+ ### I don't need Instant Restore functionality. Can it be disabled? Instant restore feature is enabled for everyone and can't be disabled. You can reduce the snapshot retention to a minimum of one day.
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-rbac-rs-vault.md
The following table captures the Backup management actions and corresponding Azu
| Validate before configuring backup | Backup Operator | Backup vault | | | | Disk Backup Reader | Disk to be backed up| | | Enable backup from backup vault | Backup Operator | Backup vault | |
-| | Disk Backup Reader | Disk to be backed up | In addition, the backup vault MSI should be given [these permissions](/azure/backup/disk-backup-faq##what-are-the-permissions-used-by-azure-backup-during-backup-and-restore-operation-) |
+| | Disk Backup Reader | Disk to be backed up | In addition, the backup vault MSI should be given [these permissions](./disk-backup-faq.yml) |
| On demand backup of disk | Backup Operator | Backup vault | | | Validate before restoring a disk | Backup Operator | Backup vault | | | | Disk Restore Operator | Resource group where disks will be restored to | | | Restoring a disk | Backup Operator | Backup vault | |
-| | Disk Restore Operator | Resource group where disks will be restored to | In addition, the backup vault MSI should be given [these permissions](/azure/backup/disk-backup-faq##what-are-the-permissions-used-by-azure-backup-during-backup-and-restore-operation-) |
+| | Disk Restore Operator | Resource group where disks will be restored to | In addition, the backup vault MSI should be given [these permissions](./disk-backup-faq.yml) |
### Minimum role requirements for Azure blob backup
The following table captures the Backup management actions and corresponding Azu
| Validate before configuring backup | Backup Operator | Backup vault | | | | Storage account backup contributor | Storage account containing the blob | | | Enable backup from backup vault | Backup Operator | Backup vault | |
-| | Storage account backup contributor | Storage account containing the blob | In addition, the backup vault MSI should be given [these permissions](/azure/backup/blob-backup-configure-manage#grant-permissions-to-the-backup-vault-on-storage-accounts) |
+| | Storage account backup contributor | Storage account containing the blob | In addition, the backup vault MSI should be given [these permissions](./blob-backup-configure-manage.md#grant-permissions-to-the-backup-vault-on-storage-accounts) |
| On demand backup of blob | Backup Operator | Backup vault | | | Validate before restoring a blob | Backup Operator | Backup vault | | | | Storage account backup contributor | Storage account containing the blob | | | Restoring a blob | Backup Operator | Backup vault | |
-| | Storage account backup contributor | Storage account containing the blob | In addition, the backup vault MSI should be given [these permissions](/azure/backup/blob-backup-configure-manage#grant-permissions-to-the-backup-vault-on-storage-accounts) |
+| | Storage account backup contributor | Storage account containing the blob | In addition, the backup vault MSI should be given [these permissions](./blob-backup-configure-manage.md#grant-permissions-to-the-backup-vault-on-storage-accounts) |
### Minimum role requirements for Azure database for PostGreSQL server backup
The following table captures the Backup management actions and corresponding Azu
| Validate before configuring backup | Backup Operator | Backup vault | | | | Reader | Azure PostGreSQL server | | | Enable backup from backup vault | Backup Operator | Backup vault | |
-| | Contributor | Azure PostGreSQL server | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.DBforPostgreSQL/servers/write Microsoft.DBforPostgreSQL/servers/read In addition, the backup vault MSI should be given [these permissions](/azure/backup/backup-azure-database-postgresql-overview#set-of-permissions-needed-for-azure-postgresql-database-backup) |
+| | Contributor | Azure PostGreSQL server | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.DBforPostgreSQL/servers/write Microsoft.DBforPostgreSQL/servers/read In addition, the backup vault MSI should be given [these permissions](./backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-backup) |
| On demand backup of PostGreSQL server | Backup Operator | Backup vault | | | Validate before restoring a server | Backup Operator | Backup vault | | | | Contributor | Target Azure PostGreSQL server | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.DBforPostgreSQL/servers/write Microsoft.DBforPostgreSQL/servers/read | Restoring a server | Backup Operator | Backup vault | |
-| | Contributor | Target Azure PostGreSQL server | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.DBforPostgreSQL/servers/write Microsoft.DBforPostgreSQL/servers/read In addition, the backup vault MSI should be given [these permissions](/azure/backup/backup-azure-database-postgresql-overview#set-of-permissions-needed-for-azure-postgresql-database-restore) |
+| | Contributor | Target Azure PostGreSQL server | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.DBforPostgreSQL/servers/write Microsoft.DBforPostgreSQL/servers/read In addition, the backup vault MSI should be given [these permissions](./backup-azure-database-postgresql-overview.md#set-of-permissions-needed-for-azure-postgresql-database-restore) |
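To grant one of the roles in the tables above at the Backup vault scope, a hedged Azure CLI sketch is shown below. The principal ID, vault name, and resource group are placeholders, and the scope format assumes a Backup vault under the Microsoft.DataProtection provider:

```azurecli
# Assign the Backup Operator role on a Backup vault to a user or service principal (placeholder values).
az role assignment create \
  --assignee "<user-or-principal-object-id>" \
  --role "Backup Operator" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataProtection/backupVaults/<vault-name>"
```

The same command can be reused for the other roles and scopes listed above by swapping the `--role` and `--scope` values.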
## Next steps
The following table captures the Backup management actions and corresponding Azu
* [PowerShell](../role-based-access-control/role-assignments-powershell.md) * [Azure CLI](../role-based-access-control/role-assignments-cli.md) * [REST API](../role-based-access-control/role-assignments-rest.md)
-* [Azure role-based access control troubleshooting](../role-based-access-control/troubleshooting.md): Get suggestions for fixing common issues.
+* [Azure role-based access control troubleshooting](../role-based-access-control/troubleshooting.md): Get suggestions for fixing common issues.
batch Create Pool Ephemeral Os Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-ephemeral-os-disk.md
For Batch workloads, the main benefits of using ephemeral OS disks are reduced c
To determine whether a VM series supports ephemeral OS disks, check the documentation for each VM instance. For example, the [Ddv4 and Ddsv4-series](../virtual-machines/ddv4-ddsv4-series.md) supports ephemeral OS disks.
-Alternately, you can programmatically query to check the 'EphemeralOSDiskSupported' capability. An example PowerShell cmdlet to query this capability is provided in the [ephemeral OS disk frequently asked questions](../virtual-machines/ephemeral-os-disks.md#frequently-asked-questions).
+Alternately, you can programmatically query to check the 'EphemeralOSDiskSupported' capability. An example PowerShell cmdlet to query this capability is provided in the [ephemeral OS disk frequently asked questions](../virtual-machines/ephemeral-os-disks-faq.md).
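If you work with the Azure CLI rather than PowerShell, a rough equivalent of that capability check is sketched below. The region and VM size are placeholders, and the JMESPath filter simply surfaces the 'EphemeralOSDiskSupported' capability from the SKU listing:

```azurecli
# Check whether a given VM size in a region reports the EphemeralOSDiskSupported capability (placeholder values).
az vm list-skus \
  --location eastus \
  --size Standard_D2ds_v4 \
  --resource-type virtualMachines \
  --query "[0].capabilities[?name=='EphemeralOSDiskSupported']" \
  --output table
```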
## Create a pool that uses ephemeral OS disks
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Overview/language-support.md
Consider the following:
## Supporting multiple languages in one QnA Maker resource
-This functionality is not supported in our current Generally Available (GA) stable release. Check out [question answering](https://docs.microsoft.com/azure/cognitive-services/language-service/question-answering/overview) to test out this functionality.
+This functionality is not supported in our current Generally Available (GA) stable release. Check out [question answering](../../language-service/question-answering/overview.md) to test out this functionality.
## Supporting multiple languages in one knowledge base
This additional ranking is an internal working of the QnA Maker's ranker.
## Next steps > [!div class="nextstepaction"]
-> [Language selection](../index.yml)
+> [Language selection](../index.yml)
cognitive-services Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/limits.md
These represent the limits when Prebuilt API is used to *Generate response* or c
> Support for unstructured file/content is available only in question answering. ## Alterations limits
-[Alterations](https://docs.microsoft.com/rest/api/cognitiveservices/qnamaker/alterations/replace) do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '-', '_', '@', '#'
+[Alterations](/rest/api/cognitiveservices/qnamaker/alterations/replace) do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '-', '_', '@', '#'
## Next steps
-Learn when and how to change [service pricing tiers](How-To/set-up-qnamaker-service-azure.md#upgrade-qna-maker-sku).
+Learn when and how to change [service pricing tiers](How-To/set-up-qnamaker-service-azure.md#upgrade-qna-maker-sku).
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
The following are aspects to consider when using captioning:
* Consider output formats such as SRT (SubRip Text) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions on to your video. > [!TIP]
-> Try the [Azure Video Indexer](/azure/azure-video-indexer/video-indexer-overview) as a demonstration of how you can get captions for videos that you upload.
+> Try the [Azure Video Indexer](../../azure-video-indexer/video-indexer-overview.md) as a demonstration of how you can get captions for videos that you upload.
Captioning can accompany real time or pre-recorded speech. Whether you're showing captions in real time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
There are some situations where [training a custom model](custom-speech-overview
## Next steps * [Captioning quickstart](captioning-quickstart.md)
-* [Get speech recognition results](get-speech-recognition-results.md)
+* [Get speech recognition results](get-speech-recognition-results.md)
cognitive-services Encryption Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/encryption-data-at-rest.md
By default, your subscription uses Microsoft-managed encryption keys. There is a
There is also an option to manage your subscription with your own keys. Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
-You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](/azure/key-vault/general/overview).
+You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Cognitive Services resource and the key vault must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../../key-vault/general/overview.md).
### Customer-managed keys for Language services
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Previously updated : 05/09/2022 Last updated : 05/25/2022
Use the table below to find which model versions are supported by each feature:
| Question answering | `2021-10-01` | `2021-10-01` | | | Text Analytics for health | `2021-05-15`, `2022-03-01` | `2022-03-01` | | | Key phrase extraction | `2021-06-01` | `2021-06-01` | |
-| Text summarization | `2021-08-01` | `2021-08-01` | |
-
+| Document summarization (preview) | `2021-08-01` | | `2021-08-01` |
+| Conversation summarization (preview) | `2022-05-15-preview` | | `2022-05-15-preview` |
## Custom features
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
Use this article to quickly get the answers to common questions about conversati
See the [quickstart](./quickstart.md) to quickly create your first project, or the [how-to article](./how-to/create-project.md) for more details.
-## How do I connect conversation language projects to other service applications?
-See the [orchestration workflow documentation](../orchestration-workflow/overview.md) for more information.
+## Can I use more than one conversational language understanding project together?
+
+Yes, using orchestration workflow. See the [orchestration workflow documentation](../orchestration-workflow/overview.md) for more information.
+
+## What is the difference between LUIS and conversational language understanding?
+
+Conversational language understanding is the next generation of LUIS.
## Training is taking a long time, is this expected?
Yes, you can [import any LUIS application](./concepts/backwards-compatibility.md
No, the service only supports JSON format. You can go to LUIS, import the `.LU` file and export it as a JSON file.
+## Can I use conversational language understanding with custom question answering?
+
+Yes, you can use [orchestration workflow](../orchestration-workflow/overview.md) to orchestrate between different conversational language understanding and [question answering](../question-answering/overview.md) projects. Start by creating orchestration workflow projects, then connect your conversational language understanding and custom question answering projects. To perform this action, make sure that your projects are under the same Language resource.
+ ## How do I handle out of scope or domain utterances that aren't relevant to my intents? Add any out of scope utterances to the [none intent](./concepts/none-intent.md). ## Is there any SDK support?
-Yes, only for predictions, and [samples are available](https://aka.ms/cluSampleCode). There is currently no authoring support for the SDK.
+Yes, only for predictions, and samples are available for [Python](https://aka.ms/sdk-samples-conversation-python) and [C#](https://aka.ms/sdk-sample-conversation-dot-net). There is currently no authoring support for the SDK.
+
+## What are the training modes?
+
-## Can I connect to Orchestration workflow projects?
+|Training mode | Description | Language availability | Pricing |
+|||||
+|Standard training | Faster training times for quicker model iteration. | Can only train projects in English. | Included in your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/). |
+|Advanced training | Slower training times using fine-tuned neural network transformer models. | Can train [multilingual projects](language-support.md#multi-lingual-option). | May incur [additional charges](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/). |
-Yes, you can connect your CLU project in orchestration workflow. All you need is to make sure that both projects are under the same Language resource
+See [training modes](how-to/train-model.md#training-modes) for more information.
## Are there APIs for this feature?
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/view-model-evaluation.md
See the [project development lifecycle](../overview.md#project-development-lifec
### [Language studio](#tab/Language-studio)
+> [!Note]
+> The results here are for the machine learning entity component only.
+ In the **view model details** page, you'll be able to see all your models, with their current training status, and the date they were last trained. [!INCLUDE [Model performance](../includes/language-studio/model-performance.md)]
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/language-support.md
Use this article to learn about the languages currently supported by CLU feature
## Multi-lingual option
+> [!TIP]
+> See [How to train a model](how-to/train-model.md#training-modes) for information on which training mode you should use for multilingual projects.
 With conversational language understanding, you can train a model in one language and use it to predict intents and entities from utterances in another language. This feature is powerful because it helps save time and effort. Instead of building separate projects for every language, you can handle a multi-lingual dataset in one project. Your dataset doesn't have to be entirely in the same language, but you should enable the multi-lingual option for your project when creating it, or later in project settings. If you notice your model performing poorly in certain languages during the evaluation process, consider adding more data in these languages to your training set. You can train your project entirely with English utterances, and query it in: French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/overview.md
Conversational language understanding is one of the custom features offered by [Azure Cognitive Service for Language](../overview.md). It is a cloud-based API service that applies machine-learning intelligence to enable you to build natural language understanding component to be used in an end-to-end conversational application.
-Conversational language understanding (CLU) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it. CLU only provides the intelligence to understand the input text for the client application and doesn't perform any actions. By creating a CLU project, developers can iteratively tag utterances, train and evaluate model performance before making it available for consumption. The quality of the tagged data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
+Conversational language understanding (CLU) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it. CLU only provides the intelligence to understand the input text for the client application and doesn't perform any actions. By creating a CLU project, developers can iteratively label utterances, train and evaluate model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](quickstart.md).
This documentation contains the following article types:
Follow these steps to get the most out of your model:
1. **Build schema**: Know your data and define the actions and relevant information that needs to be recognized from user's input utterances. In this step you create the [intents](glossary.md#intent) that you want to assign to user's utterances, and the relevant [entities](glossary.md#entity) you want extracted.
-2. **Tag data**: The quality of data tagging is a key factor in determining model performance.
+2. **Label data**: The quality of data labeling is a key factor in determining model performance.
-3. **Train model**: Your model starts learning from your tagged data.
+3. **Train model**: Your model starts learning from your labeled data.
4. **View model evaluation details**: View the evaluation details for your model to determine how well it performs when introduced to new data.
cognitive-services Adding Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/adding-synonyms.md
As you can see, when `troubleshoot` was not added as a synonym, we got a low con
> [!IMPORTANT] > Synonyms are case insensitive. Synonyms also might not work as expected if you add stop words as synonyms. The list of stop words can be found here: [List of stop words](https://github.com/Azure-Samples/azure-search-sample-dat). > For instance, if you add the abbreviation **IT** for Information technology, the system might not be able to recognize Information Technology because **IT** is a stop word and is filtered when a query is processed.
-> Synonyms do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '-', '_', '@', '#'
## Notes * Synonyms can be added in any order. The ordering is not considered in any computational logic.
-* Special characters are not allowed for synonyms. For hyphenated words like "COVID-19", they are treated the same as "COVID 19", and "space" can be used as a term separator.
* Overlapping synonym words between two sets of alterations may produce unexpected results, so using overlapping sets isn't recommended.
+* Special characters are not allowed for synonyms. For hyphenated words like "COVID-19", they are treated the same as "COVID 19", and "space" can be used as a term separator. The following is the list of special characters that are **not allowed**:
+
+|Special character | Symbol|
+|--|--|
+|Comma | ,|
+|Question mark | ?|
+|Colon| :|
+|Semicolon| ;|
+|Double quotation mark| \"|
+|Single quotation mark| \'|
+|Open parenthesis|(|
+|Close parenthesis|)|
+|Open brace|{|
+|Close brace|}|
+|Open bracket|[|
+|Close bracket|]|
+|Hyphen/dash|-|
+|Plus sign|+|
+|Period|.|
+|Forward slash|/|
+|Exclamation mark|!|
+|Asterisk|\*|
+|Underscore|\_|
+|At sign|@|
+|Hash|#|
+ ## Next steps
cognitive-services Document Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/document-summarization.md
Previously updated : 03/16/2022 Last updated : 05/26/2022
Using the above example, the API might return the following summarized sentences
## See also
-* [Document summarization overview](../overview.md)
+* [Summarization overview](../overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/language-support.md
Previously updated : 05/11/2022 Last updated : 05/26/2022
Conversation summarization supports the following languages:
## Next steps
-[Document summarization overview](overview.md)
+* [Summarization overview](overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Previously updated : 05/06/2022 Last updated : 05/26/2022
As you use document summarization in your applications, see the following refere
## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which itΓÇÖs deployed. Read the [transparency note for document summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+
+* [Transparency note for Azure Cognitive Service for Language](/legal/cognitive-services/language-service/transparency-note?context=/azure/cognitive-services/language-service/context/context)
+* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/cognitive-services/language-service/context/context)
+* [Characteristics and limitations of summarization](/legal/cognitive-services/language-service/characteristics-and-limitations-summarization?context=/azure/cognitive-services/language-service/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/cognitive-services/language-service/context/context)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-support.md
These Cognitive Services are language agnostic and don't have limitations based
* [Computer Vision](./computer-vision/language-support.md) * [Ink Recognizer (Preview)](/previous-versions/azure/cognitive-services/Ink-Recognizer/language-support)
-* [Video Indexer](/azure/azure-video-indexer/language-identification-model.md#guidelines-and-limitations)
+* [Video Indexer](../azure-video-indexer/language-identification-model.md#guidelines-and-limitations)
## Language
communication-services Enable Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/enable-logging.md
You'll also be prompted to select a destination to store the logs. Platform logs
| Destination | Description | |:|:|
-| [Log Analytics workspace](../../../azure-monitor/logs/design-logs-deployment.md) | Sending logs and metrics to a Log Analytics workspace allows you to analyze them with other monitoring data collected by Azure Monitor using powerful log queries and also to use other Azure Monitor features such as alerts and visualizations. |
+| [Log Analytics workspace](../../../azure-monitor/logs/log-analytics-workspace-overview.md) | Sending logs and metrics to a Log Analytics workspace allows you to analyze them with other monitoring data collected by Azure Monitor using powerful log queries and also to use other Azure Monitor features such as alerts and visualizations. |
| [Event Hubs](../../../event-hubs/index.yml) | Sending logs and metrics to Event Hubs allows you to stream data to external systems such as third-party SIEMs and other log analytics solutions. | | [Azure storage account](../../../storage/blobs/index.yml) | Archiving logs and metrics to an Azure storage account is useful for audit, static analysis, or backup. Compared to Azure Monitor Logs and a Log Analytics workspace, Azure storage is less expensive and logs can be kept there indefinitely. |
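For teams that script their monitoring setup, a hedged sketch of creating such a diagnostic setting with the Azure CLI is shown below. The resource IDs are placeholders, and `<log-category>` stands in for whichever log categories your Communication Services resource exposes in the portal:

```azurecli
# Send selected platform logs and all metrics to a Log Analytics workspace (placeholder IDs and category).
az monitor diagnostic-settings create \
  --name "acs-diagnostics" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Communication/communicationServices/<acs-resource-name>" \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category": "<log-category>", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```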
communication-services Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/log-analytics.md
## Overview and access
-Before you can take advantage of [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your Communications Services logs, you must first follow the steps outlined in [Enable logging in Diagnostic Settings](enable-logging.md). Once you have enabled your logs and a [Log Analytics Workspace](../../../azure-monitor/logs/design-logs-deployment.md), you will have access to many helpful [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) that will help you quickly visualize and understand the data available in your logs, which are described below. Through Log Analytics, you also get access to more Communications Services Insights via Azure Monitor Workbooks (see: [Communications Services Insights](insights.md)), the ability to create our own queries and Workbooks, [REST API access](https://dev.loganalytics.io/) to any query.
+Before you can take advantage of [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your Communications Services logs, you must first follow the steps outlined in [Enable logging in Diagnostic Settings](enable-logging.md). Once you have enabled your logs and a [Log Analytics Workspace](../../../azure-monitor/logs/workspace-design.md), you will have access to many helpful [default query packs](../../../azure-monitor/logs/query-packs.md#default-query-pack) that will help you quickly visualize and understand the data available in your logs, which are described below. Through Log Analytics, you also get access to more Communications Services Insights via Azure Monitor Workbooks (see: [Communications Services Insights](insights.md)), the ability to create your own queries and Workbooks, and [REST API access](https://dev.loganalytics.io/) to any query.
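The same workspace can also be queried outside the portal. The sketch below uses the Azure CLI (the `az monitor log-analytics query` command may require the `log-analytics` CLI extension), with a placeholder workspace GUID and `ACSCallSummary` assumed here as one of the Communication Services log tables:

```azurecli
# Run a Kusto query against the Log Analytics workspace that receives your Communication Services logs.
az monitor log-analytics query \
  --workspace "<log-analytics-workspace-guid>" \
  --analytics-query "ACSCallSummary | take 10" \
  --output table
```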
### Access You can access the queries by starting on your Communications Services resource page, and then clicking on "Logs" in the left navigation within the Monitor section:
communication-services Email Authentication Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-authentication-best-practice.md
A DMARC policy record allows a domain to announce that their email uses authenti
## Next steps
-* [Best practices for implementing DMARC](https://docs.microsoft.com/microsoft-365/security/office-365-security/use-dmarc-to-validate-email?view=o365-worldwide#best-practices-for-implementing-dmarc-in-microsoft-365&preserve-view=true)
+* [Best practices for implementing DMARC](/microsoft-365/security/office-365-security/use-dmarc-to-validate-email?preserve-view=true&view=o365-worldwide#best-practices-for-implementing-dmarc-in-microsoft-365)
-* [Troubleshoot your DMARC implementation](https://docs.microsoft.com/microsoft-365/security/office-365-security/use-dmarc-to-validate-email?view=o365-worldwide#troubleshooting-your-dmarc-implementation&preserve-view=true)
+* [Troubleshoot your DMARC implementation](/microsoft-365/security/office-365-security/use-dmarc-to-validate-email?preserve-view=true&view=o365-worldwide#troubleshooting-your-dmarc-implementation)
* [Email domains and sender authentication for Azure Communication Services](./email-domain-and-sender-authentication.md)
The following documents may be interesting to you:
- Familiarize yourself with the [Email client library](../email/sdk-features.md) - How to send emails with custom verified domains?[Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)-- How to send emails with Azure Managed Domains?[Add Azure Managed domains](../../quickstarts/email/add-azure-managed-domains.md)
+- How to send emails with Azure Managed Domains? [Add Azure Managed domains](../../quickstarts/email/add-azure-managed-domains.md)
communication-services Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/closed-captions.md
Here are main scenarios where Closed Captions are useful:
## Availability
-The private preview will be available on all platforms.
+Closed Captions are supported in Private Preview only in ACS to ACS calls on all platforms.
- Android - iOS - Web
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
The goal of this document is to reduce the time it takes for Event Management Pl
## What are virtual events and event management platforms?
-Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](https://docs.microsoft.com/microsoftteams/quick-start-meetings-live-events), [Graph](https://docs.microsoft.com/graph/api/application-post-onlinemeetings?view=graph-rest-beta&tabs=http) and [Azure Communication Services](https://docs.microsoft.com/azure/communication-services/overview). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about[ Teams Meetings, Webinars and Live Events](https://docs.microsoft.com/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
+Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta), and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars, and Live Events](/microsoftteams/quick-start-meetings-live-events), which are used throughout this article to enable virtual event scenarios.
## What are the building blocks of an event management platform?
For event attendees, they are presented with an experience that enables them to
- Teams Client (Web or Desktop): Attendees can directly join events using a Teams Client by using a provided join link. They get access to the full Teams experience. -- Azure Communication
+- Azure Communication
### 3. Host & Organizer experience
Microsoft Graph enables event management platforms to empower organizers to sche
1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and that will receive notifications for them. We recommend not using a personal production account, given the overhead it might incur in the form of reminders.
- 1. As part of the application setup, the service account is used to login into the solution once. With this permission the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](https://docs.microsoft.com/azure/active-directory/develop/access-tokens). and [refresh tokens](https://docs.microsoft.com/azure/active-directory/develop/refresh-tokens).
+ 1. As part of the application setup, the service account is used to sign in to the solution once. With this permission, the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the sign-in and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. Learn more about [auth tokens](../../active-directory/develop/access-tokens.md) and [refresh tokens](../../active-directory/develop/refresh-tokens.md).
- 1. The application will require "on behalf of" permissions with the [offline scope](https://docs.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs.
+ 1. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs.
1. Refresh tokens can be revoked in the event of a breach or account termination
Microsoft Graph enables event management platforms to empower organizers to sche
2. The organizer signs in to the Contoso platform to create an event and generate a registration URL. To enable these capabilities, developers should use:
- 1. The [Create Calendar Event API](https://docs.microsoft.com/graph/api/user-post-events?view=graph-rest-1.0&tabs=http) to POST the new event to be created. The Event object returned will contain the join URL required for the next step. Need to set the following parameter: `isonlinemeeting: true` and `onlineMeetingProvider: "teamsForBusiness"`. Set a time zone for the event, using the `Prefer` header.
+ 1. The [Create Calendar Event API](/graph/api/user-post-events?tabs=http&view=graph-rest-1.0) to POST the new event to be created. The Event object returned will contain the join URL required for the next step. Set the following parameters: `isOnlineMeeting: true` and `onlineMeetingProvider: "teamsForBusiness"` (see the sketch after this list). Set a time zone for the event, using the `Prefer` header.
- 1. Next, use the [Create Online Meeting API](https://docs.microsoft.com/graph/api/application-post-onlinemeetings?view=graph-rest-beta&tabs=http) to `GET` the online meeting information using the join URL generated from the step above. The `OnlineMeeting` object will contain the `meetingId` required for the registration steps.
+ 1. Next, use the [Create Online Meeting API](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta) to `GET` the online meeting information using the join URL generated from the step above. The `OnlineMeeting` object will contain the `meetingId` required for the registration steps.
1. By using these APIs, developers are creating a calendar event to show up in the OrganizerΓÇÖs calendar and the Teams online meeting where attendees will join. >[!NOTE] >Known issue with double calendar entries for organizers when using the Calendar and Online Meeting APIs.
-3. To enable registration for an event, Contoso can use the [External Meeting Registration API](https://docs.microsoft.com/graph/api/resources/externalmeetingregistration?view=graph-rest-beta) to POST. The API requires Contoso to pass in the `meetingId` of the `OnlineMeeting` created above. Registration is optional. You can set options on who can register.
+3. To enable registration for an event, Contoso can use the [External Meeting Registration API](/graph/api/resources/externalmeetingregistration?view=graph-rest-beta) to POST. The API requires Contoso to pass in the `meetingId` of the `OnlineMeeting` created above. Registration is optional. You can set options on who can register.
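As a rough, non-authoritative sketch of step 2a above, the `az rest` call below posts a calendar event with the two parameters called out in that step. It assumes the signed-in identity already has the delegated Graph permissions needed to write the organizer's calendar; the subject, dates, and time zone are placeholders (the time zone can alternatively be set through the `Prefer` header mentioned above):

```azurecli
# Create a calendar event that is also a Teams online meeting (placeholder subject, dates, and time zone).
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/me/events" \
  --headers "Content-Type=application/json" \
  --body '{
    "subject": "Contoso virtual event",
    "start": { "dateTime": "2022-06-15T17:00:00", "timeZone": "Pacific Standard Time" },
    "end":   { "dateTime": "2022-06-15T18:00:00", "timeZone": "Pacific Standard Time" },
    "isOnlineMeeting": true,
    "onlineMeetingProvider": "teamsForBusiness"
  }'
```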
### Register attendees with Microsoft Graph
-Event management platforms can use a custom registration flow to register attendees. This flow is powered by the [External Meeting Registrant API](https://docs.microsoft.com/graph/api/externalmeetingregistrant-post?view=graph-rest-beta&tabs=http). By using the API Contoso will receive a unique `Teams Join URL` for each attendee. This URL will be used as part of the attendee experience either through Teams or Azure Communication Services to have the attendee join the meeting.
+Event management platforms can use a custom registration flow to register attendees. This flow is powered by the [External Meeting Registrant API](/graph/api/externalmeetingregistrant-post?tabs=http&view=graph-rest-beta). By using the API Contoso will receive a unique `Teams Join URL` for each attendee. This URL will be used as part of the attendee experience either through Teams or Azure Communication Services to have the attendee join the meeting.
### Communicate with your attendees using Azure Communication Services Through Azure Communication Services, developers can use SMS and Email capabilities to send reminders to attendees for the event they've registered for. Communication can also include confirmation for the event as well as information for joining and participating. -- [SMS capabilities](https://docs.microsoft.com/azure/communication-services/quickstarts/sms/send) enable you to send text messages to your attendees. -- [Email capabilities](https://docs.microsoft.com/azure/communication-services/quickstarts/email/send-email) support direct communication to your attendees using custom domains.
+- [SMS capabilities](../quickstarts/sms/send.md) enable you to send text messages to your attendees.
+- [Email capabilities](../quickstarts/email/send-email.md) support direct communication to your attendees using custom domains.
### Leverage Azure Communication Services to build a custom attendee experience >[!NOTE]
-> Limitations when using Azure Communication Services as part of a Teams Webinar experience. Please visit our [documentation for more details.](https://docs.microsoft.com/azure/communication-services/concepts/join-teams-meeting#limitations-and-known-issues)
+> Limitations when using Azure Communication Services as part of a Teams Webinar experience. Please visit our [documentation for more details.](../concepts/join-teams-meeting.md#limitations-and-known-issues)
-Attendee experience can be directly embedded into an application or platform using [Azure Communication Services](https://docs.microsoft.com/azure/communication-services/overview) so that your attendees never need to leave your platform. It provides low-level calling and chat SDKs which support [interoperability with Teams Events](https://docs.microsoft.com/azure/communication-services/concepts/teams-interop), as well as a turn-key UI Library which can be used to reduce development time and easily embed communications. Azure Communication Services enables developers to have flexibility with the type of solution they need. Review [limitations](https://docs.microsoft.com/azure/communication-services/concepts/join-teams-meeting#limitations-and-known-issues) of using Azure Communication Services for webinar scenarios.
+Attendee experience can be directly embedded into an application or platform using [Azure Communication Services](../overview.md) so that your attendees never need to leave your platform. It provides low-level calling and chat SDKs which support [interoperability with Teams Events](../concepts/teams-interop.md), as well as a turn-key UI Library which can be used to reduce development time and easily embed communications. Azure Communication Services enables developers to have flexibility with the type of solution they need. Review [limitations](../concepts/join-teams-meeting.md#limitations-and-known-issues) of using Azure Communication Services for webinar scenarios.
-1. To start, developers can leverage Microsoft Graph APIs to retrieve the join URL. This URL is provided uniquely per attendee during [registration](https://docs.microsoft.com/graph/api/externalmeetingregistrant-post?view=graph-rest-beta&tabs=http). Alternatively, it can be [requested for a given meeting](https://docs.microsoft.com/graph/api/onlinemeeting-get?view=graph-rest-beta&tabs=http).
+1. To start, developers can leverage Microsoft Graph APIs to retrieve the join URL. This URL is provided uniquely per attendee during [registration](/graph/api/externalmeetingregistrant-post?tabs=http&view=graph-rest-beta). Alternatively, it can be [requested for a given meeting](/graph/api/onlinemeeting-get?tabs=http&view=graph-rest-beta).
-2. Before developers dive into using [Azure Communication Services](https://docs.microsoft.com/azure/communication-services/overview), they must [create a resource](https://docs.microsoft.com/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-azp).
+2. Before developers dive into using [Azure Communication Services](../overview.md), they must [create a resource](../quickstarts/create-communication-resource.md?pivots=platform-azp&tabs=windows).
-3. Once a resource is created, developers must [generate access tokens](https://docs.microsoft.com/azure/communication-services/quickstarts/access-tokens?pivots=programming-language-javascript) for attendees to access Azure Communication Services. We recommend using a [trusted service architecture](https://docs.microsoft.com/azure/communication-services/concepts/client-and-server-architecture).
+3. Once a resource is created, developers must [generate access tokens](../quickstarts/access-tokens.md?pivots=programming-language-javascript) for attendees to access Azure Communication Services. We recommend using a [trusted service architecture](../concepts/client-and-server-architecture.md).
-4. Developers can leverage [headless SDKs](https://docs.microsoft.com/azure/communication-services/concepts/teams-interop) or [UI Library](https://azure.github.io/communication-ui-library/) using the join link URL to join the Teams meeting through [Teams Interoperability](https://docs.microsoft.com/azure/communication-services/concepts/teams-interop). Details below:
+4. Developers can leverage [headless SDKs](../concepts/teams-interop.md) or [UI Library](https://azure.github.io/communication-ui-library/) using the join link URL to join the Teams meeting through [Teams Interoperability](../concepts/teams-interop.md). Details below:
|Headless SDKs | UI Library | |-||
-| Developers can leverage the [calling](https://docs.microsoft.com/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-interop?pivots=platform-javascript) and [chat](https://docs.microsoft.com/azure/communication-services/quickstarts/chat/meeting-interop?pivots=platform-javascript) SDKs to join a Teams meeting with your custom client | Developers can choose between the [call + chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-meeting-basicexample--basic-example) or pure [call](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-basicexample--basic-example) and [chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-chat-basicexample--basic-example) composites to build their experience. Alternatively, developers can leverage [composable components](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-uicomponents--page) to build a custom Teams interop experience.|
+| Developers can leverage the [calling](../quickstarts/voice-video-calling/get-started-teams-interop.md?pivots=platform-javascript) and [chat](../quickstarts/chat/meeting-interop.md?pivots=platform-javascript) SDKs to join a Teams meeting with your custom client | Developers can choose between the [call + chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-meeting-basicexample--basic-example) or pure [call](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-basicexample--basic-example) and [chat](https://azure.github.io/communication-ui-library/?path=/docs/composites-chat-basicexample--basic-example) composites to build their experience. Alternatively, developers can leverage [composable components](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-uicomponents--page) to build a custom Teams interop experience.|
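As a rough sketch of the headless approach, the snippet below uses the calling SDK to join a Teams meeting with the per-attendee join URL. The `/api/token` endpoint stands in for your own trusted service that issues Azure Communication Services access tokens; it is an assumption, not part of the SDK.

```typescript
import { CallClient, CallAgent, Call } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";

// Hypothetical helper: your trusted service issues an ACS access token for the attendee.
async function fetchTokenFromTrustedService(): Promise<string> {
  const response = await fetch("/api/token"); // placeholder endpoint
  const { token } = await response.json();
  return token;
}

async function joinWebinar(teamsJoinUrl: string): Promise<Call> {
  const token = await fetchTokenFromTrustedService();
  const credential = new AzureCommunicationTokenCredential(token);

  const callClient = new CallClient();
  const callAgent: CallAgent = await callClient.createCallAgent(credential, {
    displayName: "Webinar attendee",
  });

  // Join the Teams meeting through Teams interoperability using the per-attendee join URL.
  return callAgent.join({ meetingLink: teamsJoinUrl });
}
```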
>[!NOTE]
->Azure Communication Services is a consumption-based service billed through Azure. For more information on pricing visit our resources.
--
+>Azure Communication Services is a consumption-based service billed through Azure. For more information on pricing, visit our resources.
communication-services File Sharing Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md
The diagram below shows a typical flow of a file sharing scenario for both uploa
## Setup File Storage using Azure Blob
-You can follow the tutorial [Upload file to Azure Blob Storage with an Azure Function](https://docs.microsoft.com/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload) to write the backend code required for file sharing.
+You can follow the tutorial [Upload file to Azure Blob Storage with an Azure Function](/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload) to write the backend code required for file sharing.
Once implemented, you can call this Azure Function inside the `uploadHandler` function to upload files to Azure Blob Storage. For the remainder of the tutorial, we will assume you have generated the function using the Azure Blob Storage tutorial linked above.
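As a hypothetical sketch of that handler, the snippet below posts each selected file to the Azure Function from the linked tutorial. The function URL, the response shape, and the `notifyUploadCompleted`/`notifyUploadFailed` callbacks are assumptions about the tutorial's handler contract, so adjust them to match your setup.

```typescript
// Hypothetical uploadHandler: posts each file to the Azure Function and reports the result back.
const uploadHandler = async (userId: string, uploads: any[]): Promise<void> => {
  for (const upload of uploads) {
    try {
      const formData = new FormData();
      formData.append("file", upload.file, upload.file.name);

      // Placeholder endpoint created by the Azure Blob Storage tutorial linked above.
      const response = await fetch("https://<your-function-app>.azurewebsites.net/api/upload", {
        method: "POST",
        body: formData,
      });
      if (!response.ok) {
        throw new Error(`Upload failed with status ${response.status}`);
      }

      const { url } = await response.json(); // assumed response shape: { url: string }
      upload.notifyUploadCompleted({ name: upload.file.name, extension: "", url });
    } catch (error) {
      upload.notifyUploadFailed("File upload failed, please try again.");
    }
  }
};
```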
You may also want to:
- [Add chat to your app](../quickstarts/chat/get-started.md) - [Creating user access tokens](../quickstarts/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md)-- [Learn about authentication](../concepts/authentication.md)
+- [Learn about authentication](../concepts/authentication.md)
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Previously updated : 05/13/2022 Last updated : 05/26/2022
The following example ARM template deploys a container app.
"name": "[parameters('containerappName')]", "location": "[parameters('location')]", "identity": {
- "type": "None"
+ "type": "None"
}, "properties": { "managedEnvironmentId": "[resourceId('Microsoft.App/managedEnvironments', parameters('environment_name'))]",
The following example ARM template deploys a container app.
"cpu": 0.5, "memory": "1Gi" },
- "probes":[
+ "probes": [
{
- "type":"liveness",
- "httpGet":{
- "path":"/health",
- "port":8080,
- "httpHeaders":[
- {
- "name":"Custom-Header",
- "value":"liveness probe"
- }]
- },
- "initialDelaySeconds":7,
- "periodSeconds":3
+ "type": "liveness",
+ "httpGet": {
+ "path": "/health",
+ "port": 8080,
+ "httpHeaders": [
+ {
+ "name": "Custom-Header",
+ "value": "liveness probe"
+ }
+ ]
+ },
+ "initialDelaySeconds": 7,
+ "periodSeconds": 3
}, {
- "type":"readiness",
- "tcpSocket":
- {
- "port": 8081
- },
- "initialDelaySeconds": 10,
- "periodSeconds": 3
+ "type": "readiness",
+ "tcpSocket": {
+ "port": 8081
+ },
+ "initialDelaySeconds": 10,
+ "periodSeconds": 3
}, {
- "type": "startup",
- "httpGet": {
- "path": "/startup",
- "port": 8080,
- "httpHeaders": [
- {
- "name": "Custom-Header",
- "value": "startup probe"
- }]
- },
- "initialDelaySeconds": 3,
- "periodSeconds": 3
+ "type": "startup",
+ "httpGet": {
+ "path": "/startup",
+ "port": 8080,
+ "httpHeaders": [
+ {
+ "name": "Custom-Header",
+ "value": "startup probe"
+ }
+ ]
+ },
+ "initialDelaySeconds": 3,
+ "periodSeconds": 3
} ], "volumeMounts": [
properties:
probes: - type: liveness httpGet:
- - path: "/health"
- port: 8080
- httpHeaders:
- - name: "Custom-Header"
- value: "liveness probe"
- initialDelaySeconds: 7
- periodSeconds: 3
+ path: "/health"
+ port: 8080
+ httpHeaders:
+ - name: "Custom-Header"
+ value: "liveness probe"
+ initialDelaySeconds: 7
+ periodSeconds: 3
- type: readiness tcpSocket: - port: 8081
properties:
periodSeconds: 3 - type: startup httpGet:
- - path: "/startup"
- port: 8080
- httpHeaders:
- - name: "Custom-Header"
- value: "startup probe"
+ path: "/startup"
+ port: 8080
+ httpHeaders:
+ - name: "Custom-Header"
+ value: "startup probe"
initialDelaySeconds: 3 periodSeconds: 3 scale:
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
In the unlikely event of a full region outage, you have the option of using one
- **Manual recovery**: Manually deploy to a new region, or wait for the region to recover, and then manually redeploy all environments and apps. -- **Resilient recovery**: First, deploy your container apps in advance to multiple regions. Next, use Azure Front Door or Azure Traffic Manager to handle incoming requests, pointing traffic to your primary region. Then, should an outage occur, you can redirect traffic away from the affected region. See [Cross-region replication in Azure](/azure/availability-zones/cross-region-replication-azure) for more information.
+- **Resilient recovery**: First, deploy your container apps in advance to multiple regions. Next, use Azure Front Door or Azure Traffic Manager to handle incoming requests, pointing traffic to your primary region. Then, should an outage occur, you can redirect traffic away from the affected region. See [Cross-region replication in Azure](../availability-zones/cross-region-replication-azure.md) for more information.
> [!NOTE] > Regardless of which strategy you choose, make sure your deployment configuration files are in source control so you can easily redeploy if necessary.
In the unlikely event of a full region outage, you have the option of using one
Additionally, the following resources can help you create your own disaster recovery plan: - [Failure and disaster recovery for Azure applications](/azure/architecture/reliability/disaster-recovery)-- [Azure resiliency technical guidance](/azure/architecture/checklist/resiliency-per-service)
+- [Azure resiliency technical guidance](/azure/architecture/checklist/resiliency-per-service)
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
Firewall settings Network Security Groups (NSGs) needed to configure virtual networks closely resemble the settings required by Kubernetes.
-Some outbound dependencies of Azure Kubernetes Service (AKS) clusters rely exclusively on fully qualified domain names (FQDN), therefore securing an AKS cluster purely with NSGs isn't possible. Refer to [Control egress traffic for cluster nodes in Azure Kubernetes Service](/azure/aks/limit-egress-traffic) for details.
+Some outbound dependencies of Azure Kubernetes Service (AKS) clusters rely exclusively on fully qualified domain names (FQDN); therefore, securing an AKS cluster purely with NSGs isn't possible. Refer to [Control egress traffic for cluster nodes in Azure Kubernetes Service](../aks/limit-egress-traffic.md) for details.
* You can lock down a network via NSGs with more restrictive rules than the default NSG rules. * To fully secure a cluster, use a combination of NSGs and a firewall.
As the following rules require allowing all IPs, use a Firewall solution to lock
| `dc.services.visualstudio.com` | HTTPS | `443` | This endpoint is used for metrics and monitoring using Azure Monitor. | | `*.ods.opinsights.azure.com` | HTTPS | `443` | This endpoint is used by Azure Monitor for ingesting log analytics data. | | `*.oms.opinsights.azure.com` | HTTPS | `443` | This endpoint is used by `omsagent`, which is used to authenticate the log analytics service. |
-| `*.monitoring.azure.com` | HTTPS | `443` | This endpoint is used to send metrics data to Azure Monitor. |
+| `*.monitoring.azure.com` | HTTPS | `443` | This endpoint is used to send metrics data to Azure Monitor. |
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
az monitor log-analytics query \
```powershell $LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
-az monitor log-analytics query \
+
+az monitor log-analytics query `
--workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID ` --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'my-container-app' | project ContainerAppName_s, Log_s, TimeGenerated" ` --out table
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
az storage account create \
# [PowerShell](#tab/powershell)
-```powershell
-New-AzStorageAccount -ResourceGroupName $RESOURCE_GROUP `
- -Name $STORAGE_ACCOUNT `
- -Location $LOCATION `
- -SkuName Standard_RAGRS
+```azurecli
+az storage account create `
+ --name $STORAGE_ACCOUNT `
+ --resource-group $RESOURCE_GROUP `
+ --location "$LOCATION" `
+ --sku Standard_RAGRS `
+ --kind StorageV2
```
STORAGE_ACCOUNT_KEY=`az storage account keys list --resource-group $RESOURCE_GRO
# [PowerShell](#tab/powershell)
-```powershell
-$STORAGE_ACCOUNT_KEY=(Get-AzStorageAccountKey -ResourceGroupName $RESOURCE_GROUP -AccountName $STORAGE_ACCOUNT)| Where-Object -Property KeyName -Contains 'key1' | Select-Object -ExpandProperty Value
+```azurecli
+$STORAGE_ACCOUNT_KEY=(az storage account keys list --resource-group $RESOURCE_GROUP --account-name $STORAGE_ACCOUNT --query '[0].value' --out tsv)
``` - ### Configure the state store component
az containerapp env dapr-component set \
# [PowerShell](#tab/powershell)
-```powershell
+```azurecli
az containerapp env dapr-component set ` --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP ` --dapr-component-name statestore `
az monitor log-analytics query \
# [PowerShell](#tab/powershell)
-```powershell
-$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
+```azurecli
+$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`
+(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.appLogsConfiguration.logAnalyticsConfiguration.customerId --out tsv)
-$queryResults = Invoke-AzOperationalInsightsQuery -WorkspaceId $LOG_ANALYTICS_WORKSPACE_CLIENT_ID -Query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5"
-$queryResults.Results
+az monitor log-analytics query `
+ --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
+ --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5" `
+ --out table
```
az group delete \
# [PowerShell](#tab/powershell)
-```powershell
-Remove-AzResourceGroup -Name $RESOURCE_GROUP -Force
+```azurecli
+az group delete `
+ --resource-group $RESOURCE_GROUP
```
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
As you create a custom VNET, keep in mind the following situations:
- Each [revision](revisions.md) is assigned an IP address in the subnet. - You can restrict inbound requests to the environment exclusively to the VNET by deploying the environment as [internal](vnet-custom-internal.md).
-As you begin to design the network around your container app, refer to [Plan virtual networks](/azure/virtual-network/virtual-network-vnet-plan-design-arm) for important concerns surrounding running virtual networks on Azure.
+As you begin to design the network around your container app, refer to [Plan virtual networks](../virtual-network/virtual-network-vnet-plan-design-arm.md) for important concerns surrounding running virtual networks on Azure.
:::image type="content" source="media/networking/azure-container-apps-virtual-network.png" alt-text="Diagram of how Azure Container Apps environments use an existing V NET, or you can provide your own.":::
When you deploy an internal or an external environment into your own network, a
## Next steps - [Deploy with an external environment](vnet-custom.md)-- [Deploy with an internal environment](vnet-custom-internal.md)
+- [Deploy with an internal environment](vnet-custom-internal.md)
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
To complete this project, you'll need the following items:
| Requirement | Instructions | |--|--|
-| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal?tabs=current) for details. |
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
| GitHub Account | Sign up for [free](https://github.com/join). | | git | [Install git](https://git-scm.com/downloads) | | Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
To complete this project, you'll need the following items:
| Requirement | Instructions | |--|--|
-| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal?tabs=current) for details. |
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
| GitHub Account | Sign up for [free](https://github.com/join). | | git | [Install git](https://git-scm.com/downloads) | | Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
az acr create `
## Build your application
-With [ACR tasks](/azure/container-registry/container-registry-tasks-overview), you can build and push the docker image for the album API without installing Docker locally.
+With [ACR tasks](../container-registry/container-registry-tasks-overview.md), you can build and push the docker image for the album API without installing Docker locally.
### Build the container with ACR
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
There are two scale properties that apply to all rules in your container app:
| Scale property | Description | Default value | Min value | Max value | ||||||
-| `minReplicas` | Minimum number of replicas running for your container app. | 0 | 0 | 10 |
-| `maxReplicas` | Maximum number of replicas running for your container app. | n/a | 1 | 10 |
+| `minReplicas` | Minimum number of replicas running for your container app. | 0 | 0 | 30 |
+| `maxReplicas` | Maximum number of replicas running for your container app. | 10 | 1 | 30 |
- If your container app scales to zero, then you aren't billed. - Individual scale rules are defined in the `rules` array. - If you want to ensure that an instance of your application is always running, set `minReplicas` to 1 or higher. - Replicas not processing, but that remain in memory are billed in the "idle charge" category.-- Changes to scaling rules are a [revision-scope](overview.md) change.-- When using non-HTTP event scale rules, setting the `properties.configuration.activeRevisionsMode` property of the container app to `single` is recommended.---
+- Changes to scaling rules are a [revision-scope](revisions.md#revision-scope-changes) change.
+- It's recommended to set the `properties.configuration.activeRevisionsMode` property of the container app to `single` when using non-HTTP event scale rules.
+- Container Apps implements the KEDA ScaledObject with the following default settings:
+ - pollingInterval: 30 seconds
+ - cooldownPeriod: 300 seconds
## Scale triggers
With an HTTP scaling rule, you have control over the threshold that determines w
| Scale property | Description | Default value | Min value | Max value | ||||||
-| `concurrentRequests`| Once the number of requests exceeds this then another replica is added. Replicas will continue to be added up to the `maxReplicas` amount as the number of concurrent requests increase. | 10 | 1 | n/a |
+| `concurrentRequests`| When the number of concurrent requests exceeds this value, another replica is added. Replicas will continue to be added, up to the `maxReplicas` amount, as the number of concurrent requests increases. | 10 | 1 | n/a |
In the following example, the container app scales out up to five replicas and can scale down to zero. The scaling threshold is set to 100 concurrent requests per second.
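For reference, a configuration matching that description could look roughly like the following in the container app's `scale` section. This is a sketch, not the article's own example; the rule name is illustrative.

```json
"scale": {
  "minReplicas": 0,
  "maxReplicas": 5,
  "rules": [
    {
      "name": "http-scale-rule",
      "http": {
        "metadata": {
          "concurrentRequests": "100"
        }
      }
    }
  ]
}
```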
In the following example, the container app scales out up to five replicas and c
:::image type="content" source="media/scalers/http-scale-rule.png" alt-text="A screenshot showing how to add an h t t p scale rule.":::
-1. Select **Create** when you are done.
+1. Select **Create** when you're done.
:::image type="content" source="media/scalers/create-http-scale-rule.png" alt-text="A screenshot showing the newly created http scale rule."::: ## Event-driven
-Container Apps can scale based of a wide variety of event types. Any event supported by [KEDA](https://keda.sh/docs/scalers/), is supported in Container Apps.
+Container Apps can scale based on a wide variety of event types. Any event supported by [KEDA](https://keda.sh/docs/scalers/) is supported in Container Apps.
Each event type features different properties in the `metadata` section of the KEDA definition. Use these properties to define a scale rule in Container Apps.
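For example, an event-driven rule based on the KEDA Azure Storage Queue scaler could be sketched as follows; the queue name, queue length, and secret name are illustrative, and the connection string is referenced through a Container Apps secret rather than inlined.

```json
"rules": [
  {
    "name": "queue-based-autoscaling",
    "custom": {
      "type": "azure-queue",
      "metadata": {
        "queueName": "myqueue",
        "queueLength": "20"
      },
      "auth": [
        {
          "secretRef": "queue-connection-string",
          "triggerParameter": "connection"
        }
      ]
    }
  }
]
```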
The container app scales according to the following behavior:
... "scale": { "minReplicas": "0",
- "maxReplicas": "10",
+ "maxReplicas": "30",
"rules": [ { "name": "queue-based-autoscaling",
To create a custom scale trigger, first create a connection string secret to aut
1. Select **Add**, and then enter your secret key/value information.
-1. Select **Add** when you are done.
+1. Select **Add** when you're done.
:::image type="content" source="media/scalers/connection-string.png" alt-text="A screenshot showing how to create a connection string.":::
To create a custom scale trigger, first create a connection string secret to aut
:::image type="content" source="media/scalers/add-scale-rule.png" alt-text="A screenshot showing how to add a scale rule.":::
-1. Enter a **Rule name**, select **Custom** and enter a **Custom rule type**. Enter your **Secret reference** and **Trigger parameter** and then add your **Metadata** parameters. select **Add** when you are done.
+1. Enter a **Rule name**, select **Custom**, and enter a **Custom rule type**. Enter your **Secret reference** and **Trigger parameter**, and then add your **Metadata** parameters. Select **Add** when you're done.
:::image type="content" source="media/scalers/custom-scaler.png" alt-text="A screenshot showing how to configure a custom scale rule.":::
-1. Select **Create** when you are done.
+1. Select **Create** when you're done.
> [!NOTE] > In multiple revision mode, adding a new scale trigger creates a new revision of your application but your old revision remains available with the old scale rules. Use the **Revision management** page to manage their traffic allocations.
Azure Container Apps supports KEDA ScaledObjects and all of the available [KEDA
... "scale": { "minReplicas": "0",
- "maxReplicas": "10",
+ "maxReplicas": "30",
"rules": [ { "name": "<YOUR_TRIGGER_NAME>",
Azure Container Apps supports KEDA ScaledObjects and all of the available [KEDA
} ```
-The following is an example of setting up an [Azure Storage Queue](https://keda.sh/docs/scalers/azure-storage-queue/) scaler that you can configure to auto scale based on Azure Storage Queues.
+The following YAML is an example of setting up an [Azure Storage Queue](https://keda.sh/docs/scalers/azure-storage-queue/) scaler that you can configure to autoscale based on Azure Storage Queues.
-Below is the KEDA trigger specification for an Azure Storage Queue. To set up a scale rule in Azure Container Apps, you will need the trigger `type` and any other required parameters. You can also add other optional parameters which vary based on the scaler you are using.
+Below is the KEDA trigger specification for an Azure Storage Queue. To set up a scale rule in Azure Container Apps, you'll need the trigger `type` and any other required parameters. You can also add other optional parameters, which vary based on the scaler you're using.
In this example, you need the `accountName` and the name of the cloud environment that the queue belongs to (`cloud`) to set up your scaler in Azure Container Apps.
Now your JSON config file should look like this:
... "scale": { "minReplicas": "0",
- "maxReplicas": "10",
+ "maxReplicas": "30",
"rules": [ { "name": "queue-trigger",
Now your JSON config file should look like this:
``` > [!NOTE]
-> KEDA ScaledJobs are not supported. See [KEDA scaling Jobs](https://keda.sh/docs/concepts/scaling-jobs/#overview) for more details.
+> KEDA ScaledJobs are not supported. For more information, see [KEDA Scaling Jobs](https://keda.sh/docs/concepts/scaling-jobs/#overview).
## CPU
The following example shows how to create a memory scaling rule.
## Considerations -- Vertical scaling is not supported.
+- Vertical scaling isn't supported.
- Replica quantities are a target amount, not a guarantee.
- - Even if you set `maxReplicas` to `1`, there is no assurance of thread safety.
--- If you are using [Dapr actors](https://docs.dapr.io/developing-applications/building-blocks/actors/actors-overview/) to manage states, you should keep in mind that scaling to zero is not supported. Dapr uses virtual actors to manage asynchronous calls which means their in-memory representation is not tied to their identity or lifetime.
+
+- If you're using [Dapr actors](https://docs.dapr.io/developing-applications/building-blocks/actors/actors-overview/) to manage state, you should keep in mind that scaling to zero isn't supported. Dapr uses virtual actors to manage asynchronous calls, which means their in-memory representation isn't tied to their identity or lifetime.
## Next steps
container-apps Storage Mounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts.md
See the [ARM template API specification](azure-resource-manager-api-spec.md) for
## Azure Files
-You can mount a file share from [Azure Files](/azure/storage/files/) as a volume inside a container.
+You can mount a file share from [Azure Files](../storage/files/index.yml) as a volume inside a container.
Azure Files storage has the following characteristics:
To enable Azure Files storage in your container, you need to set up your contain
| Requirement | Instructions | |--|--| | Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). |
-| Azure Storage account | [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-cli#create-a-storage-account-1). |
+| Azure Storage account | [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-cli#create-a-storage-account-1). |
| Azure Container Apps environment | [Create a container apps environment](environment.md). | ### Configuration
The following ARM template snippets demonstrate how to add an Azure Files share
See the [ARM template API specification](azure-resource-manager-api-spec.md) for a full example.
container-instances Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/availability-zones.md
# Deploy an Azure Container Instances (ACI) container group in an availability zone (preview)
-An [availability zone][availability-zone-overview] is a physically separate zone in an Azure region. You can use availability zones to protect your containerized applications from an unlikely failure or loss of an entire data center. Three types of Azure services support availability zones: *zonal*, *zone-redundant*, and *always-available* services. You can learn more about these types of services and how they promote resiliency in the [Highly available services section of Azure services that support availability zones](/azure/availability-zones/az-region#highly-available-services).
+An [availability zone][availability-zone-overview] is a physically separate zone in an Azure region. You can use availability zones to protect your containerized applications from an unlikely failure or loss of an entire data center. Three types of Azure services support availability zones: *zonal*, *zone-redundant*, and *always-available* services. You can learn more about these types of services and how they promote resiliency in the [Highly available services section of Azure services that support availability zones](../availability-zones/az-region.md#highly-available-services).
Azure Container Instances (ACI) supports *zonal* container group deployments, meaning the instance is pinned to a specific, self-selected availability zone. The availability zone is specified at the container group level. Containers within a container group can't have unique availability zones. To change your container group's availability zone, you must delete the container group and create another container group with the new availability zone.
container-registry Container Registry Check Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-check-health.md
Fetch refresh token for registry 'myregistry.azurecr.io' : OK
Fetch access token for registry 'myregistry.azurecr.io' : OK ```
+## Check if registry is configured with quarantine
+
+Once you enable quarantine on a container registry, every image you publish to that registry is quarantined. Any attempt to access or pull a quarantined image fails with an error. For more information, see [Pull the quarantined image](https://github.com/Azure/acr/tree/main/docs/preview/quarantine#pull-the-quarantined-image).
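One way to check whether quarantine is configured is to read the registry's policy block through the generic resource commands. The `properties.policies.quarantinePolicy` path is an assumption based on the registry resource's policy properties, so verify it against your registry's API version.

```azurecli
# Look up the registry's resource ID, then inspect its quarantine policy (assumed property path).
REGISTRY_ID=$(az acr show --name myregistry --query id --output tsv)

az resource show \
  --ids $REGISTRY_ID \
  --query "properties.policies.quarantinePolicy" \
  --output json
```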
+ ## Next steps For details about error codes returned by the [az acr check-health][az-acr-check-health] command, see the [Health check error reference](container-registry-health-error-reference.md).
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-private-link.md
az acr update --name $REGISTRY_NAME --public-network-enabled false
Consider the following options to execute `az acr build` successfully. > [!NOTE]
-> Once you disable public network [access here](/azure/container-registry/container-registry-private-link#disable-public-access), then `az acr build` commands will no longer work.
+> Once you [disable public network access](#disable-public-access), `az acr build` commands will no longer work.
-1. Assign a [dedicated agent pool.](/azure/container-registry/tasks-agent-pools#Virtual-network-support)
-2. If agent pool is not available in the region, add the regional [Azure Container Registry Service Tag IPv4](/azure/virtual-network/service-tags-overview#use-the-service-tag-discovery-api) to the [firewall access rules.](/azure/container-registry/container-registry-firewall-access-rules#allow-access-by-ip-address-range)
-3. Create an ACR task with a managed identity, and enable trusted services to [access network restricted ACR.](/azure/container-registry/allow-access-trusted-services#example-acr-tasks)
+1. Assign a [dedicated agent pool](./tasks-agent-pools.md).
+2. If an agent pool isn't available in the region, add the regional [Azure Container Registry Service Tag IPv4](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) to the [firewall access rules](./container-registry-firewall-access-rules.md#allow-access-by-ip-address-range).
+3. Create an ACR task with a managed identity, and enable trusted services to [access the network-restricted ACR](./allow-access-trusted-services.md#example-acr-tasks).
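As a sketch of option 2, you could discover the regional address prefixes with the service tag discovery API and then allow one of them on the registry firewall. The region, registry name, and address prefix below are placeholders.

```azurecli
# Discover the regional AzureContainerRegistry address prefixes (service tag discovery).
az network list-service-tags \
  --location westus \
  --query "values[?name=='AzureContainerRegistry.WestUS'].properties.addressPrefixes" \
  --output json

# Allow one of the returned ranges on the registry firewall (Premium SKU).
az acr network-rule add \
  --name myregistry \
  --ip-address <address-prefix-from-output>
```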
## Validate private link connection
container-registry Container Registry Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-support-policies.md
This article provides details about Azure Container Registry (ACR) support polic
>* [Encrypt using Customer managed keys](container-registry-customer-managed-keys.md) >* [Enable Content trust](container-registry-content-trust.md) >* [Scan Images using Azure Security Center](../defender-for-cloud/defender-for-container-registries-introduction.md)
->* [ACR Tasks](/azure/container-registry/container-registry-tasks-overview)
+>* [ACR Tasks](./container-registry-tasks-overview.md)
>* [Import container images to ACR](container-registry-import-images.md) >* [Image locking in ACR](container-registry-image-lock.md) >* [Synchronize content with ACR using Connected Registry](intro-connected-registry.md)
This article provides details about Azure Container Registry (ACR) support polic
## Upstream bugs The ACR support team will identify the root cause of every issue raised. The team will report all the identified bugs as an [issue in the ACR repository](https://github.com/Azure/acr/issues) with supporting details. The engineering team will review and provide a workaround solution, bug fix, or upgrade with a new release timeline. All the bug fixes are integrated from upstream.
-Customers can watch the issues, bug fixes, add more details, and follow the new releases.
+Customers can watch the issues and bug fixes, add more details, and follow the new releases.
cosmos-db Audit Restore Continuous https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-restore-continuous.md
# Audit the point in time restore action for continuous backup mode in Azure Cosmos DB [!INCLUDE[appliesto-all-apis-except-cassandra](includes/appliesto-all-apis-except-cassandra.md)]
-Azure Cosmos DB provides you the list of all the point in time restores for continuous mode that were performed on a Cosmos DB account using [Activity Logs](/azure/azure-monitor/essentials/activity-log). Activity logs can be viewed for any Cosmos DB account from the **Activity Logs** page in the Azure portal. The Activity Log shows all the operations that were triggered on the specific account. When a point in time restore is triggered, it shows up as `Restore Database Account` operation on the source account as well as the target account. The Activity Log for the source account can be used to audit restore events, and the activity logs on the target account can be used to get the updates about the progress of the restore.
+Azure Cosmos DB provides a list of all the point in time restores for continuous mode that were performed on a Cosmos DB account using [Activity Logs](../azure-monitor/essentials/activity-log.md). Activity logs can be viewed for any Cosmos DB account from the **Activity Logs** page in the Azure portal. The Activity Log shows all the operations that were triggered on the specific account. When a point in time restore is triggered, it shows up as a `Restore Database Account` operation on both the source account and the target account. The Activity Log for the source account can be used to audit restore events, and the activity logs on the target account can be used to get updates about the progress of the restore.
## Audit the restores that were triggered on a live database account
For the accounts that were already deleted, there would not be any database acco
:::image type="content" source="media/restore-account-continuous-backup/continuous-backup-restore-details-deleted-json.png" alt-text="Azure Cosmos DB restore audit activity log." lightbox="media/restore-account-continuous-backup/continuous-backup-restore-details-deleted-json.png":::
-The activity logs can also be accessed using Azure CLI or Azure PowerShell. For more information on activity logs, review [Azure Activity log - Azure Monitor](/azure/azure-monitor/essentials/activity-log).
+The activity logs can also be accessed using Azure CLI or Azure PowerShell. For more information on activity logs, review [Azure Activity log - Azure Monitor](../azure-monitor/essentials/activity-log.md).
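For example, a sketch of listing recent restore-related entries with the Azure CLI could look like the following; the resource group name is a placeholder, and the filter relies on the `Restore Database Account` display name mentioned above.

```azurecli
# List recent activity-log entries for the resource group that contains the Cosmos DB account,
# keeping only entries whose operation display name mentions "Restore Database Account".
az monitor activity-log list \
  --resource-group <resource-group-name> \
  --offset 30d \
  --query "[?contains(operationName.localizedValue, 'Restore Database Account')].{Operation:operationName.localizedValue, Status:status.value, Timestamp:eventTimestamp}" \
  --output table
```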
## Track the progress of the restore operation
The account status would be *Creating*, but it would have an Activity Log page.
* Provision an account with continuous backup by using the [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), the [Azure CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). * [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode. * Learn about the [resource model of continuous backup mode](continuous-backup-restore-resource-model.md).
- * Explore the [Frequently asked questions for continuous mode](continuous-backup-restore-frequently-asked-questions.yml).
+ * Explore the [Frequently asked questions for continuous mode](continuous-backup-restore-frequently-asked-questions.yml).
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
For example, if you have 1 TB of data in two regions then:
* Restore cost is calculated as (1000 * 0.15) = $150 per restore > [!TIP]
-> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](/azure/azure-monitor/insights/cosmosdb-insights-overview#view-utilization-and-performance-metrics-for-azure-cosmos-db).
+> For more information about measuring the current data usage of your Azure Cosmos DB account, see [Explore Azure Monitor Cosmos DB insights](../azure-monitor/insights/cosmosdb-insights-overview.md#view-utilization-and-performance-metrics-for-azure-cosmos-db).
## Customer-managed keys
Currently the point in time restore functionality has the following limitations:
* Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template). * [Migrate to an account from periodic backup to continuous backup](migrate-continuous-backup.md). * [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
-* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
+* [Resource model of continuous backup mode](continuous-backup-restore-resource-model.md)
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
You can test the subpartitioning feature using the latest version of the local e
.\CosmosDB.Emulator.exe /EnablePreview ```
-For more information, see [Azure Cosmos DB emulator](/azure/cosmos-db/local-emulator).
+For more information, see [Azure Cosmos DB emulator](./local-emulator.md).
## Limitations and known issues
For more information, see [Azure Cosmos DB emulator](/azure/cosmos-db/local-emul
* See the FAQ on [hierarchical partition keys.](hierarchical-partition-keys-faq.yml) * Learn more about [partitioning in Azure Cosmos DB.](partitioning-overview.md)
-* Learn more about [using Azure Resource Manager templates with Azure Cosmos DB.](/azure/templates/microsoft.documentdb/databaseaccounts)
+* Learn more about [using Azure Resource Manager templates with Azure Cosmos DB.](/azure/templates/microsoft.documentdb/databaseaccounts)
cosmos-db How To Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md
- Title: Create and manage intra-account container copy jobs in Azure Cosmos DB
-description: Learn how to create, monitor, and manage container copy jobs within an Azure Cosmos DB account using CLI commands.
--- Previously updated : 04/18/2022---
-# Create and manage intra-account container copy jobs in Azure Cosmos DB (Preview)
-
-[Container copy jobs](intra-account-container-copy.md) create offline copies of collections within an Azure Cosmos DB account.
-
-This article describes how to create, monitor, and manage intra-account container copy jobs using Azure CLI commands.
-
-## Set shell variables
-
-First, set all of the variables that each individual script will use.
-
-```azurecli-interactive
-$accountName = "<cosmos-account-name>"
-$resourceGroup = "<resource-group-name>"
-$jobName = ""
-$sourceDatabase = ""
-$sourceContainer = ""
-$destinationDatabase = ""
-$destinationContainer = ""
-```
-
-## Create an intra-account container copy job for SQL API account
-
-Create a job to copy a container within an Azure Cosmos DB SQL API account:
-
-```azurecli-interactive
-az cosmosdb dts copy \
- --resource-group $resourceGroup \
- --job-name $jobName \
- --account-name $accountName \
- --source-sql-container database=$sourceDatabase container=$sourceContainer \
- --dest-sql-container database=$destinationDatabase container=$destinationContainer
-```
-
-## Create intra-account container copy job for Cassandra API account
-
-Create a job to copy a container within an Azure Cosmos DB Cassandra API account:
-
-```azurecli-interactive
-az cosmosdb dts copy \
- --resource-group $resourceGroup \
- --job-name $jobName \
- --account-name $accountName \
- --source-cassandra-table keyspace=$sourceKeySpace table=$sourceTable \
- --dest-cassandra-table keyspace=$destinationKeySpace table=$destinationTable
-```
-
-## Monitor the progress of a container copy job
-
-View the progress and status of a copy job:
-
-```azurecli-interactive
-az cosmosdb dts show \
- --account-name $accountName \
- --resource-group $resourceGroup \
- --job-name $jobName
-```
-
-## List all the container copy jobs created in an account
-
-To list all the container copy jobs created in an account:
-
-```azurecli-interactive
-az cosmosdb dts list \
- --account-name $accountName \
- --resource-group $resourceGroup
-```
-
-## Pause a container copy job
-
-In order to pause an ongoing container copy job, you may use the command:
-
-```azurecli-interactive
-az cosmosdb dts pause \
- --account-name $accountName \
- --resource-group $resourceGroup \
- --job-name $jobName
-```
-
-## Resume a container copy job
-
-In order to resume a paused container copy job, you may use the command:
-
-```azurecli-interactive
-az cosmosdb dts resume \
- --account-name $accountName \
- --resource-group $resourceGroup \
- --job-name $jobName
-```
-
-## Next steps
--- For more information about intra-account container copy jobs, see [Container copy jobs](intra-account-container-copy.md).
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
- Title: Intra-account container copy jobs in Azure Cosmos DB
-description: Learn about container data copy capability within an Azure Cosmos DB account.
--- Previously updated : 04/18/2022---
-# Intra-account container copy jobs in Azure Cosmos DB (Preview)
-
-You can perform offline container copy within an Azure Cosmos DB account using container copy jobs.
-
-You may need to copy data within your Azure Cosmos DB account if you want to achieve any of these scenarios:
-
-* Copy all items from one container to another.
-* Change the [granularity at which throughput is provisioned - from database to container](set-throughput.md) and vice-versa.
-* Change the [partition key](partitioning-overview.md#choose-partitionkey) of a container.
-* Update the [unique keys](unique-keys.md) for a container.
-* Rename a container/database.
-* Adopt new features that are only supported on new containers.
-
-Intra-account container copy jobs can be currently [created and managed using CLI commands](how-to-container-copy.md).
-
-## Get started
-
-To get started using container copy jobs, enroll in the preview by filing a support ticket in the [Azure portal](https://portal.azure.com).
-
-## How does intra-account container copy work?
-
-Intra-account container copy jobs perform offline data copy using the source container's incremental change feed log.
-
-* Within the platform, we allocate two 4-vCPU 16-GB memory server-side compute instances per Azure Cosmos DB account by default.
-* The instances are allocated when one or more container copy jobs are created within the account.
-* The container copy jobs run on these instances.
-* The instances are shared by all the container copy jobs running within the same account.
-* The platform may de-allocate the instances if they're idle for >15 mins.
-
-> [!NOTE]
-> We currently only support offline container copy. So, we strongly recommend that you stop performing any operations on the source container before beginning the container copy.
-> Item deletions and updates done on the source container after beginning the copy job may not be captured. Hence, continuing to perform operations on the source container while the copy job is in progress may result in missing data in the target container.
-
-## Overview of steps needed to do a container copy
-
-1. Stop the operations on the source container by pausing the application instances or any clients connecting to it.
-2. [Create the container copy job](how-to-container-copy.md).
-3. [Monitor the progress of the container copy job](how-to-container-copy.md#monitor-the-progress-of-a-container-copy-job) and wait until it's completed.
-4. Resume the operations by appropriately pointing the application or client to the source or target container copy as intended.
-
-## Factors affecting the rate of container copy job
-
-The rate of container copy job progress is determined by these factors:
-
-* Source container/database throughput setting.
-
-* Target container/database throughput setting.
-
-* Server-side compute instances allocated to the Azure Cosmos DB account for performing the data transfer.
-
- > [!IMPORTANT]
- > The default SKU offers two 4-vCPU 16-GB server-side instances per account. You may opt to sign up for [larger SKUs](#large-skus-preview) in preview.
-
-## FAQs
-
-### Is there an SLA for the container copy jobs?
-
-Container copy jobs are currently supported on a best-effort basis. We don't provide any SLA guarantees for the time taken to complete these jobs.
-
-### Can I create multiple container copy jobs within an account?
-
-Yes, you can create multiple jobs within the same account. The jobs will run consecutively. You can [list all the jobs](how-to-container-copy.md#list-all-the-container-copy-jobs-created-in-an-account) created within an account and monitor their progress.
-
-### Can I copy an entire database within the Azure Cosmos DB account?
-
-You'll have to create a job for each collection in the database.
-
-### I have an Azure Cosmos DB account with multiple regions. In which region will the container copy job run?
-
-The container copy job will run in the write region. If the account is configured with multi-region writes, the job will run in one of the regions from the write regions list.
-
-### What happens to the container copy jobs when the account's write region changes?
-
-The account's write region may change in the rare scenario of a region outage or due to manual failover. In such a scenario, incomplete container copy jobs created within the account would fail. You would need to recreate those jobs. Recreated jobs would then run against the new (current) write region.
-
-## Large SKUs preview
-
-If you want to run the container copy jobs faster, you may do so by adjusting one of the [factors that affect the rate of the copy job](#factors-affecting-the-rate-of-container-copy-job). In order to adjust the configuration of the server-side compute instances, you may sign up for the "Large SKU support for container copy" preview.
-
-This preview will allow you to choose a larger SKU size for the server-side instances. Large SKU sizes are billable at a higher rate. You can also choose a node count of up to five of these instances.
-
-## Next Steps
--- You can learn about [how to create, monitor and manage container copy jobs within Azure Cosmos DB account using CLI commands](how-to-container-copy.md).
cosmos-db Monitor Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-cosmos-db.md
Azure Cosmos DB stores data in the following tables.
### Sample Kusto queries
-Prior to using Log Analytics to issue Kusto queries, you must [enable diagnostic logs for control plane operations](/azure/cosmos-db/audit-control-plane-logs#enable-diagnostic-logs-for-control-plane-operations). When enabling diagnostic logs, you will select between storing your data in a single [AzureDiagnostics table (legacy)](/azure/azure-monitor/essentials/resource-logs#azure-diagnostics-mode) or [resource-specific tables](/azure/azure-monitor/essentials/resource-logs#resource-specific).
+Prior to using Log Analytics to issue Kusto queries, you must [enable diagnostic logs for control plane operations](./audit-control-plane-logs.md#enable-diagnostic-logs-for-control-plane-operations). When enabling diagnostic logs, you will select between storing your data in a single [AzureDiagnostics table (legacy)](../azure-monitor/essentials/resource-logs.md#azure-diagnostics-mode) or [resource-specific tables](../azure-monitor/essentials/resource-logs.md#resource-specific).
When you select **Logs** from the Azure Cosmos DB menu, Log Analytics is opened with the query scope set to the current Azure Cosmos DB account. Log queries will only include data from that resource. > [!IMPORTANT] > If you want to run a query that includes data from other accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md).
-Here are some queries that you can enter into the **Log search** search bar to help you monitor your Azure Cosmos resources. The exact text of the queries will depend on the [collection mode](/azure/azure-monitor/essentials/resource-logs#select-the-collection-mode) you selected when you enabled diagnostics logs.
+Here are some queries that you can enter into the **Log search** search bar to help you monitor your Azure Cosmos resources. The exact text of the queries will depend on the [collection mode](../azure-monitor/essentials/resource-logs.md#select-the-collection-mode) you selected when you enabled diagnostics logs.
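For example, a minimal sketch of a legacy-mode query could look like the following; it assumes the `DataPlaneRequests` category is enabled for the account.

```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
| where Category == "DataPlaneRequests"
| summarize RequestCount = count() by OperationName, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```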
#### [AzureDiagnostics table (legacy)](#tab/azure-diagnostics)
To learn more, see the [Azure monitoring REST API](../azure-monitor/essentials/r
## Next steps * See [Azure Cosmos DB monitoring data reference](monitor-cosmos-db-reference.md) for a reference of the logs and metrics created by Azure Cosmos DB.
-* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
The script in this article creates an Azure Cosmos DB Cassandra API account, key
- This script requires Azure CLI version 2.12.1 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/quickstart). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
az group delete --name $resourceGroup
## Next steps
-[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
The script in this article creates an Azure Cosmos DB Gremlin API account, datab
- This script requires Azure CLI version 2.30 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/quickstart). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
az group delete --name $resourceGroup
## Next steps
-[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
The script in this article creates an Azure Cosmos DB Gremlin API serverless acc
- This script requires Azure CLI version 2.30 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/quickstart). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
az group delete --name $resourceGroup
## Next steps
-[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
+[Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/serverless.md
Any container that is created in a serverless account is a serverless container.
- Serverless containers can store a maximum of 50 GB of data and indexes. > [!NOTE]
-> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](/azure/azure-resource-manager/management/preview-features).
+> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
## Monitoring your consumption
Get started with serverless with the following articles:
- [Request Units in Azure Cosmos DB](request-units.md) - [Choose between provisioned throughput and serverless](throughput-serverless.md)-- [Pricing model in Azure Cosmos DB](how-pricing-works.md)
+- [Pricing model in Azure Cosmos DB](how-pricing-works.md)
cosmos-db Sql Query Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-join.md
The results are:
``` > [!IMPORTANT]
-> This example uses mulitple JOIN expressions in a single query. There is a maximum amount of JOINs that can be used in a single query. For more information, see [SQL query limits](/azure/cosmos-db/concepts-limits#sql-query-limits).
+> This example uses multiple JOIN expressions in a single query. There is a maximum number of JOINs that can be used in a single query. For more information, see [SQL query limits](../concepts-limits.md#sql-query-limits).
The following extension of the preceding example performs a double join. You could view the cross product as the following pseudo-code:
For example, consider the earlier query that projected the familyName, child's g
- [Getting started](sql-query-getting-started.md) - [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmosdb-dotnet)-- [Subqueries](sql-query-subquery.md)
+- [Subqueries](sql-query-subquery.md)
cosmos-db How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-python.md
This quickstart shows how to access the Azure Cosmos DB [Table API](introduction
The sample application is written in [Python 3.6](https://www.python.org/downloads/), though the principles apply to all Python 3.6+ applications. You can use [Visual Studio Code](https://code.visualstudio.com/) as an IDE.
-If you don't have an [Azure subscription](/azure/guides/developer/azure-developer-guide#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/dotnet) before you begin.
+If you don't have an [Azure subscription](../../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/dotnet) before you begin.
## Sample application
Remove-AzResourceGroup -Name $resourceGroupName
In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Table API. > [!div class="nextstepaction"]
-> [Import table data to the Table API](table-import.md)
+> [Import table data to the Table API](table-import.md)
cosmos-db Throughput Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/throughput-serverless.md
Azure Cosmos DB is available in two different capacity modes: [provisioned throu
| Performance | < 10-ms latency for point-reads and writes covered by SLA | < 10-ms latency for point-reads and < 30 ms for writes covered by SLO | | Billing model | Billing is done on a per-hour basis for the RU/s provisioned, regardless of how many RUs were consumed. | Billing is done on a per-hour basis for the number of RUs consumed by your database operations. |
-<sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](/azure/azure-resource-manager/management/preview-features).
+<sup>1</sup> Serverless containers up to 1 TB are currently in preview with Azure Cosmos DB. To try the new feature, register the *"Azure Cosmos DB Serverless 1 TB Container Preview"* [preview feature in your Azure subscription](../azure-resource-manager/management/preview-features.md).
## Estimating your expected consumption
For more information, see [estimating serverless costs](plan-manage-costs.md#est
- Read more about [provisioning throughput on Azure Cosmos DB](set-throughput.md) - Read more about [Azure Cosmos DB serverless](serverless.md)-- Get familiar with the concept of [Request Units](request-units.md)
+- Get familiar with the concept of [Request Units](request-units.md)
cost-management-billing Create Customer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-customer-subscription.md
+
+ Title: Create a subscription for a partner's customer
+
+description: Learn how a Microsoft Partner creates a subscription for a customer in the Azure portal.
+++++ Last updated : 05/25/2022+++
+# Create a subscription for a partner's customer
+
+This article helps a Microsoft Partner with a [Microsoft Partner Agreement](https://www.microsoft.com/licensing/news/introducing-microsoft-partner-agreement) create a [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) subscription for their customer.
+
+To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](view-all-accounts.md).
+
+## Permission required to create Azure subscriptions
+
+You need the following permissions to create customer subscriptions:
+
+- Global Admin and Admin Agent role in the CSP partner organization.
+
+For more information, see [Partner Center - Assign users roles and permissions](/partner-center/permissions-overview). The user needs to sign in to the partner tenant to create Azure subscriptions.
+
+## Create a subscription as a partner for a customer
+
+Partners with a Microsoft Partner Agreement use the following steps to create a new Microsoft Azure Plan subscription for their customers. The subscription is created under the partner's billing account and billing profile.
+
+1. Sign in to the Azure portal using your Partner Center account.
+ Make sure you are in your Partner Center directory (tenant), not a customer's tenant.
+1. Navigate to **Cost Management + Billing**.
+1. Select the Billing scope for your billing account where the customer account resides.
+1. In the left menu under **Billing**, select **Customers**.
+ :::image type="content" source="./media/create-customer-subscription/customers-list.png" alt-text="Screenshot showing the Customers list where you see your list of customers." lightbox="./media/create-customer-subscription/customers-list.png" :::
+1. On the Customers page, select the customer. If you have only one customer, the selection is unavailable.
+1. In the left menu, under **Products + services**, select **All billing subscriptions**.
+1. On the Azure subscription page, select **+ Add** to create a subscription. Then select the type of subscription to add. For example, **Usage based/ Azure subscription**.
+ :::image type="content" source="./media/create-customer-subscription/all-billing-subscriptions-add.png" alt-text="Screenshot showing navigation to Add where you create a customer subscription." lightbox="./media/create-customer-subscription/all-billing-subscriptions-add.png" :::
+1. On the Basics tab, enter a subscription name.
+1. Select the partner's billing account.
+1. Select the partner's billing profile.
+1. Select the customer that you're creating the subscription for.
+1. If applicable, select a reseller.
+1. Next to **Plan**, select **Microsoft Azure Plan for DevTest** if the subscription will be used for development or testing workloads. Otherwise, select **Microsoft Azure Plan**.
+ :::image type="content" source="./media/create-customer-subscription/create-customer-subscription-basics-tab.png" alt-text="Screenshot showing the Basics tab where you enter basic information about the customer subscription." lightbox="./media/create-customer-subscription/create-customer-subscription-basics-tab.png" :::
+1. Optionally, select the Tags tab and then enter tag pairs for **Name** and **Value**.
+1. Select **Review + create**. You should see a message stating `Validation passed`.
+1. Verify that the subscription information is correct, then select **Create**. You'll see a notification that the subscription is getting created.
+
+After the new subscription is created, the customer can see it on the **Subscriptions** page.
+
+## Create an Azure subscription programmatically
+
+You can also create subscriptions programmatically. For more information, see [Create Azure subscriptions programmatically](programmatically-create-subscription.md).
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- [Add or change Azure subscription administrators](add-change-subscription-administrator.md)
+- [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Create management groups for resource organization and management](../../governance/management-groups/create-management-group-portal.md)
+- [Cancel your subscription for Azure](cancel-azure-subscription.md)
cost-management-billing Create Enterprise Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-enterprise-subscription.md
+
+ Title: Create an Enterprise Agreement subscription
+
+description: Learn how to add a new Enterprise Agreement subscription in the Azure portal. See information about billing account forms and view other available resources.
+++++ Last updated : 05/25/2022+++
+# Create an Enterprise Agreement subscription
+
+This article helps you create an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) subscription for yourself or for someone else in your current Azure Active Directory (Azure AD) directory/tenant. You may want another subscription to avoid hitting subscription quota limits, to create separate environments for security, or to isolate data for compliance reasons.
+
+If you want to create subscriptions for Microsoft Customer Agreements, see [Create a Microsoft Customer Agreement subscription](create-subscription.md). If you're a Microsoft Partner and you want to create a subscription for a customer, see [Create a subscription for a partner's customer](create-customer-subscription.md). Or, if you have a Microsoft Online Service Program (MOSP) billing account, also called pay-as-you-go, you can create subscriptions starting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and then complete the process at https://signup.azure.com/.
+
+To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](view-all-accounts.md).
+
+## Permission required to create Azure subscriptions
+
+You need the following permissions to create subscriptions for an EA:
+
+- Account Owner role on the Enterprise Agreement enrollment. For more information, see [Understand Azure Enterprise Agreement administrative roles in Azure](understand-ea-roles.md).
+
+## Create an EA subscription
+
+Use the following information to create an EA subscription.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Subscriptions** and then select **Add**.
+ :::image type="content" source="./media/create-enterprise-subscription/subscription-add.png" alt-text="Screenshot showing the Subscription page where you Add a subscription." lightbox="./media/create-enterprise-subscription/subscription-add.png" :::
+1. On the Create a subscription page, on the **Basics** tab, type a **Subscription name**.
+1. Select the **Billing account** where the new subscription will get created.
+1. Select the **Enrollment account** where the subscription will get created.
+1. For **Offer type**, select **Enterprise Dev/Test** if the subscription will be used for development or testing workloads. Otherwise, select **Microsoft Azure Enterprise**.
+ :::image type="content" source="./media/create-enterprise-subscription/create-subscription-basics-tab-enterprise-agreement.png" alt-text="Screenshot showing the Basics tab where you enter basic information about the enterprise subscription." lightbox="./media/create-enterprise-subscription/create-subscription-basics-tab-enterprise-agreement.png" :::
+1. Select the **Advanced** tab.
+1. Select your **Subscription directory**. It's the Azure Active Directory (Azure AD) where the new subscription will get created.
+1. Select a **Management group**. It's the Azure AD management group that the new subscription is associated with. You can only select management groups in the current directory.
+1. Select one or more **Subscription owners**. You can select only users or service principals in the selected subscription directory. You can't select guest directory users. If you select a service principal, enter its App ID.
+ :::image type="content" source="./media/create-enterprise-subscription/create-subscription-advanced-tab.png" alt-text="Screenshot showing the Advanced tab where you specify the directory, management group, and owner for the EA subscription. " lightbox="./media/create-enterprise-subscription/create-subscription-advanced-tab.png" :::
+1. Select the **Tags** tab.
+1. Enter tag pairs for **Name** and **Value**.
+ :::image type="content" source="./media/create-enterprise-subscription/create-subscription-tags-tab.png" alt-text="Screenshot showing the tags tab where you enter tag and value pairs." lightbox="./media/create-enterprise-subscription/create-subscription-tags-tab.png" :::
+1. Select **Review + create**. You should see a message stating `Validation passed`.
+1. Verify that the subscription information is correct, then select **Create**. You'll see a notification that the subscription is getting created.
+
+After the new subscription is created, the account owner can see it on the **Subscriptions** page.
++
+## Create an Azure subscription programmatically
+
+You can also create subscriptions programmatically. For more information, see [Create Azure subscriptions programmatically](programmatically-create-subscription.md).
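As a hedged sketch of what programmatic creation can look like for an EA enrollment (the billing account and enrollment account IDs below are placeholders, and the `az account alias` commands ship in the Azure CLI `account` extension):

```azurecli
# Requires the account extension: az extension add --name account
# Placeholders for your EA enrollment number and enrollment account ID.
billingScope="/providers/Microsoft.Billing/billingAccounts/<enrollmentNumber>/enrollmentAccounts/<enrollmentAccountId>"

az account alias create \
    --name "sample-ea-subscription-alias" \
    --billing-scope "$billingScope" \
    --display-name "Dev Team Subscription" \
    --workload "Production"
```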
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- [Add or change Azure subscription administrators](add-change-subscription-administrator.md)
+- [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Create management groups for resource organization and management](../../governance/management-groups/create-management-group-portal.md)
+- [Cancel your subscription for Azure](cancel-azure-subscription.md)
cost-management-billing Create Subscription Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-subscription-request.md
+
+ Title: Create a Microsoft Customer Agreement subscription request
+
+description: Learn how to create an Azure subscription request in the Azure portal. See information about billing account forms and view other available resources.
+++++ Last updated : 05/25/2022+++
+# Create a Microsoft Customer Agreement subscription request
+
+This article helps you create a [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) subscription for someone else that's in a different Azure Active Directory (Azure AD) directory/tenant. After the request is created, the recipient accepts the subscription request. You may want another subscription to avoid hitting subscription quota limits, to create separate environments for security, or to isolate data for compliance reasons.
+
+If you instead want to create a subscription for yourself or for someone else in your current Azure Active Directory (Azure AD) directory/tenant, see [Create a Microsoft Customer Agreement subscription](create-subscription.md). If you want to create subscriptions for Enterprise Agreements, see [Create an EA subscription](create-enterprise-subscription.md). If you're a Microsoft Partner and you want to create a subscription for a customer, see [Create a subscription for a partner's customer](create-customer-subscription.md). Or, if you have a Microsoft Online Service Program (MOSP) billing account, also called pay-as-you-go, you can create subscriptions starting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and then complete the process at https://signup.azure.com/.
+
+To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](view-all-accounts.md).
+
+## Permission required to create Azure subscriptions
+
+You need one of the following permissions to create a Microsoft Customer Agreement (MCA) subscription request.
+
+- Owner or contributor role on the invoice section, billing profile or billing account.
+- Azure subscription creator role on the invoice section.
+
+For more information, see [Subscription billing roles and tasks](understand-mca-roles.md#subscription-billing-roles-and-tasks).
+
+## Create a subscription request
+
+The subscription creator uses the following procedure to create a subscription request for a person in a different Azure Active Directory (Azure AD). After creation, the request is sent to the subscription acceptor (recipient) by email.
+
+A link to the subscription request is also created. The creator can manually share the link with the acceptor.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Subscriptions** and then select **Add**.
+ :::image type="content" source="./media/create-subscription-request/subscription-add.png" alt-text="Screenshot showing the Subscription page where you Add a subscription." lightbox="./media/create-subscription-request/subscription-add.png" :::
+1. On the Create a subscription page, on the **Basics** tab, type a **Subscription name**.
+1. Select the **Billing account** where the new subscription will get created.
+1. Select the **Billing profile** where the subscription will get created.
+1. Select the **Invoice section** where the subscription will get created.
+1. Next to **Plan**, select **Microsoft Azure Plan for DevTest** if the subscription will be used for development or testing workloads. Otherwise, select **Microsoft Azure Plan**.
+ :::image type="content" source="./media/create-subscription-request/create-subscription-basics-tab.png" alt-text="Screenshot showing the Basics tab where you enter basic information about the subscription." lightbox="./media/create-subscription-request/create-subscription-basics-tab.png" :::
+1. Select the **Advanced** tab.
+1. Select your **Subscription directory**. It's the Azure Active Directory (Azure AD) where the new subscription will get created.
+1. The **Management group** option is unavailable because you can only select management groups in the current directory.
+1. Select one or more **Subscription owners**. You can select only users or service principals in the selected subscription directory. You can't select guest directory users. If you select a service principal, enter its App ID.
+ :::image type="content" source="./media/create-subscription-request/create-subscription-advanced-tab-external.png" alt-text="Screenshot showing the Advanced tab where you specify the directory, management group, and owner. " lightbox="./media/create-subscription-request/create-subscription-advanced-tab-external.png" :::
+1. Select the **Tags** tab.
+1. Enter tag pairs for **Name** and **Value**.
+ :::image type="content" source="./media/create-subscription-request/create-subscription-tags-tab.png" alt-text="Screenshot showing the tags tab where you enter tag and value pairs." lightbox="./media/create-subscription-request/create-subscription-tags-tab.png" :::
+1. Select **Review + create**. You should see a message stating `The subscription will be created once the subscription owner accepts this request in the target directory.`
+1. Verify that the subscription information is correct, then select **Request**. You'll see a notification that the request is getting created and sent to the acceptor.
+
+After the subscription request is sent, the acceptor receives an email with subscription acceptance information and a link where they can accept the new subscription.
+
+The subscription creator can also view the subscription request details from **Subscriptions** > **View Requests**. There they can open the subscription request to view its details and copy the **Accept ownership URL**. Then they can manually send the link to the subscription acceptor.
++
+## Accept subscription ownership
+
+The subscription acceptor receives an email inviting them to accept subscription ownership. Select **Accept ownership** to get started.
++
+Or, the subscription creator might have manually sent the acceptor an **Accept ownership URL** link. The acceptor uses the following steps to review and accept subscription ownership.
+
+1. In either case above, select the link to open the Accept subscription ownership page in the Azure portal.
+1. On the Basics tab, you can optionally change the subscription name.
+1. Select the Advanced tab where you can optionally change the Azure AD management group that the new subscription is associated with. You can only select management groups in the current directory.
+1. Select the Tags tab to optionally enter tag pairs for **Name** and **Value**.
+1. Select the Review + accept tab. You should see a message stating `Validation passed. Click on the Accept button below to initiate subscription creation`.
+1. Select **Accept**. You'll see a status message stating that the subscription is being created. Then you'll see another status message stating that the subscription was successfully created. The acceptor becomes the subscription owner.
+
+After the new subscription is created, the acceptor can see it on the **Subscriptions** page.
+
+## Create an Azure subscription programmatically
+
+You can also create subscriptions programmatically. For more information, see [Create Azure subscriptions programmatically](programmatically-create-subscription.md).
+
+## Need help? Contact us.
+
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+
+## Next steps
+
+- [Add or change Azure subscription administrators](add-change-subscription-administrator.md)
+- [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Create management groups for resource organization and management](../../governance/management-groups/create-management-group-portal.md)
+- [Cancel your subscription for Azure](cancel-azure-subscription.md)
cost-management-billing Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-subscription.md
Title: Create an additional Azure subscription
-description: Learn how to add a new Azure subscription in the Azure portal. See information about billing account forms and view additional available resources.
+ Title: Create a Microsoft Customer Agreement subscription
+
+description: Learn how to add a new Microsoft Customer Agreement subscription in the Azure portal. See information about billing account forms and view other available resources.
Previously updated : 11/11/2021 Last updated : 05/25/2022
-# Create an additional Azure subscription
+# Create a Microsoft Customer Agreement subscription
-You can create an additional subscription for your [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/), [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) or [Microsoft Partner Agreement](https://www.microsoft.com/licensing/news/introducing-microsoft-partner-agreement) billing account in the Azure portal. You may want an additional subscription to avoid hitting subscription limits, to create separate environments for security, or to isolate data for compliance reasons.
+This article helps you create a [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) subscription for yourself or for someone else in your current Azure Active Directory (Azure AD) directory/tenant. You may want another subscription to avoid hitting subscription quota limits, to create separate environments for security, or to isolate data for compliance reasons.
-If you have a Microsoft Online Service Program (MOSP) billing account, you can create additional subscriptions in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade).
+If you want to create a Microsoft Customer Agreement subscription in a different Azure AD tenant, see [Create an MCA subscription request](create-subscription-request.md).
-To learn more about billing accounts and identify the type of your billing account, see [View billing accounts in Azure portal](view-all-accounts.md).
+If you want to create subscriptions for Enterprise Agreements, see [Create an EA subscription](create-enterprise-subscription.md). If you're a Microsoft Partner and you want to create a subscription for a customer, see [Create a subscription for a partner's customer](create-customer-subscription.md). Or, if you have a Microsoft Online Service Program (MOSP) billing account, also called pay-as-you-go, you can create subscriptions starting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) and then complete the process at https://signup.azure.com/.
-## Permission required to create Azure subscriptions
-
-You need the following permissions to create subscriptions:
-
-|Billing account |Permission |
-|||
-|Enterprise Agreement (EA) | Account Owner role on the Enterprise Agreement enrollment. For more information, see [Understand Azure Enterprise Agreement administrative roles in Azure](understand-ea-roles.md). |
-|Microsoft Customer Agreement (MCA) | Owner or contributor role on the invoice section, billing profile or billing account. Or Azure subscription creator role on the invoice section. For more information, see [Subscription billing roles and task](understand-mca-roles.md#subscription-billing-roles-and-tasks). |
-|Microsoft Partner Agreement (MPA) | Global Admin and Admin Agent role in the CSP partner organization. To learn more, see [Partner Center - Assign users roles and permissions](/partner-center/permissions-overview). The user needs to sign to partner tenant to create Azure subscriptions. |
-
-## Create a subscription in the Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for **Subscriptions**.
-
- ![Screenshot that shows search in portal for subscription](./media/create-subscription/billing-search-subscription-portal.png)
-
-1. Select **Add**.
-
- ![Screenshot that shows the Add button in Subscriptions view](./media/create-subscription/subscription-add.png)
-
-1. If you have access to multiple billing accounts, select the billing account for which you want to create the subscription.
-
-1. Fill the form and select **Create**. The tables below list the fields on the form for each type of billing account.
+To learn more about billing accounts and identify your billing account type, see [View billing accounts in Azure portal](view-all-accounts.md).
-**Enterprise Agreement**
-
-|Field |Definition |
-|||
-|Name | The display name that helps you easily identify the subscription in the Azure portal. |
-|Offer | Select EA Dev/Test, if you plan to use this subscription for development or testing workloads else use Microsoft Azure Enterprise. DevTest offer must be enabled for your enrollment account to create EA Dev/Test subscriptions.|
-
-**Microsoft Customer Agreement**
-
-|Field |Definition |
-|||
-|Billing profile | The charges for your subscription will be billed to the billing profile that you select. If you have access to only one billing profile, the selection will be greyed out. |
-|Invoice section | The charges for your subscription will appear on this section of the billing profile's invoice. If you have access to only one invoice section, the selection will be greyed out. |
-|Plan | Select Microsoft Azure Plan for DevTest, if you plan to use this subscription for development or testing workloads else use Microsoft Azure Plan. If only one plan is enabled for the billing profile, the selection will be greyed out. |
-|Name | The display name that helps you easily identify the subscription in the Azure portal. |
-
-**Microsoft Partner Agreement**
+## Permission required to create Azure subscriptions
-|Field |Definition |
-|||
-|Customer | The subscription is created for the customer that you select. If you have only one customer, the selection will be greyed out. |
-|Reseller | The reseller that will provide services to the customer. This is an optional field, which is only applicable to Indirect providers in the CSP two-tier model. |
-|Name | The display name that helps you easily identify the subscription in the Azure portal. |
+You need the following permissions to create subscriptions for a Microsoft Customer Agreement (MCA):
-## Create a subscription as a partner for a customer
+- Owner or contributor role on the invoice section, billing profile or billing account. Or Azure subscription creator role on the invoice section.
-Partners with a Microsoft Partner Agreement use the following steps to create a new Microsoft Azure Plan subscription for their customers. The subscription is created under the partner's billing account and billing profile.
+For more information, see [Subscription billing roles and tasks](understand-mca-roles.md#subscription-billing-roles-and-tasks).
-1. Sign in to the Azure portal using your Partner Center account.
-Make sure you are in your Partner Center directory (tenant), not a customer's tenant.
-1. Navigate to **Cost Management + Billing**.
-1. Select the Billing scope for the billing account where the customer account resides.
-1. In the left menu under **Billing**, select **Customers**.
-1. On the Customers page, select the customer.
-1. In the left menu, under **Products + services**, select **Azure Subscriptions**.
-1. On the Azure subscription page, select **+ Add** to create a subscription.
-1. Enter details about the subscription and when complete, select **Review + create**.
+## Create a subscription
+Use the following procedure to create a subscription for yourself or for someone in the current Azure Active Directory. When you're done, the new subscription is created immediately.
-## Create an additional Azure subscription programmatically
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Subscriptions** and then select **Add**.
+ :::image type="content" source="./media/create-subscription/subscription-add.png" alt-text="Screenshot showing the Subscription page where you Add a subscription." lightbox="./media/create-subscription/subscription-add.png" :::
+1. On the Create a subscription page, on the **Basics** tab, type a **Subscription name**.
+1. Select the **Billing account** where the new subscription will get created.
+1. Select the **Billing profile** where the subscription will get created.
+1. Select the **Invoice section** where the subscription will get created.
+1. Next to **Plan**, select **Microsoft Azure Plan for DevTest** if the subscription will be used for development or testing workloads. Otherwise, select **Microsoft Azure Plan**.
+ :::image type="content" source="./media/create-subscription/create-subscription-basics-tab.png" alt-text="Screenshot showing the Basics tab where you enter basic information about the subscription." lightbox="./media/create-subscription/create-subscription-basics-tab.png" :::
+1. Select the **Advanced** tab.
+1. Select your **Subscription directory**. It's the Azure Active Directory (Azure AD) where the new subscription will get created.
+1. Select a **Management group**. It's the Azure AD management group that the new subscription is associated with. You can only select management groups in the current directory.
+1. Select one or more **Subscription owners**. You can select only users or service principals in the selected subscription directory. You can't select guest directory users. If you select a service principal, enter its App ID.
+ :::image type="content" source="./media/create-subscription/create-subscription-advanced-tab.png" alt-text="Screenshot showing the Advanced tab where you can specify the directory, management group, and owner. " lightbox="./media/create-subscription/create-subscription-advanced-tab.png" :::
+1. Select the **Tags** tab.
+1. Enter tag pairs for **Name** and **Value**.
+ :::image type="content" source="./media/create-subscription/create-subscription-tags-tab.png" alt-text="Screenshot showing the tags tab where you enter tag and value pairs." lightbox="./media/create-subscription/create-subscription-tags-tab.png" :::
+1. Select **Review + create**. You should see a message stating `Validation passed`.
+1. Verify that the subscription information is correct, then select **Create**. You'll see a notification that the subscription is getting created.
+
+After the new subscription is created, the owner of the subscription can see it on the **Subscriptions** page.
+
+## Create an Azure subscription programmatically
+
+You can also create subscriptions programmatically. For more information, see [Create Azure subscriptions programmatically](programmatically-create-subscription.md).
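For an MCA billing account, the same alias API applies, but the billing scope points at an invoice section. A minimal sketch (the IDs are placeholders, and the Azure CLI `account` extension is assumed):

```azurecli
# Requires the account extension: az extension add --name account
# Placeholders for your MCA billing account, billing profile, and invoice section IDs.
billingScope="/providers/Microsoft.Billing/billingAccounts/<billingAccountId>/billingProfiles/<billingProfileId>/invoiceSections/<invoiceSectionId>"

az account alias create \
    --name "sample-mca-subscription-alias" \
    --billing-scope "$billingScope" \
    --display-name "Finance Subscription" \
    --workload "Production"
```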
-You can also create additional subscriptions programmatically. For more information, see:
+## Need help? Contact us.
-- [Create EA subscriptions programmatically with latest API](programmatically-create-subscription-enterprise-agreement.md)-- [Create MCA subscriptions programmatically with latest API](programmatically-create-subscription-microsoft-customer-agreement.md)-- [Create MPA subscriptions programmatically with latest API](Programmatically-create-subscription-microsoft-customer-agreement.md)
+If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
## Next steps - [Add or change Azure subscription administrators](add-change-subscription-administrator.md) - [Move resources to new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md) - [Create management groups for resource organization and management](../../governance/management-groups/create-management-group-portal.md)-- [Cancel your subscription for Azure](cancel-azure-subscription.md)-
-## Need help? Contact us.
-
-If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
+- [Cancel your Azure subscription](cancel-azure-subscription.md)
cost-management-billing Mca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-overview.md
Previously updated : 09/15/2021 Last updated : 05/26/2022
Azure plans determine the pricing and service level agreements for Azure subscri
| Plan | Definition | ||-| |Microsoft Azure Plan | Allow users to create subscriptions that can run any workloads. |
-|Microsoft Azure Plan for Dev/Test | Allow Visual Studio subscribers to create subscriptions that are restricted for development or testing workloads. These subscriptions get benefits such as lower rates and access to exclusive virtual machine images in the Azure portal. |
+|Microsoft Azure Plan for Dev/Test | Allow Visual Studio subscribers to create subscriptions that are restricted for development or testing workloads. These subscriptions get benefits such as lower rates and access to exclusive virtual machine images in the Azure portal. Azure Plan for DevTest is only available for Microsoft Customer Agreement customers who purchase through a Microsoft Sales representative. |
## Invoice sections
data-factory How To Sqldb To Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-sqldb-to-cosmosdb.md
SQL schemas are typically modeled using third normal form, resulting in normaliz
Using Azure Data Factory, we'll build a pipeline that uses a single Mapping Data Flow to read from two Azure SQL Database normalized tables that contain primary and foreign keys as the entity relationship. ADF will join those tables into a single stream using the data flow Spark engine, collect joined rows into arrays and produce individual cleansed documents for insert into a new Azure Cosmos DB container.
-This guide will build a new container on the fly called "orders" that will use the ```SalesOrderHeader``` and ```SalesOrderDetail``` tables from the standard SQL Server [Adventure Works sample database](https://docs.microsoft.com/sql/samples/adventureworks-install-configure?view=sql-server-ver15&tabs=ssms). Those tables represent sales transactions joined by ```SalesOrderID```. Each unique detail records has its own primary key of ```SalesOrderDetailID```. The relationship between header and detail is ```1:M```. We'll join on ```SalesOrderID``` in ADF and then roll each related detail record into an array called "detail".
+This guide will build a new container on the fly called "orders" that will use the ```SalesOrderHeader``` and ```SalesOrderDetail``` tables from the standard SQL Server [Adventure Works sample database](/sql/samples/adventureworks-install-configure?tabs=ssms&view=sql-server-ver15). Those tables represent sales transactions joined by ```SalesOrderID```. Each unique detail record has its own primary key of ```SalesOrderDetailID```. The relationship between header and detail is ```1:M```. We'll join on ```SalesOrderID``` in ADF and then roll each related detail record into an array called "detail".
The representative SQL query for this guide is:
If everything looks good, you are now ready to create a new pipeline, add this d
## Next steps * Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md).
-* [Download the completed pipeline template](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/SQL%20Orders%20to%20CosmosDB.zip) for this tutorial and import the template into your factory.
+* [Download the completed pipeline template](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/SQL%20Orders%20to%20CosmosDB.zip) for this tutorial and import the template into your factory.
data-factory Monitor Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-metrics-alerts.md
Here are some of the metrics emitted by Azure Data Factory version 2.
| Total entities count | Total number of entities | Count | Total | The total number of entities in the Azure Data Factory instance. | | Total factory size (GB unit) | Total size of entities | Gigabyte | Total | The total size of entities in the Azure Data Factory instance. |
-For service limits and quotas please see [quotas and limits](https://docs.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-data-factory-limits).
+For service limits and quotas, see [quotas and limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-data-factory-limits).
To access the metrics, complete the instructions in [Azure Monitor data platform](../azure-monitor/data-platform.md). > [!NOTE]
Sign in to the Azure portal, and select **Monitor** > **Alerts** to create alert
## Next steps
-[Configure diagnostics settings and workspace](monitor-configure-diagnostics.md)
+[Configure diagnostics settings and workspace](monitor-configure-diagnostics.md)
data-factory Data Factory Create Data Factories Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-create-data-factories-programmatically.md
In the walkthrough, you create a data factory with a pipeline that contains a co
The Copy Activity performs the data movement in Azure Data Factory. The activity is powered by a globally available service that can copy data between various data stores in a secure, reliable, and scalable way. See [Data Movement Activities](data-factory-data-movement-activities.md) article for details about the Copy Activity. > [!IMPORTANT]
-> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](../../active-directory/develop/msal-migration.md) for more details.
1. Using Visual Studio 2012/2013/2015, create a C# .NET console application. 1. Launch **Visual Studio** 2012/2013/2015.
while (response != null);
## Next steps See the following example for creating a pipeline using .NET SDK that copies data from an Azure blob storage to Azure SQL Database: -- [Create a pipeline to copy data from Blob Storage to SQL Database](data-factory-copy-activity-tutorial-using-dotnet-api.md)
+- [Create a pipeline to copy data from Blob Storage to SQL Database](data-factory-copy-activity-tutorial-using-dotnet-api.md)
databox-online Azure Stack Edge Gpu Connect Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-connect-resource-manager.md
Set the Azure Resource Manager environment and verify that your device to client
- -- - AzASE https://management.myasegpu.wdshcsso.com/ https://login.myasegpu.wdshcsso.c... ```
- For more information, go to [Set-AzEnvironment](/powershell/module/azurerm.profile/set-azurermenvironment?view=azurermps-6.13.0&preserve-view=true).
+ For more information, go to [Set-AzEnvironment](/powershell/module/az.accounts/set-azenvironment?view=azps-7.5.0).
- Define the environment inline for every cmdlet that you execute. This ensures that all the API calls go through the correct environment. By default, the calls would go through the Azure public cloud, but you want them to go through the environment that you set for the Azure Stack Edge device.
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 05/10/2022 Last updated : 05/26/2022 # Enable Microsoft Defender for Containers
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
If you have any existing connectors created with the classic cloud connectors ex
- (Optional) Select **Configure**, to edit the configuration as required.
-1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you have fulfilled the [network requirements](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-eks&source=docs#network-requirements) for the Defender for Containers plan.
+1. By default, the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you have fulfilled the [network requirements](./defender-for-containers-enable.md?pivots=defender-for-container-eks&source=docs&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#network-requirements) for the Defender for Containers plan.
> [!Note] > Azure Arc-enabled Kubernetes, the Defender Arc extension, and the Azure Policy Arc extension should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks).
You can also check out the following blogs:
Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following page: - [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).-- [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
+- [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Learn how [JIT protects your AWS EC2 instances](just-in-time-access-overview.md#
### Add and remove the Defender profile for AKS clusters from the CLI
-The Defender profile (preview) is required for Defender for Containers to provide the runtime protections and collects signals from nodes. You can now use the Azure CLI to [add and remove the Defender profile](/includes/defender-for-containers-enable-plan-aks.md#deploy-the-defender-profile) for an AKS cluster.
+The Defender profile (preview) is required for Defender for Containers to provide runtime protections and collect signals from nodes. You can now use the Azure CLI to [add and remove the Defender profile](defender-for-containers-enable.md?tabs=k8s-deploy-cli%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Ck8s-remove-cli&pivots=defender-for-container-aks#use-azure-cli-to-deploy-the-defender-extension) for an AKS cluster.
> [!NOTE] > This option is included in [Azure CLI 3.7 and above](/cli/azure/update-azure-cli.md).
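A minimal sketch of what that CLI flow might look like (assuming a recent Azure CLI with Defender support on `az aks update`; the resource group and cluster names are placeholders):

```azurecli
# Enable the Defender profile on an existing AKS cluster (placeholder names).
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-defender

# Remove the Defender profile from the cluster.
az aks update --resource-group myResourceGroup --name myAKSCluster --disable-defender
```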
defender-for-iot Integrate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-with-active-directory.md
You can associate Active Directory groups defined here with specific permission
## Next steps
-For more information, see [how to create and manage users](/azure/defender-for-iot/organizations/how-to-create-and-manage-users).
+For more information, see [how to create and manage users](./how-to-create-and-manage-users.md).
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 03/22/2022 Last updated : 05/25/2022 # What's new in Microsoft Defender for IoT?
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Term
The Defender for IoT architecture uses on-premises sensors and management servers. This section describes the servicing information and timelines for the available on-premises software versions. -- Each General Availability (GA) version of the Defender for IoT sensor and on-premises management console software is supported for nine months after release. Fixes and new functionality are applied to each new version and are not applied to older versions.
+- Each General Availability (GA) version of the Defender for IoT sensor and on-premises management console software is supported for nine months after release. Fixes and new functionality are applied to each new version and aren't applied to older versions.
- Software update packages include new functionality and security patches. Urgent, high-risk security updates are applied in minor versions that may be released throughout the quarter.
For more information, see the [Microsoft Security Development Lifecycle practice
| 10.5.3 | 10/2021 | 07/2022 | | 10.5.2 | 10/2021 | 07/2022 |
+## May 2022
+
+We've recently optimized and enhanced our documentation as follows:
+
+- [Updated appliance catalog for OT environments](#updated-appliance-catalog-for-ot-environments)
+- [Documentation reorganization for end-user organizations](#documentation-reorganization-for-end-user-organizations)
+
+### Updated appliance catalog for OT environments
+
+We've refreshed and revamped the catalog of supported appliances for monitoring OT environments. These appliances support flexible deployment options for environments of all sizes and can be used to host both the OT monitoring sensor and on-premises management consoles.
+
+Use the new pages as follows:
+
+1. **Understand which hardware model best fits your organization's needs.** For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
+
+1. **Learn about the preconfigured hardware appliances that are available to purchase, or system requirements for virtual machines.** For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
+
+ For more information about each appliance type, use the linked reference page, or browse through our new **Reference > OT monitoring appliances** section.
+
+ :::image type="content" source="media/release-notes/appliance-catalog.png" alt-text="Screenshot of the new appliance catalog reference section." lightbox="media/release-notes/appliance-catalog.png":::
+
+ Reference articles for each appliance type, including virtual appliances, include specific steps to configure the appliance for OT monitoring with Defender for IoT. Generic software installation and troubleshooting procedures are still documented in [Defender for IoT software installation](how-to-install-software.md).
+
+### Documentation reorganization for end-user organizations
+
+We recently reorganized our Defender for IoT documentation for end-user organizations, highlighting a clearer path for onboarding and getting started.
+
+Check out our new structure to follow through viewing devices and assets, managing alerts, vulnerabilities and threats, integrating with other services, and deploying and maintaining your Defender for IoT system.
+
+**New and updated articles include**:
+
+- [Welcome to Microsoft Defender for IoT for organizations](overview.md)
+- [Microsoft Defender for IoT architecture](architecture.md)
+- [Quickstart: Get started with Defender for IoT](getting-started.md)
+- [Tutorial: Microsoft Defender for IoT trial setup](tutorial-onboarding.md)
+- [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md)
+- [Plan your sensor connections for OT monitoring](plan-network-monitoring.md)
+- [About Microsoft Defender for IoT network setup](how-to-set-up-your-network.md)
+
+> [!NOTE]
+> To send feedback on docs via GitHub, scroll to the bottom of the page and select the **Feedback** option for **This page**. We'd be glad to hear from you!
+>
+ ## April 2022 **Sensor software version**: 22.1.4
Other alert updates include:
- **Access contextual data** for each alert, such as events that occurred around the same time, or a map of connected devices. Maps of connected devices are available for sensor console alerts only. -- **Alert statuses** are updated, and for example now include a *Closed* status instead of *Acknowledged*.
+- **Alert statuses** are updated, and, for example, now include a *Closed* status instead of *Acknowledged*.
- **Alert storage** for 90 days from the time that they're first detected.
Unicode characters are now supported when working with sensor certificate passph
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+[Getting started with Defender for IoT](getting-started.md)
digital-twins Concepts 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-3d-scenes-studio.md
To work with 3D Scenes Studio, you'll need the following required resources:
* You'll need *Azure Digital Twins Data Owner* or *Azure Digital Twins Data Reader* access to the instance * The instance should be populated with [models](concepts-models.md) and [twins](concepts-twins-graph.md)
-* An [Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal), and a [private container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container) in the storage account
+* An [Azure storage account](../storage/common/storage-account-create.md?tabs=azure-portal), and a [private container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in the storage account
* To **view** 3D scenes, you'll need at least *Storage Blob Data Reader* access to these storage resources. To **build** 3D scenes, you'll need *Storage Blob Data Contributor* or *Storage Blob Data Owner* access.
- You can grant required roles at either the storage account level or the container level. For more information about Azure storage permissions, see [Assign an Azure role](/azure/storage/blobs/assign-azure-role-data-access?tabs=portal#assign-an-azure-role).
+ You can grant required roles at either the storage account level or the container level. For more information about Azure storage permissions, see [Assign an Azure role](../storage/blobs/assign-azure-role-data-access.md?tabs=portal#assign-an-azure-role).
* You should also configure [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container. For complete CORS setting information, see [Use 3D Scenes Studio (preview)](how-to-use-3d-scenes-studio.md#prerequisites). Then, you can access 3D Scenes Studio at this link: [3D Scenes Studio](https://dev.explorer.azuredigitaltwins-test.net/3dscenes).
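As a sketch of how those storage prerequisites could be stood up with the Azure CLI (the resource names and the role assignee below are assumptions, not values from the article):

```azurecli
# Placeholder names; storage account names must be globally unique.
az storage account create --name st3dscenesdemo --resource-group my-rg --location eastus --sku Standard_LRS

# Containers are private (no anonymous access) by default.
az storage container create --name scenes --account-name st3dscenesdemo --auth-mode login

# Grant yourself Storage Blob Data Contributor at the account scope so you can build scenes.
storageId=$(az storage account show --name st3dscenesdemo --resource-group my-rg --query id --output tsv)
az role assignment create --assignee "<your-user-object-id-or-upn>" \
    --role "Storage Blob Data Contributor" --scope "$storageId"
```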
These limits are recommended because 3D Scenes Studio leverages the standard [Az
Try out 3D Scenes Studio with a sample scenario in [Get started with 3D Scenes Studio](quickstart-3d-scenes-studio.md).
-Or, learn how to use the studio's full feature set in [Use 3D Scenes Studio](how-to-use-3d-scenes-studio.md).
+Or, learn how to use the studio's full feature set in [Use 3D Scenes Studio](how-to-use-3d-scenes-studio.md).
digital-twins How To Use 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md
To use 3D Scenes Studio, you'll need the following resources:
* An Azure Digital Twins instance. For instructions, see [Set up an instance and authentication](how-to-set-up-instance-cli.md). * Obtain *Azure Digital Twins Data Owner* or *Azure Digital Twins Data Reader* access to the instance. For instructions, see [Set up user access permissions](how-to-set-up-instance-cli.md#set-up-user-access-permissions). * Take note of the *host name* of your instance to use later.
-* An Azure storage account. For instructions, see [Create a storage account](/azure/storage/common/storage-account-create?tabs=azure-portal).
-* A private container in the storage account. For instructions, see [Create a container](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
+* An Azure storage account. For instructions, see [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal).
+* A private container in the storage account. For instructions, see [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
* Take note of the *URL* of your storage container to use later.
-* *Storage Blob Data Owner* or *Storage Blob Data Contributor* access to your storage resources. You can grant required roles at either the storage account level or the container level. For instructions and more information about permissions to Azure storage, see [Assign an Azure role](/azure/storage/blobs/assign-azure-role-data-access?tabs=portal#assign-an-azure-role).
+* *Storage Blob Data Owner* or *Storage Blob Data Contributor* access to your storage resources. You can grant required roles at either the storage account level or the container level. For instructions and more information about permissions to Azure storage, see [Assign an Azure role](../storage/blobs/assign-azure-role-data-access.md?tabs=portal#assign-an-azure-role).
You should also configure [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container. You can use the following [Azure CLI](/cli/azure/what-is-azure-cli) command to set the minimum required methods, origins, and headers. The command contains one placeholder for the name of your storage account.
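As a rough sketch, that kind of CORS rule can be set with `az storage cors add`; the origin, header list, and account name below are illustrative placeholders rather than the exact values the article prescribes.

```bash
# Sketch only: allow 3D Scenes Studio's web origin to call Blob storage.
# Replace the origin, headers, and account name with the values from the article.
az storage cors add \
  --services b \
  --methods GET OPTIONS POST PUT \
  --origins "https://explorer.digitaltwins.azure.net" \
  --allowed-headers "Authorization" "x-ms-version" "x-ms-blob-type" \
  --account-name "<your-storage-account>"
```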
When the recipient pastes this URL into their browser, the specified scene will
Try out 3D Scenes Studio with a sample scenario in [Get started with 3D Scenes Studio](quickstart-3d-scenes-studio.md).
-Or, visualize your Azure Digital Twins graph differently using [Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
+Or, visualize your Azure Digital Twins graph differently using [Azure Digital Twins Explorer](how-to-use-azure-digital-twins-explorer.md).
digital-twins Quickstart 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-3d-scenes-studio.md
To see the models that have been uploaded and how they relate to each other, sel
Next, create a new storage account and a container in the storage account. 3D Scenes Studio will use this storage container to store your 3D file and configuration information.
-You'll also set up read and write permissions to the storage account. In order to set these backing resources up quickly, this section uses the [Azure Cloud Shell](/azure/cloud-shell/overview).
+You'll also set up read and write permissions to the storage account. In order to set these backing resources up quickly, this section uses the [Azure Cloud Shell](../cloud-shell/overview.md).
1. Navigate to the [Cloud Shell](https://shell.azure.com) in your browser.
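For illustration, the storage account and container can be created in the Cloud Shell with commands along these lines; the resource group, account, and container names are placeholders, not the quickstart's prescribed values.

```bash
# Sketch: create a storage account and a private container for 3D Scenes Studio.
az storage account create \
  --resource-group "<your-resource-group>" \
  --name "<your-storage-account>" \
  --location eastus \
  --sku Standard_LRS

az storage container create \
  --account-name "<your-storage-account>" \
  --name "<your-container>" \
  --auth-mode login
```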
You may also want to delete the downloaded sample 3D file from your local machin
Next, continue on to the Azure Digital Twins tutorials to build out your own Azure Digital Twins environment. > [!div class="nextstepaction"]
-> [Code a client app](tutorial-code.md)
+> [Code a client app](tutorial-code.md)
dns Dns Delegate Domain Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-delegate-domain-azure-dns.md
Previously updated : 04/19/2021 Last updated : 05/25/2022 #Customer intent: As an experienced network administrator, I want to configure Azure DNS, so I can host DNS zones.
In this example, we'll reference the parent domain as `contoso.net`.
1. Go to the [Azure portal](https://portal.azure.com/) to create a DNS zone. Search for and select **DNS zones**.
- ![DNS zone](./media/dns-delegate-domain-azure-dns/openzone650.png)
+1. Select **+ Create**.
-1. Select **Create DNS zone**.
-
-1. On the **Create DNS zone** page, enter the following values, and then select **Create**. For example, `contoso.net`.
-
- > [!NOTE]
- > If the new zone that you are creating is a child zone (e.g. Parent zone = `contoso.net` Child zone = `child.contoso.net`), please refer to our [Creating a new Child DNS zone tutorial](./tutorial-public-dns-zones-child.md)
+1. On the **Create DNS zone** page, enter the following values, and then select **Review + create**.
| **Setting** | **Value** | **Details** |
|--|--|--|
- | **Resource group** | ContosoRG | Create a resource group. The resource group name must be unique within the subscription that you selected. The location of the resource group has no impact on the DNS zone. The DNS zone location is always "global," and isn't shown. |
- | **Zone child** | leave unchecked | Since this zone is **not** a [child zone](./tutorial-public-dns-zones-child.md) you should leave this unchecked |
- | **Name** | `contoso.net` | Field for your parent zone name |
- | **Location** | East US | This field is based on the location selected as part of Resource group creation |
+ | **Resource group** | *ContosoRG* | Create a resource group. The resource group name must be unique within the subscription that you selected. The location of the resource group doesn't affect the DNS zone. The DNS zone location is always "global," and isn't shown. |
+ | **This zone is a child of an existing zone already hosted in Azure DNS** | leave unchecked | Leave this box unchecked since the DNS zone is **not** a [child zone](./tutorial-public-dns-zones-child.md). |
+ | **Name** | *contoso.net* | Enter your parent DNS zone name |
+ | **Resource group location** | *East US* | This field is based on the location selected as part of Resource group creation |
+1. Select **Create**.
++
+ > [!NOTE]
+ > If the new zone that you are creating is a child zone (e.g. Parent zone = `contoso.net` Child zone = `child.contoso.net`), please refer to our [Creating a new Child DNS zone tutorial](./tutorial-public-dns-zones-child.md)
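The same zone can also be created with the Azure CLI instead of the portal; this sketch reuses the tutorial's example names.

```bash
# Create the resource group and the parent DNS zone from the tutorial example.
az group create --name ContosoRG --location eastus

az network dns zone create \
  --resource-group ContosoRG \
  --name contoso.net
```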
## Retrieve name servers Before you can delegate your DNS zone to Azure DNS, you need to know the name servers for your zone. Azure DNS gives name servers from a pool each time a zone is created.
-1. With the DNS zone created, in the Azure portal **Favorites** pane, select **All resources**. On the **All resources** page, select your DNS zone. If the subscription you've selected already has several resources in it, you can enter your domain name in the **Filter by name** box to easily access the application gateway.
+1. Select **Resource groups** in the left-hand menu, select the **ContosoRG** resource group, and then from the **Resources** list, select **contoso.net** DNS zone.
-1. Retrieve the name servers from the DNS zone page. In this example, the zone `contoso.net` has been assigned name servers `ns1-01.azure-dns.com`, `ns2-01.azure-dns.net`, *`ns3-01.azure-dns.org`, and `ns4-01.azure-dns.info`:
+1. Retrieve the name servers from the DNS zone page. In this example, the zone `contoso.net` has been assigned name servers `ns1-01.azure-dns.com`, `ns2-01.azure-dns.net`, `ns3-01.azure-dns.org`, and `ns4-01.azure-dns.info`:
- ![List of name servers](./media/dns-delegate-domain-azure-dns/viewzonens500.png)
+ :::image type="content" source="./media/dns-delegate-domain-azure-dns/dns-name-servers.png" alt-text="Screenshot of D N S zone showing name servers" lightbox="./media/dns-delegate-domain-azure-dns/dns-name-servers.png":::
Azure DNS automatically creates authoritative NS records in your zone for the assigned name servers.
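As an alternative to the portal, the assigned name servers can be read back with the Azure CLI; here's a sketch using the tutorial's example zone.

```bash
# List the name servers Azure DNS assigned to the zone.
az network dns zone show \
  --resource-group ContosoRG \
  --name contoso.net \
  --query nameServers \
  --output tsv
```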
You don't have to specify the Azure DNS name servers. If the delegation is set u
## Clean up resources
-You can keep the **contosoRG** resource group if you intend to do the next tutorial. Otherwise, delete the **contosoRG** resource group to delete the resources created in this tutorial.
+When no longer needed, you can delete all resources created in this tutorial by following these steps to delete the resource group **ContosoRG**:
+
+1. From the left-hand menu, select **Resource groups**.
+
+2. Select the **ContosoRG** resource group.
+
+3. Select **Delete resource group**.
-Select the **contosoRG** resource group, and then select **Delete resource group**.
+4. Enter **ContosoRG** and select **Delete**.
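The same cleanup can be done with a single Azure CLI command:

```bash
# Delete the resource group and everything it contains.
az group delete --name ContosoRG --yes
```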
## Next steps
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 05/10/2022 Last updated : 05/25/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Azure DNS Private Resolver is a new service that enables you to query Azure DNS
## How does it work?
-Azure DNS Private Resolver requires an [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview). When you create an Azure DNS Private Resolver inside a virtual network, one or more [inbound endpoints](#inbound-endpoints) are established that can be used as the destination for DNS queries. The resolver's [outbound endpoint](#outbound-endpoints) processes DNS queries based on a [DNS forwarding ruleset](#dns-forwarding-rulesets) that you configure. DNS queries that are initiated in networks linked to a ruleset can be sent to other DNS servers.
+Azure DNS Private Resolver requires an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). When you create an Azure DNS Private Resolver inside a virtual network, one or more [inbound endpoints](#inbound-endpoints) are established that can be used as the destination for DNS queries. The resolver's [outbound endpoint](#outbound-endpoints) processes DNS queries based on a [DNS forwarding ruleset](#dns-forwarding-rulesets) that you configure. DNS queries that are initiated in networks linked to a ruleset can be sent to other DNS servers.
The DNS query process when using an Azure DNS Private Resolver is summarized below:
1. A client in a virtual network issues a DNS query.
-2. If the DNS servers for this virtual network are [specified as custom](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances#specify-dns-servers), then the query is forwarded to the specified IP addresses.
+2. If the DNS servers for this virtual network are [specified as custom](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#specify-dns-servers), then the query is forwarded to the specified IP addresses.
3. If Default (Azure-provided) DNS servers are configured in the virtual network, and there are Private DNS zones [linked to the same virtual network](private-dns-virtual-network-links.md), these zones are consulted.
4. If the query doesn't match a Private DNS zone linked to the virtual network, then [Virtual network links](#virtual-network-links) for [DNS forwarding rulesets](#dns-forwarding-rulesets) are consulted.
5. If no ruleset links are present, then Azure DNS is used to resolve the query.
The DNS query process when using an Azure DNS Private Resolver is summarized bel
8. If multiple matches are present, the longest suffix is used.
9. If no match is found, no DNS forwarding occurs and Azure DNS is used to resolve the query.
-The architecture for Azure DNS Private Resolver is summarized in the following figure. DNS resolution between Azure virtual networks and on-premises networks requires [Azure ExpressRoute](/azure/expressroute/expressroute-introduction) or a [VPN](/azure/vpn-gateway/vpn-gateway-about-vpngateways).
+The architecture for Azure DNS Private Resolver is summarized in the following figure. DNS resolution between Azure virtual networks and on-premises networks requires [Azure ExpressRoute](../expressroute/expressroute-introduction.md) or a [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
[ ![Azure DNS Private Resolver architecture](./media/dns-resolver-overview/resolver-architecture.png) ](./media/dns-resolver-overview/resolver-architecture.png#lightbox)
Subnets used for DNS resolver have the following limitations:
Outbound endpoints have the following limitations: - An outbound endpoint can't be deleted unless the DNS forwarding ruleset and the virtual network links under it are deleted
-### DNS forwarding ruleset restrictions
-
-DNS forwarding rulesets have the following limitations:
-- A DNS forwarding ruleset can't be deleted unless the virtual network links under it are deleted
-
### Other restrictions
- DNS resolver endpoints can't be updated to include IP configurations from a different subnet
- IPv6 enabled subnets aren't supported in Public Preview
DNS forwarding rulesets have the following limitations:
* Learn how to create an Azure DNS Private Resolver by using [Azure PowerShell](./dns-private-resolver-get-started-powershell.md) or [Azure portal](./dns-private-resolver-get-started-portal.md). * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
-* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
+* [Learn module: Introduction to Azure DNS](/learn/modules/intro-to-azure-dns).
event-grid Event Schema Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-blob-storage.md
Title: Azure Blob Storage as Event Grid source description: Describes the properties that are provided for blob storage events with Azure Event Grid Previously updated : 09/08/2021 Last updated : 05/26/2022 # Azure Blob Storage as an Event Grid source
These events are triggered if you enable a hierarchical namespace on the storage
> [!NOTE] > For **Azure Data Lake Storage Gen2**, if you want to ensure that the **Microsoft.Storage.BlobCreated** event is triggered only when a Block Blob is completely committed, filter the event for the `FlushWithClose` REST API call. This API call triggers the **Microsoft.Storage.BlobCreated** event only after data is fully committed to a Block Blob. To learn how to create a filter, see [Filter events for Event Grid](./how-to-filter-events.md).
-## Example event
+### List of policy-related events
+
+These events are triggered when the actions defined by a policy are performed.
+
+ |Event name |Description|
+ |-|--|
+ |**Microsoft.Storage.BlobInventoryPolicyCompleted** |Triggered when the inventory run completes for a rule that is defined in an inventory policy. This event also occurs if the inventory run fails with a user error before it starts to run. For example, an invalid policy or a missing destination container will trigger the event. |
+ |**Microsoft.Storage.LifecyclePolicyCompleted** |Triggered when the actions defined by a lifecycle management policy are performed. |
+
+## Example events
When an event is triggered, the Event Grid service sends data about that event to the subscribing endpoint. This section contains an example of what that data would look like for each blob storage event. # [Event Grid event schema](#tab/event-grid-event-schema)
If the blob storage account has a hierarchical namespace, the data looks similar
}] ```
+### Microsoft.Storage.BlobInventoryPolicyCompleted event
+
+```json
+{
+ "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/BlobInventory/providers/Microsoft.EventGrid/topics/BlobInventoryTopic",
+ "subject": "BlobDataManagement/BlobInventory",
+ "eventType": "Microsoft.Storage.BlobInventoryPolicyCompleted",
+ "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "data": {
+ "scheduleDateTime": "2021-05-28T03:50:27Z",
+ "accountName": "testaccount",
+ "ruleName": "Rule_1",
+ "policyRunStatus": "Succeeded",
+ "policyRunStatusMessage": "Inventory run succeeded, refer manifest file for inventory details.",
+ "policyRunId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "manifestBlobUrl": "https://testaccount.blob.core.windows.net/inventory-destination-container/2021/05/26/13-25-36/Rule_1/Rule_1.csv"
+ },
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2021-05-28T15:03:18Z"
+}
+```
+
+### Microsoft.Storage.LifecyclePolicyCompleted event
+
+```json
+{
+ "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/contosoresourcegroup/providers/Microsoft.Storage/storageAccounts/contosostorageaccount",
+ "subject": "BlobDataManagement/LifeCycleManagement/SummaryReport",
+ "eventType": "Microsoft.Storage.LifecyclePolicyCompleted",
+ "id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "data": {
+ "scheduleTime": "2022/05/24 22:57:29.3260160",
+ "deleteSummary": {
+ "totalObjectsCount": 16,
+ "successCount": 14,
+ "errorList": ""
+ },
+ "tierToCoolSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ },
+ "tierToArchiveSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ }
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2022-05-26T00:00:40.1880331"
+}
+```
+ # [Cloud event schema](#tab/cloud-event-schema) ### Microsoft.Storage.BlobCreated event
If the blob storage account has a hierarchical namespace, the data looks similar
- ## Event properties # [Event Grid event schema](#tab/event-grid-event-schema)
event-grid Receive Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/receive-events.md
module.exports = function (context, req) {
### Test Blob Created event handling
-Test the new functionality of the function by putting a [Blob storage event](./event-schema-blob-storage.md#example-event) into the test field and running:
+Test the new functionality of the function by putting a [Blob storage event](./event-schema-blob-storage.md#example-events) into the test field and running:
```json [{
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
The Scenario 2 is illustrated in the following diagram. In the diagram, green li
The solution is illustrated in the following diagram. As illustrated, you can architect the scenario either using more specific route (Option 1) or AS-path prepend (Option 2) to influence VNet path selection. To influence on-premises network route selection for Azure bound traffic, you need configure the interconnection between the on-premises location as less preferable. How you configure the interconnection link as preferable depends on the routing protocol used within the on-premises network. You can use local preference with iBGP or metric with IGP (OSPF or IS-IS). > [!IMPORTANT] > When one or multiple ExpressRoute circuits are connected to multiple virtual networks, virtual network to virtual network traffic can route via ExpressRoute. However, this is not recommended. To enable virtual network to virtual network connectivity, [configure virtual network peering](../virtual-network/virtual-network-manage-peering.md).
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
Your existing circuit will continue advertising the prefixes for Microsoft 365.
* Microsoft peering of ExpressRoute circuits that are configured on or after August 1, 2017 will not have any prefixes advertised until a route filter is attached to the circuit. You will see no prefixes by default. ### If I have multiple Virtual Networks (Vnets) connected to the same ExpressRoute circuit, can I use ExpressRoute for Vnet-to-Vnet connectivity?
-Vnet-to-Vnet connectivity over ExpressRoute is not recommended. To acheive this, configure [Virtual Network Peering](https://docs.microsoft.com/azure/virtual-network/virtual-network-peering-overview?msclkid=b64a7b6ac19e11eca60d5e3e5d0764f5).
+Vnet-to-Vnet connectivity over ExpressRoute is not recommended. To achieve this, configure [Virtual Network Peering](../virtual-network/virtual-network-peering-overview.md?msclkid=b64a7b6ac19e11eca60d5e3e5d0764f5).
## <a name="expressRouteDirect"></a>ExpressRoute Direct
Vnet-to-Vnet connectivity over ExpressRoute is not recommended. To achieve this,
### Does the ExpressRoute service store customer data?
-No.
+No.
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 05/12/2022 Last updated : 05/26/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall Standard has the following known issues:
| Error encountered when creating more than 2000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2000 (Resource Manager limit). | This is a current limitation. |
|Unable to see Network Rule Name in Azure Firewall Logs|Azure Firewall network rule log data does not show the Rule name for network traffic.|Network rule name logging is in preview. For more information, see [Azure Firewall preview features](firewall-preview.md#network-rule-name-logging-preview).|
|XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.|
-| Firewall logs (Resource specific tables - Preview) | Resource specific log queries are in preview mode and aren't currently supported. | A fix is being investigated.|
|Can't upgrade to Premium with Availability Zones in the Southeast Asia region|You can't currently upgrade to Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy a new Premium firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
|Can't deploy Firewall with Availability Zones with a newly created Public IP address|When you deploy a Firewall with Availability Zones, you can't use a newly created Public IP address.|First create a new zone redundant Public IP address, then assign this previously created IP address during the Firewall deployment.|
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
assignment.
## Next steps
-- Study the [Microsoft.Authorization policyExemptions resource type](https://docs.microsoft.com/azure/templates/microsoft.authorization/policyexemptions?tabs=json).
+- Study the [Microsoft.Authorization policyExemptions resource type](/azure/templates/microsoft.authorization/policyexemptions?tabs=json).
- Learn about the [policy definition structure](./definition-structure.md).
- Understand how to [programmatically create policies](../how-to/programmatically-create.md).
- Learn how to [get compliance data](../how-to/get-compliance-data.md).
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
- Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+ [Organize your resources with Azure management groups](../../management-groups/overview.md).
hdinsight Hbase Troubleshoot Phoenix No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-phoenix-no-data.md
Title: HDP upgrade & no data in Apache Phoenix views in Azure HDInsight
description: HDP upgrade causes no data in Apache Phoenix views in Azure HDInsight Previously updated : 08/08/2019 Last updated : 05/26/2022 # Scenario: HDP upgrade causes no data in Apache Phoenix views in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Hdinsight Cluster Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-cluster-availability.md
description: Learn how to use Apache Ambari to monitor cluster health and availa
Previously updated : 05/01/2020 Last updated : 05/26/2022 # How to monitor cluster availability with Apache Ambari in Azure HDInsight
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release
HDInsight added Dav4-series support in this release. Learn more about [Dav4-series here](../virtual-machines/dav4-dasv4-series.md). #### Kafka REST Proxy GA
-Kafka REST Proxy enables you to interact with your Kafka cluster via a REST API over HTTPS. Kafka Rest Proxy is general available starting from this release. Learn more about [Kafka REST Proxy here](./kafk).
+Kafka REST Proxy enables you to interact with your Kafka cluster via a REST API over HTTPS. Kafka REST Proxy is generally available starting from this release. Learn more about [Kafka REST Proxy here](./kafk).
#### Moving to Azure virtual machine scale sets HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
hdinsight Hortonworks Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hortonworks-release-notes.md
description: Learn the Apache Hadoop components and versions in Azure HDInsight.
Previously updated : 04/22/2020 Last updated : 05/26/2022 # Hortonworks release notes associated with HDInsight versions
The section provides links to release notes for the Hortonworks Data Platform di
[hdp-1-3-0]: https://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.0/bk_releasenotes_hdp_1.x/content/ch_relnotes-hdp1.3.0_1.html
-[hdp-1-1-0]: https://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.0/bk_releasenotes_hdp_1.x/content/ch_relnotes-hdp1.1.1.16_1.html
+[hdp-1-1-0]: https://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.3.0/bk_releasenotes_hdp_1.x/content/ch_relnotes-hdp1.1.1.16_1.html
hdinsight Apache Hive Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-replication.md
Title: How to use Apache Hive replication in Azure HDInsight clusters
description: Learn how to use Hive replication in HDInsight clusters to replicate the Hive metastore and the Azure Data Lake Storage Gen 2 data lake. Previously updated : 10/08/2020 Last updated : 05/26/2022 # How to use Apache Hive replication in Azure HDInsight clusters
To learn more about the items discussed in this article, see:
- [Azure HDInsight business continuity](../hdinsight-business-continuity.md) - [Azure HDInsight business continuity architectures](../hdinsight-business-continuity-architecture.md) - [Azure HDInsight highly available solution architecture case study](../hdinsight-high-availability-case-study.md)-- [What is Apache Hive and HiveQL on Azure HDInsight?](../hadoop/hdinsight-use-hive.md)
+- [What is Apache Hive and HiveQL on Azure HDInsight?](../hadoop/hdinsight-use-hive.md)
hdinsight Apache Hive Warehouse Connector Zeppelin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-zeppelin.md
Previously updated : 05/28/2020 Last updated : 05/26/2022 # Integrate Apache Zeppelin with Hive Warehouse Connector in Azure HDInsight
hive.executeQuery("select * from testers").show()
* [HWC and Apache Spark operations](./apache-hive-warehouse-connector-operations.md) * [HWC integration with Apache Spark and Apache Hive](./apache-hive-warehouse-connector.md)
-* [Use Interactive Query with HDInsight](./apache-interactive-query-get-started.md)
+* [Use Interactive Query with HDInsight](./apache-interactive-query-get-started.md)
hdinsight Interactive Query Troubleshoot Tez View Slow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-tez-view-slow.md
Title: Apache Ambari Tez View loads slowly in Azure HDInsight
description: Apache Ambari Tez View may load slowly or may not load at all in Azure HDInsight Previously updated : 04/06/2020 Last updated : 05/26/2022 # Scenario: Apache Ambari Tez View loads slowly in Azure HDInsight
hdinsight Llap Schedule Based Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/llap-schedule-based-autoscale-best-practices.md
Feature Supportability with HDInsight 4.0 Interactive Query(LLAP) Autoscale
### **Interactive Query Cluster setup for Autoscale**
-1. [Create an HDInsight Interactive Query Cluster.](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters)
+1. [Create an HDInsight Interactive Query Cluster.](../hdinsight-hadoop-provision-linux-clusters.md)
2. After the cluster is created successfully, navigate to the **Azure portal** and apply the recommended Script Action ```
Feature Supportability with HDInsight 4.0 Interactive Query(LLAP) Autoscale
```
-3. [Enable and Configure Schedule-Based Autoscale](/azure/hdinsight/hdinsight-autoscale-clusters#create-a-cluster-with-schedule-based-autoscaling)
+3. [Enable and Configure Schedule-Based Autoscale](../hdinsight-autoscale-clusters.md#create-a-cluster-with-schedule-based-autoscaling)
> [!NOTE]
If the above guidelines didn't resolve your query, visit one of the following.
* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/). ## **Other References:**
- * [Interactive Query in Azure HDInsight](/azure/hdinsight/interactive-query/apache-interactive-query-get-started)
- * [Create a cluster with Schedule-based Autoscaling](/azure/hdinsight/interactive-query/apache-interactive-query-get-started)
- * [Azure HDInsight Interactive Query Cluster (Hive LLAP) sizing guide](/azure/hdinsight/interactive-query/hive-llap-sizing-guide)
- * [Hive Warehouse Connector in Azure HDInsight](/azure/hdinsight/interactive-query/apache-hive-warehouse-connector)
+ * [Interactive Query in Azure HDInsight](./apache-interactive-query-get-started.md)
+ * [Create a cluster with Schedule-based Autoscaling](./apache-interactive-query-get-started.md)
+ * [Azure HDInsight Interactive Query Cluster (Hive LLAP) sizing guide](./hive-llap-sizing-guide.md)
+ * [Hive Warehouse Connector in Azure HDInsight](./apache-hive-warehouse-connector.md)
hdinsight Apache Spark Improve Performance Iocache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-improve-performance-iocache.md
Title: Apache Spark performance - Azure HDInsight IO Cache (Preview)
description: Learn about Azure HDInsight IO Cache and how to use it to improve Apache Spark performance. Previously updated : 12/23/2019 Last updated : 05/26/2022 # Improve performance of Apache Spark workloads using Azure HDInsight IO Cache
hdinsight Apache Spark Streaming High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-streaming-high-availability.md
description: How to set up Apache Spark Streaming for a high-availability scenar
Previously updated : 11/29/2019 Last updated : 05/26/2022 # Create high-availability Apache Spark Streaming jobs with YARN
hdinsight Apache Spark Troubleshoot Job Slowness Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-job-slowness-container.md
Title: Apache Spark slow when Azure HDInsight storage has many files
description: Apache Spark job runs slowly when the Azure storage container contains many files in Azure HDInsight Previously updated : 08/21/2019 Last updated : 05/26/2022 # Apache Spark job run slowly when the Azure storage container contains many files in Azure HDInsight
healthcare-apis Get Started With Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-started-with-azure-api-fhir.md
Refer to the steps in the [Quickstart guide](fhir-paas-portal-quickstart.md) for
## Accessing Azure API for FHIR
-When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. Azure API for FHIR is secured using [Azure Active Directory (Azure AD)](https://docs.microsoft.com/azure/active-directory/), which is an example of an OAuth 2.0 identity provider. [Azure AD identity configuration for Azure API for FHIR](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md) provides an overview of FHIR server authorization, and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, this article will walk you through Azure API for FHIR as the FHIR server and Azure AD as our identity provider. For more information about accessing Azure API for FHIR, see [Access control overview](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md#access-control-overview).
+When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. Azure API for FHIR is secured using [Azure Active Directory (Azure AD)](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. [Azure AD identity configuration for Azure API for FHIR](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md) provides an overview of FHIR server authorization, and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, this article will walk you through Azure API for FHIR as the FHIR server and Azure AD as our identity provider. For more information about accessing Azure API for FHIR, see [Access control overview](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md#access-control-overview).
### Access token validation
For more information about the two kinds of application registrations, see [Regi
## Configure Azure RBAC for FHIR
-The article [Configure Azure RBAC for FHIR](configure-azure-rbac.md), describes how to use [Azure role-based access control (Azure RBAC)](https://docs.microsoft.com/azure/role-based-access-control/) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred method for assigning data plane access when data plane users are managed in the Azure AD tenant associated with your Azure subscription. If you're using an external Azure AD tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
+The article [Configure Azure RBAC for FHIR](configure-azure-rbac.md), describes how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred method for assigning data plane access when data plane users are managed in the Azure AD tenant associated with your Azure subscription. If you're using an external Azure AD tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
## Next steps
This article described the basic steps to get started using Azure API for FHIR.
>[What is Azure API for FHIR?](overview.md) >[!div class="nextstepaction"]
->[Frequently asked questions about Azure API for FHIR](fhir-faq.yml)
---
+>[Frequently asked questions about Azure API for FHIR](fhir-faq.yml)
healthcare-apis Dicomweb Standard Apis C Sharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-c-sharp.md
Previously updated : 02/15/2022 Last updated : 05/26/2022
After you've deployed an instance of the DICOM service, retrieve the URL for you
In your application, install the following NuGet packages:
-* [DICOM Client](https://microsofthealthoss.visualstudio.com/FhirServer/_packaging?_a=package&feed=Public&package=Microsoft.Health.Dicom.Client&protocolType=NuGet)
+* [DICOM Client](https://microsofthealthoss.visualstudio.com/FhirServer/_artifacts/feed/Public/NuGet/Microsoft.Health.Dicom.Client/)
* [fo-dicom](https://www.nuget.org/packages/fo-dicom/)
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Below are some error codes you may encounter and the solutions to help you resol
**Cause:** We use managed identity for source storage auth. This error may be caused by a missing or wrong role assignment.
-**Solution:** Assign _Storage Blob Data Contributor_ role to the FHIR server following [the RBAC guide.](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current)
+**Solution:** Assign _Storage Blob Data Contributor_ role to the FHIR server following [the RBAC guide.](../../role-based-access-control/role-assignments-portal.md?tabs=current)
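A sketch of that assignment with the Azure CLI, assuming you already have the FHIR server's managed identity object ID and the storage account's resource ID (both appear here as placeholders):

```bash
# Grant the FHIR server's managed identity access to the import storage account.
az role assignment create \
  --assignee-object-id "<fhir-server-managed-identity-object-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```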
### 500 Internal Server Error
In this article, you've learned about how the Bulk import feature enables import
>[Configure export settings and set up a storage account](configure-export-data.md) >[!div class="nextstepaction"]
->[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
+>[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
healthcare-apis Iot Git Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-git-projects.md
Previously updated : 02/16/2022 Last updated : 05/25/2022 # Open-source projects
HealthKit
* [microsoft/healthkit-to-fhir](https://github.com/microsoft/healthkit-to-fhir): Provides a simple way to create FHIR Resources from HKObjects
-Google Fit on FHIR
+Fit on FHIR
-* [microsoft/googlefit-on-fhir](https://github.com/microsoft/googlefit-on-fhir): Bring Google Fit&#174; data to a FHIR service.
+* [microsoft/fit-on-fhir](https://github.com/microsoft/fit-on-fhir): Bring Google Fit&#174; data to a FHIR service.
Health Data Sync
hpc-cache Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/access-policies.md
Title: Use access policies in Azure HPC Cache description: How to create and apply custom access policies to limit client access to storage targets in Azure HPC Cache-+ Previously updated : 03/11/2021- Last updated : 05/19/2022+ # Control client access
If you don't need fine-grained control over storage target access, you can use t
## Create a client access policy
-Use the **Client access policies** page in the Azure portal to create and manage policies. <!-- is there AZ CLI for this? -->
+Use the **Client access policies** page in the Azure portal to create and manage policies. <!-- is there AZ CLI for this yet? -->
[![screenshot of client access policies page. Several policies are defined, and some are expanded to show their rules](media/policies-overview.png)](media/policies-overview.png#lightbox)
Check this box to allow the specified clients to directly mount this export's su
Choose whether or not to set root squash for clients that match this rule.
-This setting controls how Azure HPC Cache treats requests from the root user on client machines. When root squash is enabled, root users from a client are automatically mapped to a non-privileged user when they send requests through the Azure HPC Cache. It also prevents client requests from using set-UID permission bits.
+This setting controls how Azure HPC Cache treats requests from the root user on client machines. When root squash is enabled, root users from a client are automatically mapped to a non-privileged user when they send requests through the Azure HPC Cache. It also prevents client requests from using set-UID permission bits.
If root squash is disabled, a request from the client root user (UID 0) is passed through to a back-end NFS storage system as root. This configuration might allow inappropriate file access.
-Setting root squash for client requests can help compensate for the required ``no_root_squash`` setting on NAS systems that are used as storage targets. (Read more about [NFS storage target prerequisites](hpc-cache-prerequisites.md#nfs-storage-requirements).) It also can improve security when used with Azure Blob storage targets.
+Setting root squash for client requests can provide extra security for your storage target back-end systems. This might be important if you use a NAS system that is configured with ``no_root_squash`` as a storage target. (Read more about [NFS storage target prerequisites](hpc-cache-prerequisites.md#nfs-storage-requirements).)
If you turn on root squash, you must also set the anonymous ID user value. The portal accepts integer values between 0 and 4294967295. (The old values -2 and -1 are supported for backward compatibility, but not recommended for new configurations.)
hpc-cache Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/configuration.md
Title: Configure Azure HPC Cache settings description: Explains how to configure additional settings for the cache like MTU, custom NTP and DNS configuration, and how to access the express snapshots from Azure Blob storage targets.-+ Previously updated : 04/08/2021- Last updated : 05/16/2022+ # Configure additional Azure HPC Cache settings
To see the settings, open the cache's **Networking** page in the Azure portal.
![screenshot of networking page in Azure portal](media/networking-page.png)
-> [!NOTE]
-> A previous version of this page included a cache-level root squash setting, but this setting has moved to [client access policies](access-policies.md).
- <!-- >> [!TIP] > The [Managing Azure HPC Cache video](https://azure.microsoft.com/resources/videos/managing-hpc-cache/) shows the networking page and its settings. -->
hpc-cache Hpc Cache Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-prerequisites.md
More information is included in [Troubleshoot NAS configuration and NFS storage
* Check firewall settings to be sure that they allow traffic on all of these required ports. Be sure to check firewalls used in Azure as well as on-premises firewalls in your data center.
-* Root access (read/write): The cache connects to the back-end system as user ID 0. Check these settings on your storage system:
-
- * Enable `no_root_squash`. This option ensures that the remote root user can access files owned by root.
-
- * Check export policies to make sure they don't include restrictions on root access from the cache's subnet.
-
- * If your storage has any exports that are subdirectories of another export, make sure the cache has root access to the lowest segment of the path. Read [Root access on directory paths](troubleshoot-nas.md#allow-root-access-on-directory-paths) in the NFS storage target troubleshooting article for details.
-
-* NFS back-end storage must be a compatible hardware/software platform. The storage must support NFS Version 3 (NFSv3). Contact the Azure HPC Cache team for more details.
+* NFS back-end storage must be a compatible hardware/software platform. The storage must support NFS Version 3 (NFSv3). Contact the Azure HPC Cache team for details.
### NFS-mounted blob (ADLS-NFS) storage requirements
hpc-cache Troubleshoot Nas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/troubleshoot-nas.md
Title: Troubleshoot Azure HPC Cache NFS storage targets description: Tips to avoid and fix configuration errors and other problems that can cause failure when creating an NFS storage target-+ Previously updated : 03/18/2020- Last updated : 05/26/2022+ # Troubleshoot NAS configuration and NFS storage target issues This article gives solutions for some common configuration errors and other issues that could prevent Azure HPC Cache from adding an NFS storage system as a storage target.
-This article includes details about how to check ports and how to enable root access to a NAS system. It also includes detailed information about less common issues that might cause NFS storage target creation to fail.
+This article includes details about how to check ports and how to enable needed access to a NAS system. It also includes detailed information about less common issues that might cause NFS storage target creation to fail.
> [!TIP] > Before using this guide, read [prerequisites for NFS storage targets](hpc-cache-prerequisites.md#nfs-storage-requirements).
Make sure that all of the ports returned by the ``rpcinfo`` query allow unrestri
Check these settings both on the NAS itself and also on any firewalls between the storage system and the cache subnet.
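A quick way to see which ports the NAS advertises is an `rpcinfo` probe from a host that can reach it; the hostname below is only an example.

```bash
# List the RPC services and ports the storage system exposes.
rpcinfo -p nas01.example.com

# In the output, confirm that portmapper (111), nfs (2049), mountd, nlockmgr,
# and status are present, and that firewalls allow those ports from the
# cache subnet.
```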
-## Check root access
+## Check root squash settings
-Azure HPC Cache needs access to your storage system's exports to create the storage target. Specifically, it mounts the exports as user ID 0.
+Root squash settings can disrupt file access if they are improperly configured. You should check that the settings on each storage export and on the matching HPC Cache client access policies are consistent.
-Different storage systems use different methods to enable this access:
+Root squash prevents requests sent by the local superuser (root) on a client from being sent to a back-end storage system as root. It reassigns requests from root to a non-privileged user ID (UID) like 'nobody'.
-* Linux servers generally add ``no_root_squash`` to the exported path in ``/etc/exports``.
-* NetApp and EMC systems typically control access with export rules that are tied to specific IP addresses or networks.
+> [!TIP]
+>
+> Previous versions of Azure HPC Cache required NAS storage systems to allow root access from the HPC Cache. Now, you don't need to allow root access on a storage target export unless you want HPC Cache clients to have root access to the export.
+
+Root squash can be configured in an HPC Cache system in these places:
+
+* At the Azure HPC Cache - Use [client access policies](access-policies.md#root-squash) to configure root squash for clients that match specific filter rules. A client access policy is part of each NFS storage target namespace path.
+
+ The default client access policy does not squash root.
+
+* At the storage export - You can configure your storage system to reassign incoming requests from root to a non-privileged user ID (UID).
+
+These two settings should match. That is, if a storage system export squashes root, you should change its HPC Cache client access rule to also squash root. If the settings don't match, you can have access problems when you try to read or write to the back-end storage system through the HPC Cache.
-If using export rules, remember that the cache can use multiple different IP addresses from the cache subnet. Allow access from the full range of possible subnet IP addresses.
+This table illustrates the behavior for different root squash scenarios when a client request is sent as UID 0 (root). The scenarios marked with * are ***not recommended*** because they can cause access problems.
-> [!NOTE]
-> Although the cache needs root access to the back-end storage system, you can restrict access for clients that connect through the cache. Read [Control client access](access-policies.md#root-squash) for details.
+| Setting | UID sent from client | UID sent from HPC Cache | Effective UID on back-end storage |
+|--|--|--|--|
+| no root squash | 0 (root) | 0 (root) | 0 (root) |
+| *root squash at HPC Cache only | 0 (root) | 65534 (nobody) | 65534 (nobody) |
+| *root squash at NAS storage only | 0 (root) | 0 (root) | 65534 (nobody) |
+| root squash at HPC Cache and NAS | 0 (root) | 65534 (nobody) | 65534 (nobody) |
-Work with your NAS storage vendor to enable the right level of access for the cache.
+(UID 65534 is an example; when you turn on root squash in a client access policy you can customize the UID.)
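As an illustration of keeping the two settings consistent, assuming a Linux NFS server is the storage target (the export path, client network, and anonymous IDs below are examples only):

```bash
# Example /etc/exports entry that squashes root; pair it with an HPC Cache
# client access policy that also squashes root to the same anonymous UID:
#
#   /ifs/accounting  10.0.0.0/24(rw,sync,root_squash,anonuid=65534,anongid=65534)
#
# Re-export and verify after editing /etc/exports:
sudo exportfs -ra
sudo exportfs -v
```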
-### Allow root access on directory paths
-<!-- linked in prereqs article -->
+## Check access on directory paths
+<!-- previously linked in prereqs article as allow-root-access-on-directory-paths -->
-For NAS systems that export hierarchical directories, Azure HPC Cache needs root access to each export level.
+For NAS systems that export hierarchical directories, check that Azure HPC Cache has appropriate access to each export level in the path to the files you are using.
For example, a system might show three exports like these:
For example, a system might show three exports like these:
The export ``/ifs/accounting/payroll`` is a child of ``/ifs/accounting``, and ``/ifs/accounting`` is itself a child of ``/ifs``.
-If you add the ``payroll`` export as an HPC Cache storage target, the cache actually mounts ``/ifs/`` and accesses the payroll directory from there. So Azure HPC Cache needs root access to ``/ifs`` in order to access the ``/ifs/accounting/payroll`` export.
+If you add the ``payroll`` export as an HPC Cache storage target, the cache actually mounts ``/ifs/`` and accesses the payroll directory from there. So Azure HPC Cache needs sufficient access to ``/ifs`` in order to access the ``/ifs/accounting/payroll`` export.
This requirement is related to the way the cache indexes files and avoids file collisions, using file handles that the storage system provides.
A NAS system with hierarchical exports can give different file handles for the s
The back-end storage system keeps internal aliases for file handles, but Azure HPC Cache cannot tell which file handles in its index reference the same item. So the cache might cache different writes for the same file and apply the changes incorrectly, because it does not know that they are the same file.
-To avoid this possible file collision for files in multiple exports, Azure HPC Cache automatically mounts the shallowest available export in the path (``/ifs`` in the example) and uses the file handle given from that export. If multiple exports use the same base path, Azure HPC Cache needs root access to that path.
+To avoid this possible file collision for files in multiple exports, Azure HPC Cache automatically mounts the shallowest available export in the path (``/ifs`` in the example) and uses the file handle given from that export. If multiple exports use the same base path, Azure HPC Cache needs access to that path.
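To see how those exports appear from a host on the cache subnet, a `showmount` query against the NAS lists them (the hostname is an example; the paths match the scenario above).

```bash
# List the exports the NAS advertises.
showmount -e nas01.example.com

# Expected output for the example:
#   /ifs
#   /ifs/accounting
#   /ifs/accounting/payroll
#
# Because /ifs is the shallowest export in the path, make sure the cache's
# addresses can mount and traverse /ifs, not only /ifs/accounting/payroll.
```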
<!-- ## Enable export listing
iot-central Concepts Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-quotas-limits.md
There are various quotas and limits that apply to IoT Central applications. IoT
| - | -- | -- | | Number of concurrent job executions | 5 | For performance reasons, you shouldn't exceed this limit. |
-## Organizations
+## Users, roles, and organizations
| Item | Quota or limit | Notes | | - | -- | -- |
iot-dps Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/libraries-sdks.md
The Device Provisioning Service (DPS) libraries and SDKs help developers build I
| Platform | Package | Code repository | Samples | Quickstart | Reference | | --|--|--|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-csharp)| [Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
-| C|[apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Samples](https://github.com/Azure/azure-iot-sdk-c/tree/main/provisioning_client/samples)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-ansi-c)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) |
-| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-device-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-java/blob/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-samples)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-java)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) |
-| Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-device) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/device/samples)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-nodejs)|[Reference](/javascript/api/overview/azure/iothubdeviceprovisioning) |
-| Python|[pip](https://pypi.org/project/azure-iot-device/) |[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios)|[Quickstart](/azure/iot-dps/quick-create-simulated-device-x509?tabs=windows&pivots=programming-language-python)|[Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient) |
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-csharp&tabs=windows)| [Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
+| C|[apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Samples](https://github.com/Azure/azure-iot-sdk-c/tree/main/provisioning_client/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-ansi-c&tabs=windows)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) |
+| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-device-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-java/blob/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-java&tabs=windows)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) |
+| Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-device) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/device/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-nodejs&tabs=windows)|[Reference](/javascript/api/overview/azure/iothubdeviceprovisioning) |
+| Python|[pip](https://pypi.org/project/azure-iot-device/) |[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-python&tabs=windows)|[Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient) |
Microsoft also provides embedded device SDKs to facilitate development on resource-constrained devices. To learn more, see the [IoT Device Development Documentation](../iot-develop/about-iot-sdks.md).
Microsoft also provides embedded device SDKs to facilitate development on resour
| Platform | Package | Code repository | Samples | Quickstart | Reference | | --|--|--|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/service)|[Quickstart](/azure/iot-dps/quick-enroll-device-tpm?tabs=symmetrickey&pivots=programming-language-csharp)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.service) |
-| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-service-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-java/blob/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-samples)|[Quickstart](/azure/iot-dps/quick-enroll-device-tpm?tabs=symmetrickey&pivots=programming-language-java)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.service) |
-| Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-service)|[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/service/samples)|[Quickstart](/azure/iot-dps/quick-enroll-device-tpm?tabs=symmetrickey&pivots=programming-language-nodejs)|[Reference](/javascript/api/overview/azure/iothubdeviceprovisioning) |
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/service)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-csharp&tabs=symmetrickey)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.service) |
+| Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-service-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-java/blob/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-samples)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-java&tabs=symmetrickey)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.service) |
+| Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-service)|[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/service/samples)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-nodejs&tabs=symmetrickey)|[Reference](/javascript/api/overview/azure/iothubdeviceprovisioning) |
## Management SDKs
Microsoft also provides embedded device SDKs to facilitate development on resour
## Next steps
-The Device Provisioning Service documentation also provides [tutorials](how-to-legacy-device-symm-key.md) and [additional samples](quick-create-simulated-device-tpm.md) that you can use to try out the SDKs and libraries.
+The Device Provisioning Service documentation also provides [tutorials](how-to-legacy-device-symm-key.md) and [additional samples](quick-create-simulated-device-tpm.md) that you can use to try out the SDKs and libraries.
iot-dps Monitor Iot Dps Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps-reference.md
For more information on the schema of Activity Log entries, see [Activity Log s
- See [Monitoring Azure IoT Hub Device Provisioning Service](monitor-iot-dps.md) for a description of monitoring Azure IoT Hub Device Provisioning Service. -- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md
Review your deployment information, then select **Create**.
To monitor your deployment, see [Monitor IoT Edge deployments](how-to-monitor-iot-edge-deployments.md).
+> [!NOTE]
+> When a new IoT Edge deployment is created, it can take up to 5 minutes for IoT Hub to process the new configuration and propagate the new desired properties to the targeted devices.
+ ## Modify a deployment When you modify a deployment, the changes immediately replicate to all targeted devices. You can modify the following settings and features for an existing deployment:
When you delete a deployment, any deployed devices take on their next highest pr
## Next steps
-Learn more about [Deploying modules to IoT Edge devices](module-deployment-monitoring.md).
+Learn more about [Deploying modules to IoT Edge devices](module-deployment-monitoring.md).
iot-edge How To Deploy Cli At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-cli-at-scale.md
The create command for deployment takes the following parameters:
To monitor a deployment by using the Azure CLI, see [Monitor IoT Edge deployments](how-to-monitor-iot-edge-deployments.md#monitor-a-deployment-with-azure-cli).
+> [!NOTE]
+> When a new IoT Edge deployment is created, it can take up to 5 minutes for IoT Hub to process the new configuration and propagate the new desired properties to the targeted devices.
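While you wait for propagation, you can check how many targeted devices have applied the deployment. The following is a minimal sketch rather than part of the article; the deployment name is a placeholder, and it assumes the `azure-iot` extension is installed:

```azurecli
# Inspect the deployment's system metrics, such as targeted and applied device counts.
# {YourDeploymentName} and {YourIoTHubName} are placeholders.
az iot edge deployment show --deployment-id {YourDeploymentName} --hub-name {YourIoTHubName} --query systemMetrics.results
```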
+ ## Modify a deployment When you modify a deployment, the changes immediately replicate to all targeted devices.
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
To complete this tutorial, you need the following:
4. In NuGet Package Manager, search for **Microsoft.IdentityModel.Clients.ActiveDirectory**. Click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the license. > [!IMPORTANT]
- > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+ > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
5. In Program.cs, replace the existing **using** statements with the following code:
iot-hub Iot Hub Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template.md
To complete this tutorial, you need the following:
4. In NuGet Package Manager, search for **Microsoft.IdentityModel.Clients.ActiveDirectory**. Click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the license. > [!IMPORTANT]
- > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+ > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
5. In Program.cs, replace the existing **using** statements with the following code:
iot-hub Quickstart Send Telemetry Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-send-telemetry-cli.md
Title: Quickstart - Send telemetry to Azure IoT Hub (CLI) quickstart
-description: This quickstart shows developers new to IoT Hub how to get started by using the Azure CLI to create an IoT hub, send telemetry, and view messages between a device and the hub.
+description: This quickstart shows developers new to IoT Hub how to get started by using the Azure CLI to create an IoT hub, send telemetry, and view messages between a device and the hub.
Previously updated : 03/24/2022 Last updated : 05/26/2022 # Quickstart: Send telemetry from a device to an IoT hub and monitor it with the Azure CLI
-IoT Hub is an Azure service that enables you to ingest high volumes of telemetry from your IoT devices into the cloud for storage or processing. In this quickstart, you use the Azure CLI to create an IoT Hub and a simulated device, send device telemetry to the hub, and send a cloud-to-device message. You also use the Azure portal to visualize device metrics. This is a basic workflow for developers who use the CLI to interact with an IoT Hub application.
+IoT Hub is an Azure service that enables you to ingest high volumes of telemetry from your IoT devices into the cloud for storage or processing. In this codeless quickstart, you use the Azure CLI to create an IoT hub and a simulated device. You'll send device telemetry to the hub, and send messages, call methods, and update properties on the device. You'll also use the Azure portal to visualize device metrics. This article shows a basic workflow for developers who use the CLI to interact with an IoT Hub application.
## Prerequisites - If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Azure CLI. You can run all commands in this quickstart using the Azure Cloud Shell, an interactive CLI shell that runs in your browser. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this quickstart requires Azure CLI version 2.0.76 or later. Run `az --version` to find the version. To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+- Azure CLI. You can run all commands in this quickstart using the Azure Cloud Shell, an interactive CLI shell that runs in your browser or in an app such as Windows Terminal. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this quickstart requires Azure CLI version 2.36 or later. Run `az --version` to find the version. To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Sign in to the Azure portal
To launch the Cloud Shell:
> [!NOTE] > If this is the first time you've used the Cloud Shell, it prompts you to create storage, which is required to use the Cloud Shell. Select a subscription to create a storage account and Microsoft Azure Files share.
-2. Select your preferred CLI environment in the **Select environment** dropdown. This quickstart uses the **Bash** environment. All the following CLI commands work in the PowerShell environment too.
-
+2. Select your preferred CLI environment in the **Select environment** dropdown. This quickstart uses the **Bash** environment. All the following CLI commands work in PowerShell too.
![Select CLI environment](media/quickstart-send-telemetry-cli/cloud-shell-environment.png) ## Prepare two CLI sessions
-In this section, you prepare two Azure CLI sessions. If you're using the Cloud Shell, you will run the two sessions in separate browser tabs. If using a local CLI client, you will run two separate CLI instances. You'll use the first session as a simulated device, and the second session to monitor and send messages. To run a command, select **Copy** to copy a block of code in this quickstart, paste it into your shell session, and run it.
+Next, you prepare two Azure CLI sessions. If you're using the Cloud Shell, you'll run these sessions in separate Cloud Shell tabs. If using a local CLI client, you'll run separate CLI instances. Use the separate CLI sessions for the following tasks:
+- The first session simulates an IoT device that communicates with your IoT hub.
+- The second session either monitors the device in the first session, or sends messages, commands, and property updates.
+
+To run a command, select **Copy** to copy a block of code in this quickstart, paste it into your shell session, and run it.
-Azure CLI requires you to be logged into your Azure account. All communication between your Azure CLI shell session and your IoT hub is authenticated and encrypted. As a result, this quickstart does not need additional authentication that you'd use with a real device, such as a connection string.
+Azure CLI requires you to be logged into your Azure account. All communication between your Azure CLI shell session and your IoT hub is authenticated and encrypted. As a result, this quickstart doesn't need extra authentication that you'd use with a real device, such as a connection string.
-- Run the [az extension add](/cli/azure/extension#az-extension-add) command to add the Microsoft Azure IoT Extension for Azure CLI to your CLI shell. The IOT Extension adds IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS) specific commands to Azure CLI.
+- In the first CLI session, run the [az extension add](/cli/azure/extension#az-extension-add) command. The command adds the Microsoft Azure IoT Extension for Azure CLI to your CLI shell. The IoT extension adds IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS) specific commands to Azure CLI.
```azurecli
az extension add --name azure-iot
```
Azure CLI requires you to be logged into your Azure account. All communication b
[!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)] -- Open a second CLI session. If you're using the Cloud Shell, select **Open new session**. If you're using the CLI locally, open a second instance.
+- Open the second CLI session. If you're using the Cloud Shell in a browser, use the **Open new session** button. If using the CLI locally, open a second CLI instance.
>[!div class="mx-imgBorder"] >![Open new Cloud Shell session](media/quickstart-send-telemetry-cli/cloud-shell-new-session.png)
In this section, you use the Azure CLI to create a resource group and an IoT hub
> [!TIP] > Optionally, you can create an Azure resource group, an IoT hub, and other resources by using the [Azure portal](iot-hub-create-through-portal.md), [Visual Studio Code](iot-hub-create-use-iot-toolkit.md), or other programmatic methods.
-1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *eastus* location.
+1. In the first CLI session, run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *eastus* location.
```azurecli
az group create --name MyResourceGroup --location eastus
```
-1. Run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
+1. In the first CLI session, run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It takes a few minutes to create an IoT hub.
*YourIotHubName*. Replace this placeholder and the surrounding braces in the following command, using the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. Use your IoT hub name in the rest of this quickstart wherever you see the placeholder.
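   The create command itself is elided in this diff. A minimal sketch, assuming the resource group from the previous step and default hub settings, could look like this:

   ```azurecli
   # Create an IoT hub in the resource group created earlier; {YourIoTHubName} is a placeholder.
   az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName}
   ```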
In this section, you use the Azure CLI to create a resource group and an IoT hub
## Create and monitor a device
-In this section, you create a simulated device in the first CLI session. The simulated device sends device telemetry to your IoT hub. In the second CLI session, you monitor events and telemetry, and send a cloud-to-device message to the simulated device.
+In this section, you create a simulated device in the first CLI session. The simulated device sends device telemetry to your IoT hub. In the second CLI session, you monitor events and telemetry.
To create and start a simulated device:
-1. Run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create) command in the first CLI session. This creates the simulated device identity.
+1. In the first CLI session, run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create) command. This command creates the simulated device identity.
*YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.

   *simDevice*. You can use this name directly for the simulated device in the rest of this quickstart. Optionally, use a different name.

   ```azurecli
- az iot hub device-identity create --device-id simDevice --hub-name {YourIoTHubName}
+ az iot hub device-identity create -d simDevice -n {YourIoTHubName}
```
-1. Run the [az iot device simulate](/cli/azure/iot/device#az-iot-device-simulate) command in the first CLI session. This starts the simulated device. The device sends telemetry to your IoT hub and receives messages from it.
+1. In the first CLI session, run the [az iot device simulate](/cli/azure/iot/device#az-iot-device-simulate) command. This command starts the simulated device. The device sends telemetry to your IoT hub and receives messages from it.
*YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
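   The simulate command is elided here; based on the restart command shown later in this quickstart, a minimal sketch looks like this:

   ```azurecli
   # Start the simulated device; it sends telemetry until you stop it with Ctrl+C.
   az iot device simulate -d simDevice -n {YourIoTHubName}
   ```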
To create and start a simulated device:
To monitor a device:
-1. In the second CLI session, run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. This starts monitoring the simulated device. The output shows telemetry that the simulated device sends to the IoT hub.
+1. In the second CLI session, run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. This command continuously monitors the simulated device. The output shows telemetry such as events and property state changes that the simulated device sends to the IoT hub.
*YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.-
+
```azurecli
- az iot hub monitor-events --output table --hub-name {YourIoTHubName}
+ az iot hub monitor-events --output table -p all -n {YourIoTHubName}
```
+
+ :::image type="content" source="media/quickstart-send-telemetry-cli/cloud-shell-monitor.png" alt-text="Screenshot of monitoring events on a simulated device.":::
- ![Cloud Shell monitor events](media/quickstart-send-telemetry-cli/cloud-shell-monitor.png)
-
-1. After you monitor the simulated device in the second CLI session, press Ctrl+C to stop monitoring.
+1. After you monitor the simulated device in the second CLI session, press Ctrl+C to stop monitoring. Keep the second CLI session open to use in later steps.
## Use the CLI to send a message
-In this section, you use the second CLI session to send a message to the simulated device.
+In this section, you send a message to the simulated device.
-1. In the first CLI session, confirm that the simulated device is running. If the device has stopped, run the following command to start it:
+1. In the first CLI session, confirm that the simulated device is still running. If the device stopped, run the following command to restart it:
*YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
In this section, you use the second CLI session to send a message to the simulat
az iot device simulate -d simDevice -n {YourIoTHubName} ```
-1. In the second CLI session, run the [az iot device c2d-message send](/cli/azure/iot/device/c2d-message#az-iot-device-c2d-message-send) command. This sends a cloud-to-device message from your IoT hub to the simulated device. The message includes a string and two key-value pairs.
+1. In the second CLI session, run the [az iot device c2d-message send](/cli/azure/iot/device/c2d-message#az-iot-device-c2d-message-send) command. This command sends a cloud-to-device message from your IoT hub to the simulated device. The message includes a string and two key-value pairs.
*YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
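   The send command itself is elided in this diff. A hedged sketch, with an assumed message body and two example key-value pairs, could look like this:

   ```azurecli
   # Send a cloud-to-device message with a body and two example application properties.
   az iot device c2d-message send -d simDevice -n {YourIoTHubName} --data "Hello World" --props "key0=value0;key1=value1"
   ```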
In this section, you use the second CLI session to send a message to the simulat
1. In the first CLI session, confirm that the simulated device received the message.
- ![Cloud Shell cloud-to-device message](media/quickstart-send-telemetry-cli/cloud-shell-receive-message.png)
+ :::image type="content" source="media/quickstart-send-telemetry-cli/cloud-shell-receive-message.png" alt-text="Screenshot of a simulated device receiving a message.":::
++
+## Use the CLI to call a device method
+
+In this section, you call a direct method on the simulated device.
+
+1. As you did before, confirm that the simulated device in the first CLI session is running. If not, restart it.
+
+1. In the second CLI session, run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command. The simulated device doesn't have a preexisting method with this name; the command invokes an example method name on the simulated device and returns a payload.
+
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+
+ ```azurecli
+ az iot hub invoke-device-method --mn MySampleMethod -d simDevice -n {YourIoTHubName}
+ ```
+1. In the first CLI session, confirm the output shows the method call.
+
+ :::image type="content" source="media/quickstart-send-telemetry-cli/cloud-shell-method-payload.png" alt-text="Screenshot of a simulated device displaying output after a method was invoked.":::
+
+## Use the CLI to update device properties
+
+In this section, you update the state of the simulated device by setting property values.
+
+1. As you did before, confirm that the simulated device in the first CLI session is running. If not, restart it.
+
+1. In the second CLI session, run the [az iot hub device-twin update](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-update) command. This command updates the properties to the desired state on the IoT hub device twin that corresponds to your simulated device. In this case, the command sets example temperature condition properties.
+
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+
+ ```azurecli
+ az iot hub device-twin update -d simDevice --desired '{"conditions":{"temperature":{"warning":98, "critical":107}}}' -n {YourIoTHubName}
+ ```
+
+1. In the first CLI session, confirm that the simulated device outputs the property update.
+
+ :::image type="content" source="media/quickstart-send-telemetry-cli/cloud-shell-device-twin-update.png" alt-text="Screenshot that shows how to update properties on a device.":::
+
+1. In the second CLI session, run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command. This command reports changes to the device properties.
+
+ *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub.
+
+ ```azurecli
+ az iot hub device-twin show -d simDevice --query properties.reported -n {YourIoTHubName}
+ ```
-1. After you view the message, close the second CLI session. Keep the first CLI session open. You use it to clean up resources in a later step.
+ :::image type="content" source="media/quickstart-send-telemetry-cli/cloud-shell-device-twin-show-update.png" alt-text="Screenshot that shows the updated properties on a device twin.":::
## View messaging metrics in the portal
The Azure portal enables you to manage all aspects of your IoT hub and devices.
To visualize messaging metrics in the Azure portal:
-1. In the left navigation menu on the portal, select **All Resources**. This lists all resources in your subscription, including the IoT hub you created.
+1. In the left navigation menu on the portal, select **All Resources**. This page lists all resources in your subscription, including the IoT hub you created.
1. Select the link on the IoT hub you created. The portal displays the overview page for the hub.
If you continue to the next recommended article, you can keep the resources you'
To delete a resource group by name:
-1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This removes the resource group, the IoT Hub, and the device registration you created.
+1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created.
```azurecli
az group delete --name MyResourceGroup
```
To delete a resource group by name:
## Next steps
-In this quickstart, you used the Azure CLI to create an IoT hub, create a simulated device, send telemetry, monitor telemetry, send a cloud-to-device message, and clean up resources. You used the Azure portal to visualize messaging metrics on your device.
+In this quickstart, you used the Azure CLI to create an IoT hub, create a simulated device, send and monitor telemetry, call a method, set desired properties, and clean up resources. You used the Azure portal to visualize messaging metrics on your device.
-If you are a device developer, the suggested next step is to see the telemetry quickstart that uses the Azure IoT Device SDK for C. Optionally, see one of the available Azure IoT Hub telemetry quickstart articles in your preferred language or SDK.
+If you're a device developer, the suggested next step is to see the telemetry quickstart that uses the Azure IoT Device SDK for C. Optionally, see one of the available Azure IoT Hub telemetry quickstart articles in your preferred language or SDK.
To learn how to control your simulated device from a back-end application, continue to the next quickstart.
key-vault About Keys Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/about-keys-details.md
Following table shows a summary of key types and supported algorithms.
|Key types/sizes/curves| Encrypt/Decrypt<br>(Wrap/Unwrap) | Sign/Verify |
| --- | --- | --- |
-|EC-P256, EC-P256K, EC-P384, EC-521|NA|ES256<br>ES256K<br>ES384<br>ES512|
+|EC-P256, EC-P256K, EC-P384, EC-P521|NA|ES256<br>ES256K<br>ES384<br>ES512|
|RSA 2K, 3K, 4K| RSA1_5<br>RSA-OAEP<br>RSA-OAEP-256|PS256<br>PS384<br>PS512<br>RS256<br>RS384<br>RS512<br>RSNULL|
|AES 128-bit, 256-bit <br/>(Managed HSM only)| AES-KW<br>AES-GCM<br>AES-CBC| NA|
|||
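As an illustration of the table above (not part of the original article), here's a hedged Azure CLI sketch that creates an EC key on the P-521 curve, which pairs with the ES512 sign/verify algorithm; the vault and key names are placeholders:

```azurecli
# Create an EC P-521 key; Key Vault can then sign and verify with ES512.
az keyvault key create --vault-name {YourVaultName} --name {YourKeyName} --kty EC --curve P-521
```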
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys-powershell.md
$sasTemplate="sv=2018-03-28&ss=bfqt&srt=sco&sp=rw&spr=https"
|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.<br /><br /> Note that HTTP only is not a permitted value.| For more information about account SAS, see:
-[Create an account SAS](https://docs.microsoft.com/rest/api/storageservices/create-account-sas)
+[Create an account SAS](/rest/api/storageservices/create-account-sas)
> [!NOTE] > Key Vault ignores lifetime parameters like 'Signed Expiry', 'Signed Start' and parameters introduced after 2018-03-28 version
The output of this command will show your SAS definition string.
## Next steps - [Managed storage account key samples](https://github.com/Azure-Samples?utf8=%E2%9C%93&q=key+vault+storage&type=&language=)-- [Key Vault PowerShell reference](/powershell/module/az.keyvault/#key_vault)
+- [Key Vault PowerShell reference](/powershell/module/az.keyvault/#key_vault)
key-vault Overview Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys.md
SAS definition template will be the passed to the `--template-uri` parameter in
|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.<br /><br /> Note that HTTP only isn't a permitted value.| For more information about account SAS, see:
-[Create an account SAS](https://docs.microsoft.com/rest/api/storageservices/create-account-sas)
+[Create an account SAS](/rest/api/storageservices/create-account-sas)
> [!NOTE] > Key Vault ignores lifetime parameters like 'Signed Expiry', 'Signed Start' and parameters introduced after 2018-03-28 version
az keyvault storage sas-definition show --id https://<YourKeyVaultName>.vault.az
- Learn more about [keys, secrets, and certificates](/rest/api/keyvault/). - Review articles on the [Azure Key Vault team blog](/archive/blogs/kv/).-- See the [az keyvault storage](/cli/azure/keyvault/storage) reference documentation.
+- See the [az keyvault storage](/cli/azure/keyvault/storage) reference documentation.
lab-services Add Lab Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/add-lab-creator.md
You might need to add an external user as a lab creator. If that is the case, yo
- A non-Microsoft email account, such as one provided by Yahoo or Google. However, these types of accounts must be linked with a Microsoft account. - A GitHub account. This account must be linked with a Microsoft account.
-For instructions to add someone as a guest account in Azure AD, see [Quickstart: Add guest users in the Azure portal - Azure AD](/azure/active-directory/external-identities/b2b-quickstart-add-guest-users-portal). If using an email account that's provided by your universityΓÇÖs Azure AD, you don't have to add them as a guest account.
+For instructions to add someone as a guest account in Azure AD, see [Quickstart: Add guest users in the Azure portal - Azure AD](../active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md). If using an email account that's provided by your universityΓÇÖs Azure AD, you don't have to add them as a guest account.
Once the user has an Azure AD account, [add the Azure AD user account to Lab Creator role](#add-azure-ad-user-account-to-lab-creator-role).
See the following articles:
- [As a lab owner, create and manage labs](how-to-manage-labs.md) - [As a lab owner, set up and publish templates](how-to-create-manage-template.md) - [As a lab owner, configure and control usage of a lab](how-to-configure-student-usage.md)-- [As a lab user, access labs](how-to-use-lab.md)
+- [As a lab user, access labs](how-to-use-lab.md)
lab-services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/capacity-limits.md
These actions may be disabled if there no more cores that can be enabled for you
If you reach the cores limit, you can request a limit increase to continue using Azure Lab Services. The request process is a checkpoint to ensure your subscription isnΓÇÖt involved in any cases of fraud or unintentional, sudden large-scale deployments.
-To create a support request, you must be an [Owner](/azure/role-based-access-control/built-in-roles), [Contributor](/azure/role-based-access-control/built-in-roles), or be assigned to the [Support Request Contributor](/azure/role-based-access-control/built-in-roles) role at the subscription level. For information about creating support requests in general, see how to create a [How to create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+To create a support request, you must be an [Owner](../role-based-access-control/built-in-roles.md), [Contributor](../role-based-access-control/built-in-roles.md), or be assigned to the [Support Request Contributor](../role-based-access-control/built-in-roles.md) role at the subscription level. For information about creating support requests in general, see how to create a [How to create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
The admin can follow these steps to request a limit increase:
Before you set up a large number of VMs across your labs, we recommend that you
See the following articles: - [As an admin, see VM sizing](administrator-guide.md#vm-sizing).-- [Frequently asked questions](classroom-labs-faq.yml).
+- [Frequently asked questions](classroom-labs-faq.yml).
lab-services How To Attach Detach Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-detach-shared-image-gallery.md
This article shows you how to attach or detach an Azure Compute Gallery to a lab plan. > [!IMPORTANT]
-> Lab plan administrators must manually [replicate images](/azure/virtual-machines/shared-image-galleries) to other regions in the compute gallery. Replicate an Azure Compute Gallery image to the same region as the lab plan to be shown in the list of virtual machine images during lab creation.
+> Lab plan administrators must manually [replicate images](../virtual-machines/shared-image-galleries.md) to other regions in the compute gallery. Replicate an Azure Compute Gallery image to the same region as the lab plan to be shown in the list of virtual machine images during lab creation.
Saving images to a compute gallery and replicating those images incurs additional cost. This cost is separate from the Azure Lab Services usage cost. For more information about Azure Compute Gallery pricing, see [Azure Compute Gallery ΓÇô Billing](../virtual-machines/azure-compute-gallery.md#billing).
To learn how to save a template image to the compute gallery or use an image fro
To explore other options for bringing custom images to compute gallery outside of the context of a lab, see [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md).
-For more information about compute galleries in general, see [compute gallery](../virtual-machines/shared-image-galleries.md).
+For more information about compute galleries in general, see [compute gallery](../virtual-machines/shared-image-galleries.md).
lab-services How To Connect Vnet Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-connect-vnet-injection.md
You can connect to your own virtual network to your lab plan when you create the
Before you configure VNet injection for your lab plan: -- [Create a virtual network](/azure/virtual-network/quick-create-portal). The virtual network must be in the same region as the lab plan.-- [Create a subnet](/azure/virtual-network/virtual-network-manage-subnet) for the virtual network.-- [Create a network security group (NSG)](/azure/virtual-network/manage-network-security-group) and apply it to the subnet.
+- [Create a virtual network](../virtual-network/quick-create-portal.md). The virtual network must be in the same region as the lab plan.
+- [Create a subnet](../virtual-network/virtual-network-manage-subnet.md) for the virtual network.
+- [Create a network security group (NSG)](../virtual-network/manage-network-security-group.md) and apply it to the subnet.
- [Delegate the subnet](#delegate-the-virtual-network-subnet-for-use-with-a-lab-plan) to **Microsoft.LabServices/labplans**. Certain on-premises networks are connected to Azure Virtual Network either through [ExpressRoute](../expressroute/expressroute-introduction.md) or [Virtual Network Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). These services must be set up outside of Azure Lab Services. To learn more about connecting an on-premises network to Azure using ExpressRoute, see [ExpressRoute overview](../expressroute/expressroute-introduction.md). For on-premises connectivity using a Virtual Network Gateway, the gateway, specified virtual network, network security group, and the lab plan all must be in the same region.
Certain on-premises networks are connected to Azure Virtual Network either throu
## Delegate the virtual network subnet for use with a lab plan
-After you create a subnet for your virtual network, you must [delegate the subnet](/azure/virtual-network/subnet-delegation-overview) for use with Azure Lab Services.
+After you create a subnet for your virtual network, you must [delegate the subnet](../virtual-network/subnet-delegation-overview.md) for use with Azure Lab Services.
Only one lab plan at a time can be delegated for use with one subnet.
-1. Create a [virtual network](/azure/virtual-network/manage-virtual-network), [subnet](/azure/virtual-network/virtual-network-manage-subnet), and [network security group (NSG)](/azure/virtual-network/manage-network-security-group) if not done already.
+1. Create a [virtual network](../virtual-network/manage-virtual-network.md), [subnet](../virtual-network/virtual-network-manage-subnet.md), and [network security group (NSG)](../virtual-network/manage-network-security-group.md) if not done already.
1. Open the **Subnets** page for your virtual network. 1. Select the subnet you wish to delegate to Lab Services to open the property window for that subnet. 1. For the **Delegate subnet to a service** property, select **Microsoft.LabServices/labplans**. Select **Save**.
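If you script your network setup instead of using the portal steps above, a hedged Azure CLI sketch of the same delegation (all resource names are placeholders) could look like this:

```azurecli
# Delegate an existing subnet to Azure Lab Services lab plans.
az network vnet subnet update --resource-group {YourResourceGroup} --vnet-name {YourVNetName} --name {YourSubnetName} --delegations Microsoft.LabServices/labplans
```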
See the following articles:
- As an admin, [attach a compute gallery to a lab plan](how-to-attach-detach-shared-image-gallery.md). - As an admin, [configure automatic shutdown settings for a lab plan](how-to-configure-auto-shutdown-lab-plans.md).-- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
+- As an admin, [add lab creators to a lab plan](add-lab-creator.md).
lab-services How To Create A Lab With Shared Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-a-lab-with-shared-resource.md
To use a shared resource, the lab plan must be set up to use advanced networking
> [!WARNING] > Advanced networking must be enabled during lab plan creation. It can't be added later.
-When your lab plan is set to use advanced networking, the template VM and student VMs should now have access to the shared resource. You might have to update the virtual network's [network security group](/azure/virtual-network/network-security-groups-overview), virtual network's [user-defined routes](/azure/virtual-network/virtual-networks-udr-overview#user-defined) or server's firewall rules.
+When your lab plan is set to use advanced networking, the template VM and student VMs should now have access to the shared resource. You might have to update the virtual network's [network security group](../virtual-network/network-security-groups-overview.md), the virtual network's [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined), or the server's firewall rules.
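For example, if the shared resource is a license server that listens on a single TCP port, a hedged sketch of an NSG rule that allows that traffic could look like the following; the port number, priority, and resource names are assumptions, not values from the article:

```azurecli
# Allow inbound TCP traffic to a hypothetical license server port (27000 is an example).
az network nsg rule create --resource-group {YourResourceGroup} --nsg-name {YourNsgName} --name AllowLicenseServerPort --priority 1000 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 27000
```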
## Tips
One of the most common shared resources is a license server. The following list
## Next steps
-As an administrator, [create a lab plan with advanced networking](how-to-connect-vnet-injection.md).
+As an administrator, [create a lab plan with advanced networking](how-to-connect-vnet-injection.md).
lab-services How To Use Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-shared-image-gallery.md
An educator can pick a custom image available in the compute gallery for the tem
>[!IMPORTANT] >Azure Compute Gallery images will not show if they have been disabled or if the region of the lab plan is different than the gallery images.
-For more information about replicating images, see [replication in Azure Compute Gallery](/azure/virtual-machines/shared-image-galleries.md). For more information about disabling gallery images for a lab plan, see [enable and disable images](how-to-attach-detach-shared-image-gallery.md#enable-and-disable-images).
+For more information about replicating images, see [replication in Azure Compute Gallery](../virtual-machines/shared-image-galleries.md). For more information about disabling gallery images for a lab plan, see [enable and disable images](how-to-attach-detach-shared-image-gallery.md#enable-and-disable-images).
### Re-save a custom image to compute gallery
To learn about how to set up a compute gallery by attaching and detaching it to
To explore other options for bringing custom images to compute gallery outside of the context of a lab, see [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md).
-For more information about compute galleries in general, see [Azure Compute Gallery overview](../virtual-machines/shared-image-galleries.md).
+For more information about compute galleries in general, see [Azure Compute Gallery overview](../virtual-machines/shared-image-galleries.md).
lab-services Lab Services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-whats-new.md
In this release, there are a few known issues:
- When using virtual network injection, use caution in making changes to the virtual network and subnet. Changes may cause the lab VMs to stop working. For example, deleting your virtual network will cause all the lab VMs to stop working. We plan to improve this experience in the future, but for now make sure to delete labs before deleting networks. - Moving lab plan and lab resources from one Azure region to another isn't supported.-- Azure Compute [resource provider must be registered](/azure/azure-resource-manager/management/resource-providers-and-types) before Azure Lab Services can [create and attach an Azure Compute Gallery resource](how-to-attach-detach-shared-image-gallery.md#create-and-attach-a-compute-gallery).
+- Azure Compute [resource provider must be registered](../azure-resource-manager/management/resource-providers-and-types.md) before Azure Lab Services can [create and attach an Azure Compute Gallery resource](how-to-attach-detach-shared-image-gallery.md#create-and-attach-a-compute-gallery).
### Lab plans replace lab accounts
Let's cover each step to get started with the April 2022 Update (preview) in mor
1. **Validate images**. Each of the VM sizes has been remapped to use a newer Azure VM Compute SKU. If using an [attached compute gallery](how-to-attach-detach-shared-image-gallery.md), validate images with new [Azure VM Compute SKUs](administrator-guide.md#vm-sizing). Validate that each image in the compute gallery is replicated to regions the lab plans and labs are in. 1. **Configure integrations**. Optionally, configure [integration with Canvas](lab-services-within-canvas-overview.md) including [adding the app and linking lab plans](how-to-get-started-create-lab-within-canvas.md). Alternately, configure [integration with Teams](lab-services-within-teams-overview.md) by [adding the app to Teams groups](how-to-get-started-create-lab-within-teams.md). 1. **Create labs**. Create labs to test educator and student experience in preparation for general availability of the updates. Lab administrators and educators should validate performance based on common student workloads.
-1. **Update cost management reports.** Update reports to include the new cost entry type, `Microsoft.LabServices/labs`, for labs created using the April 2022 Update (preview). [Built-in and custom tags](cost-management-guide.md#understand-the-entries) allow for [grouping](/azure/cost-management-billing/costs/quick-acm-cost-analysis) in cost analysis. For more information about tracking costs, see [Cost management for Azure Lab Services](cost-management-guide.md).
+1. **Update cost management reports.** Update reports to include the new cost entry type, `Microsoft.LabServices/labs`, for labs created using the April 2022 Update (preview). [Built-in and custom tags](cost-management-guide.md#understand-the-entries) allow for [grouping](../cost-management-billing/costs/quick-acm-cost-analysis.md) in cost analysis. For more information about tracking costs, see [Cost management for Azure Lab Services](cost-management-guide.md).
## Next steps - As an admin, [create a lab plan](tutorial-setup-lab-plan.md). - As an admin, [manage your lab plan](how-to-manage-lab-plans.md). - As an educator, [create a lab](tutorial-setup-lab.md).-- As a student, [access a lab](how-to-use-lab.md).
+- As a student, [access a lab](how-to-use-lab.md).
lab-services Quick Create Lab Plan Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-powershell.md
New-AzRoleAssignment -SignInName <emailOrUserprincipalname> `
-ResourceGroupName "MyResourceGroup" ```
-For more information about role assignments, see [Assign Azure roles using Azure PowerShell](/azure/role-based-access-control/role-assignments-powershell).
+For more information about role assignments, see [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md).
## Clean up resources
$plan | Remove-AzLabServicesLabPlan
In this QuickStart, you created a resource group and a lab plan. As an admin, you can learn more about [Azure PowerShell module](/powershell/azure) and [Az.LabServices cmdlets](/powershell/module/az.labservices/). > [!div class="nextstepaction"]
-> [Quickstart: Create a lab using PowerShell and the Azure module](quick-create-lab-powershell.md)
+> [Quickstart: Create a lab using PowerShell and the Azure module](quick-create-lab-powershell.md)
lab-services Quick Create Lab Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-template.md
Alternately, an educator may delete a lab from the Azure Lab Services website: [
For a step-by-step tutorial that guides you through the process of creating a template, see: > [!div class="nextstepaction"]
-> [Tutorial: Create and deploy your first ARM template](/azure/azure-resource-manager/templates/template-tutorial-create-first-template)
+> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
lighthouse Cross Tenant Management Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/cross-tenant-management-experience.md
Most tasks and services can be performed on delegated resources across managed t
- View alerts for delegated subscriptions, with the ability to view and refresh alerts across all subscriptions - View activity log details for delegated subscriptions-- [Log analytics](../../azure-monitor/logs/service-providers.md): Query data from remote workspaces in multiple tenants (note that automation accounts used to access data from workspaces in customer tenants must be created in the same tenant)
+- [Log analytics](../../azure-monitor/logs/workspace-design.md#multiple-tenant-strategies): Query data from remote workspaces in multiple tenants (note that automation accounts used to access data from workspaces in customer tenants must be created in the same tenant)
- Create, view, and manage [metric alerts](../../azure-monitor/alerts/alerts-metric.md), [log alerts](../../azure-monitor/alerts/alerts-log.md), and [activity log alerts](../../azure-monitor/alerts/alerts-activity-log.md) in customer tenants - Create alerts in customer tenants that trigger automation, such as Azure Automation runbooks or Azure Functions, in the managing tenant through webhooks - Create [diagnostic settings](../..//azure-monitor/essentials/diagnostic-settings.md) in workspaces created in customer tenants, to send resource logs to workspaces in the managing tenant
lighthouse Monitor At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/monitor-at-scale.md
As a service provider, you may have onboarded multiple customer tenants to [Azur
This topic shows you how to use [Azure Monitor Logs](../../azure-monitor/logs/data-platform-logs.md) in a scalable way across the customer tenants you're managing. Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md). > [!NOTE]
-> Be sure that users in your managing tenants have been granted the [necessary roles for managing Log Analytics workspaces](../../azure-monitor/logs/manage-access.md#manage-access-using-azure-permissions) on your delegated customer subscriptions.
+> Be sure that users in your managing tenants have been granted the [necessary roles for managing Log Analytics workspaces](../../azure-monitor/logs/manage-access.md#azure-rbac) on your delegated customer subscriptions.
## Create Log Analytics workspaces
load-balancer Load Balancer Multivip Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multivip-overview.md
The Floating IP rule type is the foundation of several load balancer configurati
* Multiple frontend configurations are only supported with IaaS VMs and virtual machine scale sets. * With the Floating IP rule, your application must use the primary IP configuration for outbound SNAT flows. If your application binds to the frontend IP address configured on the loopback interface in the guest OS, Azure's outbound SNAT is not available to rewrite the outbound flow and the flow fails. Review [outbound scenarios](load-balancer-outbound-connections.md).
-* Floating IP is not currently supported on secondary IP configurations for Internal Load Balancing scenarios.
+* Floating IP is not currently supported on secondary IP configurations.
* Public IP addresses have an effect on billing. For more information, see [IP Address pricing](https://azure.microsoft.com/pricing/details/ip-addresses/) * Subscription limits apply. For more information, see [Service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits) for details.
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
This article introduces a PowerShell script that creates a Standard Load Balance
* The Basic Load Balancer needs to be in the same resource group as the backend VMs and NICs. * If the Standard load balancer is created in a different region, you wonΓÇÖt be able to associate the VMs existing in the old region to the newly created Standard Load Balancer. To work around this limitation, make sure to create a new VM in the new region. * If your Load Balancer does not have any frontend IP configuration or backend pool, you are likely to hit an error running the script. Make sure they are not empty.
-* The script cannot migrate Virtual Machine Scale Set from Basic Load Balancer's backend to Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and follow [Update or delete a load balancer used by virtual machine scale sets](https://docs.microsoft.com/azure/load-balancer/update-load-balancer-with-vm-scale-set) to complete the migration.
+* The script cannot migrate a virtual machine scale set from a Basic Load Balancer's backend to a Standard Load Balancer's backend. We recommend manually creating a Standard Load Balancer and following [Update or delete a load balancer used by virtual machine scale sets](./update-load-balancer-with-vm-scale-set.md) to complete the migration.
## Change IP allocation method to Static for frontend IP Configuration (Ignore this step if it's already static)
Yes it migrates traffic. If you would like to migrate traffic personally, use [t
## Next steps
-[Learn about Standard Load Balancer](load-balancer-overview.md)
+[Learn about Standard Load Balancer](load-balancer-overview.md)
load-testing How To Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-customer-managed-keys.md
Azure Load Testing Preview automatically encrypts all data stored in your load testing resource with keys that Microsoft provides (service-managed keys). Optionally, you can add a second layer of security by also providing your own (customer-managed) keys. Customer-managed keys offer greater flexibility for controlling access and using key-rotation policies.
-The keys you provide are stored securely using [Azure Key Vault](/azure/key-vault/general/overview). You can create a separate key for each Azure Load Testing resource you enable with customer-managed keys.
+The keys you provide are stored securely using [Azure Key Vault](../key-vault/general/overview.md). You can create a separate key for each Azure Load Testing resource you enable with customer-managed keys.
Azure Load Testing uses the customer-managed key to encrypt the following data in the load testing resource:
You have to set the **Soft Delete** and **Purge Protection** properties on your
# [Azure portal](#tab/portal)
-To learn how to create a key vault with the Azure portal, see [Create a key vault using the Azure portal](/azure/key-vault/general/quick-create-portal). When you create the key vault, select **Enable purge protection**, as shown in the following image.
+To learn how to create a key vault with the Azure portal, see [Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md). When you create the key vault, select **Enable purge protection**, as shown in the following image.
:::image type="content" source="media/how-to-configure-customer-managed-keys/purge-protection-on-azure-key-vault.png" alt-text="Screenshot that shows how to enable purge protection on a new key vault.":::
$keyVault = New-AzKeyVault -Name <key-vault> `
-EnablePurgeProtection ```
-To learn how to enable purge protection on an existing key vault with PowerShell, see [Azure Key Vault recovery overview](/azure/key-vault/general/key-vault-recovery?tabs=azure-powershell).
+To learn how to enable purge protection on an existing key vault with PowerShell, see [Azure Key Vault recovery overview](../key-vault/general/key-vault-recovery.md?tabs=azure-powershell).
# [Azure CLI](#tab/azure-cli)
az keyvault create \
--enable-purge-protection ```
-To learn how to enable purge protection on an existing key vault with Azure CLI, see [Azure Key Vault recovery overview](/azure/key-vault/general/key-vault-recovery?tabs=azure-cli).
+To learn how to enable purge protection on an existing key vault with Azure CLI, see [Azure Key Vault recovery overview](../key-vault/general/key-vault-recovery.md?tabs=azure-cli).
## Add a key
-Next, add a key to the key vault. Azure Load Testing encryption supports RSA keys. For more information about supported key types, see [About keys](/azure/key-vault/keys/about-keys).
+Next, add a key to the key vault. Azure Load Testing encryption supports RSA keys. For more information about supported key types, see [About keys](../key-vault/keys/about-keys.md).
# [Azure portal](#tab/portal)
-To learn how to add a key with the Azure portal, see [Set and retrieve a key from Azure Key Vault using the Azure portal](/azure/key-vault/keys/quick-create-portal).
+To learn how to add a key with the Azure portal, see [Set and retrieve a key from Azure Key Vault using the Azure portal](../key-vault/keys/quick-create-portal.md).
# [PowerShell](#tab/powershell)
To configure customer-managed keys for a new Azure Load Testing resource, follow
1. In the Azure portal, navigate to the **Azure Load Testing** page, and select the **Create** button to create a new resource.
-1. Follow the steps outlined in [create an Azure Load Testing resource](/azure/load-testing/quickstart-create-and-run-load-test#create_resource) to fill out the fields on the **Basics** tab.
+1. Follow the steps outlined in [create an Azure Load Testing resource](./quickstart-create-and-run-load-test.md#create_resource) to fill out the fields on the **Basics** tab.
1. Go to the **Encryption** tab. In the **Encryption type** field, select **Customer-managed keys (CMK)**.
You can change the managed identity for customer-managed keys for an existing Az
1. If the encryption type is **Customer-managed keys**, select the type of identity to use to authenticate to the key vault. The options include **System-assigned** (the default) or **User-assigned**.
- To learn more about each type of managed identity, see [Managed identity types](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).
+ To learn more about each type of managed identity, see [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
- If you select System-assigned, the system-assigned managed identity needs to be enabled on the resource and granted access to the AKV before changing the identity for customer-managed keys. - If you select **User-assigned**, you must select an existing user-assigned identity that has permissions to access the key vault. To learn how to create a user-assigned identity, see [Use managed identities for Azure Load Testing Preview](how-to-use-a-managed-identity.md).
When you revoke the encryption key you may be able to run tests for about 10 min
## Next steps - Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).-- Learn how to [Parameterize a load test](./how-to-parameterize-load-tests.md).
+- Learn how to [Parameterize a load test](./how-to-parameterize-load-tests.md).
load-testing Monitor Load Testing Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/monitor-load-testing-reference.md
Operational log entries include elements listed in the following table:
<!-- replace below with the proper link to your main monitoring service article --> - See [Monitor Azure Load Testing](monitor-load-testing.md) for a description of monitoring Azure Load Testing.-- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitor Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
load-testing Monitor Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/monitor-load-testing.md
The following sections build on this article by describing the specific data gat
## Monitoring data
-Azure Load Testing collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-Azure-resources).
+Azure Load Testing collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
See [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md) for detailed information on logs metrics created by Azure Load Testing.
The following sections describe which types of logs you can collect.
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). You can find the schema for Azure Load Testing resource logs in the [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md#resource-logs).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md). You can find the schema for Azure Load Testing resource logs in the [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md#resource-logs).
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
For a list of resource logs types collected for Azure Load Testing, see [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md#resource-logs).
For a list of resource logs types collected for Azure Load Testing, see [Monitor
> [!IMPORTANT]
-> When you select **Logs** from the Azure Load Testing menu, Log Analytics is opened with the query scope set to the current [service name]. This means that log queries will only include data from that resource. If you want to run a query that includes data from other [service resource] or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
+> When you select **Logs** from the Azure Load Testing menu, Log Analytics is opened with the query scope set to the current [service name]. This means that log queries will only include data from that resource. If you want to run a query that includes data from other [service resource] or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
Following are queries that you can use to help you monitor your Azure Load Testing resources:
AzureLoadTestingOperation
- See [Monitor Azure Load Testing data reference](monitor-load-testing-reference.md) for a reference of the metrics, logs, and other important values created by Azure Load Testing. -- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
Learn more about the [key concepts for Azure Load Testing](./concept-load-testin
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Azure RBAC role with permission to create and manage resources in the subscription, such as [Contributor](/azure/role-based-access-control/built-in-roles#contributor) or [Owner](/azure/role-based-access-control/built-in-roles#owner)
+- Azure RBAC role with permission to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner)
## <a name="create_resource"></a> Create an Azure Load Testing resource
You now have an Azure Load Testing resource, which you used to load test an exte
You can reuse this resource to learn how to identify performance bottlenecks in an Azure-hosted application by using server-side metrics. > [!div class="nextstepaction"]
-> [Identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md)
+> [Identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md)
logic-apps Logic Apps Add Run Inline Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-add-run-inline-code.md
Title: Add and run code snippets by using inline code
-description: Learn how to create and run code snippets by using inline code actions for automated tasks and workflows that you create with Azure Logic Apps.
+ Title: Run code snippets in workflows
+description: Run code snippets in workflows using Inline Code operations in Azure Logic Apps.
ms.suite: integration Previously updated : 05/25/2021 Last updated : 05/24/2022
-# Add and run code snippets by using inline code in Azure Logic Apps
+# Run code snippets in workflows with Inline Code operations in Azure Logic Apps
-When you want to run a piece of code inside your logic app workflow, you can add the built-in Inline Code action as a step in your logic app's workflow. This action works best when you want to run code that fits this scenario:
+To create and run a code snippet in your logic app workflow without much setup, you can use the **Inline Code** built-in connector. This connector has an action that returns the result from the code snippet so that you can use that output in your workflow's subsequent actions.
-* Runs in JavaScript. More languages are in development.
+Currently, the connector only has a single action, which works best for a code snippet with the following attributes, but more actions are in development. The Inline Code connector also has
+[different limits](logic-apps-limits-and-config.md#inline-code-action-limits), based on whether your logic app workflow is [Consumption or Standard](logic-apps-overview.md#resource-environment-differences).
-* Finishes running in five seconds or fewer.
+| Action | Language | Language version | Run duration | Data size | Other notes |
+|--------|----------|------------------|--------------|-----------|-------------|
+| **Execute JavaScript Code** | JavaScript | **Standard**: <br>Node.js 12.x.x or 14.x.x <br><br>**Consumption**: <br>Node.js 8.11.1 <br><br>For more information, review [Standard built-in objects](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects). | Finishes in 5 seconds or fewer. | Handles data up to 50 MB. | - Doesn't require working with the [**Variables** actions](logic-apps-create-variables-store-values.md), which are unsupported by the action. <br><br>- Doesn't support the `require()` function for running JavaScript. |
+|||||||
-* Handles data up to 50 MB in size.
+To run code that doesn't fit these attributes, you can [create and call a function through Azure Functions](logic-apps-azure-functions.md) instead.
-* Doesn't require working with the [**Variables** actions](../logic-apps/logic-apps-create-variables-store-values.md), which are not yet supported.
+This article shows how the action works in an example workflow that starts with an Office 365 Outlook trigger. The workflow runs when a new email arrives in the associated Outlook email account. The sample code snippet extracts any email addresses that exist in the email body and returns those addresses as output that you can use in a subsequent action.
-* Uses Node.js version 8.11.1 for [multi-tenant based logic apps](logic-apps-overview.md) or [Node.js versions 12.x.x or 14.x.x](https://nodejs.org/en/download/releases/) for [single-tenant based logic apps](single-tenant-overview-compare.md).
+The following diagram shows the highlights from the example workflow:
- For more information, see [Standard built-in objects](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects).
+### [Consumption](#tab/consumption)
- > [!NOTE]
- > The `require()` function isn't supported by the Inline Code action for running JavaScript.
+![Screenshot showing an example Consumption logic app workflow with the Inline Code action.](./media/logic-apps-add-run-inline-code/inline-code-overview-consumption.png)
-This action runs the code snippet and returns the output from that snippet as a token that's named `Result`. You can use this token with subsequent actions in your logic app's workflow. For other scenarios where you want to create a function for your code, try [creating and calling a function through Azure Functions instead](../logic-apps/logic-apps-azure-functions.md) in your logic app.
+### [Standard](#tab/standard)
-In this article, the example logic app triggers when a new email arrives in a work or school account. The code snippet extracts and returns any email addresses that appear in the email body.
+![Screenshot showing an example Standard logic app workflow with the Inline Code action.](./media/logic-apps-add-run-inline-code/inline-code-overview-standard.png)
-![Screenshot that shows an example logic app](./media/logic-apps-add-run-inline-code/inline-code-example-overview.png)
+ ## Prerequisites
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* The logic app workflow where you want to add your code snippet. The workflow must already start with a trigger.
-* The logic app workflow where you want to add your code snippet, including a trigger. The example in this topic uses the Office 365 Outlook trigger that's named **When a new email arrives**.
+ This article's example uses the Office 365 Outlook trigger that's named **When a new email arrives**.
- If you don't have a logic app, review the following documentation:
+ If you don't have a workflow, review the following documentation:
- * Multi-tenant: [Quickstart: Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md)
- * Single-tenant: [Create single-tenant based logic app workflows](create-single-tenant-workflows-azure-portal.md)
+ * Consumption: [Quickstart: Create your first logic app](quickstart-create-first-logic-app-workflow.md)
-* Based on whether your logic app is multi-tenant or single-tenant, review the following information.
+ * Standard: [Create single-tenant based logic app workflows](create-single-tenant-workflows-azure-portal.md)
- * Multi-tenant: Requires Node.js version 8.11.1. You also need an empty [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) that's linked to your logic app. Make sure that you use an integration account that's appropriate for your use case or scenario.
+* Based on whether your logic app is Consumption or Standard, review the following requirements:
- For example, [Free-tier](../logic-apps/logic-apps-pricing.md#integration-accounts) integration accounts are meant only for exploratory scenarios and workloads, not production scenarios, are limited in usage and throughput, and aren't supported by a service-level agreement (SLA).
+ * Consumption: Requires [Node.js version 8.11.1](https://nodejs.org/en/download/releases/) and a [link to an integration account](logic-apps-enterprise-integration-create-integration-account.md), empty or otherwise, from your logic app resource.
- Other integration account tiers incur costs, but include SLA support, offer more throughput, and have higher limits. Learn more about integration account [tiers](../logic-apps/logic-apps-pricing.md#integration-accounts), [pricing](https://azure.microsoft.com/pricing/details/logic-apps/), and [limits](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits).
+ > [!IMPORTANT]
+ >
+ > Make sure that you use an integration account that's appropriate for your use case or scenario.
+ >
+ > For example, [Free-tier](logic-apps-pricing.md#integration-accounts) integration accounts are meant only
+ > for exploratory scenarios and workloads, not production scenarios, are limited in usage and throughput,
+ > and aren't supported by a service-level agreement (SLA).
+ >
+ > Other integration account tiers incur costs, but include SLA support, offer more throughput, and have higher limits.
+ > Learn more about [integration account tiers](logic-apps-pricing.md#integration-accounts),
+ > [limits](logic-apps-limits-and-config.md#integration-account-limits), and
+ > [pricing](https://azure.microsoft.com/pricing/details/logic-apps/).
- * Single-tenant: Requires [Node.js versions 10.x.x, 11.x.x, or 12.x.x](https://nodejs.org/en/download/releases/). However, you don't need an integration account, but the Inline Code action is renamed **Inline Code Operations** and has [updated limits](logic-apps-limits-and-config.md).
+ * Standard: Requires [Node.js versions 12.x.x or 14.x.x](https://nodejs.org/en/download/releases/), but no integration account.
-## Add inline code
+## Add the Inline Code action
-1. If you haven't already, in the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+### [Consumption](#tab/consumption)
-1. In your workflow, choose where to add the Inline Code action, either as a new step at the end of your workflow or between steps.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
- To add the action between steps, move your mouse pointer over the arrow that connects those steps. Select the plus sign (**+**) that appears, and select **Add an action**.
+1. On the designer, add the Inline Code action to your workflow. You can add an action either as a new step at the end of your workflow or between steps. This example adds the action under the Office 365 Outlook trigger.
- This example adds the action under the Office 365 Outlook trigger.
+ * To add the action at the end of your workflow, select **New step**.
- ![Add the new step under the trigger](./media/logic-apps-add-run-inline-code/add-new-step.png)
+ * To add the action between steps, move your mouse pointer over the arrow that connects those steps. Select the plus sign (**+**) that appears, and select **Add an action**.
-1. In the action search box, enter `inline code`. From the actions list, select the action named **Execute JavaScript Code**.
+1. In the **Choose an operation** search box, enter **inline code**. From the actions list, select the action named **Execute JavaScript Code**.
- ![Select the "Execute JavaScript Code" action](./media/logic-apps-add-run-inline-code/select-inline-code-action.png)
+ ![Screenshot showing Consumption workflow designer and "Execute JavaScript Code" action selected.](./media/logic-apps-add-run-inline-code/select-inline-code-action-consumption.png)
The action appears in the designer and by default, contains some sample code, including a `return` statement.
- ![Inline Code action with default sample code](./media/logic-apps-add-run-inline-code/inline-code-action-default.png)
+ ![Screenshot showing the Inline Code action with default sample code.](./media/logic-apps-add-run-inline-code/inline-code-action-default-consumption.png)
1. In the **Code** box, delete the sample code, and enter your code. Write the code that you'd put inside a method, but without the method signature.
+ > [!TIP]
+ >
+ > When your cursor is in the **Code** box, the dynamic content list appears. Although you'll
+ > use this list later, you can ignore and leave the list open for now. Don't select **Hide**.
+ If you start typing a recognized keyword, the autocomplete list appears so that you can select from available keywords, for example:
- ![Keyword autocomplete list](./media/logic-apps-add-run-inline-code/auto-complete.png)
+ ![Screenshot showing the Consumption workflow, Inline Code action, and keyword autocomplete list.](./media/logic-apps-add-run-inline-code/auto-complete-consumption.png)
- This example code snippet first creates a variable that stores a *regular expression*, which specifies a pattern to match in input text. The code then creates a variable that stores the email body data from the trigger.
+ The following example code snippet first creates a variable named **myResult** that stores a *regular expression*, which specifies a pattern to match in input text. The code then creates a variable named **email** that stores the email message's body content from the trigger outputs.
- ![Create variables](./media/logic-apps-add-run-inline-code/save-email-body-variable.png)
+ ![Screenshot showing the Consumption workflow, Inline Code action, and example code that creates variables.](./media/logic-apps-add-run-inline-code/save-email-body-variable-consumption.png)
- To make the results from the trigger and previous actions easier to reference, the dynamic content list appears when your cursor is inside the **Code** box. For this example, the list shows available results from the trigger, including the **Body** token, which you can now select.
+1. With your cursor still in the **Code** box, from the open dynamic content list, find the **When a new email arrives** section, and select the **Body** property, which references the email message's body.
- After you select the **Body** token, the inline code action resolves the token to a `workflowContext` object that references the email's `Body` property value:
+ ![Screenshot showing the Consumption workflow, Inline Code action, dynamic content list, and email message's "Body" property selected.](./media/logic-apps-add-run-inline-code/select-output-consumption.png)
- ![Select result](./media/logic-apps-add-run-inline-code/inline-code-example-select-outputs.png)
+ The dynamic content list shows the outputs from the trigger and any preceding actions when those outputs match the input format for the edit box that's currently in focus. This list makes these outputs easier to use and reference from your workflow. For this example, the list shows the outputs from the Outlook trigger, including the email message's **Body** property.
- In the **Code** box, your snippet can use the read-only `workflowContext` object as input. This object includes properties that give your code access to the results from the trigger and previous actions in your workflow. For more information, see [Reference trigger and action results in your code](#workflowcontext) later in this topic.
+ After you select the **Body** property, the Inline Code action resolves the token to a read-only `workflowContext` JSON object, which your snippet can use as input. The `workflowContext` object includes properties that give your code access to the outputs from the trigger and preceding actions in your workflow, such as the trigger's `body` property, which differs from the email message's **Body** property. For more information about the `workflowContext` object, see [Reference trigger and action outputs using the workflowContext object](#workflowcontext) later in this article.
- > [!NOTE]
- > If your code snippet references action names that use the dot (.) operator, you must add those
- > action names to the [**Actions** parameter](#add-parameters). Those references must also enclose
- > the action names with square brackets ([]) and quotation marks, for example:
+ > [!IMPORTANT]
+ >
+ > If your code snippet references action names that include the dot (**.**) operator,
+ > those references have to enclose these action names with square brackets (**[]**)
+ > and quotation marks (**""**), for example:
>
- > `// Correct`</br>
- > `workflowContext.actions["my.action.name"].body`</br>
+ > `// Correct`</br>
+ > `workflowContext.actions["my.action.name"].body`
> > `// Incorrect`</br> > `workflowContext.actions.my.action.name.body`
+ >
+ > Also, in the Inline Code action, you have to add the [**Actions** parameter](#add-parameters)
+ > and then add these action names to that parameter. For more information, see
+ > [Add dependencies as parameters to an Inline Code action](#add-parameters) later in this article.
+
+1. To differentiate the email message's **Body** property that you selected from the trigger's `body` property, rename the second `body` property to `Body` instead. Add the closing semicolon (**;**) at the end to finish the code statement.
+
+ ![Screenshot showing the Consumption logic app workflow, Inline Code action, and renamed "Body" property with closing semicolon.](./media/logic-apps-add-run-inline-code/rename-body-property-consumption.png)
+
+ The Inline Code action doesn't syntactically require a `return` statement. However, by including the `return` statement, you can more easily reference the action results later in your workflow by using the **Result** token in later actions.
+
+ In this example, the code snippet returns the result by calling the `match()` function, which finds any matches for the specified regular expression in the email message body. The **Create HTML table** action then uses the **Result** token to reference the results from the Inline Code action and creates a single result. A minimal sketch of the finished snippet appears after these steps.
+
+ ![Screenshot showing the finished Consumption logic app workflow.](./media/logic-apps-add-run-inline-code/inline-code-complete-example-consumption.png)
+
+1. When you're done, save your workflow.
+
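For reference, here's a minimal sketch of the finished snippet that these steps describe. The regular expression and the resolved token path (`workflowContext.trigger.outputs.body.Body`) are illustrative assumptions, not fixed values; use the pattern you need and the token that the designer resolves for your own trigger.

```javascript
// Minimal sketch of the example snippet (Consumption workflow).
// Assumption: the designer resolved the email message's Body token to
// workflowContext.trigger.outputs.body.Body for the Office 365 Outlook trigger.

// Regular expression that matches email addresses (illustrative pattern).
var myResult = /\w+([.-]?\w+)*@\w+([.-]?\w+)*(\.\w{2,})+/g;

// The email message's body content from the trigger outputs.
var email = workflowContext.trigger.outputs.body.Body;

// Returns an array of matched email addresses (or null if none are found).
// Later actions can reference this output through the Result token.
return email.match(myResult);
```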
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. On the designer, add the Inline Code action to your workflow. You can add an action either as a new step at the end of your workflow or between steps. This example adds the action under the Office 365 Outlook trigger.
+
+ * To add the action at the end of your workflow, select the plus sign (**+**), and then select **Add an action**.
+
+ * To add the action between steps, move your mouse pointer over the arrow that connects those steps. Select the plus sign (**+**) that appears, and select **Add an action**.
+
+1. In the **Choose an operation** search box, enter **inline code**. From the actions list, select the action named **Execute JavaScript Code**.
- The Inline Code action doesn't require a `return` statement, but the results from a `return` statement are available for reference in later actions through the **Result** token. For example, the code snippet returns the result by calling the `match()` function, which finds matches in the email body against the regular expression. The **Compose** action uses the **Result** token to reference the results from the inline code action and creates a single result.
+ ![Screenshot showing Standard workflow designer and "Execute JavaScript Code" action selected.](./media/logic-apps-add-run-inline-code/select-inline-code-action-standard.png)
- ![Finished logic app](./media/logic-apps-add-run-inline-code/inline-code-complete-example.png)
+1. In the **code** box, enter your code. Write the code that you'd put inside a method, but without the method signature.
+
+ > [!TIP]
+ >
+ > When your cursor is in the **code** box, the dynamic content list appears. Although you'll
+ > use this list later, you can ignore and leave the list open for now. Don't select **Hide**.
+
+ If you start typing a recognized keyword, the autocomplete list appears so that you can select from available keywords, for example:
+
+ ![Screenshot showing the Standard workflow, Inline Code action, and keyword autocomplete list.](./media/logic-apps-add-run-inline-code/auto-complete-standard.png)
+
+ The following example code snippet first creates a variable named **myResult** that stores a *regular expression*, which specifies a pattern to match in input text. The code then creates a variable named **email** that stores the email message's body content from the trigger outputs.
+
+ ![Screenshot showing the Standard workflow, Inline Code action, and example code that creates variables.](./media/logic-apps-add-run-inline-code/save-email-body-variable-standard.png)
+
+1. With your cursor still in the **code** box, from the open dynamic content list, find the **When a new email arrives** section, and select the **Body** token, which references the email's message body.
+
+ ![Screenshot showing the Standard workflow, Inline Code action, dynamic content list, and email message's "Body" property selected.](./media/logic-apps-add-run-inline-code/select-output-standard.png)
+
+ The dynamic content list shows the outputs from the trigger and any preceding actions where those outputs match the input format for the edit box that's currently in focus. This list makes these outputs easier to use and reference from your workflow. For this example, the list shows the outputs from the Outlook trigger, including the email message's **Body** property.
+
+ After you select the **Body** property, the Inline Code action resolves the token to a read-only `workflowContext` JSON object, which your snippet can use as input. The `workflowContext` object includes properties that give your code access to the outputs from the trigger and preceding actions in your workflow, such as the trigger's `body` property, which differs from the email message's **Body** property. For more information about the `workflowContext` object, see [Reference trigger and action outputs using the workflowContext object](#workflowcontext) later in this article.
+
+ > [!IMPORTANT]
+ >
+ > If your code snippet references action names that include the dot (**.**) operator,
+ > those references have to enclose these action names with square brackets (**[]**)
+ > and quotation marks (**""**), for example:
+ >
+ > `// Correct`</br>
+ > `workflowContext.actions["my.action.name"].body`
+ >
+ > `// Incorrect`</br>
+ > `workflowContext.actions.my.action.name.body`
+ >
+ > Also, in the Inline Code action, you have to add the **Actions** parameter
+ > and then add these action names to that parameter. For more information, see
+ > [Add dependencies as parameters to an Inline Code action](#add-parameters) later in this article.
-1. When you're done, save your logic app.
+1. To differentiate the email message's **Body** property that you selected from the trigger's `body` property, rename the second `body` property to `Body` instead. Add the closing semicolon (**;**) at the end to finish the code statement.
+
+ ![Screenshot showing the Standard logic app workflow, Inline Code action, and renamed "Body" property with closing semicolon.](./media/logic-apps-add-run-inline-code/rename-body-property-standard.png)
+
+ The Inline Code action doesn't syntactically require a `return` statement. However, by including the `return` statement, you can reference the action results later in your workflow by using the **Outputs** token in later actions.
+
+ In this example, the code snippet returns the result by calling the `match()` function, which finds any matches for the specified regular expression in the email message body. A minimal sketch of the finished snippet appears after these steps.
+
+ ![Screenshot showing the Standard logic app workflow and Inline Code action with "return" statement.](./media/logic-apps-add-run-inline-code/return-statement-standard.png)
+
+ The **Create HTML table** action then uses the **Outputs** token to reference the results from the Inline Code action and creates a single result.
+
+ ![Screenshot showing the finished Standard logic app workflow.](./media/logic-apps-add-run-inline-code/inline-code-complete-example-standard.png)
+
+1. When you're done, save your workflow.
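The same minimal sketch applies to the Standard workflow; the only difference is that later actions reference the returned array through the **Outputs** token. The regular expression and the resolved token path below are illustrative assumptions.

```javascript
// Minimal sketch of the example snippet (Standard workflow).
// Assumption: the Body token resolves to workflowContext.trigger.outputs.body.Body.
var myResult = /\w+([.-]?\w+)*@\w+([.-]?\w+)*(\.\w{2,})+/g;  // illustrative email pattern
var email = workflowContext.trigger.outputs.body.Body;       // email message body from the trigger
return email.match(myResult);                                // referenced later through the Outputs token
```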
++ <a name="workflowcontext"></a>
-### Reference trigger and action results in your code
+### Reference trigger and action outputs using the workflowContext object
-The `workflowContext` object has this structure, which includes the `actions`, `trigger`, and `workflow` subproperties:
+From inside your code snippet on the designer, you can use the dynamic content list to select a token that references the output from the trigger or any preceding action. When you select the token, the Inline Code action resolves that token to a read-only `workflowContext` JSON object. This object gives your code access to the outputs from the trigger, any preceding actions, and the workflow. The object uses the following structure and includes the `actions`, `trigger`, and `workflow` properties, which are also objects:
```json {
The `workflowContext` object has this structure, which includes the `actions`, `
} ```
-This table contains more information about these subproperties:
+The following table has more information about these properties:
| Property | Type | Description |
-|-||-|
-| `actions` | Object collection | Result objects from actions that run before your code snippet runs. Each object has a *key-value* pair where the key is the name of an action, and the value is equivalent to calling the [actions() function](../logic-apps/workflow-definition-language-functions-reference.md#actions) with `@actions('<action-name>')`. The action's name uses the same action name that's used in the underlying workflow definition, which replaces spaces (" ") in the action name with underscores (_). This object provides access to action property values from the current workflow instance run. |
-| `trigger` | Object | Result object from the trigger and equivalent to calling the [trigger() function](../logic-apps/workflow-definition-language-functions-reference.md#trigger). This object provides access to trigger property values from the current workflow instance run. |
-| `workflow` | Object | The workflow object and equivalent to calling the [workflow() function](../logic-apps/workflow-definition-language-functions-reference.md#workflow). This object provides access to workflow property values, such as the workflow name, run ID, and so on, from the current workflow instance run. |
-|||
+|----------|------|-------------|
+| `actions` | Object collection | The result objects from any preceding actions that run before your code snippet runs. Each object has a *key-value* pair where the key is the action name, and the value is equivalent to the result from calling the [actions() function](workflow-definition-language-functions-reference.md#actions) with the `@actions('<action-name>')` expression. <br><br>The action's name uses the same action name that appears in the underlying workflow definition, which replaces spaces (**" "**) in the action name with underscores (**\_**). This object collection provides access to the action's property values from the current workflow instance run. |
+| `trigger` | Object | The result object from the trigger, where the result is equivalent to calling the [trigger() function](workflow-definition-language-functions-reference.md#trigger). This object provides access to the trigger's property values from the current workflow instance run. |
+| `workflow` | Object | The workflow object, which is equivalent to calling the [workflow() function](workflow-definition-language-functions-reference.md#workflow). This object provides access to the property values, such as the workflow name, run ID, and so on, from the current workflow instance run. |
+||||
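As a rough sketch of the overall shape that this table describes (a sketch only; the nested property names depend on your specific trigger and actions):

```javascript
// Rough sketch of the workflowContext object's shape. The nested properties
// are placeholders; the actual values come from your trigger and actions.
var workflowContextShape = {
    workflow: {
        // Workflow properties, such as the workflow name and run ID.
    },
    trigger: {
        // The trigger's result object, including its outputs.
    },
    actions: {
        // One entry per preceding action, keyed by the action's JSON name
        // (spaces replaced with underscores), for example:
        Send_approval_email: {
            // That action's result object.
        }
    }
};
```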
-In this topic's example, the `workflowContext` object has these properties that your code can access:
+In this article's example, the `workflowContext` JSON object might have the following sample properties and values from the Outlook trigger:
```json {
In this topic's example, the `workflowContext` object has these properties that
<a name="add-parameters"></a>
-## Add parameters
+## Add dependencies as parameters to an Inline Code action
-In some cases, you might have to explicitly require that the Inline Code action includes results from the trigger or specific actions that your code references as dependencies by adding the **Trigger** or **Actions** parameters. This option is useful for scenarios where the referenced results aren't found at run time.
+In some scenarios, you might have to explicitly require that the Inline Code action includes outputs from the trigger or actions that your code references as dependencies. For example, you have to take this extra step when your code references outputs that aren't available at workflow run time. During workflow creation time, the Azure Logic Apps engine analyzes the code snippet to determine whether the code references any trigger or action outputs. If those references exist, the engine includes those outputs automatically. At workflow run time, if the referenced trigger or action output isn't found in the `workflowContext` object, the engine generates an error. To resolve this error, you have to add that trigger or action as an explicit dependency for the Inline Code action. Another scenario that requires you to take this step is when the `workflowContext` object references a trigger or action name that uses the dot operator (**.**).
-> [!TIP]
-> If you plan to reuse your code, add references to properties by using the **Code** box so that your code
-> includes the resolved token references, rather than adding the trigger or actions as explicit dependencies.
+To add a trigger or action as a dependency, you add the **Trigger** or **Actions** parameters as applicable to the Inline Code action. You then add the trigger or action names as they appear in your workflow's underlying JSON definition.
-For example, suppose you have code that references the **SelectedOption** result from the **Send approval email** action for the Office 365 Outlook connector. At create time, the Logic Apps engine analyzes your code to determine whether you've referenced any trigger or action results and includes those results automatically. At run time, should you get an error that the referenced trigger or action result isn't available in the specified `workflowContext` object, you can add that trigger or action as an explicit dependency. In this example, you add the **Actions** parameter and specify that the Inline Code action explicitly include the result from the **Send approval email** action.
+> [!NOTE]
+>
+> You can't add **Variables** operations, loops such as **For each** or **Until**, and iteration
+> indexes as explicit dependencies.
+>
+> If you plan to reuse your code, make sure to always use the code snippet edit box to reference
+> trigger and action outputs. That way, your code includes the resolved token references, rather than
+> just adding the trigger or action outputs as explicit dependencies.
-To add these parameters, open the **Add new parameter** list, and select the parameters you want:
+For example, suppose the Office 365 Outlook connector's **Send approval email** action precedes the code snippet in the sample workflow. The following example code snippet includes a reference to the **SelectedOption** output from this action.
- ![Add parameters](./media/logic-apps-add-run-inline-code/inline-code-action-add-parameters.png)
+### [Consumption](#tab/consumption)
- | Parameter | Description |
- |--|-|
- | **Actions** | Include results from previous actions. See [Include action results](#action-results). |
- | **Trigger** | Include results from the trigger. See [Include trigger results](#trigger-results). |
- |||
-
-<a name="trigger-results"></a>
-
-### Include trigger results
+![Screenshot that shows the Consumption workflow and Inline Code action with updated example code snippet.](./media/logic-apps-add-run-inline-code/add-actions-parameter-code-snippet-consumption.png)
-If you select **Triggers**, you're prompted whether to include trigger results.
+### [Standard](#tab/standard)
-* From the **Trigger** list, select **Yes**.
+![Screenshot that shows the Standard workflow and Inline Code action with updated example code snippet.](./media/logic-apps-add-run-inline-code/add-actions-parameter-code-snippet-standard.png)
-<a name="action-results"></a>
+
-### Include action results
+For this example, you have to add only the **Actions** parameter, and then add the action's JSON name, `Send_approval_email`, to the parameter. That way, you specify that the Inline Code action explicitly includes the output from the **Send approval email** action.
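As a hedged sketch of such a snippet, the following code reads the approval response from the preceding action. The exact path under the action's result object (`outputs.body.SelectedOption`) is an assumption for illustration, so verify it against your own run history.

```javascript
// Sketch only: assumes the Send approval email action exposes the approver's
// choice at outputs.body.SelectedOption in its result object.
var approvalResponse = workflowContext.actions.Send_approval_email.outputs.body.SelectedOption;

// Return true only when the approver selected "Approve".
return approvalResponse === "Approve";
```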
-If you select **Actions**, you're prompted for the actions that you want to add. However, before you start adding actions, you need the version of the action name that appears in the logic app's underlying workflow definition.
+### Find the trigger or action's JSON name
-* This capability doesn't support variables, loops, and iteration indexes.
+Before you start, you need the JSON name for the trigger or action in the underlying workflow definition.
-* Names in your logic app's workflow definition use an underscore (_), not a space.
+* Names in your workflow definition use an underscore (_), not a space.
-* For action names that use the dot operator (.), include those operators, for example:
+* If an action name uses the dot operator (.), include that operator, for example:
`My.Action.Name`
-1. On the designer toolbar, select **Code view**, and search inside the `actions` attribute for the action name.
+### [Consumption](#tab/consumption)
+
+1. On the workflow designer toolbar, select **Code view**. In the `actions` object, find the action's name.
- For example, `Send_approval_email_` is the JSON name for the **Send approval email** action.
+ For example, `Send_approval_email` is the JSON name for the **Send approval email** action.
- ![Find action name in JSON](./media/logic-apps-add-run-inline-code/find-action-name-json.png)
+ ![Screenshot showing the action name in JSON.](./media/logic-apps-add-run-inline-code/find-action-name-json.png)
1. To return to designer view, on the code view toolbar, select **Designer**.
-1. To add the first action, in the **Actions Item - 1** box, enter the action's JSON name.
+1. Now add the JSON name to the Inline Code action.
+
+### [Standard](#tab/standard)
+
+1. On the workflow menu, select **Code**. In the `actions` object, find the action's name.
+
+ For example, `Send_approval_email` is the JSON name for the **Send approval email** action.
+
+ ![Screenshot showing the action name in JSON.](./media/logic-apps-add-run-inline-code/find-action-name-json.png)
+
+1. To return to designer view, on the workflow menu, select **Designer**.
+
+1. Now add the JSON name to the Inline Code action.
+++
+### Add the trigger or action name to the Inline Code action
+
+1. In the Inline Code action, open the **Add new parameter** list.
+
+1. From the parameters list, select the following parameters as your scenario requires.
+
+ | Parameter | Description |
+ |--|-|
+ | **Actions** | Include outputs from preceding actions as dependencies. When you select this parameter, you're prompted for the actions that you want to add. |
+ | **Trigger** | Include outputs from the trigger as dependencies. When you select this parameter, you're prompted whether to include trigger results. So, from the **Trigger** list, select **Yes**. |
+ |||
+
+1. For this example, select the **Actions** parameter.
+
+ ![Screenshot showing the Inline Code action and "Actions" parameter selected.](./media/logic-apps-add-run-inline-code/add-actions-parameter.png)
+
+1. In the **Actions Item - 1** box, enter the action's JSON name.
+
+ ![Screenshot showing the "Actions Item -1" box and the action's JSON name.](./media/logic-apps-add-run-inline-code/add-action-json-name.png)
- ![Enter first action](./media/logic-apps-add-run-inline-code/add-action-parameter.png)
+1. To add another action name, select **Add new item**.
-1. To add another action, select **Add new item**.
+1. When you're done, save your workflow.
-## Reference
+## Action reference
-For more information about the **Execute JavaScript Code** action's structure and syntax in your logic app's underlying workflow definition using the Workflow Definition Language, see this action's [reference section](../logic-apps/logic-apps-workflow-actions-triggers.md#run-javascript-code).
+For more information about the **Execute JavaScript Code** action's structure and syntax in your underlying workflow definition using the Workflow Definition Language, see this action's [reference section](logic-apps-workflow-actions-triggers.md#run-javascript-code).
## Next steps
logic-apps Logic Apps Diagnosing Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-diagnosing-failures.md
ms.suite: integration Previously updated : 01/31/2020 Last updated : 05/24/2022 # Troubleshoot and diagnose workflow failures in Azure Logic Apps
-Your logic app generates information that can help you diagnose and debug problems in your app. You can diagnose a logic app by reviewing each step in the workflow through the Azure portal. Or, you can add some steps to a workflow for runtime debugging.
+Your logic app workflow generates information that can help you diagnose and debug problems in your app. You can diagnose your workflow by reviewing the inputs, outputs, and other information for each step in the workflow using the Azure portal. Or, you can add some steps to a workflow for runtime debugging.
<a name="check-trigger-history"></a> ## Check trigger history
-Each logic app run starts with a trigger attempt, so if the trigger doesn't fire, follow these steps:
+Each workflow run starts with a trigger, which either fires on a schedule or waits for an incoming request or event. The trigger history lists all the trigger attempts that your workflow made and information about the inputs and outputs for each trigger attempt. If the trigger doesn't fire, try the following steps.
-1. Check the trigger's status by [checking the trigger history](../logic-apps/monitor-logic-apps.md#review-trigger-history). To view more information about the trigger attempt, select that trigger event, for example:
+### [Consumption](#tab/consumption)
- ![View trigger status and history](./media/logic-apps-diagnosing-failures/logic-app-trigger-history.png)
+1. To check the trigger's status in your Consumption logic app, [review the trigger history](monitor-logic-apps.md#review-trigger-history). To view more information about the trigger attempt, select that trigger event, for example:
-1. Check the trigger's inputs to confirm that they appear as you expect. Under **Inputs link**, select the link, which shows the **Inputs** pane.
+ ![Screenshot showing Azure portal with Consumption logic app workflow trigger history.](./media/logic-apps-diagnosing-failures/logic-app-trigger-history-consumption.png)
+
+1. Check the trigger's inputs to confirm that they appear as you expect. On the **History** pane, under **Inputs link**, select the link, which shows the **Inputs** pane.
+
+ Trigger inputs include the data that the trigger expects and requires to start the workflow. Reviewing these inputs can help you determine whether the trigger inputs are correct and whether the condition was met so that the workflow can continue.
+
+ ![Screenshot showing Consumption logic app workflow trigger inputs.](./media/logic-apps-diagnosing-failures/review-trigger-inputs-consumption.png)
+
+1. Check the trigger's outputs, if any, to confirm that they appear as you expect. On the **History** pane, under **Outputs link**, select the link, which shows the **Outputs** pane.
+
+ Trigger outputs include the data that the trigger passes to the next step in your workflow. Reviewing these outputs can help you determine whether the correct or expected values were passed on to the next step in your workflow.
+
+ For example, an error message states that the RSS feed wasn't found:
+
+ ![Screenshot showing Consumption logic app workflow trigger outputs.](./media/logic-apps-diagnosing-failures/review-trigger-outputs-consumption.png)
+
+ > [!TIP]
+ >
+ > If you find any content that you don't recognize, learn more about
+ > [different content types](../logic-apps/logic-apps-content-type.md) in Azure Logic Apps.
+
+### [Standard](#tab/standard)
+
+1. To check the trigger's status in your Standard logic app, [review the trigger history](monitor-logic-apps.md#review-trigger-history). To view more information about the trigger attempt, select that trigger event, for example:
+
+ ![Screenshot showing Azure portal with Standard logic app workflow trigger history.](./media/logic-apps-diagnosing-failures/logic-app-trigger-history-standard.png)
+
+1. Check the trigger's inputs to confirm that they appear as you expect. On the **History** pane, under **Inputs link**, select the link, which shows the **Inputs** pane.
Trigger inputs include the data that the trigger expects and requires to start the workflow. Reviewing these inputs can help you determine whether the trigger inputs are correct and whether the condition was met so that the workflow can continue.
- For example, the `feedUrl` property here has an incorrect RSS feed value:
+ ![Screenshot showing Standard logic app workflow trigger inputs.](./media/logic-apps-diagnosing-failures/review-trigger-inputs-standard.png)
- ![Review trigger inputs for errors](./media/logic-apps-diagnosing-failures/review-trigger-inputs-for-errors.png)
+1. Check the trigger's outputs, if any, to confirm that they appear as you expect. On the **History** pane, under **Outputs link**, select the link, which shows the **Outputs** pane.
-1. Check the triggers outputs, if any, to confirm that they appear as you expect. Under **Outputs link**, select the link, which shows the **Outputs** pane.
+ Trigger outputs include the data that the trigger passes to the next step in your workflow. Reviewing these outputs can help you determine whether the correct or expected values were passed on to the next step in your workflow.
- Trigger outputs include the data that the trigger passes to the next step in your workflow. Reviewing these outputs can help you determine whether the correct or expected values passed on to the next step in your workflow, for example:
+ For example, an error message states that the RSS feed wasn't found:
- ![Review trigger outputs for errors](./media/logic-apps-diagnosing-failures/review-trigger-outputs-for-errors.png)
+ ![Screenshot showing Standard logic app workflow trigger outputs.](./media/logic-apps-diagnosing-failures/review-trigger-outputs-standard.png)
> [!TIP]
+ >
> If you find any content that you don't recognize, learn more about > [different content types](../logic-apps/logic-apps-content-type.md) in Azure Logic Apps. ++ <a name="check-runs-history"></a>
-## Check runs history
+## Check workflow run history
+
+Each time that the trigger fires, Azure Logic Apps creates a workflow instance and runs that instance. If a run fails, try the following steps so you can review what happened during that run. You can review the status, inputs, and outputs for each step in the workflow.
+
+### [Consumption](#tab/consumption)
+
+1. To check the workflow's run status in your Consumption logic app, [review the runs history](monitor-logic-apps.md#review-runs-history). To view more information about a failed run, including all the steps in that run and their status, select the failed run.
+
+ ![Screenshot showing Azure portal with Consumption logic app workflow runs and a failed run selected.](./media/logic-apps-diagnosing-failures/logic-app-runs-history-consumption.png)
+
+1. After all the steps in the run appear, select each step to expand its shape.
+
+ ![Screenshot showing Consumption logic app workflow with failed step selected.](./media/logic-apps-diagnosing-failures/logic-app-run-pane-consumption.png)
-Each time that the trigger fires for an item or event, the Logic Apps engine creates and runs a separate workflow instance for each item or event. If a run fails, follow these steps to review what happened during that run, including the status for each step in the workflow plus the inputs and outputs for each step.
+1. Review the inputs, outputs, and any error messages for the failed step.
-1. Check the workflow's run status by [checking the runs history](../logic-apps/monitor-logic-apps.md#review-runs-history). To view more information about a failed run, including all the steps in that run in their status, select the failed run.
+ ![Screenshot showing Consumption logic app workflow with failed step details.](./media/logic-apps-diagnosing-failures/failed-action-inputs-consumption.png)
- ![View run history and select failed run](./media/logic-apps-diagnosing-failures/logic-app-runs-history.png)
+ For example, the following screenshot shows the outputs from the failed RSS action.
-1. After all the steps in the run appear, expand the first failed step.
+ ![Screenshot showing Consumption logic app workflow with failed step outputs.](./media/logic-apps-diagnosing-failures/failed-action-outputs-consumption.png)
- ![Expand first failed step](./media/logic-apps-diagnosing-failures/logic-app-run-pane.png)
+### [Standard](#tab/standard)
-1. Check the failed step's inputs to confirm whether they appear as you expect.
+1. To check the workflow's run status in your Standard logic app, [review the runs history](monitor-logic-apps.md#review-runs-history). To view more information about a failed run, including all the steps in that run and their status, select the failed run.
-1. Review the details for each step in a specific run. Under **Runs history**, select the run that you want to examine.
+ ![Screenshot showing Azure portal with Standard logic app workflow runs and a failed run selected.](./media/logic-apps-diagnosing-failures/logic-app-runs-history-standard.png)
- ![Review runs history](./media/logic-apps-diagnosing-failures/logic-app-runs-history.png)
+1. After all the steps in the run appear, select each step to review its details.
- ![View details for a logic app run](./media/logic-apps-diagnosing-failures/logic-app-run-details.png)
+ ![Screenshot showing Standard logic app workflow with failed step selected.](./media/logic-apps-diagnosing-failures/logic-app-run-pane-standard.png)
-1. To examine the inputs, outputs, and any error messages for a specific step, choose that step so that the shape expands and shows the details. For example:
+1. Review the inputs, outputs, and any error messages for the failed step.
- ![View step details](./media/logic-apps-diagnosing-failures/logic-app-run-details-expanded.png)
+ ![Screenshot showing Standard logic app workflow with failed step inputs.](./media/logic-apps-diagnosing-failures/failed-action-inputs-standard.png)
+
+ For example, the following screenshot shows the outputs from the failed RSS action.
+
+ ![Screenshot showing Standard logic app workflow with failed step outputs.](./media/logic-apps-diagnosing-failures/failed-action-outputs-standard.png)
++ ## Perform runtime debugging
-To help with debugging, you can add diagnostic steps to a logic app workflow, along with reviewing the trigger and runs history. For example, you can add steps that use the [Webhook Tester](https://webhook.site/) service so that you can inspect HTTP requests and determine their exact size, shape, and format.
+To help with debugging, you can add diagnostic steps to a logic app workflow, along with reviewing the trigger and runs history. For example, you can add steps that use the [Webhook Tester](https://webhook.site/) service, so you can inspect HTTP requests and determine their exact size, shape, and format.
-1. Go to the [Webhook Tester](https://webhook.site/) site and copy the generated unique URL.
+1. In a browser, go to the [Webhook Tester](https://webhook.site/) site, and copy the generated unique URL.
-1. In your logic app, add an HTTP POST action plus the body content that you want to test, for example, an expression or another step output.
+1. In your logic app, add an HTTP POST action with the body content that you want to test, for example, an expression or another step output.
1. Paste your URL from Webhook Tester into the HTTP POST action.
-1. To review how a request is formed when generated from the Logic Apps engine, run the logic app, and revisit the Webhook Tester site for more details.
+1. To review how Azure Logic Apps generates and forms a request, run the logic app workflow. You can then revisit the Webhook Tester site for more information.
+
+## Common problems - Standard logic apps
+
+### Inaccessible artifacts in Azure storage account
+
+Standard logic apps store all artifacts in an Azure storage account. You might get the following errors if these artifacts aren't accessible. For example, the storage account itself might not be accessible, or the storage account is behind a firewall but no private endpoint is set up for the storage services to use.
+
+| Azure portal location | Error |
+|--|-|
+| Overview pane | - **System.private.corelib:Access to the path 'C:\\home\\site\\wwwroot\\host.json' is denied** <br><br>- **Azure.Storage.Blobs: This request is not authorized to perform this operation** |
+| Workflows pane | - **Cannot reach host runtime. Error details, Code: 'BadRequest', Message: 'Encountered an error (InternalServerError) from host runtime.'** <br><br>- **Cannot reach host runtime. Error details, Code: 'BadRequest', Message: 'Encountered an error (ServiceUnavailable) from host runtime.'** <br><br>- **Cannot reach host runtime. Error details, Code: 'BadRequest', Message: 'Encountered an error (BadGateway) from host runtime.'** |
+| During workflow creation and execution | - **Failed to save workflow** <br><br>- **Error in the designer: GetCallFailed. Failed fetching operations** <br><br>- **ajaxExtended call failed** |
+|||
+
+### Troubleshooting options
+
+The following list includes possible causes for these errors and steps to help troubleshoot.
+
+* For a public storage account, check access to the storage account in the following ways:
+
+ * Check the storage account's connectivity using [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+
+ * In your logic app resource's configuration, confirm the storage account's connection string in the **AzureWebJobsStorage** and **WEBSITE_CONTENTAZUREFILECONNECTIONSTRING** app settings. For more information, review [Host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md#manage-app-settings).
+
+ If connectivity fails, check whether the Shared Access Signature (SAS) key in the connection string is the most recent.
+
+* For a storage account that's behind a firewall, check access to the storage account in the following ways:
+
+ * If firewall restrictions are enabled on the storage account, check whether [private endpoints](../private-link/private-endpoint-overview.md) are set up for Blob, File, Table, and Queue storage services.
+
+ * Check the storage account's connectivity using [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+
+ If you find connectivity problems, continue with the following steps:
+
+ 1. In the same virtual network that's integrated with your logic app, create an Azure virtual machine, which you can put in a different subnet.
+
+ 1. From a command prompt, run **nslookup** to check that the Blob, File, Table, and Queue storage services resolve to the expected IP addresses.
+
+ Syntax: `nslookup [StorageaccountHostName] [OptionalDNSServer]`
+
+ Blob: `nslookup {StorageaccountName}.blob.core.windows.net`
+
+ File: `nslookup {StorageaccountName}.file.core.windows.net`
+
+ Table: `nslookup {StorageaccountName}.table.core.windows.net`
+
+ Queue: `nslookup {StorageaccountName}.queue.core.windows.net`
+
+ * If the storage service has a [Service Endpoint](../virtual-network/virtual-network-service-endpoints-overview.md), the service resolves to a public IP address.
+
+ * If the storage service has a [private endpoint](../private-link/private-endpoint-overview.md), the service resolves to the respective network interface controller (NIC) private IP addresses.
+
+ 1. If the previous domain name server (DNS) queries resolve successfully, run the **psping** or **tcpping** commands to check connectivity to the storage account over port 443:
+
+ Syntax: `psping [StorageaccountHostName] [Port] [OptionalDNSServer]`
+
+ Blob: `psping {StorageaccountName}.blob.core.windows.net:443`
+
+ File: `psping {StorageaccountName}.file.core.windows.net:443`
+
+ Table: `psping {StorageaccountName}.table.core.windows.net:443`
+
+ Queue: `psping {StorageaccountName}.queue.core.windows.net:443`
+
+ 1. If each storage service is resolvable from your Azure virtual machine, find the DNS that's used by the virtual machine for resolution.
+
+ 1. Set your logic app's **WEBSITE_DNS_SERVER** app setting to that DNS server, and confirm that name resolution works successfully. For an example of the relevant app settings, see the sketch after this list.
+
+ 1. Confirm that VNet integration is set up correctly with the appropriate virtual network and subnet in your Standard logic app.
+
+ 1. If you use [private Azure DNS zones](../dns/private-dns-privatednszone.md) for your storage account's private endpoint services, check that a [virtual network link](../dns/private-dns-virtual-network-links.md) has been created to your logic app's integrated virtual network.
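
For reference, the storage and DNS app settings mentioned in the previous checks might look like the following minimal sketch. All values are placeholders; use your own storage account connection string and the DNS server IP address that your virtual machine resolved against.

```json
{
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<storage-account-name>;AccountKey=<storage-account-key>;EndpointSuffix=core.windows.net",
    "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING": "DefaultEndpointsProtocol=https;AccountName=<storage-account-name>;AccountKey=<storage-account-key>;EndpointSuffix=core.windows.net",
    "WEBSITE_DNS_SERVER": "<DNS-server-IP-address>"
}
```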
+
+For more information, review [Deploy Standard logic app to a storage account behind a firewall using service or private endpoints](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/deploying-standard-logic-app-to-storage-account-behind-firewall/ba-p/2626286).
## Next steps
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
This section lists the outbound IP addresses that Azure Logic Apps requires in y
| West Europe | 40.68.222.65, 40.68.209.23, 13.95.147.65, 23.97.218.130, 51.144.182.201, 23.97.211.179, 104.45.9.52, 23.97.210.126, 13.69.71.160, 13.69.71.161, 13.69.71.162, 13.69.71.163, 13.69.71.164, 13.69.71.165, 13.69.71.166, 13.69.71.167, 20.103.21.81, 20.103.17.247, 20.103.17.223, 20.103.16.47, 20.103.58.116, 20.103.57.29, 20.101.174.49, 20.101.174.23, 20.93.236.26, 20.93.235.107, 20.103.94.250, 20.76.174.72, 20.82.87.192, 20.82.87.16, 20.76.170.145, 20.103.91.39, 20.103.84.41, 20.76.161.156 | | West India | 104.211.164.80, 104.211.162.205, 104.211.164.136, 104.211.158.127, 104.211.156.153, 104.211.158.123, 104.211.154.59, 104.211.154.7 | | West US | 52.160.92.112, 40.118.244.241, 40.118.241.243, 157.56.162.53, 157.56.167.147, 104.42.49.145, 40.83.164.80, 104.42.38.32, 13.86.223.0, 13.86.223.1, 13.86.223.2, 13.86.223.3, 13.86.223.4, 13.86.223.5, 104.40.34.169, 104.40.32.148, 52.160.70.221, 52.160.70.105, 13.91.81.221, 13.64.231.196, 13.87.204.182, 40.78.65.193, 13.87.207.39, 104.42.44.28, 40.83.134.97, 40.78.65.112, 168.62.9.74, 168.62.28.191 |
-| West US 2 | 13.66.210.167, 52.183.30.169, 52.183.29.132, 13.66.210.167, 13.66.201.169, 13.77.149.159, 52.175.198.132, 13.66.246.219, 20.99.189.158, 20.99.189.70, 20.72.244.58, 20.72.243.225 |
+| West US 2 | 13.66.210.167, 52.183.30.169, 52.183.29.132, 13.66.201.169, 13.77.149.159, 52.175.198.132, 13.66.246.219, 20.99.189.158, 20.99.189.70, 20.72.244.58, 20.72.243.225 |
| West US 3 | 20.150.181.32, 20.150.181.33, 20.150.181.34, 20.150.181.35, 20.150.181.36, 20.150.181.37, 20.150.181.38, 20.150.173.192, 20.106.85.228, 20.150.159.163, 20.106.116.207, 20.106.116.186 | |||
logic-apps Logic Apps Workflow Actions Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-workflow-actions-triggers.md
This action definition merges `abcdefg ` with a trailing space and the value `12
}, ```
-Here is the output that this action creates:
+Here's the output that this action creates:
`abcdefg 1234`
This action definition merges a string variable that contains `abcdefg` and an i
}, ```
-Here is the output that this action creates:
+Here's the output that this action creates:
`"abcdefg1234"`
Here is the output that this action creates:
### Execute JavaScript Code action
-This action runs a JavaScript code snippet and returns the results through a `Result` token that later actions can reference.
+This action runs a JavaScript code snippet and returns the results through a token that subsequent actions in the workflow can reference.
```json "Execute_JavaScript_Code": {
This action runs a JavaScript code snippet and returns the results through a `Re
"inputs": { "code": "<JavaScript-code-snippet>", "explicitDependencies": {
- "actions": [ <previous-actions> ],
+ "actions": [ <preceding-actions> ],
"includeTrigger": true } },
This action runs a JavaScript code snippet and returns the results through a `Re
| Value | Type | Description | |-||-|
-| <*JavaScript-code-snippet*> | Varies | The JavaScript code that you want to run. For code requirements and more information, see [Add and run code snippets with inline code](../logic-apps/logic-apps-add-run-inline-code.md). <p>In the `code` attribute, your code snippet can use the read-only `workflowContext` object as input. This object has subproperties that give your code access to the results from the trigger and previous actions in your workflow. For more information about the `workflowContext` object, see [Reference trigger and action results in your code](../logic-apps/logic-apps-add-run-inline-code.md#workflowcontext). |
+| <*JavaScript-code-snippet*> | Varies | The JavaScript code that you want to run. For code requirements and more information, see [Run code snippets in workflows](logic-apps-add-run-inline-code.md). <p>In the `code` attribute, your code snippet can use the read-only `workflowContext` object as input. This object has subproperties that give your code access to the outputs from the trigger and any preceding actions in your workflow. For more information about the `workflowContext` object, see [Reference trigger and action results using the workflowContext object](logic-apps-add-run-inline-code.md#workflowcontext). |
|||| *Required in some cases*
-The `explicitDependencies` attribute specifies that you want to explicitly
-include results from the trigger, previous actions, or both as dependencies
-for your code snippet. For more information about adding these dependencies, see
-[Add parameters for inline code](../logic-apps/logic-apps-add-run-inline-code.md#add-parameters).
+The `explicitDependencies` attribute specifies that you want to explicitly include results from the trigger, previous actions, or both as dependencies for your code snippet. For more information about adding these dependencies, see [Add dependencies as parameters to an Inline Code action](logic-apps-add-run-inline-code.md#add-parameters).
For the `includeTrigger` attribute, you can specify `true` or `false` values. | Value | Type | Description | |-||-|
-| <*previous-actions*> | String array | An array with your specified action names. Use the action names that appear in your workflow definition where action names use underscores (_), not spaces (" "). |
+| <*preceding-actions*> | String array | An array that contains the names of the preceding actions to include as dependencies. Use the action names as they appear in your workflow definition, where action names use underscores (**_**), not spaces (**" "**). |
|||| *Example 1*
-This action runs code that gets your logic app's name and returns the text "Hello world from \<logic-app-name>" as the result. In this example, the code references the workflow's name by accessing the `workflowContext.workflow.name` property through the read-only `workflowContext` object. For more information about using the `workflowContext` object, see [Reference trigger and action results in your code](../logic-apps/logic-apps-add-run-inline-code.md#workflowcontext).
+This action runs code that gets your logic app workflow's name and returns the text "Hello world from \<logic-app-name>" as the result. In this example, the code references the workflow's name by accessing the `workflowContext.workflow.name` property through the read-only `workflowContext` object. For more information about using the `workflowContext` object, see [Reference trigger and action results in your code](../logic-apps/logic-apps-add-run-inline-code.md#workflowcontext).
```json "Execute_JavaScript_Code": {
This action runs code that gets your logic app's name and returns the text "Hell
*Example 2*
-This action runs code in a logic app that triggers when a new email arrives in a work or school account. The logic app also uses a send approval email action that forwards the content from the received email along with a request for approval.
+This action runs code in a logic app workflow that triggers when a new email arrives in an Outlook account. The workflow also uses the Office 365 Outlook **Send approval email** action that forwards the content from the received email along with a request for approval.
-The code extracts the email addresses from the trigger's `Body` property and returns the addresses along with the `SelectedOption` property value from the approval action. The action explicitly includes the send approval email action as a dependency in the `explicitDependencies` > `actions` attribute.
+The code extracts the email addresses from the email message's `Body` property, and returns the addresses along with the `SelectedOption` property value from the approval action. The action explicitly includes the **Send approval email** action as a dependency in the `actions` object inside the `explicitDependencies` object.
```json "Execute_JavaScript_Code": { "type": "JavaScriptCode", "inputs": {
- "code": "var re = /(([^<>()\\[\\]\\\\.,;:\\s@\"]+(\\.[^<>()\\[\\]\\\\.,;:\\s@\"]+)*)|(\".+\"))@((\\[[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}])|(([a-zA-Z\\-0-9]+\\.)+[a-zA-Z]{2,}))/g;\r\n\r\nvar email = workflowContext.trigger.outputs.body.Body;\r\n\r\nvar reply = workflowContext.actions.Send_approval_email_.outputs.body.SelectedOption;\r\n\r\nreturn email.match(re) + \" - \" + reply;\r\n;",
+ "code": "var myResult = /(([^<>()\\[\\]\\\\.,;:\\s@\"]+(\\.[^<>()\\[\\]\\\\.,;:\\s@\"]+)*)|(\".+\"))@((\\[[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}])|(([a-zA-Z\\-0-9]+\\.)+[a-zA-Z]{2,}))/g;\r\n\r\nvar email = workflowContext.trigger.outputs.body.Body;\r\n\r\nvar reply = workflowContext.actions.Send_approval_email.outputs.body.SelectedOption;\r\n\r\nreturn email.match(myResult) + \" - \" + reply;\r\n;",
"explicitDependencies": { "actions": [
- "Send_approval_email_"
+ "Send_approval_email"
] } },
The code extracts the email addresses from the trigger's `Body` property and ret
} ``` -- <a name="function-action"></a> ### Function action
This action calls a previously created [Azure function](../azure-functions/funct
| Value | Type | Description | |-||-|
-| <*Azure-function-ID*> | String | The resource ID for the Azure function you want to call. Here is the format for this value:<p>"/subscriptions/<*Azure-subscription-ID*>/resourceGroups/<*Azure-resource-group*>/providers/Microsoft.Web/sites/<*Azure-function-app-name*>/functions/<*Azure-function-name*>" |
+| <*Azure-function-ID*> | String | The resource ID for the Azure function you want to call. Here's the format for this value:<p>"/subscriptions/<*Azure-subscription-ID*>/resourceGroups/<*Azure-resource-group*>/providers/Microsoft.Web/sites/<*Azure-function-app-name*>/functions/<*Azure-function-name*>" |
| <*method-type*> | String | The HTTP method to use for calling the function: "GET", "PUT", "POST", "PATCH", or "DELETE" <p>If not specified, the default is the "POST" method. | ||||
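
For orientation, a Function action definition that uses these values might look like the following minimal sketch. The action name and body are placeholders; the resource ID follows the format described in the preceding table.

```json
"Call_Azure_function": {
    "type": "Function",
    "inputs": {
        "function": {
            "id": "/subscriptions/<Azure-subscription-ID>/resourceGroups/<Azure-resource-group>/providers/Microsoft.Web/sites/<Azure-function-app-name>/functions/<Azure-function-name>"
        },
        "method": "POST",
        "body": "@triggerBody()"
    },
    "runAfter": {}
}
```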
This action definition creates a JSON object array from an integer array. The ac
}, ```
-Here is the array that this action creates:
+Here's the array that this action creates:
`[ { "number": 1 }, { "number": 2 }, { "number": 3 } ]`
This action definition creates a CSV table from the "myItemArray" variable. The
} ```
-Here is the CSV table that this action creates:
+Here's the CSV table that this action creates:
``` ID,Product_Name
This action definition creates an HTML table from the "myItemArray" variable. Th
} ```
-Here is the HTML table that this action creates:
+Here's the HTML table that this action creates:
<table><thead><tr><th>ID</th><th>Product_Name</th></tr></thead><tbody><tr><td>0</td><td>Apples</td></tr><tr><td>1</td><td>Oranges</td></tr></tbody></table>
This action definition creates an HTML table from the "myItemArray" variable. Ho
}, ```
-Here is the HTML table that this action creates:
+Here's the HTML table that this action creates:
<table><thead><tr><th>Stock_ID</th><th>Description</th></tr></thead><tbody><tr><td>0</td><td>Organic Apples</td></tr><tr><td>1</td><td>Organic Oranges</td></tr></tbody></table>
Here are some considerations to review before you enable concurrency on a trigge
* To work around this possibility, add a timeout to any action that might hold up these runs. If you're working in the code editor, see [Change asynchronous duration](#asynchronous-limits) and the example sketch after these steps. Otherwise, if you're using the designer, follow these steps:
- 1. In your logic app, on the action where you want to add a timeout, in the upper-right corner, select the ellipses (**...**) button, and then select **Settings**.
+ 1. In your logic app workflow, select the action where you want to add a timeout. In the action's upper-right corner, select the ellipses (**...**) button, and then select **Settings**.
![Open action settings](./media/logic-apps-workflow-actions-triggers/action-settings.png)
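
If you're working in the code view instead, the following minimal sketch shows how an action-level timeout might look on an HTTP action by using the `limit` object with an ISO 8601 duration. The action name, URI, and duration are placeholders.

```json
"HTTP_long_running_call": {
    "type": "Http",
    "inputs": {
        "method": "GET",
        "uri": "https://<your-endpoint>/long-running-operation"
    },
    "limit": {
        "timeout": "PT10M"
    },
    "runAfter": {}
}
```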
To change the default limit, you can use either the code view editor or Logic Ap
In the underlying "for each" definition, add or update the `runtimeConfiguration.concurrency.repetitions` property, which can have a value that ranges from `1` to `50`.
-Here is an example that limits concurrent runs to 10 iterations:
+Here's an example that limits concurrent runs to 10 iterations:
```json "For_each" {
logic-apps Monitor Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps.md
Title: Monitor status, view history, and set up alerts
description: Troubleshoot logic apps by checking run status, reviewing trigger history, and enabling alerts in Azure Logic Apps. ms.suite: integration-+ Previously updated : 05/04/2020 Last updated : 05/24/2022 # Monitor run status, review trigger history, and set up alerts for Azure Logic Apps
Last updated 05/04/2020
> review the following sections in [Create an integration workflow with single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md): > [Review run history](create-single-tenant-workflows-azure-portal.md#review-run-history), [Review trigger history](create-single-tenant-workflows-azure-portal.md#review-trigger-history), and [Enable or open Application Insights after deployment](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights).
-After you create and run a [Consumption logic app workflow](quickstart-create-first-logic-app-workflow.md), you can check that workflow's run status, [runs history](#review-runs-history), [trigger history](#review-trigger-history), and performance. To get notifications about failures or other possible problems, set up [alerts](#add-azure-alerts). For example, you can create an alert that detects "when more than five runs fail in an hour."
+After you create and run a [Consumption logic app workflow](quickstart-create-first-logic-app-workflow.md), you can check that workflow's run status, [trigger history](#review-trigger-history), [runs history](#review-runs-history), and performance. To get notifications about failures or other possible problems, set up [alerts](#add-azure-alerts). For example, you can create an alert that detects "when more than five runs fail in an hour."
-For real-time event monitoring and richer debugging, set up diagnostics logging for your logic app by using [Azure Monitor logs](../azure-monitor/overview.md). This Azure service helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. You can then find and view events, such as trigger events, run events, and action events. By storing this information in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and analyze this information. You can also use this diagnostic data with other Azure services, such as Azure Storage and Azure Event Hubs. For more information, see [Monitor logic apps by using Azure Monitor](../logic-apps/monitor-logic-apps-log-analytics.md).
+For real-time event monitoring and richer debugging, set up diagnostics logging for your logic app by using [Azure Monitor logs](../azure-monitor/overview.md). This Azure service helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. You can then find and view events, such as trigger events, run events, and action events. By storing this information in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and analyze this information. You can also use this diagnostic data with other Azure services, such as Azure Storage and Azure Event Hubs. For more information, see [Monitor logic apps by using Azure Monitor](monitor-logic-apps-log-analytics.md).
> [!NOTE]
-> If your logic apps run in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md)
-> that was created to use an [internal access endpoint](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access),
-> you can view and access inputs and outputs from logic app's runs history *only from inside your virtual network*. Make sure that you have network
+> If your logic apps run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md)
+> that was created to use an [internal access endpoint](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access),
+> you can view and access inputs and outputs from a workflow's runs history *only from inside your virtual network*. Make sure that you have network
> connectivity between the private endpoints and the computer from where you want to access runs history. For example, your client computer can exist > inside the ISE's virtual network or inside a virtual network that's connected to the ISE's virtual network, for example, through peering or a virtual
-> private network. For more information, see [ISE endpoint access](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access).
+> private network. For more information, see [ISE endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access).
-<a name="review-runs-history"></a>
+<a name="review-trigger-history"></a>
+
+## Review trigger history
-## Review runs history
+Each workflow run starts with a trigger, which either fires on a schedule or waits for an incoming request or event. The trigger history lists all the trigger attempts that your logic app made and information about the inputs and outputs for each trigger attempt.
-Each time that the trigger fires for an item or event, the Logic Apps engine creates and runs a separate workflow instance for each item or event. By default, each workflow instance runs in parallel so that no workflow has to wait before starting a run. You can review what happened during that run, including the status for each step in the workflow plus the inputs and outputs for each step.
+### [Consumption](#tab/consumption)
1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer.
- To find your logic app, in the main Azure search box, enter `logic apps`, and then select **Logic apps**.
+ To find your logic app, in the portal search box, enter **logic apps**, and then select **Logic apps**.
- ![Find and select "Logic Apps" service](./media/monitor-logic-apps/find-your-logic-app.png)
+ ![Screenshot showing the Azure portal main search box with "logic apps" entered and "Logic apps" selected.](./media/monitor-logic-apps/find-your-logic-app.png)
- The Azure portal shows all the logic apps that are associated with your Azure subscriptions. You can filter this list based on name, subscription, resource group, location, and so on.
+ The Azure portal shows all the logic apps in your Azure subscription. You can filter this list based on name, subscription, resource group, location, and so on.
+
+ ![Screenshot showing the Azure portal with all logic apps associated with selected Azure subscriptions.](./media/monitor-logic-apps/logic-apps-list-in-subscription.png)
+
+1. Select your logic app. On your logic app's menu, select **Overview**. On the Overview pane, select **Trigger history**.
- ![View logic apps associated with subscriptions](./media/monitor-logic-apps/logic-apps-list-in-subscription.png)
+ ![Screenshot showing "Overview" pane for a Consumption logic app workflow with "Trigger history" selected.](./media/monitor-logic-apps/overview-logic-app-trigger-history-consumption.png)
-1. Select your logic app, and then select **Overview**.
+ Under **Trigger history**, all trigger attempts appear. Each time the trigger successfully fires, Azure Logic Apps creates an individual workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. If your workflow triggers for multiple events or items at the same time, a trigger entry appears for each item with the same date and time.
- On the overview pane, under **Runs history**, all the past, current, and any waiting runs for your logic app appear. If the list shows many runs, and you can't find the entry that you want, try filtering the list.
+ ![Screenshot showing "Overview" pane for a Consumption logic app workflow with multiple trigger attempts for different items.](./media/monitor-logic-apps/logic-app-triggers-history-consumption.png)
+
+ The following table lists the possible trigger statuses:
+
+ | Trigger status | Description |
+ |-|-|
+ | **Failed** | An error occurred. To review any generated error messages for a failed trigger, select that trigger attempt and choose **Outputs**. For example, you might find inputs that aren't valid. |
+ | **Skipped** | The trigger checked the endpoint but found no data that met the specified criteria. |
+ | **Succeeded** | The trigger checked the endpoint and found available data. Usually, a **Fired** status also appears alongside this status. If not, the trigger definition might have a condition or `SplitOn` command that wasn't met. <br><br>This status can apply to a manual trigger, recurrence-based trigger, or polling trigger. A trigger can run successfully, but the run itself might still fail when the actions generate unhandled errors. |
+ |||
> [!TIP]
- > If the run status doesn't appear, try refreshing the overview page by selecting **Refresh**.
- > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
+ >
+ > You can recheck the trigger without waiting for the next recurrence. On the
+ > **Overview** pane toolbar or on the designer toolbar, select **Run Trigger** > **Run**.
+
+1. To view information about a specific trigger attempt, select that trigger event.
+
+ ![Screenshot showing the Consumption workflow trigger entry selected.](./media/monitor-logic-apps/select-trigger-event-for-review.png)
+
+ If the list shows many trigger attempts, and you can't find the entry that you want, try filtering the list. If you don't find the data that you expect, try selecting **Refresh** on the toolbar.
+
+ You can now review information about the selected trigger event, for example:
+
+ ![Screenshot showing the selected Consumption workflow trigger history information.](./media/monitor-logic-apps/view-specific-trigger-details.png)
+
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer.
+
+ To find your logic app, in the portal search box, enter **logic apps**, and then select **Logic apps**.
- ![Overview, runs history, and other logic app information](./media/monitor-logic-apps/overview-pane-logic-app-details-run-history.png)
+ ![Screenshot showing the Azure portal search box with "logic apps" entered and "Logic apps" selected.](./media/monitor-logic-apps/find-your-logic-app.png)
- Here are the possible run statuses:
+ The Azure portal shows all the logic apps in your Azure subscription. You can filter this list based on name, subscription, resource group, location, and so on.
+
+ ![Screenshot showing Azure portal with all logic apps associated with selected Azure subscriptions.](./media/monitor-logic-apps/logic-apps-list-in-subscription.png)
+
+1. Select your logic app. On your logic app's menu, select **Overview**. On the Overview pane, select **Trigger history**.
+
+ ![Screenshot showing Overview pane with "Trigger history" selected.](./media/monitor-logic-apps/overview-logic-app-trigger-history-standard.png)
+
+ Under **Trigger history**, all trigger attempts appear. Each time the trigger successfully fires, Azure Logic Apps creates an individual workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. If your workflow triggers for multiple events or items at the same time, a trigger entry appears for each item with the same date and time.
+
+ ![Screenshot showing Overview pane with multiple trigger attempts for different items.](./media/monitor-logic-apps/logic-app-triggers-history-standard.png)
+
+ The following table lists the possible trigger statuses:
+
+ | Trigger status | Description |
+ |-|-|
+ | **Failed** | An error occurred. To review any generated error messages for a failed trigger, select that trigger attempt and choose **Outputs**. For example, you might find inputs that aren't valid. |
+ | **Skipped** | The trigger checked the endpoint but found no data that met the specified criteria. |
+ | **Succeeded** | The trigger checked the endpoint and found available data. Usually, a **Fired** status also appears alongside this status. If not, the trigger definition might have a condition or `SplitOn` command that wasn't met. <br><br>This status can apply to a manual trigger, recurrence-based trigger, or polling trigger. A trigger can run successfully, but the run itself might still fail when the actions generate unhandled errors. |
+ |||
+
+ > [!TIP]
+ >
+ > You can recheck the trigger without waiting for the next recurrence. On the
+ > **Overview** pane toolbar, select **Run Trigger** > **Run**.
+
+1. To view information about a specific trigger attempt, select that trigger event.
+
+ ![Screenshot showing a Standard workflow trigger entry selected.](./media/monitor-logic-apps/select-trigger-event-for-review-standard.png)
+
+ If the list shows many trigger attempts, and you can't find the entry that you want, try filtering the list. If you don't find the data that you expect, try selecting **Refresh** on the toolbar.
+
+1. Check the trigger's inputs to confirm that they appear as you expect. On the **History** pane, under **Inputs link**, select the link, which shows the **Inputs** pane.
+
+ ![Screenshot showing Standard logic app workflow trigger inputs.](./media/monitor-logic-apps/review-trigger-inputs-standard.png)
+
+1. Check the trigger's outputs, if any, to confirm that they appear as you expect. On the **History** pane, under **Outputs link**, select the link, which shows the **Outputs** pane.
+
+ Trigger outputs include the data that the trigger passes to the next step in your workflow. Reviewing these outputs can help you determine whether the correct or expected values passed on to the next step in your workflow.
+
+ For example, in the following screenshot, the RSS trigger generated an error message stating that the RSS feed wasn't found.
+
+ ![Screenshot showing Standard logic app workflow trigger outputs.](./media/logic-apps-diagnosing-failures/review-trigger-outputs-standard.png)
+++
+<a name="review-runs-history"></a>
+
+## Review workflow run history
+
+Each time the trigger successfully fires, Azure Logic Apps creates a workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. You can review what happened during each run, including the status, inputs, and outputs for each step in the workflow.
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer.
+
+ To find your logic app, in the main Azure search box, enter **logic apps**, and then select **Logic apps**.
+
+ ![Screenshot showing Azure portal main search box with "logic apps" entered and "Logic apps" selected.](./media/monitor-logic-apps/find-your-logic-app.png)
+
+ The Azure portal shows all the logic apps that are associated with your Azure subscriptions. You can filter this list based on name, subscription, resource group, location, and so on.
+
+ ![Screenshot showing all the logic apps in selected Azure subscriptions.](./media/monitor-logic-apps/logic-apps-list-in-subscription.png)
+
+1. Select your logic app. On your logic app's menu, select **Overview**. On the Overview pane, select **Runs history**.
+
+ Under **Runs history**, all the past, current, and any waiting runs appear. If the trigger fires for multiple events or items at the same time, an entry appears for each item with the same date and time.
+
+ ![Screenshot showing Consumption logic app workflow "Overview" pane with "Runs history" selected.](./media/monitor-logic-apps/overview-logic-app-runs-history-consumption.png)
+
+ The following table lists the possible run statuses:
| Run status | Description | ||-| | **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
- | **Cancelled** | The run was triggered and started but received a cancellation request. |
+ | **Cancelled** | The run was triggered and started, but received a cancellation request. |
| **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
- | **Running** | The run was triggered and is in progress, but this status can also appear for a run that is throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <p><p>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. |
+ | **Running** | The run was triggered and is in progress. However, this status can also appear for a run that's throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. |
| **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
- | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <p><p>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. |
+ | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <br><br>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. |
| **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. | |||
-1. To review the steps and other information for a specific run, under **Runs history**, select that run.
+1. To review the steps and other information for a specific run, under **Runs history**, select that run. If the list shows many runs, and you can't find the entry that you want, try filtering the list.
+
+ > [!TIP]
+ >
+ > If the run status doesn't appear, try refreshing the overview pane by selecting **Refresh**.
+ > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
- ![Select a specific run to review](./media/monitor-logic-apps/select-specific-logic-app-run.png)
+ ![Screenshot showing the Consumption logic app workflow run selected.](./media/monitor-logic-apps/select-specific-logic-app-run-consumption.png)
The **Logic app run** pane shows each step in the selected run, each step's run status, and the time taken for each step to run, for example:
- ![Each action in the specific run](./media/monitor-logic-apps/logic-app-run-pane.png)
+ ![Screenshot showing each action in the selected workflow run.](./media/monitor-logic-apps/logic-app-run-pane-consumption.png)
To view this information in list form, on the **Logic app run** toolbar, select **Run Details**.
- ![On the toolbar, select "Run Details"](./media/monitor-logic-apps/select-run-details-on-toolbar.png)
+ ![Screenshot showing the "Logic app run" toolbar with "Run Details" selected.](./media/monitor-logic-apps/toolbar-select-run-details.png)
- The Run Details view shows each step, their status, and other information.
+ The Run Details view lists each step, its status, and other information.
- ![Review details about each step in the run](./media/monitor-logic-apps/review-logic-app-run-details.png)
+ ![Screenshot showing the run details for each step in the workflow.](./media/monitor-logic-apps/review-logic-app-run-details.png)
For example, you can get the run's **Correlation ID** property, which you might need when you use the [REST API for Logic Apps](/rest/api/logic). 1. To get more information about a specific step, select either option:
- * In the **Logic app run** pane select the step so that the shape expands. You can now view information such as inputs, outputs, and any errors that happened in that step, for example:
+ * In the **Logic app run** pane, select the step so that the shape expands. You can now view information such as inputs, outputs, and any errors that happened in that step.
- ![In logic app run pane, view failed step](./media/monitor-logic-apps/specific-step-inputs-outputs-errors.png)
+ For example, suppose you had an action that failed, and you wanted to review which inputs might have caused that step to fail. By expanding the shape, you can view the inputs, outputs, and error for that step:
- * In the **Logic app run details** pane, select the step that you want.
+ ![Screenshot showing the "Logic app run" pane with the expanded shape for an example failed step.](./media/monitor-logic-apps/specific-step-inputs-outputs-errors.png)
- ![In run details pane, view failed step](./media/monitor-logic-apps/select-failed-step-in-failed-run.png)
+ * In the **Logic app run details** pane, select the step that you want.
- You can now view information such as inputs and outputs for that step, for example:
+ ![Screenshot showing the "Logic app run details" pane with the example failed step selected.](./media/monitor-logic-apps/select-failed-step.png)
> [!NOTE]
- > All runtime details and events are encrypted within the Logic Apps service.
- > They are decrypted only when a user requests to view that data.
- > You can [hide inputs and outputs in run history](../logic-apps/logic-apps-securing-a-logic-app.md#obfuscate)
+ >
+ > All runtime details and events are encrypted within Azure Logic Apps and
+ > are decrypted only when a user requests to view that data. You can
+ > [hide inputs and outputs in run history](logic-apps-securing-a-logic-app.md#obfuscate)
> or control user access to this information by using > [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md).
-<a name="review-trigger-history"></a>
-
-## Review trigger history
-
-Each logic app run starts with a trigger. The trigger history lists all the trigger attempts that your logic app made and information about the inputs and outputs for each trigger attempt.
+### [Standard](#tab/standard)
1. In the [Azure portal](https://portal.azure.com), find and open your logic app workflow in the designer.
- To find your logic app, in the main Azure search box, enter `logic apps`, and then select **Logic Apps**.
+ To find your logic app, in the main Azure search box, enter **logic apps**, and then select **Logic apps**.
- ![Find and select "Logic Apps" service](./media/monitor-logic-apps/find-your-logic-app.png)
+ ![Screenshot showing Azure portal search box with "logic apps" entered and "Logic apps" selected.](./media/monitor-logic-apps/find-your-logic-app.png)
The Azure portal shows all the logic apps that are associated with your Azure subscriptions. You can filter this list based on name, subscription, resource group, location, and so on.
- ![View logic apps associated with subscriptions](./media/monitor-logic-apps/logic-apps-list-in-subscription.png)
+ ![Screenshot showing all logic apps in selected Azure subscriptions.](./media/monitor-logic-apps/logic-apps-list-in-subscription.png)
-1. Select your logic app, and then select **Overview**.
+1. Select your logic app. On your logic app's menu, under **Workflows**, select **Workflows**, and then select your workflow.
-1. On your logic app's menu, select **Overview**. In the **Summary** section, under **Evaluation**, select **See trigger history**.
+ > [!NOTE]
+ >
+ > By default, stateless workflows don't store run history unless you enable this capability for debugging.
+ > For more information, review [Stateful versus stateless workflows](single-tenant-overview-compare.md#stateful-stateless).
- ![View trigger history for your logic app](./media/monitor-logic-apps/overview-pane-logic-app-details-trigger-history.png)
+1. On your workflow's menu, select **Overview**. On the Overview pane, select **Run History**.
- The trigger history pane shows all the trigger attempts that your logic app has made. Each time that the trigger fires for an item or event, the Logic Apps engine creates a separate logic app instance that runs the workflow. By default, each instance runs in parallel so that no workflow has to wait before starting a run. So if your logic app triggers on multiple items at the same time, a trigger entry with the same date and time appears for each item.
+ Under **Run History**, all the past, current, and any waiting runs appear. If the trigger fires for multiple events or items at the same time, an entry appears for each item with the same date and time.
- ![Multiple trigger attempts for different items](./media/monitor-logic-apps/logic-app-trigger-history.png)
+ ![Screenshot showing Standard logic app workflow "Overview" pane with "Run History" selected.](./media/monitor-logic-apps/overview-logic-app-runs-history-standard.png)
- Here are the possible trigger attempt statuses:
+ The following table lists the possible run statuses:
- | Trigger status | Description |
- |-|-|
- | **Failed** | An error occurred. To review any generated error messages for a failed trigger, select that trigger attempt and choose **Outputs**. For example, you might find inputs that aren't valid. |
- | **Skipped** | The trigger checked the endpoint but found no data that met the specified criteria. |
- | **Succeeded** | The trigger checked the endpoint and found available data. Usually, a **Fired** status also appears alongside this status. If not, the trigger definition might have a condition or `SplitOn` command that wasn't met. <p><p>This status can apply to a manual trigger, recurrence trigger, or polling trigger. A trigger can run successfully, but the run itself might still fail when the actions generate unhandled errors. |
+ | Run status | Description |
+ ||-|
+ | **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | The run was triggered and started, but received a cancellation request. |
+ | **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
+ | **Running** | The run was triggered and is in progress. However, this status can also appear for a run that's throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Tip**: If you set up [diagnostics logging](monitor-logic-apps-log-analytics.md), you can get information about any throttle events that happen. |
+ | **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
+ | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <br><br>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the runs history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the runs history based on whether the run's duration exceeded the retention limit. |
+ | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
|||
- > [!TIP]
- > You can recheck the trigger without waiting for the next recurrence. On the overview toolbar, select **Run Trigger**,
- > and select the trigger, which forces a check. Or, select **Run Trigger** on designer toolbar.
+1. To review the steps and other information for a specific run, under **Run History**, select that run. If the list shows many runs, and you can't find the entry that you want, try filtering the list.
-1. To view information about a specific trigger attempt, on the trigger pane, select that trigger event. If the list shows many trigger attempts, and you can't find the entry that you want, try filtering the list. If you don't find the data that you expect, try selecting **Refresh** on the toolbar.
+ > [!TIP]
+ >
+ > If the run status doesn't appear, try refreshing the overview pane by selecting **Refresh**.
+ > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
- ![View specific trigger attempt](./media/monitor-logic-apps/select-trigger-event-for-review.png)
+ ![Screenshot showing the Standard workflow run selected.](./media/monitor-logic-apps/select-specific-logic-app-run-standard.png)
- You can now review information about the selected trigger event, for example:
+ The workflow run pane shows each step in the selected run, each step's run status, and the time taken for each step to run, for example:
- ![View specific trigger information](./media/monitor-logic-apps/view-specific-trigger-details.png)
+ ![Screenshot showing each action in selected workflow run.](./media/monitor-logic-apps/logic-app-run-pane-standard.png)
-<a name="add-azure-alerts"></a>
+1. After all the steps in the run appear, select each step to review more information such as inputs, outputs, and any errors that happened in that step.
-## Set up monitoring alerts
+ For example, suppose you had an action that failed, and you wanted to review which inputs might have caused that step to fail.
-To get alerts based on specific metrics or exceeded thresholds for your logic app, set up [alerts in Azure Monitor](../azure-monitor/alerts/alerts-overview.md). Learn about [metrics in Azure](../azure-monitor/data-platform.md). To set up alerts without using [Azure Monitor](../azure-monitor/logs/log-query-overview.md), follow these steps.
+ ![Screenshot showing Standard logic app workflow with failed step inputs.](./media/monitor-logic-apps/failed-action-inputs-standard.png)
-1. On your logic app menu, under **Monitoring**, select **Alerts** > **New alert rule**.
+ The following screenshot shows the outputs from the failed step.
- ![Add an alert for your logic app](./media/monitor-logic-apps/add-new-alert-rule.png)
+ ![Screenshot showing Standard logic app workflow with failed step outputs.](./media/monitor-logic-apps/failed-action-outputs-standard.png)
-1. On the **Create rule** pane, under **Resource**, select your logic app, if not already selected. Under **Condition**, select **Add** so that you can define the condition that triggers the alert.
+ > [!NOTE]
+ >
+ > All runtime details and events are encrypted within Azure Logic Apps and
+ > are decrypted only when a user requests to view that data. You can
+ > [hide inputs and outputs in run history](logic-apps-securing-a-logic-app.md#obfuscate).
- ![Add a condition for the rule](./media/monitor-logic-apps/add-condition-for-rule.png)
+
-1. On the **Configure signal logic** pane, find and select the signal for which you want to get an alert. You can use the search box, or to sort the signals alphabetically, select the **Signal name** column header.
+<a name="add-azure-alerts"></a>
- For example, if you want to send an alert when a trigger fails, follow these steps:
+## Set up monitoring alerts
- 1. In the **Signal name** column, find and select the **Triggers Failed** signal.
+To get alerts based on specific metrics or exceeded thresholds for your logic app, set up [alerts in Azure Monitor](../azure-monitor/alerts/alerts-overview.md). For more information, review [Metrics in Azure](../azure-monitor/data-platform.md). To set up alerts without using [Azure Monitor](../azure-monitor/logs/log-query-overview.md), follow these steps.
- ![Select signal for creating alert](./media/monitor-logic-apps/find-and-select-signal.png)
+1. On your logic app menu, under **Monitoring**, select **Alerts**. On the toolbar, select **Create** > **Alert rule**.
- 1. On the information pane that opens for the selected signal, under **Alert logic**, set up your condition, for example:
+ ![Screenshot showing Azure portal, logic app menu with "Alerts" selected, and toolbar with "Create", "Alert rule" selected.](./media/monitor-logic-apps/add-new-alert-rule.png)
- 1. For **Operator**, select **Greater than or equal to**.
+1. On the **Select a signal** pane, under **Signal type**, select the signal for which you want to get an alert.
- 1. For **Aggregation type**, select **Count**.
+ > [!TIP]
+ >
+ > You can use the search box, or to sort the signals alphabetically,
+ > select the **Signal name** column header.
- 1. For **Threshold value**, enter `1`.
+ For example, to send an alert when a trigger fails, follow these steps:
- 1. Under **Condition preview**, confirm that your condition appears correct.
+ 1. In the **Signal name** column, find and select the **Triggers Failed** signal.
- 1. Under **Evaluated based on**, set up the interval and frequency for running the alert rule. For **Aggregation granularity (Period)**, select the period for grouping the data. For **Frequency of evaluation**, select how often you want to check the condition.
+ ![Screenshot showing "Select a signal pane", the "Signal name" column, and "Triggers Failed" signal selected.](./media/monitor-logic-apps/find-and-select-signal.png)
- 1. When you're ready, select **Done**.
+ 1. On the **Configure signal logic** pane, under **Alert logic**, set up your condition, and select **Done**, for example:
- Here's the finished condition:
+ | Property | Example value |
+ |-||
+ | **Operator** | **Greater than or equal to** |
+ | **Aggregation type** | **Count** |
+ | **Threshold value** | **1** |
+ | **Unit** | **Count** |
+ | **Condition preview** | **Whenever the count of triggers failed is greater than or equal to 1** |
+ | **Aggregation granularity (Period)** | **1 minute** |
+ | **Frequency of evaluation** | **Every 1 Minute** |
+ |||
- ![Set up condition for alert](./media/monitor-logic-apps/set-up-condition-for-alert.png)
+ For more information, review [Create, view, and manage log alerts by using Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md). For a template-based sketch of this alert rule, see the example after these steps.
- The **Create rule** page now shows the condition that you created and the cost for running that alert.
+ The following screenshot shows the finished condition:
- ![New alert on the "Create rule" page](./media/monitor-logic-apps/finished-alert-condition-cost.png)
+ ![Screenshot showing the condition for alert.](./media/monitor-logic-apps/set-up-condition-for-alert.png)
-1. Specify a name, optional description, and severity level for your alert. Either leave the **Enable rule upon creation** setting turned on, or turn off until you're ready to enable the rule.
+ The **Create an alert rule** page now shows the condition that you created and the cost for running that alert.
-1. When you're done, select **Create alert rule**.
+ ![Screenshot showing the new alert on the "Create an alert rule" page.](./media/monitor-logic-apps/finished-alert-condition-cost.png)
-> [!TIP]
-> To run a logic app from an alert, you can include the
-> [request trigger](../connectors/connectors-native-reqres.md) in your workflow,
-> which lets you perform tasks like these examples:
->
-> * [Post to Slack](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-slack-with-logic-app)
-> * [Send a text](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-text-message-with-logic-app)
-> * [Add a message to a queue](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-queue-with-logic-app)
+1. If you're satisfied, select **Next: Details** to finish creating the rule.
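
If you prefer to define the same alert rule in a template rather than through the portal, the following minimal sketch shows how a `Microsoft.Insights/metricAlerts` resource might express the example condition above. The rule name, scope, and empty action list are placeholders, and the sketch assumes the Consumption workflow metric named **TriggersFailed**.

```json
{
    "type": "Microsoft.Insights/metricAlerts",
    "apiVersion": "2018-03-01",
    "name": "logic-app-triggers-failed-alert",
    "location": "global",
    "properties": {
        "severity": 3,
        "enabled": true,
        "scopes": [ "<logic-app-resource-ID>" ],
        "evaluationFrequency": "PT1M",
        "windowSize": "PT1M",
        "criteria": {
            "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
            "allOf": [
                {
                    "criterionType": "StaticThresholdCriterion",
                    "name": "TriggersFailedCriterion",
                    "metricName": "TriggersFailed",
                    "operator": "GreaterThanOrEqual",
                    "threshold": 1,
                    "timeAggregation": "Count"
                }
            ]
        },
        "actions": []
    }
}
```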
## Next steps
-* [Monitor logic apps by using Azure Monitor](../logic-apps/monitor-logic-apps-log-analytics.md)
+* [Monitor logic apps with Azure Monitor](monitor-logic-apps-log-analytics.md)
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Previously updated : 01/18/2022 Last updated : 05/26/2022 #Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
The following is a sample JSONL file for image classification:
Once your data is in JSONL format, you can create training and validation `MLTable` as shown below. Automated ML doesn't impose any constraints on training or validation data size for computer vision tasks. Maximum dataset size is only limited by the storage layer behind the dataset (i.e. blob store). There's no minimum number of images or labels. However, we recommend starting with a minimum of 10-15 samples per label to ensure the output model is sufficiently trained. The higher the total number of labels/classes, the more samples you need per label.
validation_data:
You can create data inputs from training and validation MLTable from your local directory or cloud storage with the following code:
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
Training data is a required parameter and is passed in using the `training_data` parameter of the task specific `automl` type function. You can optionally specify another MLTable as a validation data with the `validation_data` parameter. If no validation data is specified, 20% of your training data will be used for validation by default, unless you pass `validation_data_size` argument with a different value.
limits:
# [Python SDK v2 (preview)](#tab/SDK-v2)
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=limit-settings)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=limit-settings)]
sweep:
# [Python SDK v2 (preview)](#tab/SDK-v2)
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
image_model:
# [Python SDK v2 (preview)](#tab/SDK-v2)
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=pass-arguments)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=pass-arguments)]
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZUR
When you've configured your AutoML Job to the desired settings, you can submit the job.
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
## Outputs and evaluation metrics
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
The Azure Machine Learning CLI v2 uses our new v2 API platform. New features suc
As mentioned in the previous section, there are two types of operations: those that use ARM and those that use the workspace. With the __legacy v1 API__, most operations used the workspace. With the v1 API, adding a private endpoint to the workspace provided network isolation for everything except CRUD operations on the workspace or compute resources.
-With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/jobs/create-or-update) api sends metadata, and [parameters](/azure/machine-learning/reference-yaml-job-command).
+With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/jobs/create-or-update) api sends metadata, and [parameters](./reference-yaml-job-command.md).
> [!TIP] > * Public ARM operations do not surface data in your storage account on public networks.
az ml workspace show -g <myresourcegroup> -w <myworkspace> --query v1LegacyMode
## Next steps * [Use a private endpoint with Azure Machine Learning workspace](how-to-configure-private-link.md).
-* [Create private link for managing Azure resources](/azure/azure-resource-manager/management/create-private-link-access-portal).
+* [Create private link for managing Azure resources](../azure-resource-manager/management/create-private-link-access-portal.md).
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
Previously updated : 05/10/2022 Last updated : 05/26/2022
This article is based on the [image_classification_keras_minist_convnet.ipynb](h
Import all the Azure Machine Learning required libraries that you'll need for this article:
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=required-library)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=required-library)]
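As a rough guide, the imports typically boil down to something like the following sketch (the notebook remains the authoritative list; the exact set may differ):

```python
# Hedged sketch of the usual imports for an SDK v2 pipeline notebook.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, Input, load_component
from azure.ai.ml.dsl import pipeline
from azure.ai.ml.constants import AssetTypes
```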
## Prepare input data for your pipeline job
Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image
To define the input data of a job that references the Web-based data, run:
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
By defining an `Input`, you create a reference to the data source location. The data remains in its existing location, so no extra storage cost is incurred.
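A hedged sketch of such an `Input` follows; the URL is a placeholder for the public location of the Fashion-MNIST files, not the sample's actual path:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Reference web-based data in place; nothing is copied or uploaded.
fashion_ds = Input(
    type=AssetTypes.URI_FOLDER,
    path="https://<public-storage-account>.blob.core.windows.net/datasets/fashion-mnist/",  # placeholder URL
)
```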
If you're following along with the example in the [AzureML Examples repo](https:
By using the `command_component()` function as a decorator, you can easily define the component's interface, metadata, and the code to execute from a Python function. Each decorated Python function is transformed into a single static specification (YAML) that the pipeline service can process. The code above defines a component with display name `Prep Data` using the `@command_component` decorator:
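For orientation, here is a hedged sketch of what such a decorated function can look like. The import location (`mldesigner`), component metadata, and input/output names are assumptions, not the sample's exact code:

```python
from mldesigner import command_component, Input, Output

@command_component(
    name="prep_data",
    display_name="Prep Data",
    description="Convert raw image data into training and test CSV files.",
)
def prepare_data_component(
    input_data: Input(type="uri_folder"),
    training_data: Output(type="uri_folder"),
    test_data: Output(type="uri_folder"),
):
    # Read from input_data, then write train/test CSV files into the two output folders.
    ...
```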
Following is what a component looks like in the studio UI.
You'll need to modify the runtime environment in which your component runs. The code above creates an object of the `Environment` class, which represents the runtime environment in which the component runs. The `conda.yaml` file contains all packages used for the component, like the following: Now you've prepared all the source files for the `Prep Data` component.
The `train.py` file contains a normal python function, which performs the traini
After defining the training function successfully, you can use `@command_component` in Azure Machine Learning SDK v2 to wrap your function as a component that can be used in AML pipelines. The code above defines a component with display name `Train Image Classification Keras` using `@command_component`:
The code above define a component with display name `Train Image Classification
The train-model component has a slightly more complex configuration than the prep-data component. The `conda.yaml` is like the following: Now you've prepared all the source files for the `Train Image Classification Keras` component.
If you're following along with the example in the [AzureML Examples repo](https:
The `score.py` file contains a normal Python function, which scores the trained model. The code in `score.py` takes three command-line arguments: `input_data`, `input_model`, and `output_result`. The program scores the input model using the input data and then outputs the scoring result.
In this section, you'll learn to create a component specification in the valid Y
- Interface: inputs and outputs - Command, code, & environment: The command, code, and environment used to run the component * `name` is the unique identifier of the component. Its display name is `Score Image Classification Keras`. * This component has two inputs and one output.
For prep-data component and train-model component defined by python function, yo
In the following code, you import the `prepare_data_component()` and `keras_train_component()` functions from the `prep_component.py` file under the `prep` folder and the `train_component` file under the `train` folder, respectively.
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-dsl-component)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-dsl-component)]
For the score component defined by YAML, you can use the `load_component()` function to load it.
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-yaml)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=load-from-yaml)]
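As a hedged sketch (the YAML file path is an assumption, and the parameter is passed positionally because its keyword name has varied across preview SDK versions):

```python
from azure.ai.ml import load_component

# Load the score component from its YAML specification.
keras_score_component = load_component("./score/score.yaml")
```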
## Build your pipeline Now that you've created and loaded all the components and input data to build the pipeline, you can compose them into a pipeline:
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
The pipeline has a default compute `cpu_compute_target`, which means if you don't specify compute for a specific node, that node will run on the default compute.
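A hedged sketch of this composition is shown below; the node names, output names, and compute name are illustrative rather than the notebook's exact code:

```python
from azure.ai.ml.dsl import pipeline

@pipeline()
def image_classification_keras_minist_convnet(pipeline_input_data):
    # Wire the three loaded components together; outputs of one node feed the next.
    prepare_data_node = prepare_data_component(input_data=pipeline_input_data)
    train_node = keras_train_component(input_data=prepare_data_node.outputs.training_data)
    score_node = keras_score_component(
        input_data=prepare_data_node.outputs.test_data,
        input_model=train_node.outputs.output_model,
    )

pipeline_job = image_classification_keras_minist_convnet(pipeline_input_data=fashion_ds)
# Nodes that don't set their own compute fall back to this default (placeholder name).
pipeline_job.settings.default_compute = "cpu-cluster"
```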
We'll use `DefaultAzureCredential` to get access to workspace. `DefaultAzureCred
If it doesn't work for you, see these references for more available credentials: [configure credential example](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb), [azure-identity reference doc](/python/api/azure-identity/azure.identity?view=azure-python&preserve-view=true ).
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=credential)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=credential)]
#### Get a handle to a workspace with compute Create an `MLClient` object to manage Azure Machine Learning services.
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=workspace)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=workspace)]
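A minimal sketch of this step, assuming a `config.json` workspace configuration file is available:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

credential = DefaultAzureCredential()
# Reads the workspace details from config.json in the current directory or a parent.
ml_client = MLClient.from_config(credential=credential)
```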
> [!IMPORTANT] > This code snippet expects the workspace configuration json file to be saved in the current directory or its parent. For more information on creating a workspace, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
Create an `MLClient` object to manage Azure Machine Learning services.
Now that you have a handle to your workspace, you can submit your pipeline job.
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=submit-pipeline)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=submit-pipeline)]
The code above submits this image classification pipeline job to an experiment called `pipeline_samples`. The experiment is created automatically if it doesn't exist. The `pipeline_input_data` uses `fashion_ds`.
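A hedged sketch of the submission call:

```python
# Submit the pipeline job; the experiment is created if it doesn't already exist.
pipeline_job = ml_client.jobs.create_or_update(
    pipeline_job,
    experiment_name="pipeline_samples",
)
print(pipeline_job.name)
```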
The call to `submit` the `Experiment` completes quickly, and produces output sim
You can monitor the pipeline run by opening the link or you can block until it completes by running:
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=stream-pipeline)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=stream-pipeline)]
> [!IMPORTANT] > The first pipeline run takes roughly *15 minutes*. All dependencies must be downloaded, a Docker image is created, and the Python environment is provisioned and created. Running the pipeline again takes significantly less time because those resources are reused instead of created. However, total run time for the pipeline depends on the workload of your scripts and the processes that are running in each pipeline step.
You can check the logs and outputs of each component by right clicking the compo
In the previous section, you built a pipeline using three components to complete an image classification task end to end. You can also register components to your workspace so that they can be shared and reused within it. The following is an example of registering the prep-data component.
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=register-component)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=register-component)]
Using `ml_client.components.get()`, you can get a registered component by name and version. Using `ml_client.components.create_or_update()`, you can register a component previously loaded from a Python function or YAML.
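A hedged sketch of the register-and-fetch round trip:

```python
# Register the prep-data component, then retrieve it by name and version.
registered_prep = ml_client.components.create_or_update(prepare_data_component)
fetched_prep = ml_client.components.get(
    name=registered_prep.name,
    version=registered_prep.version,
)
```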
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
Previously updated : 05/10/2022 Last updated : 05/26/2022 ms.devlang: azurecli, cliv2
Open the `services.Studio.endpoint` URL you'll see a graph visualization of the
Let's take a look at the pipeline definition in the *3b_pipeline_with_data/pipeline.yml* file. The table below describes the most commonly used fields of the pipeline YAML schema. See the [full pipeline YAML schema here](reference-yaml-job-pipeline.md).
One common scenario is to read and write data in your pipeline. In AzureML, we u
Now let's look at *componentA.yml* as an example to understand the component definition YAML. The most commonly used schema of the component YAML is described in the table below. See the [full component YAML schema here](reference-yaml-component-command.md).
Under **Jobs** tab, you'll see the history of all jobs that use this component.
Let's use `1b_e2e_registered_components` to demonstrate how to use a registered component in pipeline YAML. Navigate to the `1b_e2e_registered_components` directory and open the `pipeline.yml` file. The keys and values in the `inputs` and `outputs` fields are similar to those already discussed. The only significant difference is the value of the `component` field in the `jobs.<JOB_NAME>.component` entries. The `component` value is of the form `azureml:<COMPONENT_NAME>:<COMPONENT_VERSION>`. The `train-job` definition, for instance, specifies that the latest version of the registered component `my_train` should be used: ### Manage components
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
Use the following steps to secure your workspace and associated resources. These
| Service | Endpoint information | Allow trusted information | | -- | -- | -- | | __Azure Key Vault__| [Service endpoint](../key-vault/general/overview-vnet-service-endpoints.md)</br>[Private endpoint](../key-vault/general/private-link-service.md) | [Allow trusted Microsoft services to bypass this firewall](how-to-secure-workspace-vnet.md#secure-azure-key-vault) |
- | __Azure Storage Account__ | [Service and private endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access from Azure resource instances](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview)</br>**or**</br>[Grant access to trusted Azure services](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) |
+ | __Azure Storage Account__ | [Service and private endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access from Azure resource instances](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances)</br>**or**</br>[Grant access to trusted Azure services](../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) |
| __Azure Container Registry__ | [Private endpoint](../container-registry/container-registry-private-link.md) | [Allow trusted services](../container-registry/allow-access-trusted-services.md) |
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
Previously updated : 04/15/2022 Last updated : 05/26/2022 # Prepare data for computer vision tasks with automated machine learning (preview)
az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE]
``` # [Python SDK v2 (preview)](#tab/SDK-v2)
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
Next, you will need to get the label annotations in JSONL format. The schema of labeled data depends on the computer vision task at hand. Refer to [schemas for JSONL files for AutoML computer vision experiments](reference-automl-images-schema.md) to learn more about the required JSONL schema for each task type.
If your training data is in a different format (like Pascal VOC or COCO), [help
Once you have your labeled data in JSONL format, you can use it to create an `MLTable` as shown below. MLTable packages your data into a consumable object for training. You can then pass in the `MLTable` as a data input for your AutoML training job.
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
Previously updated : 04/15/2022 Last updated : 05/26/2022 #Customer intent: As an experienced Python developer, I need to read in my data to make it available to a remote compute to train my machine learning models.
The following YAML file demonstrates how to use the output data from one compone
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] ## Python SDK v2 (preview)
The following example defines a pipeline containing three nodes and moves data b
* `train_node` that trains a CNN model with Keras using the training data, `mnist_train.csv` . * `score_node` that scores the model using test data, `mnist_test.csv`.
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=build-pipeline)]
## Next steps * [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install)
machine-learning How To Responsible Ai Dashboard Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard-sdk-cli.md
The ` RAI Insights Dashboard Constructor` and `Gather RAI Insights Dashboard ` c
Below are specifications of the Responsible AI components and examples of code snippets in YAML and Python. To view the full code, see [sample YAML and Python notebook](https://aka.ms/RAIsamplesProgrammer)
+### Limitations
+The current set of components has a number of limitations on its use:
+
+- All models must be registered in AzureML in MLflow format with a sklearn flavor.
+- The models must be loadable in the component environment.
+- The models must be pickleable.
+- The models must be supplied to the RAI components using the 'Fetch Registered Model' component which we provide.
+- The dataset inputs must be `pandas` DataFrames in Parquet format.
+- A model must still be supplied even if only a causal analysis of the data is performed. The `DummyClassifier` and `DummyRegressor` estimators from scikit-learn can be used for this purpose (see the sketch after this list).
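As a hedged illustration of the last point, the sketch below logs and registers a scikit-learn `DummyClassifier` through MLflow. It assumes the MLflow tracking URI is already pointed at your AzureML workspace, and the data and registered model name are placeholders:

```python
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.dummy import DummyClassifier

# Placeholder data; in practice use the same schema as your analysis dataset.
X = pd.DataFrame({"feature": [0, 1, 2, 3]})
y = [0, 0, 1, 1]

dummy = DummyClassifier(strategy="most_frequent").fit(X, y)

with mlflow.start_run():
    mlflow.sklearn.log_model(
        sk_model=dummy,
        artifact_path="dummy_model",
        registered_model_name="rai_dummy_classifier",  # placeholder registered name
    )
```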
+ ### RAI Insights Dashboard Constructor This component has three input ports:
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
Previously updated : 04/22/2022 Last updated : 05/26/2022 # Use network isolation with managed online endpoints (preview)
-When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](/azure/private-link/private-endpoint-overview). Using a private endpoint with online endpoints is currently a preview feature.
+When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](../private-link/private-endpoint-overview.md). Using a private endpoint with online endpoints is currently a preview feature.
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
The following diagram shows how communications flow through private endpoints to
* You must have an Azure Machine Learning workspace, and the workspace must use a private endpoint. If you don't have one, the steps in this article create an example workspace, VNet, and VM. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
-* The Azure Container Registry for your workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](/azure/container-registry/container-registry-skus).
+* The Azure Container Registry for your workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md).
* The Azure Container Registry and Azure Storage Account must be in the same Azure Resource Group as the workspace.
The following diagram shows how communications flow through private endpoints to
## Limitations
+* The `v1_legacy_mode` flag must be disabled (false) on your Azure Machine Learning workspace. If this flag is enabled, you won't be able to create a managed online endpoint. For more information, see [Network isolation with v2 API](how-to-configure-network-isolation-with-v2.md).
* If your Azure Machine Learning workspace has a private endpoint that was created before May 24, 2022, you must recreate the workspace's private endpoint before configuring your online endpoints to use a private endpoint. For more information on creating a private endpoint for your workspace, see [How to configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md). * Secure outbound communication creates three private endpoints per deployment. One to Azure Blob storage, one to Azure Container Registry, and one to your workspace.
The following diagram shows the overall architecture of this example:
To create the resources, use the following Azure CLI commands. Replace `<UNIQUE_SUFFIX>` with a unique suffix for the resources that are created. ### Create the virtual machine jump box
When prompted, enter the password you used when creating the VM.
1. Use the following commands from the SSH session to install the CLI and Docker:
- :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="setup_docker_az_cli":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="setup_docker_az_cli":::
1. To create the environment variables used by this example, run the following commands. Replace `<YOUR_SUBSCRIPTION_ID>` with your Azure subscription ID. Replace `<YOUR_RESOURCE_GROUP>` with the resource group that contains your workspace. Replace `<SUFFIX_USED_IN_SETUP>` with the suffix you provided earlier. Replace `<LOCATION>` with the location of your Azure workspace. Replace `<YOUR_ENDPOINT_NAME>` with the name to use for the endpoint.
When prompted, enter the password you used when creating the VM.
# [Generic model](#tab/model)
- :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/deploy-moe-vnet.sh" id="set_env_vars":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet.sh" id="set_env_vars":::
# [MLflow model](#tab/mlflow)
- :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/deploy-moe-vnet-mlflow.sh" id="set_env_vars":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet-mlflow.sh" id="set_env_vars":::
When prompted, enter the password you used when creating the VM.
1. To configure the defaults for the CLI, use the following commands:
- :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="configure_defaults":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="configure_defaults":::
1. To clone the example files for the deployment, use the following command:
When prompted, enter the password you used when creating the VM.
1. To build a custom docker image to use with the deployment, use the following commands:
- :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/build_image.sh" id="build_image":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/build_image.sh" id="build_image":::
> [!TIP] > In this example, we build the Docker image before pushing it to Azure Container Registry. Alternatively, you can build the image in your vnet by using an Azure Machine Learning compute cluster and environments. For more information, see [Secure Azure Machine Learning workspace](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
When prompted, enter the password you used when creating the VM.
> [!TIP] > You can test or debug the Docker image locally by using the `--local` flag when creating the deployment. For more information, see the [Deploy and debug locally](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints) article.
- :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/create_moe.sh" id="create_vnet_deployment":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/create_moe.sh" id="create_vnet_deployment":::
1. To make a scoring request with the endpoint, use the following commands:
- :::code language="azurecli" source="~/azureml-examples-online-endpoint-vnet/cli/endpoints/online/managed/vnet/setup_vm/scripts/score_endpoint.sh" id="check_deployment":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/score_endpoint.sh" id="check_deployment":::
### Cleanup To delete the endpoint, use the following command: To delete the VM, use the following command: To delete all the resources created in this article, use the following command. Replace `<resource-group-name>` with the name of the resource group used in this example:
az group delete --resource-group <resource-group-name>
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Access Azure resources with a online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)-- [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md)
+- [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md)
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
When the creation process finishes, you train your model by using the cluster in
When you enable **No public IP**, your compute cluster doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using the Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute cluster nodes from the internet, thus eliminating a significant threat vector. **No public IP** clusters help comply with the no-public-IP policies many enterprises have. > [!WARNING]
-> By default, you do not have public internet access from No Public IP Compute Cluster. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) with a public IP.
+> By default, you do not have public internet access from No Public IP Compute Cluster. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) with a public IP.
A compute cluster with **No public IP** enabled has **no inbound communication requirements** from the public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound traffic from source **VirtualNetwork** (any source port) to destination **VirtualNetwork** on destination ports **29876** and **29877**, and from source **AzureLoadBalancer** (any source port) to destination **VirtualNetwork** on destination port **44224**.
For steps on how to create a compute instance deployed in a virtual network, see
When you enable **No public IP**, your compute instance doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using the Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of the compute instance node from the internet, thus eliminating a significant threat vector. Compute instances also do packet filtering to reject any traffic from outside the virtual network. **No public IP** instances are dependent on [Azure Private Link](how-to-configure-private-link.md) for the Azure Machine Learning workspace. > [!WARNING]
-> By default, you do not have public internet access from No Public IP Compute Instance. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) with a public IP.
+> By default, you do not have public internet access from No Public IP Compute Instance. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) with a public IP.
For **outbound connections** to work, you need to set up an egress firewall, such as Azure Firewall, with user-defined routes. For instance, you can use a firewall set up with an [inbound/outbound configuration](how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute instance is deployed. The route table entry can set the next hop to the private IP address of the firewall, with an address prefix of 0.0.0.0/0.
This article is part of a series on securing an Azure Machine Learning workflow.
* If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
-* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-cli.md
Previously updated : 03/31/2022 Last updated : 05/26/2022
The following example shows an AutoML configuration file for training a classifi
* The training has a timeout of 180 minutes * The data for training is in the folder "./training-mltable-folder". Automated ML jobs only accept data in the form of an `MLTable`. That MLTable definition is what points to the training data file, in this case a local .csv file that will be uploaded automatically: Finally, you can run it (create the AutoML job) with this CLI command:
Or like the following if providing workspace IDs explicitly instead of using the
/> az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZURE_WORKSPACE] --resource-group [YOUR_AZURE_RESOURCE_GROUP] --subscription [YOUR_AZURE_SUBSCRIPTION] ```
-To investigate additional AutoML model training examples using other ML-tasks such as regression, time-series forecasting, image classification, object detection, NLP text-classification, etc., see the complete list of [AutoML CLI examples](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/automl-standalone-jobs).
+To investigate additional AutoML model training examples using other ML-tasks such as regression, time-series forecasting, image classification, object detection, NLP text-classification, etc., see the complete list of [AutoML CLI examples](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/automl-standalone-jobs).
### Train a model with a custom script
machine-learning How To Train Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-sdk.md
Previously updated : 05/10/2022 Last updated : 05/26/2022
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group,
You'll create a compute called `cpu-cluster` for your job, with this code:
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/configuration.ipynb?name=create-cpu-compute)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/configuration.ipynb?name=create-cpu-compute)]
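If you need a reference point, the sketch below shows one way such a cluster can be provisioned with SDK v2; the VM size and scale settings are illustrative:

```python
from azure.ai.ml.entities import AmlCompute

cpu_cluster = AmlCompute(
    name="cpu-cluster",
    type="amlcompute",
    size="STANDARD_DS3_V2",           # illustrative VM size
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=120,  # seconds before idle nodes scale down
)
ml_client.compute.begin_create_or_update(cpu_cluster)
```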
### 3. Environment to run the script
You'll use a curated environment provided by Azure ML for `lightgbm` called `Azur
To run this script, you'll use a `command`. The command will be run by submitting it as a `job` to Azure ML.
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=create-command)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=create-command)]
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-command)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-command)]
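As a hedged sketch of what such a command looks like (the script path, input values, and curated environment name are assumptions, not the notebook's exact values):

```python
from azure.ai.ml import command, Input

job = command(
    code="./src",  # assumed folder containing main.py
    command="python main.py --iris-csv ${{inputs.iris_csv}} "
            "--learning-rate ${{inputs.learning_rate}} --boosting ${{inputs.boosting}}",
    inputs={
        "iris_csv": Input(type="uri_file", path="https://<public-data-url>/iris.csv"),  # placeholder URL
        "learning_rate": 0.9,
        "boosting": "gbdt",
    },
    environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest",  # assumed curated environment name
    compute="cpu-cluster",
)
returned_job = ml_client.create_or_update(job)
```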
In the above, you configured:
To perform a sweep, there needs to be input(s) against which the sweep needs to
Let us improve our model by sweeping on `learning_rate` and `boosting` inputs to the script. In the previous step, you used a specific value for these parameters, but now you'll use a range or choice of values.
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=search-space)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=search-space)]
Now that you've defined the parameters, run the sweep
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=configure-sweep)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=configure-sweep)]
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-sweep)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb?name=run-sweep)]
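A hedged sketch of this pattern, reusing the `job` object from the earlier command sketch (the metric name and search ranges are illustrative):

```python
from azure.ai.ml.sweep import Choice, Uniform

# Re-call the command with sweep distributions for the inputs to tune.
job_for_sweep = job(
    learning_rate=Uniform(min_value=0.01, max_value=0.9),
    boosting=Choice(values=["gbdt", "dart"]),
)

sweep_job = job_for_sweep.sweep(
    compute="cpu-cluster",
    sampling_algorithm="random",
    primary_metric="test-multi_logloss",  # assumed name of a metric logged by the script
    goal="Minimize",
)
returned_sweep_job = ml_client.create_or_update(sweep_job)
```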
As seen above, the `sweep` function allows the user to configure the following key aspects:
machine-learning How To Use Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-data.md
These snippets use `uri_file` and `uri_folder`.
> > If you want to pass in just an individual file rather than the entire folder, you can use the `uri_file` type.
-For a complete example, see the [working_with_uris.ipynb notebook](https://github.com/azure/azureml-previews/sdk/docs/working_with_uris.ipynb).
- Below are some common data access patterns that you can use in your *control-plane* code to submit a job to Azure Machine Learning: ### Use data with a training job
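A hedged sketch of the two input flavors; the datastore paths are placeholders:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Pass a single file to the job.
single_file_input = Input(
    type=AssetTypes.URI_FILE,
    path="azureml://datastores/workspaceblobstore/paths/example-data/titanic.csv",  # placeholder path
)

# Or pass an entire folder.
folder_input = Input(
    type=AssetTypes.URI_FOLDER,
    path="azureml://datastores/workspaceblobstore/paths/example-data/",  # placeholder path
)
```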
machine-learning How To Use Sweep In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-sweep-in-pipeline.md
Previously updated : 05/10/2022 Last updated : 05/26/2022
The example used in this article can be found in [azureml-example repo](https://
Assume you already have a command component defined in `train.yaml`. A two-step pipeline job (train and predict) YAML file looks like below. The `sweep_step` is the step for hyperparameter tuning. Its type needs to be `sweep`, and `trial` refers to the command component defined in `train.yaml`. From the `search space` field we can see three hyperparameters (`c_value`, `kernel`, and `coef`) are added to the search space. After you submit this pipeline job, Azure Machine Learning will run the trial component multiple times to sweep over hyperparameters based on the search space and termination policy you defined in `sweep_step`. Check the [sweep job YAML schema](reference-yaml-job-sweep.md) for the full schema of the sweep job. Below is the trial component definition (train.yml file). The hyperparameters added to the search space in pipeline.yml need to be inputs for the trial component. The source code of the trial component is under the `./train-src` folder. In this example, it's a single `train.py` file. This is the code that will be executed in every trial of the sweep job. Make sure you've logged the metrics in the trial component source code with exactly the same name as the `primary_metric` value in the pipeline.yml file. In this example, we use `mlflow.autolog()`, which is the recommended way to track your ML experiments. See more about mlflow [here](./how-to-use-mlflow-cli-runs.md). The code snippet below is the source code of the trial component. ### Python SDK
In Azure Machine Learning Python SDK v2, you can enable hyperparameter tuning fo
The code snippet below shows how to enable sweep for `train_model`.
-[!notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep/pipeline_with_hyperparameter_sweep.ipynb?name=enable-sweep)]
+[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/1c_pipeline_with_hyperparameter_sweep/pipeline_with_hyperparameter_sweep.ipynb?name=enable-sweep)]
We first load `train_component_func`, defined in the `train.yml` file. When creating `train_model`, we add `c_value`, `kernel`, and `coef0` into the search space (lines 15-17). Lines 30-35 define the primary metric, sampling algorithm, and so on.
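For orientation, a hedged sketch of the pattern inside the pipeline definition follows; the data input name, metric name, and limits are illustrative:

```python
from azure.ai.ml.sweep import Choice, Uniform

# Inside the @pipeline-decorated function:
train_model = train_component_func(
    data=pipeline_job_input,  # placeholder pipeline input name
    c_value=Uniform(min_value=0.5, max_value=0.9),
    kernel=Choice(values=["rbf", "linear", "poly"]),
    coef0=Uniform(min_value=0.1, max_value=1.0),
)
sweep_step = train_model.sweep(
    primary_metric="training_f1_score",  # must match a metric logged by the trial component
    goal="maximize",
    sampling_algorithm="random",
)
sweep_step.set_limits(max_total_trials=20, max_concurrent_trials=10, timeout=7200)
```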
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
Previously updated : 04/15/2022 Last updated : 05/26/2022
az ml data create -f [PATH_TO_YML_FILE] --workspace-name [YOUR_AZURE_WORKSPACE]
``` # [Python SDK v2 (preview)](#tab/SDK-v2)
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=upload-data)]
The next step is to create an `MLTable` from your data in JSONL format, as shown below. MLTable packages your data into a consumable object for training. # [CLI v2](#tab/CLI-v2) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
validation_data:
You can create data inputs from training and validation MLTable with the following code:
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=data-load)]
primary_metric: mean_average_precision
# [Python SDK v2 (preview)](#tab/SDK-v2)
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=image-object-detection-configuration)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=image-object-detection-configuration)]
search_space:
# [Python SDK v2 (preview)](#tab/SDK-v2)
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=sweep-settings)]
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=search-space-settings)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=search-space-settings)]
az ml job create --file ./hello-automl-job-basic.yml --workspace-name [YOUR_AZUR
When you've configured your AutoML job with the desired settings, you can submit the job.
-[!Notebook-python[] (~/azureml-examples-sdk-preview/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
+[!Notebook-python[] (~/azureml-examples-main/sdk/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb?name=submit-run)]
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
Datastores currently support storing connection information to the storage servi
| Storage&nbsp;type | Authentication&nbsp;type | [Azure&nbsp;Machine&nbsp;Learning studio](https://ml.azure.com/) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; Python SDK](/python/api/overview/azure/ml/intro) | [Azure&nbsp;Machine&nbsp;Learning CLI](reference-azure-machine-learning-cli.md) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; REST API](/rest/api/azureml/) | VS Code ||||||
-[Azure&nbsp;Blob&nbsp;Storage](/azure/storage/blobs/storage-blobs-overview)| Account key <br> SAS token | ✓ | ✓ | ✓ |✓ |✓
-[Azure&nbsp;File&nbsp;Share](/azure/storage/files/storage-files-introduction)| Account key <br> SAS token | ✓ | ✓ | ✓ |✓|✓
-[Azure&nbsp;Data Lake&nbsp;Storage Gen&nbsp;1](/azure/data-lake-store/)| Service principal| ✓ | ✓ | ✓ |✓|
-[Azure&nbsp;Data Lake&nbsp;Storage Gen&nbsp;2](/azure/storage/blobs/data-lake-storage-introduction)| Service principal| ✓ | ✓ | ✓ |✓|
+[Azure&nbsp;Blob&nbsp;Storage](../../storage/blobs/storage-blobs-overview.md)| Account key <br> SAS token | ✓ | ✓ | ✓ |✓ |✓
+[Azure&nbsp;File&nbsp;Share](../../storage/files/storage-files-introduction.md)| Account key <br> SAS token | ✓ | ✓ | ✓ |✓|✓
+[Azure&nbsp;Data Lake&nbsp;Storage Gen&nbsp;1](../../data-lake-store/index.yml)| Service principal| ✓ | ✓ | ✓ |✓|
+[Azure&nbsp;Data Lake&nbsp;Storage Gen&nbsp;2](../../storage/blobs/data-lake-storage-introduction.md)| Service principal| ✓ | ✓ | ✓ |✓|
[Azure&nbsp;SQL&nbsp;Database](/azure/azure-sql/database/sql-database-paas-overview)| SQL authentication <br>Service principal| ✓ | ✓ | ✓ |✓| [Azure&nbsp;PostgreSQL](/azure/postgresql/overview) | SQL authentication| ✓ | ✓ | ✓ |✓| [Azure&nbsp;Database&nbsp;for&nbsp;MySQL](/azure/mysql/overview) | SQL authentication| | ✓* | ✓* |✓*|
Datastores currently support storing connection information to the storage servi
### Storage guidance
-We recommend creating a datastore for an [Azure Blob container](/azure/storage/blobs/storage-blobs-introduction). Both standard and premium storage are available for blobs. Although premium storage is more expensive, its faster throughput speeds might improve the speed of your training runs, particularly if you train against a large dataset. For information about the cost of storage accounts, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=machine-learning-service).
+We recommend creating a datastore for an [Azure Blob container](../../storage/blobs/storage-blobs-introduction.md). Both standard and premium storage are available for blobs. Although premium storage is more expensive, its faster throughput speeds might improve the speed of your training runs, particularly if you train against a large dataset. For information about the cost of storage accounts, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=machine-learning-service).
-[Azure Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction) is built on top of Azure Blob storage and designed for enterprise big data analytics. A fundamental part of Data Lake Storage Gen2 is the addition of a [hierarchical namespace](/azure/storage/blobs/data-lake-storage-namespace) to Blob storage. The hierarchical namespace organizes objects/files into a hierarchy of directories for efficient data access.
+[Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md) is built on top of Azure Blob storage and designed for enterprise big data analytics. A fundamental part of Data Lake Storage Gen2 is the addition of a [hierarchical namespace](../../storage/blobs/data-lake-storage-namespace.md) to Blob storage. The hierarchical namespace organizes objects/files into a hierarchy of directories for efficient data access.
## Storage access and permissions
To ensure you securely connect to your Azure storage service, Azure Machine Lear
### Virtual network
-Azure Machine Learning requires extra configuration steps to communicate with a storage account that is behind a firewall or within a virtual network. If your storage account is behind a firewall, you can [add your client's IP address to an allowlist](/azure/storage/common/storage-network-security#managing-ip-network-rules) via the Azure portal.
+Azure Machine Learning requires extra configuration steps to communicate with a storage account that is behind a firewall or within a virtual network. If your storage account is behind a firewall, you can [add your client's IP address to an allowlist](../../storage/common/storage-network-security.md#managing-ip-network-rules) via the Azure portal.
Azure Machine Learning can receive requests from clients outside of the virtual network. To ensure that the entity requesting data from the service is safe and to enable data being displayed in your workspace, [use a private endpoint with your workspace](../how-to-configure-private-link.md).
You can find account key, SAS token, and service principal information on your [
### Permissions
-For Azure blob container and Azure Data Lake Gen 2 storage, make sure your authentication credentials have **Storage Blob Data Reader** access. Learn more about [Storage Blob Data Reader](/azure/role-based-access-control/built-in-roles#storage-blob-data-reader). An account SAS token defaults to no permissions.
+For Azure blob container and Azure Data Lake Gen 2 storage, make sure your authentication credentials have **Storage Blob Data Reader** access. Learn more about [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader). An account SAS token defaults to no permissions.
* For data **read access**, your authentication credentials must have a minimum of list and read permissions for containers and objects. * For data **write access**, write and add permissions also are required.
Within this section are examples for how to create and register a datastore via
If you prefer a low code experience, see [Connect to data with Azure Machine Learning studio](../how-to-connect-data-ui.md). >[!IMPORTANT]
-> If you unregister and re-register a datastore with the same name, and it fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created prior to October 2020. For information on how to enable soft-delete, see [Turn on Soft Delete for an existing key vault](/azure/key-vault/general/soft-delete-change#turn-on-soft-delete-for-an-existing-key-vault).
+> If you unregister and re-register a datastore with the same name, and it fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created prior to October 2020. For information on how to enable soft-delete, see [Turn on Soft Delete for an existing key vault](../../key-vault/general/soft-delete-change.md#turn-on-soft-delete-for-an-existing-key-vault).
> [!NOTE]
file_datastore = Datastore.register_azure_file_share(workspace=ws,
### Azure Data Lake Storage Generation 2
-For an Azure Data Lake Storage Generation 2 (ADLS Gen 2) datastore, use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a credential datastore connected to an Azure DataLake Gen 2 storage with [service principal permissions](/azure/active-directory/develop/howto-create-service-principal-portal).
+For an Azure Data Lake Storage Generation 2 (ADLS Gen 2) datastore, use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a credential datastore connected to an Azure DataLake Gen 2 storage with [service principal permissions](../../active-directory/develop/howto-create-service-principal-portal.md).
-In order to utilize your service principal, you need to [register your application](/azure/active-directory/develop/app-objects-and-service-principals) and grant the service principal data access via either Azure role-based access control (Azure RBAC) or access control lists (ACL). Learn more about [access control set up for ADLS Gen 2](/azure/storage/blobs/data-lake-storage-access-control-model).
+In order to utilize your service principal, you need to [register your application](../../active-directory/develop/app-objects-and-service-principals.md) and grant the service principal data access via either Azure role-based access control (Azure RBAC) or access control lists (ACL). Learn more about [access control set up for ADLS Gen 2](../../storage/blobs/data-lake-storage-access-control-model.md).
The following code creates and registers the `adlsgen2_datastore_name` datastore to the `ws` workspace. This datastore accesses the file system `test` in the `account_name` storage account, by using the provided service principal credentials. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
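A hedged sketch of that registration call is shown below; the placeholder values must be replaced with your own storage account and service principal details:

```python
from azureml.core import Workspace, Datastore

ws = Workspace.from_config()

adlsgen2_datastore = Datastore.register_azure_data_lake_gen2(
    workspace=ws,
    datastore_name="adlsgen2_datastore_name",
    filesystem="test",                      # the file system (container) to access
    account_name="<storage-account-name>",  # placeholder
    tenant_id="<tenant-id>",                # service principal tenant (placeholder)
    client_id="<client-id>",                # service principal application ID (placeholder)
    client_secret="<client-secret>",        # service principal secret (placeholder)
)
```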
Azure Data Factory provides efficient and resilient data transfer with more than
* [Create an Azure machine learning dataset](how-to-create-register-datasets.md) * [Train a model](../how-to-set-up-training-targets.md)
-* [Deploy a model](../how-to-deploy-and-where.md)
+* [Deploy a model](../how-to-deploy-and-where.md)
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
Last updated 05/11/2022
In this article, you learn how to create Azure Machine Learning datasets to access data for your local or remote experiments with the Azure Machine Learning Python SDK. To understand where datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
-By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. Also datasets are lazily evaluated, which aids in workflow performance speeds. You can create datasets from datastores, public URLs, and [Azure Open Datasets](/azure/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset).
+By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. Also datasets are lazily evaluated, which aids in workflow performance speeds. You can create datasets from datastores, public URLs, and [Azure Open Datasets](../../open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md).
For a low-code experience, [Create Azure Machine Learning datasets with the Azure Machine Learning studio.](../how-to-connect-data-ui.md#create-datasets)
For the data to be accessible by Azure Machine Learning, datasets must be create
To create datasets from a datastore with the Python SDK:
-1. Verify that you have `contributor` or `owner` access to the underlying storage service of your registered Azure Machine Learning datastore. [Check your storage account permissions in the Azure portal](/azure/role-based-access-control/check-access).
+1. Verify that you have `contributor` or `owner` access to the underlying storage service of your registered Azure Machine Learning datastore. [Check your storage account permissions in the Azure portal](../../role-based-access-control/check-access.md).
1. Create the dataset by referencing paths in the datastore. You can create a dataset from multiple paths in multiple datastores. There is no hard limit on the number of files or data size that you can create a dataset from.
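A hedged sketch of this step (the datastore name and file path are placeholders), leading into the `register` call shown below:

```python
from azureml.core import Workspace, Dataset, Datastore

workspace = Workspace.from_config()
datastore = Datastore.get(workspace, "workspaceblobstore")  # placeholder datastore name

# Create a TabularDataset from one or more paths in the datastore.
titanic_ds = Dataset.Tabular.from_delimited_files(
    path=[(datastore, "train-dataset/tabular/titanic.csv")]  # placeholder path
)
```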
titanic_ds = titanic_ds.register(workspace = workspace,
* Learn [how to train with datasets](../how-to-train-with-datasets.md). * Use automated machine learning to [train with TabularDatasets](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
-* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
+* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-identity-based-data-access.md
In this article, you learn how to connect to storage services on Azure by using identity-based data access and Azure Machine Learning datastores via the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-Typically, datastores use **credential-based authentication** to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](/azure/active-directory/fundamentals/active-directory-whatis)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
+Typically, datastores use **credential-based authentication** to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](../../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](../how-to-connect-data-ui.md#create-datastores).
Certain machine learning scenarios involve training models with private data. In
- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). - An Azure storage account with a supported storage type. These storage types are supported:
- - [Azure Blob Storage](/azure/storage/blobs/storage-blobs-overview)
- - [Azure Data Lake Storage Gen1](/azure/data-lake-store/)
- - [Azure Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction)
+ - [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md)
+ - [Azure Data Lake Storage Gen1](../../data-lake-store/index.yml)
+ - [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)
- [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview) - The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install).
Identity-based data access supports connections to **only** the following storag
* Azure Data Lake Storage Gen2 * Azure SQL Database
-To access these storage services, you must have at least [Storage Blob Data Reader](/azure/role-based-access-control/built-in-roles#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](/azure/storage/blobs/assign-azure-role-data-access).
+To access these storage services, you must have at least [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../../storage/blobs/assign-azure-role-data-access.md).
If you prefer not to use your user identity (Azure Active Directory), you also have the option to grant a workspace managed-system identity (MSI) permission to create the datastore. To do so, you must have Owner permissions to the storage account and add the `grant_workspace_access=True` parameter to your data register method.
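As a rough sketch of that call with the v1 SDK, a blob datastore can be registered without any account key or SAS token so that identity-based data access is used; the storage account, container, and datastore names below are placeholders:

```python
from azureml.core import Datastore, Workspace

ws = Workspace.from_config()

# No account_key or sas_token is supplied, so access is resolved through
# identity-based data access. grant_workspace_access=True additionally lets the
# workspace managed identity be used to create the datastore.
blob_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="credentialless_blob",   # placeholder
    container_name="my-container",          # placeholder
    account_name="mystorageaccount",        # placeholder
    grant_workspace_access=True,
)
```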
identity:
* [Create an Azure Machine Learning dataset](how-to-create-register-datasets.md) * [Train with datasets](../how-to-train-with-datasets.md)
-* [Create a datastore with key-based data access](how-to-access-data.md)
+* [Create a datastore with key-based data access](how-to-access-data.md)
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-track-monitor-analyze-runs.md
root_run(current_child_run).log("MyMetric", f"Data from child run {current_child
1. In the **Destination details**, select the **Send to Log Analytics workspace** and specify the **Subscription** and **Log Analytics workspace**. > [!NOTE]
- > The **Azure Log Analytics Workspace** is a different type of Azure Resource than the **Azure Machine Learning service Workspace**. If there are no options in that list, you can [create a Log Analytics Workspace](/azure/azure-monitor/logs/quick-create-workspace).
+ > The **Azure Log Analytics Workspace** is a different type of Azure Resource than the **Azure Machine Learning service Workspace**. If there are no options in that list, you can [create a Log Analytics Workspace](../../azure-monitor/logs/quick-create-workspace.md).
![Screenshot of configuring the email notification.](./media/how-to-track-monitor-analyze-runs/log-location.png)
root_run(current_child_run).log("MyMetric", f"Data from child run {current_child
![Screeenshot of the new alert rule.](./media/how-to-track-monitor-analyze-runs/new-alert-rule.png)
-1. See [how to create and manage log alerts using Azure Monitor](/azure/azure-monitor/alerts/alerts-log).
+1. See [how to create and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md).
## Example notebooks
The following notebooks demonstrate the concepts in this article:
## Next steps * To learn how to log metrics for your experiments, see [Log metrics during training runs](../how-to-log-view-metrics.md).
-* To learn how to monitor resources and logs from Azure Machine Learning, see [Monitoring Azure Machine Learning](../monitor-azure-machine-learning.md).
+* To learn how to monitor resources and logs from Azure Machine Learning, see [Monitoring Azure Machine Learning](../monitor-azure-machine-learning.md).
managed-grafana How To Api Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-api-calls.md
In this article, you'll learn how to call Grafana APIs within Azure Managed Graf
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](/azure/managed-grafana/quickstart-managed-grafana-portal).
+- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./quickstart-managed-grafana-portal.md).
## Sign in to Azure
Replace `<access-token>` with the access token retrieved in the previous step an
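For illustration only (the article's own examples may use a different tool), a small Python sketch of such a call; the workspace endpoint URL is a placeholder, and `/api/search` is a standard Grafana HTTP API route that lists dashboards:

```python
import requests

grafana_endpoint = "https://my-workspace.example.grafana.azure.com"  # placeholder endpoint URL
access_token = "<access-token>"  # token retrieved in the previous step

# Call the Grafana HTTP API with the token as a bearer credential.
response = requests.get(
    f"{grafana_endpoint}/api/search",
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # dashboards and folders visible to the caller
```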
## Next steps > [!div class="nextstepaction"]
-> [Grafana UI](./grafana-app-ui.md)
+> [Grafana UI](./grafana-app-ui.md)
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
Last updated 3/31/2022
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](/azure/managed-grafana/how-to-permissions).
+- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./how-to-permissions.md).
- A resource including monitoring data with Managed Grafana monitoring permissions. Read [how to configure permissions](how-to-permissions.md) for more information. ## Sign in to Azure
Authentication and authorization are subsequently made through the provided mana
> [!div class="nextstepaction"] > [Modify access permissions to Azure Monitor](./how-to-permissions.md)
-> [Share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
+> [Share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
managed-grafana How To Monitor Managed Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-monitor-managed-grafana-workspace.md
In this article, you'll learn how to monitor an Azure Managed Grafana Preview wo
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- An Azure Managed Grafana workspace with access to at least one data source. If you don't have a workspace yet, [create an Azure Managed Grafana workspace](/azure/managed-grafan).
+- An Azure Managed Grafana workspace with access to at least one data source. If you don't have a workspace yet, [create an Azure Managed Grafana workspace](./how-to-permissions.md) and [add a data source](how-to-data-source-plugins-managed-identity.md).
## Sign in to Azure
Now that you've configured your diagnostic settings, Azure will stream all new e
> [!div class="nextstepaction"] > [Grafana UI](./grafana-app-ui.md)
-> [How to share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
+> [How to share an Azure Managed Grafana workspace](./how-to-share-grafana-workspace.md)
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
In this article, you'll learn how to manually edit permissions for a specific re
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](/azure/managed-grafana/quickstart-managed-grafana-portal).
+- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./quickstart-managed-grafana-portal.md).
- An Azure resource with monitoring data and write permissions, such as [User Access Administrator](../../articles/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../articles/role-based-access-control/built-in-roles.md#owner) ## Sign in to Azure
To change permissions for a specific resource, follow these steps:
## Next steps > [!div class="nextstepaction"]
-> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
managed-grafana How To Share Grafana Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-grafana-workspace.md
A DevOps team may build dashboards to monitor and diagnose an application or inf
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](/azure/managed-grafana/how-to-permissions).
+- An Azure Managed Grafana workspace. If you don't have one yet, [create a workspace](./how-to-permissions.md).
## Supported Grafana roles
Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.
> [!div class="nextstepaction"] > [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md) > [How to modify access permissions to Azure Monitor](./how-to-permissions.md)
-> [How to call Grafana APIs in your automation with Azure Managed Grafana](./how-to-api-calls.md)
+> [How to call Grafana APIs in your automation with Azure Managed Grafana](./how-to-api-calls.md)
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
Azure Managed Grafana is a data visualization platform built on top of the Grafa
Azure Managed Grafana is optimized for the Azure environment. It works seamlessly with many Azure services. Specifically, for the current preview, it provides the following integration features:
-* Built-in support for [Azure Monitor](/azure/azure-monitor/) and [Azure Data Explorer](/azure/data-explorer/)
+* Built-in support for [Azure Monitor](../azure-monitor/index.yml) and [Azure Data Explorer](/azure/data-explorer/)
* User authentication and access control using Azure Active Directory identities * Direct import of existing charts from Azure portal
You can create dashboards instantaneously by importing existing charts directly
## Next steps > [!div class="nextstepaction"]
-> [Create a workspace in Azure Managed Grafana Preview using the Azure portal](./quickstart-managed-grafana-portal.md).
+> [Create a workspace in Azure Managed Grafana Preview using the Azure portal](./quickstart-managed-grafana-portal.md).
managed-instance-apache-cassandra Visualize Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/visualize-prometheus-grafana.md
The following tasks are required to visualize metrics:
* Install the [Prometheus Dashboards](https://github.com/datastax/metric-collector-for-apache-cassandra#installing-the-prometheus-dashboards) onto the VM. >[!WARNING]
-> Prometheus and Grafana are open-source software and not supported as part of the Azure Managed Instance for Apache Cassandra service. Visualizing metrics in the way described below will require you to host and maintain a virtual machine as the server for both Prometheus and Grafana. The instructions below were tested only on Ubuntu Server 18.04; there is no guarantee that they will work with other Linux distributions. Following this approach will entail supporting any issues that may arise, such as running out of space, or availability of the server. For a fully supported and hosted metrics experience, consider using [Azure Monitor metrics](monitor-clusters.md#azure-metrics), or alternatively [Azure Monitor partner integrations](/azure/azure-monitor/partners).
+> Prometheus and Grafana are open-source software and not supported as part of the Azure Managed Instance for Apache Cassandra service. Visualizing metrics in the way described below will require you to host and maintain a virtual machine as the server for both Prometheus and Grafana. The instructions below were tested only on Ubuntu Server 18.04; there is no guarantee that they will work with other Linux distributions. Following this approach will entail supporting any issues that may arise, such as running out of space, or availability of the server. For a fully supported and hosted metrics experience, consider using [Azure Monitor metrics](monitor-clusters.md#azure-metrics), or alternatively [Azure Monitor partner integrations](../azure-monitor/partners.md).
## Deploy an Ubuntu server
The following tasks are required to visualize metrics:
In this article, you learned how to configure dashboards to visualize metrics in Prometheus using Grafana. Learn more about Azure Managed Instance for Apache Cassandra with the following articles: * [Overview of Azure Managed Instance for Apache Cassandra](introduction.md)
-* [Deploy a Managed Apache Spark Cluster with Azure Databricks](deploy-cluster-databricks.md)
+* [Deploy a Managed Apache Spark Cluster with Azure Databricks](deploy-cluster-databricks.md)
marketplace Azure Private Plan Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-private-plan-troubleshooting.md
While troubleshooting the Azure Subscription Hierarchy, keep these things in min
## Troubleshooting Checklist -- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs use the [Azure Subscription ID.](/azure/azure-portal/get-subscription-tenant-id)
+- ISV to ensure the SaaS private plan is using the correct tenant ID for the customer - [How to find your Azure Active Directory tenant ID](../active-directory/fundamentals/active-directory-how-to-find-tenant.md). For VMs use the [Azure Subscription ID.](../azure-portal/get-subscription-tenant-id.md)
- ISV to ensure that the Customer is not buying through a CSP. Private Plans are not available on a CSP-managed subscription. - Customer to ensure customer is logging in with an email ID that is registered under the same tenant ID (use the same user ID they used in step #1 above) - ISV to ask the customer to find the Private Plan in Azure Marketplace: [Private plans in Azure Marketplace](/marketplace/private-plans)
While troubleshooting the Azure Subscription Hierarchy, keep these things in min
## Next steps -- [Create an Azure Support Request](../azure-portal/supportability/how-to-create-azure-support-request.md)
+- [Create an Azure Support Request](../azure-portal/supportability/how-to-create-azure-support-request.md)
marketplace Azure Vm Plan Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-technical-configuration.md
Some common reasons for reusing the technical configuration settings from anothe
- Your solution behaves differently based on the plan the user chooses to deploy. For example, the software is the same, but features vary by plan. > [!NOTE]
-> If you would like to use a public plan to create a private plan with a different price, consider creating a private offer instead of reusing the technical configuration. Learn more about [the difference between private plans and private offers](/azure/marketplace/isv-customer-faq). Learn more about [how to create a private offer](/azure/marketplace/isv-customer).
+> If you would like to use a public plan to create a private plan with a different price, consider creating a private offer instead of reusing the technical configuration. Learn more about [the difference between private plans and private offers](./isv-customer-faq.yml). Learn more about [how to create a private offer](./isv-customer.md).
Leverage [Azure Instance Metadata Service](../virtual-machines/windows/instance-metadata-service.md) (IMDS) to identify which plan your solution is deployed within, so that you can validate the license or enable the appropriate features.
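For example, a short Python sketch (run from inside the deployed VM) that reads the plan details from IMDS; the fields shown are the standard `compute.plan` properties:

```python
import requests

# IMDS is reachable only from inside the VM at this non-routable address.
IMDS_PLAN_URL = "http://169.254.169.254/metadata/instance/compute/plan"

plan = requests.get(
    IMDS_PLAN_URL,
    headers={"Metadata": "true"},  # required header for IMDS requests
    params={"api-version": "2021-02-01", "format": "json"},
    timeout=5,
).json()

# 'publisher', 'product' (offer ID), and 'name' (plan ID) identify the marketplace plan.
print(plan.get("publisher"), plan.get("product"), plan.get("name"))
```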
Here is a list of properties that can be selected for your VM. Enable the proper
- Python version 2.6 or later
- For more information, see [VM Extension](/azure/marketplace/azure-vm-certification-faq).
+   For more information, see [VM Extension](./azure-vm-certification-faq.yml).
-- **Supports backup**: Enable this property if your images support Azure VM backup. Learn more about [Azure VM backup](/azure/backup/backup-azure-vms-introduction).
+- **Supports backup**: Enable this property if your images support Azure VM backup. Learn more about [Azure VM backup](../backup/backup-azure-vms-introduction.md).
-- **Supports accelerated networking**: The VM images in this plan support single root I/O virtualization (SR-IOV) to a VM, enabling low latency and high throughput on the network interface. Learn more about [accelerated networking for Linux](/azure/virtual-network/create-vm-accelerated-networking-cli). Learn more about [accelerated networking for Windows](/azure/virtual-network/create-vm-accelerated-networking-powershell).
+- **Supports accelerated networking**: The VM images in this plan support single root I/O virtualization (SR-IOV) to a VM, enabling low latency and high throughput on the network interface. Learn more about [accelerated networking for Linux](../virtual-network/create-vm-accelerated-networking-cli.md). Learn more about [accelerated networking for Windows](../virtual-network/create-vm-accelerated-networking-powershell.md).
- **Is a network virtual appliance**: A network virtual appliance is a product that performs one or more network functions, such as a Load Balancer, VPN Gateway, Firewall or Application Gateway. Learn more about [network virtual appliances](https://go.microsoft.com/fwlink/?linkid=2155373). - **Supports NVMe** - Enable this property if the images in this plan support NVMe disk interface. The NVMe interface offers higher and consistent IOPS and bandwidth relative to legacy SCSI interface. -- **Supports cloud-init configuration**: Enable this property if the images in this plan support cloud-init post deployment scripts. Learn more about [cloud-init configuration](/azure/virtual-machines/linux/using-cloud-init).
+- **Supports cloud-init configuration**: Enable this property if the images in this plan support cloud-init post deployment scripts. Learn more about [cloud-init configuration](../virtual-machines/linux/using-cloud-init.md).
- **Supports hibernation** – The images in this plan support hibernation/resume. - **Remote desktop/SSH not supported**: Enable this property if any of the following conditions are true:
  - Virtual machines deployed with these images don't allow customers to access it using Remote Desktop or SSH. Learn more about [locked VM images](/azure/marketplace/azure-vm-certification-faq#locked-down-or-ssh-disabled-offer.md). Images that are published with either SSH disabled (for Linux) or RDP disabled (for Windows) are treated as Locked down VMs. There are special business scenarios to restrict access to users. During validation checks, Locked down VMs might not allow execution of certain certification commands.
+  - Virtual machines deployed with these images don't allow customers to access it using Remote Desktop or SSH. Learn more about [locked VM images](./azure-vm-certification-faq.yml#locked-down-or-ssh-disabled-offer). Images that are published with either SSH disabled (for Linux) or RDP disabled (for Windows) are treated as Locked down VMs. There are special business scenarios to restrict access to users. During validation checks, Locked down VMs might not allow execution of certain certification commands.
- Image does not support sampleuser while deploying. - Image has limited access.
Below are examples (non-exhaustive) that might require custom templates for depl
## Image types
-The generation of a virtual machine defines the virtual hardware it uses. Based on your customer's needs, you can publish a Generation 1 VM, Generation 2 VM, or both. To learn more about the differences between Generation 1 and Generation 2 capabilities, see [Support for generation 2 VMs on Azure](/azure/virtual-machines/generation-2).
+The generation of a virtual machine defines the virtual hardware it uses. Based on your customer's needs, you can publish a Generation 1 VM, Generation 2 VM, or both. To learn more about the differences between Generation 1 and Generation 2 capabilities, see [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
When creating a new plan, select an Image type from the drop-down menu. You can choose either X64 Gen 1 or X64 Gen 2. To add another image type to a plan, select **+Add image type**. You will need to provide a SKU ID for each new image type that is added. > [!NOTE]
-> A published generation requires at least one image version to remain available for customers. To remove the entire plan (along with all its generations and images), select **Deprecate plan** on the **Plan Overview** page. Learn more about [deprecating plans](/azure/marketplace/deprecate-vm).
+> A published generation requires at least one image version to remain available for customers. To remove the entire plan (along with all its generations and images), select **Deprecate plan** on the **Plan Overview** page. Learn more about [deprecating plans](./deprecate-vm.md).
> ## VM images
To add a new image version, click **+Add VM image**. This will open a panel in w
Keep in mind the following when publishing VM images: 1. Provide only one new VM image per image type in a given submission.
-2. After an image has been published, you can't edit it, but you can deprecate it. Deprecating a version prevents both new and existing users from deploying a new instance of the deprecated version. Learn more about [deprecating VM images](/azure/marketplace/deprecate-vm).
+2. After an image has been published, you can't edit it, but you can deprecate it. Deprecating a version prevents both new and existing users from deploying a new instance of the deprecated version. Learn more about [deprecating VM images](./deprecate-vm.md).
3. You can add up to 16 data disks for each VM image provided. Regardless of which operating system you use, add only the minimum number of data disks that the solution requires. During deployment, customers can't remove disks that are part of an image, but they can always add disks during or after deployment. > [!NOTE]
marketplace Dynamics 365 Customer Engage Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-availability.md
Previously updated : 12/03/2021 Last updated : 05/25/2022 # Configure Dynamics 365 apps on Dataverse and Power Apps offer availability
This page lets you define where and how to make your offer available, including
To specify the markets in which your offer should be available, select **Edit markets**.
+> [!NOTE]
+> If you choose to sell through Microsoft and have Microsoft host transactions on your behalf, then the **Markets** section is not available on this page. In this case, you'll configure the markets later when you create plans for the offer. If the **Markets** section isn't shown, go to [Preview audience](#preview-audience).
+ On the **Market selection** popup window, select at least one market. Choose **Select all** to make your offer available in every possible market or select only the specific markets you want. When you're finished, select **Save**. Your selections here apply only to new acquisitions; if someone already has your app in a certain market, and you later remove that market, the people who already have the offer in that market can continue to use it, but no new customers in that market will be able to get your offer.
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
Previously updated : 04/18/2022 Last updated : 05/25/2022 # Create a Dynamics 365 apps on Dataverse and Power Apps offer
-This article describes how to create a Dynamics 365 apps on Dataverse and Power Apps offer. All offers for Dynamics 365 go through our certification process. The trial experience allows users to deploy your solution to a live Dynamics 365 environment.
-
-Before you start, create a commercial marketplace account in [Partner Center](./create-account.md) and ensure it is enrolled in the commercial marketplace program.
+This article describes how to create a _Dynamics 365 apps on Dataverse and Power Apps_ offer. Before you start, create a commercial marketplace account in [Partner Center](./create-account.md) and ensure it is enrolled in the commercial marketplace program.
## Before you begin
Enter a descriptive name that we'll use to refer to this offer solely within Par
## Setup details
-For **How do you want potential customers to interact with this listing offer?**, select the option you want to use for this offer:
+1. On the _Offer setup_ page, choose one of the following options:
-- **Enable app license management through Microsoft** – Manage your app licenses through Microsoft. To let customers run your app's base functionality without a license and run premium features after they've purchased a license, select the **Allow customers to install my app even if licenses are not assigned box**. If you select this second box, you need to configure your solution package to not require a license.
+ - Select **Yes** to sell through Microsoft and have Microsoft host transactions on your behalf.
+
+   If you choose this option, the **Enable app license management through Microsoft** check box is enabled and cannot be changed.
- > [!NOTE]
- > You cannot change this setting after you publish your offer. To learn more about this setting, see [ISV app license management](isv-app-license.md).
+ > [!NOTE]
+ > This capability is currently in Public Preview.
-- **Get it now (free)** – List your offer to customers for free.-- **Free trial (listing)** – List your offer to customers with a link to a free trial. Offer listing free trials are created, managed, and configured by your service and do not have subscriptions managed by Microsoft.
+   - Select **No** if you prefer to only list your offer through the marketplace and process transactions independently.
- > [!NOTE]
- > The tokens your application will receive through your trial link can only be used to obtain user information through Azure Active Directory (Azure AD) to automate account creation in your app. Microsoft accounts are not supported for authentication using this token.
+ If you choose this option, you can use the **Enable app license management through Microsoft** check box to choose whether or not to enable app license management through Microsoft. For more information, see [ISV app license management](isv-app-license.md).
+
+1. To let customers run your app's base functionality without a license and run premium features after they've purchased a license, select the **Allow customers to install my app even if licenses are not assigned** box. If you select this second box, you need to configure your solution package to not require a license.
+
+1. If you chose **No** in step 1 and chose not to enable app license management through Microsoft, then you can select one of the following:
+
+   - **Get it now (free)** – List your offer to customers for free.
+   - **Free trial (listing)** – List your offer to customers with a link to a free trial. The trial experience lets users deploy your solution to a live Dynamics 365 environment. Offer listing free trials are created, managed, and configured by your service and do not have subscriptions managed by Microsoft.
+
+ > [!NOTE]
+ > The tokens your application will receive through your trial link can only be used to obtain user information through Azure Active Directory (Azure AD) to automate account creation in your app. Microsoft accounts are not supported for authentication using this token.
-- **Contact me** – Collect customer contact information by connecting your Customer Relationship Management (CRM) system. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and marketplace source where they found your offer, will be sent to the CRM system that you've configured. For more information about configuring your CRM, see [Customer leads](#customer-leads).
+   - **Contact me** – Collect customer contact information by connecting your Customer Relationship Management (CRM) system. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and marketplace source where they found your offer, will be sent to the CRM system that you've configured. For more information about configuring your CRM, see [Customer leads](#customer-leads).
## Test drive
marketplace Dynamics 365 Customer Engage Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-plans.md
Previously updated : 12/03/2021 Last updated : 05/25/2022 # Create Dynamics 365 apps on Dataverse and Power Apps plans
If you enabled app license management for your offer, the **Plans overview** tab
[ ![Screenshot of the Plan overview tab for a Dynamics 365 apps on Dataverse and Power Apps offer that's been enabled for third-party app licensing.](./media/third-party-license/plan-tab-d365-workspaces.png) ](./media/third-party-license/plan-tab-d365-workspaces.png#lightbox)
-You need to define at least one plan, if your offer has app license management enabled. You can create a variety of plans with different options for the same offer. These plans (sometimes referred to as SKUs) can differ in terms of monetization or tiers of service. Later, you will map the Service IDs of these plans in your solution package to enable a runtime license check by the Dynamics platform against these plans. You will map the Service ID of each plan in your solution package. This enables the Dynamics platform to run a license check against these plans.
+You need to define at least one plan if your offer has app license management enabled. You can create a variety of plans with different options for the same offer. These plans (sometimes referred to as SKUs) can differ in terms of monetization or tiers of service. Later, you will map the Service ID of each plan in the metadata of your solution package to enable a runtime license check by the Dynamics platform against these plans (we'll walk you through this process later in this article).
## Create a plan 1. In the left-nav, select **Plan overview**.
-1. Near the top of the **Plan overview** page, select **+ Create new plan**.
+1. Near the top of the page, select **+ Create new plan**.
1. In the dialog box that appears, in the **Plan ID** box, enter a unique plan ID. Use up to 50 lowercase alphanumeric characters, dashes, or underscores. You cannot modify the plan ID after you select **Create**. 1. In the **Plan name** box, enter a unique name for this plan. Use a maximum of 200 characters. 1. Select **Create**.
On the **Plan listing** tab, you can define the plan name and description as you
1. In the **Plan name** box, the name you provided earlier for this plan appears here. You can change it at any time. This name will appear in the commercial marketplace as the title of your offer's software plan. 1. In the **Plan description** box, explain what makes this software plan unique and any differences from other plans within your offer. This description may contain up to 3,000 characters.
-1. Select **Save draft**, and then in the breadcrumb at the top of the page, select **Plans**.
+1. Select **Save draft**.
- [ ![Screenshot shows the Plan overview link on the Plan listing page of an offer in Partner Center.](./media/third-party-license/bronze-plan-workspaces.png) ](./media/third-party-license/bronze-plan-workspaces.png#lightbox)
+## Define pricing and availability
-1. To create another plan for this offer, at the top of the **Plan overview** page, select **+ Create new plan**. Then repeat the steps in the [Create a plan](#create-a-plan) section. Otherwise, if you're done creating plans, go to the next section: Copy the Service IDs.
+If you chose to sell through Microsoft and have Microsoft host transactions on your behalf, then the **Pricing and availability** tab appears in the left-nav. Otherwise, go to [Copy the Service IDs](#copy-the-service-ids).
+
+1. In the left-nav, select **Pricing and availability**.
+1. In the **Markets** section, select **Edit markets**.
+1. On the side panel that appears, select at least one market. To make your offer available in every possible market, choose **Select all** or select only the specific markets you want. When you're finished, select **Save**.
+
+ Your selections here apply only to new acquisitions; if someone already has your app in a certain market, and you later remove that market, the people who already have the offer in that market can continue to use it, but no new customers in that market will be able to get your offer.
+
+ > [!IMPORTANT]
+ > It is your responsibility to meet any local legal requirements, even if those requirements aren't listed here or in Partner Center. Even if you select all markets, local laws, restrictions, or other factors may prevent certain offers from being listed in some countries and regions.
+
+### Configure per user pricing
+
+1. On the **Pricing and availability** tab, under **User limits**, optionally specify the minimum and maximum number of users for this plan.
+ > [!NOTE]
+ > If you choose not to define the user limits, the default value of one to one million users will be used.
+1. Under **Billing term**, specify a monthly price, annual price, or both.
+
+ > [!NOTE]
+ > You must specify a price for your offer, even if the price is zero.
+
+### Enable a free trial
+
+You can optionally configure a free trial for each plan in your offer. To enable a free trial, select the **Allow a one-month free trial** check box.
+
+> [!IMPORTANT]
+> After your transactable offer has been published with a free trial, it cannot be disabled for that plan. Make sure this setting is correct before you publish the offer to avoid having to re-create the plan.
+
+If you select this option, customers are not charged for the first month of use. At the end of the free month, one of the following occurs:
+- If the customer chose recurring billing, they will automatically be upgraded to a paid plan and the selected payment method is charged.
+- If the customer didn't choose recurring billing, the plan will expire at the end of the free trial.
+
+### Choose who can see your plan
+
+You can configure each plan to be visible to everyone or to only a specific audience. You grant access to a private plan using tenant IDs with the option to include a description of each tenant ID you assign. You can add a maximum of 10 tenant IDs manually or up to 20,000 tenant IDs using a .CSV file. A private plan is not the same as a preview audience.
+
+> [!NOTE]
+> If you publish a private plan, you can change its visibility to public later. However, once you publish a public plan, you cannot change its visibility to private.
+
+#### Make your plan public
+
+1. Under **Plan visibility**, select **Public**.
+1. Select **Save draft**, and then go to [View your plans](#view-your-plans).
+
+#### Manually add tenant IDs for a private plan
+
+1. Under **Plan visibility**, select **Private**.
+1. In the **Tenant ID** box that appears, enter the Azure AD tenant ID of the audience you want to grant access to this private plan. A minimum of one tenant ID is required.
+1. (Optional) Enter a description of this audience in the **Description** box.
+1. To add another tenant ID, select **Add ID**, and then repeat steps 2 and 3.
+1. When you're done adding tenant IDs, select **Save draft**, and then go to [View your plans](#view-your-plans).
+
+#### Use a .CSV file for a private plan
+
+1. Under **Plan visibility**, select **Private**.
+1. Select the **Export Audience (csv)** link.
+1. Open the .CSV file and, in the **ID** column, add the Azure IDs you want to grant access to the private plan (a scripted sketch of this file follows this list).
+1. (Optional) Enter a description for each audience in the **Description** column.
+1. Add "TenantID" in the **Type** column for each row that has an Azure ID.
+1. Save the .CSV file.
+1. On the **Pricing and availability** tab, under **Plan visibility**, select the **Import Audience (csv)** link.
+1. In the dialog box that appears, select **Yes**.
+1. Select the .CSV file and then select **Open**.
+1. Select **Save draft**, and then go to the next section: [View your plans](#view-your-plans).
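The following Python sketch shows one possible way to assemble that .CSV file; the tenant IDs and descriptions are placeholders, and you should verify the header row against the file exported from Partner Center:

```python
import csv

# Placeholder tenant IDs and descriptions for the private plan audience.
tenants = [
    ("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", "Contoso"),
    ("11111111-2222-3333-4444-555555555555", "Fabrikam"),
]

with open("private-plan-audience.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Type", "ID", "Description"])   # column names described in the steps above
    for tenant_id, description in tenants:
        writer.writerow(["TenantID", tenant_id, description])
```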
+
+### View your plans
+
+1. In the breadcrumb at the top of the page, select **Plan overview**.
+1. To create another plan for this offer, at the top of the **Plan overview** page, repeat the steps in the [Create a plan](#create-a-plan) section. Otherwise, if you're done creating plans, go to the next section: Copy the Service IDs.
## Copy the Service IDs You need to copy the Service ID of each plan you created so you can map them to your solution package in the next section: Add Service IDs to your solution package. -- For each plan you created, copy the Service ID to a safe place. You'll add them to your solution package in the next step. The service ID is listed on the **Plan overview** page in the form of `ISV name.offer name.plan ID`. For example, Fabrikam.F365.bronze.
+1. To go to the **Plan overview** page, in the breadcrumb at the top of the page, select **Plan overview**. If you don't see the breadcrumb, select **Plan overview** in the left-nav.
+
+1. For each plan you created, copy the Service ID to a safe place. You'll add them to your solution package in the next section. The service ID is listed on the **Plan overview** page in the form of `ISV name.offer name.plan ID`. For example, fabrikam.f365.bronze.
[ ![Screenshot of the Plan overview page. The service ID for the plan is highlighted.](./media/third-party-license/service-id-workspaces.png) ](./media/third-party-license/service-id-workspaces.png#lightbox) ## Add Service IDs to your solution package
-1. Add the Service IDs you copied in the previous step to your solution package. To learn how, see [Add licensing information to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution) and [Create an AppSource package for your app](/powerapps/developer/data-platform/create-package-app-appsource).
-1. After you create the CRM package .zip file, upload it to Azure Blob Storage. You will need to provide the SAS URL of the Azure Blob Storage account that contains the uploaded CRM package .zip file.
+1. Add the Service IDs you copied in the previous step to the metadata of your solution package. To learn how, see [Add licensing information to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution) and [Create an AppSource package for your app](/powerapps/developer/data-platform/create-package-app-appsource).
+1. After you create the CRM package .zip file, upload it to [Azure Blob Storage](/power-apps/developer/data-platform/store-appsource-package-azure-storage). You will need to provide the SAS URL of the Azure Blob Storage account that contains the uploaded CRM package .zip file when you configure the offer's technical configuration.
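One possible way to script that upload with the `azure-storage-blob` package (a sketch, not prescribed by the article; the account, container, and file names are placeholders, and the container is assumed to already exist):

```python
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, BlobServiceClient, generate_blob_sas

account_name = "mystorageaccount"       # placeholder
account_key = "<storage-account-key>"   # placeholder
container_name = "packages"             # placeholder; assumed to exist
blob_name = "crm-package.zip"

service = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",
    credential=account_key,
)

# Upload the CRM package .zip to the container.
with open("crm-package.zip", "rb") as data:
    service.get_blob_client(container_name, blob_name).upload_blob(data, overwrite=True)

# Generate a read-only SAS token and print the full SAS URL to paste into the
# offer's technical configuration.
sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(days=30),
)
print(f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}")
```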
## Next steps
marketplace Dynamics 365 Review Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-review-publish.md
Previously updated : 09/27/2021 Last updated : 05/25/2022 # Review and publish a Dynamics 365 offer
-This article shows you how to use Partner Center to preview your draft Dynamics 365 offer and then publish it to the commercial marketplace. It also covers how to check publishing status as it proceeds through the publishing steps.
+This article shows you how to use Partner Center to submit your Dynamics 365 offer for publishing, preview your offer, subscribe to a plan, and then publish it live to the commercial marketplace. It also covers how to check the publishing status as it proceeds through the publishing steps. You must have already created the offer that you want to publish.
-## Offer status
+## Submit your offer to publishing
-You can review your offer status on the **Overview** tab of the commercial marketplace dashboard in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview). The **Status** of each offer will be one of the following:
+1. Return to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
+1. On the Home page, select the **Marketplace offers** tile.
+1. In the **Offer alias** column, select the offer you want to publish.
+1. In the upper-right corner of the portal, select **Review and publish**.
+1. Make sure that the **Status** column for each page of the offer says **Complete**. The three possible statuses are as follows:
+
+   - **Not started** – The page is incomplete.
+   - **Incomplete** – The page is missing required information or has errors that need to be fixed. You'll need to go back to the page and update it.
+   - **Complete** – The page is complete. All required data has been provided and there are no errors.
+
+1. If any of the pages have a status other than **Complete**, select the page name, correct the issue, save the page, and then select **Review and publish** again to return to this page.
+1. Some offer types require testing. After all of the pages are complete, if you see a **Notes for certification** box, provide testing instructions to the certification team to ensure that your app is tested correctly. Provide any supplementary notes helpful for understanding your app.
+1. To start the publishing process for your offer, select **Publish**. The **Offer overview** page appears and shows the offer's **Publish status**.
+
+## Publish status
+
+Your offer's publish status will change as it moves through the publication process. You can review your offer status on the **Overview** tab of the commercial marketplace offer in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview). The **Status** of each offer will be one of the following:
| Status | Description |
-| | - |
+| | |
| Draft | Offer has been created but it isn't being published. | | Publish in progress | Offer is working its way through the publishing process. | | Attention needed | We discovered a critical issue during certification or during another publishing phase. | | Preview | We certified the offer, which now awaits a final verification by the publisher. Select **Go live** to publish the offer live. | | Live | Offer is live in the marketplace and can be seen and acquired by customers. |
-| Pending stop sell | Publisher selected "stop sell" on an offer or plan, but the action has not yet been completed. |
+| Pending stop distribution | Publisher selected "stop distribution" on an offer or plan, but the action has not yet been completed. |
| Not available in the marketplace | A previously published offer in the marketplace has been removed. |
-## Validation and publishing steps
+## Preview and subscribe to the offer
-Your offer's publish status will change as it moves through the publication process. For detailed information on this process, see [Validation and publishing steps](review-publish-offer.md#validation-and-publishing-steps).
+When the offer is ready for you to test in the preview environment, we'll send you an email to request that you review and approve your offer preview. You can also refresh the **Offer overview** page in your browser to see if your offer has reached the Publisher sign-off phase. If it has, the **Go live** button and preview link will be available. If you chose to sell your offer through Microsoft, anyone who has been added to the preview audience can test the acquisition and deployment of your offer to ensure it meets your requirements during this stage.
-When you are ready to submit an offer for publishing, select **Review and publish** at the upper-right corner of the portal. You'll see the status of each page for your offer listed as one of the following:
+The following screenshot shows the **Offer overview** page for a _Dynamics 365 apps on Dataverse and Power Apps_ offer, with a preview link under the **Go live** button. The validation steps you'll see on this page vary depending on the selections you made when you created the offer.
-- **Not started** – The page is incomplete.-- **Incomplete** – The page is missing required information or has errors that need to be fixed. You'll need to go back to the page and update it.-- **Complete** – The page is complete. All required data has been provided and there are no errors.
+- To preview your offer, select the _preview link_ under the **Go live** button. This takes you to the product details page on AppSource, where you can validate that all the details of the offer are showing correctly.
-If any of the pages have a status other than **Complete**, you need to correct the issue on that page and then return to the **Review and publish** page to confirm the status now shows as **Complete**. Some offer types require testing. If so, you will see a **Notes for certification** field where you need to provide testing instructions to the certification team and any supplementary notes helpful for understanding your app.
+ [ ![Illustrates the preview link on the Offer overview page.](./media/dynamics-365/preview-link.png) ](./media/dynamics-365/preview-link.png#lightbox)
-After all pages are complete and you have entered applicable testing notes, select **Publish** to submit your offer. We will email you when a preview version of your offer is available to approve. At that time complete the following steps:
+> [!IMPORTANT]
+> To validate the end-to-end purchase and setup flow, purchase your offer while it is in Preview. First notify Microsoft with a support ticket to ensure we are aware that you're testing the offer. Otherwise, the customer account used for the purchase will be billed and invoiced. Publisher Payout will occur when the criteria are met and will be paid out per the payout schedule with the agency fee deducted from the purchase price.
-1. Return to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
-1. On the Home page, select the **Marketplace offers** tile.
+If your offer is a _Contact Me_ listing, test that a lead is created as expected by providing the Contact Me details during preview.
+
+## Test the offer in AppSource
+
+1. From the _Product details_ page of the offer, select the **Buy Now** button.
+1. Select the plan you want to purchase and then select **Next**.
+1. Select the billing term, recurring billing term, and number of users.
+1. On the Payment page, enter the sold-to address and payment method.
+1. To place the order, select the **Place order** button.
+1. Once the order is placed, you can select the **Assign licenses** button to go to the [Microsoft 365 admin center](https://admin.microsoft.com/) to assign licenses to users.
+
+## Go live
- [ ![Illustrates the Marketplace offers tile on the Partner Center Home page.](./media/workspaces/partner-center-home.png) ](./media/workspaces/partner-center-home.png#lightbox)
+After you complete your tests, you can publish the offer live to the commercial marketplace.
+1. Return to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
+1. On the Home page, select the **Marketplace offers** tile.
1. On the Marketplace offers page, select the offer.
-1. Select **Review and publish**.
1. Select **Go live** to make your offer publicly available.
-After you select **Review and publish**, we will perform certification and other verification processes before your offer is published to AppSource. We will notify you when your offer is available in preview so you can go live. If there is an issue, we will notify you with the details and provide guidance on how to fix it.
+All offers for Dynamics 365 go through our certification process. Now that you've chosen to make your offer available in the commercial marketplace, we will perform certification and other verification processes before your offer is published to AppSource. If there is an issue, we will notify you with the details and provide guidance on how to fix it.
+
+After these validation checks are complete, your offer will be live in the marketplace.
## Next steps -- If you enabled _Third-party app license management through Microsoft_ for your offer, after you sell your offer, you'll need to register the deal in Partner Center. To learn more, see [Managing licensing in marketplace offers](/partner-center/csp-commercial-marketplace-licensing).
+- If you enabled _Third-party app license management through Microsoft_ for your offer, after you sell your offer, you'll need to register the deal in Partner Center. To learn more, see [Register deals you've won in Partner Center](/partner-center/register-deals).
- [Update an existing offer in the Commercial Marketplace](update-existing-offer.md)
marketplace Isv App License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-app-license.md
Previously updated : 12/03/2021 Last updated : 05/25/2022 # ISV app license management
Applies to the following offer type:
- Dynamics 365 apps on Dataverse and Power Apps
-_ISV app license management_ enables independent software vendors (ISVs) who build solutions using Dynamics 365 suite of products to manage and enforce licenses for their solutions using systems provided by Microsoft. By adopting this approach you can:
+_ISV app license management_ enables independent software vendors (ISVs) who build solutions using Dynamics 365 suite of products to manage and enforce licenses for their solutions using systems provided by Microsoft. By adopting license management, ISVs can:
-- Enable your customers to assign and unassign your solutionΓÇÖs licenses using familiar tools such as Microsoft 365 Admin Center, which they use to manage Office and Dynamics licenses.-- Have the Power Platform enforce your licenses at runtime to ensure that only licensed users can access your solution.
+- Enable your customers to assign and unassign licenses of ISV products using familiar tools such as Microsoft 365 Admin Center, which customers use to manage Office and Dynamics licenses.
+- Have the Power Platform enforce ISV product licenses at runtime to ensure that only licensed users can access your solution.
- Save yourself the effort of building and maintaining your own license management and enforcement system. -
-> [!NOTE]
-> ISV app license management is only available to ISVs participating in the ISV Connect program. Microsoft is not involved in the sale of licenses.
+ISV app license management currently supports:
+- A named user license model. Each license must be assigned to an Azure AD user or Azure AD security group.
+- [Enforcement for model-driven apps](/power-apps/maker/model-driven-apps/model-driven-app-overview).
## Prerequisites
To manage your ISV app licenses, you need to comply with the following pre-requi
## High-level process
-This table illustrates the high-level process to manage ISV app licenses:
+The process varies depending on whether Microsoft hosts transactions on your behalf (also known as a _transactable offer_) or you only list the offer through the marketplace and host transactions independently.
+
+These steps illustrate the high-level process to manage ISV app licenses:
+
+### Step 1: Create an offer
+
+| Transactable offers | Licensable-only offers |
+| | - |
+| The ISV [creates an offer in Partner Center](dynamics-365-customer-engage-offer-setup.md) and chooses to transact through Microsoft's commerce system and enable Microsoft to manage the licenses of these add-ons. The ISV also defines at least one plan and configures pricing information and availability. The ISV can optionally define a private plan which only specific customers can see and purchase on [Microsoft AppSource](https://appsource.microsoft.com/). | The ISV [creates an offer in Partner Center](dynamics-365-customer-engage-offer-setup.md) and chooses to manage licenses for this offer through Microsoft. This includes defining one or more licensing plans for the offer. |
+
+### Step 2: Add license metadata to solution package
+
+The ISV creates a solution package for the offer that includes license plan information as metadata and uploads it to Partner Center for publication to Microsoft AppSource. To learn more, see [Adding license metadata to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution).
+
+### Step 3: Purchase subscription to ISV products
+
+| Transactable offers | Licensable-only offers |
+| | - |
+| Customers discover the ISV's offer in AppSource, purchase a subscription to the offer from AppSource, and get licenses for the ISV app. | - Customers discover the ISV's offer in AppSource or directly on the ISV's website. Customers purchase licenses for the plans they want directly from the ISV.<br>- The ISV registers the purchase with Microsoft in Partner Center. As part of [deal registration](/partner-center/csp-commercial-marketplace-licensing#register-isv-connect-deal-in-deal-registration), the ISV will specify the type and quantity of each licensing plan purchased by the customer. |
+
+### Step 4: Manage subscription
-| Step | Details |
+| Transactable offers | Licensable-only offers |
| | - |
-| Step 1: Create offer | The ISV creates an offer in Partner Center and chooses to manage licenses for this offer through Microsoft. This includes defining one or more licensing plans for the offer. For more information, see [Create a Dynamics 365 apps on Dataverse and Power Apps offer on Microsoft AppSource](dynamics-365-customer-engage-offer-setup.md). |
-| Step 2: Update package | The ISV creates a solution package for the offer that includes license plan information as metadata, and uploads it to Partner Center for publication to Microsoft AppSource. To learn more, see [Adding license metadata to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution). |
-| Step 3: Purchase licenses | Customers discover the ISV's offer in AppSource or directly on the ISV's website. Customers purchase licenses for the plans they want directly from the ISV (these offers are not purchasable through AppSource at this time). |
-| Step 4: Register deal | The ISV registers the purchase with Microsoft in Partner Center. As part of [deal registration](/partner-center/csp-commercial-marketplace-licensing#register-isv-connect-deal-in-deal-registration), the ISV will specify the type and quantity of each licensing plan purchased by the customer. |
-| Step 5: Manage licenses | The license plans will appear in Microsoft 365 Admin Center for the customer to [assign to users or groups](/microsoft-365/commerce/licenses/manage-third-party-app-licenses) in their organization. The customer can also install the application in their tenant via the Power Platform Admin Center. |
-| Step 6: Perform license check | When a user within the customer's organization tries to run an application, Microsoft checks to ensure that user has a license before permitting them to run it. If they don't have a license, the user sees a message explaining that they need to contact an administrator for a license. |
-| Step 7: View reports | ISVs can view information on provisioned and assigned licenses over a period of time and by geography. |
+| Customers can manage subscriptions for the apps they purchased in [Microsoft 365 admin center](https://admin.microsoft.com/), just like they normally do for any of their Microsoft Office or Dynamics subscriptions. | ISVs activate and manage deals in Partner Center ([deal registration portal](https://partner.microsoft.com/)). |
+
+### Step 5: Assign licenses
+
+Customers can assign licenses for these add-ons to users or groups on the license pages under the billing node in the [Microsoft 365 admin center](https://admin.microsoft.com/). Doing so enables those users to launch the ISV app. Customers can also install the app from the [Microsoft 365 admin center](https://admin.microsoft.com/) into their Power Platform environment.
+
+**Licensable-only offers:**
+- The license plans will appear in Microsoft 365 Admin Center for the customer to [assign to users or groups](/microsoft-365/commerce/licenses/manage-third-party-app-licenses) in their organization. The customer can also install the application in their tenant via the Power Platform Admin Center.
+
+### Step 6: Power Platform performs license checks
+
+When a user within the customer's organization tries to run an application, Microsoft checks to ensure that the user has a license before permitting them to run it. If they do not have a license, the user sees a message explaining that they need to contact an administrator for a license.
+
+### Step 7: View reports
+
+ISVs can view information on:
+- Orders purchased, renewed, or cancelled over time and by geography.
+
+- Provisioned and assigned licenses over a period of time and by geography.
## Enabling app license management through Microsoft
Here's how it works:
- After you select the **Enable app license management through Microsoft** box, you can define licensing plans for your offer. - Customers will see a **Get it now** button on the offer listing page in AppSource. Customers can select this button to contact you to purchase licenses for the app.
+> [!NOTE]
+> This check box is automatically enabled if you choose to sell your offer through Microsoft and have Microsoft host transactions on your behalf.
+ ### Allow customers to install my app even if licenses are not assigned check box
-After you select the first box, the **Allow customers to install my app even if licenses are not assigned** box appears. This option is useful if you are employing a "freemium" licensing strategy whereby you want to offer some basic features of your solution for free to all users and charge for premium features. Conversely, if you want to ensure that only tenants who currently own licenses for your product can download it from AppSource, then don't select this option.
+If you choose to list your offer through the marketplace and process transactions independently, after you select the first box, the **Allow customers to install my app even if licenses are not assigned** box appears. This option is useful if you are employing a "freemium" licensing strategy whereby you want to offer some basic features of your solution for free to all users and charge for premium features. Conversely, if you want to ensure that only tenants who currently own licenses for your product can download it from AppSource, then don't select this option.
> [!NOTE] > If you choose this option, you need to configure your solution package to not require a license.
After your offer is published, the options you chose will drive which buttons ap
:::image type="content" source="./media/third-party-license/f365.png" alt-text="Screenshot of an offer listing page on AppSource. The Get it now and Contact me buttons are shown.":::
-***Figure 1: Offer listing page on Microsoft AppSource***
- ## Next steps - [Plan a Dynamics 365 offer](marketplace-dynamics-365.md)-- [How to create a Dynamics 365 apps on Dataverse and Power Apps offer](dynamics-365-customer-engage-offer-setup.md)
+- [Create a Dynamics 365 apps on Dataverse and Power Apps offer](dynamics-365-customer-engage-offer-setup.md)
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-dynamics-365.md
Previously updated : 04/13/2022 Last updated : 05/25/2022 # Plan a Microsoft Dynamics 365 offer
This article explains the different options and features of a Dynamics 365 offer
Before you start, create a commercial marketplace account in [Partner Center](./create-account.md) and ensure it is enrolled in the commercial marketplace program. Also, review the [publishing process and guidelines](/office/dev/store/submit-to-appsource-via-partner-center).
-## Licensing options
+## Listing options
-As you prepare to publish a new offer, you need to decide which licensing option to choose. This will determine what additional information you'll need to provide later as you create the offer in Partner Center.
+As you prepare to publish a new offer, you need to decide which listing option to choose. This will determine what additional information you'll need to provide later as you create the offer in Partner Center.
-These are the available licensing options for Dynamics 365 offer types:
+These are the available listing options for the _Dynamics 365 apps on Dataverse and Power Apps_ offer type:
| Offer type | Listing option | | | |
These are the available licensing options for Dynamics 365 offer types:
The following table describes the transaction process of each listing option.
-| Licensing option | Transaction process |
+| Listing option | Transaction process |
| | |
+| Transact with license management | You can choose to sell through Microsoft and have Microsoft host transactions on your behalf. For more information about this option, see [ISV app license management](isv-app-license.md).<br>Currently available to the following offer type only:<ul><li>Dynamics 365 apps on Dataverse and Power Apps</li></ul> |
+| License management | Enables you to manage your ISV app licenses in Partner Center. For more information about this option, see [ISV app license management](isv-app-license.md).<br>Currently available to the following offer type only:<ul><li>Dynamics 365 apps on Dataverse and Power Apps</li></ul> |
| Contact me | Collect customer contact information by connecting your Customer Relationship Management (CRM) system. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and marketplace source where they found your offer, will be sent to the CRM system that you've configured. For more information about configuring your CRM, see the **Customer leads** section of your offer type's **Offer setup** page. | | Free trial (listing) | Offer your customers a one-, three- or six-month free trial. Offer listing free trials are created, managed, and configured by your service and do not have subscriptions managed by Microsoft. | | Get it now (free) | List your offer to customers for free. |
-| Get it now | Enables you to manage your ISV app licenses in Partner Center.<br>Currently available to the following offer type only:<ul><li>Dynamics 365 apps on Dataverse and Power Apps</li></ul><br>For more information about this option, see [ISV app license management](isv-app-license.md). |
## Test drive
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-new.md
Learn about important updates in the commercial marketplace program of Partner C
| Payouts | We updated the payment schedule for [Payout schedules and processes](/partner-center/payout-policy-details). | 2022-01-19 | | Analytics | Added questions and answers to the [Commercial marketplace analytics FAQ](./analytics-faq.yml), such as enrolling in the commercial marketplace, where to create a marketplace offer, getting started with programmatic access to commercial marketplace analytics reports, and more. | 2022-01-07 | | Offers | Added a new article, [Troubleshooting Private Plans in the commercial marketplace](azure-private-plan-troubleshooting.md). | 2021-12-13 |
-| Offers | We have updated the names of [Dynamics 365](./marketplace-dynamics-365.md#licensing-options) offer types:<br><br>-Dynamics 365 for Customer Engagement &amp; PowerApps is now **Dynamics 365 apps on Dataverse and Power Apps**<br>- Dynamics 365 for operations is now **Dynamics 365 Operations Apps**<br>- Dynamics 365 business central is now **Dynamics 365 Business Central** | 2021-12-03 |
+| Offers | We have updated the names of [Dynamics 365](./marketplace-dynamics-365.md#listing-options) offer types:<br><br>-Dynamics 365 for Customer Engagement &amp; PowerApps is now **Dynamics 365 apps on Dataverse and Power Apps**<br>- Dynamics 365 for operations is now **Dynamics 365 Operations Apps**<br>- Dynamics 365 business central is now **Dynamics 365 Business Central** | 2021-12-03 |
| Policy | We've created an [FAQ topic](/legal/marketplace/mpa-faq) to answer publisher questions about the Microsoft Publisher Agreement. | 2021-09-27 | | Policy | We've updated the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). For change history, see [Microsoft Publisher Agreement Version 8.0 – October 2021 Update](/legal/marketplace/mpa-change-history-oct-2021). | 2021-09-14 | | Policy | Updated [certification](/legal/marketplace/certification-policies) policy for September; see [change history](/legal/marketplace/offer-policies-change-history). | 2021-09-10 |
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
- **Announcing Azure Database for MySQL - Flexible Server for business-critical workloads** The Azure Database for MySQL - Flexible Server Business Critical service tier is now generally available. The Business Critical service tier is ideal for Tier 1 production workloads that require low latency, high concurrency, fast failover, and high scalability, such as gaming, e-commerce, and Internet-scale applications. To learn more, see [Business Critical service tier](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/announcing-azure-database-for-mysql-flexible-server-for-business/ba-p/3361718).
-**Announcing the addition of new Burstable compute instances for Azure Database for MySQL - Flexible Server**
+- **Announcing the addition of new Burstable compute instances for Azure Database for MySQL - Flexible Server**
We are announcing the addition of new Burstable compute instances to support customers' auto-scaling compute requirements from 1 vCore up to 20 vCores. Learn more about [compute options for Azure Database for MySQL - Flexible Server](https://docs.microsoft.com/azure/mysql/flexible-server/concepts-compute-storage). ## April 2022
mysql 10 Post Migration Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/mysql-on-premises-azure-db/10-post-migration-management.md
Last updated 06/21/2021
Once the migration has been successfully completed, the next phase is to manage the new cloud-based data workload resources. Management operations include both control plane and data plane activities. Control plane activities are those related to the Azure resources, whereas data plane activities occur **inside** the Azure resource (in this case MySQL).
-Azure Database for MySQL provides for the ability to monitor both of these types of operational activities using Azure-based tools such as [Azure Monitor,](../../../azure-monitor/overview.md) [Log Analytics](../../../azure-monitor/logs/design-logs-deployment.md) and [Microsoft Sentinel](../../../sentinel/overview.md). In addition to the Azure-based tools, security information and event management (SIEM) systems can be configured to consume these logs as well.
+Azure Database for MySQL provides the ability to monitor both of these types of operational activities using Azure-based tools such as [Azure Monitor](../../../azure-monitor/overview.md), [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md), and [Microsoft Sentinel](../../../sentinel/overview.md). In addition to the Azure-based tools, security information and event management (SIEM) systems can be configured to consume these logs as well.
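As a minimal sketch of wiring this up with the Azure CLI (the server, workspace, and resource group names below are placeholders, and the log category names are an assumption for Azure Database for MySQL single server), a diagnostic setting can route server logs to a Log Analytics workspace:

```azurecli
# Look up the resource IDs for the MySQL server and the Log Analytics workspace (placeholder names).
MYSQL_ID=$(az mysql server show --resource-group myResourceGroup --name mydemoserver --query id -o tsv)
WORKSPACE_ID=$(az monitor log-analytics workspace show --resource-group myResourceGroup --workspace-name myWorkspace --query id -o tsv)

# Route audit and slow query logs to the workspace (category names are assumptions for single server).
az monitor diagnostic-settings create \
  --name mysql-logs-to-law \
  --resource $MYSQL_ID \
  --workspace $WORKSPACE_ID \
  --logs '[{"category":"MySqlAuditLogs","enabled":true},{"category":"MySqlSlowLogs","enabled":true}]'
```

Once the logs land in the workspace, they can be surfaced in Microsoft Sentinel or consumed by an external SIEM.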
Whichever tool is used to monitor the new cloud-based workloads, alerts need to be created to warn Azure and database administrators of any suspicious activity. If a particular alert event has a well-defined remediation path, alerts can fire automated [Azure runbooks](../../../automation/learn/powershell-runbook-managed-identity.md) to address the event.
The [Planned maintenance notification](../../concepts-monitoring.md#planned-main
## WWI scenario
-WWI decided to utilize the Azure Activity logs and enable MySQL logging to flow to a [Log Analytics workspace.](../../../azure-monitor/logs/design-logs-deployment.md) This workspace is configured to be a part of [Microsoft Sentinel](../../../sentinel/index.yml) such that any [Threat Analytics](../../concepts-security.md#threat-protection) events would be surfaced, and incidents created.
+WWI decided to utilize the Azure Activity logs and enable MySQL logging to flow to a [Log Analytics workspace.](../../../azure-monitor/logs/workspace-design.md) This workspace is configured to be a part of [Microsoft Sentinel](../../../sentinel/index.yml) such that any [Threat Analytics](../../concepts-security.md#threat-protection) events would be surfaced, and incidents created.
The MySQL DBAs installed the Azure Database for [MySQL Azure PowerShell cmdlets](../../quickstart-create-mysql-server-database-using-azure-powershell.md) to make managing the MySQL Server automated versus having to log to the Azure portal each time.
mysql 13 Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/mysql-on-premises-azure-db/13-security.md
if a user or application credentials are compromised, logs are not likely to ref
## Audit logging
-MySQL has a robust built-in audit log feature. By default, this [audit log feature is disabled](../../concepts-audit-logs.md) in Azure Database for MySQL. Server level logging can be enabled by changing the `audit\_log\_enabled` server parameter. Once enabled, logs can be accessed through [Azure Monitor](../../../azure-monitor/overview.md) and [Log Analytics](../../../azure-monitor/logs/design-logs-deployment.md) by turning on [diagnostic logging.](../../howto-configure-audit-logs-portal.md#set-up-diagnostic-logs)
+MySQL has a robust built-in audit log feature. By default, this [audit log feature is disabled](../../concepts-audit-logs.md) in Azure Database for MySQL. Server-level logging can be enabled by changing the `audit_log_enabled` server parameter. Once enabled, logs can be accessed through [Azure Monitor](../../../azure-monitor/overview.md) and [Log Analytics](../../../azure-monitor/logs/log-analytics-workspace-overview.md) by turning on [diagnostic logging](../../howto-configure-audit-logs-portal.md#set-up-diagnostic-logs).
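As a minimal sketch with the Azure CLI (placeholder server and resource group names; these commands target Azure Database for MySQL single server):

```azurecli
# Enable the audit log server parameter (placeholder names).
az mysql server configuration set \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name audit_log_enabled \
  --value ON

# Verify the new parameter value.
az mysql server configuration show \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name audit_log_enabled
```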
To query for user connection-related events, run the following KQL query:
mysql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-aks.md
az network nic list --resource-group nodeResourceGroup -o table
## Next steps
-Create an AKS cluster [using the Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli), [using Azure PowerShell](/azure/aks/learn/quick-kubernetes-deploy-powershell), or [using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal).
+Create an AKS cluster [using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md).
mysql Single Server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/single-server-overview.md
Single Server is available in three SKU tiers: Basic, General Purpose, and Memor
Single Server uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, and temporary files created while running queries are encrypted. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system managed (default) or [customer managed](concepts-data-encryption-mysql.md). The service encrypts data in-motion with transport layer security (SSL/TLS) enforced by default. The service supports TLS versions 1.2, 1.1 and 1.0 with an ability to enforce [minimum TLS version](concepts-ssl-connection-security.md).
-The service allows private access to the servers using [private link](concepts-data-access-security-private-link.md) and offers threat protection through the optional [Microsoft Defender for open-source relational databases](/azure/defender-for-cloud/defender-for-databases-introduction) plan. Microsoft Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
+The service allows private access to the servers using [private link](concepts-data-access-security-private-link.md) and offers threat protection through the optional [Microsoft Defender for open-source relational databases](../../defender-for-cloud/defender-for-databases-introduction.md) plan. Microsoft Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
In addition to native authentication, Single Server supports [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) authentication. Azure AD authentication is a mechanism of connecting to the MySQL servers using identities defined and managed in Azure AD. With Azure AD authentication, you can manage database user identities and other Azure services in a central location, which simplifies and centralizes access control.
Now that you've read an introduction to Azure Database for MySQL - Single Server
- [Ruby](./connect-ruby.md) - [PHP](./connect-php.md) - [.NET (C#)](./connect-csharp.md)
- - [Go](./connect-go.md)
+ - [Go](./connect-go.md)
notification-hubs Notification Hubs Android Push Notification Google Fcm Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-android-push-notification-google-fcm-get-started.md
Your hub is now configured to work with Firebase Cloud Messaging. You also have
1. In the Project View, expand **app** > **src** > **main** > **java**. Right-click your package folder under **java**, select **New**, and then select **Java Class**. Enter **NotificationSettings** for the name, and then select **OK**.
- Make sure to update these three placeholders in the following code for the `NotificationSettings` class:
+ Make sure to update these two placeholders in the following code for the `NotificationSettings` class:
* **HubListenConnectionString**: The **DefaultListenAccessSignature** connection string for your hub. You can copy that connection string by clicking **Access Policies** in your hub in the [Azure portal]. * **HubName**: Use the name of your hub that appears in the hub page in the [Azure portal].
openshift Howto Create Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-service-principal.md
Title: Creating and using a service principal with an Azure Red Hat OpenShift cluster
-description: In this how-to article, learn how to create a service principal with an Azure Red Hat OpenShift cluster using Azure CLI or the Azure portal.
+description: In this how-to article, learn how to create and use a service principal with an Azure Red Hat OpenShift cluster using Azure CLI or the Azure portal.
keywords: azure, openshift, aro, red hat, azure CLI, azure portal
zone_pivot_groups: azure-red-hat-openshift-service-principal
-# Create and use a service principal with an Azure Red Hat OpenShift cluster
+# Create and use a service principal to deploy an Azure Red Hat OpenShift cluster
To interact with Azure APIs, an Azure Red Hat OpenShift cluster requires an Azure Active Directory (AD) service principal. This service principal is used to dynamically create, manage, or access other Azure resources, such as an Azure load balancer or an Azure Container Registry (ACR). For more information, see [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md).
-This article explains how to create and use a service principal for your Azure Red Hat OpenShift clusters using the Azure command-line interface (Azure CLI) or the Azure portal.
+This article explains how to create and use a service principal to deploy your Azure Red Hat OpenShift clusters using the Azure command-line interface (Azure CLI) or the Azure portal.
> [!NOTE] > Service principals expire in one year unless configured for longer periods. For information on extending your service principal expiration period, see [Rotate service principal credentials for your Azure Red Hat OpenShift (ARO) Cluster](howto-service-principal-credential-rotation.md). ::: zone pivot="aro-azurecli"
-## Create a service principal with Azure CLI
+## Create and use a service principal
-The following sections explain how to use the Azure CLI to create a service principal for your Azure Red Hat OpenShift cluster
+The following sections explain how to create and use a service principal to deploy an Azure Red Hat OpenShift cluster.
## Prerequisites - Azure CLI If you're using the Azure CLI, you'll need Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-On [Use the portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md) create a service principal. Be sure to save the client ID and the appID.
+## Create a resource group - Azure CLI
-## Create a resource group
+Run the following Azure CLI command to create a resource group.
```azurecli-interactive AZ_RG=$(az group create -n test-aro-rg -l eastus2 --query name -o tsv) ```
-## Create a service principal - Azure CLI
+## Create a service principal and assign role-based access control (RBAC) - Azure CLI
- To create a service principal with the Azure CLI, run the following command.
+ To create a service principal, assign it the Contributor role, and scope it to the Azure Red Hat OpenShift resource group, run the following command.
```azurecli-interactive # Get Azure subscription ID AZ_SUB_ID=$(az account show --query id -o tsv)
-# Create a service principal with contributor role and scoped to the ARO resource group
+# Create a service principal with contributor role and scoped to the Azure Red Hat OpenShift resource group
az ad sp create-for-rbac -n "test-aro-SP" --role contributor --scopes "/subscriptions/${AZ_SUB_ID}/resourceGroups/${AZ_RG}" ```
The output is similar to the following example.
"password": "yourpassword",
- "tenant": "yourtenantname" t
+ "tenant": "yourtenantname"
} ``` -
-Retain your `appId` and `password`. These values are used when you create an Azure Red Hat OpenShift cluster below.
> [!NOTE]
-> This service principal only allows a contributor over the resource group the ARO cluster is located in. If your VNet is in another resource group, you need to assign the service principal contributor role to that resource group as well.
-
-For more information, see [Manage service principal roles](/cli/azure/create-an-azure-service-principal-azure-cli#3-manage-service-principal-roles).
+> This service principal only has the Contributor role on the resource group that the Azure Red Hat OpenShift cluster is located in. If your VNet is in another resource group, you need to assign the service principal the Contributor role on that resource group as well.
To grant permissions to an existing service principal with the Azure portal, see [Create an Azure AD app and service principal in the portal](../active-directory/develop/howto-create-service-principal-portal.md#configure-access-policies-on-resources).
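If the VNet does live in a different resource group, a minimal sketch of that extra role assignment with the Azure CLI (the VNet resource group name and service principal display name are placeholder assumptions) looks like this:

```azurecli-interactive
# Subscription ID and the resource group that contains the VNet (placeholder name).
AZ_SUB_ID=$(az account show --query id -o tsv)
VNET_RG="test-aro-vnet-rg"

# Application (client) ID of the existing service principal (placeholder display name).
APP_ID=$(az ad sp list --display-name "test-aro-SP" --query "[0].appId" -o tsv)

# Grant the service principal the Contributor role on the VNet resource group.
az role assignment create \
  --assignee $APP_ID \
  --role Contributor \
  --scope "/subscriptions/${AZ_SUB_ID}/resourceGroups/${VNET_RG}"
```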
-## Use the service principal to create a cluster - Azure CLI
+## Use the service principal to deploy an Azure Red Hat OpenShift cluster - Azure CLI
-To use an existing service principal when you create an Azure Red Hat OpenShift cluster using the `az aro create` command, use the `--client-id` and `--client-secret` parameters to specify the appId and password from the output of the `az ad sp create-for-rbac` command:
+Use the service principal that you created earlier when you deploy the Azure Red Hat OpenShift cluster with the `az aro create` command. Use the `--client-id` and `--client-secret` parameters to specify the appId and password from the output of the `az ad sp create-for-rbac` command, as shown in the following command.
```azure-cli az aro create \
az aro create \
The following sections explain how to use the Azure portal to create a service principal for your Azure Red Hat OpenShift cluster.
-## Prerequiste - Azure portal
+## Prerequisite - Azure portal
-On [Use the portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md) create a service principal. Be sure to save the client ID and the appID.
+Create a service principal, as explained in [Use the portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). **Be sure to save the client ID and the client secret.**
-## Create a service principal - Azure portal
+## Use the service principal to deploy an Azure Red Hat OpenShift cluster - Azure portal
-To create a service principal using the Azure portal, complete the following steps.
+To use the service principal you created to deploy a cluster, complete the following steps.
1. On the Create Azure Red Hat OpenShift **Basics** tab, create a resource group for your subscription, as shown in the following example. :::image type="content" source="./media/basics-openshift-sp.png" alt-text="Screenshot that shows how to use the Azure Red Hat service principal with Azure portal to create a cluster." lightbox="./media/basics-openshift-sp.png":::
-2. Click **Next: Authentication** to configure and deploy the service principal on the **Authentication** page of the **Azure Red Hat OpenShift** dialog.
+2. Select **Next: Authentication** to configure the service principal on the **Authentication** page of the **Azure Red Hat OpenShift** dialog.
:::image type="content" source="./media/openshift-service-principal-portal.png" alt-text="Screenshot that shows how to use the Authentication tab with Azure portal to create a service principal." lightbox="./media/openshift-service-principal-portal.png":::
In the **Cluster pull secret** section:
- **Pull secret** is your cluster's pull secret's decrypted value. If you don't have a pull secret, leave this field blank.
-After completing this tab, select **Next: Networking** to continue creating your cluster. Select **Review + Create** when you complete the remaining tabs.
+After completing this tab, select **Next: Networking** to continue deploying your cluster. Select **Review + Create** when you complete the remaining tabs.
> [!NOTE] > This service principal only has the Contributor role on the resource group that the Azure Red Hat OpenShift cluster is located in. If your VNet is in another resource group, you need to assign the service principal the Contributor role on that resource group as well.
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Dsv3|Standard_D8s_v3|8|32| |Dsv3|Standard_D16s_v3|16|64| |Dsv3|Standard_D32s_v3|32|128|
+|Eiv3|Standard_E64i_v3|64|432|
+|Eisv3|Standard_E64is_v3|64|432|
+|Eis4|Standard_E80is_v4|80|504|
+|Eids4|Standard_E80ids_v4|80|504|
+|Eiv5|Standard_E104i_v5|104|672|
+|Eisv5|Standard_E104is_v5|104|672|
+|Eidv5|Standard_E104id_v5|104|672|
+|Eidsv5|Standard_E104ids_v5|104|672|
+|Fsv2|Standard_F72s_v2|72|144|
+|G|Standard_G5|32|448|
+|G|Standard_GS5|32|448|
+|Mms|Standard_M128ms|128|3892|
### General purpose
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Esv3|Standard_E8s_v3|8|64| |Esv3|Standard_E16s_v3|16|128| |Esv3|Standard_E32s_v3|32|256|
+|Eiv3|Standard_E64i_v3|64|432|
+|Eisv3|Standard_E64is_v3|64|432|
+|Eis4|Standard_E80is_v4|80|504|
+|Eids4|Standard_E80ids_v4|80|504|
+|Eiv5|Standard_E104i_v5|104|672|
+|Eisv5|Standard_E104is_v5|104|672|
+|Eidv5|Standard_E104id_v5|104|672|
+|Eidsv5|Standard_E104ids_v5|104|672|
### Compute optimized
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Fsv2|Standard_F8s_v2|8|16| |Fsv2|Standard_F16s_v2|16|32| |Fsv2|Standard_F32s_v2|32|64|
+|Fsv2|Standard_F72s_v2|72|144|
+
+### Memory and compute optimized
+
+|Series|Size|vCPU|Memory: GiB|
+|-|-|-|-|
+|Mms|Standard_M128ms|128|3892|
### Storage optimized
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|L32s_v2|Standard_L32s_v2|32|256| |L48s_v2|Standard_L48s_v2|48|384| |L64s_v2|Standard_L64s_v2|64|512|+
+### Memory and storage optimized
+
+|Series|Size|vCPU|Memory: GiB|
+|-|-|-|-|
+|G|Standard_G5|32|448|
+|G|Standard_GS5|32|448|
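The tables above are the support reference only. As an illustrative sketch, not part of the support policy, a supported size can be requested at cluster creation time with the `--master-vm-size` and `--worker-vm-size` parameters of `az aro create`; the resource group, VNet, and subnet names below are placeholder assumptions:

```azurecli
# Create a cluster with an explicitly chosen master and worker size (placeholder network resources).
az aro create \
  --resource-group myResourceGroup \
  --name myAroCluster \
  --vnet myAroVnet \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --master-vm-size Standard_D8s_v3 \
  --worker-vm-size Standard_E64is_v3
```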
postgresql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-aks.md
There are multiple connection poolers you can use with PostgreSQL. One of these
## Next steps
-Create an AKS cluster [using the Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli), [using Azure PowerShell](/azure/aks/learn/quick-kubernetes-deploy-powershell), or [using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal).
+Create an AKS cluster [using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md).
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-security.md
You can also connect to the server using [Azure Active Directory authentication]
## Threat protection
-You can opt in to [Advanced Threat Protection](/azure/defender-for-cloud/defender-for-databases-introduction) which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers.
+You can opt in to [Advanced Threat Protection](../../defender-for-cloud/defender-for-databases-introduction.md) which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers.
[Audit logging](concepts-audit.md) is available to track activity in your databases.
Oracle supports Transparent Data Encryption (TDE) to encrypt table and tablespac
## Next steps - Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-and-security-vnet.md)-- Learn about [Azure Active Directory authentication](concepts-azure-ad-authentication.md) in Azure Database for PostgreSQL
+- Learn about [Azure Active Directory authentication](concepts-azure-ad-authentication.md) in Azure Database for PostgreSQL
private-link Manage Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/manage-private-endpoint.md
Title: Manage a Private Endpoint connection in Azure
+ Title: Manage Azure Private Endpoints
-description: Learn how to manage private endpoint connections in Azure
+description: Learn how to manage private endpoints in Azure
Previously updated : 10/04/2021 Last updated : 05/17/2022
-# Manage a Private Endpoint connection
+# Manage Azure Private Endpoints
+
+There are several options for managing the configuration and deployment of an Azure private endpoint.
+
+**GroupId** and **MemberName** can be determined by querying the Private Link resource. The **GroupId** and **MemberName** values are needed to configure a static IP address for a private endpoint during creation.
+
+A private endpoint has two custom properties: a static IP address and the network interface name. These properties must be set when the private endpoint is created.
+
+When a Private Link service is deployed by a service provider and consumed by a service consumer, an approval process is in place to establish the connection.
+
+## Determine GroupID and MemberName
+
+During the creation of a private endpoint with Azure PowerShell or the Azure CLI, the **GroupId** and **MemberName** of the private endpoint resource might be needed.
+
+* **GroupId** is the subresource of the private endpoint.
+
+* **MemberName** is the unique stamp for the private IP address of the endpoint.
+
+For more information about Private Endpoint subresources and their values, see [Private-link resource](private-endpoint-overview.md#private-link-resource).
+
+To determine the values of **GroupID** and **MemberName** for your private endpoint resource, use the following commands. **MemberName** is contained within the **RequiredMembers** property.
+
+# [**PowerShell**](#tab/manage-private-link-powershell)
+
+An Azure WebApp is used as the example private endpoint resource. Use **[Get-AzPrivateLinkResource](/powershell/module/az.network/get-azprivatelinkresource)** to determine **GroupId** and **MemberName**.
+
+```azurepowershell
+## Place the previously created webapp into a variable. ##
+$webapp =
+Get-AzWebApp -ResourceGroupName myResourceGroup -Name myWebApp1979
+
+$resource =
+Get-AzPrivateLinkResource -PrivateLinkResourceId $webapp.ID
+```
+
+You should receive output similar to the following example.
++
+# [**Azure CLI**](#tab/manage-private-link-cli)
+
+An Azure WebApp is used as the example private endpoint resource. Use **[az network private-link-resource list](/cli/azure/network/private-link-resource#az-network-private-link-resource-list)** to determine **GroupId** and **MemberName**. The parameter `--type` requires the namespace for the private link resource. For the webapp used in this example, the namespace is **Microsoft.Web/sites**. To determine the namespace for your private link resource, see **[Azure services DNS zone configuration](private-endpoint-dns.md#azure-services-dns-zone-configuration)**.
+
+```azurecli
+az network private-link-resource list \
+ --resource-group MyResourceGroup \
+ --name myWebApp1979 \
+ --type Microsoft.Web/sites
+```
+
+You should receive output similar to the following example.
++++
+## Custom properties
+
+The network interface name and the static IP address are custom properties that can be set on a private endpoint when it's created.
+
+### Network interface rename
+
+By default, when a private endpoint is created, its network interface is given a random name. A custom network interface name must be specified when the private endpoint is created. Renaming the network interface of an existing private endpoint is unsupported.
+
+Use the following commands when creating a private endpoint to rename the network interface.
+
+# [**PowerShell**](#tab/manage-private-link-powershell)
+
+To rename the network interface when the private endpoint is created, use the `-CustomNetworkInterfaceName` parameter. The following example uses an Azure PowerShell command to create a private endpoint to an Azure WebApp. For more information, see **[New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint)**.
+
+```azurepowershell
+## Place the previously created webapp into a variable. ##
+$webapp = Get-AzWebApp -ResourceGroupName myResourceGroup -Name myWebApp1979
+
+## Create the private endpoint connection. ##
+$pec = @{
+ Name = 'myConnection'
+ PrivateLinkServiceId = $webapp.ID
+ GroupID = 'sites'
+}
+$privateEndpointConnection = New-AzPrivateLinkServiceConnection @pec
+
+## Place the virtual network you created previously into a variable. ##
+$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
+
+## Create the private endpoint. ##
+$pe = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myPrivateEndpoint'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ PrivateLinkServiceConnection = $privateEndpointConnection
+ CustomNetworkInterfaceName = 'myPrivateEndpointNIC'
+}
+New-AzPrivateEndpoint @pe
+
+```
+
+# [**Azure CLI**](#tab/manage-private-link-cli)
+
+To rename the network interface when the private endpoint is created, use the `--nic-name` parameter. The following example uses an Azure CLI command to create a private endpoint to an Azure WebApp. For more information, see **[az network private-endpoint create](/cli/azure/network/private-endpoint#az-network-private-endpoint-create)**.
+
+```azurecli
+id=$(az webapp list \
+ --resource-group myResourceGroup \
+ --query '[].[id]' \
+ --output tsv)
+
+az network private-endpoint create \
+ --connection-name myConnection \
+ --name myPrivateEndpoint \
+ --private-connection-resource-id $id \
+ --resource-group myResourceGroup \
+ --subnet myBackendSubnet \
+ --group-id sites \
+ --nic-name myPrivateEndpointNIC \
+ --vnet-name myVNet
+```
+++
+### Static IP address
+
+By default, when a private endpoint is created, its IP address is automatically assigned from the IP range of the virtual network configured for the private endpoint. In some situations, a static IP address for the private endpoint is required. The static IP address must be assigned when the private endpoint is created. The configuration of a static IP address for an existing private endpoint is currently unsupported.
+
+For procedures to configure a static IP address when creating a private endpoint, see [Create a private endpoint using Azure PowerShell](create-private-endpoint-powershell.md) and [Create a private endpoint using the Azure CLI](create-private-endpoint-cli.md).
+
+## Private endpoint connections
Azure Private Link works on an approval model where the Private Link service consumer can request a connection to the service provider for consuming the service. The service provider can then decide whether to allow the consumer to connect or not. Azure Private Link enables service providers to manage the private endpoint connection on their resources.
-This article provides instructions about how to manage the Private Endpoint connections.
-
-![Manage Private Endpoints](media/manage-private-endpoint/manage-private-endpoint.png)
There are two connection approval methods that a Private Link service consumer can choose from: - **Automatic**: If the service consumer has Azure Role Based Access Control permissions on the service provider resource, the consumer can choose the automatic approval method. When the request reaches the service provider resource, no action is required from the service provider and the connection is automatically approved. - **Manual**: If the service consumer doesn't have Azure Role Based Access Control permissions on the service provider resource, the consumer can choose the manual approval method. The connection request appears on the service resources as **Pending**. The service provider has to manually approve the request before connections can be established.
-In manual cases, service consumer can also specify a message with the request to provide more context to the service provider. The service provider has following options to choose from for all Private Endpoint connections: **Approve**, **Reject**, **Remove**.
-
-The below table shows the various service provider actions and the resulting connection states for Private Endpoints. The service provider can change the connection state at a later time without consumer intervention. The action will update the state of the endpoint on the consumer side.
+In manual cases, the service consumer can also specify a message with the request to provide more context to the service provider. The service provider has the following options to choose from for all private endpoint connections: **Approve**, **Reject**, **Remove**.
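As a hedged sketch of the consumer side of a manual request with the Azure CLI (all resource names are placeholders, a web app is assumed as the target Private Link resource, and the `--manual-request` and `--request-message` parameters are assumed to be available in your CLI version):

```azurecli
# ID of the target resource (placeholder web app, as in the earlier examples).
id=$(az webapp list \
  --resource-group myResourceGroup \
  --query '[].[id]' \
  --output tsv)

# Request a connection that the service provider must approve manually, with a message for context.
az network private-endpoint create \
  --resource-group myResourceGroup \
  --name myPrivateEndpoint \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --private-connection-resource-id $id \
  --group-id sites \
  --connection-name myConnection \
  --manual-request true \
  --request-message "Access needed for the analytics workload"
```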
+The following table shows the various service provider actions and the resulting connection states for private endpoints. The service provider can change the connection state at a later time without consumer intervention. The action will update the state of the endpoint on the consumer side.
-| Service Provider Action | Service Consumer Private Endpoint State | Description |
+| Service provider action | Service consumer private endpoint state | Description |
|||| | None | Pending | Connection is created manually and is pending for approval by the Private Link resource owner. | | Approve | Approved | Connection was automatically or manually approved and is ready to be used. | | Reject | Rejected | Connection was rejected by the private link resource owner. | | Remove | Disconnected | Connection was removed by the private link resource owner, the private endpoint becomes informative and should be deleted for clean-up. |
-## Manage Private Endpoint connections on Azure PaaS resources
+## Manage private endpoint connections on Azure PaaS resources
-The Azure portal is the preferred method for managing private endpoint connections on Azure PaaS resources.
+Use the following steps to manage a private endpoint connection in the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com).
The Azure portal is the preferred method for managing private endpoint connectio
3. In the **Private link center**, select **Private endpoints** or **Private link services**.
-4. For each of your endpoints, you can view the number of Private Endpoint connections associated with it. You can filter the resources as needed.
+4. For each of your endpoints, you can view the number of private endpoint connections associated with it. You can filter the resources as needed.
-5. Select the private endpoint. Under the connections listed, select the connection that you want to manage.
+5. Select the private endpoint. Under the connections listed, select the connection that you want to manage.
6. You can change the state of the connection by selecting from the options at the top. ## Manage Private Endpoint connections on a customer/partner owned Private Link service
-Azure PowerShell and Azure CLI are the preferred methods for managing Private Endpoint connections on Microsoft Partner Services or customer owned services.
+Use the following PowerShell and Azure CLI commands to manage private endpoint connections on Microsoft Partner Services or customer owned services.
-### PowerShell
-
+# [**PowerShell**](#tab/manage-private-link-powershell)
+ Use the following PowerShell commands to manage private endpoint connections.
-#### Get Private Link connection states
+## Get Private Link connection states
-Use [Get-AzPrivateEndpointConnection](/powershell/module/az.network/get-azprivateendpointconnection) to get the Private Endpoint connections and their states.
+Use **[Get-AzPrivateEndpointConnection](/powershell/module/az.network/get-azprivateendpointconnection)** to get the Private Endpoint connections and their states.
```azurepowershell
-Get-AzPrivateEndpointConnection -Name myPrivateLinkService -ResourceGroupName myResourceGroup
+$get = @{
+ Name = 'myPrivateLinkService'
+ ResourceGroupName = 'myResourceGroup'
+}
+Get-AzPrivateEndpointConnection @get
```
-
-#### Approve a Private Endpoint connection
-
-Use [Approve-AzPrivateEndpointConnection](/powershell/module/az.network/approve-azprivateendpointconnection) cmdlet to approve a Private Endpoint connection.
-
+
+## Approve a Private Endpoint connection
+
+Use the **[Approve-AzPrivateEndpointConnection](/powershell/module/az.network/approve-azprivateendpointconnection)** cmdlet to approve a Private Endpoint connection.
+ ```azurepowershell
-Approve-AzPrivateEndpointConnection -Name myPrivateEndpointConnection -ResourceGroupName myResourceGroup -ServiceName myPrivateLinkService
+$approve = @{
+ Name = 'myPrivateEndpointConnection'
+ ServiceName = 'myPrivateLinkService'
+ ResourceGroupName = 'myResourceGroup'
+}
+Approve-AzPrivateEndpointConnection @approve
```
-
-#### Deny Private Endpoint connection
-
-Use [Deny-AzPrivateEndpointConnection](/powershell/module/az.network/deny-azprivateendpointconnection) cmdlet to reject a Private Endpoint connection.
+
+## Deny Private Endpoint connection
+
+Use the **[Deny-AzPrivateEndpointConnection](/powershell/module/az.network/deny-azprivateendpointconnection)** cmdlet to reject a Private Endpoint connection.
```azurepowershell
-Deny-AzPrivateEndpointConnection -Name myPrivateEndpointConnection -ResourceGroupName myResourceGroup -ServiceName myPrivateLinkService
+$deny = @{
+ Name = 'myPrivateEndpointConnection'
+ ServiceName = 'myPrivateLinkService'
+ ResourceGroupName = 'myResourceGroup'
+}
+Deny-AzPrivateEndpointConnection @deny
```
-#### Remove Private Endpoint connection
-
-Use [Remove-AzPrivateEndpointConnection](/powershell/module/az.network/remove-azprivateendpointconnection) cmdlet to remove a Private Endpoint connection.
+## Remove Private Endpoint connection
+
+Use the **[Remove-AzPrivateEndpointConnection](/powershell/module/az.network/remove-azprivateendpointconnection)** cmdlet to remove a Private Endpoint connection.
```azurepowershell
-Remove-AzPrivateEndpointConnection -Name myPrivateEndpointConnection -ResourceGroupName myResourceGroup -ServiceName myPrivateLinkService
+$remove = @{
+ Name = 'myPrivateEndpointConnection'
+ ServiceName = 'myPrivateLinkService'
+ ResourceGroupName = 'myResourceGroup'
+}
+Remove-AzPrivateEndpointConnection @remove
```
-
-### Azure CLI
-
-#### Get Private Link connection states
-Use [az network private-endpoint-connection show](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-show) to get the Private Endpoint connections and their states.
+# [**Azure CLI**](#tab/manage-private-link-cli)
+
+Use the following Azure CLI commands to manage private endpoint connections.
+
+## Get Private Link connection states
+
+Use **[az network private-endpoint-connection show](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-show)** to get the Private Endpoint connections and their states.
```azurecli az network private-endpoint-connection show \ --name myPrivateEndpointConnection \ --resource-group myResourceGroup ```+
+## Approve a Private Endpoint connection
-#### Approve a Private Endpoint connection
-
-Use [az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve) cmdlet to approve a Private Endpoint connection.
+Use the **[az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve)** command to approve a Private Endpoint connection.
```azurecli az network private-endpoint-connection approve \
Use [az network private-endpoint-connection approve](/cli/azure/network/private-
--resource-group myResourceGroup ```
-#### Deny Private Endpoint connection
+## Deny Private Endpoint connection
-Use [az network private-endpoint-connection reject](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-reject) cmdlet to reject a Private Endpoint connection.
+Use the **[az network private-endpoint-connection reject](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-reject)** command to reject a Private Endpoint connection.
```azurecli az network private-endpoint-connection reject \
Use [az network private-endpoint-connection reject](/cli/azure/network/private-e
--resource-group myResourceGroup ```
-#### Remove Private Endpoint connection
+## Remove Private Endpoint connection
-Use [az network private-endpoint-connection delete](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-delete) cmdlet to remove a Private Endpoint connection.
+Use the **[az network private-endpoint-connection delete](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-delete)** command to remove a Private Endpoint connection.
```azurecli az network private-endpoint-connection delete \
Use [az network private-endpoint-connection delete](/cli/azure/network/private-e
--resource-group myResourceGroup ``` ++ ## Next steps - [Learn about Private Endpoints](private-endpoint-overview.md)
purview Concept Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-workflow.md
# Workflows in Microsoft Purview Workflows are automated, repeatable business processes that users can create within Microsoft Purview to validate and orchestrate CUD (create, update, delete) operations on their data entities. Enabling these processes allows organizations to track changes, enforce policy compliance, and ensure quality data across their data landscape.
purview How To Data Owner Policies Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-arc-sql-server.md
Previously updated : 05/24/2022 Last updated : 05/25/2022 # Provision access by data owner for SQL Server on Azure Arc-enabled servers (preview)
This how-to guide describes how a data owner can delegate authoring policies in
- SQL Server version 2022 CTP 2.0 or later - Complete the process to onboard that SQL Server with Azure Arc and enable Azure AD authentication. [Follow this guide to learn how](https://aka.ms/sql-on-arc-AADauth).
-**Enforcement of policies is available only in the following regions for Microsoft Purview**
+**Enforcement of policies for this data source is available only in the following regions for Microsoft Purview**
- East US - UK South
+- Australia East
## Security considerations - The Server admin can turn off the Microsoft Purview policy enforcement.
This section contains a reference of how actions in Microsoft Purview data polic
## Next steps Check blog, demo and related how-to guides
-* [Demo of access policy for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Demo of access policy for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2)
* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md) * Blog: [Private preview: controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491) * [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
purview How To Data Owner Policies Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-azure-sql-db.md
This section contains a reference of how actions in Microsoft Purview data polic
## Next steps Check blog, demo and related how-to guides
-* [Demo of access policy for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Demo of access policy for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2)
* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md) * Blog: [Private preview: controlling access to Azure SQL at scale with policies in Purview](https://techcommunity.microsoft.com/t5/azure-sql-blog/private-preview-controlling-access-to-azure-sql-at-scale-with/ba-p/2945491) * [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
purview How To Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-resource-group.md
Check blog, demo and related tutorials:
* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md) * [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
-* [Video: Demo of data owner access policies for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
+* [Video: Demo of data owner access policies for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2)
purview How To Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-storage.md
This section contains a reference of how actions in Microsoft Purview data polic
## Next steps Check blog, demo and related tutorials:
-* [Demo of access policy for Azure Storage](https://www.youtube.com/watch?v=CFE8ltT19Ss)
+* [Demo of access policy for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2)
* [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md) * [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md) * [Blog: What's New in Microsoft Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954)
purview How To Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-request-access.md
# How to request access for a data asset If you discover a data asset in the catalog that you would like to access, you can request access directly through Azure Purview. The request will trigger a workflow that will request that the owners of the data resource grant you access to that data source.
purview How To Workflow Business Terms Approval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-business-terms-approval.md
# Approval workflow for business terms This guide will take you through the creation and management of approval workflows for business terms.
purview How To Workflow Manage Requests Approvals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-requests-approvals.md
# Manage workflow requests and approvals This article outlines how to manage requests and approvals that are generated by a [workflow](concept-workflow.md) in Microsoft Purview.
purview How To Workflow Manage Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-runs.md
# Manage workflow runs This article outlines how to manage workflows that are already running.
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
# Self-service access workflows for hybrid data estates [Workflows](concept-workflow.md) allow you to automate some business processes through Azure Purview. Self-service access workflows allow you to create a process for your users to request access to datasets they've discovered in Azure Purview!
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
To scan your data source, you'll need to configure an authentication method in t
The following options are supported:
-* **System-assigned managed identity** (Recommended) - This is an identity associated directly with your Microsoft Purview account that allows you to authenticate directly with other Azure resources without needing to manage a go-between user or credential set. The **system-assigned** managed identity is created when your Microsoft Purview resource is created, is managed by Azure, and uses your Microsoft Purview account's name. The SAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see the [managed identity overview](/azure/active-directory/managed-identities-azure-resources/overview).
+* **System-assigned managed identity** (Recommended) - This is an identity associated directly with your Microsoft Purview account that allows you to authenticate directly with other Azure resources without needing to manage a go-between user or credential set. The **system-assigned** managed identity is created when your Microsoft Purview resource is created, is managed by Azure, and uses your Microsoft Purview account's name. The SAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see the [managed identity overview](../active-directory/managed-identities-azure-resources/overview.md).
* **User-assigned managed identity** (preview) - Similar to a SAMI, a user-assigned managed identity (UAMI) is a credential resource that allows Microsoft Purview to authenticate against Azure Active Directory. The **user-assigned** managed identity is created and managed by users in Azure, rather than by Azure itself, which gives you more control over security. The UAMI can't currently be used with a self-hosted integration runtime for Azure SQL. For more information, see our [guide for user-assigned managed identities](manage-credentials.md#create-a-user-assigned-managed-identity).
-* **Service Principal**- A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Their authentication has an expiration date, and so can be useful for temporary projects. For more information, see the [service principal documentation](/azure/active-directory/develop/app-objects-and-service-principals).
+* **Service Principal**- A service principal is an application that can be assigned permissions like any other group or user, without being associated directly with a person. Their authentication has an expiration date, and so can be useful for temporary projects. For more information, see the [service principal documentation](../active-directory/develop/app-objects-and-service-principals.md).
* **SQL Authentication** - connect to the SQL database with a username and password. For more information about SQL Authentication, you can [follow the SQL authentication documentation](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). If you need to create a login, follow this [guide to query an Azure SQL database](/azure/azure-sql/database/connect-query-portal), and use [this guide to create a login using T-SQL.](/sql/t-sql/statements/create-login-transact-sql) > [!NOTE]
Now that you've registered your source, follow the below guides to learn more ab
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)-- [Search Data Catalog](how-to-search-catalog.md)
+- [Search Data Catalog](how-to-search-catalog.md)
purview Tutorial Azure Purview Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-checklist.md
This article lists prerequisites that help you get started quickly on Microsoft
|1 | Azure Active Directory Tenant |N/A |An [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) should be associated with your subscription. <ul><li>*Global Administrator* or *Information Protection Administrator* role is required, if you plan to [extend Microsoft 365 Sensitivity Labels to Microsoft Purview for files and db columns](create-sensitivity-label.md)</li><li> *Global Administrator* or *Power BI Administrator* role is required, if you're planning to [scan Power BI tenants](register-scan-power-bi-tenant.md).</li></ul> | |2 |An active Azure Subscription |*Subscription Owner* |An Azure subscription is needed to deploy Microsoft Purview and its managed resources. If you don't have an Azure subscription, create a [free subscription](https://azure.microsoft.com/free/) before you begin. | |3 |Define whether you plan to deploy a Microsoft Purview with a managed event hub | N/A |A managed event hub is created as part of Microsoft Purview account creation, see Microsoft Purview account creation. You can publish messages to the event hub kafka topic ATLAS_HOOK and Microsoft Purview will consume and process it. Microsoft Purview will notify entity changes to the event hub kafka topic ATLAS_ENTITIES and user can consume and process it. |
-|4 |Register the following resource providers: <ul><li>Microsoft.Storage</li><li>Microsoft.EventHub (optional)</li><li>Microsoft.Purview</li></ul> |*Subscription Owner* or custom role to register Azure resource providers (_/register/action_) | [Register required Azure Resource Providers](/azure/azure-resource-manager/management/resource-providers-and-types) in the Azure Subscription that is designated for Microsoft Purview Account. Review [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md). |
+|4 |Register the following resource providers: <ul><li>Microsoft.Storage</li><li>Microsoft.EventHub (optional)</li><li>Microsoft.Purview</li></ul> |*Subscription Owner* or custom role to register Azure resource providers (_/register/action_) | [Register required Azure Resource Providers](../azure-resource-manager/management/resource-providers-and-types.md) in the Azure Subscription that is designated for Microsoft Purview Account. Review [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md). A CLI sketch for this step appears after this checklist excerpt. |
|5 |Update Azure Policy to allow deployment of the following resources in your Azure subscription: <ul><li>Microsoft Purview</li><li>Azure Storage</li><li>Azure Event Hubs (optional)</li></ul> |*Subscription Owner* |Use this step if an existing Azure Policy prevents deploying such Azure resources. If a blocking policy exists and needs to remain in place, please follow our [Microsoft Purview exception tag guide](create-azure-purview-portal-faq.md) and follow the steps to create an exception for Microsoft Purview accounts. | |6 | Define your network security requirements. | Network and Security architects. |<ul><li> Review [Microsoft Purview network architecture and best practices](concept-best-practices-network.md) to define what scenario is more relevant to your network requirements. </li><li>If private network is needed, use [Microsoft Purview Managed IR](catalog-managed-vnet.md) to scan Azure data sources when possible to reduce complexity and administrative overhead. </li></ul> | |7 |An Azure Virtual Network and Subnet(s) for Microsoft Purview private endpoints. | *Network Contributor* to create or update Azure VNet. |Use this step if you're planning to deploy [private endpoint connectivity with Microsoft Purview](catalog-private-link.md): <ul><li>Private endpoints for **Ingestion**.</li><li>Private endpoint for Microsoft Purview **Account**.</li><li>Private endpoint for Microsoft Purview **Portal**.</li></ul> <br> Deploy [Azure Virtual Network](../virtual-network/quick-create-portal.md) if you need one. |
This article lists prerequisites that help you get started quickly on Microsoft
|35 |Grant access to data roles in the organization |*Collection admin* |Provide access to other teams to use Microsoft Purview: <ul><li> Data curator</li><li>Data reader</li><li>Collection admin</li><li>Data source admin</li><li>Policy Author</li><li>Workflow admin</li></ul> <br> For more information, see [Access control in Microsoft Purview](catalog-permissions.md). | ## Next steps-- [Review Microsoft Purview deployment best practices](./deployment-best-practices.md)
+- [Review Microsoft Purview deployment best practices](./deployment-best-practices.md)
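If you prefer to script item 4 of the checklist above, the resource providers can be registered with the Azure CLI. This is a minimal sketch, assuming you hold *Subscription Owner* or a custom role with the `/register/action` permission on the target subscription; the provider names come straight from the checklist and the subscription ID is a placeholder.

```azurecli
# Register the resource providers that Microsoft Purview depends on (checklist item 4).
az account set --subscription "<subscription-id>"

az provider register --namespace Microsoft.Purview
az provider register --namespace Microsoft.Storage
az provider register --namespace Microsoft.EventHub   # optional, only needed for the managed event hub

# Registration is asynchronous; poll until the state reads "Registered".
az provider show --namespace Microsoft.Purview --query registrationState --output tsv
```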
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
To delete a policy in Microsoft Purview, follow these steps:
Check our demo and related tutorials: > [!div class="nextstepaction"]
-> [Demo of access policy for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+> [Demo of access policy for Azure Storage](https://learn-video.azurefd.net/vod/player?id=caa25ad3-7927-4dcc-88dd-6b74bcae98a2)
> [Concepts for Microsoft Purview data owner policies](./concept-data-owner-policies.md)
-> [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
+> [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)
role-based-access-control Transfer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/transfer-subscription.md
When you create a key vault, it is automatically tied to the default Azure Activ
- Use [az sql server ad-admin list](/cli/azure/sql/server/ad-admin#az-sql-server-ad-admin-list) and the [az graph](/cli/azure/graph) extension to see if you are using Azure SQL databases with Azure AD authentication integration enabled. For more information, see [Configure and manage Azure Active Directory authentication with SQL](/azure/azure-sql/database/authentication-aad-configure). ```azurecli
- az sql server ad-admin list --ids $(az graph query -q 'resources | where type == "microsoft.sql/servers" | project id' -o tsv | cut -f1)
+ az sql server ad-admin list --ids $(az graph query -q "resources | where type == 'microsoft.sql/servers' | project id" -o tsv | cut -f1)
``` ### List ACLs
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
Previously updated : 12/03/2021 Last updated : 05/27/2022 # Preview features in Azure Cognitive Search
Preview features that transition to general availability are removed from this l
| [**Search REST API 2021-04-30-Preview**](/rest/api/searchservice/index-preview) | Security | Modifies [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) to support managed identities under Azure Active Directory, for indexers that connect to external data sources. | Public preview, [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview). Announced in May 2021. | | [**Management REST API 2021-04-01-Preview**](/rest/api/searchmanagement/) | Security | Modifies [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). | Public preview, [Management REST API](/rest/api/searchmanagement/), API version 2021-04-01-Preview. Announced in May 2021. | | [**Reset Documents**](search-howto-run-reset-indexers.md) | Indexer | Reprocesses individually selected search documents in indexer workloads. | Use the [Reset Documents REST API](/rest/api/searchservice/preview-api/reset-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
-| [**Power Query connectors**](search-how-to-index-power-query-data-sources.md) | Indexer data source | Indexers can now index from other cloud platforms. If you are using an indexer to crawl external data sources for indexing, you can now use Power Query connectors to connect to Amazon Redshift, Elasticsearch, PostgreSQL, Salesforce Objects, Salesforce Reports, Smartsheet, and Snowflake. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, or the Azure portal.|
| [**SharePoint Indexer**](search-howto-index-sharepoint-online.md) | Indexer data source | New data source for indexer-based indexing of SharePoint content. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, or the Azure portal. | | [**MySQL indexer data source**](search-howto-index-mysql.md) | Indexer data source | Index content and metadata from Azure MySQL data sources.| [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. | | [**Cosmos DB indexer: MongoDB API, Gremlin API**](search-howto-index-cosmosdb.md) | Indexer data source | For Cosmos DB, SQL API is generally available, but MongoDB and Gremlin APIs are in preview. | For MongoDB and Gremlin, [sign up first](https://aka.ms/azure-cognitive-search/indexer-preview) so that support can be enabled for your subscription on the backend. MongoDB data sources can be configured in the portal. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-gallery.md
layout: LandingPage Previously updated : 01/25/2022 Last updated : 05/27/2022
Find a data connector from Microsoft or a partner to simplify data ingestion int
+ [Generally available data sources by Cognitive Search](#ga) + [Preview data sources by Cognitive Search](#preview)
-+ [Power Query Connectors (preview)](#powerquery)
+ [Data sources from our Partners](#partners) <a name="ga"></a>
Connect to Azure Storage through Azure Files share to extract content serialized
-<a name="powerquery"></a>
-
-## Power Query Connectors (preview)
-
-Connect to data on other cloud platforms using indexers and a Power Query connector as the data source. [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to get started.
----
-### Amazon Redshift
-
-Powered by [Power Query](/power-query/power-query-what-is-power-query)
-
-Connect to [Amazon Redshift](https://aws.amazon.com/redshift/) and extract searchable content for indexing in Cognitive Search.
-
-[More details](search-how-to-index-power-query-data-sources.md)
----
-### Elasticsearch
-
-Powered by [Power Query](/power-query/power-query-what-is-power-query)
-
-Connect to [Elasticsearch](https://www.elastic.co/elasticsearch) in the cloud and extract searchable content for indexing in Cognitive Search.
-
-[More details](search-how-to-index-power-query-data-sources.md)
----
-### PostgreSQL
-
-Powered by [Power Query](/power-query/power-query-what-is-power-query)
-
-Connect to a [PostgreSQL](https://www.postgresql.org/) database in the cloud and extract searchable content for indexing in Cognitive Search.
-
-[More details](search-how-to-index-power-query-data-sources.md)
--
- :::column-end:::
- :::column span="":::
- :::column-end:::
-----
-### Salesforce Objects
-
-Powered by [Power Query](/power-query/power-query-what-is-power-query)
-
-Connect to Salesforce Objects and extract searchable content for indexing in Cognitive Search.
-
-[More details](search-how-to-index-power-query-data-sources.md)
----
-### Salesforce Reports
-
-Powered by [Power Query](/power-query/power-query-what-is-power-query)
-
-Connect to Salesforce Reports and extract searchable content for indexing in Cognitive Search.
-
-[More details](search-how-to-index-power-query-data-sources.md)
----
-### Smartsheet
-
-Powered by [Power Query](/power-query/power-query-what-is-power-query)
-
-Connect to Smartsheet and extract searchable content for indexing in Cognitive Search.
-
-[More details](search-how-to-index-power-query-data-sources.md)
--
- :::column-end:::
- :::column span="":::
- :::column-end:::
-----
-### Snowflake
-
-Powered by [Power Query](/power-query/power-query-what-is-power-query)
-
-Extract searchable data and metadata from a Snowflake database and populate an index based on field-to-field mappings between the index and your data source.
-
-[More details](search-how-to-index-power-query-data-sources.md)
--------
- :::column-end:::
- :::column span="":::
- :::column-end:::
-- <a name="partners"></a> ## Data sources from our Partners
search Search Get Started Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-arm.md
Previously updated : 05/16/2022 Last updated : 05/25/2022 # Quickstart: Deploy Cognitive Search using an Azure Resource Manager template
This article walks you through the process for using an Azure Resource Manager (
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-Only those properties included in the template are used in the deployment. If more customization is required, such as [setting up network security](search-security-overview.md#network-security), you can [update the service configuration](/cli/azure/search/service?view=azure-cli-latest#az-search-service-update) as a post-deployment task.
+Only those properties included in the template are used in the deployment. If more customization is required, such as [setting up network security](search-security-overview.md#network-security), you can update the service as a post-deployment task. To customize an existing service with the fewest steps, use [Azure CLI](search-manage-azure-cli.md) or [Azure PowerShell](search-manage-powershell.md). If you're evaluating preview features, use the [Management REST API](search-manage-rest.md).
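For instance, a minimal Azure CLI sketch of a post-deployment update might look like the following; the service and resource group names are placeholders, and network-related settings can be adjusted the same way with the corresponding flags.

```azurecli
# Hypothetical post-deployment tweak: scale the service deployed by the template.
az search service update \
  --name my-search-service \
  --resource-group my-resource-group \
  --replica-count 3 \
  --partition-count 1
```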
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+Assuming your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.search%2Fazure-search-create%2Fazuredeploy.json)
search Search Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-bicep.md
This article walks you through the process for using a Bicep file to deploy an A
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
-Only those properties included in the template are used in the deployment. If more customization is required, such as [setting up network security](search-security-overview.md#network-security), you can [update the service configuration](/cli/azure/search/service?view=azure-cli-latest#az-search-service-update) as a post-deployment task.
+Only those properties included in the template are used in the deployment. If more customization is required, such as [setting up network security](search-security-overview.md#network-security), you can update the service as a post-deployment task. To customize an existing service with the fewest steps, use [Azure CLI](search-manage-azure-cli.md) or [Azure PowerShell](search-manage-powershell.md). If you're evaluating preview features, use the [Management REST API](search-manage-rest.md).
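As a sketch of the deployment step itself, the Bicep file can be deployed with the Azure CLI; the resource group name, location, and file name below are placeholders, and any parameters depend on the Bicep file you author in this quickstart.

```azurecli
# Create a resource group and deploy the Bicep file into it.
az group create --name my-resource-group --location westus2

az deployment group create \
  --resource-group my-resource-group \
  --template-file main.bicep
```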
## Prerequisites
search Search How To Index Power Query Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-index-power-query-data-sources.md
Title: Index data using Power Query connectors (preview)
+ Title: Power Query connectors (preview - retired)
description: Import data from different data sources using the Power Query connectors. -+ Previously updated : 12/17/2021 Last updated : 05/27/2022
-# Index data using Power Query connectors (preview)
+# Power Query connectors (preview - retired)
> [!IMPORTANT]
-> Power Query connector support is currently in a **gated public preview**. [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to request access.
+> Power Query connector support was introduced as a **gated public preview** under [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), but is now discontinued. If you have a search solution that uses a Power Query connector, please migrate to an alternative solution.
-If you are using an indexer to crawl external data sources for indexing, you can now use select [Power Query](/power-query/power-query-what-is-power-query) connectors for your data source connection in Azure Cognitive Search.
+## Migrate by November 28
-Power Query connectors can reach a broader range of data sources, including those on other cloud providers. New data sources supported in this preview include:
+The Power Query connector preview was announced in May 2021 and won't be moving forward into general availability. The following migration guidance is available for Snowflake and PostgreSQL. If you're using a different connector and need migration instructions, please use the email contact information provided in your preview sign-up to request help or open a ticket with Azure Support.
+
+## Prerequisites
+
+- An Azure Storage account. If you don't have one, [create a storage account](../storage/common/storage-account-create.md).
+- An Azure Data Factory. If you don't have one, [create a Data Factory](../data-factory/quickstart-create-data-factory-portal.md). See [Data Factory Pipelines Pricing](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) before implementation to understand the associated costs. Also, check [Data Factory pricing through examples](../data-factory/pricing-concepts.md).
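If you'd rather create these prerequisites from the command line, here's a hedged Azure CLI sketch; all names and the location are placeholders, and the Data Factory command requires the `datafactory` CLI extension.

```azurecli
# Create the staging storage account and the data factory used by the migration pipeline.
az storage account create \
  --name mystagingstorage \
  --resource-group my-resource-group \
  --location westus2 \
  --sku Standard_LRS

az extension add --name datafactory
az datafactory create \
  --name my-data-factory \
  --resource-group my-resource-group \
  --location westus2
```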
+
+## Migrate a Snowflake data pipeline
+
+This section explains how to copy data from a Snowflake database to an [Azure Cognitive Search index](search-what-is-an-index.md). There's no process for directly indexing from Snowflake to Azure Cognitive Search, so this section includes a staging phase that copies database content to an Azure Storage blob container. You'll then index from that staging container using a [Data Factory pipeline](../data-factory/quickstart-create-data-factory-portal.md).
+
+### Step 1: Retrieve Snowflake database information
+
+1. Go to [Snowflake](https://app.snowflake.com/) and sign in to your Snowflake account. A Snowflake account looks like *https://<account_name>.snowflakecomputing.com*.
+
+1. Once you're signed in, collect the following information from the left pane. You'll use this information in the next step:
+
+ - From **Data**, select **Databases** and copy the name of the database source.
+ - From **Admin**, select **Users & Roles** and copy the name of the user. Make sure the user has read permissions.
+ - From **Admin**, select **Accounts** and copy the **LOCATOR** value of the account.
+ - From the Snowflake URL, which looks similar to `https://app.snowflake.com/<region_name>/xy12345/organization`, copy the region name. For example, in `https://app.snowflake.com/south-central-us.azure/xy12345/organization`, the region name is `south-central-us.azure`.
+ - From **Admin**, select **Warehouses** and copy the name of the warehouse associated with the database you'll use as the source.
+
+### Step 2: Configure Snowflake Linked Service
+
+1. Sign in to [Azure Data Factory Studio](https://ms-adf.azure.com/) with your Azure account.
+
+1. Select your data factory and then select **Continue**.
+
+1. From the left menu, select the **Manage** icon.
+
+ :::image type="content" source="media/search-power-query-connectors/azure-data-factory-manage-icon.png" alt-text="Screenshot showing how to choose the Manage icon in Azure Data Factory to configure Snowflake Linked Service.":::
+
+1. Under **Linked services**, select **New**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service.png" alt-text="Screenshot showing how to choose New Linked Service in Azure Data Factory.":::
+
+1. On the right pane, in the data store search, enter "snowflake". Select the **Snowflake** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/snowflake-icon.png" alt-text="Screenshot showing how to choose Snowflake tile in new Linked Service data store." border="true":::
+
+1. Fill out the **New linked service** form with the data you collected in the previous step. The **Account name** includes a **LOCATOR** value and the region (for example: `xy56789south-central-us.azure`).
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service-snowflake-form.png" alt-text="Screenshot showing how to fill out Snowflake Linked Service form.":::
+
+1. After the form is completed, select **Test connection**.
+
+1. If the test is successful, select **Create**.
+
+### Step 3: Configure Snowflake Dataset
+
+1. From the left menu, select the **Author** icon.
+
+1. Select **Datasets**, and then select the Datasets Actions ellipses menu (`...`).
+
+ :::image type="content" source="media/search-power-query-connectors/author-datasets.png" alt-text="Screenshot showing how to choose the Author icon and datasets option.":::
+
+1. Select **New dataset**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset.png" alt-text="Screenshot showing how to choose a new dataset in Azure Data Factory for Snowflake.":::
+
+1. On the right pane, in the data store search, enter "snowflake". Select the **Snowflake** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset-snowflake.png" alt-text="Screenshot showing how to choose Snowflake from data source for Dataset.":::
+
+1. In **Set Properties**:
+ - Select the Linked Service you created in [Step 2](#step-2-configure-snowflake-linked-service).
+ - Select the table that you would like to import, and then select **OK**.
+
+ :::image type="content" source="media/search-power-query-connectors/set-snowflake-properties.png" alt-text="Screenshot showing how to configure dataset properties for Snowflake.":::
+
+1. Select **Save**.
+
+### Step 4: Create a new index in Azure Cognitive Search
+
+[Create a new index](/rest/api/searchservice/create-index) in your Azure Cognitive Search service with the same schema as the one you have currently configured for your Snowflake data.
+
+You can repurpose the index you're currently using for the Snowflake Power Query connector. In the Azure portal, find the index and then select **Index Definition (JSON)**. Select the definition and copy it to the body of your new index request.
+
+ :::image type="content" source="media/search-power-query-connectors/snowflake-index.png" alt-text="Screenshot showing how to copy existing Azure Cognitive Search index JSON configuration for existing Snowflake index.":::
+
+### Step 5: Configure Azure Cognitive Search Linked Service
+
+1. From the left menu, select the **Manage** icon.
+
+ :::image type="content" source="media/search-power-query-connectors/azure-data-factory-manage-icon.png" alt-text="Screenshot showing how to choose the Manage icon in Azure Data Factory to add a new linked service.":::
+
+1. Under **Linked services**, select **New**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service.png" alt-text="Screenshot showing how to choose New Linked Service in Azure Data Factory for Cognitive Search.":::
+
+1. On the right pane, in the data store search, enter "search". Select the **Azure Search** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/linked-service-search-new.png" alt-text="Screenshot showing how to choose New Linked Search in Azure Data Factory to import from Snowflake.":::
+
+1. Fill out the **New linked service** values:
+
+ - Choose the Azure subscription where your Azure Cognitive Search service resides.
+ - Choose the Azure Cognitive Search service that has your Power Query connector indexer.
+ - Select **Create**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service-search.png" alt-text="Screenshot showing how to choose New Linked Search Service in Azure Data Factory with its properties to import from Snowflake.":::
+
+### Step 6: Configure Azure Cognitive Search Dataset
+
+1. From the left menu, select the **Author** icon.
+
+1. Select **Datasets**, and then select the Datasets Actions ellipses menu (`...`).
+
+ :::image type="content" source="media/search-power-query-connectors/author-datasets.png" alt-text="Screenshot showing how to choose the Author icon and datasets option for Cognitive Search.":::
+
+1. Select **New dataset**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset.png" alt-text="Screenshot showing how to choose a new dataset in Azure Data Factory.":::
+
+1. On the right pane, in the data store search, enter "search". Select the **Azure Search** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset-search.png" alt-text="Screenshot showing how to choose an Azure Cognitive Search service for a Dataset in Azure Data Factory to use as sink.":::
+
+1. In **Set properties**:
+ - Select the Linked service recently created in [Step 5](#step-5-configure-azure-cognitive-search-linked-service).
+ - Choose the search index that you created in [Step 4](#step-4-create-a-new-index-in-azure-cognitive-search).
+ - Select **OK**.
+
+ :::image type="content" source="media/search-power-query-connectors/set-search-snowflake-properties.png" alt-text="Screenshot showing how to choose New Search Linked Service in Azure Data Factory for Snowflake.":::
+
+1. Select **Save**.
+
+### Step 7: Configure Azure Blob Storage Linked Service
+
+1. From the left menu, select the **Manage** icon.
+
+ :::image type="content" source="media/search-power-query-connectors/azure-data-factory-manage-icon.png" alt-text="Screenshot showing how to choose the Manage icon in Azure Data Factory to link a new service.":::
+
+1. Under **Linked services**, select **New**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service.png" alt-text="Screenshot showing how to choose New Linked Service in Azure Data Factory to assign a storage account.":::
+
+1. On the right pane, in the data store search, enter "storage". Select the **Azure Blob Storage** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service-blob.png" alt-text="Screenshot showing how to choose New Linked Blob Storage Service to use as sink for Snowflake in Azure Data Factory.":::
+
+1. Fill out the **New linked service** values:
+
+ - Choose the Authentication type: SAS URI. Only this authentication type can be used to import data from Snowflake into Azure Blob Storage.
+ - [Generate a SAS URL](../cognitive-services/Translator/document-translation/create-sas-tokens.md) for the storage account you'll be using for staging, and paste the Blob SAS URL into the SAS URL field (a CLI sketch for generating the SAS URL follows this step).
+ - Select **Create**.
+
+ :::image type="content" source="media/search-power-query-connectors/sas-url-storage-linked-service-snowflake.png" alt-text="Screenshot showing how to fill out New Linked Search Service form in Azure Data Factory with its properties to import from SnowFlake.":::
+
+### Step 8: Configure Storage dataset
+
+1. From the left menu, select the **Author** icon.
+
+1. Select **Datasets**, and then select the Datasets Actions ellipses menu (`...`).
+
+ :::image type="content" source="media/search-power-query-connectors/author-datasets.png" alt-text="Screenshot showing how to choose the Author icon and datasets option.":::
+
+1. Select **New dataset**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset.png" alt-text="Screenshot showing how to choose a new dataset for storage in Azure Data Factory.":::
+
+1. On the right pane, in the data store search, enter "storage". Select the **Azure Blob Storage** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset-blob-storage.png" alt-text="Screenshot showing how to choose a new blob storage data store in Azure Data Factory for staging.":::
+
+1. Select **DelimitedText** format and select **Continue**.
+
+1. In **Set Properties**:
+ - Under **Linked service**, select the linked service created in [Step 7](#step-7-configure-azure-blob-storage-linked-service).
+ - Under **File path**, choose the container that will be the sink for the staging process and select **OK**.
+
+ :::image type="content" source="media/search-power-query-connectors/set-delimited-text-properties.png" alt-text="Screenshot showing how to configure properties for storage dataset for Snowflake in Azure Data Factory.":::
+
+ - In **Row delimiter**, select *Line feed (\n)*.
+ - Check **First row as a header** box.
+ - Select **Save**.
+
+ :::image type="content" source="media/search-power-query-connectors/delimited-text-snowflake-save.png" alt-text="Screenshot showing how to save a DelimitedText configuration to be used as sink for Snowflake." border="true":::
+
+### Step 9: Configure Pipeline
+
+1. From the left menu, select the **Author** icon.
+
+1. Select **Pipelines**, and then select the Pipelines Actions ellipses menu (`...`).
+
+ :::image type="content" source="media/search-power-query-connectors/author-pipelines.png" alt-text="Screenshot showing how to choose the Author icon and Pipelines option to configure Pipeline for Snowflake data transformation.":::
+
+1. Select **New pipeline**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-pipeline.png" alt-text="Screenshot showing how to choose a new Pipeline in Azure Data Factory to create for Snowflake data ingestion.":::
+
+1. Create and configure the [Data Factory activities](../data-factory/concepts-pipelines-activities.md) that copy from Snowflake to the Azure Storage container:
+
+ - Expand **Move & transform** section and drag and drop **Copy Data** activity to the blank pipeline editor canvas.
+
+ :::image type="content" source="media/search-power-query-connectors/drag-and-drop-snowflake-copy-data.png" alt-text="Screenshot showing how to drag and drop a Copy data activity in Pipeline canvas to copy data from Snowflake.":::
+
+ - Open the **General** tab. Accept the default values unless you need to customize the execution.
+
+ - In the **Source** tab, select your Snowflake table. Leave the remaining options with the default values.
+
+ :::image type="content" source="media/search-power-query-connectors/source-snowflake.png" alt-text="Screenshot showing how to configure the Source in a pipeline to import data from Snowflake.":::
+
+ - In the **Sink** tab:
+
+ - Select the *Storage DelimitedText* dataset created in [Step 8](#step-8-configure-storage-dataset).
+ - In **File Extension**, add *.csv*.
+ - Leave the remaining options with the default values.
+
+ :::image type="content" source="media/search-power-query-connectors/delimited-text-sink.png" alt-text="Screenshot showing how to configure the sink in a Pipeline to move the data to Azure Storage from Snowflake.":::
+
+ - Select **Save**.
+
+1. Configure the activities that copy from Azure Storage Blob to a search index:
+
+ - Expand **Move & transform** section and drag and drop **Copy Data** activity to the blank pipeline editor canvas.
+
+ :::image type="content" source="media/search-power-query-connectors/index-from-storage-activity.png" alt-text="Screenshot showing how to drag and drop a Copy data activity in Pipeline canvas to index from Storage.":::
+
+ - In the **General** tab, accept the default values, unless you need to customize the execution.
+
+ - In the **Source** tab:
+
+ - Select the *Storage DelimitedText* dataset created in [Step 8](#step-8-configure-storage-dataset).
+ - In the **File path type** select *Wildcard file path*.
+ - Leave all remaining fields with default values.
+
+ :::image type="content" source="media/search-power-query-connectors/source-snowflake.png" alt-text="Screenshot showing how to configure the Source in a pipeline to import data from blob storage to Azure Cognitive Search index for staging phase.":::
+
+ - In the **Sink** tab, select your Azure Cognitive Search index. Leave the remaining options with the default values.
+
+ :::image type="content" source="media/search-power-query-connectors/search-sink.png" alt-text="Screenshot showing how to configure the Sink in a pipeline to import data from blob storage to Azure Cognitive Search index as final step from pipeline.":::
+
+ - Select **Save**.
+
+### Step 10: Configure Activity order
+
+1. In the Pipeline canvas editor, select the little green square at the edge of the pipeline activity tile. Drag it to the "Indexes from Storage Account to Azure Cognitive Search" activity to set the execution order.
+
+1. Select **Save**.
+
+ :::image type="content" source="media/search-power-query-connectors/pipeline-link-activities-snowflake-storage-index.png" alt-text="Screenshot showing how to link Pipeline activities to provide the order of execution for Snowflake.":::
+
+### Step 11: Add a Pipeline trigger
+
+1. Select [Add trigger](../data-factory/how-to-create-schedule-trigger.md) to schedule the pipeline run and select **New/Edit**.
+
+ :::image type="content" source="media/search-power-query-connectors/add-pipeline-trigger.png" alt-text="Screenshot showing how to add a new trigger for a Pipeline in Data Factory to run for Snowflake." border="true":::
+
+1. From the **Choose trigger** dropdown, select **New**.
+
+ :::image type="content" source="media/search-power-query-connectors/choose-trigger-new.png" alt-text="Screenshot showing how to select adding a new trigger for a Pipeline in Data Factory for Snowflake.":::
+
+1. Review the trigger options to run the pipeline and select **OK**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-trigger.png" alt-text="Screenshot showing how to configure a trigger to run a Pipeline in Data Factory for Snowflake.":::
+
+1. Select **Save**.
+
+1. Select **Publish**.
+
+ :::image type="content" source="media/search-power-query-connectors/publish-pipeline.png" alt-text="How to Publish a Pipeline in Data Factory for Snowflake ingestion to index." border="true":::
+
+## Migrate a PostgreSQL data pipeline
+
+This section explains how to copy data from a PostgreSQL database to an [Azure Cognitive Search index](search-what-is-an-index.md). There's no process for directly indexing from PostgreSQL to Azure Cognitive Search, so this section includes a staging phase that copies database content to an Azure Storage blob container. You'll then index from that staging container using a [Data Factory pipeline](../data-factory/quickstart-create-data-factory-portal.md).
+
+### Step 1: Configure PostgreSQL Linked Service
+
+1. Sign in to [Azure Data Factory Studio](https://ms-adf.azure.com/) with your Azure account.
+
+1. Choose your Data Factory and select **Continue**.
+
+1. From the left menu, select the **Manage** icon.
+
+ :::image type="content" source="media/search-power-query-connectors/azure-data-factory-manage-icon.png" alt-text="How to choose the Manage icon in Azure Data Factory.":::
+
+1. Under **Linked services**, select **New**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service.png" alt-text="Screenshot showing how to choose New Linked Service in Azure Data Factory.":::
+
+1. On the right pane, in the data store search, enter "postgresql". Select the **PostgreSQL** tile that represents where your PostgreSQL database is located (Azure or other) and select **Continue**. In this example, PostgreSQL database is located in Azure.
+
+ :::image type="content" source="media/search-power-query-connectors/search-postgresql-data-store.png" alt-text="How to choose PostgreSQL data store for a Linked Service in Azure Data Factory.":::
+
+1. Fill out the **New linked service** values:
+
+ - In **Account selection method**, select **Enter manually**.
+ - From your Azure Database for PostgreSQL Overview page in the [Azure portal](https://portal.azure.com/), paste the following values into their respective field:
+ - Add *Server name* to **Fully qualified domain name**.
+ - Add *Admin username* to **User name**.
+ - Add *Database* to **Database name**.
+ - Enter the Admin username password to **Username password**.
+ - Select **Create**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service-postgresql.png" alt-text="Choose the Manage icon in Azure Data Factory":::
+
+### Step 2: Configure PostgreSQL Dataset
+
+1. From the left menu, select the **Author** icon.
+
+1. Select **Datasets**, and then select the Datasets Actions ellipses menu (`...`).
+
+ :::image type="content" source="media/search-power-query-connectors/author-datasets.png" alt-text="Screenshot showing how to choose the Author icon and datasets option.":::
+
+1. Select **New dataset**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset.png" alt-text="Screenshot showing how to choose a new dataset in Azure Data Factory.":::
+
+1. On the right pane, in the data store search, enter "postgresql". Select the **Azure PostgreSQL** tile. Select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset-postgresql.png" alt-text="Screenshot showing how to choose PostgreSQL data store for a Dataset in Azure Data Factory." border="true":::
+
+1. Fill out the **Set properties** values:
+
+ - Choose the PostgreSQL Linked Service created in [Step 1](#step-1-configure-postgresql-linked-service).
+ - Select the table you would like to import/index.
+ - Select **OK**.
+
+ :::image type="content" source="media/search-power-query-connectors/postgresql-set-properties.png" alt-text="Screenshot showing how to set PostgreSQL properties for dataset in Azure Data Factory.":::
+
+1. Select **Save**.
+
+### Step 3: Create a new index in Azure Cognitive Search
+
+[Create a new index](/rest/api/searchservice/create-index) in your Azure Cognitive Search service with the same schema as the one used for your PostgreSQL data.
+
+You can repurpose the index you're currently using for the PostgreSQL Power Query connector. In the Azure portal, find the index and then select **Index Definition (JSON)**. Select the definition and copy it to the body of your new index request.
+
+ :::image type="content" source="media/search-power-query-connectors/postgresql-index.png" alt-text="Screenshot showing how to copy existing Azure Cognitive Search index JSON configuration.":::
+
+### Step 4: Configure Azure Cognitive Search Linked Service
+
+1. From the left menu, select the **Manage** icon.
+
+ :::image type="content" source="media/search-power-query-connectors/azure-data-factory-manage-icon.png" alt-text="Screenshot showing how to choose the Manage icon in Azure Data Factory to link a service.":::
+
+1. Under **Linked services**, select **New**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service.png" alt-text="Screenshot showing how to choose New Linked Service in Azure Data Factory.":::
+
+1. On the right pane, in the data store search, enter "search". Select **Azure Search** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/linked-service-search-new.png" alt-text="Screenshot showing how to choose New Linked Search service in Azure Data Factory." border="true":::
+
+1. Fill out the **New linked service** values:
+
+ - Choose the Azure subscription where your Azure Cognitive Search service resides.
+ - Choose the Azure Cognitive Search service that has your Power Query connector indexer.
+ - Select **Create**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service-search.png" alt-text="Screenshot showing how to choose New Linked Search Service in Azure Data Factory with its properties to import from PostgreSQL.":::
+
+### Step 5: Configure Azure Cognitive Search Dataset
+
+1. From the left menu, select the **Author** icon.
+
+1. Select **Datasets**, and then select the Datasets Actions ellipses menu (`...`).
+
+ :::image type="content" source="media/search-power-query-connectors/author-datasets.png" alt-text="Screenshot showing how to choose the Author icon and datasets option.":::
+
+1. Select **New dataset**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset.png" alt-text="Screenshot showing how to choose a new dataset in Azure Data Factory.":::
+
+1. On the right pane, in the data store search, enter "search". Select the **Azure Search** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset-search.png" alt-text="Screenshot showing how to choose an Azure Cognitive Search service for a Dataset in Azure Data Factory.":::
+
+1. In **Set properties**:
+
+ - Select the Linked service created for Azure Cognitive Search in [Step 4](#step-4-configure-azure-cognitive-search-linked-service).
+ - Choose the index that you created as part of [Step 3](#step-3-create-a-new-index-in-azure-cognitive-search).
+ - Select **OK**.
+
+ :::image type="content" source="media/search-power-query-connectors/set-search-postgresql-properties.png" alt-text="Screenshot showing how to fill out Set Properties for search dataset.":::
+
+1. Select **Save**.
+
+### Step 6: Configure Azure Blob Storage Linked Service
+
+1. From the left menu, select the **Manage** icon.
+
+ :::image type="content" source="media/search-power-query-connectors/azure-data-factory-manage-icon.png" alt-text="Screenshot showing how to choose the Manage icon in Azure Data Factory to link a service.":::
+
+1. Under **Linked services**, select **New**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service.png" alt-text="Screenshot showing how to choose New Linked Service in Azure Data Factory.":::
+
+1. On the right pane, in the data store search, enter "storage". Select the **Azure Blob Storage** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-linked-service-blob.png" alt-text="Screenshot showing how to choose a new data store":::
+
+1. Fill out the **New linked service** values:
+
+ - Choose the **Authentication type**: *SAS URI*. Only this method can be used to import data from PostgreSQL into Azure Blob Storage.
+ - [Generate a SAS URL](../cognitive-services/Translator/document-translation/create-sas-tokens.md) for the storage account you will be using for staging, and copy the Blob SAS URL into the SAS URL field.
+ - Select **Create**.
+
+ :::image type="content" source="media/search-power-query-connectors/sas-url-storage-linked-service-postgresql.png" alt-text="Screenshot showing how to fill out New Linked Search Service form in Azure Data Factory with its properties to import from PostgreSQL.":::
+
+### Step 7: Configure Storage dataset
+
+1. From the left menu, select the **Author** icon.
+
+1. Select **Datasets**, and then select the Datasets Actions ellipses menu (`...`).
+
+ :::image type="content" source="media/search-power-query-connectors/author-datasets.png" alt-text="Screenshot showing how to choose the Author icon and datasets option.":::
+
+1. Select **New dataset**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset.png" alt-text="Screenshot showing how to choose a new dataset in Azure Data Factory.":::
+
+1. On the right pane, in the data store search, enter "storage". Select the **Azure Blob Storage** tile and select **Continue**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-dataset-blob-storage.png" alt-text="Screenshot showing how to choose a new blob storage data store in Azure Data Factory.":::
+
+1. Select **DelimitedText** format and select **Continue**.
+
+1. In **Row delimiter**, select *Line feed (\n)*.
+
+1. Check **First row as a header** box.
+
+1. Select **Save**.
+
+ :::image type="content" source="media/search-power-query-connectors/delimited-text-save-postgresql.png" alt-text="Screenshot showing options to import data to Azure Storage blob." border="true":::
+
+### Step 8: Configure Pipeline
+
+1. From the left menu, select the **Author** icon.
+
+1. Select **Pipelines**, and then select the Pipelines Actions ellipses menu (`...`).
+
+ :::image type="content" source="media/search-power-query-connectors/author-pipelines.png" alt-text="Screenshot showing how to choose the Author icon and Pipelines option.":::
+
+1. Select **New pipeline**.
+
+ :::image type="content" source="media/search-power-query-connectors/new-pipeline.png" alt-text="Screenshot showing how to choose a new Pipeline in Azure Data Factory.":::
+
+1. Create and configure the [Data Factory activities](../data-factory/concepts-pipelines-activities.md) that copy from PostgreSQL to the Azure Storage container.
+
+ - Expand **Move & transform** section and drag and drop **Copy Data** activity to the blank pipeline editor canvas.
+
+ :::image type="content" source="media/search-power-query-connectors/postgresql-pipeline-general.png" alt-text="Screenshot showing how to drag and drop in Azure Data Factory to copy data from PostgreSQL." border="true":::
+
+ - Open the **General** tab and accept the default values unless you need to customize the execution.
+
+ - In the **Source** tab, select your PostgreSQL table. Leave the remaining options with the default values.
+
+ :::image type="content" source="media/search-power-query-connectors/source-postgresql.png" alt-text="Screenshot showing how to configure Source to import data from PostgreSQL into Azure Storage blob in staging phase." border="true":::
+
+ - In the **Sink** tab:
+ - Select the Storage DelimitedText PostgreSQL dataset configured in [Step 7](#step-7-configure-storage-dataset).
+ - In **File Extension**, add *.csv*
+ - Leave the remaining options with the default values.
+
+ :::image type="content" source="media/search-power-query-connectors/sink-storage-postgresql.png" alt-text="Screenshot showing how to configure sink to import data from PostgreSQL into Azure Storage blob." border="true":::
+
+ - Select **Save**.
+
+1. Configure the activities that copy from Azure Storage to a search index:
+
+ - Expand **Move & transform** section and drag and drop **Copy Data** activity to the blank pipeline editor canvas.
+
+ :::image type="content" source="media/search-power-query-connectors/index-from-storage-activity-postgresql.png" alt-text="Screenshot showing how to drag and drop in Azure Data Factory to configure a copy activity." border="true":::
+
+ - In the **General** tab, leave the default values, unless you need to customize the execution.
+
+ - In the **Source** tab:
+ - Select the Storage source dataset configured in [Step 7](#step-7-configure-storage-dataset).
+ - In the **File path type** field, select *Wildcard file path*.
+ - Leave all remaining fields with default values.
+
+ :::image type="content" source="media/search-power-query-connectors/source-storage-postgresql.png" alt-text="Screenshot showing how to configure Source for indexing from Storage to Azure Cognitive Search index." border="true":::
+
+ - In the **Sink** tab, select your Azure Cognitive Search index. Leave the remaining options with the default values.
+
+ :::image type="content" source="media/search-power-query-connectors/sink-search-index-postgresql.png" alt-text="Screenshot showing how to configure Sink for indexing from Storage to Azure Cognitive Search index." border="true":::
+
+ - Select **Save**.
+
+### Step 9: Configure Activity order
+
+1. In the Pipeline canvas editor, select the little green square at the edge of the pipeline activity. Drag it to the "Indexes from Storage Account to Azure Cognitive Search" activity to set the execution order.
+
+1. Select **Save**.
+
+ :::image type="content" source="media/search-power-query-connectors/pipeline-link-acitivities-postgresql.png" alt-text="Screenshot showing how to configure activity order in the pipeline for proper execution." border="true":::
+
+### Step 10: Add a Pipeline trigger
+
+1. Select [Add trigger](../data-factory/how-to-create-schedule-trigger.md) to schedule the pipeline run and select **New/Edit**.
+
+ :::image type="content" source="media/search-power-query-connectors/add-pipeline-trigger-postgresql.png" alt-text="Screenshot showing how to add a new trigger for a Pipeline in Data Factory." border="true":::
+
+1. From the **Choose trigger** dropdown, select **New**.
+
+ :::image type="content" source="media/search-power-query-connectors/choose-trigger-new.png" alt-text="Screenshot showing how to select adding a new trigger for a Pipeline in Data Factory." border="true":::
+
+1. Review the trigger options to run the pipeline and select **OK**.
+
+ :::image type="content" source="media/search-power-query-connectors/trigger-postgresql.png" alt-text="Screenshot showing how to configure a trigger to run a Pipeline in Data Factory." border="true":::
+
+1. Select **Save**.
+
+1. Select **Publish**.
+
+ :::image type="content" source="media/search-power-query-connectors/publish-pipeline-postgresql.png" alt-text="Screenshot showing how to Publish a Pipeline in Data Factory for PostgreSQL data copy." border="true":::
+
+## Legacy content for Power Query connector preview
+
+A Power Query connector is used with a search indexer to automate data ingestion from various data sources, including those on other cloud providers. It uses [Power Query](/power-query/power-query-what-is-power-query) to retrieve the data.
+
+Data sources supported in the preview include:
+ Amazon Redshift + Elasticsearch
Power Query connectors can reach a broader range of data sources, including thos
+ Smartsheet + Snowflake
-This article shows you an Azure portal-based approach for setting up an indexer using Power Query connectors. Currently, there is no SDK support.
-
-> [!NOTE]
-> Preview functionality is provided under [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) and is not recommended for production workloads.
-
-## Supported functionality
+### Supported functionality
Power Query connectors are used in indexers. An indexer in Azure Cognitive Search is a crawler that extracts searchable data and metadata from an external data source and populates an index based on field-to-field mappings between the index and your data source. This approach is sometimes referred to as a 'pull model' because the service pulls data in without you having to write any code that adds data to an index. Indexers provide a convenient way for users to index content from their data source without having to write their own crawler or push model. Indexers that reference Power Query data sources have the same level of support for skillsets, schedules, high water mark change detection logic, and most parameters that other indexers support.
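As a generic illustration of that pull model (not specific to the retired Power Query preview), an indexer definition ties an existing data source to an index and can run on a schedule; the names below are placeholders, and both the data source and index must already exist.

```bash
# Create an indexer that pulls from an existing data source into an existing index.
curl -X POST "https://<your-search-service>.search.windows.net/indexers?api-version=2020-06-30" \
  -H "Content-Type: application/json" \
  -H "api-key: <admin-api-key>" \
  -d '{
        "name": "my-indexer",
        "dataSourceName": "my-data-source",
        "targetIndexName": "my-index",
        "schedule": { "interval": "PT2H" }
      }'
```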
-## Prerequisites
+### Prerequisites
Before you start pulling data from one of the supported data sources, you'll want to make sure you have all your resources set up.
Before you start pulling data from one of the supported data sources, you'll wan
+ Azure Blob Storage account, used as an intermediary for your data. The data will flow from your data source, then to Blob Storage, then to the index. This requirement only exists with the initial gated preview.
-## Regional availability
+### Regional availability
The preview is only available on search services in the following regions:
The preview is only available on search services in the following regions:
+ West US + West US 2
-## Preview limitations
+### Preview limitations
-There is a lot to be excited about with this preview, but there are a few limitations. This section describes the limitations that are specific to the current version of the preview.
+This section describes the limitations that are specific to the current version of the preview.
-+ Pulling binary data from your data source is not supported in this version of the preview.
++ Pulling binary data from your data source isn't supported.
-+ [Debug sessions](cognitive-search-debug-session.md) are not supported at this time.
++ [Debug session](cognitive-search-debug-session.md) isn't supported.
-## Getting started using the Azure portal
+### Getting started using the Azure portal
The Azure portal provides support for the Power Query connectors. By sampling data and reading metadata on the container, the Import data wizard in Azure Cognitive Search can create a default index, map source fields to target index fields, and load the index in a single operation. Depending on the size and complexity of source data, you could have an operational full text search index in minutes.
The Azure portal provides support for the Power Query connectors. By sampling da
> [!VIDEO https://www.youtube.com/embed/uy-l4xFX1EE]
-### Step 1 – Prepare source data
+#### Step 1 – Prepare source data
Make sure your data source contains data. The Import data wizard reads metadata and performs data sampling to infer an index schema, but it also loads data from your data source. If the data is missing, the wizard will stop and return an error.
-### Step 2 – Start Import data wizard
+#### Step 2 – Start Import data wizard
-After you're approved for the preview, the Azure Cognitive Search team will provide you with an Azure portal link that uses a feature flag so that you can access the Power Query connectors. Open this page and start the start the wizard from the command bar in the Azure Cognitive Search service page by selecting **Import data**.
+After you're approved for the preview, the Azure Cognitive Search team will provide you with an Azure portal link that uses a feature flag so that you can access the Power Query connectors. Open this page and start the wizard from the command bar in the Azure Cognitive Search service page by selecting **Import data**.
:::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
-### Step 3 – Select your data source
+#### Step 3 – Select your data source
There are a few data sources that you can pull data from using this preview. All data sources that use Power Query include "Powered By Power Query" on their tile. Select your data source. :::image type="content" source="media/search-power-query-connectors/power-query-import-data.png" alt-text="Screenshot of the Select a data source page." border="true":::
-Once you've selected your data source, select **Next: Configure your data** to move to the next section.
+After you've selected your data source, select **Next: Configure your data** to move to the next section.
-### Step 4 – Configure your data
+#### Step 4 – Configure your data
-Once you've selected your data source, you'll configure your connection. Each data source will require different information. For a few data sources, the Power Query documentation provides additional details on how to connect to your data.
+In this step, you'll configure your connection. Each data source will require different information. For a few data sources, the Power Query documentation provides more detail on how to connect to your data.
+ [PostgreSQL](/power-query/connectors/postgresql) + [Salesforce Objects](/power-query/connectors/salesforceobjects) + [Salesforce Reports](/power-query/connectors/salesforcereports)
-Once you've provided your connection credentials, select **Next**.
+After you've provided your connection credentials, select **Next**.
-### Step 5 – Select your data
+#### Step 5 – Select your data
-The import wizard will preview various tables that are available in your data source. In this step you'll check one table that contains the data you want to import into your index.
+The import wizard will preview various tables that are available in your data source. In this step, you'll check one table that contains the data you want to import into your index.
:::image type="content" source="media/search-power-query-connectors/power-query-preview-data.png" alt-text="Screenshot of data preview." border="true"::: Once you've selected your table, select **Next**.
-### Step 6 – Transform your data (Optional)
+#### Step 6 – Transform your data (Optional)
Power Query connectors provide you with a rich UI experience that allows you to manipulate your data so you can send the right data to your index. You can remove columns, filter rows, and much more.
It's not required that you transform your data before importing it into Azure Co
For more information about transforming data with Power Query, look at [Using Power Query in Power BI Desktop](/power-query/power-query-quickstart-using-power-bi).
-Once you're done transforming your data, select **Next**.
+After data is transformed, select **Next**.
-### Step 7 – Add Azure Blob storage
+#### Step 7 – Add Azure Blob storage
The Power Query connector preview currently requires you to provide a blob storage account. This step only exists with the initial gated preview. This blob storage account will serve as temporary storage for data that moves from your data source to an Azure Cognitive Search index.
You can get the connection string from the Azure portal by navigating to the sto
After you've provided a data source name and connection string, select "Next: Add cognitive skills (Optional)".
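As an optional sanity check before running the wizard, the following sketch (assuming the azure-storage-blob package and a placeholder connection string) confirms that the connection string works by listing containers.

```python
# Sketch: verify a storage connection string by listing the account's containers.
# Requires: pip install azure-storage-blob. The connection string is a placeholder.
from azure.storage.blob import BlobServiceClient

conn_str = "<storage-connection-string>"  # placeholder
client = BlobServiceClient.from_connection_string(conn_str)
for container in client.list_containers():
    print(container.name)
```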
-### Step 8 – Add cognitive skills (Optional)
+#### Step 8 – Add cognitive skills (Optional)
[AI enrichment](cognitive-search-concept-intro.md) is an extension of indexers that can be used to make your content more searchable.
-This is an optional step for this preview. When complete, select **Next: Customize target index**.
+You can add any enrichments that add benefit to your scenario. When complete, select **Next: Customize target index**.
-### Step 9 – Customize target index
+#### Step 9 – Customize target index
On the Index page, you should see a list of fields with a data type and a series of checkboxes for setting index attributes. The wizard can generate a fields list based on metadata and by sampling the source data.
-You can bulk-select attributes by clicking the checkbox at the top of an attribute column. Choose Retrievable and Searchable for every field that should be returned to a client app and subject to full text search processing. You'll notice that integers are not full text or fuzzy searchable (numbers are evaluated verbatim and are often useful in filters).
+You can bulk-select attributes by selecting the checkbox at the top of an attribute column. Choose Retrievable and Searchable for every field that should be returned to a client app and subject to full text search processing. You'll notice that integers aren't full text or fuzzy searchable (numbers are evaluated verbatim and are often useful in filters).
Review the description of index attributes and language analyzers for more information.
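As a rough sketch of how those attribute checkboxes translate into an index definition (the field names and types below are hypothetical, not from the article):

```python
# Hypothetical index definition fragment. "searchable" and "retrievable"
# correspond to the Searchable and Retrievable checkboxes; the Edm.Int32 field
# is filterable but not full-text searchable.
index_definition = {
    "name": "my-index",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True, "retrievable": True},
        {"name": "description", "type": "Edm.String", "searchable": True, "retrievable": True},
        {"name": "zipcode", "type": "Edm.String", "searchable": True, "filterable": True},
        {"name": "quantity", "type": "Edm.Int32", "filterable": True, "retrievable": True},
    ],
}
```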
Take a moment to review your selections. Once you run the wizard, physical data
When complete, select **Next: Create an Indexer**.
-### Step 10 – Create an indexer
+#### Step 10 – Create an indexer
The last step creates the indexer. Naming the indexer allows it to exist as a standalone resource, which you can schedule and manage independently of the index and data source objects created in the same wizard sequence.
When creating the indexer, you can optionally choose to run the indexer on a sch
:::image type="content" source="media/search-power-query-connectors/power-query-indexer-configuration.png" alt-text="Screenshot of Create your indexer page." border="true":::
-Once you've finished filling out this page select **Submit**.
+After you've finished filling out this page, select **Submit**.
-## High Water Mark Change Detection policy
+### High Water Mark Change Detection policy
This change detection policy relies on a "high water mark" column capturing the version or time when a row was last updated.
-### Requirements
+#### Requirements
+ All inserts specify a value for the column. + All updates to an item also change the value of the column. + The value of this column increases with each insert or update.
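A minimal sketch of a data source definition that enables this policy follows; the data source type, connection details, and the "ModifiedDate" column are assumptions for illustration.

```python
# Sketch: data source with a high water mark change detection policy.
# All names, credentials, and the api-version are placeholders.
import requests

SERVICE = "my-search-service"
ADMIN_KEY = "<admin-api-key>"

datasource = {
    "name": "my-datasource",
    "type": "azuresql",  # example type for illustration
    "credentials": {"connectionString": "<connection-string>"},
    "container": {"name": "SalesOrders"},
    "dataChangeDetectionPolicy": {
        "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
        "highWaterMarkColumnName": "ModifiedDate",
    },
}

resp = requests.put(
    f"https://{SERVICE}.search.windows.net/datasources/{datasource['name']}",
    params={"api-version": "2020-06-30"},
    headers={"api-key": ADMIN_KEY, "Content-Type": "application/json"},
    json=datasource,
)
resp.raise_for_status()
```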
-## Unsupported column names
+### Unsupported column names
-Field names in an Azure Cognitive Search index have to meet certain requirements. One of these requirements is that some characters such as "/" are not allowed. If a column name in your database does not meet these requirements, the index schema detection will not recognize your column as a valid field name and you won't see that column listed as a suggested field for your index. Normally, using [field mappings](search-indexer-field-mappings.md) would solve this problem but field mappings are not supported in the portal.
+Field names in an Azure Cognitive Search index have to meet certain requirements. One of these requirements is that some characters such as "/" aren't allowed. If a column name in your database does not meet these requirements, the index schema detection won't recognize your column as a valid field name and you won't see that column listed as a suggested field for your index. Normally, using [field mappings](search-indexer-field-mappings.md) would solve this problem but field mappings aren't supported in the portal.
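For indexers defined through the REST API rather than this portal-only preview, a field mapping is a small fragment of the indexer definition; the column and field names below are illustrative.

```python
# Hypothetical indexer fragment: maps a source column whose name is not a valid
# index field name to a valid target field. Field mappings live on the indexer,
# not on the index.
indexer_fragment = {
    "fieldMappings": [
        {"sourceFieldName": "Billing code/Zip code", "targetFieldName": "zipcode"}
    ]
}
```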
To index content from a column in your table that has an unsupported field name, rename the column during the "Transform your data" phase of the import data process. For example, you can rename a column named "Billing code/Zip code" to "zipcode". By renaming the column, the index schema detection will recognize it as a valid field name and add it as a suggestion to your index definition. ## Next steps
-You have learned how to pull data from new data sources using the Power Query connectors. To learn more about indexers, see [Indexers in Azure Cognitive Search](search-indexer-overview.md).
+This article explained how to pull data using the Power Query connectors. Because this preview feature is discontinued, it also explains how to migrate existing solutions to a supported scenario.
+
+To learn more about indexers, see [Indexers in Azure Cognitive Search](search-indexer-overview.md).
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
You can use the Management REST API instead of the portal to assign a user-assig
If your Azure resource is behind a firewall, make sure there's an inbound rule that admits requests from your search service.
-+ For same-region connections to Azure Blob Storage or Azure Data Lake Storage Gen2, use a system managed identity and the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md). Optionally, you can configure a [resource instance rule (preview)](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview) to admit requests.
++ For same-region connections to Azure Blob Storage or Azure Data Lake Storage Gen2, use a system managed identity and the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md). Optionally, you can configure a [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances) to admit requests. + For all other resources and connections, [configure an IP firewall rule](search-indexer-howto-access-ip-restricted.md) that admits requests from Search. See [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md) for details.
search Search Howto Managed Identities Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-storage.md
This article assumes familiarity with indexer concepts and configuration. If you
For a code example in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub. > [!NOTE]
-> If storage is network-protected and in the same region as your search service, you must use a system-assigned managed identity and either one of the following network options: [connect as a trusted service](search-indexer-howto-access-trusted-service-exception.md), or [connect using the resource instance rule (preview)](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview).
+> If storage is network-protected and in the same region as your search service, you must use a system-assigned managed identity and either one of the following network options: [connect as a trusted service](search-indexer-howto-access-trusted-service-exception.md), or [connect using the resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances).
## Prerequisites
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-ip-restricted.md
On behalf of an indexer, a search service will issue outbound calls to an extern
This article explains how to find the IP address of your search service and configure an inbound IP rule on an Azure Storage account. While specific to Azure Storage, this approach also works for other Azure resources that use IP firewall rules for data access, such as Cosmos DB and Azure SQL. > [!NOTE]
-> A storage account and your search service must be in different regions if you want to define IP firewall rules. If your setup doesn't permit this, try the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) or [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview) instead.
+> A storage account and your search service must be in different regions if you want to define IP firewall rules. If your setup doesn't permit this, try the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) or [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances) instead.
## Get a search service IP address
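As a minimal illustration (with a placeholder service name), a DNS lookup on the service's fully qualified domain name returns its public IP address:

```python
# Sketch: resolve the public IP address of a search service endpoint.
import socket

service_fqdn = "my-search-service.search.windows.net"  # placeholder
print(socket.gethostbyname(service_fqdn))
```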
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
There are two options for supporting data access using the system identity:
- Configure search to run as a [trusted service](search-indexer-howto-access-trusted-service-exception.md) and use the [trusted service exception](../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity) in Azure Storage. -- Configure a [resource instance rule (preview)](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview) in Azure Storage that admits inbound requests from an Azure resource.
+- Configure a [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances) in Azure Storage that admits inbound requests from an Azure resource.
The above options depend on Azure Active Directory for authentication, which means that the connection must be made with an Azure AD login. Currently, only a Cognitive Search [system-assigned managed identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity) is supported for same-region connections through a firewall.
search Search What Is Data Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-data-import.md
The pull model crawls a supported data source and automatically uploads the data
+ [Azure Cosmos DB](search-howto-index-cosmosdb.md) + [Azure SQL Database, SQL Managed Instance, and SQL Server on Azure VMs](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) + [SharePoint in Microsoft 365 (preview)](search-howto-index-sharepoint-online.md)
-+ [Power Query data connectors (preview)](search-how-to-index-power-query-data-sources.md)
Indexers connect an index to a data source (usually a table, view, or equivalent structure), and map source fields to equivalent fields in the index. During execution, the rowset is automatically transformed to JSON and loaded into the specified index. All indexers support schedules so that you can specify how frequently the data is to be refreshed. Most indexers provide change tracking if the data source supports it. By tracking changes and deletes to existing documents in addition to recognizing new documents, indexers remove the need to actively manage the data in your index.
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 03/17/2022 Last updated : 05/27/2022 # What's new in Azure Cognitive Search Learn what's new in the service. Bookmark this page to keep up to date with service updates. Check out the [**Preview feature list**](search-api-preview.md) for an itemized list of features that are not yet approved for production workloads.
+## May 2022
+
+|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
+||--||
+| [Power Query connector preview](search-how-to-index-power-query-data-sources.md) | This indexer data source was introduced in May 2021 but will not be moving forward. Please migrate your data ingestion code by November 2022. See the feature documentation for migration guidance. | Retired |
+ ## February 2022 |Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
security Secure Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-develop.md
The verification phase involves a comprehensive effort to ensure that the code m
You scan your application and its dependent libraries to identify any known vulnerable components. Products that are available to perform this scan include [OWASP Dependency Check](https://www.owasp.org/index.php/OWASP_Dependency_Check),[Snyk](https://snyk.io/), and [Black Duck](https://www.blackducksoftware.com/).
-Vulnerability scanning powered by [Tinfoil Security](https://www.tinfoilsecurity.com/) is available for Azure App Service Web Apps. [Tinfoil Security scanning through App Service](https://azure.microsoft.com/blog/web-vulnerability-scanning-for-azure-app-service-powered-by-tinfoil-security/) offers developers and administrators a fast, integrated, and economical means of discovering and addressing vulnerabilities before a malicious actor can take advantage of them.
-
-> [!NOTE]
-> You can also [integrate Tinfoil Security with Azure AD](../../active-directory/saas-apps/tinfoil-security-tutorial.md). Integrating Tinfoil Security with Azure AD provides you with the
-following benefits:
-> - In Azure AD, you can control who has access to Tinfoil Security.
-> - Your users can be automatically signed in to Tinfoil Security (single sign-on) by using their Azure AD accounts.
-> - You can manage your accounts in a single, central location, the Azure portal.
- ### Test your application in an operating state Dynamic application security testing (DAST) is a process of testing an application in an operating state to find security vulnerabilities. DAST tools analyze programs while they are executing to find security vulnerabilities such as memory corruption, insecure server configuration, cross-site scripting, user privilege issues, SQL injection, and other critical security concerns.
security Encryption Atrest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-atrest.md
Resource providers and application instances store the encrypted Data Encryption
Microsoft Cloud services are used in all three cloud models: IaaS, PaaS, SaaS. Below you have examples of how they fit on each model: -- Software services, referred to as Software as a Server or SaaS, which have applications provided by the cloud such as Microsoft 365.
+- Software services, referred to as Software as a Service or SaaS, which have applications provided by the cloud such as Microsoft 365.
- Platform services in which customers use the cloud for things like storage, analytics, and service bus functionality in their applications. - Infrastructure services, or Infrastructure as a Service (IaaS) in which customer deploys operating systems and applications that are hosted in the cloud and possibly leveraging other cloud services.
security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/overview.md
Azure Monitor logs can be a useful tool in forensic and other security analysis,
The section provides additional information regarding key features in application security and summary information about these capabilities.
-### Web Application vulnerability scanning
-
-One of the easiest ways to get started with testing for vulnerabilities on your [App Service app](../../app-service/overview.md) is to use the [integration with Tinfoil Security](https://azure.microsoft.com/blog/web-vulnerability-scanning-for-azure-app-service-powered-by-tinfoil-security/) to perform one-click vulnerability scanning on your app. You can view the test results in an easy-to-understand report, and learn how to fix each vulnerability with step-by-step instructions.
- ### Penetration Testing We don't perform [penetration testing](./pen-testing.md) of your application for you, but we do understand that you want and need to perform testing on your own applications. That's a good thing, because when you enhance the security of your applications you help make the entire Azure ecosystem more secure. While notifying Microsoft of pen testing activities is no longer required, customers must still comply with the [Microsoft Cloud Penetration Testing Rules of Engagement](https://www.microsoft.com/msrc/pentest-rules-of-engagement).
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md
Azure also provides several easy-to-use features to help secure both inbound and
- [Restrict access to your app by client's behavior - request frequency and concurrency](http://microsoftazurewebsitescheatsheet.info/#dynamic-ip-restrictions) -- [Scan your web app code for vulnerabilities using Tinfoil Security Scanning](https://azure.microsoft.com/blog/web-vulnerability-scanning-for-azure-app-service-powered-by-tinfoil-security/)- - [Configure TLS mutual authentication to require client certificates to connect to your web app](../../app-service/app-service-web-configure-tls-mutual-auth.md) - [Configure a client certificate for use from your app to securely connect to external resources](https://azure.microsoft.com/blog/using-certificates-in-azure-websites-applications/)
sentinel Design Your Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/design-your-workspace-architecture.md
However, this recommendation for separate workspaces for non-SOC data comes from
When planning to use resource-context or table level RBAC, consider the following information: -- <a name="note7"></a>[Decision tree note #7](#decision-tree): To configure resource-context RBAC for non-Azure resources, you may want to associate a Resource ID to the data when sending to Microsoft Sentinel, so that the permission can be scoped using resource-context RBAC. For more information, see [Explicitly configure resource-context RBAC](resource-context-rbac.md#explicitly-configure-resource-context-rbac) and [Access modes by deployment](../azure-monitor/logs/design-logs-deployment.md).
+- <a name="note7"></a>[Decision tree note #7](#decision-tree): To configure resource-context RBAC for non-Azure resources, you may want to associate a Resource ID to the data when sending to Microsoft Sentinel, so that the permission can be scoped using resource-context RBAC. For more information, see [Explicitly configure resource-context RBAC](resource-context-rbac.md#explicitly-configure-resource-context-rbac) and [Access modes by deployment](../azure-monitor/logs/workspace-design.md).
-- <a name="note8"></a>[Decision tree note #8](#decision-tree): [Resource permissions](../azure-monitor/logs/manage-access.md) or [resource-context](../azure-monitor/logs/design-logs-deployment.md) allows users to view logs only for resources that they have access to. The workspace access mode must be set to **User resource or workspace permissions**. Only tables relevant to the resources where the user has permissions will be included in search results from the **Logs** page in Microsoft Sentinel.
+- <a name="note8"></a>[Decision tree note #8](#decision-tree): [Resource permissions](../azure-monitor/logs/manage-access.md) or [resource-context](../azure-monitor/logs/workspace-design.md) allows users to view logs only for resources that they have access to. The workspace access mode must be set to **User resource or workspace permissions**. Only tables relevant to the resources where the user has permissions will be included in search results from the **Logs** page in Microsoft Sentinel.
- <a name="note9"></a>[Decision tree note #9](#decision-tree): [Table-level RBAC](../azure-monitor/logs/manage-access.md) allows you to define more granular control to data in a Log Analytics workspace in addition to the other permissions. This control allows you to define specific data types that are accessible only to a specific set of users. For more information, see [Table-level RBAC in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043).
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-normalization-schema.md
If your data source supports full DNS logging and you've chosen to log multiple
For example, you might modify your query with the following normalization: ```kql
-_Im_DNS | where SrcIpAddr != "127.0.0.1" and EventSubType == "response"
+_Im_Dns | where SrcIpAddr != "127.0.0.1" and EventSubType == "response"
``` ## Parsers
_Im_Dns (responsecodename = 'NXDOMAIN', starttime = ago(1d), endtime=now())
To filter only DNS queries for a specified list of domain names, use: ```kql
-let torProxies=dynamic(["tor2web.org", "tor2web.com", "torlink.co",...]);
+let torProxies=dynamic(["tor2web.org", "tor2web.com", "torlink.co"]);
_Im_Dns (domain_has_any = torProxies) ``` > [!TIP]
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
The following filtering parameters are available:
For example, to filter only network sessions for a specified list of domain names, use: ```kql
-let torProxies=dynamic(["tor2web.org", "tor2web.com", "torlink.co",...]);
+let torProxies=dynamic(["tor2web.org", "tor2web.com", "torlink.co"]);
_Im_NetworkSession (hostname_has_any = torProxies) ```
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
The allowed values for a device ID type are:
| **VectraId** | A Vectra AI assigned resource ID.| | **Other** | An ID type not listed above.|
-For example, the Azure Monitor [VM Insights solution](/azure/azure-monitor/vm/vminsights-log-search) provides network sessions information in the `VMConnection`. The table provides an Azure Resource ID in the `_ResourceId` field and a VM insights specific device ID in the `Machine` field. Use the following mapping to represent those IDs:
+For example, the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-log-search.md) provides network sessions information in the `VMConnection`. The table provides an Azure Resource ID in the `_ResourceId` field and a VM insights specific device ID in the `Machine` field. Use the following mapping to represent those IDs:
| Field | Map to | | -- | -- |
sentinel Process Events Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/process-events-normalization-schema.md
The following list mentions fields that have specific guidelines for process act
| Field | Class | Type | Description | ||-||--| | **EventType** | Mandatory | Enumerated | Describes the operation reported by the record. <br><br>For Process records, supported values include: <br>- `ProcessCreated` <br>- `ProcessTerminated` |
-| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1.2` |
+| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1.3` |
| **EventSchema** | Optional | String | The name of the schema documented here is `ProcessEvent`. | | **Dvc** fields| | | For process activity events, device fields refer to the system on which the process was executed. |
The process event schema references the following entities, which are central to
| **ParentProcessFileVersion** | Optional | String | The product version from the version information in parent process image file. <br><br> Example: `7.9.5.0` | | **ParentProcessIsHidden** | Optional | Boolean | An indication of whether the parent process is in hidden mode. | | **ParentProcessInjectedAddress** | Optional | String | The memory address in which the responsible parent process is stored. |
-| **ParentProcessId**| Mandatory | String | The process ID (PID) of the parent process. <br><br> Example: `48610176` |
+| **ParentProcessId**| Recommended | String | The process ID (PID) of the parent process. <br><br> Example: `48610176` |
| **ParentProcessGuid** | Optional | String | A generated unique identifier (GUID) of the parent process. Enables identifying the process across systems. <br><br> Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` | | **ParentProcessIntegrityLevel** | Optional | String | Every process has an integrity level that is represented in its token. Integrity levels determine the process level of protection or access. <br><br> Windows defines the following integrity levels: **low**, **medium**, **high**, and **system**. Standard users receive a **medium** integrity level and elevated users receive a **high** integrity level. <br><br> For more information, see [Mandatory Integrity Control - Win32 apps](/windows/win32/secauthz/mandatory-integrity-control). | | **ParentProcessMD5** | Optional | MD5 | The MD5 hash of the parent process image file. <br><br>Example: `75a599802f1fa166cdadb360960b1dd0`|
The process event schema references the following entities, which are central to
| **HashType** | Recommended | String | The type of hash stored in the HASH alias field, allowed values are `MD5`, `SHA`, `SHA256`, `SHA512` and `IMPHASH`. | | <a name="targetprocesscommandline"></a> **TargetProcessCommandLine** | Mandatory | String | The command line used to run the target process. <br><br> Example: `"choco.exe" -v` | | <a name="targetprocesscurrentdirectory"></a> **TargetProcessCurrentDirectory** | Optional | String | The current directory in which the target process is executed. <br><br> Example: `c:\windows\system32` |
-| **TargetProcessCreationTime** | Mandatory | DateTime | The product version from the version information of the target process image file. |
+| **TargetProcessCreationTime** | Recommended | DateTime | The date and time when the target process was created. |
| **TargetProcessId**| Mandatory | String | The process ID (PID) of the target process. <br><br>Example: `48610176`<br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows and Linux this value must be numeric. <br><br>If you are using a Windows or Linux machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. | | **TargetProcessGuid** | Optional | String |A generated unique identifier (GUID) of the target process. Enables identifying the process across systems. <br><br> Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` | | **TargetProcessIntegrityLevel** | Optional | String | Every process has an integrity level that is represented in its token. Integrity levels determine the process level of protection or access. <br><br> Windows defines the following integrity levels: **low**, **medium**, **high**, and **system**. Standard users receive a **medium** integrity level and elevated users receive a **high** integrity level. <br><br> For more information, see [Mandatory Integrity Control - Win32 apps](/windows/win32/secauthz/mandatory-integrity-control). |
These are the changes in version 0.1.1 of the schema:
These are the changes in version 0.1.2 of the schema - Added the fields `ActorUserType`, `ActorOriginalUserType`, `TargetUserType`, `TargetOriginalUserType`, and `HashType`.
+These are the changes in version 0.1.3 of the schema
+
+- Changed the fields `ParentProcessId` and `TargetProcessCreationTime` from mandatory to recommended.
+ ## Next steps For more information, see:
sentinel Quickstart Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/quickstart-onboard.md
After you connect your data sources, choose from a gallery of expertly created w
- **Active Azure Subscription**. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- **Log Analytics workspace**. Learn how to [create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). For more information about Log Analytics workspaces, see [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md).
+- **Log Analytics workspace**. Learn how to [create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md). For more information about Log Analytics workspaces, see [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/workspace-design.md).
By default, you may have a default of [30 days retention](../azure-monitor/logs/cost-logs.md#legacy-pricing-tiers) in the Log Analytics workspace used for Microsoft Sentinel. To make sure that you can use the full extent of Microsoft Sentinel functionality, raise this to 90 days. For more information, see [Configure data retention and archive policies in Azure Monitor Logs](../azure-monitor/logs/data-retention-archive.md).
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
Consult the [Role recommendations](#role-recommendations) section for best pract
- **Log Analytics RBAC**. You can use the Log Analytics advanced Azure role-based access control across the data in your Microsoft Sentinel workspace. This includes both data type-based Azure RBAC and resource-context Azure RBAC. For more information, see:
- - [Manage log data and workspaces in Azure Monitor](../azure-monitor/logs/manage-access.md#manage-access-using-workspace-permissions)
+ - [Manage log data and workspaces in Azure Monitor](../azure-monitor/logs/manage-access.md#azure-rbac)
- [Resource-context RBAC for Microsoft Sentinel](resource-context-rbac.md) - [Table-level RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043)
sentinel Web Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/web-normalization-schema.md
The following filtering parameters are available:
For example, to filter only Web sessions for a specified list of domain names, use: ```kql
-let torProxies=dynamic(["tor2web.org", "tor2web.com", "torlink.co",...]);
+let torProxies=dynamic(["tor2web.org", "tor2web.com", "torlink.co"]);
_Im_WebSession (url_has_any = torProxies) ```
service-connector How To Troubleshoot Front End Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-troubleshoot-front-end-error.md
This article lists error messages and suggestions to troubleshoot Service Connec
| Error message | Suggested Action | | | |
-| Unknown resource type | <ul><li>Check source and target resource to verify whether the service types are supported by Service Connector.</li><li>Check whether the specified source-target connection combination is supported by Service Connector.</li><li>Check whether the target resource exists.</li><li>Check the correctness of the target resource ID.</li></ul> |
-| Unsupported resource | <ul><li>Check whether the authentication type is supported by the specified source-target connection combination.</li></ul> |
+| Unknown resource type | Check source and target resource to verify whether the service types are supported by Service Connector. |
+| | Check whether the specified source-target connection combination is supported by Service Connector. |
+| | Check whether the target resource exists. |
+| | Check the correctness of the target resource ID. |
+| Unsupported resource | Check whether the authentication type is supported by the specified source-target connection combination. |
## Error type, error message, and suggested actions using Azure CLI
This article lists error messages and suggestions to troubleshoot Service Connec
| Error message | Suggested Action | | | |
-| The source resource ID is invalid: `{SourceId}` | <ul><li>Check whether the source resource ID supported by Service Connector.</li><li>Check the correctness of source resource ID.</li></ul> |
-| Target resource ID is invalid: `{TargetId}` | <ul><li>Check whether the target service type is supported by Service Connector.</li><li>Check the correctness of target resource ID.</li></ul> |
-| Connection ID is invalid: `{ConnectionId}` | <ul><li>Check the correctness of the connection ID.</li></ul> |
+| The source resource ID is invalid: `{SourceId}` | Check whether the source resource ID is supported by Service Connector. |
+| | Check the correctness of source resource ID. |
+| Target resource ID is invalid: `{TargetId}` | Check whether the target service type is supported by Service Connector. |
+| | Check the correctness of target resource ID. |
+| Connection ID is invalid: `{ConnectionId}` | Check the correctness of the connection ID. |
#### RequiredArgumentMissingError
This article lists error messages and suggestions to troubleshoot Service Connec
| Either client type or auth info should be specified to update | Either client type or authentication information should be provided when updating a connection. | | Usage error: `{} [KEY=VALUE ...]` | Check the available keys and provide values for the auth info parameter, usually in the form of `--param key1=val1 key2=val2`. | | Unsupported Key `{Key}` is provided for parameter `{Parameter}`. All possible keys are: `{Keys}` | Check the available keys and provide values for the authentication information parameter, usually in the form of `--param key1=val1 key2=val2`. |
-| Provision failed, please create the target resource manually and then create the connection. Error details: `{ErrorTrace}` | <ul><li>Retry.</li><li>Create the target resource manually and then create the connection.</li></ul> |
+| Provision failed, please create the target resource manually and then create the connection. Error details: `{ErrorTrace}` | Retry. Create the target resource manually and then create the connection. |
## Next steps
service-connector Tutorial Portal Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-portal-key-vault.md
Now you can create a service connection to another target service and directly s
1. Select **Secrets** in the Key Vault left ToC, and select the blob storage secret name. > [!TIP]
- > Don't have permission to list secrets? Refer to [troubleshooting](/azure/key-vault/general/troubleshooting-access-issues#i-am-not-able-to-list-or-get-secretskeyscertificate-i-am-seeing-something-went-wrong-error).
+ > Don't have permission to list secrets? Refer to [troubleshooting](../key-vault/general/troubleshooting-access-issues.md#i-am-not-able-to-list-or-get-secretskeyscertificate-i-am-seeing-something-went-wrong-error).
4. Select a version ID from the Current Version list.
When no longer needed, delete the resource group and all related resources creat
## Next steps > [!div class="nextstepaction"]
-> [Service Connector internals](./concept-service-connector-internals.md)
+> [Service Connector internals](./concept-service-connector-internals.md)
service-fabric How To Patch Cluster Nodes Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-patch-cluster-nodes-windows.md
When enabling automatic OS updates, you'll also need to disable Windows Update i
> Service Fabric does not support in-VM upgrades where Windows Updates applies operating system patches without replacing the OS disk. > [!NOTE]
-> When managed disks are used ensure that Custom Extension script for mapping managed disks to drive letters handles reimage of the VM correctly. See [Create a Service Fabric cluster with attached data disks](/azure/Virtual-machine-scale-sets/virtual-machine-scale-sets-attached-disks#create-a-service-fabric-cluster-with-attached-data-disks) for an example script that handles reimage of VMs with managed disks correctly.
+> When managed disks are used ensure that Custom Extension script for mapping managed disks to drive letters handles reimage of the VM correctly. See [Create a Service Fabric cluster with attached data disks](../virtual-machine-scale-sets/virtual-machine-scale-sets-attached-disks.md#create-a-service-fabric-cluster-with-attached-data-disks) for an example script that handles reimage of VMs with managed disks correctly.
1. Enable automatic OS image upgrades and disable Windows Updates in the deployment template:
service-fabric Service Fabric Cluster Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-capacity.md
The capacity needs of your cluster will be determined by your specific workload
- Partial / single core VM sizes like Standard A0 are not supported. - *A-series* VM sizes are not supported for performance reasons. - Low-priority VMs are not supported.-- [B-Series Burstable SKU's](https://docs.microsoft.com/azure/virtual-machines/sizes-b-series-burstable) are not supported.
+- [B-Series Burstable SKUs](../virtual-machines/sizes-b-series-burstable.md) are not supported.
#### Primary node type
For more on cluster planning, see:
* [Disaster recovery planning](service-fabric-disaster-recovery.md) <!--Image references-->
-[SystemServices]: ./media/service-fabric-cluster-capacity/SystemServices.png
+[SystemServices]: ./media/service-fabric-cluster-capacity/SystemServices.png
service-fabric Service Fabric Cluster Fabric Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-fabric-settings.md
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | | |DeployedState |wstring, default is L"Disabled" |Static |2-stage removal of CSS. |
-|EnableSecretMonitoring|bool, default is FALSE |Static |Must be enabled to use Managed KeyVaultReferences. Default may become true in the future. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](https://docs.microsoft.com/azure/service-fabric/service-fabric-keyvault-references)|
-|SecretMonitoringInterval|TimeSpan, default is Common::TimeSpan::FromMinutes(15) |Static |The rate at which Service Fabric will poll Key Vault for changes when using Managed KeyVaultReferences. This rate is a best effort, and changes in Key Vault may be reflected in the cluster earlier or later than the interval. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](https://docs.microsoft.com/azure/service-fabric/service-fabric-keyvault-references) |
+|EnableSecretMonitoring|bool, default is FALSE |Static |Must be enabled to use Managed KeyVaultReferences. Default may become true in the future. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](./service-fabric-keyvault-references.md)|
+|SecretMonitoringInterval|TimeSpan, default is Common::TimeSpan::FromMinutes(15) |Static |The rate at which Service Fabric will poll Key Vault for changes when using Managed KeyVaultReferences. This rate is a best effort, and changes in Key Vault may be reflected in the cluster earlier or later than the interval. For more information, see [KeyVaultReference support for Azure-deployed Service Fabric Applications](./service-fabric-keyvault-references.md) |
|UpdateEncryptionCertificateTimeout |TimeSpan, default is Common::TimeSpan::MaxValue |Static |Specify timespan in seconds. The default has changed to TimeSpan::MaxValue; but overrides are still respected. May be deprecated in the future. |
The following is a list of Fabric settings that you can customize, organized by
|ReplicationBatchSize|uint, default is 1|Static|Specifies the number of operations to be sent between primary and secondary replicas. If zero the primary sends one record per operation to the secondary. Otherwise the primary replica aggregates log records until the config value is reached. This will reduce network traffic.| ## Replication
-<i> **Warning Note** : Changing Replication/TranscationalReplicator settings at cluster level changes settings for all stateful services include system services. This is generally not recommended. See this document [Configure Azure Service Fabric Reliable Services - Azure Service Fabric | Microsoft Docs](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-configuration) to configure services at app level.</i>
+<i> **Warning Note**: Changing Replication/TransactionalReplicator settings at the cluster level changes settings for all stateful services, including system services. This is generally not recommended. See [Configure Azure Service Fabric Reliable Services - Azure Service Fabric | Microsoft Docs](./service-fabric-reliable-services-configuration.md) to configure services at the app level.</i>
| **Parameter** | **Allowed Values** | **Upgrade Policy**| **Guidance or Short Description** |
The following is a list of Fabric settings that you can customize, organized by
|Level |Int, default is 4 | Dynamic |Trace etw level can take values 1, 2, 3, 4. To be supported you must keep the trace level at 4 | ## TransactionalReplicator
-<i> **Warning Note** : Changing Replication/TranscationalReplicator settings at cluster level changes settings for all stateful services include system services. This is generally not recommended. See this document [Configure Azure Service Fabric Reliable Services - Azure Service Fabric | Microsoft Docs](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-services-configuration) to configure services at app level.</i>
+<i> **Warning Note**: Changing Replication/TransactionalReplicator settings at the cluster level changes settings for all stateful services, including system services. This is generally not recommended. See [Configure Azure Service Fabric Reliable Services - Azure Service Fabric | Microsoft Docs](./service-fabric-reliable-services-configuration.md) to configure services at the app level.</i>
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** | | | | | |
The following is a list of Fabric settings that you can customize, organized by
|PropertyGroup| UserServiceMetricCapacitiesMap, default is None | Static | A collection of user services resource governance limits Needs to be static as it affects AutoDetection logic | ## Next steps
-For more information, see [Upgrade the configuration of an Azure cluster](service-fabric-cluster-config-upgrade-azure.md) and [Upgrade the configuration of a standalone cluster](service-fabric-cluster-config-upgrade-windows-server.md).
+For more information, see [Upgrade the configuration of an Azure cluster](service-fabric-cluster-config-upgrade-azure.md) and [Upgrade the configuration of a standalone cluster](service-fabric-cluster-config-upgrade-windows-server.md).
service-fabric Service Fabric Connect To Secure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-connect-to-secure-cluster.md
catch (Exception e)
The following example relies on Microsoft.IdentityModel.Clients.ActiveDirectory, Version: 2.19.208020213. > [!IMPORTANT]
-> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
For more information on AAD token acquisition, see [Microsoft.Identity.Client](/dotnet/api/microsoft.identity.client?view=azure-dotnet).
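A minimal sketch of token acquisition with MSAL for Python follows; the client ID, tenant, and scope are placeholders, and a real client would take these values from the cluster's Azure AD configuration.

```python
# Sketch: interactive Azure AD token acquisition with MSAL (the ADAL replacement).
# Requires: pip install msal. Client ID, tenant, and scope are placeholders.
import msal

app = msal.PublicClientApplication(
    client_id="<native-client-app-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
result = app.acquire_token_interactive(scopes=["<cluster-app-id-uri>/.default"])
if "access_token" in result:
    print("Token acquired; pass it to the cluster client as the AAD credential.")
else:
    print("Token acquisition failed:", result.get("error_description"))
```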
At least two certificates should be used for securing the cluster, one for the c
* [Managing your Service Fabric applications in Visual Studio](service-fabric-manage-application-in-visual-studio.md) * [Service Fabric Health model introduction](service-fabric-health-introduction.md) * [Application Security and RunAs](service-fabric-application-runas-security.md)
-* [Getting started with Service Fabric CLI](service-fabric-cli.md)
+* [Getting started with Service Fabric CLI](service-fabric-cli.md)
service-fabric Service Fabric Local Linux Cluster Windows Wsl2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-local-linux-cluster-windows-wsl2.md
Before you get started, you need:
* Set up Ubuntu 18.04 Linux Distribution from Microsoft Store while setting up WSL2 >[!TIP]
-> To install WSL2 on your Windows machine, follow the steps in the [WSL documentation](https://docs.microsoft.com/windows/wsl/install). After installing, please ensure installation of Ubuntu-18.04, make it your default distribution and it should be up and running.
+> To install WSL2 on your Windows machine, follow the steps in the [WSL documentation](/windows/wsl/install). After installing, please ensure installation of Ubuntu-18.04, make it your default distribution and it should be up and running.
> ## Set up Service Fabric SDK inside Linux Distribution
service-fabric Service Fabric Reliable Services Exception Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-exception-serialization.md
Last updated 03/30/2022
# Remoting Exception Serialization Overview
-BinaryFormatter based serialization is not secure and Microsoft strongly recommends not to use BinaryFormatter for data processing. More details on the security implications can be found [here](https://docs.microsoft.com/dotnet/standard/serialization/binaryformatter-security-guide).
-Service Fabric had been using BinaryFormatter for serializing Exceptions. Starting ServiceFabric v9.0, [Data Contract based serialization](https://docs.microsoft.com/dotnet/api/system.runtime.serialization.datacontractserializer?view=net-6.0) for remoting exceptions is made available as an opt-in feature. It is strongly recommended to opt for DataContract remoting exception serialization by following the below mentioned steps.
+BinaryFormatter based serialization is not secure, and Microsoft strongly recommends against using BinaryFormatter for data processing. More details on the security implications can be found [here](/dotnet/standard/serialization/binaryformatter-security-guide).
+Service Fabric had been using BinaryFormatter for serializing exceptions. Starting with Service Fabric v9.0, [Data Contract based serialization](/dotnet/api/system.runtime.serialization.datacontractserializer?view=net-6.0) for remoting exceptions is available as an opt-in feature. We strongly recommend opting in to DataContract remoting exception serialization by following the steps below.
Support for BinaryFormatter based remoting exception serialization will be deprecated in the future.
Existing services must follow the order below (*Service first*) to upgrade. Failu
* [Web API with OWIN in Reliable Services](./service-fabric-reliable-services-communication-aspnetcore.md) * [Windows Communication Foundation communication with Reliable Services](service-fabric-reliable-services-communication-wcf.md)
-* [Secure communication for Reliable Services](service-fabric-reliable-services-secure-communication.md)
+* [Secure communication for Reliable Services](service-fabric-reliable-services-secure-communication.md)
service-fabric Service Fabric Tutorial Dotnet App Enable Https Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint.md
if ($cert -eq $null)
$keyName=$cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName $keyPath = "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\"+
+ if ($keyName -eq $null){
+ $privateKey = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPrivateKey($cert)
+ $keyName = $privateKey.Key.UniqueName
+ $keyPath = "C:\ProgramData\Microsoft\Crypto\Keys"
+ }
+ $fullPath=$keyPath+$keyName $acl=(Get-Item $fullPath).GetAccessControl('Access')
service-health Azure Status Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/azure-status-overview.md
Most of our service issue communications are provided as targeted notifications
## When does Azure publish RCAs to the Status History page?
-While the [Azure status page](https://status.azure.com/status) always shows the latest health information, you can view older events using the [Azure status history page](https://status.azure.com/status/history/). The history page contains all RCAs (Root Cause Analysis) for incidents that occurred on November 20, 2019 or later and will - from that date forward - provide a 5-year RCA history. RCAs prior to November 20, 2019 aren't available.
+While the [Azure status page](https://status.azure.com/status) always shows the latest health information, you can view older events using the [Azure status history page](https://status.azure.com/status/history/). The history page contains all RCAs (Root Cause Analyses) for incidents that occurred on November 20, 2019 or later and will - from that date forward - provide a 5-year RCA history. RCAs prior to November 20, 2019 aren't available.
After June 1st 2022, the [Azure status history page](https://status.azure.com/status/history/) will only be used to provide RCAs for scenario 1 above. We're committed to publishing RCAs publicly for service issues that had the broadest impact, such as those with both a multi-service and multi-region impact. We publish to ensure that all customers and the industry at large can learn from our retrospectives on these issues, and understand what steps we're taking to make such issues less likely and/or less impactful in future.
spatial-anchors Tutorial Share Anchors Across Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md
Follow the instructions [here](../how-tos/setup-unity-project.md#download-asa-pa
## Deploy the Sharing Anchors service > [!NOTE]
-> In this tutorial we will be using the free tier of the Azure App Service. The free tier will time out after [20 min](https://docs.microsoft.com/azure/architecture/framework/services/compute/azure-app-service/reliability#configuration-recommendations) of inactivity and reset the memory cache.
+> In this tutorial we will be using the free tier of the Azure App Service. The free tier will time out after [20 min](/azure/architecture/framework/services/compute/azure-app-service/reliability#configuration-recommendations) of inactivity and reset the memory cache.
## [Visual Studio](#tab/VS)
spring-cloud How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-deploy-with-custom-container-image.md
This article explains how to deploy Spring Boot applications in Azure Spring App
## Prerequisites * A container image containing the application.
-* The image is pushed to an image registry. For more information, see [Azure Container Registry](/azure/container-instances/container-instances-tutorial-prepare-acr).
+* The image is pushed to an image registry. For more information, see [Azure Container Registry](../container-instances/container-instances-tutorial-prepare-acr.md).
> [!NOTE] > The web application must listen on port `1025` for Standard tier and on port `8080` for Enterprise tier. The way to change the port depends on the framework of the application. For example, specify `SERVER_PORT=1025` for Spring Boot applications or `ASPNETCORE_URLS=http://+:1025/` for ASP.Net Core applications. The probe can be disabled for applications that do not listen on any port.
When your application is restarted or scaled out, the latest image will always b
### Avoid not being able to connect to the container registry in a VNet
-If you deployed the instance to a VNet, make sure you allow the network traffic to your container registry in the NSG or Azure Firewall (if used). For more information, see [Customer responsibilities for running in VNet](/azure/spring-cloud/vnet-customer-responsibilities) to add the needed security rules.
+If you deployed the instance to a VNet, make sure you allow the network traffic to your container registry in the NSG or Azure Firewall (if used). For more information, see [Customer responsibilities for running in VNet](./vnet-customer-responsibilities.md) to add the needed security rules.
### Install an APM into the image manually
az spring app deployment create \
## Next steps
-* [How to capture dumps](/azure/spring-cloud/how-to-capture-dumps)
+* [How to capture dumps](./how-to-capture-dumps.md)
spring-cloud Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/quickstart-deploy-apps.md
Compiling the project takes 5-10 minutes. Once completed, you should have indivi
1. If you didn't run the following commands in the previous quickstarts, set the CLI defaults. ```azurecli
- az configure --defaults group=<resource-group-name> spring-cloud=<service-name>
+ az configure --defaults group=<resource-group-name> spring=<service-name>
``` 1. Create the 2 core Spring applications for PetClinic: API gateway and customers-service.
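A possible shape for that step, shown as a sketch that assumes the defaults configured above and the standard PetClinic application names; the instance count and memory values are illustrative only:

```azurecli
# Sketch: relies on the group/spring defaults set earlier, so --resource-group and --service are omitted.
az spring app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint true
az spring app create --name customers-service --instance-count 1 --memory 2Gi
```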
storage Blob Upload Function Trigger Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger-javascript.md
Copy the value of the `connectionString` property and paste it somewhere to use
## Create the Computer Vision service
-Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers various features for extracting data out of images. You can learn more about Computer Vision on the [overview page](/azure/cognitive-services/computer-vision/overview).
+Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers various features for extracting data out of images. You can learn more about Computer Vision on the [overview page](../../cognitive-services/computer-vision/overview.md).
### [Azure portal](#tab/computer-vision-azure-portal)
If you're not going to continue to use this application, you can delete the reso
1. Select **Resource groups** from the Azure explorer 1. Find and right-click the `msdocs-storage-function` resource group from the list.
-1. Select **Delete**. The process to delete the resource group may take a few minutes to complete.
+1. Select **Delete**. The process to delete the resource group may take a few minutes to complete.
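If you prefer to clean up from the command line, a single equivalent command is sketched below; it permanently deletes the resource group and everything in it, so double-check the name first:

```azurecli
# Deletes the msdocs-storage-function resource group and all resources it contains.
az group delete --name msdocs-storage-function --yes --no-wait
```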
storage Blob Upload Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-upload-function-trigger.md
Copy the value of the `connectionString` property and paste it somewhere to use
## Create the Computer Vision service
-Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers a variety of features for extracting data out of images. You can learn more about Computer Vision on the [overview page](/azure/cognitive-services/computer-vision/overview).
+Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers a variety of features for extracting data out of images. You can learn more about Computer Vision on the [overview page](../../cognitive-services/computer-vision/overview.md).
### [Azure portal](#tab/azure-portal)
If you're not going to continue to use this application, you can delete the reso
2) Select the **Delete resource group** button at the top of the resource group overview page. 3) Enter the resource group name *msdocs-storage-function* in the confirmation dialog. 4) Select delete.
-The process to delete the resource group may take a few minutes to complete.
+The process to delete the resource group may take a few minutes to complete.
storage Data Lake Storage Migrate Gen1 To Gen2 Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md
On **Feb. 29, 2024** Azure Data Lake Storage Gen1 will be retired. For more info
This article shows you how to simplify the migration by using the Azure portal. You can provide your consent in the Azure portal and then migrate your data and metadata (such as timestamps and ACLs) automatically from Azure Data Lake Storage Gen1 to Azure Data Lake Storage Gen2. For easier reading, this article uses the term *Gen1* to refer to Azure Data Lake Storage Gen1, and the term *Gen2* to refer to Azure Data Lake Storage Gen2. > [!NOTE]
-> Your account may not qualify for portal-based migration based on certain constraints. When the **Migrate data** button is not enabled in the Azure portal for your Gen1 account, if you have a support plan, you can [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). You can also get answers from community experts in [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-data-lake-storage.html).
+> Your account may not qualify for portal-based migration based on certain constraints. When the **Migrate data** button is not enabled in the Azure portal for your Gen1 account, if you have a support plan, you can [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). You can also get answers from community experts in [Microsoft Q&A](/answers/topics/azure-data-lake-storage.html).
> [!WARNING] > Azure Data Lake Storage Gen2 doesn't support Azure Data Lake Analytics. If you're using Azure Data Lake Analytics, you'll need to migrate before proceeding. See [Migrate Azure Data Lake Analytics workloads](#migrate-azure-data-lake-analytics-workloads) for more information.
For Gen1, ensure that the [Owner](../../role-based-access-control/built-in-roles
## Migrate Azure Data Lake Analytics workloads
-Azure Data Lake Storage Gen2 doesn't support Azure Data Lake Analytics. Azure Data Lake Analytics [will be retired](https://azure.microsoft.com/updates/migrate-to-azure-synapse-analytics/) on February 29, 2024. If you attempt to use the Azure portal to migrate an Azure Data Lake Storage Gen1 account that is used for Azure Data Lake Analytics, it's possible that you'll break your Azure Data Lake Analytics workloads. You must first [migrate your Azure Data Lake Analytics workloads to Azure Synapse Analytics](/azure/data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse) or another supported compute platform before attempting to migrate your Gen1 account.
+Azure Data Lake Storage Gen2 doesn't support Azure Data Lake Analytics. Azure Data Lake Analytics [will be retired](https://azure.microsoft.com/updates/migrate-to-azure-synapse-analytics/) on February 29, 2024. If you attempt to use the Azure portal to migrate an Azure Data Lake Storage Gen1 account that is used for Azure Data Lake Analytics, it's possible that you'll break your Azure Data Lake Analytics workloads. You must first [migrate your Azure Data Lake Analytics workloads to Azure Synapse Analytics](../../data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md) or another supported compute platform before attempting to migrate your Gen1 account.
-For more information, see [Manage Azure Data Lake Analytics using the Azure portal](/azure/data-lake-analytics/data-lake-analytics-manage-use-portal).
+For more information, see [Manage Azure Data Lake Analytics using the Azure portal](../../data-lake-analytics/data-lake-analytics-manage-use-portal.md).
## Perform the migration
Post migration, if you chose the option that copies only data, then you will be
#### While providing consent I encountered the error message *Migration initiation failed*. What should I do next?
-Make sure all your Azure Data lake Analytics accounts are [migrated to Azure Synapse Analytics](/azure/data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse) or another supported compute platform. Once Azure Data Lake Analytics accounts are migrated, retry the consent. If you see the issue further and you have a support plan, you can [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). You can also get answers from community experts in [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-data-lake-storage.html).
+Make sure all your Azure Data Lake Analytics accounts are [migrated to Azure Synapse Analytics](../../data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md) or another supported compute platform. Once the Azure Data Lake Analytics accounts are migrated, retry the consent. If the issue persists and you have a support plan, you can [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). You can also get answers from community experts in [Microsoft Q&A](/answers/topics/azure-data-lake-storage.html).
#### After the migration completes, can I go back to using the Gen1 account?
When you copy the data over to your Gen2-enabled account, two factors that can a
## Next steps -- Learn about migration in general. For more information, see [Migrate Azure Data Lake Storage from Gen1 to Gen2](data-lake-storage-migrate-gen1-to-gen2.md).
+- Learn about migration in general. For more information, see [Migrate Azure Data Lake Storage from Gen1 to Gen2](data-lake-storage-migrate-gen1-to-gen2.md).
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
Run the following Azure CLI command to assign the storage account permissions:
az role assignment create --assignee "<ObjectID>" --role "Storage Blob Data Contributor" --scope "<StorageAccountResourceID>" ```
-Learn more about Azure's built-in RBAC roles, check out [Built-in roles](/azure/role-based-access-control/built-in-roles).
+To learn more about Azure's built-in RBAC roles, check out [Built-in roles](../../role-based-access-control/built-in-roles.md).
> Note: The Azure CLI has built-in helper functions that retrieve the storage access keys when permissions are not detected. That functionality does not transfer to DefaultAzureCredential, which is the reason for assigning RBAC roles to your account.
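As a hedged sketch of where the two placeholder values in the role-assignment command above can come from, assuming you're granting the role to your own signed-in account (on older Azure CLI versions the user's object ID property is named `objectId` rather than `id`):

```azurecli
# Object ID of the signed-in user (use --query objectId on older CLI versions).
az ad signed-in-user show --query id --output tsv

# Resource ID of the storage account, used as the scope of the role assignment.
az storage account show --name <storage-account-name> --resource-group <resource-group-name> --query id --output tsv
```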
See these other resources for Go development with Blob storage:
## Next steps
-In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using Go. For more information about the Azure Storage Blob SDK, view the [Source Code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) and [API Reference](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob).
+In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using Go. For more information about the Azure Storage Blob SDK, view the [Source Code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) and [API Reference](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob).
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
You can manage IP network rules for storage accounts through the Azure portal, P
<a id="grant-access-specific-instances"></a>
-## Grant access from Azure resource instances (preview)
+## Grant access from Azure resource instances
In some cases, an application might depend on Azure resources that cannot be isolated through a virtual network or an IP address rule. However, you'd still like to secure and restrict storage account access to only your application's Azure resources. You can configure storage accounts to allow access to specific resource instances of some Azure services by creating a resource instance rule. The types of operations that a resource instance can perform on storage account data are determined by the Azure role assignments of the resource instance. Resource instances must be from the same tenant as your storage account, but they can belong to any subscription in the tenant.
-> [!NOTE]
-> This feature is in public preview and is available in all public cloud regions.
- ### [Portal](#tab/azure-portal) You can add or remove resource network rules in the Azure portal.
You can use PowerShell commands to add or remove resource network rules.
> [!IMPORTANT] > Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
-#### Install the preview module
-
-Install the latest version of the PowershellGet module. Then, close and reopen the PowerShell console.
-
-```powershell
-install-Module PowerShellGet -Repository PSGallery -Force
-```
-
-Install **Az. Storage** preview module.
-
-```powershell
-Install-Module Az.Storage -Repository PsGallery -RequiredVersion 3.0.1-preview -AllowClobber -AllowPrerelease -Force
-```
-
-For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps)
- #### Grant access Add a network rule that grants access from a resource instance.
$rule.ResourceAccessRules
You can use Azure CLI commands to add or remove resource network rules.
-#### Install the preview extension
-
-1. Open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
-
-2. Then, verify that the version of Azure CLI that you have installed is `2.13.0` or higher by using the following command.
-
- ```azurecli
- az --version
- ```
-
- If your version of Azure CLI is lower than `2.13.0`, then install a later version. See [Install the Azure CLI](/cli/azure/install-azure-cli).
-
-3. Type the following command to install the preview extension.
-
- ```azurecli
- az extension add -n storage-preview
- ```
- #### Grant access Add a network rule that grants access from a resource instance.
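As a rough sketch of what that looks like from the Azure CLI, using a hypothetical Synapse workspace as the resource instance (the names and IDs are placeholders, and older CLI versions may require the `storage-preview` extension for these parameters):

```azurecli
# Sketch: allow a specific resource instance through the storage account firewall.
az storage account network-rule add \
    --resource-group <resource-group-name> \
    --account-name <storage-account-name> \
    --resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Synapse/workspaces/<workspace-name>" \
    --tenant-id <tenant-id>
```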
If your account does not have the hierarchical namespace feature enabled on it,
You can use the same technique for an account that has the hierarchical namespace feature enabled on it. However, you don't have to assign an Azure role if you add the managed identity to the access control list (ACL) of any directory or blob contained in the storage account. In that case, the scope of access for the instance corresponds to the directory or file to which the managed identity has been granted access. You can also combine Azure roles and ACLs together. To learn more about how to combine them to grant access, see [Access control model in Azure Data Lake Storage Gen2](../blobs/data-lake-storage-access-control-model.md). > [!TIP]
-> The recommended way to grant access to specific resources is to use resource instance rules. To grant access to specific resource instances, see the [Grant access from Azure resource instances (preview)](#grant-access-specific-instances) section of this article.
+> The recommended way to grant access to specific resources is to use resource instance rules. To grant access to specific resource instances, see the [Grant access from Azure resource instances](#grant-access-specific-instances) section of this article.
| Service | Resource Provider Name | Purpose | | :-- | :- | :-- |
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
In this case, Azure File Sync would need about 209,500,000 KiB (209.5 GiB) of sp
> The Azure File Sync agent must be installed on every node in a Failover Cluster for sync to work correctly. ### Data Deduplication
-**Windows Server 2016 and Windows Server 2019**
-Data Deduplication is supported irrespective of whether cloud tiering is enabled or disabled on one or more server endpoints on the volume for Windows Server 2016 and Windows Server 2019. Enabling Data Deduplication on a volume with cloud tiering enabled lets you cache more files on-premises without provisioning more storage.
+**Windows Server 2022, Windows Server 2019, and Windows Server 2016**
+Data Deduplication is supported irrespective of whether cloud tiering is enabled or disabled on one or more server endpoints on the volume for Windows Server 2016, Windows Server 2019, and Windows Server 2022. Enabling Data Deduplication on a volume with cloud tiering enabled lets you cache more files on-premises without provisioning more storage.
When Data Deduplication is enabled on a volume with cloud tiering enabled, Dedup optimized files within the server endpoint location will be tiered similar to a normal file based on the cloud tiering policy settings. Once the Dedup optimized files have been tiered, the Data Deduplication garbage collection job will run automatically to reclaim disk space by removing unnecessary chunks that are no longer referenced by other files on the volume.
Azure File Sync does not support Data Deduplication and cloud tiering on the sam
- For ongoing Deduplication optimization jobs, cloud tiering with date policy will get delayed by the Data Deduplication [MinimumFileAgeDays](/powershell/module/deduplication/set-dedupvolume) setting, if the file is not already tiered. - Example: If the MinimumFileAgeDays setting is seven days and cloud tiering date policy is 30 days, the date policy will tier files after 37 days. - Note: Once a file is tiered by Azure File Sync, the Deduplication optimization job will skip the file.-- If a server running Windows Server 2012 R2 with the Azure File Sync agent installed is upgraded to Windows Server 2016 or Windows Server 2019, the following steps must be performed to support Data Deduplication and cloud tiering on the same volume:
+- If a server running Windows Server 2012 R2 with the Azure File Sync agent installed is upgraded to Windows Server 2016, Windows Server 2019 or Windows Server 2022, the following steps must be performed to support Data Deduplication and cloud tiering on the same volume:
- Uninstall the Azure File Sync agent for Windows Server 2012 R2 and restart the server.
- - Download the Azure File Sync agent for the new server operating system version (Windows Server 2016 or Windows Server 2019).
+ - Download the Azure File Sync agent for the new server operating system version (Windows Server 2016, Windows Server 2019, or Windows Server 2022).
- Install the Azure File Sync agent and restart the server. Note: The Azure File Sync configuration settings on the server are retained when the agent is uninstalled and reinstalled.
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
description: Troubleshooting Azure Files problems in Windows. See common issues
Previously updated : 01/31/2022 Last updated : 05/26/2022
TcpTestSucceeded : True
### Solution for cause 1
-#### Solution 1 - Use Azure File Sync
-Azure File Sync can transform your on-premises Windows Server into a quick cache of your Azure file share. You can use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS. Azure File Sync works over port 443 and can thus be used as a workaround to access Azure Files from clients that have port 445 blocked. [Learn how to setup Azure File Sync](../file-sync/file-sync-extend-servers.md).
+#### Solution 1 - Use Azure File Sync as a QUIC endpoint
+Azure File Sync can be used as a workaround to access Azure Files from clients that have port 445 blocked. Although Azure Files doesn't directly support SMB over QUIC, Windows Server 2022 Azure Edition does support the QUIC protocol. You can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. This uses port 443, which is widely open outbound to support HTTPS, instead of port 445. To learn more about this option, see [SMB over QUIC with Azure File Sync](storage-files-networking-overview.md#smb-over-quic).
-#### Solution 2 - Use VPN
-By Setting up a VPN to your specific Storage Account, the traffic will go through a secure tunnel as opposed to over the internet. Follow the [instructions to setup VPN](storage-files-configure-p2s-vpn-windows.md) to access Azure Files from Windows.
+#### Solution 2 - Use VPN or ExpressRoute
+By setting up a VPN or ExpressRoute from on-premises to your Azure storage account, with Azure Files exposed on your internal network using private endpoints, the traffic will go through a secure tunnel as opposed to over the internet. Follow the [instructions to set up a VPN](storage-files-configure-p2s-vpn-windows.md) to access Azure Files from Windows.
#### Solution 3 - Unblock port 445 with the help of your ISP/IT admin Work with your IT department or ISP to open port 445 outbound to [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=41653).
storage Storage Ruby How To Use Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-ruby-how-to-use-queue-storage.md
Now that you've learned the basics of Queue Storage, follow these links to learn
- Visit the [Azure Storage team blog](/archive/blogs/windowsazurestorage/) - Visit the [Azure SDK for Ruby](https://github.com/WindowsAzure/azure-sdk-for-ruby) repository on GitHub
-For a comparison between Azure Queue Storage discussed in this article and Azure Service Bus queues discussed in [How to use Service Bus queues](/azure/service-bus-messaging/service-bus-quickstart-portal), see [Azure Queue Storage and Service Bus queues - compared and contrasted](../../service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md)
+For a comparison between Azure Queue Storage discussed in this article and Azure Service Bus queues discussed in [How to use Service Bus queues](../../service-bus-messaging/service-bus-quickstart-portal.md), see [Azure Queue Storage and Service Bus queues - compared and contrasted](../../service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md)
storage Tiger Bridge Cdp Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/tiger-bridge-cdp-guide.md
# Tiger Bridge archiving with continuous data protection and disaster recovery
-This article will guide you to set up Tiger Bridge data management system with Azure Blob Storage. Tiger Bridge Continuous data protection (CDP) integrates with [Soft Delete](/azure/storage/blobs/soft-delete-blob-overview) and [Versioning](/azure/storage/blobs/versioning-overview) to achieve a complete Continuous Data Protection solution. It applies policies to move data between [Azure Blob tiers](/azure/storage/blobs/access-tiers-overview) for optimal cost. Continuous data protection allows customers to have a real-time file-based backup with snapshots to achieve near zero RPO. CDP enables customers to protect their assets with minimum resources. Optionally, it can be used in WORM scenario using [immutable storage](/azure/storage/blobs/immutable-storage-overview).
+This article will guide you through setting up the Tiger Bridge data management system with Azure Blob Storage. Tiger Bridge Continuous data protection (CDP) integrates with [Soft Delete](../../../blobs/soft-delete-blob-overview.md) and [Versioning](../../../blobs/versioning-overview.md) to achieve a complete Continuous Data Protection solution. It applies policies to move data between [Azure Blob tiers](../../../blobs/access-tiers-overview.md) for optimal cost. Continuous data protection allows customers to have a real-time file-based backup with snapshots to achieve near zero RPO. CDP enables customers to protect their assets with minimum resources. Optionally, it can be used in a WORM scenario using [immutable storage](../../../blobs/immutable-storage-overview.md).
In addition, Tiger Bridge provides easy and efficient Disaster Recovery. It can be combined with [Microsoft DFSR](/windows-server/storage/dfs-replication/dfsr-overview), but it isn't mandatory. It allows mirrored DR sites, or can be used with minimum storage DR sites (keeping only the most recent data on-prem plus). All the replicated files in Azure Blob Storage are stored as native objects, allowing the organization to access them without using Tiger Bridge. This approach prevents vendor lock-in.
All the replicated files in Azure Blob Storage are stored as native objects, all
:::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-reference-architecture.png" alt-text="Tiger Bridge reference architecture.":::
-More information on Tiger Bridge solution, and common use case can be read in [Tiger Bridge deployment guide](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide).
+More information on the Tiger Bridge solution and common use cases can be found in the [Tiger Bridge deployment guide](../primary-secondary-storage/tiger-bridge-deployment-guide.md).
## Before you begin -- **Refer to [Tiger Bridge deployment guide](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide)**, it describes initial steps needed for setting up CDP.
+- **Refer to the [Tiger Bridge deployment guide](../primary-secondary-storage/tiger-bridge-deployment-guide.md)**, which describes the initial steps needed for setting up CDP.
- **Choose the right storage options**. When you use Azure as a backup target, you'll make use of [Azure Blob storage](https://azure.microsoft.com/services/storage/blobs/). Blob storage is optimized for storing massive amounts of unstructured data, which is data that doesn't adhere to any data model, or definition. It's durable, highly available, secure, and scalable. You can select the right storage for your workload by looking at two aspects:
- - [Storage redundancy](/azure/storage/common/storage-redundancy)
- - [Storage tier](/azure/storage/blobs/access-tiers-overview)
+ - [Storage redundancy](../../../common/storage-redundancy.md)
+ - [Storage tier](../../../blobs/access-tiers-overview.md)
### Sample backup to Azure cost model A subscription-based model can be daunting to customers who are new to the cloud. While you pay for only the capacity used, you also pay for transactions (read and write) and egress for data read back to your on-premises environment (depending on the network connection used). We recommend using the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to perform a what-if analysis. You can base the analysis on list pricing or on Azure Storage Reserved Capacity pricing, which can deliver up to 38% savings. Below is an example pricing exercise to model the monthly cost of backing up to Azure.
A subscription-based model can be daunting to customers who are new to the cloud.
> This is only an example. Your pricing may vary due to activities not captured here. The estimate was generated with the Azure Pricing Calculator using East US pay-as-you-go pricing. It is based on a 32 MB block size, which generates 65,536 PUT Requests (write transactions) per day. This example may not reflect current Azure pricing, or not be applicable to your requirements. ## Prepare Azure Blob Storage
-Refer to [Tiger Bridge deployment guide](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide)
+Refer to [Tiger Bridge deployment guide](../primary-secondary-storage/tiger-bridge-deployment-guide.md)
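Because Tiger Bridge CDP builds on blob soft delete and versioning (as noted in the overview above), one hedged way to confirm both are enabled on the target account from the Azure CLI is sketched below; the account and resource group names and the retention period are placeholders:

```azurecli
# Sketch: enable blob soft delete and versioning, which the CDP workflow relies on.
az storage account blob-service-properties update \
    --resource-group <resource-group-name> \
    --account-name <storage-account-name> \
    --enable-delete-retention true \
    --delete-retention-days 14 \
    --enable-versioning true
```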
## Deploy Tiger Bridge Before you can install Tiger Bridge, you need to have a Windows file server installed and fully functional. The Windows server must have access to the storage account prepared in the [previous step](#prepare-azure-blob-storage). ## Configure continuous data protection
-1. Deploy Tiger Bridge solution as described in [standalone hybrid configuration](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide#deploy-standalone-hybrid-configuration) (steps 1 to 4).
+1. Deploy Tiger Bridge solution as described in [standalone hybrid configuration](../primary-secondary-storage/tiger-bridge-deployment-guide.md#deploy-standalone-hybrid-configuration) (steps 1 to 4).
1. Under Tiger Bridge settings, enable **Delete replica when source file is removed** and **Keep replica versions** :::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-settings.png" alt-text="Screenshot that shows how to enable settings for CDP."::: 1. Set the versioning policy to either **By Age** or **By Count**
Tiger Bridge can move a replicated file between Azure Blob Storage tiers to opti
:::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-pair-account.png" alt-text="Screenshot that shows how to pair a storage account with local source.":::
- Change **Default access tier** to **Archive**. You can also select a default **[Rehydration priority](/azure/storage/blobs/archive-rehydrate-to-online-tier)**.
+ Change **Default access tier** to **Archive**. You can also select a default **[Rehydration priority](../../../blobs/archive-rehydrate-to-online-tier.md)**.
:::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-change-access-tier.png" alt-text="Screenshot that shows how to change a default access tier in Tiger Bridge Configuration.":::
Tiger Bridge can be configured in Disaster Recovery mode. Typical configuration
:::image type="content" source="./media/tiger-bridge-cdp-guide/tiger-bridge-dr-active-passive.png" alt-text="Architecture for Tiger Bridge in active - passive DR configuration.":::
-1. Deploy and setup Tiger Bridge server on the primary and secondary site as instructed in [Tiger Bridge deployment guide](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/tiger-bridge-deployment-guide#deploy-standalone-hybrid-configuration) for standalone hybrid configuration
+1. Deploy and set up the Tiger Bridge server on the primary and secondary sites as instructed in the [Tiger Bridge deployment guide](../primary-secondary-storage/tiger-bridge-deployment-guide.md#deploy-standalone-hybrid-configuration) for a standalone hybrid configuration
> [!NOTE] > Both Tiger Bridge servers on primary and secondary site must be connected to the same container and storage account.
stream-analytics Event Hubs Parquet Capture Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-hubs-parquet-capture-tutorial.md
In this tutorial, you learn how to:
Before you start, make sure you've completed the following steps: * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-* Deploy the TollApp event generator to Azure, use this link to [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1. And use a new resource group for this.
+* Deploy the TollApp event generator to Azure by using this link: [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1, and use a new resource group for this step.
* Create an [Azure Synapse Analytics workspace](../synapse-analytics/get-started-create-workspace.md) with a Data Lake Storage Gen2 account. ## Use no code editor to create a Stream Analytics job 1. Locate the Resource Group in which the TollApp event generator was deployed.
-2. Select the Azure Event Hubs namespace. And then under the Event Hubs section, select **entrystream** instance.
-3. Go to **Process data** under Features section and then click **start** on the Capture in parquet format template.
-[ ![Screenshot of start capture experience from process data blade.](./media/stream-analytics-no-code/parquet-capture-start.png) ](./media/stream-analytics-no-code/parquet-capture-start.png#lightbox)
-4. Name your job **parquetcapture** and select **Create**.
-5. Configure your event hub input by specifying
- * Consumer Group: Default
- * Serialization type of your input data: JSON
- * Authentication mode that the job will use to connect to your event hub: Connection String defaults
- * Click **Connect**
-6. Within few seconds, you'll see sample input data and the schema. You can choose to drop fields, rename fields or change data type.
-[![Screenshot of event hub data and schema in no code editor.](./media/stream-analytics-no-code/event-hub-data-preview.png)](./media/stream-analytics-no-code/event-hub-data-preview.png#lightbox)
-7. Click the Azure Data Lake Storage Gen2 tile on your canvas and configure it by specifying
+2. Select the Azure Event Hubs **namespace**.
+1. On the **Event Hubs Namespace** page, select **Event Hubs** under **Entities** on the left menu.
+1. Select **entrystream** instance.
+
+ :::image type="content" source="./media/stream-analytics-no-code/select-event-hub.png" alt-text="Screenshot showing the selection of the event hub." lightbox="./media/stream-analytics-no-code/select-event-hub.png":::
+3. On the **Event Hubs instance** page, select **Process data** in the **Features** section on the left menu.
+1. Select **Start** on the **Capture data to ADLS Gen2 in Parquet format** tile.
+
+ :::image type="content" source="./media/stream-analytics-no-code/parquet-capture-start.png" alt-text="Screenshot showing the selection of the **Capture data to ADLS Gen2 in Parquet format** tile." lightbox="./media/stream-analytics-no-code/parquet-capture-start.png":::
+1. Name your job **parquetcapture** and select **Create**.
+
+ :::image type="content" source="./media/stream-analytics-no-code/new-stream-analytics-job.png" alt-text="Screenshot of the New Stream Analytics job page." lightbox="./media/stream-analytics-no-code/new-stream-analytics-job.png":::
+1. On the **event hub** configuration page, confirm the following settings, and then select **Connect**.
+ - *Consumer Group*: Default
+ - *Serialization type* of your input data: JSON
+ - *Authentication mode* that the job will use to connect to your event hub: Connection string.
+
+ :::image type="content" source="./media/event-hubs-parquet-capture-tutorial/event-hub-configuration.png" alt-text="Screenshot of the configuration page for your event hub." lightbox="./media/event-hubs-parquet-capture-tutorial/event-hub-configuration.png":::
+1. Within a few seconds, you'll see sample input data and the schema. You can choose to drop fields, rename fields, or change data types.
+
+ :::image type="content" source="./media/event-hubs-parquet-capture-tutorial/data-preview.png" alt-text="Screenshot showing the fields and preview of data." lightbox="./media/event-hubs-parquet-capture-tutorial/data-preview.png":::
+1. Select the **Azure Data Lake Storage Gen2** tile on your canvas and configure it by specifying
* The subscription where your Azure Data Lake Storage Gen2 account is located
- * Storage account name which should be the same ADLS Gen2 account used with your Azure Synapse Analytics workspace done in the Prerequisites section.
+ * The storage account name, which should be the same ADLS Gen2 account used with your Azure Synapse Analytics workspace that you created in the Prerequisites section.
* Container inside which the Parquet files will be created. * Path pattern set to *{date}/{time}* * Date and time pattern as the default *yyyy-mm-dd* and *HH*.
- * Click **Connect**
-8. Select **Save** in the top ribbon to save your job and then select **Start**. Set Streaming Unit count to 3 and then Select **Start** to run your job.
-[![Screenshot of start job in no code editor.](./media/stream-analytics-no-code/no-code-start-job.png)](./media/stream-analytics-no-code/no-code-start-job.png#lightbox)
-9. You'll then see a list of all Stream Analytics jobs created using the no code editor. And within two minutes, your job will go to a **Running** state.
-[![Screenshot of job in running state after job creation.](./media/stream-analytics-no-code/no-code-job-running-state.png)](./media/stream-analytics-no-code/no-code-job-running-state.png#lightbox)
+ * Select **Connect**
+
+ :::image type="content" source="./media/event-hubs-parquet-capture-tutorial/data-lake-storage-settings.png" alt-text="Screenshot showing the configuration settings for the Data Lake Storage." lightbox="./media/event-hubs-parquet-capture-tutorial/data-lake-storage-settings.png":::
+1. Select **Save** in the top ribbon to save your job, and then select **Start**. Set the Streaming Unit count to 3, and then select **Start** to run your job.
+
+ :::image type="content" source="./media/event-hubs-parquet-capture-tutorial/start-job.png" alt-text="Screenshot showing the Start Stream Analytics Job page." lightbox="./media/event-hubs-parquet-capture-tutorial/start-job.png":::
+1. You'll then see a list of all Stream Analytics jobs created using the no code editor. Within two minutes, your job will go to a **Running** state. Select the **Refresh** button on the page to see the status change from Created -> Starting -> Running.
+
+ :::image type="content" source="./media/event-hubs-parquet-capture-tutorial/job-list.png" alt-text="Screenshot showing the list of Stream Analytics jobs." lightbox="./media/event-hubs-parquet-capture-tutorial/job-list.png":::
## View output in your Azure Data Lake Storage Gen 2 account 1. Locate the Azure Data Lake Storage Gen2 account that you used in the previous step. 2. Select the container that you used in the previous step. You'll see Parquet files created based on the *{date}/{time}* path pattern specified earlier. [![Screenshot of parquet files in Azure Data Lake Storage Gen 2.](./media/stream-analytics-no-code/capture-parquet-files.png)](./media/stream-analytics-no-code/capture-parquet-files.png#lightbox)
-## Query event hub Capture files in Parquet format with Azure Synapse Analytics
+## Query captured data in Parquet format with Azure Synapse Analytics
### Query using Azure Synapse Spark 1. Locate your Azure Synapse Analytics workspace and open Synapse Studio. 2. [Create a serverless Apache Spark pool](../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool) in your workspace if one doesn't already exist.
Before you start, make sure you've completed the following steps:
df.printSchema() ``` 5. Select **Run All** to see the results
-[![Screenshot of spark run results in Azure Synapse Analytics.](./media/stream-analytics-no-code/spark-run-all.png)](./media/stream-analytics-no-code/spark-run-all.png#lightbox)
+
+ :::image type="content" source="./media/event-hubs-parquet-capture-tutorial/spark-run-all.png" alt-text="Screenshot of spark run results in Azure Synapse Analytics." lightbox="./media/event-hubs-parquet-capture-tutorial/spark-run-all.png":::
### Query using Azure Synapse Serverless SQL 1. In the **Develop** hub, create a new **SQL script**.
Before you start, make sure you've completed the following steps:
FORMAT='PARQUET' ) AS [result] ```
-[![Screenshot of SQL query results using Azure Synapse Analytics.](./media/stream-analytics-no-code/sql-results.png)](./media/stream-analytics-no-code/sql-results.png#lightbox)
+
+ :::image type="content" source="./media/event-hubs-parquet-capture-tutorial/sql-results.png" alt-text="Screenshot of SQL script results in Azure Synapse Analytics." lightbox="./media/event-hubs-parquet-capture-tutorial/sql-results.png":::
## Clean up resources 1. Locate your Event Hubs instance and see the list of Stream Analytics jobs under **Process Data** section. Stop any jobs that are running.
stream-analytics No Code Power Bi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-power-bi-tutorial.md
Previously updated : 05/23/2022 Last updated : 05/25/2022
In this tutorial, you learn how to:
Before you start, make sure you've completed the following steps: * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-* Deploy the TollApp event generator to Azure, use this link to [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1. And use a new resource group for this.
+* Deploy the TollApp event generator to Azure by using this link: [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1, and use a new resource group for this step.
* Create an [Azure Synapse Analytics workspace](../synapse-analytics/get-started-create-workspace.md) with a [Dedicated SQL pool](../synapse-analytics/get-started-analyze-sql-pool.md#create-a-dedicated-sql-pool).
-* Create a table named **carsummary** using your Dedicated SQL pool. You can do this by running the following SQL script:
+* Create a table named **carsummary** using your Dedicated SQL pool. You can do so by running the following SQL script:
```SQL CREATE TABLE carsummary (
Before you start, make sure you've completed the following steps:
``` ## Use no code editor to create a Stream Analytics job 1. Locate the Resource Group in which the TollApp event generator was deployed.
-2. Select the Azure Event Hubs namespace. And then under the Event Hubs section, select **entrystream** instance.
-3. Go to **Process data** under Features section and then click **start** on the **Start with blank canvas** template.
-[![Screenshot of real time dashboard template in no code editor.](./media/stream-analytics-no-code/real-time-dashboard-power-bi.png)](./media/stream-analytics-no-code/real-time-dashboard-power-bi.png#lightbox)
-4. Name your job **carsummary** and select **Create**.
-5. Configure your event hub input by specifying
- * Consumer Group: Default
- * Serialization type of your input data: JSON
- * Authentication mode which the job will use to connect to your event hub: Connection String defaults
- * Click **Connect**
-6. Within few seconds, you'll see sample input data and the schema. You can choose to drop fields, rename fields or change data type if you want.
-7. Click the **Group by** tile on the canvas and connect it to the event hub tile. Configure the Group By tile by specifying:
- * Aggregation as **Count**
- * Field as **Make** which is a nested field inside **CarModel**
- * Click **Save**
- * In the **Group by** settings, select **Make** and **Tumbling window** of **3 minutes**
-8. Click the **Manage Fields** tile and connect it to the Group by tile on canvas. Configure the **Manage Fields** tile by specifying:
- * Clicking on **Add all fields**
- * Rename the fields by clicking on the fields and changing the names from:
- * COUNT_make to CarCount
- * Window_End_Time to times
-9. Click the **Azure Synapse Analytics** tile and connect it to Manage Fields tile on your canvas. Configure Azure Synapse Analytics by specifying:
+2. Select the Azure Event Hubs **namespace**.
+1. On the **Event Hubs Namespace** page, select **Event Hubs** under **Entities** on the left menu.
+1. Select **entrystream** instance.
+
+ :::image type="content" source="./media/stream-analytics-no-code/select-event-hub.png" alt-text="Screenshot showing the selection of the event hub." lightbox="./media/stream-analytics-no-code/select-event-hub.png":::
+1. Go to **Process data** under the Features section, and then select **Start** on the **Start with blank canvas** template.
+
+ :::image type="content" source="./media/stream-analytics-no-code/start-blank-canvas.png" alt-text="Screenshot showing the selection of the Start button on the Start with a blank canvas tile." lightbox="./media/stream-analytics-no-code/start-blank-canvas.png":::
+1. Name your job **carsummary** and select **Create**.
+
+ :::image type="content" source="./media/stream-analytics-no-code/job-name.png" alt-text="Screenshot of the New Stream Analytics job page." lightbox="./media/stream-analytics-no-code/job-name.png":::
+1. On the **event hub** configuration page, confirm the following settings, and then select **Connect**.
+ - *Consumer Group*: Default
+ - *Serialization type* of your input data: JSON
+ - *Authentication mode* that the job will use to connect to your event hub: Connection string.
+
+ :::image type="content" source="./media/stream-analytics-no-code/event-hub-configuration.png" alt-text="Screenshot of the configuration page for your event hub." lightbox="./media/stream-analytics-no-code/event-hub-configuration.png":::
+1. Within a few seconds, you'll see sample input data and the schema. You can choose to drop fields, rename fields, or change data types if you want.
+
+ :::image type="content" source="./media/stream-analytics-no-code/data-preview-fields.png" alt-text="Screenshot showing the preview of data in the event hub and the fields." lightbox="./media/stream-analytics-no-code/data-preview-fields.png":::
+1. Select the **Group by** tile on the canvas and connect it to the event hub tile.
+
+ :::image type="content" source="./media/stream-analytics-no-code/connect-group.png" alt-text="Screenshot showing the Group tile connected to the Event Hubs tile." lightbox="./media/stream-analytics-no-code/connect-group.png":::
+1. Configure the **Group by** tile by specifying:
+ 1. Aggregation as **Count**.
+ 1. Field as **Make** which is a nested field inside **CarModel**.
+ 1. Select **Save**.
+ 1. In the **Group by** settings, select **Make** and **Tumbling window** of **3 minutes**
+
+ :::image type="content" source="./media/stream-analytics-no-code/group-settings.png" alt-text="Screenshot of the Group by configuration page." lightbox="./media/stream-analytics-no-code/group-settings.png":::
+1. Select **Add field** on the **Manage fields** page, and add the **Make** field as shown in the following image, and then select **Save**.
+
+ :::image type="content" source="./media/stream-analytics-no-code/add-make-field.png" alt-text="Screenshot showing the addition of the Make field." lightbox="./media/stream-analytics-no-code/add-make-field.png":::
+1. Select **Manage fields** on the command bar. Connect the **Manage fields** tile to the **Group by** tile on the canvas. Select **Add all fields** on the **Manage fields** configuration page.
+
+ :::image type="content" source="./media/stream-analytics-no-code/manage-fields.png" alt-text="Screenshot of the Manage fields page." lightbox="./media/stream-analytics-no-code/manage-fields.png":::
+1. Select **...** next to the fields, and select **Edit** to rename them.
+ - **COUNT_make** to **CarCount**
+ - **Window_End_Time** to **times**
+
+ :::image type="content" source="./media/stream-analytics-no-code/rename-fields.png" alt-text="Screenshot of the Manage fields page with the fields renamed." lightbox="./media/stream-analytics-no-code/rename-fields.png":::
+1. The **Manage fields** page should look as shown in the following image.
+
+ :::image type="content" source="./media/stream-analytics-no-code/manage-fields-page.png" alt-text="Screenshot of the Manage fields page with three fields." lightbox="./media/stream-analytics-no-code/manage-fields-page.png":::
+1. Select **Synapse** on the command bar. Connect the **Synapse** tile to the **Manage fields** tile on your canvas.
+1. Configure Azure Synapse Analytics by specifying:
* Subscription where your Azure Synapse Analytics is located
- * Database of the Dedicated SQL pool which you used to create the Table in the previous section.
+ * Database of the Dedicated SQL pool that you used to create the **carsummary** table in the previous section.
* Username and password to authenticate * Table name as **carsummary**
- * Click **Connect**. You'll see sample results that will be written to your Synapse SQL table.
- [![Screenshot of synapse output in no code editor.](./media/stream-analytics-no-code/synapse-output.png)](./media/stream-analytics-no-code/synapse-output.png#lightbox)
-8. Select **Save** in the top ribbon to save your job and then select **Start**. Set Streaming Unit count to 3 and then click **Start** to run your job. Specify the storage account that will be used by Synapse SQL to load data into your data warehouse.
-9. You'll then see a list of all Stream Analytics jobs created using the no code editor. And within two minutes, your job will go to a **Running** state.
-[![Screenshot of job in running state in no code editor.](./media/stream-analytics-no-code/cosmos-db-running-state.png)](./media/stream-analytics-no-code/cosmos-db-running-state.png#lightbox)
+ * Select **Connect**. You'll see sample results that will be written to your Synapse SQL table.
+
+ :::image type="content" source="./media/stream-analytics-no-code/synapse-settings.png" alt-text="Screenshot of the Synapse tile settings." lightbox="./media/stream-analytics-no-code/synapse-settings.png":::
+1. Select **Save** in the top ribbon to save your job and then select **Start**. Set Streaming Unit count to 3 and then select **Start** to run your job. Specify the storage account that will be used by Synapse SQL to load data into your data warehouse.
+
+ :::image type="content" source="./media/stream-analytics-no-code/start-analytics-job.png" alt-text="Screenshot of the Start Stream Analytics Job page." lightbox="./media/stream-analytics-no-code/start-analytics-job.png":::
+1. You'll then see a list of all Stream Analytics jobs created using the no code editor. Within two minutes, your job will go to a **Running** state. Select the **Refresh** button on the page to see the status change from Created -> Starting -> Running.
+
+ :::image type="content" source="./media/stream-analytics-no-code/job-list.png" alt-text="Screenshot showing the list of jobs." lightbox="./media/stream-analytics-no-code/job-list.png":::
## Create a Power BI visualization 1. Download the latest version of [Power BI desktop](https://powerbi.microsoft.com/desktop).
synapse-analytics Security White Paper Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-introduction.md
Azure Synapse is a Platform-as-a-service (PaaS) analytics service that brings to
[Pipelines](../../data-factory/concepts-pipelines-activities.md) are a logical grouping of activities that perform data movement and data transformation at scale. [Data flow](../../data-factory/concepts-data-flow-overview.md) is a transformation activity in a pipeline that's developed by using a low-code user interface. It can execute data transformations at scale. Behind the scenes, data flows use Apache Spark clusters of Azure Synapse to execute automatically generated code. Pipelines and data flows are compute-only services, and they don't have any managed storage associated with them.
-Pipelines use the Integration Runtime (IR) as the scalable compute infrastructure for performing data movement and dispatch activities. Data movement activities run on the IR whereas the dispatch activities run on variety of other compute engines, including Azure SQL Database, Azure HDInsight, Azure Databricks, Apache Spark clusters of Azure Synapse, and others. Azure Synapse supports two types of IR: Azure Integration Runtime and Self-hosted Integration Runtime. The [Azure IR](/azure/data-factory/concepts-integration-runtime.md#azure-integration-runtime) provides a fully managed, scalable, and on-demand compute infrastructure. The [Self-hosted IR](/azure/data-factory/concepts-integration-runtime.md#self-hosted-integration-runtime) is installed and configured by the customer in their own network, either in on-premises machines or in Azure cloud virtual machines.
+Pipelines use the Integration Runtime (IR) as the scalable compute infrastructure for performing data movement and dispatch activities. Data movement activities run on the IR, whereas the dispatch activities run on a variety of other compute engines, including Azure SQL Database, Azure HDInsight, Azure Databricks, Apache Spark clusters of Azure Synapse, and others. Azure Synapse supports two types of IR: Azure Integration Runtime and Self-hosted Integration Runtime. The [Azure IR](../../data-factory/concepts-integration-runtime.md#azure-integration-runtime) provides a fully managed, scalable, and on-demand compute infrastructure. The [Self-hosted IR](../../data-factory/concepts-integration-runtime.md#self-hosted-integration-runtime) is installed and configured by the customer in their own network, either in on-premises machines or in Azure cloud virtual machines.
Customers can choose to associate their Synapse workspace with a [managed workspace virtual network](../security/synapse-workspace-managed-vnet.md). When associated with a managed workspace virtual network, Azure IRs and Apache Spark clusters that are used by pipelines, data flows, and the Apache Spark pools are deployed inside the managed workspace virtual network. This setup ensures network isolation between the workspaces for pipelines and Apache Spark workloads.
Azure Synapse implements a multi-layered security architecture for end-to-end pr
## Next steps
-In the [next article](security-white-paper-data-protection.md) in this white paper series, learn about data protection.
+In the [next article](security-white-paper-data-protection.md) in this white paper series, learn about data protection.
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md
As part of the preparation phase, create an inventory of these objects to be mig
There may be facilities in the Azure environment that replace the functionality implemented as functions or stored procedures in the Netezza environment. In this case, it's more efficient to use the built-in Azure facilities rather than recoding the Netezza functions.
-[Data integration partners](/azure/synapse-analytics/partner/data-integration) offer tools and services that can automate the migration.
+[Data integration partners](../../partner/data-integration.md) offer tools and services that can automate the migration.
##### Functions
This section highlights lower-level implementation differences between Netezza a
`CREATE TABLE` statements in both Netezza and Azure Synapse allow for specification of a distribution definition&mdash;via `DISTRIBUTE ON` in Netezza, and `DISTRIBUTION =` in Azure Synapse.
-Compared to Netezza, Azure Synapse provides an additional way to achieve local joins for small table-large table joins (typically dimension table to fact table in a start schema model) is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](/azure/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables))&mdash;in which case, the hash distribution approach as described previously is more appropriate. For more information, see [Distributed tables design](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute).
+Compared to Netezza, Azure Synapse provides an additional way to achieve local joins for small table-large table joins (typically dimension table to fact table in a star schema model): replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](../../sql-data-warehouse/design-guidance-for-replicated-tables.md))&mdash;in which case, the hash distribution approach described previously is more appropriate. For more information, see [Distributed tables design](../../sql-data-warehouse/sql-data-warehouse-tables-distribute.md).
#### Data indexing
Only one field per table can be used for partitioning. This is frequently a date
#### Data table statistics
-Ensure that statistics on data tables are up to date by building in a [statistics](/azure/synapse-analytics/sql/develop-tables-statistics) step to ETL/ELT jobs.
+Ensure that statistics on data tables are up to date by building in a [statistics](../../sql/develop-tables-statistics.md) step to ETL/ELT jobs.
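As an illustration, a post-load step in an ETL/ELT job might create and refresh statistics as in the following sketch (the table and column names are hypothetical):

```sql
-- Create single-column statistics on a commonly filtered column after the initial load
CREATE STATISTICS stat_FactSales_SaleDate ON dbo.FactSales (SaleDate);

-- Refresh all statistics on the table at the end of each load cycle
UPDATE STATISTICS dbo.FactSales;
```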
#### PolyBase for data loading
-PolyBase is the most efficient method for loading large amounts of data into the warehouse since it can leverage parallel loading streams. For more information, see [PolyBase data loading strategy](/azure/synapse-analytics/sql/load-data-overview).
+PolyBase is the most efficient method for loading large amounts of data into the warehouse since it can leverage parallel loading streams. For more information, see [PolyBase data loading strategy](../../sql/load-data-overview.md).
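The following sketch shows the general PolyBase pattern of defining an external table over exported files and loading them in parallel with `CREATE TABLE AS SELECT`. The storage account, object names, and file layout are hypothetical, and creation of the database master key and the `StagingCredential` database-scoped credential is omitted:

```sql
-- External data source pointing at exported Netezza files staged in Azure Blob Storage
-- (assumes a database-scoped credential named StagingCredential already exists)
CREATE EXTERNAL DATA SOURCE StagedNetezzaData
WITH
(
    TYPE = HADOOP,
    LOCATION = 'wasbs://exports@mystagingaccount.blob.core.windows.net',
    CREDENTIAL = StagingCredential
);

-- Pipe-delimited text matching the exported file format
CREATE EXTERNAL FILE FORMAT PipeDelimitedText
WITH
(
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
);

-- External table over the staged files
CREATE EXTERNAL TABLE dbo.SalesExport
(
    SaleId      BIGINT,
    CustomerKey INT,
    SaleDate    DATE,
    Amount      DECIMAL(18,2)
)
WITH
(
    LOCATION = '/factsales/',
    DATA_SOURCE = StagedNetezzaData,
    FILE_FORMAT = PipeDelimitedText
);

-- Parallel load into the warehouse with CREATE TABLE AS SELECT
CREATE TABLE dbo.FactSales_Load
WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX)
AS
SELECT * FROM dbo.SalesExport;
```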
#### Use workload management
-Use [workload management](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management?context=/azure/synapse-analytics/context/context) instead of resource classes. ETL would be in its own workgroup and should be configured to have more resources per query (less concurrency by more resources). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is).
+Use [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-management.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext) instead of resource classes. ETL would be in its own workgroup and should be configured to have more resources per query (less concurrency by more resources). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](../../sql-data-warehouse/sql-data-warehouse-overview-what-is.md).
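For example, a dedicated workload group for ETL and a classifier that routes the loading account into it could be sketched as follows (the group, classifier, and account names, and the resource percentages, are hypothetical):

```sql
-- Workload group that reserves a share of resources for ETL processing
CREATE WORKLOAD GROUP EtlGroup
WITH
(
    MIN_PERCENTAGE_RESOURCE            = 30,
    CAP_PERCENTAGE_RESOURCE            = 60,
    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 10
);

-- Route requests submitted by the ETL service account into that group
CREATE WORKLOAD CLASSIFIER EtlClassifier
WITH
(
    WORKLOAD_GROUP = 'EtlGroup',
    MEMBERNAME     = 'etl_loader_user',
    IMPORTANCE     = HIGH
);
```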
## Next steps
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/2-etl-load-migration-considerations.md
If these data marts are implemented as physical tables, they'll require addition
> [!TIP] > The performance and scalability of Azure Synapse enables virtualization without sacrificing performance.
-With the advent of relatively low-cost scalable MPP architectures, such as Azure Synapse, and the inherent performance characteristics of such architectures, it may be that you can provide data mart functionality without having to instantiate the mart as a set of physical tables. This is achieved by effectively virtualizing the data marts via SQL views onto the main data warehouse, or via a virtualization layer using features such as views in Azure or the [data virtualization products of Microsoft partners](/azure/synapse-analytics/partner/data-integration). This approach simplifies or eliminates the need for additional storage and aggregation processing and reduces the overall number of database objects to be migrated.
+With the advent of relatively low-cost scalable MPP architectures, such as Azure Synapse, and the inherent performance characteristics of such architectures, it may be that you can provide data mart functionality without having to instantiate the mart as a set of physical tables. This is achieved by effectively virtualizing the data marts via SQL views onto the main data warehouse, or via a virtualization layer using features such as views in Azure or the [data virtualization products of Microsoft partners](../../partner/data-integration.md). This approach simplifies or eliminates the need for additional storage and aggregation processing and reduces the overall number of database objects to be migrated.
There's another potential benefit to this approach: by implementing the aggregation and join logic within a virtualization layer, and presenting external reporting tools via a virtualized view, the processing required to create these views is 'pushed down' into the data warehouse, which is generally the best place to run joins, aggregations, and other related operations on large data volumes.
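As a simple illustration, a data mart aggregate that previously existed as a physical table could instead be exposed as a view over the warehouse tables. The schema, table, and column names below are hypothetical:

```sql
-- Hypothetical mart-style aggregate defined as a view over warehouse tables,
-- replacing a physically instantiated data mart table
CREATE VIEW mart.MonthlySalesByRegion
AS
SELECT
    c.Region,
    YEAR(f.SaleDate)  AS SalesYear,
    MONTH(f.SaleDate) AS SalesMonth,
    SUM(f.Amount)     AS TotalAmount
FROM dbo.FactSales AS f
JOIN dbo.DimCustomer AS c
    ON f.CustomerKey = c.CustomerKey
GROUP BY c.Region, YEAR(f.SaleDate), MONTH(f.SaleDate);
```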
ATT_TEST | COL_DATE | DATE | 4
The query can be modified to search all tables for any occurrences of unsupported data types.
-Azure Data Factory can be used to move data from a legacy Netezza environment. For more information, see [IBM Netezza connector](/azure/data-factory/connector-netezza).
+Azure Data Factory can be used to move data from a legacy Netezza environment. For more information, see [IBM Netezza connector](../../../data-factory/connector-netezza.md).
[Third-party vendors](/azure/sql-data-warehouse/sql-data-warehouse-partner-data-integration) offer tools and services to automate migration, including the mapping of data types as previously described. Also, third-party ETL tools, like Informatica or Talend, already in use in the Netezza environment can implement all required data transformations. The next section explores the migration of existing third-party ETL processes.
Azure Data Factory can be used to move data from a legacy Netezza environment. F
> [!TIP] > Plan the approach to ETL migration ahead of time and leverage Azure facilities where appropriate.
-For ETL/ELT processing, legacy Netezza data warehouses may use custom-built scripts using Netezza utilities such as nzsql and nzload, or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Netezza data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment, while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL Design approach](/azure/synapse-analytics/sql-data-warehouse/design-elt-data-loading).
+For ETL/ELT processing, legacy Netezza data warehouses may use custom-built scripts using Netezza utilities such as nzsql and nzload, or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Netezza data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment, while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL Design approach](../../sql-data-warehouse/design-elt-data-loading.md).
The following sections discuss migration options and make recommendations for various use cases. This flowchart summarizes one approach:
The following sections discuss migration options and make recommendations for va
The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard 'built-in' Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
-In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](/azure/data-factory/concepts-pipelines-activities?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
+In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](../../../data-factory/concepts-pipelines-activities.md?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
> [!TIP] > Leverage investment in existing third-party tools to reduce cost and risk.
There's also a hybrid approach that uses both methods. For example, you can use
#### Orchestrate from Netezza or Azure?
-The recommended approach when moving to Azure Synapse is to orchestrate the data extract and loading from the Azure environment using [Azure Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779), as well as associated utilities, such as PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for the most efficient data loading. This approach leverages Azure capabilities and provides an easy method to build reusable data loading pipelines.
+The recommended approach when moving to Azure Synapse is to orchestrate the data extract and loading from the Azure environment using [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779), as well as associated utilities, such as PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for the most efficient data loading. This approach leverages Azure capabilities and provides an easy method to build reusable data loading pipelines.
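For instance, a `COPY INTO` statement that loads exported, pipe-delimited files from Blob Storage might look like the following sketch (the target table, storage account, container, and path are hypothetical):

```sql
-- Load staged, pipe-delimited export files into a dedicated SQL pool table
COPY INTO dbo.FactSales
FROM 'https://mystagingaccount.blob.core.windows.net/exports/factsales/*.txt'
WITH
(
    FILE_TYPE       = 'CSV',
    FIELDTERMINATOR = '|',
    CREDENTIAL      = (IDENTITY = 'Managed Identity')
);
```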
Other benefits of this approach include reduced impact on the Netezza system during the data load process since the management and loading process is running in Azure, and the ability to automate the process by using metadata-driven data load pipelines.
To summarize, our recommendations for migrating data and associated ETL processe
- Identify and understand the most efficient tools for data extract and load in both Netezza and Azure environments. Use the appropriate tools in each phase in the process. -- Use Azure facilities, such as [Azure Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779), to orchestrate and automate the migration process while minimizing impact on the Netezza system.
+- Use Azure facilities, such as [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779), to orchestrate and automate the migration process while minimizing impact on the Netezza system.
## Next steps
-To learn more about security access operations, see the next article in this series: [Security, access, and operations for Netezza migrations](3-security-access-operations.md).
+To learn more about security access operations, see the next article in this series: [Security, access, and operations for Netezza migrations](3-security-access-operations.md).
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md
This article discusses the methods of connection for existing legacy Netezza env
It's assumed that there's a requirement to migrate the existing methods of connection and user/role/permission structure as-is. If this isn't the case, then use Azure utilities such as the Azure portal to create and manage a new security regime.
-For more information on the [Azure Synapse security](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security#authorization) options, see the [Security whitepaper](/azure/synapse-analytics/guidance/security-white-paper-introduction).
+For more information on the [Azure Synapse security](../../sql-data-warehouse/sql-data-warehouse-overview-manage-security.md#authorization) options, see the [Security whitepaper](../../guidance/security-white-paper-introduction.md).
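If the existing user, role, and permission structure is re-created manually, a minimal sketch of the equivalent T-SQL in a dedicated SQL pool might look like this (the role, user, and schema names are hypothetical, and Azure AD authentication is assumed):

```sql
-- Hypothetical reporting role
CREATE ROLE report_readers;

-- Contained database user created from Azure AD
CREATE USER [analyst@contoso.com] FROM EXTERNAL PROVIDER;

-- Add the user to the role and grant the role read access to a schema
EXEC sp_addrolemember 'report_readers', 'analyst@contoso.com';
GRANT SELECT ON SCHEMA::dbo TO report_readers;
```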
### Connection and authentication
Comments on the preceding table:
\*\*\*\* These features are managed automatically by the system or via Azure portal in Azure Synapse&mdash;see the next section on Operational considerations.
-Refer to [Azure Synapse Analytics security permissions](/azure/synapse-analytics/guidance/security-white-paper-introduction).
+Refer to [Azure Synapse Analytics security permissions](../../guidance/security-white-paper-introduction.md).
## Operational considerations
The portal also enables integration with other Azure monitoring services such as
> Low-level and system-wide metrics are automatically logged in Azure Synapse. Resource utilization statistics for Azure Synapse are automatically logged within the system. The metrics include usage statistics for CPU, memory, cache, I/O, and temporary workspace for each query, as well as connectivity information&mdash;such as failed connection attempts.
-Azure Synapse provides a set of [Dynamic management views](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
+Azure Synapse provides a set of [Dynamic management views](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
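For example, a common troubleshooting query against `sys.dm_pdw_exec_requests` lists the longest-running active requests:

```sql
-- Active (not yet completed) requests, longest-running first
SELECT TOP 10
    request_id,
    [status],
    submit_time,
    total_elapsed_time,
    command
FROM sys.dm_pdw_exec_requests
WHERE [status] NOT IN ('Completed', 'Failed', 'Cancelled')
ORDER BY total_elapsed_time DESC;
```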
For more information, see [Azure Synapse operations and management options](/azure/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance).
This information can also be used for capacity planning, determining the resourc
> [!TIP] > A major benefit of Azure is the ability to independently scale up and down compute resources on demand to handle peaky workloads cost-effectively.
-The architecture of Azure Synapse separates storage and compute, allowing each to scale independently. As a result, [compute resources can be scaled](/azure/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal) to meet performance demands independent of data storage. You can also pause and resume compute resources. A natural benefit of this architecture is that billing for compute and storage is separate. If a data warehouse isn't in use, save on compute costs by pausing compute.
+The architecture of Azure Synapse separates storage and compute, allowing each to scale independently. As a result, [compute resources can be scaled](../../sql-data-warehouse/quickstart-scale-compute-portal.md) to meet performance demands independent of data storage. You can also pause and resume compute resources. A natural benefit of this architecture is that billing for compute and storage is separate. If a data warehouse isn't in use, save on compute costs by pausing compute.
Compute resources can be scaled up or scaled back by adjusting the data warehouse units setting for the data warehouse. Loading and query performance will increase linearly as you add more data warehouse units.
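One way to change the data warehouse units is with T-SQL against the `master` database, as in the following sketch (the pool name and target service objective are hypothetical):

```sql
-- Scale a dedicated SQL pool by changing its service objective
-- (run against the master database of the logical server)
ALTER DATABASE MySqlPool
MODIFY (SERVICE_OBJECTIVE = 'DW300c');
```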
Adding more compute nodes adds more compute power and ability to leverage more p
## Next steps
-To learn more about visualization and reporting, see the next article in this series: [Visualization and reporting for Netezza migrations](4-visualization-reporting.md).
+To learn more about visualization and reporting, see the next article in this series: [Visualization and reporting for Netezza migrations](4-visualization-reporting.md).
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/4-visualization-reporting.md
In addition, all the required data needs to be migrated to ensure the same resul
If BI tools are querying views in the underlying data warehouse or data mart database, will those views still work? You might think yes, but if those views contain proprietary SQL extensions specific to your legacy data warehouse DBMS that have no equivalent in Azure Synapse, you'll need to know about them and find a way to resolve them.
-Other issues like the behavior of nulls or data type variations across DBMS platforms need to be tested, in case they cause slightly different calculation results. Obviously, you want to minimize these issues and take all necessary steps to shield business users from any kind of impact. Depending on your legacy data warehouse system (such as Netezza), there are [tools](/azure/synapse-analytics/partner/data-integration) that can help hide these differences so that BI tools and applications are kept unaware of them and can run unchanged.
+Other issues like the behavior of nulls or data type variations across DBMS platforms need to be tested, in case they cause slightly different calculation results. Obviously, you want to minimize these issues and take all necessary steps to shield business users from any kind of impact. Depending on your legacy data warehouse system (such as Netezza), there are [tools](../../partner/data-integration.md) that can help hide these differences so that BI tools and applications are kept unaware of them and can run unchanged.
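As a simple, hypothetical illustration of such a dialect difference, a Netezza view that concatenates strings with the ANSI `||` operator would need rewriting for T-SQL, which uses `CONCAT` (or `+`) instead:

```sql
-- Netezza form (for reference):
--   CREATE VIEW v_customer AS
--   SELECT first_name || ' ' || last_name AS full_name FROM customer;

-- Equivalent view rewritten for Azure Synapse T-SQL
CREATE VIEW dbo.v_customer
AS
SELECT CONCAT(first_name, ' ', last_name) AS full_name
FROM dbo.customer;
```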
> [!TIP] > Use repeatable tests to ensure reports, dashboards, and other visualizations migrate successfully.
This breaks the dependency between business users utilizing self-service BI tool
> [!TIP] > Schema alterations to tune your data model for Azure Synapse can be hidden from users.
-By introducing data virtualization, any schema alterations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts, and any virtual tables would need to be changed so that users remain unaware of those changes and unaware of the migration. [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide useful data virtualization software.
+By introducing data virtualization, any schema alterations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts, and any virtual tables would need to be changed so that users remain unaware of those changes and unaware of the migration. [Microsoft partners](../../partner/data-integration.md) provide useful data virtualization software.
## Identify high priority reports to migrate first
For information about how to migrate users, user groups, roles, and privileges,
> [!TIP] > Build an automated test suite to make tests repeatable.
-It's also a best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate post-migration testing.
+It's also a best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate post-migration testing.
> [!TIP] > Leverage tools that can compare metadata lineage to verify results.
This substantially simplifies the data migration process, because the business w
> [!TIP] > Azure Data Factory and several third-party ETL tools support lineage.
-Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. Microsoft [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) let you view lineage in mapping flows. Also, [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide automated metadata discovery, data lineage, and lineage comparison tools.
+Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. Microsoft [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) let you view lineage in mapping flows. Also, [Microsoft partners](../../partner/data-integration.md) provide automated metadata discovery, data lineage, and lineage comparison tools.
## Migrate BI tool semantic layers to Azure Synapse Analytics
A good way to get everything consistent across multiple BI tools is to create a
> [!TIP] > Use data virtualization to create a common semantic layer to guarantee consistency across all BI tools in an Azure Synapse environment.
-In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications, and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](/azure/synapse-analytics/partner/data-integration) on Azure to implement this. The following diagram shows how a common vocabulary in the Data Virtualization server lets multiple BI tools see a common semantic layer.
+In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications, and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](../../partner/data-integration.md) on Azure to implement this. The following diagram shows how a common vocabulary in the Data Virtualization server lets multiple BI tools see a common semantic layer.
:::image type="content" source="../media/4-visualization-reporting/data-virtualization-semantics.png" border="true" alt-text="Diagram with common data names and definitions that relate to the data virtualization server.":::
Finally, consider data virtualization to shield BI tools and applications from s
## Next steps
-To learn more about minimizing SQL issues, see the next article in this series: [Minimizing SQL issues for Netezza migrations](5-minimize-sql-issues.md).
+To learn more about minimizing SQL issues, see the next article in this series: [Minimizing SQL issues for Netezza migrations](5-minimize-sql-issues.md).
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/5-minimize-sql-issues.md
Automate and orchestrate the migration process by making use of the capabilities
Azure Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;that can ingest data from disparate data stores. It can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
-By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de).
+By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de).
## SQL DDL differences between Netezza and Azure Synapse
Access this information by using utilities such as `nz_ddl_table` and generate t
> [!TIP] > Third-party tools and services can automate data mapping tasks.
-There are [Microsoft partners](/azure/synapse-analytics/partner/data-integration) who offer tools and services to automate migration, including data-type mapping. Also, if a third-party ETL tool such as Informatica or Talend is already in use in the Netezza environment, that tool can implement any required data transformations.
+There are [Microsoft partners](../../partner/data-integration.md) who offer tools and services to automate migration, including data-type mapping. Also, if a third-party ETL tool such as Informatica or Talend is already in use in the Netezza environment, that tool can implement any required data transformations.
## SQL DML differences between Netezza and Azure Synapse
There may be facilities in the Azure environment that replace the functionality
> [!TIP] > Third-party products and services can automate migration of non-data elements.
-[Microsoft partners](/azure/synapse-analytics/partner/data-integration) offer tools and services that can automate the migration, including the mapping of data types. Also, third-party ETL tools, such as Informatica or Talend, that are already in use in the IBM Netezza environment can implement any required data transformations.
+[Microsoft partners](../../partner/data-integration.md) offer tools and services that can automate the migration, including the mapping of data types. Also, third-party ETL tools, such as Informatica or Talend, that are already in use in the IBM Netezza environment can implement any required data transformations.
See the following sections for more information on each of these elements.
SQL Azure Data Warehouse also supports stored procedures using T-SQL, so if you
In Netezza, a sequence is a named database object created via `CREATE SEQUENCE` that provides unique values via the `NEXT VALUE FOR` method. Use sequences to generate unique numbers for use as surrogate key values for primary keys.
-In Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled using [Identity to create surrogate keys](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity) or [managed identity](/azure/data-factory/data-factory-service-identity?tabs=data-factory) using SQL code to create the next sequence number in a series.
+In Azure Synapse, there's no `CREATE SEQUENCE`. Sequences are handled using [Identity to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory) using SQL code to create the next sequence number in a series.
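For example, a surrogate key that a Netezza sequence used to supply can instead be generated by an `IDENTITY` column, as in this sketch (table and column names are hypothetical):

```sql
-- Netezza form (for reference):
--   CREATE SEQUENCE product_seq;
--   SELECT NEXT VALUE FOR product_seq;

-- Azure Synapse form: the IDENTITY property generates the surrogate key at load time
CREATE TABLE dbo.DimProduct
(
    ProductKey      INT IDENTITY(1,1) NOT NULL,
    SourceProductId INT               NOT NULL,
    ProductName     NVARCHAR(100)     NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX);
```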
### Use [EXPLAIN](/sql/t-sql/queries/explain-transact-sql?msclkid=91233fc1cff011ec9dff597671b7ae97) to validate legacy SQL
To minimize the task of migrating the actual SQL code, follow these recommendati
- Automate the process wherever possible to minimize errors, risk, and time for the migration. -- Consider using specialist [Microsoft partners](/azure/synapse-analytics/partner/data-integration) and services to streamline the migration.
+- Consider using specialist [Microsoft partners](../../partner/data-integration.md) and services to streamline the migration.
## Next steps
-To learn more about Microsoft and third-party tools, see the next article in this series: [Tools for Netezza data warehouse migration to Azure Synapse Analytics](6-microsoft-third-party-migration-tools.md).
+To learn more about Microsoft and third-party tools, see the next article in this series: [Tools for Netezza data warehouse migration to Azure Synapse Analytics](6-microsoft-third-party-migration-tools.md).
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/6-microsoft-third-party-migration-tools.md
Microsoft Azure Data Factory is a fully managed, pay-as-you-use, hybrid data int
> [!TIP] > Data Factory allows you to build scalable data integration pipelines code-free.
-[Azure Data Factory connectors](/azure/data-factory/connector-overview?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual front-end, browser-based UI enables non-programmers to create and run process pipelines to ingest, transform, and load data. More experienced programmers have the option to incorporate custom code, such as Python programs.
+[Azure Data Factory connectors](../../../data-factory/connector-overview.md?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual front-end, browser-based UI enables non-programmers to create and run process pipelines to ingest, transform, and load data. More experienced programmers have the option to incorporate custom code, such as Python programs.
> [!TIP] > Data Factory enables collaborative development between business and IT professionals.
Azure ExpressRoute creates private connections between Azure data centers and in
#### AzCopy
-[AzCopy](/azure/storage/common/storage-use-azcopy-v10) is a command-line utility that copies files to Azure Blob Storage via a standard internet connection. In a warehouse migration project, you can use AzCopy to upload extracted, compressed, and delimited text files before loading through PolyBase, or through a native Parquet reader if the exported files are in Parquet format. AzCopy can upload individual files, file selections, or file directories.
+[AzCopy](../../../storage/common/storage-use-azcopy-v10.md) is a command-line utility that copies files to Azure Blob Storage via a standard internet connection. In a warehouse migration project, you can use AzCopy to upload extracted, compressed, and delimited text files before loading through PolyBase, or through a native Parquet reader if the exported files are in Parquet format. AzCopy can upload individual files, file selections, or file directories.
#### Azure Data Box
However, PolyBase has some limitations. Rows to be loaded must be less than 1 MB
## Microsoft partners can help you migrate your data warehouse to Azure Synapse Analytics
-In addition to tools that can help you with various aspects of data warehouse migration, there are several experienced [Microsoft partners](/azure/synapse-analytics/partner/data-integration) that can bring their expertise to help you move your legacy on-premises data warehouse platform to Azure Synapse.
+In addition to tools that can help you with various aspects of data warehouse migration, there are several experienced [Microsoft partners](../../partner/data-integration.md) that can bring their expertise to help you move your legacy on-premises data warehouse platform to Azure Synapse.
## Next steps
-To learn more about implementing modern data warehouses, see the next article in this series: [Beyond Netezza migration, implementing a modern data warehouse in Microsoft Azure](7-beyond-data-warehouse-migration.md).
+To learn more about implementing modern data warehouses, see the next article in this series: [Beyond Netezza migration, implementing a modern data warehouse in Microsoft Azure](7-beyond-data-warehouse-migration.md).
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/7-beyond-data-warehouse-migration.md
One of the key reasons to migrate your existing data warehouse to Azure Synapse
- Azure Data Lake Storage&mdash;for cost effective data ingestion, staging, cleansing and transformation to free up data warehouse capacity occupied by fast growing staging tables -- Azure Data Factory&mdash;for collaborative IT and self-service data integration [with connectors](/azure/data-factory/connector-overview) to cloud and on-premises data sources and streaming data
+- Azure Data Factory&mdash;for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data
- [The Open Data Model Common Data Initiative](/common-data-model/)&mdash;to share consistent trusted data across multiple technologies including: - Azure Synapse
One of the key reasons to migrate your existing data warehouse to Azure Synapse
- ML.NET - Visual Studio .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale. -- [Azure HDInsight](/azure/hdinsight/)&mdash;to leverage big data analytical processing and join big data with Azure Synapse data by creating a Logical Data Warehouse using PolyBase
+- [Azure HDInsight](../../../hdinsight/index.yml)&mdash;to leverage big data analytical processing and join big data with Azure Synapse data by creating a Logical Data Warehouse using PolyBase
-- [Azure Event Hubs](/azure/event-hubs/event-hubs-about), [Azure Stream Analytics](/azure/stream-analytics/stream-analytics-introduction) and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka)&mdash;to integrate with live streaming data from within Azure Synapse
+- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md) and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka)&mdash;to integrate with live streaming data from within Azure Synapse
-There's often acute demand to integrate with [Machine Learning](/azure/synapse-analytics/machine-learning/what-is-machine-learning) to enable custom-built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in batch, on an event-driven basis, and on demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
+There's often acute demand to integrate with [Machine Learning](../../machine-learning/what-is-machine-learning.md) to enable custom-built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in batch, on an event-driven basis, and on demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
In addition, there's an opportunity to integrate Azure Synapse with Microsoft partner tools on Azure to shorten time to value.
Data Factory can support multiple use cases, including:
#### Data sources
-Azure Data Factory lets you use [connectors](/azure/data-factory/connector-overview) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
+Azure Data Factory lets you use [connectors](../../../data-factory/connector-overview.md) from both cloud and on-premises data sources. Agent software, known as a *self-hosted integration runtime*, securely accesses on-premises data sources and supports secure, scalable data transfer.
#### Transform data using Azure Data Factory
To achieve this goal, establish a set of common data names and definitions descr
> [!TIP] > Integrating data to create lake database logical entities in shared storage enables maximum reuse of common data assets.
-Microsoft has done this by creating a [lake database](/azure/synapse-analytics/database-designer/concepts-lake-database). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry-specific database templates to help standardize data in the lake. [Lake database templates](/azure/synapse-analytics/database-designer/concepts-database-templates) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse, and Azure ML. The following diagram shows a lake database used in Azure Synapse Analytics.
+Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry-specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse, and Azure ML. The following diagram shows a lake database used in Azure Synapse Analytics.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-analytics-lake-database.png" border="true" alt-text="Screenshot showing how a lake database can be used in Azure Synapse Analytics.":::
Azure Machine Learning Service provides a software development kit (SDK) and ser
> [!TIP] > Azure Synapse Spark is Microsoft's dynamically scalable Spark-as-a-service offering, providing scalable execution of data preparation, model development, and deployed model execution.
-[Azure Synapse Spark Pool Notebooks](/azure/synapse-analytics/spark/apache-spark-development-using-notebooks?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) is an Apache Spark service optimized to run on Azure, which:
+[Azure Synapse Spark Pool Notebooks](../../spark/apache-spark-development-using-notebooks.md?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) is an Apache Spark service optimized to run on Azure, which:
- Allows data engineers to build and execute scalable data preparation jobs using Azure Data Factory
By leveraging PolyBase data virtualization inside Azure Synapse, you can impleme
:::image type="content" source="../media/7-beyond-data-warehouse-migration/complex-data-warehouse-structure.png" alt-text="Screenshot showing an example of a complex data warehouse structure accessed through user interface methods.":::
-The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage (ADLS) and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](/azure/synapse-analytics/database-designer/concepts-lake-database) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
+The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage (ADLS) and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
> [!TIP] > A logical data warehouse architecture simplifies business user access to data and adds new value to what you already know in your data warehouse.
Leverage PolyBase and `COPY INTO` to go beyond your data warehouse. Simplify acc
## Next steps
-To learn more about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
+To learn more about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/1-design-performance-migration.md
Teradata supports data replication across nodes via the FALLBACK option, where t
The goal of the high availability architecture in Azure SQL Database is to guarantee that your database is up and running 99.9% of the time, without worrying about the impact of maintenance operations and outages. Azure automatically handles critical servicing tasks such as patching, backups, and Windows and SQL upgrades, as well as unplanned events such as underlying hardware, software, or network failures.
-Data storage in Azure Synapse is automatically [backed up](/azure/synapse-analytics/sql-data-warehouse/backup-and-restore) with snapshots. These snapshots are a built-in feature of the service that creates restore points. You don't have to enable this capability. Users can't currently delete automatic restore points, because the service uses them to maintain SLAs for recovery.
+Data storage in Azure Synapse is automatically [backed up](../../sql-data-warehouse/backup-and-restore.md) with snapshots. These snapshots are a built-in feature of the service that creates restore points. You don't have to enable this capability. Users can't currently delete automatic restore points, because the service uses them to maintain SLAs for recovery.
Azure Synapse Dedicated SQL pool takes snapshots of the data warehouse throughout the day, creating restore points that are available for seven days. This retention period can't be changed. SQL Data Warehouse supports an eight-hour recovery point objective (RPO). You can restore your data warehouse in the primary region from any one of the snapshots taken in the past seven days. If you require more granular backups, other user-defined options are available.
As part of the preparation phase, create an inventory of these objects to be mig
There may be facilities in the Azure environment that replace the functionality implemented as functions or stored procedures in the Teradata environment. In this case, it's more efficient to use the built-in Azure facilities rather than recoding the Teradata functions.
-[Data integration partners](/azure/synapse-analytics/partner/data-integration) offer tools and services that can automate the migration.
+[Data integration partners](../../partner/data-integration.md) offer tools and services that can automate the migration.
##### Functions
Azure enables the specification of data distribution methods for individual tabl
For large table-large table joins, hash distribute one or, ideally, both tables on one of the join columns that has a wide range of values to help ensure an even distribution. Join processing is then performed locally, because the data rows to be joined are already collocated on the same processing node.
-Another way to achieve local joins for small table-large table joins&mdash;typically dimension table to fact table in a star schema model&mdash;is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](/azure/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables))&mdash;in which case, the hash distribution approach as described above is more appropriate. For more information, see [Distributed tables design](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute).
+Another way to achieve local joins for small table-large table joins&mdash;typically dimension table to fact table in a star schema model&mdash;is to replicate the smaller dimension table across all nodes. This ensures that any value of the join key of the larger table will have a matching dimension row locally available. The overhead of replicating the dimension tables is relatively low, provided the tables aren't very large (see [Design guidance for replicated tables](../../sql-data-warehouse/design-guidance-for-replicated-tables.md))&mdash;in which case, the hash distribution approach as described above is more appropriate. For more information, see [Distributed tables design](../../sql-data-warehouse/sql-data-warehouse-tables-distribute.md).
#### Data indexing
Only one field per table can be used for partitioning. That field is frequently
#### Data table statistics
-Ensure that statistics on data tables are up to date by building in a [statistics](/azure/synapse-analytics/sql/develop-tables-statistics) step to ETL/ELT jobs.
+Ensure that statistics on data tables are up to date by building in a [statistics](../../sql/develop-tables-statistics.md) step to ETL/ELT jobs.
#### PolyBase for data loading
-PolyBase is the most efficient method for loading large amounts of data into the warehouse since it can leverage parallel loading streams. For more information, see [PolyBase data loading strategy](/azure/synapse-analytics/sql/load-data-overview).
+PolyBase is the most efficient method for loading large amounts of data into the warehouse since it can leverage parallel loading streams. For more information, see [PolyBase data loading strategy](../../sql/load-data-overview.md).
#### Use workload management
-Use [workload management](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management?context=/azure/synapse-analytics/context/context) instead of resource classes. ETL would be in its own workgroup and should be configured to have more resources per query (less concurrency by more resources). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is).
+Use [workload management](../../sql-data-warehouse/sql-data-warehouse-workload-management.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext) instead of resource classes. ETL would be in its own workgroup and should be configured to have more resources per query (less concurrency by more resources). For more information, see [What is dedicated SQL pool in Azure Synapse Analytics](../../sql-data-warehouse/sql-data-warehouse-overview-what-is.md).
## Next steps
-To learn more about ETL and load for Teradata migration, see the next article in this series: [Data migration, ETL, and load for Teradata migration](2-etl-load-migration-considerations.md).
+To learn more about ETL and load for Teradata migration, see the next article in this series: [Data migration, ETL, and load for Teradata migration](2-etl-load-migration-considerations.md).
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/2-etl-load-migration-considerations.md
If these data marts are implemented as physical tables, they'll require addition
> [!TIP] > The performance and scalability of Azure Synapse enables virtualization without sacrificing performance.
-With the advent of relatively low-cost scalable MPP architectures, such as Azure Synapse, and the inherent performance characteristics of such architectures, it may be that you can provide data mart functionality without having to instantiate the mart as a set of physical tables. This is achieved by effectively virtualizing the data marts via SQL views onto the main data warehouse, or via a virtualization layer using features such as views in Azure or the [data virtualization products of Microsoft partners](/azure/synapse-analytics/partner/data-integration). This approach simplifies or eliminates the need for additional storage and aggregation processing and reduces the overall number of database objects to be migrated.
+With the advent of relatively low-cost scalable MPP architectures, such as Azure Synapse, and the inherent performance characteristics of such architectures, it may be that you can provide data mart functionality without having to instantiate the mart as a set of physical tables. This is achieved by effectively virtualizing the data marts via SQL views onto the main data warehouse, or via a virtualization layer using features such as views in Azure or the [data virtualization products of Microsoft partners](../../partner/data-integration.md). This approach simplifies or eliminates the need for additional storage and aggregation processing and reduces the overall number of database objects to be migrated.
There's another potential benefit to this approach: by implementing the aggregation and join logic within a virtualization layer, and presenting external reporting tools via a virtualized view, the processing required to create these views is pushed down into the data warehouse, which is generally the best place to run joins, aggregations, and other related operations on large data volumes.
You can get an accurate number for the volume of data to be migrated for a give
> [!TIP] > Plan the approach to ETL migration ahead of time and leverage Azure facilities where appropriate.
-For ETL/ELT processing, legacy Teradata data warehouses may use custom-built scripts using Teradata utilities such as BTEQ and Teradata Parallel Transporter (TPT), or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Teradata data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL Design approach](/azure/synapse-analytics/sql-data-warehouse/design-elt-data-loading).
+For ETL/ELT processing, legacy Teradata data warehouses may use custom-built scripts using Teradata utilities such as BTEQ and Teradata Parallel Transporter (TPT), or third-party ETL tools such as Informatica or Ab Initio. Sometimes, Teradata data warehouses use a combination of ETL and ELT approaches that's evolved over time. When planning a migration to Azure Synapse, you need to determine the best way to implement the required ETL/ELT processing in the new environment while minimizing the cost and risk involved. To learn more about ETL and ELT processing, see [ELT vs ETL Design approach](../../sql-data-warehouse/design-elt-data-loading.md).
The following sections discuss migration options and make recommendations for various use cases. This flowchart summarizes one approach:
The following sections discuss migration options and make recommendations for va
The first step is always to build an inventory of ETL/ELT processes that need to be migrated. As with other steps, it's possible that the standard 'built-in' Azure features make it unnecessary to migrate some existing processes. For planning purposes, it's important to understand the scale of the migration to be performed.
-In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](/azure/data-factory/concepts-pipelines-activities?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
+In the preceding flowchart, decision 1 relates to a high-level decision about whether to migrate to a totally Azure-native environment. If you're moving to a totally Azure-native environment, we recommend that you re-engineer the ETL processing using [Pipelines and activities in Azure Data Factory](../../../data-factory/concepts-pipelines-activities.md?msclkid=b6ea2be4cfda11ec929ac33e6e00db98&tabs=data-factory) or [Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c). If you're not moving to a totally Azure-native environment, then decision 2 is whether an existing third-party ETL tool is already in use.
In the Teradata environment, some or all ETL processing may be performed by custom scripts using Teradata-specific utilities like BTEQ and TPT. In this case, your approach should be to re-engineer using Data Factory.
There's also a hybrid approach that uses both methods. For example, you can use
#### Orchestrate from Teradata or Azure?
-The recommended approach when moving to Azure Synapse is to orchestrate the data extract and loading from the Azure environment using [Azure Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779), as well as associated utilities, such as PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for most efficient data loading. This approach leverages the Azure capabilities and provides an easy method to build reusable data loading pipelines.
+The recommended approach when moving to Azure Synapse is to orchestrate the data extract and loading from the Azure environment using [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779), as well as associated utilities, such as PolyBase or [COPY INTO](/sql/t-sql/statements/copy-into-transact-sql), for most efficient data loading. This approach leverages the Azure capabilities and provides an easy method to build reusable data loading pipelines.
Other benefits of this approach include reduced impact on the Teradata system during the data load process since the management and loading process is running in Azure, and the ability to automate the process by using metadata-driven data load pipelines.
To summarize, our recommendations for migrating data and associated ETL processe
- Identify and understand the most efficient tools for data extraction and loading in both Teradata and Azure environments. Use the appropriate tools at each phase in the process. -- Use Azure facilities such as [Azure Synapse Pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) to orchestrate and automate the migration process while minimizing impact on the Teradata system.
+- Use Azure facilities such as [Azure Synapse Pipelines](../../get-started-pipelines.md?msclkid=b6e99db9cfda11ecbaba18ca59d5c95c) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) to orchestrate and automate the migration process while minimizing impact on the Teradata system.
## Next steps
-To learn more about security access operations, see the next article in this series: [Security, access, and operations for Teradata migrations](3-security-access-operations.md).
+To learn more about security access operations, see the next article in this series: [Security, access, and operations for Teradata migrations](3-security-access-operations.md).
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/3-security-access-operations.md
This article discusses connection methods for existing legacy Teradata environme
We assume there's a requirement to migrate the existing methods of connection and user, role, and permission structure as is. If this isn't the case, then you can use Azure utilities such as Azure portal to create and manage a new security regime.
-For more information on the [Azure Synapse security](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security#authorization) options see [Security whitepaper](/azure/synapse-analytics/guidance/security-white-paper-introduction).
+For more information on the [Azure Synapse security](../../sql-data-warehouse/sql-data-warehouse-overview-manage-security.md#authorization) options, see the [Security white paper](../../guidance/security-white-paper-introduction.md).
### Connection and authentication
In Azure Synapse, procedures can be used to provide this functionality.
\*\*\*\*\* In Azure Synapse, these features are handled outside of the database.
-Refer to [Azure Synapse Analytics security permissions](/azure/synapse-analytics/guidance/security-white-paper-introduction).
+Refer to [Azure Synapse Analytics security permissions](../../guidance/security-white-paper-introduction.md).
## Operational considerations
Database administrators can use Teradata Viewpoint to determine system status, t
Similarly, Azure Synapse provides a rich monitoring experience within the Azure portal to provide insights into your data warehouse workload. The Azure portal is the recommended tool when monitoring your data warehouse as it provides configurable retention periods, alerts, recommendations, and customizable charts and dashboards for metrics and logs.
-The portal also enables integration with other Azure monitoring services such as Operations Management Suite (OMS) and [Azure Monitor](/azure/synapse-analytics/monitoring/how-to-monitor-using-azure-monitor?msclkid=d5e9e46ecfe111ec8ba8ee5360e77c4c) (logs) to provide a holistic monitoring experience for not only the data warehouse but also the entire Azure analytics platform for an integrated monitoring experience.
+The portal also enables integration with other Azure monitoring services such as Operations Management Suite (OMS) and [Azure Monitor](../../monitoring/how-to-monitor-using-azure-monitor.md?msclkid=d5e9e46ecfe111ec8ba8ee5360e77c4c) (logs) to provide a holistic monitoring experience for not only the data warehouse but also the entire Azure analytics platform for an integrated monitoring experience.
> [!TIP] > Low-level and system-wide metrics are automatically logged in Azure Synapse. Resource utilization statistics for Azure Synapse are automatically logged within the system. The metrics include usage statistics for CPU, memory, cache, I/O, and temporary workspace for each query, as well as connectivity information (such as failed connection attempts).
-Azure Synapse provides a set of [Dynamic Management Views](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
+Azure Synapse provides a set of [Dynamic Management Views](../../sql-data-warehouse/sql-data-warehouse-manage-monitor.md?msclkid=3e6eefbccfe211ec82d019ada29b1834) (DMVs). These views are useful when actively troubleshooting and identifying performance bottlenecks with your workload.
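For example, the following minimal sketch queries the `sys.dm_pdw_exec_requests` DMV from PowerShell to list recent requests. The workspace and pool names are placeholders, and the sketch assumes the `SqlServer` module and an Azure AD access token from `Az.Accounts`.

```powershell
# Minimal sketch: list the 20 most recently submitted requests on a dedicated SQL pool.
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token

Invoke-Sqlcmd -ServerInstance "<workspace-name>.sql.azuresynapse.net" `
              -Database "<dedicated-pool-name>" `
              -AccessToken $token `
              -Query "SELECT TOP 20 request_id, status, submit_time, total_elapsed_time, command FROM sys.dm_pdw_exec_requests ORDER BY submit_time DESC;"
```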
For more information, see [Azure Synapse operations and management options](/azure/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance).
This information can also be used for capacity planning, determining the resourc
> [!TIP] > A major benefit of Azure is the ability to independently scale up and down compute resources on demand to handle peaky workloads cost-effectively.
-The architecture of Azure Synapse separates storage and compute, allowing each to scale independently. As a result, [compute resources can be scaled](/azure/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal) to meet performance demands independent of data storage. You can also pause and resume compute resources. A natural benefit of this architecture is that billing for compute and storage is separate. If a data warehouse isn't in use, you can save on compute costs by pausing compute.
+The architecture of Azure Synapse separates storage and compute, allowing each to scale independently. As a result, [compute resources can be scaled](../../sql-data-warehouse/quickstart-scale-compute-portal.md) to meet performance demands independent of data storage. You can also pause and resume compute resources. A natural benefit of this architecture is that billing for compute and storage is separate. If a data warehouse isn't in use, you can save on compute costs by pausing compute.
Compute resources can be scaled up or scaled back by adjusting the data warehouse units setting for the data warehouse. Loading and query performance will increase linearly as you add more data warehouse units.
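The following minimal sketch shows one way to scale, pause, and resume a dedicated SQL pool, assuming the `Az.Synapse` module, an authenticated session (`Connect-AzAccount`), and a pool created in a Synapse workspace. The workspace and pool names are placeholders.

```powershell
# Scale the dedicated SQL pool to a different data warehouse unit (DWU) level.
Update-AzSynapseSqlPool -WorkspaceName "<workspace-name>" -Name "<sql-pool-name>" -PerformanceLevel "DW300c"

# Pause compute to stop compute billing while keeping the data in place.
Suspend-AzSynapseSqlPool -WorkspaceName "<workspace-name>" -Name "<sql-pool-name>"

# Resume compute when the warehouse is needed again.
Resume-AzSynapseSqlPool -WorkspaceName "<workspace-name>" -Name "<sql-pool-name>"
```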
Adding more compute nodes adds more compute power and ability to leverage more p
## Next steps
-To learn more about visualization and reporting, see the next article in this series: [Visualization and reporting for Teradata migrations](4-visualization-reporting.md).
+To learn more about visualization and reporting, see the next article in this series: [Visualization and reporting for Teradata migrations](4-visualization-reporting.md).
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/4-visualization-reporting.md
In addition, all the required data needs to be migrated to ensure the same resul
If BI tools query views in the underlying data warehouse or data mart database, will those views still work? You might think so, but if the views contain proprietary SQL extensions specific to your legacy data warehouse DBMS that have no equivalent in Azure Synapse, you'll need to identify them and find a way to resolve them.
-Other issues like the behavior of nulls or data type variations across DBMS platforms need to be tested, in case they cause slightly different calculation results. Obviously, you want to minimize these issues and take all necessary steps to shield business users from any kind of impact. Depending on your legacy data warehouse system (such as Teradata), there are [tools](/azure/synapse-analytics/partner/data-integration) that can help hide these differences so that BI tools and applications are kept unaware of them and can run unchanged.
+Other issues like the behavior of nulls or data type variations across DBMS platforms need to be tested, in case they cause slightly different calculation results. Obviously, you want to minimize these issues and take all necessary steps to shield business users from any kind of impact. Depending on your legacy data warehouse system (such as Teradata), there are [tools](../../partner/data-integration.md) that can help hide these differences so that BI tools and applications are kept unaware of them and can run unchanged.
> [!TIP] > Use repeatable tests to ensure reports, dashboards, and other visualizations migrate successfully.
This breaks the dependency between business users utilizing self-service BI tool
> [!TIP] > Schema alterations to tune your data model for Azure Synapse can be hidden from users.
-By introducing data virtualization, any schema alternations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts, and any virtual tables would need to be changed so that users remain unaware of those changes and unaware of the migration. [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provides a useful data virtualization software.
+By introducing data virtualization, any schema alterations made during data warehouse and data mart migration to Azure Synapse (to optimize performance, for example) can be hidden from business users because they only access virtual tables in the data virtualization layer. If structural changes are needed, only the mappings between the data warehouse or data marts and the virtual tables need to change, so users remain unaware of those changes and of the migration. [Microsoft partners](../../partner/data-integration.md) provide useful data virtualization software.
## Identify high priority reports to migrate first
For information about how to migrate users, user groups, roles, and privileges,
> [!TIP] > Build an automated test suite to make tests repeatable.
-It's also best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate the post migration testing.
+It's also best practice to automate testing as much as possible, to make each test repeatable and to allow a consistent approach to evaluating results. This works well for known regular reports, and could be managed via [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) orchestration. If you already have a suite of test queries in place for regression testing, you could use the testing tools to automate the post migration testing.
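As a rough illustration, the following sketch runs a folder of saved test queries against the migrated database and compares the returned row counts with a recorded baseline. The folder layout, baseline file, and connection details are hypothetical, and the same pattern could equally be implemented as a pipeline in Synapse pipelines or Data Factory.

```powershell
# Minimal sketch of repeatable post-migration testing: run each saved test query and
# compare its row count with a recorded baseline.
$token    = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token
$baseline = Import-Csv "C:\migration-tests\expected-rowcounts.csv"   # columns: TestName, ExpectedRows

foreach ($test in $baseline) {
    $query  = Get-Content "C:\migration-tests\$($test.TestName).sql" -Raw
    $result = Invoke-Sqlcmd -ServerInstance "<workspace-name>.sql.azuresynapse.net" `
                            -Database "<dedicated-pool-name>" `
                            -AccessToken $token -Query $query
    $actual = ($result | Measure-Object).Count

    if ($actual -eq [int]$test.ExpectedRows) {
        Write-Output "PASS: $($test.TestName) returned $actual rows"
    } else {
        Write-Warning "FAIL: $($test.TestName) returned $actual rows, expected $($test.ExpectedRows)"
    }
}
```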
> [!TIP] > Leverage tools that can compare metadata lineage to verify results.
This substantially simplifies the data migration process, because the business w
> [!TIP] > Azure Data Factory and several third-party ETL tools support lineage.
-Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. Microsoft [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](/azure/data-factory/introduction?msclkid=2ccc66eccfde11ecaa58877e9d228779) lets you view lineage in mapping flows. Also, [Microsoft partners](/azure/synapse-analytics/partner/data-integration) provide automated metadata discovery, data lineage, and lineage comparison tools.
+Several ETL tools provide end-to-end lineage capability, and you may be able to make use of this via your existing ETL tool if you're continuing to use it with Azure Synapse. Microsoft [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de) or [Azure Data Factory](../../../data-factory/introduction.md?msclkid=2ccc66eccfde11ecaa58877e9d228779) lets you view lineage in mapping flows. Also, [Microsoft partners](../../partner/data-integration.md) provide automated metadata discovery, data lineage, and lineage comparison tools.
## Migrate BI tool semantic layers to Azure Synapse Analytics
A good way to get everything consistent across multiple BI tools is to create a
> [!TIP] > Use data virtualization to create a common semantic layer to guarantee consistency across all BI tools in an Azure Synapse environment.
-In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications, and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](/azure/synapse-analytics/partner/data-integration) on Azure to implement this. The following diagram shows how a common vocabulary in the Data Virtualization server lets multiple BI tools see a common semantic layer.
+In this way, you get consistency across all BI tools, while at the same time breaking the dependency between BI tools and applications, and the underlying physical data structures in Azure Synapse. Use [Microsoft partners](../../partner/data-integration.md) on Azure to implement this. The following diagram shows how a common vocabulary in the Data Virtualization server lets multiple BI tools see a common semantic layer.
:::image type="content" source="../media/4-visualization-reporting/data-virtualization-semantics.png" border="true" alt-text="Diagram with common data names and definitions that relate to the data virtualization server.":::
Finally, consider data virtualization to shield BI tools and applications from s
## Next steps
-To learn more about minimizing SQL issues, see the next article in this series: [Minimizing SQL issues for Teradata migrations](5-minimize-sql-issues.md).
+To learn more about minimizing SQL issues, see the next article in this series: [Minimizing SQL issues for Teradata migrations](5-minimize-sql-issues.md).
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/5-minimize-sql-issues.md
Automate and orchestrate the migration process by making use of the capabilities
Azure Data Factory is a cloud-based data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Data Factory, you can create and schedule data-driven workflows&mdash;called pipelines&mdash;that can ingest data from disparate data stores. It can process and transform data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning.
-By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Synapse pipelines](/azure/synapse-analytics/get-started-pipelines?msclkid=8f3e7e96cfed11eca432022bc07c18de).
+By creating metadata to list the data tables to be migrated and their location, you can use the Data Factory facilities to manage and automate parts of the migration process. You can also use [Synapse pipelines](../../get-started-pipelines.md?msclkid=8f3e7e96cfed11eca432022bc07c18de).
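As an illustration of the metadata-driven approach, the following sketch reads a list of tables and triggers a parameterized Data Factory pipeline for each one. The factory name, the pipeline name (`CopyTeradataTable`), and its parameters are hypothetical and would need to match your own pipeline definition; the sketch assumes the `Az.DataFactory` module.

```powershell
# Minimal sketch: trigger one copy pipeline run per table listed in a metadata file.
$tables = Import-Csv "C:\migration\table-list.csv"   # columns: SchemaName, TableName

foreach ($t in $tables) {
    Invoke-AzDataFactoryV2Pipeline -ResourceGroupName "<resource-group>" `
                                   -DataFactoryName "<data-factory-name>" `
                                   -PipelineName "CopyTeradataTable" `
                                   -Parameter @{ schemaName = $t.SchemaName; tableName = $t.TableName }
}
```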
## SQL DDL differences between Teradata and Azure Synapse
Access this information via views onto the catalog such as `DBC.ColumnsV` and ge
> [!TIP] > Third-party tools and services can automate data mapping tasks.
-There are [Microsoft partners](/azure/synapse-analytics/partner/data-integration) who offer tools and services to automate migration, including data-type mapping. Also, if a third-party ETL tool such as Informatica or Talend is already in use in the Teradata environment, that tool can implement any required data transformations.
+There are [Microsoft partners](../../partner/data-integration.md) who offer tools and services to automate migration, including data-type mapping. Also, if a third-party ETL tool such as Informatica or Talend is already in use in the Teradata environment, that tool can implement any required data transformations.
## SQL DML differences between Teradata and Azure Synapse
There may be facilities in the Azure environment that replace the functionality
> [!TIP] > Third-party products and services can automate migration of non-data elements.
-[Microsoft partners](/azure/synapse-analytics/partner/data-integration) offer tools and services that can automate the migration.
+[Microsoft partners](../../partner/data-integration.md) offer tools and services that can automate the migration.
See the following sections for more information on each of these elements.
Azure Synapse doesn't support the creation of triggers, but you can implement th
#### Sequences
-Azure Synapse sequences are handled in a similar way to Teradata, using [Identity to create surrogate keys](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity) or [managed identity](/azure/data-factory/data-factory-service-identity?tabs=data-factory).
+Azure Synapse sequences are handled in a similar way to Teradata, using [Identity to create surrogate keys](../../sql-data-warehouse/sql-data-warehouse-tables-identity.md) or [managed identity](../../../data-factory/data-factory-service-identity.md?tabs=data-factory).
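For example, a Teradata sequence that generates surrogate keys can often be replaced by an `IDENTITY` column on the target table. The following minimal sketch creates such a table in a dedicated SQL pool; the table, workspace, and database names are placeholders, and the sketch assumes the `SqlServer` module and an Azure AD access token.

```powershell
# Minimal sketch: use IDENTITY to generate surrogate keys in a dedicated SQL pool.
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token

$ddl = @"
CREATE TABLE dbo.DimCustomer
(
    CustomerKey  INT IDENTITY(1,1) NOT NULL,   -- surrogate key generated by the pool
    CustomerCode NVARCHAR(20)      NOT NULL,
    CustomerName NVARCHAR(100)     NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX);
"@

Invoke-Sqlcmd -ServerInstance "<workspace-name>.sql.azuresynapse.net" `
              -Database "<dedicated-pool-name>" `
              -AccessToken $token -Query $ddl
```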
#### Teradata to T-SQL mapping
To minimize the task of migrating the actual SQL code, follow these recommendati
- Automate the process wherever possible to minimize errors, risk, and time for the migration. -- Consider using specialist [Microsoft partners](/azure/synapse-analytics/partner/data-integration) and services to streamline the migration.
+- Consider using specialist [Microsoft partners](../../partner/data-integration.md) and services to streamline the migration.
## Next steps
-To learn more about Microsoft and third-party tools, see the next article in this series: [Tools for Teradata data warehouse migration to Azure Synapse Analytics](6-microsoft-third-party-migration-tools.md).
+To learn more about Microsoft and third-party tools, see the next article in this series: [Tools for Teradata data warehouse migration to Azure Synapse Analytics](6-microsoft-third-party-migration-tools.md).
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/6-microsoft-third-party-migration-tools.md
Microsoft Azure Data Factory is a fully managed, pay-as-you-use, hybrid data int
> [!TIP] > Data Factory allows you to build scalable data integration pipelines code-free.
-[Azure Data Factory connectors](/azure/data-factory/connector-overview?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual front-end, browser-based UI enables non-programmers to create and run process pipelines to ingest, transform, and load data. More experienced programmers have the option to incorporate custom code, such as Python programs.
+[Azure Data Factory connectors](../../../data-factory/connector-overview.md?msclkid=00086e4acff211ec9263dee5c7eb6e69) connect to external data sources and databases and have templates for common data integration tasks. A visual front-end, browser-based UI enables non-programmers to create and run process pipelines to ingest, transform, and load data. More experienced programmers have the option to incorporate custom code, such as Python programs.
> [!TIP] > Data Factory enables collaborative development between business and IT professionals.
Azure ExpressRoute creates private connections between Azure data centers and in
#### AzCopy
-[AzCopy](/azure/storage/common/storage-use-azcopy-v10) is a command line utility that copies files to Azure Blob Storage via a standard internet connection. In a warehouse migration project, you can use AzCopy to upload extracted, compressed, and delimited text files before loading through PolyBase, or a native Parquet reader if the exported files are Parquet format. AzCopy can upload individual files, file selections, or file directories.
+[AzCopy](../../../storage/common/storage-use-azcopy-v10.md) is a command line utility that copies files to Azure Blob Storage via a standard internet connection. In a warehouse migration project, you can use AzCopy to upload extracted, compressed, and delimited text files before loading through PolyBase, or a native Parquet reader if the exported files are Parquet format. AzCopy can upload individual files, file selections, or file directories.
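A minimal sketch of such an upload is shown below. The local path, storage account, container, and SAS token are placeholders, and the sketch assumes AzCopy v10 is installed and on the PATH.

```powershell
# Minimal sketch: recursively upload a folder of exported, delimited files to Blob Storage
# so they can be loaded with PolyBase or COPY INTO.
azcopy copy "C:\teradata-export\*" `
    "https://<storage-account>.blob.core.windows.net/<container>/staging/?<sas-token>" `
    --recursive
```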
#### Azure Data Box
However, PolyBase has some limitations. Rows to be loaded must be less than 1 MB
## Microsoft partners can help you migrate your data warehouse to Azure Synapse Analytics
-In addition to tools that can help you with various aspects of data warehouse migration, there are several practiced [Microsoft partners](/azure/synapse-analytics/partner/data-integration) that can bring their expertise to help you move your legacy on-premises data warehouse platform to Azure Synapse.
+In addition to tools that can help you with various aspects of data warehouse migration, there are several practiced [Microsoft partners](../../partner/data-integration.md) that can bring their expertise to help you move your legacy on-premises data warehouse platform to Azure Synapse.
## Next steps
-To learn more about implementing modern data warehouses, see the next article in this series: [Beyond Teradata migration, implementing a modern data warehouse in Microsoft Azure](7-beyond-data-warehouse-migration.md).
+To learn more about implementing modern data warehouses, see the next article in this series: [Beyond Teradata migration, implementing a modern data warehouse in Microsoft Azure](7-beyond-data-warehouse-migration.md).
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/7-beyond-data-warehouse-migration.md
One of the key reasons to migrate your existing data warehouse to Azure Synapse
- Azure Data Lake Storage&mdash;for cost effective data ingestion, staging, cleansing and transformation to free up data warehouse capacity occupied by fast growing staging tables -- Azure Data Factory&mdash;for collaborative IT and self-service data integration [with connectors](/azure/data-factory/connector-overview) to cloud and on-premises data sources and streaming data
+- Azure Data Factory&mdash;for collaborative IT and self-service data integration [with connectors](../../../data-factory/connector-overview.md) to cloud and on-premises data sources and streaming data
- [The Open Data Model Common Data Initiative](/common-data-model/)&mdash;to share consistent trusted data across multiple technologies including: - Azure Synapse
One of the key reasons to migrate your existing data warehouse to Azure Synapse
- ML.NET - Visual Studio .NET for Apache Spark to enable data scientists to use Azure Synapse data to train machine learning models at scale. -- [Azure HDInsight](/azure/hdinsight/)&mdash;to leverage big data analytical processing and join big data with Azure Synapse data by creating a Logical Data Warehouse using PolyBase
+- [Azure HDInsight](../../../hdinsight/index.yml)&mdash;to leverage big data analytical processing and join big data with Azure Synapse data by creating a Logical Data Warehouse using PolyBase
-- [Azure Event Hubs](/azure/event-hubs/event-hubs-about), [Azure Stream Analytics](/azure/stream-analytics/stream-analytics-introduction) and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka)&mdash;to integrate with live streaming data from within Azure Synapse
+- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md) and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka)&mdash;to integrate with live streaming data from within Azure Synapse
-There's often acute demand to integrate with [Machine Learning](/azure/synapse-analytics/machine-learning/what-is-machine-learning) to enable custom built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in-batch, on an event-driven basis and on-demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
+There's often acute demand to integrate with [Machine Learning](../../machine-learning/what-is-machine-learning.md) to enable custom built, trained machine learning models for use in Azure Synapse. This would enable in-database analytics to run at scale in-batch, on an event-driven basis and on-demand. The ability to exploit in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees that all get the same predictions and recommendations.
In addition, there's an opportunity to integrate Azure Synapse with Microsoft partner tools on Azure to shorten time to value.
Data Factory can support multiple use cases, including:
#### Data sources
-Data Factory lets you connect with [connectors](/azure/data-factory/connector-overview) from both cloud and on-premises data sources. Agent software, known as a Self-Hosted Integration Runtime, securely accesses on-premises data sources and supports secure, scalable data transfer.
+Data Factory lets you connect with [connectors](../../../data-factory/connector-overview.md) from both cloud and on-premises data sources. Agent software, known as a Self-Hosted Integration Runtime, securely accesses on-premises data sources and supports secure, scalable data transfer.
#### Transform data using Azure Data Factory
To achieve this goal, establish a set of common data names and definitions descr
> [!TIP] > Integrating data to create lake database logical entities in shared storage enables maximum reuse of common data assets.
-Microsoft has done this by creating a [lake database](/azure/synapse-analytics/database-designer/concepts-lake-database). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry specific database templates to help standardize data in the lake. [Lake database templates](/azure/synapse-analytics/database-designer/concepts-database-templates) provide schemas for predefined business areas, enabling data to the loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse and Azure ML. The following diagram shows a lake database used in Azure Synapse Analytics.
+Microsoft has done this by creating a [lake database](../../database-designer/concepts-lake-database.md). The lake database is a common language for business entities that represents commonly used concepts and activities across a business. Azure Synapse Analytics provides industry-specific database templates to help standardize data in the lake. [Lake database templates](../../database-designer/concepts-database-templates.md) provide schemas for predefined business areas, enabling data to be loaded into a lake database in a structured way. The power comes when data integration software is used to create lake database common data assets. This results in self-describing trusted data that can be consumed by applications and analytical systems. Create a lake database in Azure Data Lake storage using Azure Data Factory, and consume it with Power BI, Azure Synapse Spark, Azure Synapse, and Azure ML. The following diagram shows a lake database used in Azure Synapse Analytics.
:::image type="content" source="../media/7-beyond-data-warehouse-migration/azure-synapse-analytics-lake-database.png" border="true" alt-text="Screenshot showing how a lake database can be used in Azure Synapse Analytics.":::
Azure Machine Learning Service provides a software development kit (SDK) and ser
> [!TIP] > Azure Synapse Spark is Microsoft's dynamically scalable Spark-as-a-service offering scalable execution of data preparation, model development and deployed model execution.
-[Azure Synapse Spark Pool Notebooks](/azure/synapse-analytics/spark/apache-spark-development-using-notebooks?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) is an Apache Spark service optimized to run on Azure which:
+[Azure Synapse Spark Pool Notebooks](../../spark/apache-spark-development-using-notebooks.md?msclkid=cbe4b8ebcff511eca068920ea4bf16b9) is an Apache Spark service optimized to run on Azure which:
- Allows data engineers to build and execute scalable data preparation jobs using Azure Data Factory
By leveraging PolyBase data virtualization inside Azure Synapse, you can impleme
:::image type="content" source="../media/7-beyond-data-warehouse-migration/complex-data-warehouse-structure.png" alt-text="Screenshot showing an example of a complex data warehouse structure accessed through user interface methods.":::
-The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage (ADLS) and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](/azure/synapse-analytics/database-designer/concepts-lake-database) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
+The previous diagram shows how other technologies of the Microsoft analytical ecosystem can be combined with the capability of Azure Synapse logical data warehouse architecture. For example, data can be ingested into Azure Data Lake Storage (ADLS) and curated using Azure Data Factory to create trusted data products that represent Microsoft [lake database](../../database-designer/concepts-lake-database.md) logical data entities. This trusted, commonly understood data can then be consumed and reused in different analytical environments such as Azure Synapse, Azure Synapse Spark Pool Notebooks, or Azure Cosmos DB. All insights produced in these environments are accessible via a logical data warehouse data virtualization layer made possible by PolyBase.
> [!TIP] > A logical data warehouse architecture simplifies business user access to data and adds new value to what you already know in your data warehouse.
Leverage PolyBase and `COPY INTO` to go beyond your data warehouse. Simplify acc
## Next steps
-To learn more about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
+To learn more about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
This section presents reference code templates to describe how to use and invoke
#### Read Request - `synapsesql` method signature
+##### [Scala](#tab/scala)
+ ```Scala synapsesql(tableName:String) => org.apache.spark.sql.DataFrame ```
+##### [Python](#tab/python)
+ ```python synapsesql(table_name: str) -> org.apache.spark.sql.DataFrame ```+ #### Read using Azure AD based authentication
-##### [Scala](#tab/scala)
+##### [Scala](#tab/scala1)
```Scala //Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB
val dfToReadFromTable:DataFrame = spark.read.
dfToReadFromTable.show() ```
-##### [Python](#tab/python)
+##### [Python](#tab/python1)
```python # Add required imports
dfToReadFromTable = (spark.read
# Show contents of the dataframe dfToReadFromTable.show() ```+ #### Read using basic authentication
-##### [Scala](#tab/scala1)
+##### [Scala](#tab/scala2)
```Scala //Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB
val dfToReadFromTable:DataFrame = spark.read.
dfToReadFromTable.show() ```
-##### [Python](#tab/python1)
+##### [Python](#tab/python2)
```python # Add required imports
dfToReadFromTable = (spark.read
dfToReadFromTable.show() ```+ ### Write to Azure Synapse Dedicated SQL Pool
synapsesql(tableName:String,
* Spark Pool Version 3.1.2
+##### [Scala](#tab/scala3)
+ ```Scala synapsesql(tableName:String, tableType:String = Constants.INTERNAL,
synapsesql(tableName:String,
callBackHandle=Option[(Map[String, Any], Option[Throwable])=>Unit]):Unit ```
+##### [Python](#tab/python3)
+ ```python synapsesql(table_name: str, table_type: str = Constants.INTERNAL, location: str = None) -> None ```+ #### Write using Azure AD based authentication Following is a comprehensive code template that describes how to use the Connector for write scenarios:
-##### [Scala](#tab/scala2)
+##### [Scala](#tab/scala4)
```Scala //Add required imports
readDF.
if(errorDuringWrite.isDefined) throw errorDuringWrite.get ```
-##### [Python](#tab/python2)
+##### [Python](#tab/python4)
```python
from com.microsoft.spark.sqlanalytics.Constants import Constants
"/path/to/external/table")) ```+ #### Write using basic authentication Following code snippet replaces the write definition described in the [Write using Azure AD based authentication](#write-using-azure-ad-based-authentication) section, to submit write request using SQL basic authentication approach:
-##### [Scala](#tab/scala3)
+##### [Scala](#tab/scala5)
```Scala //Define write options to use SQL basic authentication
readDF.
callBackHandle = Some(callBackFunctionToReceivePostWriteMetrics)) ```
-##### [Python](#tab/python3)
+##### [Python](#tab/python5)
```python # Write using Basic Auth to Internal table
from com.microsoft.spark.sqlanalytics.Constants import Constants
"/path/to/external/table")) ```+ In a basic authentication approach, in order to read data from a source storage path other configuration options are required. Following code snippet provides an example to read from an Azure Data Lake Storage Gen2 data source using Service Principal credentials:
Spark DataFrame's `createOrReplaceTempView` can be used to access data fetched i
* Now, change the language preference on the Notebook to `PySpark (Python)` and fetch data from the registered view `<temporary_view_name>`
- ```Python
+```Python
spark.sql("select * from <temporary_view_name>").show()
- ```
+```
### Response handling
synapse-analytics Develop Storage Files Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md
You can use the following combinations of authorization and Azure Storage types:
## Firewall protected storage
-You can configure storage accounts to allow access to specific serverless SQL pool by creating a [resource instance rule](../../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-from-azure-resource-instances-preview).
+You can configure storage accounts to allow access to specific serverless SQL pool by creating a [resource instance rule](../../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-from-azure-resource-instances).
When accessing storage that is protected with the firewall, you can use **User Identity** or **Managed Identity**. > [!NOTE]
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-external-tables.md
CREATE EXTERNAL DATA SOURCE SqlOnDemandDemo WITH (
); ``` > [!NOTE]
-> The SQL users needs to have proper permissions on database scoped credentials to access the data source in Azure Synapse Analytics Serverless SQL Pool. [Access external storage using serverless SQL pool in Azure Synapse Analytics](https://docs.microsoft.com/azure/synapse-analytics/sql/develop-storage-files-overview?tabs=impersonation#permissions).
+> The SQL user needs to have proper permissions on database scoped credentials to access the data source in Azure Synapse Analytics serverless SQL pool. For more information, see [Access external storage using serverless SQL pool in Azure Synapse Analytics](./develop-storage-files-overview.md?tabs=impersonation#permissions).
The following example creates an external data source for Azure Data Lake Gen2 pointing to the publicly available New York data set:
The external table is now created, for future exploration of the content of this
## Next steps
-See the [CETAS](develop-tables-cetas.md) article for how to save query results to an external table in Azure Storage. Or you can start querying [Apache Spark for Azure Synapse external tables](develop-storage-files-spark-tables.md).
+See the [CETAS](develop-tables-cetas.md) article for how to save query results to an external table in Azure Storage. Or you can start querying [Apache Spark for Azure Synapse external tables](develop-storage-files-spark-tables.md).
synapse-analytics Query Folders Multiple Csv Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-folders-multiple-csv-files.md
Since you have only one folder that matches the criteria, the query result is th
## Traverse folders recursively
-Serverless SQL pool can recursively traverse folders if you specify /** at the end of path. The following query will read all files from all folders and subfolders located in the *csv* folder.
+Serverless SQL pool can recursively traverse folders if you specify `/**` at the end of the path. The following query will read all files from all folders and subfolders located in the *csv/taxi* folder.
```sql SELECT
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Confirm the storage account accessed is using the Archive access tier.
The Archive access tier is an offline tier. While a blob is in the Archive access tier, it can't be read or modified.
-To read or download a blob in the Archive tier, rehydrate it to an online tier. See [Archive access tier](/azure/storage/blobs/access-tiers-overview#archive-access-tier).
+To read or download a blob in the Archive tier, rehydrate it to an online tier. See [Archive access tier](../../storage/blobs/access-tiers-overview.md#archive-access-tier).
### [0x80070057](#tab/x80070057)
If you have [partitioned files](query-specific-files.md), make sure you use [par
### Copy and transform data (CETAS)
-Learn how to [store query results to storage](create-external-table-as-select.md) by using the CETAS command.
+Learn how to [store query results to storage](create-external-table-as-select.md) by using the CETAS command.
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
This section will show you how to configure a VM with FSLogix. You'll need to fo
To configure FSLogix:
-1. [Update or install FSLogix](/fslogix/install-ht) on your session host, if needed.
+1. [Update or install FSLogix](/fslogix/install-ht) on your session host, if needed.
+ > [!NOTE]
+ > If the session host is created using the Azure Virtual Desktop service, FSLogix should already be pre-installed.
2. Follow the instructions in [Configure profile container registry settings](/fslogix/configure-profile-container-tutorial#configure-profile-container-registry-settings) to create the **Enabled** and **VHDLocations** registry values. Set the value of **VHDLocations** to `\\<Storage-account-name>.file.core.windows.net\<file-share-name>`.
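As a rough sketch of step 2, the registry values can also be created from an elevated PowerShell session on the session host, assuming the standard `HKLM\SOFTWARE\FSLogix\Profiles` location described in the linked tutorial. Replace the storage account and file share names with your own.

```powershell
# Minimal sketch: enable FSLogix profile containers and point them at the Azure file share.
$regPath = "HKLM:\SOFTWARE\FSLogix\Profiles"
New-Item -Path $regPath -Force | Out-Null   # create the key if it doesn't already exist

New-ItemProperty -Path $regPath -Name "Enabled" -PropertyType DWord -Value 1 -Force | Out-Null
New-ItemProperty -Path $regPath -Name "VHDLocations" -PropertyType String `
    -Value "\\<Storage-account-name>.file.core.windows.net\<file-share-name>" -Force | Out-Null
```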
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
Title: Azure Virtual Desktop required URL list - Azure
description: A list of URLs you must unblock to ensure your Azure Virtual Desktop deployment works as intended. Previously updated : 05/12/2022 Last updated : 05/26/2022
The Azure virtual machines you create for Azure Virtual Desktop must have access
|wvdportalstorageblob.blob.core.windows.net|443|Azure portal support|AzureCloud| | 169.254.169.254 | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A | | 168.63.129.16 | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A |
+| oneocsp.microsoft.com | 443 | Certificates | N/A |
+| microsoft.com | 443 | Certificates | N/A |
A [Service Tag](../virtual-network/service-tags-overview.md) represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. Service Tags can be used in both Network Security Group ([NSG](../virtual-network/network-security-groups-overview.md)) and [Azure Firewall](../firewall/service-tags.md) rules to restrict outbound network access. Service Tags can be also used in User Defined Route ([UDR](../virtual-network/virtual-networks-udr-overview.md#user-defined)) to customize traffic routing behavior.
The Azure virtual machines you create for Azure Virtual Desktop must have access
|wvdportalstorageblob.blob.core.usgovcloudapi.net|443|Azure portal support|AzureCloud| | 169.254.169.254 | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A | | 168.63.129.16 | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A |
+| ocsp.msocsp.com | 443 | Certificates | N/A |
> [!IMPORTANT] > We are currently transitioning the URLs we use for Agent traffic. We still support the URLs below, however we encourage you to switch to ***.prod.warm.ingest.monitor.core.usgovcloudapi.net** as soon as possible.
virtual-machines Ephemeral Os Disks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks-faq.md
+
+ Title: FAQ Ephemeral OS disks
+description: Frequently asked questions on ephemeral OS disks for Azure VMs.
++++ Last updated : 05/26/2022+++++
+# Frequently asked questions about Ephemeral OS disks
+
+**Q: What is the size of the local OS Disks?**
+
+A: Platform, Shared Image Gallery, and custom images are supported, up to the VM cache size with OS cache placement and up to the temp disk size with temp disk placement. In both cases, all reads and writes to the OS disk are local to the same node as the virtual machine.
+
+**Q: Can the ephemeral OS disk be resized?**
+
+A: No, once the ephemeral OS disk is provisioned, the OS disk cannot be resized.
+
+**Q: Can the ephemeral OS disk placement be modified after creation of VM?**
+
+A: No, once the ephemeral OS disk is provisioned, the OS disk placement cannot be changed. However, you can recreate the VM via ARM template deployment, PowerShell, or CLI with the OS disk placement of your choice. Recreating the VM deletes the data on the OS disk and reprovisions the OS.
+
+**Q: Is there any Temp disk created if image size equals to Temp disk size of VM size selected?**
+
+A: No, in that case, there won't be any Temp disk drive created.
+
+**Q: Are Ephemeral OS disks supported on low-priority VMs and Spot VMs?**
+
+A: Yes. There is no Stop-Deallocate option for ephemeral VMs; instead, users need to delete them rather than deallocate them.
+
+**Q: Can I attach a Managed Disks to an Ephemeral VM?**
+
+A: Yes, you can attach a managed data disk to a VM that uses an ephemeral OS disk.
+
+**Q: Will all VM sizes be supported for ephemeral OS disks?**
+
+A: No, most Premium Storage VM sizes are supported (DS, ES, FS, GS, M, etc.). To know whether a particular VM size supports ephemeral OS disks, you can:
+
+Call `Get-AzComputeResourceSku` PowerShell cmdlet
+```azurepowershell-interactive
+
+$vmSizes=Get-AzComputeResourceSku | where{$_.ResourceType -eq 'virtualMachines' -and $_.Locations.Contains('CentralUSEUAP')}
+
+foreach($vmSize in $vmSizes)
+{
+ foreach($capability in $vmSize.capabilities)
+ {
+ if($capability.Name -eq 'EphemeralOSDiskSupported' -and $capability.Value -eq 'true')
+ {
+ $vmSize
+ }
+ }
+}
+```
+
+**Q: Can the ephemeral OS disk be applied to existing VMs and scale sets?**
+
+A: No, ephemeral OS disk can only be used during VM and scale set creation.
+
+**Q: Can you mix ephemeral and normal OS disks in a scale set?**
+
+A: No, you can't have a mix of ephemeral and persistent OS disk instances within the same scale set.
+
+**Q: Can the ephemeral OS disk be created using PowerShell or CLI?**
+
+A: Yes, you can create VMs with Ephemeral OS Disk using REST, Templates, PowerShell, and CLI.
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
Key differences between persistent and ephemeral OS disks:
| **Redeploy** | OS disk data is preserved | Data on the OS disk is deleted, OS is reprovisioned | | **Stop/ Start of VM** | OS disk data is preserved | Not Supported | | **Page file placement**| For Windows, page file is stored on the resource disk| For Windows, page file is stored on the OS disk (for both OS cache placement and Temp disk placement).|
+| **Maintenance of VM/VMSS using [healing](understand-vm-reboots.md#unexpected-downtime)** | OS disk data is preserved | OS disk data is not preserved |
+| **Maintenance of VM/VMSS using [Live Migration](maintenance-and-updates.md#live-migration)** | OS disk data is preserved | OS disk data is preserved |
+## Placement options for Ephemeral OS disks
+Ephemeral OS disk can be stored either on VM's OS cache disk or VM's temp/resource disk.
+[DiffDiskPlacement](/rest/api/compute/virtualmachines/list#diffdiskplacement) is the new property that can be used to specify where you want to place the Ephemeral OS disk. With this feature, when a Windows VM is provisioned, we configure the pagefile to be located on the OS Disk.
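A minimal sketch of setting the placement from Azure PowerShell is shown below. It assumes a recent `Az.Compute` module that exposes the `-DiffDiskSetting` and `-DiffDiskPlacement` parameters on `Set-AzVMOSDisk`; the VM name, size, and the rest of the VM configuration are placeholders.

```powershell
# Minimal sketch: configure a VM definition so the ephemeral OS disk is placed on the
# temp/resource disk (use CacheDisk for OS cache placement).
$vm = New-AzVMConfig -VMName "<vm-name>" -VMSize "Standard_DS3_v2"

$vm = Set-AzVMOSDisk -VM $vm `
        -CreateOption FromImage `
        -Caching ReadOnly `
        -DiffDiskSetting Local `
        -DiffDiskPlacement ResourceDisk

# The $vm object would then be completed with an image reference and networking, and
# passed to New-AzVM in the usual way.
```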
## Size requirements
If you want to opt for **Temp disk placement**: Standard Ubuntu server image fro
> [!Important] > If opting for temp disk placement, the final temp disk size = (initial temp disk size - OS image size).
+With **Temp disk placement**, the ephemeral OS disk is placed on the temp disk, so it shares IOPS with the temp disk according to the VM size you choose.
+ Basic Linux and Windows Server images in the Marketplace that are denoted by `[smallsize]` tend to be around 30 GiB and can use most of the available VM sizes. Ephemeral disks also require that the VM size supports **Premium storage**. The sizes usually (but not always) have an `s` in the name, like DSv2 and EsV3. For more information, see [Azure VM sizes](sizes.md) for details around which sizes support Premium storage.
-## Placement options for Ephemeral OS disks
-Ephemeral OS disk can be stored either on VM's OS cache disk or VM's temp/resource disk.
-[DiffDiskPlacement](/rest/api/compute/virtualmachines/list#diffdiskplacement) is the new property that can be used to specify where you want to place the Ephemeral OS disk.
-With this feature, when a Windows VM is provisioned, we configure the pagefile to be located on the OS Disk.
## Unsupported features - Capturing VM images
For example, If you try to create a Trusted launch Ephemeral OS disk VM using OS
This is because the temp storage for [Standard_DS4_v2](dv2-dsv2-series.md) is 56 GiB, and 1 GiB is reserved for VMGS when using trusted launch. For the same example, if you create a standard ephemeral OS disk VM instead, you won't get any errors and the operation will succeed.
-> [!NOTE]
+> [!Important]
> > While using ephemeral disks for Trusted Launch VMs, keys and secrets generated or sealed by the vTPM after VM creation may not be persisted for operations like reimaging and platform events like service healing. > For more information on [how to deploy a trusted launch VM](trusted-launch-portal.md)
-## Frequently asked questions
-
-**Q: What is the size of the local OS Disks?**
-
-A: We support platform, Shared Image Gallery, and custom images, up to the VM cache size with OS cache placement and up to Temp disk size with Temp disk placement, where all read/writes to the OS disk will be local on the same node as the Virtual Machine.
-
-**Q: Can the ephemeral OS disk be resized?**
-
-A: No, once the ephemeral OS disk is provisioned, the OS disk cannot be resized.
-
-**Q: Can the ephemeral OS disk placement be modified after creation of VM?**
-
-A: No, once the ephemeral OS disk is provisioned, the OS disk placement cannot be changed. But the VM can be recreated via ARM template deployment/PowerShell/CLI by updating the OS disk placement of choosing. This would result in the recreation of the VM with Data on the OS disk deleted and OS is reprovisioned.
-
-**Q: Is there any Temp disk created if image size equals to Temp disk size of VM size selected?**
-
-A: No, in that case, there won't be any Temp disk drive created.
-
-**Q: Are Ephemeral OS disks supported on low-priority VMs and Spot VMs?**
-
-A: Yes. There is no option of Stop-Deallocate for Ephemeral VMs, rather users need to Delete instead of deallocating them.
-
-**Q: Can I attach a Managed Disks to an Ephemeral VM?**
-
-A: Yes, you can attach a managed data disk to a VM that uses an ephemeral OS disk.
-
-**Q: Will all VM sizes be supported for ephemeral OS disks?**
-
-A: No, most Premium Storage VM sizes are supported (DS, ES, FS, GS, M, etc.). To know whether a particular VM size supports ephemeral OS disks, you can:
-
-Call `Get-AzComputeResourceSku` PowerShell cmdlet
-```azurepowershell-interactive
-
-$vmSizes=Get-AzComputeResourceSku | where{$_.ResourceType -eq 'virtualMachines' -and $_.Locations.Contains('CentralUSEUAP')}
-
-foreach($vmSize in $vmSizes)
-{
- foreach($capability in $vmSize.capabilities)
- {
- if($capability.Name -eq 'EphemeralOSDiskSupported' -and $capability.Value -eq 'true')
- {
- $vmSize
- }
- }
-}
-```
-
-**Q: Can the ephemeral OS disk be applied to existing VMs and scale sets?**
-
-A: No, ephemeral OS disk can only be used during VM and scale set creation.
-
-**Q: Can you mix ephemeral and normal OS disks in a scale set?**
-
-A: No, you can't have a mix of ephemeral and persistent OS disk instances within the same scale set.
-
-**Q: Can the ephemeral OS disk be created using PowerShell or CLI?**
-
-A: Yes, you can create VMs with Ephemeral OS Disk using REST, Templates, PowerShell, and CLI.
- > [!NOTE] > > Ephemeral disk will not be accessible through the portal. You will receive a "Resource not Found" or "404" error when accessing the ephemeral disk which is expected.
A: Yes, you can create VMs with Ephemeral OS Disk using REST, Templates, PowerSh
## Next steps Create a VM with ephemeral OS disk using [Azure Portal/CLI/Powershell/ARM template](ephemeral-os-disks-deploy.md).
+Check out the [frequently asked questions about ephemeral OS disks](ephemeral-os-disks-faq.md).
virtual-machines Agent Dependency Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-dependency-linux.md
az vm extension list --resource-group myResourceGroup --vm-name myVM -o table
Extension execution output is logged to the following file: ```
-/opt/microsoft/dependency-agent/log/install.log
+/var/opt/microsoft/dependency-agent/log/install.log
``` ### Support
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
To generalize your Windows VM, follow these steps:
2. Open a Command Prompt window as an administrator.
-3. Delete the panther directory (C:\Windows\Panther). Then change the directory to %windir%\system32\sysprep, and then run `sysprep.exe`.
-
-4. In the **System Preparation Tool** dialog box, select **Enter System Out-of-Box Experience (OOBE)** and select the **Generalize** check box.
-
-5. For **Shutdown Options**, select **Shutdown**.
-
-6. Select **OK**.
-
- :::image type="content" source="windows/media/upload-generalized-managed/sysprepgeneral.png" alt-text="![Start Sysprep](./media/upload-generalized-managed/sysprepgeneral.png)":::
+3. Delete the panther directory (C:\Windows\Panther).
+
+4. Change the directory to %windir%\system32\sysprep, and then run:
+ ```
+ sysprep /generalize /shutdown /mode:vm
+ ```
+5. The VM will shut down when Sysprep finishes generalizing it. Do not restart the VM.
+
-6. When Sysprep completes, it shuts down the VM. Do not restart the VM.
> [!TIP] > **Optional** Use [DISM](/windows-hardware/manufacture/desktop/dism-optimize-image-command-line-options) to optimize your image and reduce your VM's first boot time.
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
description: How to set up NVIDIA GPU drivers for N-series VMs running Linux in
-ms.subervice: vm-sizes-gpu
+
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
The Run Command feature uses the virtual machine (VM) agent to run shell scripts
## Benefits
-You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machines-run-commands/run-command), or [Azure CLI](/cli/azure/vm/run-command#az-vm-run-command-invoke) for Linux VMs.
+You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines remotely by using the VM agent. You use Run Command through the Azure portal, [REST API](/rest/api/compute/virtual-machine-run-commands), or [Azure CLI](/cli/azure/vm/run-command#az-vm-run-command-invoke) for Linux VMs.
This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the only ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of network or administrative user configuration.
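Run Command can also be invoked from Azure PowerShell. The following minimal sketch runs a shell script on a Linux VM; the resource group, VM name, and script path are placeholders, and the sketch assumes the `Az.Compute` module and an authenticated session.

```powershell
# Minimal sketch: run a local shell script on a Linux VM through Run Command.
Invoke-AzVMRunCommand -ResourceGroupName "<resource-group>" `
                      -VMName "<vm-name>" `
                      -CommandId "RunShellScript" `
                      -ScriptPath "./check-disk-space.sh"
```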
virtual-machines Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/upload-vhd.md
You can now upload VHD straight into a managed disk. For instructions, see [Uplo
You can also create a customized VM in Azure and then copy the OS disk and attach it to a new VM to create another copy. This is fine for testing, but if you want to use an existing Azure VM as the model for multiple new VMs, create an *image* instead. For more information about creating an image from an existing Azure VM, see [Create a custom image of an Azure VM by using the CLI](tutorial-custom-images.md).
-If you want to copy an existing VM to another region, you might want to use azcopy to [creat a copy of a disk in another region](disks-upload-vhd-to-managed-disk-cli.md#copy-a-managed-disk).
+If you want to copy an existing VM to another region, you might want to use azcopy to [create a copy of a disk in another region](disks-upload-vhd-to-managed-disk-cli.md#copy-a-managed-disk).
Otherwise, you should take a snapshot of the VM and then create a new OS VHD from the snapshot.
virtual-machines Image Builder Gallery Update Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-gallery-update-image-version.md
Last updated 03/02/2021
-ms.subervice: image-builder
+ # Create a new Windows VM image version from an existing image version using Azure Image Builder
virtual-machines Image Builder Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-gallery.md
Last updated 03/02/2021
-ms.subervice: image-builder
-ms.colletion: windows
++ # Create a Windows image and distribute it to an Azure Compute Gallery
virtual-machines Image Builder Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-powershell.md
Last updated 03/02/2021
-ms.subervice: image-builder
-ms.colletion: windows
++ # Create a Windows VM with Azure Image Builder using PowerShell
virtual-machines Image Builder Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-vnet.md
Last updated 03/02/2021
-ms.subervice: image-builder
-ms.colletion: windows
++ # Use Azure Image Builder for Windows VMs allowing access to an existing Azure VNET
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder.md
Last updated 04/23/2021
-ms.subervice: image-builder
-ms.colletion: windows
++ # Create a Windows VM with Azure Image Builder
virtual-machines On Prem To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/on-prem-to-azure.md
description: Create VMs in Azure using VHDs uploaded from other clouds like AWS
-ms.subervice: disks
+ vm-windows
virtual-machines High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker.md
vm-windows Previously updated : 12/07/2021 Last updated : 05/26/2022
The STONITH device uses a Service Principal to authorize against Microsoft Azure
The Service Principal does not have permissions to access your Azure resources by default. You need to give the Service Principal permissions to start and stop (power-off) all virtual machines of the cluster. If you did not already create the custom role, you can create it using [PowerShell](../../../role-based-access-control/role-assignments-powershell.md) or [Azure CLI](../../../role-based-access-control/role-assignments-cli.md).
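Once the role definition file described below is filled in, creating the role itself is a single call; here is a minimal Azure PowerShell sketch, assuming the JSON is saved locally as FenceAgentRole.json (a hypothetical file name).

```azurepowershell-interactive
# Create the custom role from the completed definition file (file name is a placeholder)
New-AzRoleDefinition -InputFile "FenceAgentRole.json"

# Optionally confirm the role now exists
Get-AzRoleDefinition -Name "Linux Fence Agent Role"
```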
-Use the following content for the input file. You need to adapt the content to your subscriptions that is, replace c276fc76-9cd4-44c9-99a7-4fd71546436e and e91d47c4-76f3-4271-a796-21b4ecfe3624 with the Ids of your subscription. If you only have one subscription, remove the second entry in AssignableScopes.
+Use the following content for the input file. You need to adapt the content to your subscriptions. That is, replace *xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx* and *yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy* with your own subscription IDs. If you only have one subscription, remove the second entry in AssignableScopes.
```json { "Name": "Linux Fence Agent Role", "description": "Allows to power-off and start virtual machines", "assignableScopes": [
- "/subscriptions/e663cc2d-722b-4be1-b636-bbd9e4c60fd9",
- "/subscriptions/e91d47c4-76f3-4271-a796-21b4ecfe3624"
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "/subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
], "actions": [ "Microsoft.Compute/*/read",
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
vm-windows Previously updated : 04/26/2022 Last updated : 05/26/2022
This section applies only if you're using a STONITH device that's based on an Az
By default, the service principal doesn't have permissions to access your Azure resources. You need to give the service principal permissions to start and stop (deallocate) all virtual machines in the cluster. If you didn't already create the custom role, you can do so by using [PowerShell](../../../role-based-access-control/custom-roles-powershell.md#create-a-custom-role) or the [Azure CLI](../../../role-based-access-control/custom-roles-cli.md).
-Use the following content for the input file. You need to adapt the content to your subscriptions. That is, replace *c276fc76-9cd4-44c9-99a7-4fd71546436e* and *e91d47c4-76f3-4271-a796-21b4ecfe3624* with your own subscription IDs. If you have only one subscription, remove the second entry under AssignableScopes.
+Use the following content for the input file. You need to adapt the content to your subscriptions. That is, replace *xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx* and *yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy* with your own subscription IDs. If you have only one subscription, remove the second entry under AssignableScopes.
```json { "Name": "Linux fence agent Role", "description": "Allows to power-off and start virtual machines", "assignableScopes": [
- "/subscriptions/e663cc2d-722b-4be1-b636-bbd9e4c60fd9",
- "/subscriptions/e91d47c4-76f3-4271-a796-21b4ecfe3624"
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "/subscriptions/yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
], "actions": [ "Microsoft.Compute/*/read",
virtual-machines Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide.md
For more documentation, see [this article][vpn-gateway-create-site-to-site-rm-po
#### VNet to VNet Connection Using Multi-Site VPN, you need to configure a separate Azure Virtual Network in each of the regions. However, software components in the different regions often need to communicate with each other. Ideally, this communication should not be routed from one Azure region to on-premises and from there to the other Azure region. As a shortcut, Azure offers the possibility to configure a connection from one Azure Virtual Network in one region to another Azure Virtual Network hosted in another region. This functionality is called a VNet-to-VNet connection. More details on this functionality can be found here:
-[Configure a VNet-to-VNet VPN gateway connection by using the Azure portal](/azure/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal).
+[Configure a VNet-to-VNet VPN gateway connection by using the Azure portal](../../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
#### Private Connection to Azure ExpressRoute
Find more details on Azure ExpressRoute and offerings here:
* [ExpressRoute documentation](https://azure.microsoft.com/documentation/services/expressroute/) * [Azure ExpressRoute pricing](https://azure.microsoft.com/pricing/details/expressroute/)
-* [ExpressRoute FAQ](/azure/expressroute/expressroute-faqs)
+* [ExpressRoute FAQ](../../../expressroute/expressroute-faqs.md)
ExpressRoute enables connecting multiple Azure subscriptions through one ExpressRoute circuit, as documented here:
-* [Tutorial: Connect a virtual network to an ExpressRoute circuit](/azure/expressroute/expressroute-howto-linkvnet-arm)
-* [Quickstart: Create and modify an ExpressRoute circuit using Azure PowerShell](/azure/expressroute/expressroute-howto-circuit-arm)
+* [Tutorial: Connect a virtual network to an ExpressRoute circuit](../../../expressroute/expressroute-howto-linkvnet-arm.md)
+* [Quickstart: Create and modify an ExpressRoute circuit using Azure PowerShell](../../../expressroute/expressroute-howto-circuit-arm.md)
#### Forced tunneling in case of cross-premises For VMs joining on-premises domains through site-to-site, point-to-site, or ExpressRoute, you need to make sure that the Internet proxy settings are deployed for all the users in those VMs as well. By default, software running in those VMs or users using a browser to access the internet would not go through the company proxy, but would connect straight through Azure to the internet. Even the proxy setting is not a 100% solution to direct the traffic through the company proxy, since it is the responsibility of software and services to check for the proxy. If software running in the VM is not doing that, or an administrator manipulates the settings, traffic to the Internet can be detoured again directly through Azure to the Internet. To avoid such direct internet connectivity, you can configure forced tunneling with site-to-site connectivity between on-premises and Azure. The detailed description of the Forced Tunneling feature is published here:
-[Configure forced tunneling using the classic deployment model](/azure/vpn-gateway/vpn-gateway-about-forced-tunneling)
+[Configure forced tunneling using the classic deployment model](../../../vpn-gateway/vpn-gateway-about-forced-tunneling.md)
Forced Tunneling with ExpressRoute is enabled by customers advertising a default route via the ExpressRoute BGP peering sessions.
High Availability and Disaster recovery functionality for DBMS in general as wel
Here are two examples of a complete SAP NetWeaver HA architecture in Azure - one for Windows and one for Linux.
-Unmanaged disks only: The concepts as explained below may need to be compromised a bit when you deploy many SAP systems and the number of VMs deployed are exceeding the maximum limit of Storage Accounts per subscription. In such cases, VHDs of VMs need to be combined within one Storage Account. Usually you would do so by combining VHDs of SAP application layer VMs of different SAP systems. We also combined different VHDs of different DBMS VMs of different SAP systems in one Azure Storage Account. Thereby keeping the IOPS limits of Azure Storage Accounts in mind [Scalability and performance targets for standard storage accounts](/azure/storage/common/scalability-targets-standard-account)
+Unmanaged disks only: The concepts explained below may need to be compromised a bit when you deploy many SAP systems and the number of VMs deployed exceeds the maximum limit of Storage Accounts per subscription. In such cases, the VHDs of VMs need to be combined within one Storage Account. Usually you would do so by combining VHDs of SAP application layer VMs of different SAP systems. We also combined different VHDs of different DBMS VMs of different SAP systems in one Azure Storage Account, keeping the IOPS limits of Azure Storage Accounts in mind. See [Scalability and performance targets for standard storage accounts](../../../storage/common/scalability-targets-standard-account.md).
##### ![Windows logo.][Logo_Windows] HA on Windows
Read the articles:
- [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md) - [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_general.md)-- [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md)
+- [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md)
virtual-machines Sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-rise-integration.md
With the information about available interfaces to the SAP RISE/ECS landscape, s
Integrating your SAP system with Azure cloud native services such as Azure Data Factory or Azure Synapse would use these communication channels to the SAP RISE/ECS managed environment.
-The following high-level architecture shows possible integration scenario with Azure data services such as [Data Factory](/azure/data-factory) or [Synapse Analytics](/azure/synapse-analytics). For these Azure services either a self-hosted integration runtime (self-hosted IR or IR) or Azure integration runtime (Azure IR) can be used. The use of either integration runtime depends on the [chosen data connector](/azure/data-factory/copy-activity-overview#supported-data-stores-and-formats), most SAP connectors are only available for the self-hosted IR. [SAP ECC connector](/azure/data-factory/connector-sap-ecc?tabs=data-factory) is capable of being using through both Azure IR and self-hosted IR. The choice of IR governs the network path taken. SAP .NET connector is used for [SAP table connector](/azure/data-factory/connector-sap-ecc?tabs=data-factory), [SAP BW](/azure/data-factory/connector-sap-business-warehouse?tabs=data-factory) and [SAP OpenHub](/azure/data-factory/connector-sap-business-warehouse-open-hub) connectors alike. All these connectors use SAP function modules (FM) on the SAP system, executed through RFC connections. Last if direct database access has been agreed with SAP, along with users and connection path opened, ODBC/JDBC connector for [SAP HANA](/azure/data-factory/connector-sap-hana?tabs=data-factory) can be used from the self-hosted IR as well.
+The following high-level architecture shows a possible integration scenario with Azure data services such as [Data Factory](../../../data-factory/index.yml) or [Synapse Analytics](../../../synapse-analytics/index.yml). For these Azure services, either a self-hosted integration runtime (self-hosted IR or IR) or Azure integration runtime (Azure IR) can be used. The use of either integration runtime depends on the [chosen data connector](../../../data-factory/copy-activity-overview.md#supported-data-stores-and-formats); most SAP connectors are only available for the self-hosted IR. The [SAP ECC connector](../../../data-factory/connector-sap-ecc.md?tabs=data-factory) can be used through both the Azure IR and the self-hosted IR. The choice of IR governs the network path taken. The SAP .NET connector is used for the [SAP table connector](../../../data-factory/connector-sap-ecc.md?tabs=data-factory), [SAP BW](../../../data-factory/connector-sap-business-warehouse.md?tabs=data-factory) and [SAP OpenHub](../../../data-factory/connector-sap-business-warehouse-open-hub.md) connectors alike. All these connectors use SAP function modules (FM) on the SAP system, executed through RFC connections. Last, if direct database access has been agreed with SAP, along with users and a connection path opened, the ODBC/JDBC connector for [SAP HANA](../../../data-factory/connector-sap-hana.md?tabs=data-factory) can be used from the self-hosted IR as well.
[![SAP RISE/ECS accessed by Azure ADF or Synapse.](./media/sap-rise-integration/sap-rise-adf-synapse.png)](./media/sap-rise-integration/sap-rise-adf-synapse.png#lightbox)
The customer is responsible for deployment and operation of the self-hosted inte
To learn the overall support on SAP data integration scenario, see [SAP data integration using Azure Data Factory whitepaper](https://github.com/Azure/Azure-DataFactory/blob/master/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf) with detailed introduction on each SAP connector, comparison and guidance. ## On-premise data gateway
-Further Azure Services such as [Logic Apps](/azure/logic-apps/logic-apps-using-sap-connector), [Power Apps](/connectors/saperp/) or [Power BI](/power-bi/connect-data/desktop-sap-bw-connector) communicate and exchange data with SAP systems through an on-premise data gateway. The on-premise data gateway is a virtual machine, running in Azure or on-premise. It provides secure data transfer between these Azure Services and your SAP systems.
+Further Azure Services such as [Logic Apps](../../../logic-apps/logic-apps-using-sap-connector.md), [Power Apps](/connectors/saperp/) or [Power BI](/power-bi/connect-data/desktop-sap-bw-connector) communicate and exchange data with SAP systems through an on-premise data gateway. The on-premise data gateway is a virtual machine, running in Azure or on-premise. It provides secure data transfer between these Azure Services and your SAP systems.
With SAP RISE, the on-premise data gateway can connect to Azure Services running in the customer's Azure subscription. The VM running the data gateway is deployed and operated by the customer. With the high-level architecture below as an overview, a similar method can be used for either service. [![SAP RISE/ECS accessed from Azure on-premise data gateway and connected Azure services.](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png)](./media/sap-rise-integration/sap-rise-on-premises-data-gateway.png#lightbox)
-The SAP RISE environment here provides access to the SAP ports for RFC and https described earlier. The communication ports are accessed by the private network address through the vnet peering or VPN site-to-site connection. The on-premise data gateway VM running in customerΓÇÖs Azure subscription uses the [SAP .NET connector](https://support.sap.com/en/product/connectors/msnet.html) to run RFC, BAPI or IDoc calls through the RFC connection. Additionally, depending on service and way the communication is setup, a way to connect to public IP of the SAP systems REST API through https might be required. The https connection to a public IP can be exposed through SAP RISE/ECS managed application gateway. This high level architecture shows the possible integration scenario. Alternatives to it such as using Logic Apps single tenant and [private endpoints](/azure/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint) to secure the communication and other can be seen as extension and are not described here in.
+The SAP RISE environment here provides access to the SAP ports for RFC and https described earlier. The communication ports are accessed by the private network address through the vnet peering or VPN site-to-site connection. The on-premise data gateway VM running in the customer's Azure subscription uses the [SAP .NET connector](https://support.sap.com/en/product/connectors/msnet.html) to run RFC, BAPI or IDoc calls through the RFC connection. Additionally, depending on the service and the way the communication is set up, a way to connect to the public IP of the SAP system's REST API through https might be required. The https connection to a public IP can be exposed through the SAP RISE/ECS managed application gateway. This high-level architecture shows the possible integration scenario. Alternatives, such as using Logic Apps single tenant and [private endpoints](../../../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md) to secure the communication, can be seen as extensions and are not described here.
SAP RISE/ECS exposes the communication ports for these applications to use but has no knowledge about any details of the connected application or service running in a customer's subscription.
SAP RISE/ECS exposes the communication ports for these applications to use but h
## Azure Monitoring for SAP with SAP RISE
-[Azure Monitoring for SAP](/azure/virtual-machines/workloads/sap/monitor-sap-on-azure) is an Azure-native solution for monitoring your SAP system. It extends the Azure monitor platform monitoring capability with support to gather data about SAP NetWeaver, database, and operating system details.
+[Azure Monitoring for SAP](./monitor-sap-on-azure.md) is an Azure-native solution for monitoring your SAP system. It extends the Azure monitor platform monitoring capability with support to gather data about SAP NetWeaver, database, and operating system details.
> [!Note] > SAP RISE/ECS is a fully managed service for your SAP landscape and thus Azure Monitoring for SAP is not intended to be utilized for such managed environment.
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual Network NAT is a software defined networking service. A NAT gateway won'
* Public IP prefixes
- * Public IP addresses and prefixes derived from custom IP prefixes (BYOIP), to learn more, see [Custom IP address prefix (BYOIP)](/azure/virtual-network/ip-services/custom-ip-address-prefix)
+ * Public IP addresses and prefixes derived from custom IP prefixes (BYOIP). To learn more, see [Custom IP address prefix (BYOIP)](../ip-services/custom-ip-address-prefix.md)
* Virtual Network NAT is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. The NAT gateway will groom all traffic to the range of IP addresses of the prefix.
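As a rough sketch of pairing a NAT gateway with a public IP prefix in Azure PowerShell (resource names, region, and the exact parameter set, which can vary by Az module version, are assumptions):

```azurepowershell-interactive
# Create a standard public IP prefix (names and region are placeholders)
$prefix = New-AzPublicIpPrefix -Name "myNatPrefix" -ResourceGroupName "myResourceGroup" `
    -Location "eastus2" -Sku Standard -PrefixLength 31

# Create the NAT gateway and associate the prefix with it
New-AzNatGateway -Name "myNatGateway" -ResourceGroupName "myResourceGroup" `
    -Location "eastus2" -Sku Standard -IdleTimeoutInMinutes 4 -PublicIpPrefix $prefix
```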
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
* Learn about the [NAT gateway resource](./nat-gateway-resource.md).
-* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
+* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
virtual-network Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/resource-health.md
This article provides guidance on how to use Azure Resource Health to monitor an
## Resource health status
-[Azure Resource Health](/azure/service-health/overview) provides information about the health of your NAT gateway resource. You can use resource health and Azure monitor notifications to keep you informed on the availability and health status of your NAT gateway resource. Resource health can help you quickly assess whether an issue is due to a problem in your Azure infrastructure or because of an Azure platform event. The resource health of your NAT gateway is evaluated by measuring the data-path availability of your NAT gateway endpoint.
+[Azure Resource Health](../../service-health/overview.md) provides information about the health of your NAT gateway resource. You can use resource health and Azure monitor notifications to keep you informed on the availability and health status of your NAT gateway resource. Resource health can help you quickly assess whether an issue is due to a problem in your Azure infrastructure or because of an Azure platform event. The resource health of your NAT gateway is evaluated by measuring the data-path availability of your NAT gateway endpoint.
You can view your NAT gateway's health status on the **Resource Health** page, found under **Support + troubleshooting** for your NAT gateway resource.
The health of your NAT gateway resource is displayed as one of the following sta
| Unavailable | Your NAT gateway resource is not healthy. The metric for the data-path availability has reported less than 25% for the past 15 minutes. You may experience unavailability of your NAT gateway resource for outbound connectivity. | | Unknown | Health status for your NAT gateway resource hasn't been updated or hasn't received information for data-path availability for more than 5 minutes. This state should be transient and will reflect the correct status as soon as data is received. |
-For more information about Azure Resource Health, see [Resource Health overview](/azure/service-health/resource-health-overview).
+For more information about Azure Resource Health, see [Resource Health overview](../../service-health/resource-health-overview.md).
To view the health of your NAT gateway resource:
To view the health of your NAT gateway resource:
## Next steps -- Learn about [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview)-- Learn about [metrics and alerts for NAT gateway](/azure/virtual-network/nat-gateway/nat-metrics)-- Learn about [troubleshooting NAT gateway resources](/azure/virtual-network/nat-gateway/troubleshoot-nat)-- Learn about [Azure resource health](/azure/service-health/resource-health-overview)
+- Learn about [Virtual Network NAT](./nat-overview.md)
+- Learn about [metrics and alerts for NAT gateway](./nat-metrics.md)
+- Learn about [troubleshooting NAT gateway resources](./troubleshoot-nat.md)
+- Learn about [Azure resource health](../../service-health/resource-health-overview.md)
virtual-wan Monitor Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md
Title: 'Monitoring Azure Virtual WAN' description: Learn about Azure Virtual WAN logs and metrics using Azure Monitor.- - Previously updated : 06/30/2021 Last updated : 05/25/2022 # Monitoring Virtual WAN
-You can monitor Azure Virtual WAN using Azure Monitor. Virtual WAN is a networking service that brings together many networking, security, and routing functionalities to provide a single operational interface. Virtual WAN VPN gateways, ExpressRoute gateways, and Azure Firewall have logging and metrics available through Azure Monitor.
+You can monitor Azure Virtual WAN using Azure Monitor. Virtual WAN is a networking service that brings together many networking, security, and routing functionalities to provide a single operational interface. Virtual WAN VPN gateways, ExpressRoute gateways, and Azure Firewall have logging and metrics available through Azure Monitor.
This article discusses metrics and diagnostics that are available through the portal. Metrics are lightweight and can support near real-time scenarios, making them useful for alerting and fast issue detection.
Diagnostics and logging configuration must be done from there accessing the **Di
:::image type="content" source="./media/monitor-virtual-wan/firewall-diagnostic-settings.png" alt-text="Screenshot shows Firewall diagnostic settings."::: - ## Metrics Metrics in Azure Monitor are numerical values that describe some aspect of a system at a particular time. Metrics are collected every minute, and are useful for alerting because they can be sampled frequently. An alert can be fired quickly with relatively simple logic.
$MetricInformation.Data
* Minimum – Minimum bytes that were sent during the selected time grain period. * Maximum – Maximum bytes that were sent during the selected time grain period. * Total – Total bytes/sec that were sent during the selected time grain period.
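A hedged sketch of how such a metric object might be retrieved with Azure PowerShell follows; the gateway resource ID and the metric name are placeholders, not the article's exact values.

```azurepowershell-interactive
# Pull 24 hours of a gateway metric at a 5-minute grain (resource ID and metric name are placeholders)
$MetricInformation = Get-AzMetric `
    -ResourceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/vpnGateways/<gateway-name>" `
    -MetricName "TunnelAverageBandwidth" -TimeGrain "00:05:00" `
    -StartTime (Get-Date).AddHours(-24) -EndTime (Get-Date) -AggregationType Average

# Inspect the returned data points
$MetricInformation.Data
```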
-
+ ### Site-to-site VPN gateways The following metrics are available for Azure site-to-site VPN gateways:
The following metrics are available for Azure ExpressRoute gateways:
| Metric | Description| | | |
-| **BitsInPerSecond** | Bits per second ingressing Azure through the ExpressRoute Gateway. |
-| **BitsOutPerSecond** | Bits per second egressing Azure through the ExpressRoute Gateway |
-| **CPU Utilization** | CPU Utilization of the ExpressRoute Gateway.|
-| **Packets per second** | Total Packets received on ExpressRoute Gateway per second.|
-| **Count of routes advertised to peer**| Count of Routes Advertised to Peer by ExpressRoute Gateway. |
-| **Count of routes learned from peer**| Count of Routes Learned from Peer by ExpressRoute Gateway.|
-| **Frequency of routes changed** | Frequency of Route changes in ExpressRoute Gateway.|
-| **Number of VMs in Virtual Network**| Number of VM's that use this ExpressRoute Gateway.|
+| **BitsInPerSecond** | Bits per second ingressing Azure via the ExpressRoute gateway, which can be further split for specific connections. |
+| **BitsOutPerSecond** | Bits per second egressing Azure via the ExpressRoute gateway, which can be further split for specific connections. |
+| **Bits Received Per Second** | Total bits received on the ExpressRoute gateway per second. |
+| **CPU Utilization** | CPU Utilization of the ExpressRoute gateway.|
+| **Packets per second** | Total Packets received on ExpressRoute gateway per second.|
+| **Count of routes advertised to peer**| Count of Routes Advertised to Peer by ExpressRoute gateway. |
+| **Count of routes learned from peer**| Count of Routes Learned from Peer by ExpressRoute gateway.|
+| **Frequency of routes changed** | Frequency of Route changes in ExpressRoute gateway.|
+| **Number of VMs in Virtual Network**| Number of VMs that use this ExpressRoute gateway.|
### <a name="metrics-steps"></a>View gateway metrics
In order to execute the query, you have to open the Log Analytics resource you c
:::image type="content" source="./media/monitor-virtual-wan/log-analytics-query-samples.png" alt-text="Log Analytics Query Samples.":::
-For additional Log Analytics query samples for Azure VPN Gateway, both Site-to-Site and Point-to-Site, you can visit the page [Troubleshoot Azure VPN Gateway using diagnostic logs](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md).
-For Azure Firewall, a [workbook](../firewall/firewall-workbook.md) is provided to make log analysis easier. Using its graphical interface, it will be possible to investigate into the diagnostic data without manually writing any Log Analytics query.
+For additional Log Analytics query samples for Azure VPN Gateway, both Site-to-Site and Point-to-Site, you can visit the page [Troubleshoot Azure VPN Gateway using diagnostic logs](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md).
+For Azure Firewall, a [workbook](../firewall/firewall-workbook.md) is provided to make log analysis easier. Using its graphical interface, you can investigate the diagnostic data without manually writing any Log Analytics query.
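For completeness, a hedged sketch of running such a query programmatically; the workspace ID is a placeholder, and the AzureDiagnostics category shown is an assumption about what your gateway emits.

```azurepowershell-interactive
# Query VPN gateway diagnostic logs from a Log Analytics workspace (workspace ID is a placeholder)
$query = 'AzureDiagnostics | where Category == "GatewayDiagnosticLog" | take 50'
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query |
    Select-Object -ExpandProperty Results
```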
## <a name="activity-logs"></a>Activity logs
virtual-wan Virtual Wan Expressroute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-portal.md
Title: 'Tutorial: Create ExpressRoute connections using Azure Virtual WAN' description: In this tutorial, learn how to use Azure Virtual WAN to create ExpressRoute connections to Azure and on-premises environments.- - Previously updated : 04/27/2021 Last updated : 05/25/2022 # Customer intent: As someone with a networking background, I want to connect my corporate on-premises network(s) to my VNets using Virtual WAN and ExpressRoute.
Verify that you have met the following criteria before beginning your configurat
## <a name="openvwan"></a>Create a virtual WAN
-From a browser, navigate to the [Azure portal](https://portal.azure.com) and sign in with your Azure account.
-
-1. Navigate to the Virtual WAN page. In the portal, click **+Create a resource**. Type **Virtual WAN** into the search box and select Enter.
-2. Select **Virtual WAN** from the results. On the Virtual WAN page, click **Create** to open the Create WAN page.
-3. On the **Create WAN** page, on the **Basics** tab, fill in the following fields:
-
- :::image type="content" source="./media/virtual-wan-expressroute-portal/createwan.png" alt-text="Screenshot shows Create WAN page." border="false":::
-
- * **Subscription** - Select the subscription that you want to use.
- * **Resource Group** - Create new or use existing.
- * **Resource group location** - Choose a resource location from the dropdown. A WAN is a global resource and does not live in a particular region. However, you must select a region in order to more easily manage and locate the WAN resource that you create.
- * **Name** - Type the name that you want to call your WAN.
- * **Type** - Select **Standard**. You can't create an ExpressRoute gateway using the Basic SKU.
-4. After you finish filling out the fields, select **Review +Create**.
-5. Once validation passes, select **Create** to create the virtual WAN.
## <a name="hub"></a>Create a virtual hub and gateway
vpn-gateway Point To Site About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-about.md
The validation of the client certificate is performed by the VPN gateway and hap
### Authenticate using native Azure Active Directory authentication
-Azure AD authentication allows users to connect to Azure using their Azure Active Directory credentials. Native Azure AD authentication is only supported for OpenVPN protocol and Windows 10 and 11 and also requires the use of the [Azure VPN Client](https://go.microsoft.com/fwlink/?linkid=2117554).
+Azure AD authentication allows users to connect to Azure using their Azure Active Directory credentials. Native Azure AD authentication is supported only for the OpenVPN protocol on Windows 10 and later, and also requires the use of the [Azure VPN Client](https://go.microsoft.com/fwlink/?linkid=2117554).
With native Azure AD authentication, you can leverage Azure AD's conditional access as well as Multi-Factor Authentication (MFA) features for VPN.
vpn-gateway Point To Site How To Radius Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-how-to-radius-ps.md
A P2S VPN connection is started from Windows and Mac devices. Connecting clients
* RADIUS server * VPN Gateway native certificate authentication
-* Native Azure Active Directory authentication (Windows 10 only)
+* Native Azure Active Directory authentication (Windows 10 and later only)
This article helps you configure a P2S configuration with authentication using RADIUS server. If you want to authenticate using generated certificates and VPN gateway native certificate authentication instead, see [Configure a Point-to-Site connection to a VNet using VPN gateway native certificate authentication](vpn-gateway-howto-point-to-site-rm-ps.md) or [Create an Azure Active Directory tenant for P2S OpenVPN protocol connections](openvpn-azure-ad-tenant.md) for Azure Active Directory authentication.
vpn-gateway Site To Site Vpn Private Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/site-to-site-vpn-private-peering.md
Previously updated : 04/28/2021 Last updated : 05/26/2022
You can configure a Site-to-Site VPN to a virtual network gateway over an Expres
* It is possible to deploy Site-to-Site VPN connections over ExpressRoute private peering at the same time as Site-to-Site VPN connections via the Internet on the same VPN gateway. >[!NOTE]
->This feature is only supported on zone-redundant gateways. For example, VpnGw1AZ, VpnGw2AZ, etc.
+>This feature is supported on gateways with a Standard Public IP only.
> To complete this configuration, verify that you meet the following prerequisites:
In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN c
## <a name="portal"></a>Portal steps
-1. Configure a Site-to-Site connection. For steps, see the [Site-to-site configuration](./tutorial-site-to-site-portal.md) article. Be sure to pick a zone-redundant gateway SKU for the gateway.
-
- Zone-redundant SKUs have ΓÇ£AZΓÇ¥ at the end of the SKU. For example, **VpnGw1AZ**. Zone-redundant gateways are only available in regions where the availability zone service is available. For information about the regions in which we support availability zones, see [Regions that support availability zones](../availability-zones/az-region.md).
+1. Configure a Site-to-Site connection. For steps, see the [Site-to-site configuration](./tutorial-site-to-site-portal.md) article. Be sure to pick a gateway with a Standard Public IP.
:::image type="content" source="media/site-to-site-vpn-private-peering/gateway.png" alt-text="Gateway Private IPs"::: 1. Enable Private IPs on the gateway. Select **Configuration**, then set **Gateway Private IPs** to **Enabled**. Select **Save** to save your changes.
In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN c
## <a name="powershell"></a>PowerShell steps
-1. Configure a Site-to-Site connection. For steps, see the [Configure a Site-to-Site VPN](./tutorial-site-to-site-portal.md) article. Be sure to pick a zone-redundant gateway SKU for the gateway. Zone-redundant SKUs have ΓÇ£AZΓÇ¥ at the end of the SKU. For example, VpnGw1AZ.
+1. Configure a Site-to-Site connection. For steps, see the [Configure a Site-to-Site VPN](./tutorial-site-to-site-portal.md) article. Be sure to pick a gateway with a Standard Public IP.
1. Set the flag to use the private IP on the gateway using the following PowerShell commands: ```azurepowershell-interactive
vpn-gateway Vpn Gateway Certificates Point To Site Makecert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-certificates-point-to-site-makecert.md
Point-to-Site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using MakeCert. If you are looking for different certificate instructions, see [Certificates - PowerShell](vpn-gateway-certificates-point-to-site.md) or [Certificates - Linux](vpn-gateway-certificates-point-to-site-linux.md).
-While we recommend using the [Windows 10 PowerShell steps](vpn-gateway-certificates-point-to-site.md) to create your certificates, we provide these MakeCert instructions as an optional method. The certificates that you generate using either method can be installed on [any supported client operating system](vpn-gateway-howto-point-to-site-resource-manager-portal.md#faq). However, MakeCert has the following limitation:
+While we recommend using the [Windows 10 or later PowerShell steps](vpn-gateway-certificates-point-to-site.md) to create your certificates, we provide these MakeCert instructions as an optional method. The certificates that you generate using either method can be installed on [any supported client operating system](vpn-gateway-howto-point-to-site-resource-manager-portal.md#faq). However, MakeCert has the following limitation:
* MakeCert is deprecated. This means that this tool could be removed at any point. Any certificates that you already generated using MakeCert won't be affected when MakeCert is no longer available. MakeCert is only used to generate the certificates, not as a validating mechanism.
vpn-gateway Vpn Gateway Certificates Point To Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-certificates-point-to-site.md
# Generate and export certificates for Point-to-Site using PowerShell
-Point-to-Site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using PowerShell on Windows 10 or Windows Server 2016. If you are looking for different certificate instructions, see [Certificates - Linux](vpn-gateway-certificates-point-to-site-linux.md) or [Certificates - MakeCert](vpn-gateway-certificates-point-to-site-makecert.md).
+Point-to-Site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using PowerShell on Windows 10 or later, or Windows Server 2016. If you are looking for different certificate instructions, see [Certificates - Linux](vpn-gateway-certificates-point-to-site-linux.md) or [Certificates - MakeCert](vpn-gateway-certificates-point-to-site-makecert.md).
-The steps in this article apply to Windows 10 or Windows Server 2016. The PowerShell cmdlets that you use to generate certificates are part of the operating system and do not work on other versions of Windows. The Windows 10 or Windows Server 2016 computer is only needed to generate the certificates. Once the certificates are generated, you can upload them, or install them on any supported client operating system.
+The steps in this article apply to Windows 10 or later, or Windows Server 2016. The PowerShell cmdlets that you use to generate certificates are part of the operating system and do not work on other versions of Windows. The Windows 10 or later, or Windows Server 2016 computer is only needed to generate the certificates. Once the certificates are generated, you can upload them, or install them on any supported client operating system.
-If you do not have access to a Windows 10 or Windows Server 2016 computer, you can use [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md) to generate certificates. The certificates that you generate using either method can be installed on any [supported](vpn-gateway-howto-point-to-site-resource-manager-portal.md#faq) client operating system.
+If you do not have access to a Windows 10 or later, or Windows Server 2016 computer, you can use [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md) to generate certificates. The certificates that you generate using either method can be installed on any [supported](vpn-gateway-howto-point-to-site-resource-manager-portal.md#faq) client operating system.
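For orientation, a minimal sketch of the kind of self-signed root and client certificates these steps produce, using the built-in New-SelfSignedCertificate cmdlet; the subject names are placeholders.

```azurepowershell-interactive
# Self-signed root certificate (subject name is a placeholder)
$rootCert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
    -Subject "CN=P2SRootCert" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign

# Client certificate chained to the root, with the client authentication EKU
New-SelfSignedCertificate -Type Custom -DnsName "P2SChildCert" -KeySpec Signature `
    -Subject "CN=P2SChildCert" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -Signer $rootCert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")
```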
[!INCLUDE [generate and export certificates](../../includes/vpn-gateway-generate-export-certificates-include.md)]
vpn-gateway Vpn Gateway Howto Always On Device Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-always-on-device-tunnel.md
Title: 'Configure an Always-On VPN tunnel'
-description: Learn how to use gateways with Windows 10 Always On to establish and configure persistent device tunnels to Azure.
+description: Learn how to use gateways with Windows 10 or later Always On to establish and configure persistent device tunnels to Azure.
vpn-gateway Vpn Gateway Howto Point To Site Classic Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md
If you already have a VNet, verify that the settings are compatible with your VP
Azure uses certificates to authenticate VPN clients for Point-to-Site VPNs. You upload the public key information of the root certificate to Azure. The public key is then considered *trusted*. Client certificates must be generated from the trusted root certificate, and then installed on each client computer in the Certificates-Current User\Personal\Certificates certificate store. The certificate is used to authenticate the client when it connects to the VNet.
-If you use self-signed certificates, they must be created by using specific parameters. You can create a self-signed certificate by using the instructions for [PowerShell and Windows 10](vpn-gateway-certificates-point-to-site.md), or [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md). It's important to follow the steps in these instructions when you use self-signed root certificates and generate client certificates from the self-signed root certificate. Otherwise, the certificates you create won't be compatible with P2S connections and you'll receive a connection error.
+If you use self-signed certificates, they must be created by using specific parameters. You can create a self-signed certificate by using the instructions for [PowerShell and Windows 10 or later](vpn-gateway-certificates-point-to-site.md), or [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md). It's important to follow the steps in these instructions when you use self-signed root certificates and generate client certificates from the self-signed root certificate. Otherwise, the certificates you create won't be compatible with P2S connections and you'll receive a connection error.
### Acquire the public key (.cer) for the root certificate
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
Title: 'Connect to a VNet using P2S VPN & certificate authentication: portal'
-description: Learn how to connect Windows, macOS, and Linux clients securely to a VNet using VPN Gateway Point-to-Site connections and self-signed or CA issued certificates.
-
+description: Learn how to connect Windows, macOS, and Linux clients securely to a VNet using VPN Gateway point-to-site connections and self-signed or CA issued certificates.
- Previously updated : 04/20/2022 Last updated : 05/26/2022
-# Configure a Point-to-Site VPN connection using Azure certificate authentication: Azure portal
+# Configure a point-to-site VPN connection using Azure certificate authentication: Azure portal
-This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location, such when you're telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet. Point-to-Site connections don't require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), or IKEv2. For more information about Point-to-Site VPN, see [About Point-to-Site VPN](point-to-site-about.md).
+This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. Point-to-site VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a site-to-site VPN when you have only a few clients that need to connect to a VNet. Point-to-site connections don't require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol) or IKEv2. For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md).
:::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/point-to-site-diagram.png" alt-text="Connect from a computer to an Azure VNet - point-to-site connection diagram.":::
You can use the following values to create a test environment, or refer to these
**Connection type and client address pool** * **Connection type:** Point-to-site
-* **Client address pool:** 172.16.201.0/24<br>VPN clients that connect to the VNet using this Point-to-Site connection receive an IP address from the client address pool.
+* **Client address pool:** 172.16.201.0/24<br>VPN clients that connect to the VNet using this point-to-site connection receive an IP address from the client address pool.
## <a name="createvnet"></a>Create a VNet
In this section, you create a virtual network.
[!INCLUDE [About cross-premises addresses](../../includes/vpn-gateway-cross-premises.md)] ## <a name="creategw"></a>Create the VPN gateway
You can see the deployment status on the Overview page for your gateway. After t
## <a name="generatecert"></a>Generate certificates
-Certificates are used by Azure to authenticate clients connecting to a VNet over a Point-to-Site VPN connection. Once you obtain a root certificate, you [upload](#uploadfile) the public key information to Azure. The root certificate is then considered 'trusted' by Azure for connection over P2S to the virtual network. You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.
+Certificates are used by Azure to authenticate clients connecting to a VNet over a point-to-site VPN connection. Once you obtain a root certificate, you [upload](#uploadfile) the public key information to Azure. The root certificate is then considered 'trusted' by Azure for connection over P2S to the virtual network. You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.
### <a name="getcer"></a>Generate a root certificate
Certificates are used by Azure to authenticate clients connecting to a VNet over
## <a name="addresspool"></a>Add the VPN client address pool
-The client address pool is a range of private IP addresses that you specify. The clients that connect over a Point-to-Site VPN dynamically receive an IP address from this range. Use a private IP address range that doesn't overlap with the on-premises location that you connect from, or the VNet that you want to connect to. If you configure multiple protocols and SSTP is one of the protocols, then the configured address pool is split between the configured protocols equally.
+The client address pool is a range of private IP addresses that you specify. The clients that connect over a point-to-site VPN dynamically receive an IP address from this range. Use a private IP address range that doesn't overlap with the on-premises location that you connect from, or the VNet that you want to connect to. If you configure multiple protocols and SSTP is one of the protocols, then the configured address pool is split between the configured protocols equally.
1. Once the virtual network gateway has been created, navigate to the **Settings** section of the virtual network gateway page. In **Settings**, select **Point-to-site configuration**. Select **Configure now** to open the configuration page.
In this section, you upload public root certificate data to Azure. Once the publ
1. Navigate to your **Virtual network gateway -> Point-to-site configuration** page in the **Root certificate** section. This section is only visible if you have selected **Azure certificate** for the authentication type. 1. Make sure that you exported the root certificate as a **Base-64 encoded X.509 (.CER)** file in the previous steps. You need to export the certificate in this format so you can open the certificate with text editor. You don't need to export the private key.
- :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/export-base-64.png" alt-text="Screenshot showing export as Base-64 encoded X.509." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/export-base-64.png" :::
+ :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/export-base-64.png" alt-text="Screenshot showing export as Base-64 encoded X.509." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/export-base-64-expand.png" :::
1. Open the certificate with a text editor, such as Notepad. When copying the certificate data, make sure that you copy the text as one continuous line without carriage returns or line feeds. You may need to modify your view in the text editor to 'Show Symbol/Show all characters' to see the carriage returns and line feeds. Copy only the following section as one continuous line:
- :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/notepad-root-cert.png" alt-text="Screenshot showing root certificate information in Notepad." border="false" lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/notepad-root-cert.png":::
+ :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/notepad-root-cert.png" alt-text="Screenshot showing root certificate information in Notepad." border="false" lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/notepad-root-cert-expand.png":::
1. In the **Root certificate** section, you can add up to 20 trusted root certificates. * Paste the certificate data into the **Public certificate data** field.
If you're having trouble connecting, verify that the virtual network gateway isn
These instructions apply to Windows clients. 1. To verify that your VPN connection is active, open an elevated command prompt, and run *ipconfig/all*.
-2. View the results. Notice that the IP address you received is one of the addresses within the Point-to-Site VPN Client Address Pool that you specified in your configuration. The results are similar to this example:
+2. View the results. Notice that the IP address you received is one of the addresses within the point-to-site VPN Client Address Pool that you specified in your configuration. The results are similar to this example:
``` PPP adapter VNet1:
To remove a trusted root certificate:
## <a name="revokeclient"></a>To revoke a client certificate
-You can revoke client certificates. The certificate revocation list allows you to selectively deny Point-to-Site connectivity based on individual client certificates. This is different than removing a trusted root certificate. If you remove a trusted root certificate .cer from Azure, it revokes the access for all client certificates generated/signed by the revoked root certificate. Revoking a client certificate, rather than the root certificate, allows the other certificates that were generated from the root certificate to continue to be used for authentication.
+You can revoke client certificates. The certificate revocation list allows you to selectively deny point-to-site connectivity based on individual client certificates. This is different than removing a trusted root certificate. If you remove a trusted root certificate .cer from Azure, it revokes the access for all client certificates generated/signed by the revoked root certificate. Revoking a client certificate, rather than the root certificate, allows the other certificates that were generated from the root certificate to continue to be used for authentication.
The common practice is to use the root certificate to manage access at team or organization levels, while using revoked client certificates for fine-grained access control on individual users.
You can revoke a client certificate by adding the thumbprint to the revocation l
1. The thumbprint validates and is automatically added to the revocation list. A message appears on the screen that the list is updating. 1. After updating has completed, the certificate can no longer be used to connect. Clients that try to connect using this certificate receive a message saying that the certificate is no longer valid.
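For reference, the same revocation can be sketched with Azure PowerShell by adding the thumbprint to the gateway's revoked-certificate list; the gateway, resource group, and thumbprint values below are placeholders.

```azurepowershell-interactive
# Revoke a client certificate by thumbprint (all values are placeholders)
Add-AzVpnClientRevokedCertificate -VpnClientRevokedCertificateName "RevokedClientCert1" `
    -VirtualNetworkGatewayName "VNet1GW" -ResourceGroupName "TestRG1" `
    -Thumbprint "<client-certificate-thumbprint>"
```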
-## <a name="faq"></a>Point-to-Site FAQ
+## <a name="faq"></a>Point-to-site FAQ
For frequently asked questions, see the [FAQ](vpn-gateway-vpn-faq.md#P2S).
vpn-gateway Vpn Gateway Howto Point To Site Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $Gateway -VpnClientAddressPoo
Certificates are used by Azure to authenticate VPN clients for point-to-site VPNs. You upload the public key information of the root certificate to Azure. The public key is then considered 'trusted'. Client certificates must be generated from the trusted root certificate, and then installed on each client computer in the Certificates-Current User/Personal certificate store. The certificate is used to authenticate the client when it initiates a connection to the VNet.
-If you use self-signed certificates, they must be created using specific parameters. You can create a self-signed certificate using the instructions for [PowerShell and Windows 10](vpn-gateway-certificates-point-to-site.md), or, if you don't have Windows 10, you can use [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md). It's important that you follow the steps in the instructions when generating self-signed root certificates and client certificates. Otherwise, the certificates you generate will not be compatible with P2S connections and you receive a connection error.
+If you use self-signed certificates, they must be created using specific parameters. You can create a self-signed certificate using the instructions for [PowerShell and Windows 10 or later](vpn-gateway-certificates-point-to-site.md), or, if you don't have Windows 10 or later, you can use [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md). It's important that you follow the steps in the instructions when generating self-signed root certificates and client certificates. Otherwise, the certificates you generate will not be compatible with P2S connections and you receive a connection error.
### <a name="cer"></a>Root certificate
If you use self-signed certificates, they must be created using specific paramet
## <a name="upload"></a>Upload root certificate public key information
-Verify that your VPN gateway has finished creating. Once it has completed, you can upload the .cer file (which contains the public key information) for a trusted root certificate to Azure. Once a.cer file is uploaded, Azure can use it to authenticate clients that have installed a client certificate generated from the trusted root certificate. You can upload additional trusted root certificate files - up to a total of 20 - later, if needed.
+Verify that your VPN gateway has finished creating. Once it has completed, you can upload the .cer file (which contains the public key information) for a trusted root certificate to Azure. Once a .cer file is uploaded, Azure can use it to authenticate clients that have installed a client certificate generated from the trusted root certificate. You can upload additional trusted root certificate files - up to a total of 20 - later, if needed.
>[!NOTE] > You can't upload the .cer file using Azure Cloud Shell. You can either use PowerShell locally on your computer, or you can use the [Azure portal steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md#uploadfile).
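A hedged sketch of that local PowerShell upload follows; the file path, certificate name, gateway, and resource group are hypothetical.

```azurepowershell-interactive
# Read the exported Base-64 .cer file and upload its public data to the gateway (names are placeholders)
$filePathForCert = "C:\certs\P2SRootCert.cer"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($filePathForCert)
$certBase64 = [System.Convert]::ToBase64String($cert.RawData)

Add-AzVpnClientRootCertificate -VpnClientRootCertificateName "P2SRootCert" `
    -VirtualNetworkGatewayName "VNet1GW" -ResourceGroupName "TestRG1" `
    -PublicCertData $certBase64
```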
For additional point-to-site information, see the [VPN Gateway point-to-site FAQ
Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual Machines](../index.yml). To understand more about networking and virtual machines, see [Azure and Linux VM network overview](../virtual-network/network-overview.md).
-For P2S troubleshooting information, [Troubleshooting: Azure point-to-site connection problems](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).
+For P2S troubleshooting information, see [Troubleshooting: Azure point-to-site connection problems](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).
vpn-gateway Vpn Gateway Troubleshoot Vpn Point To Site Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md
When you try and connect to an Azure virtual network gateway using IKEv2 on Wind
IKEv2 is supported on Windows 10 and Server 2016. However, in order to use IKEv2, you must install updates and set a registry key value locally. OS versions prior to Windows 10 are not supported and can only use SSTP.
-To prepare Windows 10 or Server 2016 for IKEv2:
+To prepare Windows 10 or later, or Server 2016 for IKEv2:
1. Install the update.
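As a sketch of the registry step (the specific key and value shown are an assumption based on common guidance for this scenario; confirm them against the article before applying, and run PowerShell elevated):

```powershell
# Assumed key/value for enabling IKEv2 certificate-based P2S on Windows 10/Server 2016
$path = "HKLM:\SYSTEM\CurrentControlSet\Services\RasMan\IKEv2"
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name "DisableCertReqPayload" -PropertyType DWord -Value 1 -Force | Out-Null
# Reboot the machine afterwards so the change takes effect
```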
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
Previously updated : 12/16/2021 Last updated : 05/25/2022 # VPN Gateway FAQ
### Can I connect virtual networks in different Azure regions?
-Yes. There is no region constraint. One virtual network can connect to another virtual network in the same region, or in a different Azure region.
+Yes. There's no region constraint. One virtual network can connect to another virtual network in the same region, or in a different Azure region.
### Can I connect virtual networks in different subscriptions?
No.
The following cross-premises virtual network gateway connections are supported:
-* **Site-to-Site:** VPN connection over IPsec (IKE v1 and IKE v2). This type of connection requires a VPN device or RRAS. For more information, see [Site-to-Site](./tutorial-site-to-site-portal.md).
-* **Point-to-Site:** VPN connection over SSTP (Secure Socket Tunneling Protocol) or IKE v2. This connection does not require a VPN device. For more information, see [Point-to-Site](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
-* **VNet-to-VNet:** This type of connection is the same as a Site-to-Site configuration. VNet to VNet is a VPN connection over IPsec (IKE v1 and IKE v2). It does not require a VPN device. For more information, see [VNet-to-VNet](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
-* **Multi-Site:** This is a variation of a Site-to-Site configuration that allows you to connect multiple on-premises sites to a virtual network. For more information, see [Multi-Site](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md).
+* **Site-to-site:** VPN connection over IPsec (IKE v1 and IKE v2). This type of connection requires a VPN device or RRAS. For more information, see [Site-to-site](./tutorial-site-to-site-portal.md).
+* **Point-to-site:** VPN connection over SSTP (Secure Socket Tunneling Protocol) or IKE v2. This connection doesn't require a VPN device. For more information, see [Point-to-site](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
+* **VNet-to-VNet:** This type of connection is the same as a site-to-site configuration. VNet to VNet is a VPN connection over IPsec (IKE v1 and IKE v2). It doesn't require a VPN device. For more information, see [VNet-to-VNet](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
+* **Multi-Site:** This is a variation of a site-to-site configuration that allows you to connect multiple on-premises sites to a virtual network. For more information, see [Multi-Site](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md).
* **ExpressRoute:** ExpressRoute is a private connection to Azure from your WAN, not a VPN connection over the public Internet. For more information, see the [ExpressRoute Technical Overview](../expressroute/expressroute-introduction.md) and the [ExpressRoute FAQ](../expressroute/expressroute-faqs.md). For more information about VPN Gateway connections, see [About VPN Gateway](vpn-gateway-about-vpngateways.md).
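As one hedged illustration of the VNet-to-VNet type listed above (the gateway objects, names, and shared key are illustrative, not taken from this article):

```powershell
# $gw1 and $gw2 are assumed to be the two virtual network gateway objects,
# for example retrieved with Get-AzVirtualNetworkGateway
New-AzVirtualNetworkGatewayConnection -Name "VNet1toVNet2" -ResourceGroupName "TestRG1" `
    -Location "eastus" -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 `
    -ConnectionType Vnet2Vnet -SharedKey "Abc123Abc123"
```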
-### What is the difference between a Site-to-Site connection and Point-to-Site?
+### What is the difference between a site-to-site connection and point-to-site?
-**Site-to-Site** (IPsec/IKE VPN tunnel) configurations are between your on-premises location and Azure. This means that you can connect from any of your computers located on your premises to any virtual machine or role instance within your virtual network, depending on how you choose to configure routing and permissions. It's a great option for an always-available cross-premises connection and is well suited for hybrid configurations. This type of connection relies on an IPsec VPN appliance (hardware device or soft appliance), which must be deployed at the edge of your network. To create this type of connection, you must have an externally facing IPv4 address.
+**Site-to-site** (IPsec/IKE VPN tunnel) configurations are between your on-premises location and Azure. This means that you can connect from any of your computers located on your premises to any virtual machine or role instance within your virtual network, depending on how you choose to configure routing and permissions. It's a great option for an always-available cross-premises connection and is well suited for hybrid configurations. This type of connection relies on an IPsec VPN appliance (hardware device or soft appliance), which must be deployed at the edge of your network. To create this type of connection, you must have an externally facing IPv4 address.
-**Point-to-Site** (VPN over SSTP) configurations let you connect from a single computer from anywhere to anything located in your virtual network. It uses the Windows in-box VPN client. As part of the Point-to-Site configuration, you install a certificate and a VPN client configuration package, which contains the settings that allow your computer to connect to any virtual machine or role instance within the virtual network. It's great when you want to connect to a virtual network, but aren't located on-premises. It's also a good option when you don't have access to VPN hardware or an externally facing IPv4 address, both of which are required for a Site-to-Site connection.
+**Point-to-site** (VPN over SSTP) configurations let you connect from a single computer from anywhere to anything located in your virtual network. It uses the Windows in-box VPN client. As part of the point-to-site configuration, you install a certificate and a VPN client configuration package, which contains the settings that allow your computer to connect to any virtual machine or role instance within the virtual network. It's great when you want to connect to a virtual network, but aren't located on-premises. It's also a good option when you don't have access to VPN hardware or an externally facing IPv4 address, both of which are required for a site-to-site connection.
-You can configure your virtual network to use both Site-to-Site and Point-to-Site concurrently, as long as you create your Site-to-Site connection using a route-based VPN type for your gateway. Route-based VPN types are called dynamic gateways in the classic deployment model.
+You can configure your virtual network to use both site-to-site and point-to-site concurrently, as long as you create your site-to-site connection using a route-based VPN type for your gateway. Route-based VPN types are called dynamic gateways in the classic deployment model.
## <a name="privacy"></a>Privacy
The custom configured traffic selectors will be proposed only when an Azure VPN
### Can I update my policy-based VPN gateway to route-based?
-No. A gateway type cannot be changed from policy-based to route-based, or from route-based to policy-based. To change a gateway type, the gateway must be deleted and recreated. This process takes about 60 minutes. When you create the new gateway, you cannot retain the IP address of the original gateway.
+No. A gateway type can't be changed from policy-based to route-based, or from route-based to policy-based. To change a gateway type, the gateway must be deleted and recreated. This process takes about 60 minutes. When you create the new gateway, you can't retain the IP address of the original gateway.
1. Delete any connections associated with the gateway.
No. A gateway type cannot be changed from policy-based to route-based, or from r
* [Azure portal](vpn-gateway-delete-vnet-gateway-portal.md) * [Azure PowerShell](vpn-gateway-delete-vnet-gateway-powershell.md) * [Azure PowerShell - classic](vpn-gateway-delete-vnet-gateway-classic-powershell.md)
-1. Create a new gateway using the gateway type that you want, and then complete the VPN setup. For steps, see the [Site-to-Site tutorial](./tutorial-site-to-site-portal.md#VNetGateway).
+1. Create a new gateway using the gateway type that you want, and then complete the VPN setup. For steps, see the [Site-to-site tutorial](./tutorial-site-to-site-portal.md#VNetGateway).
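A hedged Azure PowerShell sketch of those steps (connection, gateway, and resource names are illustrative, and `$gwIpConfig` is assumed to have been built from your gateway subnet and public IP):

```powershell
# 1. Delete any connections associated with the gateway
Remove-AzVirtualNetworkGatewayConnection -Name "VNet1toSite1" -ResourceGroupName "TestRG1" -Force

# 2. Delete the existing gateway
Remove-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" -Force

# 3. Recreate the gateway with the VPN type you want (route-based here); this can take 45 minutes or more
New-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" -Location "eastus" `
    -IpConfigurations $gwIpConfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw2
```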
### Do I need a 'GatewaySubnet'?
No.
### Can I get my VPN gateway IP address before I create it?
-Zone-redundant and zonal gateways (gateway SKUs that have _AZ_ in the name) both rely on a _Standard SKU_ Azure public IP resource. Azure Standard SKU public IP resources must use a static allocation method. Therefore, you will have the public IP address for your VPN gateway as soon as you create the Standard SKU public IP resource you intend to use for it.
+Zone-redundant and zonal gateways (gateway SKUs that have _AZ_ in the name) both rely on a _Standard SKU_ Azure public IP resource. Azure Standard SKU public IP resources must use a static allocation method. Therefore, you'll have the public IP address for your VPN gateway as soon as you create the Standard SKU public IP resource you intend to use for it.
-For non-zone-redundant and non-zonal gateways (gateway SKUs that do _not_ have _AZ_ in the name), you cannot obtain the VPN gateway IP address before it is created. The IP address changes only if you delete and re-create your VPN gateway.
+For non-zone-redundant and non-zonal gateways (gateway SKUs that do *not* have *AZ* in the name), you can't obtain the VPN gateway IP address before it's created. The IP address changes only if you delete and re-create your VPN gateway.
### Can I request a Static Public IP address for my VPN gateway?
-Zone-redundant and zonal gateways (gateway SKUs that have _AZ_ in the name) both rely on a _Standard SKU_ Azure public IP resource. Azure Standard SKU public IP resources must use a static allocation method.
+Zone-redundant and zonal gateways (gateway SKUs that have *AZ* in the name) both rely on a *Standard SKU* Azure public IP resource. Azure Standard SKU public IP resources must use a static allocation method.
-For non-zone-redundant and non-zonal gateways (gateway SKUs that do _not_ have _AZ_ in the name), only dynamic IP address assignment is supported. However, this doesn't mean that the IP address changes after it has been assigned to your VPN gateway. The only time the VPN gateway IP address changes is when the gateway is deleted and then re-created. The VPN gateway public IP address doesn't change when you resize, reset, or complete other internal maintenance and upgrades of your VPN gateway.
+For non-zone-redundant and non-zonal gateways (gateway SKUs that do *not* have *AZ* in the name), only dynamic IP address assignment is supported. However, this doesn't mean that the IP address changes after it has been assigned to your VPN gateway. The only time the VPN gateway IP address changes is when the gateway is deleted and then re-created. The VPN gateway public IP address doesn't change when you resize, reset, or complete other internal maintenance and upgrades of your VPN gateway.
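For the zone-redundant and zonal case described above, a minimal sketch (names and region are illustrative): creating the Standard SKU public IP first surfaces the address the gateway will use.

```powershell
# Standard SKU public IPs use static allocation, so the address is available immediately
$pip = New-AzPublicIpAddress -Name "VNet1GWpip" -ResourceGroupName "TestRG1" `
    -Location "eastus" -AllocationMethod Static -Sku Standard
$pip.IpAddress
```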
### How does my VPN tunnel get authenticated?
Yes, the Set Pre-Shared Key API and PowerShell cmdlet can be used to configure b
### Can I use other authentication options?
-We are limited to using pre-shared keys (PSK) for authentication.
+We're limited to using pre-shared keys (PSK) for authentication.
### How do I specify which traffic goes through the VPN gateway?
Yes, you can deploy your own VPN gateways or servers in Azure either from the Az
### <a name="gatewayports"></a>Why are certain ports opened on my virtual network gateway?
-They are required for Azure infrastructure communication. They are protected (locked down) by Azure certificates. Without proper certificates, external entities, including the customers of those gateways, will not be able to cause any effect on those endpoints.
+They're required for Azure infrastructure communication. They're protected (locked down) by Azure certificates. Without proper certificates, external entities, including the customers of those gateways, won't be able to affect those endpoints.
-A virtual network gateway is fundamentally a multi-homed device with one NIC tapping into the customer private network, and one NIC facing the public network. Azure infrastructure entities cannot tap into customer private networks for compliance reasons, so they need to utilize public endpoints for infrastructure communication. The public endpoints are periodically scanned by Azure security audit.
+A virtual network gateway is fundamentally a multi-homed device with one NIC tapping into the customer private network, and one NIC facing the public network. Azure infrastructure entities can't tap into customer private networks for compliance reasons, so they need to use public endpoints for infrastructure communication. The public endpoints are periodically scanned by Azure security audit.
### More information about gateway types, requirements, and throughput For more information, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
-## <a name="s2s"></a>Site-to-Site connections and VPN devices
+## <a name="s2s"></a>Site-to-site connections and VPN devices
### What should I consider when selecting a VPN device?
-We have validated a set of standard Site-to-Site VPN devices in partnership with device vendors. A list of known compatible VPN devices, their corresponding configuration instructions or samples, and device specs can be found in the [About VPN devices](vpn-gateway-about-vpn-devices.md) article. All devices in the device families listed as known compatible should work with Virtual Network. To help configure your VPN device, refer to the device configuration sample or link that corresponds to appropriate device family.
+We've validated a set of standard site-to-site VPN devices in partnership with device vendors. A list of known compatible VPN devices, their corresponding configuration instructions or samples, and device specs can be found in the [About VPN devices](vpn-gateway-about-vpn-devices.md) article. All devices in the device families listed as known compatible should work with Virtual Network. To help configure your VPN device, refer to the device configuration sample or link that corresponds to the appropriate device family.
### Where can I find VPN device configuration settings?
This is expected behavior for policy-based (also known as static routing) VPN ga
### Can I use software VPNs to connect to Azure?
-We support Windows Server 2012 Routing and Remote Access (RRAS) servers for Site-to-Site cross-premises configuration.
+We support Windows Server 2012 Routing and Remote Access (RRAS) servers for site-to-site cross-premises configuration.
Other software VPN solutions should work with our gateway as long as they conform to industry standard IPsec implementations. Contact the vendor of the software for configuration and support instructions.
-### Can I connect to a VPN gateway via Point-to-Site when located at a Site that has an active Site-to-Site connection?
+### Can I connect to a VPN gateway via point-to-site when located at a site that has an active site-to-site connection?
-Yes, but the Public IP address(es) of the Point-to-Site client need to be different than the Public IP address(es) used by the Site-to-Site VPN device, or else the Point-to-Site connection will not work. Point-to-Site connections with IKEv2 cannot be initiated from the same Public IP address(es) where a Site-to-Site VPN connection is configured on the same Azure VPN gateway.
+Yes, but the public IP address(es) of the point-to-site client must be different from the public IP address(es) used by the site-to-site VPN device, or else the point-to-site connection won't work. Point-to-site connections with IKEv2 can't be initiated from the same public IP address(es) where a site-to-site VPN connection is configured on the same Azure VPN gateway.
-## <a name="P2S"></a>Point-to-Site - Certificate authentication
+## <a name="P2S"></a>Point-to-site - Certificate authentication
This section applies to the Resource Manager deployment model. [!INCLUDE [P2S Azure cert](../../includes/vpn-gateway-faq-p2s-azurecert-include.md)]
-## <a name="P2SRADIUS"></a>Point-to-Site - RADIUS authentication
+## <a name="P2SRADIUS"></a>Point-to-site - RADIUS authentication
This section applies to the Resource Manager deployment model.
If you want to enable routing between your branch connected to ExpressRoute and
Yes. See the [BGP](#bgp) section for more information. **Classic deployment model**<br>
-Transit traffic via Azure VPN gateway is possible using the classic deployment model, but relies on statically defined address spaces in the network configuration file. BGP is not yet supported with Azure Virtual Networks and VPN gateways using the classic deployment model. Without BGP, manually defining transit address spaces is very error prone, and not recommended.
+Transit traffic via Azure VPN gateway is possible using the classic deployment model, but relies on statically defined address spaces in the network configuration file. BGP isn't yet supported with Azure Virtual Networks and VPN gateways using the classic deployment model. Without BGP, manually defining transit address spaces is error-prone and not recommended.
### Does Azure generate the same IPsec/IKE pre-shared key for all my VPN connections for the same virtual network?
-No, Azure by default generates different pre-shared keys for different VPN connections. However, you can use the Set VPN Gateway Key REST API or PowerShell cmdlet to set the key value you prefer. The key MUST only contain printable ASCII characters except space, hyphen (-) or tilde (~).
+No, Azure by default generates different pre-shared keys for different VPN connections. However, you can use the `Set VPN Gateway Key` REST API or PowerShell cmdlet to set the key value you prefer. The key must contain only printable ASCII characters, excluding space, hyphen (-), and tilde (~).
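For example, a hedged sketch using the PowerShell cmdlet (connection name, resource group, and key value are illustrative):

```powershell
# Set a custom pre-shared key on an existing site-to-site or VNet-to-VNet connection
Set-AzVirtualNetworkGatewayConnectionSharedKey -Name "VNet1toSite1" `
    -ResourceGroupName "TestRG1" -Value "Abc123Abc123Abc123"
```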
-### Do I get more bandwidth with more Site-to-Site VPNs than for a single virtual network?
+### Do I get more bandwidth with more site-to-site VPNs than for a single virtual network?
-No, all VPN tunnels, including Point-to-Site VPNs, share the same Azure VPN gateway and the available bandwidth.
+No, all VPN tunnels, including point-to-site VPNs, share the same Azure VPN gateway and the available bandwidth.
### Can I configure multiple tunnels between my virtual network and my on-premises site using multi-site VPN?
Yes, Azure VPN gateway will honor AS Path prepending to help make routing decisi
### Can I use the RoutingWeight property when creating a new VPN VirtualNetworkGateway connection?
-No, such setting is reserved for ExpressRoute gateway connections. If you want to influence routing decisions between multiple connections you need to use AS Path prepending.
+No, that setting is reserved for ExpressRoute gateway connections. If you want to influence routing decisions between multiple connections, you need to use AS Path prepending.
-### Can I use Point-to-Site VPNs with my virtual network with multiple VPN tunnels?
+### Can I use point-to-site VPNs with my virtual network with multiple VPN tunnels?
-Yes, Point-to-Site (P2S) VPNs can be used with the VPN gateways connecting to multiple on-premises sites and other virtual networks.
+Yes, point-to-site (P2S) VPNs can be used with the VPN gateways connecting to multiple on-premises sites and other virtual networks.
### Can I connect a virtual network with IPsec VPNs to my ExpressRoute circuit?
-Yes, this is supported. For more information, see [Configure ExpressRoute and Site-to-Site VPN connections that coexist](../expressroute/expressroute-howto-coexist-classic.md).
+Yes, this is supported. For more information, see [Configure ExpressRoute and site-to-site VPN connections that coexist](../expressroute/expressroute-howto-coexist-classic.md).
## <a name="ipsecike"></a>IPsec/IKE policy
Yes. See [Configure forced tunneling](vpn-gateway-about-forced-tunneling.md).
You have a few options. If you have RDP enabled for your VM, you can connect to your virtual machine by using the private IP address. In that case, you would specify the private IP address and the port that you want to connect to (typically 3389). You'll need to configure the port on your virtual machine for the traffic.
-You can also connect to your virtual machine by private IP address from another virtual machine that's located on the same virtual network. You can't RDP to your virtual machine by using the private IP address if you are connecting from a location outside of your virtual network. For example, if you have a Point-to-Site virtual network configured and you don't establish a connection from your computer, you can't connect to the virtual machine by private IP address.
+You can also connect to your virtual machine by private IP address from another virtual machine that's located on the same virtual network. You can't RDP to your virtual machine by using the private IP address if you're connecting from a location outside of your virtual network. For example, if you have a point-to-site virtual network configured and you don't establish a connection from your computer, you can't connect to the virtual machine by private IP address.
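As a small illustration (the private IP address and port are hypothetical), once the P2S connection is established you can point the RDP client at the VM's private address directly:

```powershell
# Connect to the VM over the P2S tunnel by its private IP and RDP port
mstsc.exe /v:10.1.0.4:3389
```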
### If my virtual machine is in a virtual network with cross-premises connectivity, does all the traffic from my VM go through that connection?
web-application-firewall Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/cdn/cdn-overview.md
Previously updated : 08/31/2020 Last updated : 05/26/2022
You can configure a WAF policy and associate that policy to one or more CDN endp
- custom rules that you can create. -- managed rule sets that are a collection of Azure managed pre-configured rules.
+- managed rule sets that are a collection of Azure-managed pre-configured rules.
When both are present, custom rules are processed before processing the rules in a managed rule set. A rule is made of a match condition, a priority, and an action. Action types supported are: *ALLOW*, *BLOCK*, *LOG*, and *REDIRECT*. You can create a fully customized policy that meets your specific application protection requirements by combining managed and custom rules.
You can choose one of the following actions when a request matches a rule's cond
- *Allow*: The request passes through the WAF and is forwarded to back-end. No further lower priority rules can block this request. - *Block*: The request is blocked and WAF sends a response to the client without forwarding the request to the back-end. - *Log*: The request is logged in the WAF logs and WAF continues evaluating lower priority rules.-- *Redirect*: WAF redirects the request to the specified URI. The URI specified is a policy level setting. Once configured, all requests that match the *Redirect* action is sent to that URI.
+- *Redirect*: WAF redirects the request to the specified URI. The URI specified is a policy-level setting. Once configured, all requests that match the *Redirect* action are sent to that URI.
## WAF rules
Custom rules can have match rules and rate control rules.
You can configure the following custom match rules: -- *IP allow list and block list*: You can control access to your web applications based on a list of client IP addresses or IP address ranges. Both IPv4 and IPv6 address types are supported. This list can be configured to either block or allow those requests where the source IP matches an IP in the list.
+- *IP allowlist and blocklist*: You can control access to your web applications based on a list of client IP addresses or IP address ranges. Both IPv4 and IPv6 address types are supported. This list can be configured to either block or allow those requests where the source IP matches an IP in the list.
- *Geographic based access control*: You can control access to your web applications based on the country code that's associated with a client's IP address.
You can configure the following custom match rules:
A rate control rule limits abnormally high traffic from any client IP address. -- *Rate limiting rules*: You can configure a threshold on the number of web requests allowed from a client IP address during a one-minute duration. This rule is distinct from an IP list-based allow/block custom rule that either allows all or blocks all request from a client IP address. Rate limits can be combined with additional match conditions such as HTTP(S) parameter matches for granular rate control.
+- *Rate limiting rules*: You can configure a threshold on the number of web requests allowed from a client IP address during a one-minute duration. This rule is distinct from an IP list-based allow/block custom rule that either allows all or blocks all requests from a client IP address. Rate limits can be combined with more match conditions such as HTTP(S) parameter matches for granular rate control.
### Azure-managed rule sets
Monitoring for WAF with CDN is integrated with Azure Monitor to track alerts and
## Next steps -- [Tutorial: Create a WAF policy with Azure CDN using the Azure portal](waf-cdn-create-portal.md)
+- [Azure CLI for CDN WAF](/cli/azure/cdn/waf)
web-application-firewall Waf Cdn Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/cdn/waf-cdn-create-portal.md
- Title: 'Tutorial: Create WAF policy for Azure CDN - Azure portal'
-description: In this tutorial, you learn how to create a Web Application Firewall (WAF) policy on Azure CDN using the Azure portal.
- Previously updated : 09/16/2020
-# Tutorial: Create a WAF policy on Azure CDN using the Azure portal
-
-This tutorial shows you how to create a basic Azure Web Application Firewall (WAF) policy and apply it to an endpoint on Azure Content Delivery Network (CDN).
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create a WAF policy
-> * Associate it with a CDN endpoint. You can associate a WAF policy only with endpoints that are hosted on the **Azure CDN Standard from Microsoft** SKU.
-> * Configure WAF rules
-
-## Prerequisites
-
-Create an Azure CDN profile and endpoint by following the instructions in [Quickstart: Create an Azure CDN profile and endpoint](../../cdn/cdn-create-new-endpoint.md).
-
-## Create a Web Application Firewall policy
-
-First, create a basic WAF policy with a managed Default Rule Set (DRS) using the portal.
-
-1. On the top left-hand side of the screen, select **Create a resource**>search for **WAF**>select **Web application firewall** > select **Create**.
-2. In the **Basics** tab of the **Create a WAF policy** page, enter or select the following information, accept the defaults for the remaining settings, and then select **Review + create**:
-
- | Setting | Value |
- | | |
- | Policy For |Select Azure CDN (Preview).|
- | Subscription |Select your CDN Profile subscription name.|
- | Resource group |Select your CDN Profile resource group name.|
- | Policy name |Enter a unique name for your WAF policy.|
-
- :::image type="content" source="../media/waf-cdn-create-portal/basic.png" alt-text="Screenshot of the Create a W A F policy page, with a Review + create button and values entered for various settings." border="false":::
-
-3. In the **Association** tab of the **Create a WAF policy** page, select **Add CDN Endpoint**, enter the following settings, and then select **Add**:
-
- | Setting | Value |
- | | |
- | CDN Profile | Select your CDN profile name.|
- | Endpoint | Select the name of your endpoint, then select **Add**.|
-
- > [!NOTE]
- > If the endpoint is associated with a WAF policy, it is shown grayed out. You must first remove the Endpoint from the associated policy, and then re-associate the endpoint to a new WAF policy.
-1. Select **Review + create**, then select **Create**.
-
-## Configure Web Application Firewall policy (optional)
-
-### Change mode
-
-By default WAF policy is in *Detection* mode when you create a WAF policy. In *Detection* mode, WAF doesn't block any requests. Instead, requests matching the WAF rules are logged at WAF logs.
-
-To see WAF in action, you can change the mode settings from *Detection* to *Prevention*. In *Prevention* mode, requests that match rules that are defined in Default Rule Set (DRS) are blocked and logged at WAF logs.
-
- :::image type="content" source="../media/waf-cdn-create-portal/policy.png" alt-text="Screenshot of the Policy settings section. The Mode toggle is set to Prevention." border="false":::
-
-### Custom rules
-
-To create a custom rule, select **Add custom rule** under the **Custom rules** section. This opens the custom rule configuration page. There are two types of custom rules: **match rule** and **rate limit** rule.
-
-The following screenshot shows a custom match rule to block a request if the query string contains the value **blockme**.
--
-Rate limit rules require two additional fields: **Rate limit duration** and **Rate limit threshold (requests)** as shown in the following example:
--
-### Default Rule Set (DRS)
-
-The Azure managed Default Rule Set is enabled by default. To disable an individual rule within a rule group, expand the rules within that rule group, select the check box in front of the rule number, and select **Disable** on the tab above. To change actions types for individual rules within the rule set, select the check box in front of the rule number, and then select the **Change action** tab above.
-
- :::image type="content" source="../media/waf-cdn-create-portal/managed2.png" alt-text="Screenshot of the Managed rules page showing a rule set, rule groups, rules, and Enable, Disable, and Change Action buttons. One rule is checked." border="false":::
-
-## Clean up resources
-
-When no longer needed, remove the resource group and all related resources.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn about Azure Web Application Firewall](../overview.md)