Updates from: 03/23/2022 02:10:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-licensing.md
Previously updated : 11/02/2021 Last updated : 03/22/2022
# Features and licenses for Azure AD Multi-Factor Authentication
-To protect user accounts in your organization, multi-factor authentication should be used. This feature is especially important for accounts that have privileged access to resources. Basic multi-factor authentication features are available to Microsoft 365 and Azure Active Directory (Azure AD) global administrators for no extra cost. If you want to upgrade the features for your admins or extend multi-factor authentication to the rest of your users, you can purchase Azure AD Multi-Factor Authentication in several ways.
+To protect user accounts in your organization, multi-factor authentication should be used. This feature is especially important for accounts that have privileged access to resources. Basic multi-factor authentication features are available to Microsoft 365 and Azure Active Directory (Azure AD) users and global administrators for no extra cost. If you want to upgrade the features for your admins or extend multi-factor authentication to the rest of your users with more authentication methods and greater control, you can purchase Azure AD Multi-Factor Authentication in several ways.
> [!IMPORTANT]
> This article details the different ways that Azure AD Multi-Factor Authentication can be licensed and used. For specific details about pricing and billing, see the [Azure AD pricing page](https://www.microsoft.com/en-us/security/business/identity-access-management/azure-ad-pricing).

## Available versions of Azure AD Multi-Factor Authentication
-Azure AD Multi-Factor Authentication can be used, and licensed, in a few different ways depending on your organization's needs. You may already be entitled to use Azure AD Multi-Factor Authentication depending on the Azure AD, EMS, or Microsoft 365 license you currently have. For example, the first 50,000 monthly active users in Azure AD External Identities can use MFA and other Premium P1 or P2 features for free. For more information, see [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
+Azure AD Multi-Factor Authentication can be used, and licensed, in a few different ways depending on your organization's needs. All tenants are entitled to basic multifactor authentication features via Security Defaults. You may already be entitled to use advanced Azure AD Multi-Factor Authentication depending on the Azure AD, EMS, or Microsoft 365 license you currently have. For example, the first 50,000 monthly active users in Azure AD External Identities can use MFA and other Premium P1 or P2 features for free. For more information, see [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
The following table details the different ways to get Azure AD Multi-Factor Authentication and some of the features and use cases for each.
active-directory Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md
Previously updated : 02/28/2022 Last updated : 03/22/2022
The following tables list Azure AD feature availability in Azure Government.
|Service | Feature | Availability |
|:---|---|:---:|
-|**Authentication, single sign-on, and MFA**|||
-||Cloud authentication (Pass-through authentication, password hash synchronization) | ✅ |
+|**Authentication, single sign-on, and MFA**|Cloud authentication (Pass-through authentication, password hash synchronization) | ✅ |
|| Federated authentication (Active Directory Federation Services or federation with other identity providers) | ✅ |
|| Single sign-on (SSO) unlimited | ✅ |
|| Multifactor authentication (MFA) | Hardware OATH tokens are not available. Instead, use Conditional Access policies with named locations to establish when multifactor authentication should and should not be required based on the user's current IP address. Microsoft Authenticator only shows GUID and not UPN for compliance reasons. |
|| Passwordless (Windows Hello for Business, Microsoft Authenticator, FIDO2 security key integrations) | ✅ |
|| Service-level agreement | ✅ |
-|**Applications access**|||
-|| SaaS apps with modern authentication (Azure AD application gallery apps, SAML, and OAUTH 2.0) | ✅ |
+|**Applications access**|SaaS apps with modern authentication (Azure AD application gallery apps, SAML, and OAUTH 2.0) | ✅ |
|| Group assignment to applications | ✅ |
|| Cloud app discovery (Microsoft Cloud App Security) | ✅ |
|| Application Proxy for on-premises, header-based, and Integrated Windows Authentication | ✅ |
|| Secure hybrid access partnerships (Kerberos, NTLM, LDAP, RDP, and SSH authentication) | ✅ |
-|**Authorization and Conditional Access**|||
-|| Role-based access control (RBAC) | ✅ |
+|**Authorization and Conditional Access**|Role-based access control (RBAC) | ✅ |
|| Conditional Access | ✅ |
|| SharePoint limited access | ✅ |
|| Session lifetime management | ✅ |
|| Identity Protection (vulnerabilities and risky accounts) | See [Identity protection](#identity-protection) below. |
|| Identity Protection (risk events investigation, SIEM connectivity) | See [Identity protection](#identity-protection) below. |
-|**Administration and hybrid identity**|||
-|| User and group management | ✅ |
+|**Administration and hybrid identity**|User and group management | ✅ |
|| Advanced group management (Dynamic groups, naming policies, expiration, default classification) | ✅ |
|| Directory synchronization - Azure AD Connect (sync and cloud sync) | ✅ |
|| Azure AD Connect Health reporting | ✅ |
The following tables list Azure AD feature availability in Azure Government.
|| Global password protection and management - cloud-only users | ✅ |
|| Global password protection and management - custom banned passwords, users synchronized from on-premises Active Directory | ✅ |
|| Microsoft Identity Manager user client access license (CAL) | ✅ |
-|**End-user self-service**|||
-|| Application launch portal (My Apps) | ✅ |
+|**End-user self-service**|Application launch portal (My Apps) | ✅ |
|| User application collections in My Apps | ✅ |
|| Self-service account management portal (My Account) | ✅ |
|| Self-service password change for cloud users | ✅ |
The following tables list Azure AD feature availability in Azure Government.
|| Self-service sign-in activity search and reporting | ✅ |
|| Self-service group management (My Groups) | ✅ |
|| Self-service entitlement management (My Access) | ✅ |
-|**Identity governance**|||
-|| Automated user provisioning to apps | ✅ |
+|**Identity governance**|Automated user provisioning to apps | ✅ |
|| Automated group provisioning to apps | ✅ |
|| HR-driven provisioning | Partial. See [HR-provisioning apps](#hr-provisioning-apps). |
|| Terms of use attestation | ✅ |
|| Access certifications and reviews | ✅ |
|| Entitlement management | ✅ |
|| Privileged Identity Management (PIM), just-in-time access | ✅ |
-|**Event logging and reporting**|||
-|| Basic security and usage reports | ✅ |
+|**Event logging and reporting**|Basic security and usage reports | ✅ |
|| Advanced security and usage reports | ✅ |
|| Identity Protection: vulnerabilities and risky accounts | ✅ |
|| Identity Protection: risk events investigation, SIEM connectivity | ✅ |
-|**Frontline workers**|||
-|| SMS sign-in | Feature not available. |
+|**Frontline workers**|SMS sign-in | Feature not available. |
|| Shared device sign-out | Enterprise state roaming for Windows 10 devices is not available. |
|| Delegated user management portal (My Staff) | Feature not available. |
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Previously updated : 02/28/2022 Last updated : 03/22/2022
The following table lists **authenticationMethodsRegistrationCampaign** properties.
| Name | Possible values | Description |
|------|-----------------|-------------|
| state | "enabled"<br>"disabled"<br>"default" | Allows you to enable or disable the feature.<br>The default value is used when the configuration hasn't been explicitly set; it follows the Azure AD default for this setting, which currently maps to disabled.<br>Change the state to enabled or disabled as needed. |
-| snoozeDurationInDays | Range: 0 to 14 | Defines after how many days the user will see the nudge again.<br>If the value is 0, the user is nudged during every MFA attempt.<br>Default: 1 day |
+| snoozeDurationInDays | Range: 0 to 14 | Defines the number of days before the user is nudged again.<br>If the value is 0, the user is nudged during every MFA attempt.<br>Default: 1 day |
| includeTargets | N/A | Allows you to include different users and groups that you want the feature to target. |
| excludeTargets | N/A | Allows you to exclude different users and groups that you want omitted from the feature. If a user is in a group that is excluded and a group that is included, the user will be excluded from the feature. |
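To see these properties in context, the sketch below updates the registration campaign through the Microsoft Graph authentication methods policy. It's a minimal example, assuming the Python `requests` package, a pre-acquired access token with the `Policy.ReadWrite.AuthenticationMethod` permission, and a hypothetical group ID; verify the Graph endpoint version (beta vs. v1.0) for your tenant before relying on it.

```python
import requests

ACCESS_TOKEN = "<token-with-Policy.ReadWrite.AuthenticationMethod>"  # assumed pre-acquired

# The nudge settings live under registrationEnforcement in the tenant-wide
# authentication methods policy.
payload = {
    "registrationEnforcement": {
        "authenticationMethodsRegistrationCampaign": {
            "state": "enabled",
            "snoozeDurationInDays": 3,  # users can snooze the nudge for 3 days (range 0-14)
            "includeTargets": [
                {
                    "id": "00000000-0000-0000-0000-000000000000",  # hypothetical group ID
                    "targetType": "group",
                    "targetedAuthenticationMethod": "microsoftAuthenticator",
                }
            ],
        }
    }
}

response = requests.patch(
    "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
)
response.raise_for_status()  # a 2xx status indicates the policy was updated
```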
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 01/11/2022 Last updated : 03/22/2022
The following Azure AD Multi-Factor Authentication settings are available in the Azure portal:
| Feature | Description |
| - | -- |
-| [Account lockout](#account-lockout) | Temporarily lock accounts from using Azure AD Multi-Factor Authentication if there are too many denied authentication attempts in a row. This feature applies only to users who enter a PIN to authenticate. (MFA Server) |
+| [Account lockout](#account-lockout) | Temporarily lock accounts from using Azure AD Multi-Factor Authentication if there are too many denied authentication attempts in a row. This feature applies only to users who enter a PIN to authenticate. (MFA Server only) |
| [Block/unblock users](#block-and-unblock-users) | Block specific users from being able to receive Azure AD Multi-Factor Authentication requests. Any authentication attempts for blocked users are automatically denied. Users remain blocked for 90 days from the time that they're blocked or until they're manually unblocked. |
| [Fraud alert](#fraud-alert) | Configure settings that allow users to report fraudulent verification requests. |
| [Notifications](#notifications) | Enable notifications of events from MFA Server. |
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
Previously updated : 03/04/2022 Last updated : 03/22/2022
Create a location-based Conditional Access policy that applies to service principals.
1. Under **Cloud apps or actions**, select **All cloud apps**. The policy will apply only when a service principal requests a token.
1. Under **Conditions** > **Locations**, include **Any location** and exclude **Selected locations** where you want to allow access.
1. Under **Grant**, **Block access** is the only available option. Access is blocked when a token request is made from outside the allowed range.
-1. Your policy can be saved in **Report-only** mode, allowing administrators to estimate the effects, or policy is enforced by turning policy **On**.
+1. Set **Enable policy** to **On**.
1. Select **Create** to complete your policy.

### Create a risk-based Conditional Access policy
Create a location-based Conditional Access policy that applies to service principals.
1. Select the levels of risk where you want this policy to trigger.
1. Select **Done**.
1. Under **Grant**, **Block access** is the only available option. Access is blocked when a token request is made from outside the allowed range.
-1. Your policy can be saved in **Report-only** mode, allowing administrators to estimate the effects, or policy is enforced by turning policy **On**.
+1. Set **Enable policy** to **On**.
1. Select **Create** to complete your policy.
+#### Report-only mode
+
+Saving your policy in Report-only mode won't allow administrators to estimate the effects because we don't currently log this risk information in sign-in logs.
+
## Roll back

If you wish to roll back this feature, you can delete or disable any created policies.
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
Title: OAuth 2.0 and OpenID Connect protocols on the Microsoft identity platform | Azure
-description: A guide to OAuth 2.0 and OpenID Connect protocols that are supported by the Microsoft identity platform.
+description: A guide to OAuth 2.0 and OpenID Connect protocols as supported by the Microsoft identity platform.
Previously updated : 07/21/2020 Last updated : 03/23/2022
-# OAuth 2.0 and OpenID Connect protocols on the Microsoft identity platform
+# OAuth 2.0 and OpenID Connect in the Microsoft identity platform
-The Microsoft identity platform endpoint for identity-as-a-service implements authentication and authorization with the industry standard protocols OpenID Connect (OIDC) and OAuth 2.0, respectively. While the service is standards-compliant, there can be subtle differences between any two implementations of these protocols. The information here will be useful if you choose to write your code by directly sending and handling HTTP requests or use a third-party open-source library, rather than using one of our [open-source libraries](reference-v2-libraries.md).
+The Microsoft identity platform offers authentication and authorization services using standards-compliant implementations of OAuth 2.0 and OpenID Connect (OIDC) 1.0.
-## The basics
+You don't need to learn OAuth and OIDC at the protocol level to use the Microsoft identity platform. However, debugging your apps can be made easier by learning a few basics of the protocols and their implementation on the identity platform.
-In nearly all OAuth 2.0 and OpenID Connect flows, there are four parties involved in the exchange:
+## Roles in OAuth 2.0
+
+Four parties are typically involved in an OAuth 2.0 and OpenID Connect authentication and authorization exchange. Such exchanges are often called *authentication flows* or *auth flows*.
![Diagram showing the OAuth 2.0 roles](./media/active-directory-v2-flows/protocols-roles.svg)
-* The **Authorization Server** is the Microsoft identity platform and is responsible for ensuring the user's identity, granting and revoking access to resources, and issuing tokens. The authorization server is also known as the identity provider - it securely handles anything to do with the user's information, their access, and the trust relationships between parties in a flow.
-* The **Resource Owner** is typically the end user. It's the party that owns the data and has the power to allow clients to access that data or resource.
-* The **OAuth Client** is your app, identified by its application ID. The OAuth client is usually the party that the end user interacts with, and it requests tokens from the authorization server. The client must be granted permission to access the resource by the resource owner.
-* The **Resource Server** is where the resource or data resides. It trusts the Authorization Server to securely authenticate and authorize the OAuth Client, and uses Bearer access tokens to ensure that access to a resource can be granted.
+* **Authorization server** - The Microsoft identity platform itself is the authorization server. Also called an *identity provider* or *IdP*, it securely handles the end-user's information, their access, and the trust relationships between the parties in the auth flow. The authorization server issues the security tokens your apps and APIs use for granting, denying, or revoking access to resources (authorization) after the user has signed in (authenticated).
+
+* **Client** - The client in an OAuth exchange is the application requesting access to a protected resource. The client could be a web app running on a server, a single-page web app running in a user's web browser, or a web API that calls another web API. You'll often see the client referred to as *client application*, *application*, or *app*.
+
+* **Resource owner** - The resource owner in an auth flow is typically the application user, or *end-user* in OAuth terminology. The end-user "owns" the protected resource (their data) that your app accesses on their behalf. The resource owner can grant or deny your app (the client) access to the resources they own. For example, your app might call an external system's API to get a user's email address from their profile on that system. Their profile data is a resource the end-user owns on the external system, and the end-user can consent to or deny your app's request to access their data.
+
+* **Resource server** - The resource server hosts or provides access to a resource owner's data. Most often, the resource server is a web API fronting a data store. The resource server relies on the authorization server to perform authentication and uses information in bearer tokens issued by the authorization server to grant or deny access to resources.
+
+## Tokens
+
+The parties in an authentication flow use **bearer tokens** to assure identification (authentication) and to grant or deny access to protected resources (authorization). Bearer tokens in the Microsoft identity platform are formatted as [JSON Web Tokens](https://tools.ietf.org/html/rfc7519) (JWT).
+
+Three types of bearer tokens are used by the Microsoft identity platform as *security tokens*:
+
+* [Access tokens](access-tokens.md) - Access tokens are issued by the authorization server to the client application. The client passes access tokens to the resource server. Access tokens contain the permissions the client has been granted by the authorization server.
+
+* [ID tokens](id-tokens.md) - ID tokens are issued by the authorization server to the client application. Clients use ID tokens when signing in users and to get basic information about them.
+
+* **Refresh tokens** - The client uses a refresh token, or *RT*, to request new access and ID tokens from the authorization server. Your code should treat refresh tokens and their string content as opaque because they're intended for use only by the authorization server.
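Because all three are JWTs, you can peek inside them while debugging. Here's a minimal sketch in Python using only the standard library; it decodes the payload *without* validating the signature, which is acceptable for inspection but never for making authorization decisions (production code must validate tokens against the issuer's published signing keys).

```python
import base64
import json

def peek_jwt_claims(token: str) -> dict:
    """Decode a JWT's payload for inspection WITHOUT validating its signature."""
    _header, payload, _signature = token.split(".")
    # JWTs use unpadded base64url; restore the padding before decoding.
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Example (token value assumed): inspect the audience and expiry of an access token.
# claims = peek_jwt_claims(access_token)
# print(claims["aud"], claims["exp"])
```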
## App registration
-Every app that wants to accept both personal and work or school accounts must be registered through the **App registrations** experience in the [Azure portal](https://aka.ms/appregistrations) before it can sign these users in using OAuth 2.0 or OpenID Connect. The app registration process will collect and assign a few values to your app:
+Your client app needs a way to trust the security tokens issued to it by the Microsoft identity platform. The first step in establishing that trust is [registering your app](quickstart-register-app.md) with the identity platform in Azure Active Directory (Azure AD).
-* An **Application ID** that uniquely identifies your app
-* A **Redirect URI** (optional) that can be used to direct responses back to your app
-* A few other scenario-specific values.
+When you register your app in Azure AD, the Microsoft identity platform automatically assigns it some values; you configure others based on the application's type.
-For more details, learn how to [register an app](quickstart-register-app.md).
-Two of the most commonly referenced app registration settings are:
-## Endpoints
+* **Application (client) ID** - Also called _application ID_ and _client ID_, this value is assigned to your app by the Microsoft identity platform. The client ID uniquely identifies your app in the identity platform and is included in the security tokens the platform issues.
+* **Redirect URI** - The authorization server uses a redirect URI to direct the resource owner's *user-agent* (web browser, mobile app) to another destination after completing their interaction. For example, after the end-user authenticates with the authorization server. Not all client types use redirect URIs.
-Once registered, the app communicates with the Microsoft identity platform by sending requests to the endpoint:
+Your app's registration also holds information about the authentication and authorization *endpoints* you'll use in your code to get ID and access tokens.
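To illustrate where these two settings surface in code, here's a hedged sketch using the `msal` package for Python; the client ID, authority, and redirect URI are placeholder values standing in for your own registration's settings.

```python
import msal

CLIENT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder Application (client) ID
AUTHORITY = "https://login.microsoftonline.com/common"
REDIRECT_URI = "http://localhost:5000/getAToken"    # must match a redirect URI on the registration

app = msal.PublicClientApplication(CLIENT_ID, authority=AUTHORITY)

# The client ID identifies your app to the authorization endpoint; after sign-in,
# the identity platform sends the authorization code back to REDIRECT_URI.
flow = app.initiate_auth_code_flow(scopes=["User.Read"], redirect_uri=REDIRECT_URI)
print("Direct the user's browser to:", flow["auth_uri"])
```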
-```
-https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
-https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
-```
+## Endpoints
-Where the `{tenant}` can take one of four different values:
+Authorization servers like the Microsoft identity platform provide a set of HTTP endpoints for use by the parties in an auth flow to execute the flow.
-| Value | Description |
-| | |
-| `common` | Allows users with both personal Microsoft accounts and work/school accounts from Azure AD to sign into the application. |
-| `organizations` | Allows only users with work/school accounts from Azure AD to sign into the application. |
-| `consumers` | Allows only users with personal Microsoft accounts (MSA) to sign into the application. |
-| `8eaef023-2b34-4da1-9baa-8bc8c9d6a490` or `contoso.onmicrosoft.com` | Allows only users with work/school accounts from a particular Azure AD tenant to sign into the application. Either the friendly domain name of the Azure AD tenant or the tenant's GUID identifier can be used. |
+The endpoint URIs for your app are generated for you when you register or configure your app in Azure AD. The endpoints you use in your app's code depend on the application's type and the identities (account types) it should support.
-To learn how to interact with these endpoints, choose a particular app type in the [Protocols](#protocols) section and follow the links for more info.
+Two commonly used endpoints are the [authorization endpoint](v2-oauth2-auth-code-flow.md#request-an-authorization-code) and [token endpoint](v2-oauth2-auth-code-flow.md#redeem-a-code-for-an-access-token). Here are examples of the `authorize` and `token` endpoints:
-> [!TIP]
-> Any app registered in Azure AD can use the Microsoft identity platform, even if they don't sign in personal accounts. This way, you can migrate existing applications to the Microsoft identity platform and [MSAL](reference-v2-libraries.md) without re-creating your application.
+```Bash
+# Authorization endpoint - used by client to obtain authorization from the resource owner.
+https://login.microsoftonline.com/<issuer>/oauth2/v2.0/authorize
+# Token endpoint - used by client to exchange an authorization grant or refresh token for an access token.
+https://login.microsoftonline.com/<issuer>/oauth2/v2.0/token
-## Tokens
+# NOTE: These are examples. Endpoint URI format may vary based on application type,
+# sign-in audience, and Azure cloud instance (global or national cloud).
+```
+
+To find the endpoints for an application you've registered, in the [Azure portal](https://portal.azure.com) navigate to:
-OAuth 2.0 and OpenID Connect make extensive use of **bearer tokens**, generally represented as [JWTs (JSON Web Tokens)](https://tools.ietf.org/html/rfc7519). A bearer token is a lightweight security token that grants the "bearer" access to a protected resource. In this sense, the "bearer" is anyone that gets a copy of the token. Though a party must first authenticate with the Microsoft identity platform to receive the bearer token, if the required steps are not taken to secure the token in transmission and storage, it can be intercepted and used by an unintended party. While some security tokens have a built-in mechanism for preventing unauthorized parties from using them, bearer tokens do not have this mechanism and must be transported in a secure channel such as transport layer security (HTTPS). If a bearer token is transmitted in the clear, a malicious party can use a man-in-the-middle attack to acquire the token and use it for unauthorized access to a protected resource. The same security principles apply when storing or caching bearer tokens for later use. Always ensure that your app transmits and stores bearer tokens in a secure manner. For more security considerations on bearer tokens, see [RFC 6750 Section 5](https://tools.ietf.org/html/rfc6750).
+**Azure Active Directory** > **App registrations** > *{YOUR-APPLICATION}* > **Endpoints**
-There are primarily 3 types of tokens used in OAuth 2.0 / OIDC:
+## Next steps
-* [Access tokens](access-tokens.md) - tokens that a resource server receives from a client, containing permissions the client has been granted.
-* [ID tokens](id-tokens.md) - tokens that a client receives from the authorization server, used to sign in a user and get basic information about them.
-* Refresh tokens - used by a client to get new access and ID tokens over time. These are opaque strings, and are only understandable by the authorization server.
+Next, learn about the OAuth 2.0 authentication flows used by each application type and the libraries you can use in your apps to perform them:
-## Protocols
+* [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
+* [Microsoft authentication libraries](reference-v2-libraries.md)
-If you're ready to see some example requests, get started with one of the below protocol documents. Each one corresponds to a particular authentication scenario. If you need help with determining which is the right flow for you, check out [the types of apps you can build with Microsoft identity platform](v2-app-types.md).
+Always prefer using an authentication library over making raw HTTP calls to execute auth flows. However, if you have an app that requires it or you'd like to learn more about the identity platform's implementation of OAuth and OIDC, see:
-* [Build mobile, native, and web application with OAuth 2.0](v2-oauth2-auth-code-flow.md)
-* [Sign users in with OpenID Connect](v2-protocols-oidc.md)
-* [Build daemons or server-side processes with the OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md)
-* [Get tokens in a web API with the OAuth 2.0 on-behalf-of Flow](v2-oauth2-on-behalf-of-flow.md)
-* [Build single-page apps with the OAuth 2.0 Implicit Flow](v2-oauth2-implicit-grant-flow.md)
+* [OpenID Connect](v2-protocols-oidc.md) - User sign-in, sign-out, and single sign-on (SSO)
+* [Authorization code grant flow](v2-oauth2-auth-code-flow.md) - Single-page apps (SPA), mobile apps, native (desktop) applications
+* [Client credentials flow](v2-oauth2-client-creds-grant-flow.md) - Server-side processes, scripts, daemons
+* [On-behalf-of (OBO) flow](v2-oauth2-on-behalf-of-flow.md) - Web APIs that call another web API on a user's behalf
active-directory Msal Acquire Cache Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-acquire-cache-tokens.md
Title: Acquire & cache tokens with Microsoft Authentication Library (MSAL) | Azure
+ Title: Acquire and cache tokens with Microsoft Authentication Library (MSAL) | Azure
description: Learn about acquiring and caching tokens using MSAL.
Previously updated : 11/04/2020 Last updated : 03/22/2022
-#Customer intent: As an application developer, I want to learn about acquiring and caching tokens so I can decide if this platform meets my application development needs and requirements.
+#Customer intent: As an application developer, I want to learn about acquiring and caching tokens so my app can support authentication and authorization.
# Acquire and cache tokens using the Microsoft Authentication Library (MSAL)
Generally, the method of acquiring a token depends on whether it's a public client or confidential client application.
### Public client applications
-For public client applications (desktop or mobile app), you:
+In public client applications like desktop and mobile apps, you can:
-- Often acquire tokens interactively, having the user sign in through a UI or pop-up window.
-- Can [get a token silently for the signed-in user](msal-authentication-flows.md#integrated-windows-authentication) using integrated Windows authentication (IWA/Kerberos) if the desktop application is running on a Windows computer joined to a domain or to Azure.
-- Can [get a token with a username and password](msal-authentication-flows.md#usernamepassword) in .NET framework desktop client applications (not recommended). Do not use username/password in confidential client applications.
-- Can acquire a token through the [device code flow](msal-authentication-flows.md#device-code) in applications running on devices that don't have a web browser. The user is provided with a URL and a code, who then goes to a web browser on another device and enters the code and signs in. Azure AD then sends a token back to the browser-less device.
+- Get tokens interactively by having the user sign in through a UI or pop-up window, as shown in the sketch after this list.
+- Get a token silently for the signed-in user using [integrated Windows authentication](msal-authentication-flows.md#integrated-windows-authentication-iwa) (IWA/Kerberos) if the desktop application is running on a Windows computer joined to a domain or to Azure.
+- Get a token with a [username and password](msal-authentication-flows.md#usernamepassword-ropc) in .NET framework desktop client applications (not recommended). Do not use username/password in confidential client applications.
+- Get a token through the [device code flow](msal-authentication-flows.md#device-code) in applications running on devices that don't have a web browser. The user is given a URL and a code; they then open a web browser on another device, enter the code, and sign in. Azure AD then sends a token back to the browser-less device.
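As a concrete illustration of the first option, here's a minimal sketch of interactive acquisition with the `msal` package for Python (the client ID and scope are placeholders); the other options in the list map to similar `acquire_token_*` methods on the same application object.

```python
import msal

app = msal.PublicClientApplication(
    "11111111-1111-1111-1111-111111111111",  # placeholder client ID
    authority="https://login.microsoftonline.com/organizations",
)

# Interactive: opens the system browser so the user can sign in.
result = app.acquire_token_interactive(scopes=["User.Read"])

if "access_token" in result:
    print("Signed in as:", result.get("id_token_claims", {}).get("preferred_username"))
else:
    print("Failed:", result.get("error"), result.get("error_description"))
```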
### Confidential client applications

For confidential client applications (web app, web API, or a daemon application like a Windows service), you:

- Acquire tokens **for the application itself** and not for a user, using the [client credentials flow](msal-authentication-flows.md#client-credentials). This technique can be used for syncing tools, or tools that process users in general and not a specific user.
+- Use the [on-behalf-of (OBO) flow](msal-authentication-flows.md#on-behalf-of-obo) for a web API to call an API on behalf of the user. The application is identified with client credentials in order to acquire a token based on a user assertion (SAML, for example, or a JWT token). This flow is used by applications that need to access resources of a particular user in service-to-service calls.
- Acquire tokens using the [authorization code flow](msal-authentication-flows.md#authorization-code) in web apps after the user signs in through the authorization request URL. OpenID Connect applications typically use this mechanism, which lets the user sign in using OpenID Connect and then access web APIs on behalf of the user.

## Authentication results
active-directory Msal Authentication Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-authentication-flows.md
Title: MSAL authentication flows | Azure
+ Title: Authentication flow support in the Microsoft Authentication Library (MSAL) | Azure
-description: Learn about the authentication flows and grants used by the Microsoft Authentication Library (MSAL).
+description: Learn about the authorization grants and authentication flows supported by MSAL.
Previously updated : 01/25/2021 Last updated : 03/22/2022
# Customer intent: As an application developer, I want to learn about the authentication flows supported by MSAL.
-# Authentication flows
+# Authentication flow support in MSAL
-The Microsoft Authentication Library (MSAL) supports several authentication flows for use in different application scenarios.
+The Microsoft Authentication Library (MSAL) supports several authorization grants and associated token flows for use by different application types and scenarios.
-| Flow | Description | Used in |
-|--|--|--|
-| [Authorization code](#authorization-code) | Used in apps that are installed on a device to gain access to protected resources, such as web APIs. Enables you to add sign-in and API access to your mobile and desktop apps. | [Desktop apps](scenario-desktop-overview.md), [mobile apps](scenario-mobile-overview.md), [web apps](scenario-web-app-call-api-overview.md) |
-| [Client credentials](#client-credentials) | Allows you to access web-hosted resources by using the identity of an application. Commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. | [Daemon apps](scenario-daemon-overview.md) |
-| [Device code](#device-code) | Allows users to sign in to input-constrained devices such as a smart TV, IoT device, or printer. | [Desktop/mobile apps](scenario-desktop-acquire-token-device-code-flow.md) |
-| [Implicit grant](#implicit-grant) | Allows the app to get tokens without performing a back-end server credential exchange. Enables the app to sign in the user, maintain session, and get tokens to other web APIs, all within the client JavaScript code. | [Single-page applications (SPA)](scenario-spa-overview.md) |
-| [On-behalf-of](#on-behalf-of) | An application invokes a service or web API, which in turn needs to call another service or web API. The idea is to propagate the delegated user identity and permissions through the request chain. | [Web APIs](scenario-web-api-call-api-overview.md) |
-| [Username/password](#usernamepassword) | Allows an application to sign in the user by directly handling their password. This flow isn't recommended. | [Desktop/mobile apps](scenario-desktop-acquire-token-username-password.md) |
-| [Integrated Windows authentication](#integrated-windows-authentication) | Allows applications on domain or Azure Active Directory (Azure AD) joined computers to acquire a token silently (without any UI interaction from the user). | [Desktop/mobile apps](scenario-desktop-acquire-token-integrated-windows-authentication.md) |
+| Authentication flow | Enables | Supported application types |
+|--|--||
+| [Authorization code](#authorization-code) | User sign-in and access to web APIs on behalf of the user. | * [Desktop](scenario-desktop-overview.md) <br /> * [Mobile](scenario-mobile-overview.md) <br /> * [Single-page app (SPA)](scenario-spa-overview.md) (requires PKCE) <br /> * [Web](scenario-web-app-call-api-overview.md) |
+| [Client credentials](#client-credentials) | Access to web APIs by using the identity of the application itself. Typically used for server-to-server communication and automated scripts requiring no user interaction. | [Daemon](scenario-daemon-overview.md) |
+| [Device code](#device-code) | User sign-in and access to web APIs on behalf of the user on input-constrained devices like smart TVs and IoT devices. Also used by command line interface (CLI) applications. | [Desktop, Mobile](scenario-desktop-acquire-token-device-code-flow.md) |
+| [Implicit grant](#implicit-grant) | User sign-in and access to web APIs on behalf of the user. _The implicit grant flow is no longer recommended - use authorization code with PKCE instead._ | * [Single-page app (SPA)](scenario-spa-overview.md) <br /> * [Web](scenario-web-app-call-api-overview.md) |
+| [On-behalf-of (OBO)](#on-behalf-of-obo) | Access from an "upstream" web API to a "downstream" web API on behalf of the user. The user's identity and delegated permissions are passed through to the downstream API from the upstream API. | [Web API](scenario-web-api-call-api-overview.md) |
+| [Username/password (ROPC)](#usernamepassword-ropc) | Allows an application to sign in the user by directly handling their password. _The ROPC flow is NOT recommended._ | [Desktop, Mobile](scenario-desktop-acquire-token-username-password.md) |
+| [Integrated Windows authentication (IWA)](#integrated-windows-authentication-iwa) | Allows applications on domain or Azure Active Directory (Azure AD) joined computers to acquire a token silently (without any UI interaction from the user). | [Desktop, Mobile](scenario-desktop-acquire-token-integrated-windows-authentication.md) |
-## How each flow emits tokens and codes
+## Tokens
-Depending on how your client application is built, it can use one or more of the authentication flows supported by the Microsoft identity platform. These flows can produce several types of tokens as well as authorization codes, and require different tokens to make them work.
+Your application can use one or more authentication flows. Each flow uses certain token types for authentication, authorization, and token refresh, and some also use an authorization code.
-| Flow | Requires | id_token | access token | refresh token | authorization code |
-||:-:|:--:|::|:-:|::|
-| [Authorization code flow](v2-oauth2-auth-code-flow.md) | | x | x | x | x |
-| [Client credentials](v2-oauth2-client-creds-grant-flow.md) | | | x (app-only) | | |
-| [Device code flow](v2-oauth2-device-code.md) | | x | x | x | |
-| [Implicit flow](v2-oauth2-implicit-grant-flow.md) | | x | x | | |
-| [On-behalf-of flow](v2-oauth2-on-behalf-of-flow.md) | access token | x | x | x | |
-| [Username/password](v2-oauth-ropc.md) (ROPC) | username & password | x | x | x | |
-| [Hybrid OIDC flow](v2-protocols-oidc.md#protocol-diagram-access-token-acquisition) | | x | | | x |
-| [Refresh token redemption](v2-oauth2-auth-code-flow.md#refresh-the-access-token) | refresh token | x | x | x | |
+| Authentication flow or action | Requires | ID token | Access token | Refresh token | Authorization code |
+||::|:--:|::|:-:|::|
+| [Authorization code flow](v2-oauth2-auth-code-flow.md) | | x | x | x | x |
+| [Client credentials](v2-oauth2-client-creds-grant-flow.md) | | | x (app-only) | | |
+| [Device code flow](v2-oauth2-device-code.md) | | x | x | x | |
+| [Implicit flow](v2-oauth2-implicit-grant-flow.md) | | x | x | | |
+| [On-behalf-of flow](v2-oauth2-on-behalf-of-flow.md) | access token | x | x | x | |
+| [Username/password](v2-oauth-ropc.md) (ROPC) | username, password | x | x | x | |
+| [Hybrid OIDC flow](v2-protocols-oidc.md#protocol-diagram-access-token-acquisition) | | x | | | x |
+| [Refresh token redemption](v2-oauth2-auth-code-flow.md#refresh-the-access-token) | refresh token | x | x | x | |
### Interactive and non-interactive authentication

Several of these flows support both interactive and non-interactive token acquisition.
- - **Interactive** means that the user can be prompted for input. For example, prompting the user to login, perform multi-factor authentication (MFA), or to grant additional consent to resources.
- - **Non-interactive**, or *silent*, authentication attempts to acquire a token in a way in which the login server *cannot* prompt the user for additional information.
+- **Interactive** - The user may be prompted for input by the authorization server. For example, to sign in, perform multi-factor authentication (MFA), or to grant consent to more resource access permissions.
+- **Non-interactive** - The user may _not_ be prompted for input. Also called "silent" token acquisition, the application tries to get a token by using a method in which the authorization server _may not_ prompt the user for input.
-Your MSAL-based application should first attempt to acquire a token *silently*, and then interactively only if the non-interactive method fails. For more information about this pattern, see [Acquire and cache tokens using the Microsoft Authentication Library (MSAL)](msal-acquire-cache-tokens.md).
+Your MSAL-based application should first try to acquire a token silently and fall back to the interactive method only if the non-interactive attempt fails. For more information about this pattern, see [Acquire and cache tokens using the Microsoft Authentication Library (MSAL)](msal-acquire-cache-tokens.md).
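A minimal sketch of that silent-first pattern with the `msal` package for Python (client ID and scope are placeholders):

```python
import msal

app = msal.PublicClientApplication(
    "11111111-1111-1111-1111-111111111111",  # placeholder client ID
    authority="https://login.microsoftonline.com/organizations",
)
scopes = ["User.Read"]

# 1. Try silently first, using a previously signed-in account from the token cache.
result = None
accounts = app.get_accounts()
if accounts:
    result = app.acquire_token_silent(scopes, account=accounts[0])

# 2. Fall back to an interactive prompt only if the silent attempt returned nothing.
if not result:
    result = app.acquire_token_interactive(scopes=scopes)
```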
## Authorization code
-The [OAuth 2 authorization code grant](v2-oauth2-auth-code-flow.md) can be used in apps that are installed on a device to gain access to protected resources like web APIs. This allows you to add sign-in and API access to your mobile and desktop apps.
+The [OAuth 2.0 authorization code grant](v2-oauth2-auth-code-flow.md) can be used by web apps, single-page apps (SPA), and native (mobile and desktop) apps to gain access to protected resources like web APIs.
-When users sign in to web applications (websites), the web application receives an authorization code. The authorization code is redeemed to acquire a token to call web APIs.
+When users sign in to web applications, the application receives an authorization code that it can redeem for an access token to call web APIs.
![Diagram of authorization code flow](media/msal-authentication-flows/authorization-code.png)

In the preceding diagram, the application:
-1. Requests an authorization code, which is redeemed for an access token.
-2. Uses the access token to call a web API.
+1. Requests an authorization code, which is redeemed for an access token.
+2. Uses the access token to call a web API such as Microsoft Graph.
-### Considerations
+### Constraints for authorization code
-- You can use the authorization code only once to redeem a token. Don't try to acquire a token multiple times with the same authorization code because it's explicitly prohibited by the protocol standard specification. If you redeem the code several times, either intentionally or because you're unaware that a framework also does it for you, you'll get the following error:
+- Single-page applications require Proof Key for Code Exchange (PKCE) when using the authorization code grant flow. PKCE is supported by MSAL.
+
+- The OAuth 2.0 specification requires you use an authorization code to redeem an access token only _once_.
+
+ If you attempt to acquire an access token multiple times with the same authorization code, an error similar to the following is returned by the Microsoft identity platform. Keep in mind that some libraries and frameworks request the authorization code for you automatically, and requesting a code manually in such cases will also result in this error.
`AADSTS70002: Error validating credentials. AADSTS54005: OAuth2 Authorization code was already redeemed, please retry with a new valid code or use an existing refresh token.`
The [OAuth 2 client credentials flow](v2-oauth2-client-creds-grant-flow.md) allows you to access web-hosted resources by using the identity of an application.
The client credentials grant flow permits a web service (a confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. In this scenario, the client is typically a middle-tier web service, a daemon service, or a website. For a higher level of assurance, the Microsoft identity platform also allows the calling service to use a certificate (instead of a shared secret) as a credential.
-> [!NOTE]
-> The confidential client flow isn't available on mobile platforms like UWP, iOS, and Android because they support only public client applications. Public client applications don't know how to prove the application's identity to the identity provider. A secure connection can be achieved on web app or web API back-ends by deploying a certificate.
-
### Application secrets

![Diagram of confidential client with password](media/msal-authentication-flows/confidential-client-password.png)
These client credentials need to be:
- Registered with Azure AD.
- Passed in when constructing the confidential client application object in your code, as in the sketch below.
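For example, a minimal sketch with the `msal` package for Python, with placeholder IDs and secret (a certificate can be used in place of the secret):

```python
import msal

app = msal.ConfidentialClientApplication(
    "11111111-1111-1111-1111-111111111111",  # placeholder client ID
    client_credential="<client-secret>",     # the secret registered with Azure AD
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

# App-only token: the /.default scope requests the application permissions
# that were granted to the app itself, with no signed-in user involved.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in result:
    raise RuntimeError(result.get("error_description"))
```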
+### Constraints for client credentials
+
+The confidential client flow is **unsupported** on mobile platforms like Android, iOS, or UWP. Mobile applications are considered public client applications that are incapable of guaranteeing the confidentiality of their credentials.
+
## Device code
-The [OAuth 2 device code flow](v2-oauth2-device-code.md) allows users to sign in to input-constrained devices like smart TVs, IoT devices, and printers. Interactive authentication with Azure AD requires a web browser. Where the device or operating system doesn't provide a web browser, the device code flow lets the user use another device like a computer or mobile phone to sign in interactively.
+The [OAuth 2 device code flow](v2-oauth2-device-code.md) allows users to sign in to input-constrained devices like smart TVs, IoT devices, and printers. Interactive authentication with Azure AD requires a web browser. Where the device or operating system doesn't provide a web browser, the device code flow allows the user to use another device like a computer or mobile phone to sign in interactively.
By using the device code flow, the application obtains tokens through a two-step process designed for these devices and operating systems. Examples of such applications include those running on IoT devices and command-line interface (CLI) tools.
In the preceding diagram:
1. Whenever user authentication is required, the app provides a code and asks the user to use another device like an internet-connected smartphone to visit a URL (for example, `https://microsoft.com/devicelogin`). The user is then prompted to enter the code and proceeds through a normal authentication experience, including consent prompts and [multi-factor authentication](../authentication/concept-mfa-howitworks.md) if necessary.
1. Upon successful authentication, the command-line app receives the required tokens through a back channel and uses them to perform the web API calls it needs.
-### Constraints
+### Constraints for device code
-- Device code flow is available only in public client applications.
-- The authority passed in when constructing the public client application must be one of the following:
- - Tenanted, in the form `https://login.microsoftonline.com/{tenant}/,` where `{tenant}` is either the GUID representing the tenant ID or a domain name associated with the tenant.
- - For work and school accounts in the form `https://login.microsoftonline.com/organizations/`.
+- The device code flow is available only for public client applications.
+- When you initialize a public client application in MSAL, use one of these authority formats:
+ - Tenanted: `https://login.microsoftonline.com/{tenant}/`, where `{tenant}` is either the GUID representing the tenant ID or a domain name associated with the tenant.
+ - Work and school accounts: `https://login.microsoftonline.com/organizations/`.
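A minimal sketch of the two-step process with the `msal` package for Python (placeholder client ID):

```python
import msal

app = msal.PublicClientApplication(
    "11111111-1111-1111-1111-111111111111",  # placeholder client ID
    authority="https://login.microsoftonline.com/organizations",
)

# Step 1: get a user code and verification URL to display on the constrained device.
flow = app.initiate_device_flow(scopes=["User.Read"])
if "user_code" not in flow:
    raise RuntimeError(f"Failed to start device flow: {flow}")
print(flow["message"])  # instructs the user to visit the verification URL and enter the code

# Step 2: poll until the user completes sign-in on their other device.
result = app.acquire_token_by_device_flow(flow)  # blocks while polling
```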
## Implicit grant
-The [OAuth 2 implicit grant](v2-oauth2-implicit-grant-flow.md) flow allows the app to get tokens from the Microsoft identity platform without performing a back-end server credential exchange. This flow allows the app to sign in the user, maintain a session, and get tokens for other web APIs, all within the client JavaScript code.
+The implicit grant has been replaced by the [authorization code flow with PKCE](scenario-spa-overview.md) as the preferred and more secure token grant flow for client-side single-page applications (SPAs). If you're building a SPA, use the authorization code flow with PKCE instead.
+
+Single-page web apps written in JavaScript (including frameworks like Angular, Vue.js, or React.js) are downloaded from the server and their code runs directly in the browser. Because their client-side code runs in the browser and not on a web server, they have different security characteristics than traditional server-side web applications. Prior to the availability of Proof Key for Code Exchange (PKCE) for the authorization code flow, the implicit grant flow was used by SPAs for improved responsiveness and efficiency in getting access tokens.
+
+The [OAuth 2 implicit grant flow](v2-oauth2-implicit-grant-flow.md) allows the app to get access tokens from the Microsoft identity platform without performing a back-end server credential exchange. The implicit grant flow allows an app to sign in the user, maintain a session, and get tokens for other web APIs from within the JavaScript code downloaded and run by the user-agent (typically a web browser).
![Diagram of implicit grant flow](media/msal-authentication-flows/implicit-grant.svg)
-Many modern web applications are built as client-side, single page-applications (SPA) written in JavaScript or an SPA framework such as Angular, Vue.js, and React.js. These applications run in a web browser, and have different authentication characteristics than traditional server-side web applications. The Microsoft identity platform enables single page applications to sign in users, and get tokens to access back-end services or web APIs, by using the implicit grant flow. The implicit flow allows the application to get ID tokens to represent the authenticated user, and also access tokens needed to call protected APIs.
+### Constraints for implicit grant
-This authentication flow doesn't include application scenarios that use cross-platform JavaScript frameworks like Electron or React-Native because they require further capabilities for interaction with the native platforms.
+The implicit grant flow doesn't include application scenarios that use cross-platform JavaScript frameworks like Electron or React Native. Cross-platform frameworks like these require further capabilities for interaction with the native desktop and mobile platforms on which they run.
Tokens issued via the implicit flow mode have a **length limitation** because they're returned to the browser by URL (where `response_mode` is either `query` or `fragment`). Some browsers limit the length of the URL in the browser bar and fail when it's too long. Thus, these implicit flow tokens don't contain `groups` or `wids` claims.
-## On-behalf-of
+## On-behalf-of (OBO)
The [OAuth 2 on-behalf-of (OBO) authentication flow](v2-oauth2-on-behalf-of-flow.md) is used when an application invokes a service or web API that in turn needs to call another service or web API. The idea is to propagate the delegated user identity and permissions through the request chain. For the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform *on behalf of* the user.
In the preceding diagram:
3. When the client calls the web API, the web API requests another token on-behalf-of the user.
4. The protected web API uses this token to call a downstream web API on-behalf-of the user. The web API can also later request tokens for other downstream APIs (but still on behalf of the same user).
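In a middle-tier web API, steps 3 and 4 might look like the following sketch with the `msal` package for Python. The client ID, secret, downstream scope, and incoming token are all placeholder values; in a real API the incoming token comes from the caller's Authorization header.

```python
import msal

app = msal.ConfidentialClientApplication(
    "11111111-1111-1111-1111-111111111111",  # placeholder middle-tier API client ID
    client_credential="<client-secret>",     # the API's own credential
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

incoming_access_token = "<token-from-Authorization-header>"  # sent by the client (placeholder)

# Exchange the caller's token (the user assertion) for a token to the downstream API.
result = app.acquire_token_on_behalf_of(
    user_assertion=incoming_access_token,
    scopes=["api://downstream-api/.default"],  # placeholder downstream scope
)
```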
-## Username/password
+## Username/password (ROPC)
+
+> [!WARNING]
+> The resource owner password credentials (ROPC) flow is NOT recommended. ROPC requires a high degree of trust and credential exposure. _Resort to using ROPC only if a more secure flow can't be used._ For more information, see [What's the solution to the growing problem of passwords?](https://news.microsoft.com/features/whats-solution-growing-problem-passwords-says-microsoft/).
The [OAuth 2 resource owner password credentials](v2-oauth-ropc.md) (ROPC) grant allows an application to sign in the user by directly handling their password. In your desktop application, you can use the username/password flow to acquire a token silently. No UI is required when using the application.
+Some application scenarios like DevOps might find ROPC useful, but you should avoid it in any application in which you provide an interactive UI for user sign-in.
+
![Diagram of the username/password flow](media/msal-authentication-flows/username-password.png)
In the preceding diagram, the application:
1. Acquires a token by sending the username and password to the identity provider.
2. Calls a web API by using the token.
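For completeness, here's what those two steps look like with the `msal` package for Python. All values are placeholders, and the warnings above apply: avoid this flow whenever a more secure one is available.

```python
import msal

app = msal.PublicClientApplication(
    "11111111-1111-1111-1111-111111111111",  # placeholder client ID
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

# WARNING: ROPC handles the user's password directly and is NOT recommended.
result = app.acquire_token_by_username_password(
    username="user@contoso.com",  # placeholder work/school account
    password="<password>",        # never hard-code real credentials
    scopes=["User.Read"],
)
```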
-> [!WARNING]
-> This flow isn't recommended. It requires a high degree of trust and credential exposure. You should use this flow *only* when more secure flows can't be used. For more information, see [What's the solution to the growing problem of passwords?](https://news.microsoft.com/features/whats-solution-growing-problem-passwords-says-microsoft/).
-
-The preferred flow for acquiring a token silently on Windows domain-joined machines is [Integrated Windows authentication](#integrated-windows-authentication). In other cases, use the [device code flow](#device-code).
+To acquire a token silently on Windows domain-joined machines, we recommend [integrated Windows authentication (IWA)](#integrated-windows-authentication-iwa) instead of ROPC. For other scenarios, use the [device code flow](#device-code).
-Although the username/password flow might be useful in some scenarios like DevOps, avoid it if you want to use username/password in interactive scenarios where you provide your own UI.
+### Constraints for ROPC
-By using username/password:
+The following constraints apply to applications using the ROPC flow:
-- Users that need to perform multi-factor authentication won't be able to sign in because there is no interaction.
-- Users won't be able to do single sign-on.
+- Single sign-on is **unsupported**.
+- Multi-factor authentication (MFA) is **unsupported**.
+ - Check with your tenant admin before using this flow - MFA is a commonly used feature.
+- Conditional Access is **unsupported**.
+- ROPC works _only_ for work and school accounts.
+- Personal Microsoft accounts (MSA) are **unsupported** by ROPC.
+- ROPC is **supported** in .NET desktop and .NET Core applications.
+- ROPC is **unsupported** in Universal Windows Platform (UWP) applications.
+- ROPC in Azure AD B2C is supported _only_ for local accounts.
+ - For information about ROPC in MSAL.NET and Azure AD B2C, see [Using ROPC with Azure AD B2C](msal-net-aad-b2c-considerations.md#resource-owner-password-credentials-ropc).
-### Constraints
+## Integrated Windows authentication (IWA)
-Apart from the [integrated Windows authentication constraints](#integrated-windows-authentication), the following constraints also apply:
-
-- The username/password flow isn't compatible with Conditional Access and multi-factor authentication. As a consequence, if your app runs in an Azure AD tenant where the tenant admin requires multi-factor authentication, you can't use this flow. Many organizations do that.
-- ROPC works only for work and school accounts. You can't use ROPC for Microsoft accounts (MSA).
-- The flow is available on .NET desktop and .NET Core, but not on Universal Windows Platform.
-- In Azure AD B2C, the ROPC flow works only for local accounts. For information about ROPC in MSAL.NET and Azure AD B2C, see [Using ROPC with Azure AD B2C](msal-net-aad-b2c-considerations.md#resource-owner-password-credentials-ropc).
-
-## Integrated Windows authentication
-
-MSAL supports integrated Windows authentication (IWA) for desktop and mobile applications that run on a domain-joined or Azure AD-joined Windows computer. Using IWA, these applications can acquire a token silently without requiring UI interaction by user.
+MSAL supports integrated Windows authentication (IWA) for desktop and mobile applications that run on domain-joined or Azure AD-joined Windows computers. By using IWA, these applications acquire a token silently without requiring UI interaction by the user.
![Diagram of integrated Windows authentication](media/msal-authentication-flows/integrated-windows-authentication.png)
In the preceding diagram, the application:
1. Acquires a token by using integrated Windows authentication.
2. Uses the token to make requests of the resource.
-### Constraints
+### Constraints for IWA
+
+**Compatibility**
+
+Integrated Windows authentication (IWA) is enabled for .NET desktop, .NET Core, and Universal Windows Platform (UWP) apps.
+
+IWA supports AD FS-federated users *only* - users created in Active Directory and backed by Azure AD. Users created directly in Azure AD without Active Directory backing (managed users) can't use this authentication flow.
-Integrated Windows authentication (IWA) supports federated users *only* - users created in Active Directory and backed by Azure AD. Users created directly in Azure AD without Active Directory backing (managed users) can't use this authentication flow. This limitation doesn't affect the [username/password flow](#usernamepassword).
+**Multi-factor authentication (MFA)**
-IWA is for .NET Framework, .NET Core, and Universal Windows Platform applications.
+IWA's non-interactive (silent) authentication can fail if MFA is enabled in the Azure AD tenant and an MFA challenge is issued by Azure AD. If IWA fails, you should fall back to an [interactive method of authentication](#interactive-and-non-interactive-authentication) as described earlier.
-IWA doesn't bypass multi-factor authentication. If multi-factor authentication is configured, IWA might fail if a multi-factor authentication challenge is required. Multi-factor authentication requires user interaction.
+Azure AD uses AI to determine when two-factor authentication is required. Two-factor authentication is typically required when a user signs in from a different country/region, when connected to a corporate network without using a VPN, and sometimes when they _are_ connected through a VPN. Because MFA's configuration and challenge frequency may be outside of your control as the developer, your application should gracefully handle a failure of IWA's silent token acquisition.
-You don't control when the identity provider requests two-factor authentication to be performed. The tenant admin does. Typically, two-factor authentication is required when you sign in from a different country/region, when you're not connected via VPN to a corporate network, and sometimes even when you are connected via VPN. Azure AD uses AI to continuously learn if two-factor authentication is required. If IWA fails, you should fall back to an [interactive user prompt](#interactive-and-non-interactive-authentication).
+**Authority URI restrictions**
The authority passed in when constructing the public client application must be one of: -- Tenanted, in the form `https://login.microsoftonline.com/{tenant}/,` where `{tenant}` is either the GUID representing the tenant ID or a domain name associated with the tenant.-- For any work and school accounts (`https://login.microsoftonline.com/organizations/`). Microsoft personal accounts (MSA) are unsupported; you can't use `/common` or `/consumers` tenants.
+- `https://login.microsoftonline.com/{tenant}/` - This authority indicates a single-tenant application whose sign-in audience is restricted to the users in the specified Azure AD tenant. The `{tenant}` value can be the tenant ID in GUID form or the domain name associated with the tenant.
+- `https://login.microsoftonline.com/organizations/` - This authority indicates a multi-tenant application whose sign-in audience is users in any Azure AD tenant.
-Because IWA is a silent flow, one of the following must be true:
+Authority values must NOT contain `/common` or `/consumers` because personal Microsoft accounts (MSA) are unsupported by IWA.
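As an illustration of these constraints, here is a minimal MSAL.NET sketch (the client ID and scopes are placeholders) that acquires a token silently with IWA against the `/organizations/` authority and falls back to interactive authentication if, for example, an MFA challenge is issued. The fallback assumes the app registration has a redirect URI suitable for interactive sign-in.

```csharp
using Microsoft.Identity.Client;

string clientId = "11111111-1111-1111-1111-111111111111"; // placeholder
string[] scopes = { "user.read" };

IPublicClientApplication app = PublicClientApplicationBuilder
    .Create(clientId)
    // /common and /consumers aren't valid for IWA; use a tenant or /organizations/.
    .WithAuthority("https://login.microsoftonline.com/organizations/")
    .Build();

AuthenticationResult result;
try
{
    // Silent: works only for federated users on a domain-joined or
    // Azure AD-joined machine, with consent granted beforehand.
    result = await app.AcquireTokenByIntegratedWindowsAuth(scopes)
        .ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // For example, an MFA challenge was issued; fall back to interactive auth.
    result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
}
```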
+
+**Consent requirements**
+
+Because IWA is a silent flow:
- The user of your application must have previously consented to use the application.+
+ _OR_
- The tenant admin must have previously consented for all users in the tenant to use the application.
-This means that one of the following is true:
+To satisfy either requirement, one of these operations must have been completed:
-- You as a developer have selected **Grant** in the Azure portal for yourself.-- A tenant admin has selected **Grant/revoke admin consent for {tenant domain}** in the **API permissions** tab of the app registration in the Azure portal (see [Add permissions to access your web API](quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api)).
+- You as the application developer have selected **Grant** in the Azure portal for yourself.
+- A tenant admin has selected **Grant/revoke admin consent for {tenant domain}** in the **API permissions** tab of the app registration in the Azure portal; see [Add permissions to access your web API](quickstart-configure-app-access-web-apis.md#add-permissions-to-access-your-web-api).
- You've provided a way for users to consent to the application; see [Requesting individual user consent](v2-permissions-and-consent.md#requesting-individual-user-consent).
- You've provided a way for the tenant admin to consent for the application; see [admin consent](v2-permissions-and-consent.md#requesting-consent-for-an-entire-tenant).
-The IWA flow is enabled for .NET desktop, .NET Core, and Windows Universal Platform apps.
-
-For more information on consent, see [v2.0 permissions and consent](v2-permissions-and-consent.md).
+For more information on consent, see [Permissions and consent](v2-permissions-and-consent.md).
## Next steps
-Now that you've reviewed authentication flows supported by the Microsoft Authentication Library (MSAL), learn about acquiring and caching the tokens used in these flows:
+Now that you've reviewed the authentication flows supported by MSAL, learn about acquiring and caching the tokens used in these flows:
[Acquire and cache tokens using the Microsoft Authentication Library (MSAL)](msal-acquire-cache-tokens.md)
active-directory Msal Net Adfs Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-adfs-support.md
Previously updated : 07/16/2019 Last updated : 03/22/2022
# Active Directory Federation Services support in MSAL.NET
+
Active Directory Federation Services (AD FS) in Windows Server enables you to add OpenID Connect and OAuth 2.0 based authentication and authorization to applications you're developing. Those applications can then authenticate users directly against AD FS. For more information, read [AD FS Scenarios for Developers](/windows-server/identity/ad-fs/overview/ad-fs-openid-connect-oauth-flows-scenarios). Microsoft Authentication Library for .NET (MSAL.NET) supports two scenarios for authenticating against AD FS:
Microsoft Authentication Library for .NET (MSAL.NET) supports two scenarios for
- MSAL.NET talks to Azure Active Directory, which itself is *federated* with AD FS.
- MSAL.NET talks **directly** to an AD FS authority. This is only supported from AD FS 2019 and above. One of the scenarios this highlights is [Azure Stack](https://azure.microsoft.com/overview/azure-stack/) support.
-
## MSAL connects to Azure AD, which is federated with AD FS
+
MSAL.NET supports connecting to Azure AD, which signs in managed users (users managed in Azure AD) or federated users (users managed by another identity provider such as AD FS). MSAL.NET doesn't know that users are federated; as far as it's concerned, it talks to Azure AD. The [authority](msal-client-application-configuration.md#authority) you use in this case is the usual authority (authority host name + tenant, common, or organizations).

### Acquiring a token interactively
+
When you call the `AcquireTokenInteractive` method, the user experience is typically:

1. The user enters their account ID.
When you call the `AcquireTokenInteractive` method, the user experience is typic
Supported AD FS versions in this federated scenario are AD FS v2, AD FS v3 (Windows Server 2012 R2), and AD FS v4 (AD FS 2016).

### Acquiring a token using AcquireTokenByIntegratedAuthentication or AcquireTokenByUsernamePassword
-When acquiring a token using the `AcquireTokenByIntegratedAuthentication` or `AcquireTokenByUsernamePassword` methods, MSAL.NET gets the identity provider to contact based on the username. MSAL.NET receives a [SAML 1.1 token](reference-saml-tokens.md) after contacting the identity provider. MSAL.NET then provides the SAML token to Azure AD as a user assertion (similar to the [on-behalf-of flow](msal-authentication-flows.md#on-behalf-of)) to get back a JWT.
+
+When acquiring a token using the `AcquireTokenByIntegratedAuthentication` or `AcquireTokenByUsernamePassword` methods, MSAL.NET gets the identity provider to contact based on the username. MSAL.NET receives a [SAML 1.1 token](reference-saml-tokens.md) after contacting the identity provider. MSAL.NET then provides the SAML token to Azure AD as a user assertion (similar to the [on-behalf-of flow](msal-authentication-flows.md#on-behalf-of-obo)) to get back a JWT.
## MSAL connects directly to AD FS
-MSAL.NET supports connecting to AD FS 2019, which is Open ID Connect compliant and understands PKCE and scopes. This support requires that a service pack [KB 4490481](https://support.microsoft.com/en-us/help/4490481/windows-10-update-kb4490481) is applied to Windows Server. When connecting directly to AD FS, the authority you'll want to use to build your application is similar to `https://mysite.contoso.com/adfs/`.
+
+MSAL.NET supports connecting to AD FS 2019, which is OpenID Connect compliant and understands PKCE and scopes. This support requires that a service pack [KB 4490481](https://support.microsoft.com/help/4490481/windows-10-update-kb4490481) is applied to Windows Server. When connecting directly to AD FS, the authority you'll want to use to build your application is similar to `https://mysite.contoso.com/adfs/`.
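For illustration, building a public client application that talks directly to an AD FS 2019 authority might look like the following sketch; the client ID is a placeholder, and `WithAdfsAuthority` is used instead of `WithAuthority` to tell MSAL.NET the authority is AD FS rather than Azure AD.

```csharp
using Microsoft.Identity.Client;

IPublicClientApplication app = PublicClientApplicationBuilder
    .Create("11111111-1111-1111-1111-111111111111") // placeholder client ID
    .WithAdfsAuthority("https://mysite.contoso.com/adfs/")
    .Build();

// Interactive sign-in directly against the AD FS 2019 authority.
AuthenticationResult result = await app
    .AcquireTokenInteractive(new[] { "openid", "profile" })
    .ExecuteAsync();
```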
Currently, there are no plans to support a direct connection to:
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
A service-to-service request for a SAML assertion contains the following paramet
| Parameter | Type | Description |
| --- | --- | --- |
-| grant_type |required | The type of the token request. For a request that uses a JWT, the value must be **urn:ietf:params:oauth:grant-type:jwt-bearer**. |
+| grant_type |required | The type of the token request. For a request that uses a JWT, the value must be `urn:ietf:params:oauth:grant-type:jwt-bearer`. |
| assertion |required | The value of the access token used in the request.|
| client_id |required | The app ID assigned to the calling service during registration with Azure AD. To find the app ID in the Azure portal, select **Active Directory**, choose the directory, and then select the application name. |
| client_secret |required | The key registered for the calling service in Azure AD. This value should have been noted at the time of registration. The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1), is also supported. |
-| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). For example, 'https://testapp.contoso.com/user_impersonation openid' |
-| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be **on_behalf_of**. |
-| requested_token_type | required | Specifies the type of token requested. The value can be **urn:ietf:params:oauth:token-type:saml2** or **urn:ietf:params:oauth:token-type:saml1** depending on the requirements of the accessed resource. |
+| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). SAML itself doesn't have a concept of scopes, but the scope is used here to identify the target SAML application for which you want to receive a token. For this OBO flow, the scope value must always be the SAML Entity ID with `/.default` appended. For example, if the SAML application's Entity ID is `https://testapp.contoso.com`, the requested scope is `https://testapp.contoso.com/.default`. If the Entity ID doesn't start with a URI scheme such as `https:`, Azure AD prefixes the Entity ID with `spn:`, and you must request the scope `spn:<EntityID>/.default` (for example, `spn:testapp/.default` when the Entity ID is `testapp`). The scope value you request determines the resulting `Audience` element in the SAML token, which may be important to the SAML application receiving the token. |
+| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be `on_behalf_of`. |
+| requested_token_type | required | Specifies the type of token requested. The value can be `urn:ietf:params:oauth:token-type:saml2` or `urn:ietf:params:oauth:token-type:saml1` depending on the requirements of the accessed resource. |
The response contains a SAML token encoded in UTF8 and Base64url.

- **SubjectConfirmationData for a SAML assertion sourced from an OBO call**: If the target application requires a recipient value in **SubjectConfirmationData**, then the value must be a non-wildcard Reply URL in the resource application configuration.
- **The SubjectConfirmationData node**: The node can't contain an **InResponseTo** attribute since it's not part of a SAML response. The application receiving the SAML token must be able to accept the SAML assertion without an **InResponseTo** attribute.
-
-- **Consent**: Consent must have been granted to receive a SAML token containing user data on an OAuth flow. For information on permissions and obtaining administrator consent, see [Permissions and consent in the Azure Active Directory v1.0 endpoint](../azuread-dev/v1-permissions-consent.md).
+- **API permissions**: You have to [add the necessary API permissions](quickstart-configure-app-access-web-apis.md) on the middle-tier application to allow access to the SAML application, so that it can request a token for the `/.default` scope of the SAML application.
+- **Consent**: Consent must have been granted to receive a SAML token containing user data on an OAuth flow. For information, see [Gaining consent for the middle-tier application](#gaining-consent-for-the-middle-tier-application) below.
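As a sketch of the request shape only (the tenant, client ID, secret, incoming token, and Entity ID scope are hypothetical placeholders), a middle-tier .NET service might post these parameters to the v2.0 token endpoint like this:

```csharp
using System.Collections.Generic;
using System.Net.Http;

// Placeholder values for illustration only.
string incomingAccessToken = "<access-token-sent-by-the-client>";
string clientSecret = "<middle-tier-client-secret>";

var form = new Dictionary<string, string>
{
    ["grant_type"] = "urn:ietf:params:oauth:grant-type:jwt-bearer",
    ["assertion"] = incomingAccessToken,
    ["client_id"] = "22222222-2222-2222-2222-222222222222",
    ["client_secret"] = clientSecret,
    ["scope"] = "https://testapp.contoso.com/.default",
    ["requested_token_use"] = "on_behalf_of",
    ["requested_token_type"] = "urn:ietf:params:oauth:token-type:saml2"
};

using var http = new HttpClient();
HttpResponseMessage response = await http.PostAsync(
    "https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/token",
    new FormUrlEncodedContent(form));

// On success, the JSON response body carries the Base64url-encoded SAML assertion.
string json = await response.Content.ReadAsStringAsync();
```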
### Response with SAML assertion
The response contains a SAML token encoded in UTF8 and Base64url.
Depending on the architecture or usage of your application, you may consider different strategies for ensuring that the OBO flow is successful. In all cases, the ultimate goal is to ensure proper consent is given so that the client app can call the middle-tier app, and the middle tier app has permission to call the back-end resource.

> [!NOTE]
-> Previously the Microsoft account system (personal accounts) did not support the "Known client application" field, nor could it show combined consent. This has been added and all apps in the Microsoft identity platform can use the known client application approach for getting consent for OBO calls.
+> Previously the Microsoft account system (personal accounts) did not support the "known client applications" field, nor could it show combined consent. This has been added and all apps in the Microsoft identity platform can use the known client application approach for getting consent for OBO calls.
### .default and combined consent
-The middle tier application adds the client to the known client applications list in its manifest. If a consent prompt is triggered by the client, the consent flow will be both for itself and the middle tier application. On the Microsoft identity platform, this is done using the [`.default` scope](v2-permissions-and-consent.md#the-default-scope). When triggering a consent screen using known client applications and `.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
+The middle tier application adds the client to the [known client applications list](reference-app-manifest.md#knownclientapplications-attribute) (`knownClientApplications`) in its manifest. If a consent prompt is triggered by the client, the consent flow will be both for itself and the middle tier application. On the Microsoft identity platform, this is done using the [`.default` scope](v2-permissions-and-consent.md#the-default-scope). When triggering a consent screen using known client applications and `.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
The resource service (API) identified in the request should be the API for which the client application is requesting an access token as a result of the user's sign-in. For example, `scope=openid https://middle-tier-api.example.com/.default` (to request an access token for the middle tier API), or `scope=openid offline_access .default` (when a resource is not identified, it defaults to Microsoft Graph).
Regardless of which API is identified in the authorization request, the consent
### Pre-authorized applications
-Resources can indicate that a given application always has permission to receive certain scopes. This is primarily useful to make connections between a front-end client and a back-end resource more seamless. A resource can declare multiple pre-authorized applications - any such application can request these permissions in an OBO flow and receive them without the user providing consent.
+Resources can indicate that a given application always has permission to receive certain scopes. This is primarily useful to make connections between a front-end client and a back-end resource more seamless. A resource can [declare multiple pre-authorized applications](reference-app-manifest.md#preauthorizedapplications-attribute) (`preAuthorizedApplications`) in its manifest - any such application can request these permissions in an OBO flow and receive them without the user providing consent.
### Admin consent
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
post_logout_redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
| Parameter | Condition | Description |
| --- | --- | --- |
| `post_logout_redirect_uri` | Recommended | The URL that the user is redirected to after successfully signing out. If the parameter isn't included, the user is shown a generic message that's generated by the Microsoft identity platform. This URL must match one of the redirect URIs registered for your application in the app registration portal. |
+| `logout_hint` | Optional | Enables sign-out to occur without prompting the user to select an account. To use `logout_hint`, enable the `login_hint` [optional claim](active-directory-optional-claims.md) in your client application and use the value of the `login_hint` optional claim as the `logout_hint` parameter. Do not use UPNs or phone numbers as the value of the `logout_hint` parameter. |
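For illustration only, a sign-out request that carries `logout_hint` might look like the following; the tenant and hint value are placeholders, and the hint comes from the `login_hint` claim your app received at sign-in:

```
https://login.microsoftonline.com/common/oauth2/v2.0/logout?
post_logout_redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&logout_hint=<login_hint-claim-value>
```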
## Single sign-out
active-directory Complete Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/complete-access-review.md
To see the status and stage of a multi-stage access review:
1. Once you are on the results page, under **Status** it will tell you which stage the multi-stage review is in. The next stage of the review won't become active until the duration specified during the access review setup has passed.
1. If a decision has been made, but the review duration for this stage has not expired yet, you can select the **Stop current stage** button on the results page. This will trigger the next stage of review.
-
+
## Retrieve the results

To view the results for a review, click the **Results** page. To view just a user's access, in the Search box, type the display name or user principal name of a user whose access was reviewed.
Manually or automatically applying results doesn't have an effect on a group tha
On review creation, the creator can choose between two options for denied guest users in an access review.

- Denied guest users can have their access to the resource removed. This is the default.
- The denied guest user can be blocked from signing in for 30 days, then deleted from the tenant. During the 30-day period, an administrator can restore the guest user's access to the tenant. After the 30-day period is completed, if the guest user has not had access to the resource granted to them again, they will be removed from the tenant permanently. In addition, using the Azure Active Directory portal, a Global Administrator can explicitly [permanently delete a recently deleted user](../fundamentals/active-directory-users-restore.md) before that time period is reached. Once a user has been permanently deleted, the data about that guest user will be removed from active access reviews. Audit information about deleted users remains in the audit log.
+
+### Actions taken on denied B2B direct connect users
+Denied B2B direct connect users and teams will lose access to all shared channels in the Team.
## Next steps
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
A multi-stage review allows the administrator to define two or three sets of rev
1. Continue on to the **settings tab** and finish the rest of the settings and create the review. Follow the instructions in [Next: Settings](#next-settings).
+## Include B2B direct connect users and teams accessing Teams Shared Channels in access reviews (preview)
+
+You can create access reviews for B2B direct connect users via shared channels in Microsoft Teams. As you collaborate externally, you can use Azure AD access reviews to make sure external access to shared channels stays current. To learn more about Teams Shared Channels and B2B direct connect users, read the [B2B direct connect](../external-identities/b2b-direct-connect-overview.md) article.
+
+When you create an access review on a Team with shared channels, your reviewers can review continued need for access of those external users and Teams in the shared channels. External users in the shared channels are called B2B direct connect users. You can review access of B2B direct connect users, other supported B2B collaboration users, and non-B2B internal users in the same review.
+
+>[!NOTE]
+> Currently, B2B direct connect users and teams are only included in single-stage reviews. If multi-stage reviews are enabled, the direct connect users and teams won't be included in the access review.
+
+B2B direct connect users and teams are included in access reviews of the Teams-enabled Microsoft 365 group that the shared channels are a part of. To create the review, you must be a:
+- Global Administrator
+- User administrator
+- Identity Governance Administrator
+
+Use the following instructions to create an access review on a team with shared channels:
+
+1. Sign in to the Azure portal as a Global Administrator, User Administrator, or Identity Governance Administrator.
+
+1. Open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
+
+1. On the left menu, select **Access reviews**.
+
+1. Select **+ New access review**.
+
+1. Select **Teams + Groups** and then click **Select teams + groups** to set the **Review scope**. B2B direct connect users and teams are not included in reviews of **All Microsoft 365 groups with guest users**.
+
+1. Select a Team that has one or more shared channels shared with B2B direct connect users or Teams.
+
+1. Set the **Scope**.
+
+   ![Screenshot that shows setting the review scope for a shared channels review.](./media/create-access-review/create-shared-channels-review.png)
+
+ - Choose **All users** to include:
+ - All internal users
+ - B2B collaboration users that are members of the Team
+ - B2B direct connect users
+ - Teams that access shared channels
+   - Or, choose **Guest users only** to include only B2B direct connect users and Teams, and B2B collaboration users.
+
+1. Continue on to the **Reviews** tab. Select a reviewer to complete the review, then specify the **Duration** and **Review recurrence**.
+
+ > [!NOTE]
+   > If you set **Select reviewers** to **Users review their own access** or **Managers of users**, B2B direct connect users and Teams won't be able to review their own access in your tenant. The owner of the Team under review will get an email that asks the owner to review the B2B direct connect users and Teams. If you select **Managers of users**, a selected fallback reviewer will review any user without a manager in the home tenant. This includes B2B direct connect users and Teams without a manager.
+
+1. Go on to the **Settings** tab and configure additional settings. Then go to the **Review and Create** tab to start your access review. For more detailed information about creating a review and configuration settings, see our [Create a single-stage access review](#create-a-single-stage-access-review).
+
## Allow group owners to create and manage access reviews of their groups (preview)

The prerequisite role is a Global or User administrator.
active-directory Perform Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/perform-access-review.md
Approve or deny access as outlined in [Review access for one or more users](#rev
> [!NOTE]
> The next stage of the review won't become active until the duration specified during the access review setup has passed. If the administrator believes a stage is done but the review duration for this stage has not expired yet, they can use the **Stop current stage** button in the overview of the access review in the Azure AD portal. This will close the active stage and start the next stage.
+### Review access for B2B direct connect users in Teams Shared Channels and Microsoft 365 groups (preview)
+
+To review access of B2B direct connect users, use the following instructions:
+
+1. As the reviewer, you should receive an email that requests you to review access for the team or group. Click the link in the email, or navigate directly to https://myaccess.microsoft.com/.
+
+1. Follow the instructions in [Review access for one or more users](#review-access-for-one-or-more-users) to make decisions to approve or deny the users access to the Teams.
+
+> [!NOTE]
+> Unlike internal users and B2B collaboration users, B2B direct connect users and Teams **don't** have recommendations based on last sign-in activity to help you make decisions when you perform the review.
+
+If a Team you review has shared channels, all B2B direct connect users and teams that access those shared channels are part of the review. This includes B2B collaboration users and internal users. When a B2B direct connect user or team is denied access in an access review, the user will lose access to every shared channel in the Team. To learn more about B2B direct connect users, read [B2B direct connect](../external-identities/b2b-direct-connect-overview.md).
+
## If no action is taken on access review

When the access review is set up, the administrator has the option to use advanced settings to determine what will happen in the event a reviewer doesn't respond to an access review request.
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
Title: 'What is Azure AD Connect V2.0? | Microsoft Docs'
+ Title: 'What is Azure AD Connect v2.0? | Microsoft Docs'
description: Learn about the next version of Azure AD Connect.
Previously updated : 09/22/2021 Last updated : 03/22/2022
# Introduction to Azure AD Connect V2.0
-The first version of Azure Active Directory (Azure AD) Connect was released several years ago. Since then, we've scheduled several components of Azure AD Connect for deprecation and updated them to newer versions.
+Azure AD Connect was released several years ago. Since then, several of the components that Azure AD Connect uses have been scheduled for deprecation and updated to newer versions. Updating all of these components individually would take time and planning.
-Making updates to all these components individually requires a lot of time and planning. To address this drawback, we've bundled many of the newer components into a new, single release, so you have to update only once. This release, Azure AD Connect V2.0, is a new version of the same software you're already using to accomplish your hybrid identity goals, but it's updated with the latest foundational components.
+To address this, we have bundled many of these newer components into a new, single release, so you only have to update once. This release is Azure AD Connect V2.0, a new version of the same software you already use to accomplish your hybrid identity goals, built on the latest foundational components.
## What are the major changes?

### SQL Server 2019 LocalDB
-Earlier versions of Azure AD Connect shipped with the SQL Server 2012 LocalDB feature. V2.0 ships with SQL Server 2019 LocalDB, which promises enhanced stability and performance and has several security-related bug fixes. In July 2022, SQL Server 2012 will no longer have extended support. For more information, see [Microsoft SQL 2019](https://www.microsoft.com/sql-server/sql-server-2019).
+The previous versions of Azure AD Connect shipped with SQL Server 2012 LocalDB. V2.0 ships with SQL Server 2019 LocalDB, which promises enhanced stability and performance and has several security-related bug fixes. SQL Server 2012 will go out of extended support in July 2022. For more information, see [Microsoft SQL 2019](https://www.microsoft.com/sql-server/sql-server-2019).
### MSAL authentication library
-Earlier versions of Azure AD Connect shipped with the Azure Active Directory Authentication Library (ADAL). This library will be deprecated in June 2022. The V2.0 release ships with the newer Microsoft Authentication Library (MSAL). For more information, see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
+The previous versions of Azure AD Connect shipped with the ADAL authentication library. This library will be deprecated in June 2022. The V2 release ships with the newer MSAL library. For more information, see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
-### Visual C++ Redistributable 14 runtime
+### Visual C++ Redist 14
-SQL Server 2019 requires the Visual C++ Redistributable 14 runtime, so we've updated the C++ runtime library to use this version. The library is installed with the Azure AD Connect V2.0 package, so you don't have to take any action to get the C++ runtime update.
+SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we are updating the C++ runtime library to use this version. This will be installed with the Azure AD Connect V2 package, so you do not have to take any action for the C++ runtime update.
### TLS 1.2
-The Transport Layer Security (TLS) 1.0 and TLS 1.1 protocols are deemed unsafe and are being deprecated by Microsoft. Azure AD Connect V2.0 supports only TLS 1.2. All versions of Windows Server that are supported for Azure AD Connect V2.0 already default to TLS 1.2. If your server doesn't support TLS 1.2, you need to enable it before you can deploy Azure AD Connect V2.0. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
+TLS 1.0 and TLS 1.1 are protocols that are deemed unsafe and are being deprecated by Microsoft. This release of Azure AD Connect will only support TLS 1.2.
+All versions of Windows Server that are supported for Azure AD Connect V2 already default to TLS 1.2. If your server does not support TLS 1.2, you will need to enable it before you can deploy Azure AD Connect V2. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
-### All binaries signed with SHA-2
+### All binaries signed with SHA2
-We noticed that some components have Secure Hash Algorithm 1 (SHA-1) signed binaries. We no longer support SHA-1 for downloadable binaries, and we've upgraded all binaries to SHA-2 signing. The digital signatures are used to ensure that the updates come directly from Microsoft and aren't tampered with during delivery. Because of weaknesses in the SHA-1 algorithm, and to align with industry standards, we've changed the signing of Windows updates to use the more secure SHA-2 algorithm.ΓÇ»
+We noticed that some components had SHA1 signed binaries. We no longer support SHA1 for downloadable binaries and we upgraded all binaries to SHA2 signing. The digital signatures are used to ensure that the updates come directly from Microsoft and were not tampered with during delivery. Because of weaknesses in the SHA-1 algorithm and to align to industry standards, we have changed the signing of Windows updates to use the more secure SHA-2 algorithm.
-No action is required of you at this time.
+There is no action needed from your side.
-### Windows Server 2012 and 2012 R2 are no longer supported
+### Windows Server 2012 and Windows Server 2012 R2 are no longer supported
-SQL Server 2019 requires Windows Server 2016 or later as a server operating system. Because Azure AD Connect V2.0 contains SQL Server 2019 components, we no longer support earlier Windows Server versions.
+SQL Server 2019 requires Windows Server 2016 or newer as a server operating system. Since Azure AD Connect V2 contains SQL Server 2019 components, we can no longer support older Windows Server versions.
-You can't install this version on earlier Windows Server versions. We suggest that you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
+You cannot install this version on an older Windows Server version. We suggest you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
-For more information about upgrading from earlier Windows Server versions to Windows Server 2019, see [Install, upgrade, or migrate to Windows Server](/windows-server/get-started-19/install-upgrade-migrate-19).
+This [article](/windows-server/get-started-19/install-upgrade-migrate-19) describes the upgrade from older Windows Server versions to Windows Server 2019.
### PowerShell 5.0
-The Azure AD Connect V2.0 release contains several cmdlets that require PowerShell 5.0 or later, so this requirement is a new prerequisite for Azure AD Connect.
+This release of Azure AD Connect contains several cmdlets that require PowerShell 5.0, so this requirement is a new prerequisite for Azure AD Connect.
-For more information, see [Windows PowerShell System Requirements](/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements#windows-powershell-50).
+More details about PowerShell prerequisites can be found in [Windows PowerShell System Requirements](/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements#windows-powershell-50).
>[!NOTE]
- >PowerShell 5.0 is already part of Windows Server 2016, so you probably don't have to take action as long as you're using a recent Window Server version.
+ >PowerShell 5 is already part of Windows Server 2016, so you probably do not have to take action as long as you are on a recent Windows Server version.
## What else do I need to know?

+
**Why is this upgrade important for me?** </br>
-Next year, several components in your current Azure AD Connect server installations will go out of support. If you're using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. We recommend that you upgrade to this newer version as soon as possible.
+Next year, several of the components in your current Azure AD Connect server installations will go out of support. If you are using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. So we recommend that all customers upgrade to this newer version as soon as they can.
-This upgrade is especially important, because we've had to update our prerequisites for Azure AD Connect. You might need additional time to plan and update your servers to the newest versions of the prerequisites.
+This upgrade is especially important since we have had to update our prerequisites for Azure AD Connect, and you may need additional time to plan and update your servers to the newer versions of these prerequisites.
**Is there any new functionality I need to know about?** </br>
-No, this release doesn't contain new functionality. It contains only updates of some of the foundational components on Azure AD Connect. However, later releases of Azure AD Connect V2 might contain new functionality.
+No. The V2.0 release does not contain any new functionality. This release only contains updates of some of the foundational components on Azure AD Connect. However, later releases of Azure AD Connect V2 may contain new functionality.
-**Can I upgrade from earlier versions to V2.0?** </br>
-Yes, upgrading from earlier versions of Azure AD Connect to Azure AD Connect V2.0 is supported. To determine your best upgrade strategy, see [Azure AD Connect: Upgrade from a previous version to the latest](how-to-upgrade-previous-version.md).
+**Can I upgrade from any previous version to V2?** </br>
+Yes. Upgrades from any previous version of Azure AD Connect to Azure AD Connect V2 are supported. Please follow the guidance in [this article](how-to-upgrade-previous-version.md) to determine the best upgrade strategy for you.
-**Can I export the configuration of my current server and import it in Azure AD Connect V2.0?** </br>
-Yes, and it's a great way to migrate to Azure AD Connect V2.0, especially if you're also upgrading to a new operating system version. For more information, see [Import and export Azure AD Connect configuration settings](how-to-connect-import-export-config.md).
+**Can I export the configuration of my current server and import it in Azure AD Connect V2?** </br>
+Yes, you can do that, and it is a great way to migrate to Azure AD Connect V2, especially if you are also upgrading to a new operating system version. You can read more about the import/export configuration feature and how you can use it in this [article](how-to-connect-import-export-config.md).
-**I have enabled the auto-upgrade feature for Azure AD Connect. Will I get this new version automatically?** </br>
-Yes. Your Azure AD Connect server will be upgraded to the latest release if you've enabled the auto-upgrade feature. Note that we have not yet released an auto-upgrade version for Azure AD Connect.
+**I have enabled auto upgrade for Azure AD Connect. Will I get this new version automatically?** </br>
+Yes. Your Azure AD Connect server will be upgraded to the latest release if you have enabled the auto-upgrade feature. Note that we have not yet released an auto-upgrade version for Azure AD Connect.
-**I am not ready to upgrade yet. How much time do I have?** </br>
-All Azure AD Connect V1 versions will be retired on August 31, 2022, so you should upgrade to Azure AD Connect V2.0 as soon as you can. For the time being, we'll continue to support earlier versions of Azure AD Connect, but it might be difficult to provide a good support experience if some Azure AD Connect components are no longer supported. This upgrade is particularly important for ADAL and TLS 1.0/1.1, because these services might stop working unexpectedly after they're deprecated.
+**I am not ready to upgrade yet. How much time do I have?** </br>
+You should upgrade to Azure AD Connect V2 as soon as you can. **__All Azure AD Connect V1 versions will be retired on August 31, 2022.__** For the time being, we will continue to support older versions of Azure AD Connect, but it may prove difficult to provide a good support experience if some of the components in Azure AD Connect have dropped out of support. This upgrade is particularly important for ADAL and TLS 1.0/1.1, as these services might stop working unexpectedly after they are deprecated.
-**I use an external SQL database and do not use SQL Server 2012 LocalDB. Do I still have to upgrade?** </br>
-Yes, you need to upgrade to remain in a supported state, even if you don't use SQL Server 2012, because of the TLS 1.0/1.1 and ADAL deprecation. Note that you can still use SQL Server 2012 as an external SQL database with Azure AD Connect V2.0. The SQL Server 2019 drivers in Azure AD Connect V2.0 are compatible with SQL Server 2012.
+**I use an external SQL database and do not use SQL 2012 LocalDB. Do I still have to upgrade?** </br>
+Yes, you still need to upgrade to remain in a supported state even if you do not use SQL Server 2012, due to the TLS 1.0/1.1 and ADAL deprecation. Note that SQL Server 2012 can still be used as an external SQL database with Azure AD Connect V2; the SQL 2019 drivers in Azure AD Connect V2 are compatible with SQL Server 2012.
-**After I've upgraded my Azure AD Connect instance to V2.0, will the SQL Server 2012 components get uninstalled automatically?** </br>
-No, the upgrade to SQL Server 2019 doesn't remove any SQL Server 2012 components from your server. If you no longer need these components, follow the instructions in [Uninstall an existing instance of SQL Server](/sql/sql-server/install/uninstall-an-existing-instance-of-sql-server-setup).
+**After the upgrade of my Azure AD Connect instance to V2, will the SQL 2012 components automatically get uninstalled?** </br>
+No, the upgrade to SQL 2019 does not remove any SQL 2012 components from your server. If you no longer need these components, follow [the SQL Server uninstallation instructions](/sql/sql-server/install/uninstall-an-existing-instance-of-sql-server-setup).
-**What happens if I don't upgrade?** </br>
-Until a component that's being retired is actually deprecated, your current version of Azure AD Connect will keep working and you won't see any impact.
+**What happens if I do not upgrade?** </br>
+Until one of the components that are being retired is actually deprecated, you will not see any impact. Azure AD Connect will keep on working.
-We expect TLS 1.0/1.1 to be deprecated in January 2022. You need to make sure that you're no longer using these protocols by that date, because your service might stop working unexpectedly. You can manually configure your server for TLS 1.2, though, because that doesn't require an upgrade to Azure AD Connect V2.0.
+We expect TLS 1.0/1.1 to be deprecated in 2022, and you need to make sure you are not using these protocols by that date, as your service may stop working unexpectedly. You can manually configure your server for TLS 1.2, though, and that does not require an update of Azure AD Connect to V2.
-In June 2022, ADAL is planned to go out of support. At that time, authentication might stop working unexpectedly, and the Azure AD Connect server will no longer work properly. We strongly recommend that you upgrade to Azure AD Connect V2.0 before June 2022. You can't upgrade to a supported authentication library with your current Azure AD Connect version.
+In June 2022, ADAL is planned to go out of support. When ADAL goes out of support, authentication may stop working unexpectedly, and this will block the Azure AD Connect server from working properly. We strongly advise you to upgrade to Azure AD Connect V2 before June 2022. You cannot upgrade to a supported authentication library with your current Azure AD Connect version.
-**After I upgraded to Azure AD Connect V2.0, the ADSync PowerShell cmdlets don't work. What can I do?** </br>
-This is a known issue. To resolve it, restart your PowerShell session after you've installed or upgraded to Azure AD Connect V2.0, and then reimport the module. To import the module, do the following:
+**After upgrading to V2, the ADSync PowerShell cmdlets do not work. What can I do?** </br>
+This is a known issue. To resolve this, restart your PowerShell session after installing or upgrading to version 2 and then re-import the module. Use the following instructions to import the module.
- 1. Open Windows PowerShell with administrative privileges.
- 1. Run the following command:
+ 1. Open Windows PowerShell with administrative privileges.
+ 1. Type or copy and paste the following code:
```powershell
Import-module -Name "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync"
```
This is a known issue. To resolve it, restart your PowerShell session after you'
- [Hardware and prerequisites](how-to-connect-install-prerequisites.md)
- [Express settings](how-to-connect-install-express.md)
-- [Customized settings](how-to-connect-install-custom.md)
+- [Customized settings](how-to-connect-install-custom.md)
active-directory Recommendation Convert To Conditional Access Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-convert-to-conditional-access-mfa.md
+
+ Title: Azure Active Directory recommendation - Convert from per-user MFA to conditional access MFA in Azure AD | Microsoft Docs
+description: Learn why you should convert from per-user MFA to conditional access MFA in Azure AD
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid: 9b88958d-94a2-4f4b-a18c-616f0617a24e
++
+ na
++ Last updated : 03/21/2022++++++
+# Azure AD recommendation: Convert from per-user MFA to conditional access MFA
+
+[Azure AD recommendations](overview-recommendations.md) is a feature that provides you with personalized insights and actionable guidance to align your tenant with recommended best practices.
++
+This article covers the recommendation to convert from per-user MFA to conditional access MFA.
++
+## Description
+
+As an admin, you want to maintain security for your company's resources, but you also want your employees to easily access resources as needed.
+
+Multi-factor authentication (MFA) enables you to enhance the security posture of your tenant. In your tenant, you can enable MFA on a per-user basis. In this scenario, your users perform MFA each time they sign in (with some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on).
+
+While enabling MFA is a good practice, you can reduce the number of times your users are prompted for MFA by converting per-user MFA to MFA based on conditional access.
++
+## Logic
+
+This recommendation shows up if:
+
+- You have per-user MFA configured for at least 5% of your users
+- Conditional access policies are active for more than 1% of your users (indicating familiarity with CA policies).
+
+## Value
+
+This recommendation improves your users' productivity and minimizes sign-in time with fewer MFA prompts. Ensure that your most sensitive resources have the tightest controls, while your least sensitive resources are more freely accessible.
+
+## Action plan
+
+1. To get started, confirm that there's an existing conditional access policy with an MFA requirement. Ensure that you're covering all resources and users you would like to secure with MFA. Review your [conditional access policies](https://portal.azure.com/?Microsoft_AAD_IAM_enableAadvisorFeaturePreview=true&amp%3BMicrosoft_AAD_IAM_enableAadvisorFeature=true#blade/Microsoft_AAD_IAM/PoliciesTemplateBlade).
+
+2. To require MFA using a conditional access policy, follow the steps in [Secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md).
+
+3. Ensure that the per-user MFA configuration is turned off.
+
+
+
+## Next steps
+
+- [Tutorials for integrating SaaS applications with Azure Active Directory](../saas-apps/tutorial-list.md)
+- [Azure AD reports overview](overview-reports.md)
active-directory Adp Emea French Hr Portal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adp-emea-french-hr-portal-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with ADP EMEA French HR Portal mon.adp.com'
+description: Learn how to configure single sign-on between Azure Active Directory and ADP EMEA French HR Portal mon.adp.com.
++++++++ Last updated : 03/03/2022++++
+# Tutorial: Azure AD SSO integration with ADP EMEA French HR Portal mon.adp.com
+
+In this tutorial, you'll learn how to integrate ADP EMEA French HR Portal mon.adp.com with Azure Active Directory (Azure AD). When you integrate ADP EMEA French HR Portal mon.adp.com with Azure AD, you can:
+
+* Control in Azure AD who has access to ADP EMEA French HR Portal mon.adp.com.
+* Enable your users to be automatically signed-in to ADP EMEA French HR Portal mon.adp.com with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ADP EMEA French HR Portal mon.adp.com single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* ADP EMEA French HR Portal mon.adp.com supports **IDP** initiated SSO.
+
+## Add ADP EMEA French HR Portal mon.adp.com from the gallery
+
+To configure the integration of ADP EMEA French HR Portal mon.adp.com into Azure AD, you need to add ADP EMEA French HR Portal mon.adp.com from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **ADP EMEA French HR Portal mon.adp.com** in the search box.
+1. Select **ADP EMEA French HR Portal mon.adp.com** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for ADP EMEA French HR Portal mon.adp.com
+
+Configure and test Azure AD SSO with ADP EMEA French HR Portal mon.adp.com using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ADP EMEA French HR Portal mon.adp.com.
+
+To configure and test Azure AD SSO with ADP EMEA French HR Portal mon.adp.com, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ADP EMEA French HR Portal mon.adp.com SSO](#configure-adp-emea-french-hr-portal-monadpcom-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create ADP EMEA French HR Portal mon.adp.com test user](#create-adp-emea-french-hr-portal-monadpcom-test-user)** - to have a counterpart of B.Simon in ADP EMEA French HR Portal mon.adp.com that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **ADP EMEA French HR Portal mon.adp.com** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated in Azure. Save the configuration by clicking the **Save** button.
+
+1. The ADP EMEA French HR Portal mon.adp.com application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the ADP EMEA French HR Portal mon.adp.com application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | -- | |
+ | CompanyID | <given_by_adp> |
+ | ApplicationID | uxfr|
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificate-base64-download.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ADP EMEA French HR Portal mon.adp.com.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ADP EMEA French HR Portal mon.adp.com**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure ADP EMEA French HR Portal mon.adp.com SSO
+
+To configure single sign-on on **ADP EMEA French HR Portal mon.adp.com** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to the [ADP EMEA French HR Portal mon.adp.com support team](mailto:asp.projects@europe.adp.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create ADP EMEA French HR Portal mon.adp.com test user
+
+In this section, you create a user called Britta Simon in ADP EMEA French HR Portal mon.adp.com. Work with [ADP EMEA French HR Portal mon.adp.com support team](mailto:asp.projects@europe.adp.com) to add the users in the ADP EMEA French HR Portal mon.adp.com platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the ADP EMEA French HR Portal mon.adp.com for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the ADP EMEA French HR Portal mon.adp.com tile in the My Apps, you should be automatically signed in to the ADP EMEA French HR Portal mon.adp.com for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure ADP EMEA French HR Portal mon.adp.com you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Axway Csos Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/axway-csos-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Axway CSOS'
+description: Learn how to configure single sign-on between Azure Active Directory and Axway CSOS.
++++++++ Last updated : 03/09/2022++++
+# Tutorial: Azure AD SSO integration with Axway CSOS
+
+In this tutorial, you'll learn how to integrate Axway CSOS with Azure Active Directory (Azure AD). When you integrate Axway CSOS with Azure AD, you can:
+
+* Control in Azure AD who has access to Axway CSOS.
+* Enable your users to be automatically signed-in to Axway CSOS with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Axway CSOS single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Axway CSOS supports **SP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
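+
+At the protocol level, **SP** initiated SSO means the application builds a SAML AuthnRequest and sends the browser to Azure AD with it. The following Python sketch (standard library only) is illustrative rather than part of the setup steps: it shows the shape of such a redirect, reusing the Identifier and Reply URL patterns from this tutorial, with a placeholder for your tenant's Azure AD Login URL.
+
+```python
+import base64
+import datetime
+import urllib.parse
+import uuid
+import zlib
+
+# Illustrative values: the Identifier and Reply URL patterns from this
+# tutorial, plus a placeholder for your tenant's Azure AD Login URL.
+ISSUER = "https://www.axway.com"
+ACS_URL = "https://<host>:<port>/ui/core/SsoSamlAssertionConsumer"
+IDP_LOGIN_URL = "https://login.microsoftonline.com/<TenantID>/saml2"
+
+authn_request = (
+    f'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
+    f'ID="_{uuid.uuid4().hex}" Version="2.0" '
+    f'IssueInstant="{datetime.datetime.utcnow().isoformat()}Z" '
+    f'AssertionConsumerServiceURL="{ACS_URL}">'
+    f'<saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">{ISSUER}</saml:Issuer>'
+    f'</samlp:AuthnRequest>'
+)
+
+# HTTP-Redirect binding: raw DEFLATE, then base64, then URL-encode.
+compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
+deflated = compressor.compress(authn_request.encode()) + compressor.flush()
+saml_request = urllib.parse.quote(base64.b64encode(deflated))
+
+print(f"{IDP_LOGIN_URL}?SAMLRequest={saml_request}")
+```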
+
+## Add Axway CSOS from the gallery
+
+To configure the integration of Axway CSOS into Azure AD, you need to add Axway CSOS from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Axway CSOS** in the search box.
+1. Select **Axway CSOS** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Axway CSOS
+
+Configure and test Azure AD SSO with Axway CSOS using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Axway CSOS.
+
+To configure and test Azure AD SSO with Axway CSOS, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Axway CSOS SSO](#configure-axway-csos-sso)** - to configure the single sign-on settings on application side.
+    1. **[Create Axway CSOS test user](#create-axway-csos-test-user)** - to have a counterpart of B.Simon in Axway CSOS that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Axway CSOS** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the URL:
+ `https://www.axway.com`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<host>:<port>/ui/core/SsoSamlAssertionConsumer`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<host>:<port>/ui`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [Axway CSOS Client support team](mailto:support@axway.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Axway CSOS** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
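+
+If you prefer to script test-user creation instead of using the portal, the same user can be created through the Microsoft Graph REST API. The sketch below assumes the `requests` package and an access token with the `User.ReadWrite.All` permission; the token and password values are placeholders.
+
+```python
+import requests
+
+ACCESS_TOKEN = "<access-token-with-User.ReadWrite.All>"  # placeholder
+
+user = {
+    "accountEnabled": True,
+    "displayName": "B.Simon",
+    "mailNickname": "B.Simon",
+    "userPrincipalName": "B.Simon@contoso.com",
+    "passwordProfile": {
+        "forceChangePasswordNextSignIn": True,
+        "password": "<initial-password>",  # placeholder
+    },
+}
+
+# Create the user in Azure AD via Microsoft Graph.
+resp = requests.post(
+    "https://graph.microsoft.com/v1.0/users",
+    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
+    json=user,
+    timeout=10,
+)
+resp.raise_for_status()
+print("Created user id:", resp.json()["id"])
+```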
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Axway CSOS.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Axway CSOS**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Axway CSOS SSO
+
+To configure single sign-on on the **Axway CSOS** side, you need to send the downloaded **Certificate (Base64)** and the appropriate URLs copied from the Azure portal to the [Axway CSOS support team](mailto:support@axway.com). The support team uses these values to configure the SAML SSO connection properly on both sides.
+
+### Create Axway CSOS test user
+
+In this section, you create a user called Britta Simon in Axway CSOS. Work with the [Axway CSOS support team](mailto:support@axway.com) to add the users to the Axway CSOS platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the Axway CSOS Sign-on URL, where you can initiate the login flow.
+
+* Go to the Axway CSOS Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Axway CSOS tile in the My Apps, this redirects to the Axway CSOS Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Axway CSOS, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Blockbax Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blockbax-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Blockbax'
+description: Learn how to configure single sign-on between Azure Active Directory and Blockbax.
++++++++ Last updated : 03/09/2022++++
+# Tutorial: Azure AD SSO integration with Blockbax
+
+In this tutorial, you'll learn how to integrate Blockbax with Azure Active Directory (Azure AD). When you integrate Blockbax with Azure AD, you can:
+
+* Control in Azure AD who has access to Blockbax.
+* Enable your users to be automatically signed-in to Blockbax with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Blockbax single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Blockbax supports **SP** and **IDP** initiated SSO.
+
+* Blockbax supports **Just In Time** user provisioning.
+
+## Add Blockbax from the gallery
+
+To configure the integration of Blockbax into Azure AD, you need to add Blockbax from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Blockbax** in the search box.
+1. Select **Blockbax** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Blockbax
+
+Configure and test Azure AD SSO with Blockbax using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Blockbax.
+
+To configure and test Azure AD SSO with Blockbax, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Blockbax SSO](#configure-blockbax-sso)** - to configure the single sign-on settings on application side.
+    1. **[Create Blockbax test user](#create-blockbax-test-user)** - to have a counterpart of B.Simon in Blockbax that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Blockbax** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://login.blockbax.com/saml2/service-provider-metadata/<CustomerName>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://login.blockbax.com/login/saml2/sso/<CustomerName>`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://login.blockbax.com/sso`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Blockbax support team](mailto:support@blockbax.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
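+
+If you want to sanity-check the metadata before handing it to Blockbax, you can fetch and parse it with the Python standard library. A short sketch; the URL below is a placeholder for the **App Federation Metadata Url** value you just copied.
+
+```python
+import xml.etree.ElementTree as ET
+from urllib.request import urlopen
+
+# Placeholder: paste the App Federation Metadata Url copied above.
+METADATA_URL = "https://login.microsoftonline.com/<TenantID>/federationmetadata/2007-06/federationmetadata.xml?appid=<AppId>"
+
+root = ET.parse(urlopen(METADATA_URL)).getroot()
+print("IdP entity ID:", root.attrib.get("entityID"))
+
+# List the single sign-on endpoints advertised in the metadata.
+md = "{urn:oasis:names:tc:SAML:2.0:metadata}"
+for sso in root.iter(md + "SingleSignOnService"):
+    print(sso.get("Binding"), "->", sso.get("Location"))
+```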
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Blockbax.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Blockbax**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Blockbax SSO
+
+1. Log in to your Blockbax company site as an administrator.
+
+1. Go to **Settings** and expand **SSO Settings**.
+
+ ![Screenshot shows the SAML Account](./media/blockbax-tutorial/account.png "SAML Account")
+
+1. In the **Identity provider metadata URL** text box, paste the **App Federation Metadata Url** value that you copied from the Azure portal.
+
+1. Click **Add identity provider**.
+
+### Create Blockbax test user
+
+In this section, a user called Britta Simon is created in Blockbax. Blockbax supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Blockbax, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Blockbax Sign on URL, where you can initiate the login flow.
+
+* Go to the Blockbax Sign on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Blockbax instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Blockbax tile in the My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you're automatically signed in to the Blockbax instance for which you set up SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Blockbax, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Buttonwood Central Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/buttonwood-central-sso-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Buttonwood Central SSO'
+description: Learn how to configure single sign-on between Azure Active Directory and Buttonwood Central SSO.
++++++++ Last updated : 03/03/2022++++
+# Tutorial: Azure AD SSO integration with Buttonwood Central SSO
+
+In this tutorial, you'll learn how to integrate Buttonwood Central SSO with Azure Active Directory (Azure AD). When you integrate Buttonwood Central SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to Buttonwood Central SSO.
+* Enable your users to be automatically signed-in to Buttonwood Central SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Buttonwood Central SSO single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Buttonwood Central SSO supports **SP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Buttonwood Central SSO from the gallery
+
+To configure the integration of Buttonwood Central SSO into Azure AD, you need to add Buttonwood Central SSO from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Buttonwood Central SSO** in the search box.
+1. Select **Buttonwood Central SSO** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Buttonwood Central SSO
+
+Configure and test Azure AD SSO with Buttonwood Central SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Buttonwood Central SSO.
+
+To configure and test Azure AD SSO with Buttonwood Central SSO, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Buttonwood Central SSO](#configure-buttonwood-central-sso)** - to configure the single sign-on settings on application side.
+    1. **[Create Buttonwood Central SSO test user](#create-buttonwood-central-sso-test-user)** - to have a counterpart of B.Simon in Buttonwood Central SSO that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Buttonwood Central SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type the URL:
+ `http://adfs.bcx.buttonwood.net/adfs/services/trust`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://adfs.bcx.buttonwood.net/adfs/ls/`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://exchange.bcx.buttonwood.net/User/FederatedLogin`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Buttonwood Central SSO.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Buttonwood Central SSO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Buttonwood Central SSO
+
+To configure single sign-on on the **Buttonwood Central SSO** side, you need to send the **App Federation Metadata Url** to the [Buttonwood Central SSO support team](mailto:support@buttonwood.com.au). The support team uses this value to configure the SAML SSO connection properly on both sides.
+
+### Create Buttonwood Central SSO test user
+
+In this section, you create a user called Britta Simon in Buttonwood Central SSO. Work with the [Buttonwood Central SSO support team](mailto:support@buttonwood.com.au) to add the users to the Buttonwood Central SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the Buttonwood Central SSO Sign-on URL, where you can initiate the login flow.
+
+* Go to the Buttonwood Central SSO Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Buttonwood Central SSO tile in the My Apps, this redirects to the Buttonwood Central SSO Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
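+
+As a quick scripted check of the SP-initiated flow, you can confirm that the Sign-on URL answers with a redirect toward Azure AD. A minimal sketch, assuming the `requests` package; some applications start the flow with an HTML page instead of an HTTP redirect, in which case no `Location` header is returned.
+
+```python
+import requests
+
+# The Sign on URL configured earlier in this tutorial.
+SIGN_ON_URL = "https://exchange.bcx.buttonwood.net/User/FederatedLogin"
+
+# Expect an HTTP redirect toward the Azure AD SAML endpoint.
+resp = requests.get(SIGN_ON_URL, allow_redirects=False, timeout=10)
+print(resp.status_code, resp.headers.get("Location", "<no HTTP redirect>"))
+```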
+
+## Next steps
+
+Once you configure Buttonwood Central SSO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Datava Enterprise Service Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datava-enterprise-service-platform-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Datava Enterprise Service Platform | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Datava Enterprise Service Platform'
description: Learn how to configure single sign-on between Azure Active Directory and Datava Enterprise Service Platform.
Previously updated : 08/20/2020 Last updated : 03/21/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Datava Enterprise Service Platform
+# Tutorial: Azure AD SSO integration with Datava Enterprise Service Platform
In this tutorial, you'll learn how to integrate Datava Enterprise Service Platform with Azure Active Directory (Azure AD). When you integrate Datava Enterprise Service Platform with Azure AD, you can:
In this tutorial, you'll learn how to integrate Datava Enterprise Service Platfo
* Enable your users to be automatically signed-in to Datava Enterprise Service Platform with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Datava Enterprise Service Platform single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Datava Enterprise Service Platform supports **SP** initiated SSO
-* Datava Enterprise Service Platform supports **Just In Time** user provisioning
-* Once you configure Datava Enterprise Service Platform you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Datava Enterprise Service Platform supports **SP** initiated SSO.
+* Datava Enterprise Service Platform supports **Just In Time** user provisioning.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Datava Enterprise Service Platform from the gallery
+## Add Datava Enterprise Service Platform from the gallery
To configure the integration of Datava Enterprise Service Platform into Azure AD, you need to add Datava Enterprise Service Platform from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
To configure the integration of Datava Enterprise Service Platform into Azure AD
Configure and test Azure AD SSO with Datava Enterprise Service Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Datava Enterprise Service Platform.
-To configure and test Azure AD SSO with Datava Enterprise Service Platform, complete the following building blocks:
+To configure and test Azure AD SSO with Datava Enterprise Service Platform, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Datava Enterprise Service Platform, comp
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Datava Enterprise Service Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Datava Enterprise Service Platform** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://go.datava.com/<TENANT_NAME>`
-
- b. In the **Reply URL** textbox, type the URL:
+ a. In the **Reply URL** textbox, type the URL:
`https://go.datava.com/saml/module.php/saml/sp/saml2-acs.php/azure-sp`
+ b. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://go.datava.com/<TENANT_NAME>`
+ > [!NOTE] > The value is not real. Update the value with the actual Sign-On URL. Contact [Datava Enterprise Service Platform Client support team](mailto:support@datava.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Datava Enterprise Service Platform**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called Britta Simon is created in Datava Enterprise Serv
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Datava Enterprise Service Platform tile in the Access Panel, you should be automatically signed in to the Datava Enterprise Service Platform for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click **Test this application** in the Azure portal. This redirects to the Datava Enterprise Service Platform Sign-on URL, where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Go to the Datava Enterprise Service Platform Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the Datava Enterprise Service Platform tile in the My Apps, this redirects to the Datava Enterprise Service Platform Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [Try Datava Enterprise Service Platform with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+Once you configure Datava Enterprise Service Platform, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Datto Workplace Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datto-workplace-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Datto Workplace Single Sign On'
+description: Learn how to configure single sign-on between Azure Active Directory and Datto Workplace Single Sign On.
++++++++ Last updated : 03/09/2022++++
+# Tutorial: Azure AD SSO integration with Datto Workplace Single Sign On
+
+In this tutorial, you'll learn how to integrate Datto Workplace Single Sign On with Azure Active Directory (Azure AD). When you integrate Datto Workplace Single Sign On with Azure AD, you can:
+
+* Control in Azure AD who has access to Datto Workplace Single Sign On.
+* Enable your users to be automatically signed-in to Datto Workplace Single Sign On with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Datto Workplace Single Sign On single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Datto Workplace Single Sign On supports **SP** and **IDP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add Datto Workplace Single Sign On from the gallery
+
+To configure the integration of Datto Workplace Single Sign On into Azure AD, you need to add Datto Workplace Single Sign On from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Datto Workplace Single Sign On** in the search box.
+1. Select **Datto Workplace Single Sign On** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Datto Workplace Single Sign On
+
+Configure and test Azure AD SSO with Datto Workplace Single Sign On using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Datto Workplace Single Sign On.
+
+To configure and test Azure AD SSO with Datto Workplace Single Sign On, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Datto Workplace Single Sign On SSO](#configure-datto-workplace-single-sign-on-sso)** - to configure the single sign-on settings on application side.
+    1. **[Create Datto Workplace Single Sign On test user](#create-datto-workplace-single-sign-on-test-user)** - to have a counterpart of B.Simon in Datto Workplace Single Sign On that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Datto Workplace Single Sign On** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, you don't need to perform any steps, because the app is already pre-integrated with Azure.
+
+1. On the **Basic SAML Configuration** section, if you want to configure the application in **SP** initiated mode, perform the following steps:
+
+    a. In the **Identifier** text box, type the URL (this fixed value is also the application's own SP metadata endpoint; see the sketch after these steps):
+    `https://saml.workplace.datto.com/singlesignon/saml/metadata`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://saml.workplace.datto.com/singlesignon/saml/SSO`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.workplace.datto.com/login`
+
+ > [!NOTE]
+ > The Sign-on URL value is not real. Update the value with the actual Sign-on URL. Contact [Datto Workplace Single Sign On Client support team](mailto:ms-sso-support@ot.soonr.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
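+
+As noted in step a above, the fixed Identifier is also Datto Workplace's SP metadata endpoint. The following standard-library sketch fetches it and confirms that the advertised entity ID matches the configured Identifier, assuming the endpoint serves standard SAML metadata.
+
+```python
+import xml.etree.ElementTree as ET
+from urllib.request import urlopen
+
+IDENTIFIER = "https://saml.workplace.datto.com/singlesignon/saml/metadata"
+
+# Fetch the SP metadata and compare its entityID with the Identifier.
+root = ET.parse(urlopen(IDENTIFIER)).getroot()
+print("Advertised entityID:", root.attrib.get("entityID"))
+print("Matches Identifier:", root.attrib.get("entityID") == IDENTIFIER)
+```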
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Datto Workplace Single Sign On.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Datto Workplace Single Sign On**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Datto Workplace Single Sign On SSO
+
+To configure single sign-on on the **Datto Workplace Single Sign On** side, you need to send the **App Federation Metadata Url** to the [Datto Workplace Single Sign On support team](mailto:ms-sso-support@ot.soonr.com). The support team uses this value to configure the SAML SSO connection properly on both sides.
+
+### Create Datto Workplace Single Sign On test user
+
+In this section, you create a user called Britta Simon in Datto Workplace Single Sign On. Work with the [Datto Workplace Single Sign On support team](mailto:ms-sso-support@ot.soonr.com) to add the users to the Datto Workplace Single Sign On platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Datto Workplace Single Sign On Sign-on URL, where you can initiate the login flow.
+
+* Go to the Datto Workplace Single Sign On Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Datto Workplace Single Sign On instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Datto Workplace Single Sign On tile in the My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you're automatically signed in to the Datto Workplace Single Sign On instance for which you set up SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Datto Workplace Single Sign On, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Neotalogicstudio Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/neotalogicstudio-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Neota Logic Studio | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Neota Logic Studio.
+ Title: 'Tutorial: Azure AD SSO integration with Neota Studio'
+description: Learn how to configure single sign-on between Azure Active Directory and Neota Studio.
Previously updated : 03/04/2019 Last updated : 03/21/2022
-# Tutorial: Azure Active Directory integration with Neota Logic Studio
+# Tutorial: Azure AD SSO integration with Neota Studio
-In this tutorial, you learn how to integrate Neota Logic Studio with Azure Active Directory (Azure AD).
-Integrating Neota Logic Studio with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Neota Studio with Azure Active Directory (Azure AD). When you integrate Neota Studio with Azure AD, you can:
-* You can control in Azure AD who has access to Neota Logic Studio.
-* You can enable your users to be automatically signed-in to Neota Logic Studio (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Neota Studio.
+* Enable your users to be automatically signed-in to Neota Studio with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Neota Logic Studio, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Neota Logic Studio single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Neota Studio single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Neota Logic Studio supports **SP** initiated SSO
-
-## Adding Neota Logic Studio from the gallery
-
-To configure the integration of Neota Logic Studio into Azure AD, you need to add Neota Logic Studio from the gallery to your list of managed SaaS apps.
-
-**To add Neota Logic Studio from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* Neota Studio supports **SP** initiated SSO.
-4. In the search box, type **Neota Logic Studio**, select **Neota Logic Studio** from result panel then click **Add** button to add the application.
+## Add Neota Studio from the gallery
- ![Neota Logic Studio in the results list](common/search-new-app.png)
+To configure the integration of Neota Studio into Azure AD, you need to add Neota Studio from the gallery to your list of managed SaaS apps.
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Neota Studio** in the search box.
+1. Select **Neota Studio** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Neota Logic Studio based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Neota Logic Studio needs to be established.
+## Configure and test Azure AD SSO for Neota Studio
-To configure and test Azure AD single sign-on with Neota Logic Studio, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Neota Studio using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Neota Studio.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Neota Logic Studio Single Sign-On](#configure-neota-logic-studio-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Neota Logic Studio test user](#create-neota-logic-studio-test-user)** - to have a counterpart of Britta Simon in Neota Logic Studio that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Neota Studio, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Neota Studio SSO](#configure-neota-studio-sso)** - to configure the single sign-on settings on application side.
+    1. **[Create Neota Studio test user](#create-neota-studio-test-user)** - to have a counterpart of B.Simon in Neota Studio that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Neota Logic Studio, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Neota Logic Studio** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Neota Studio** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Neota Logic Studio Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<sub domain>.neotalogic.com/wb`
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
`https://<sub domain>.neotalogic.com/a/<sub application>`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<sub domain>.neotalogic.com/wb`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Neota Logic Studio Client support team](https://www.neotalogic.com/contact-us/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Neota Studio Client support team](https://www.neotalogic.com/contact-us/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer. ![The Certificate download link](common/metadataxml.png)
-6. On the **Set up Neota Logic Studio** section, copy the appropriate URL(s) as per your requirement.
+6. On the **Set up Neota Studio** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Neota Logic Studio Single Sign-On
-
-To configure single sign-on on **Neota Logic Studio** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Neota Logic Studio support team](https://www.neotalogic.com/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Neota Logic Studio.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Neota Logic Studio**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Neota Logic Studio**.
-
- ![The Neota Logic Studio link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Neota Studio.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Neota Studio**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+## Configure Neota Studio SSO
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Neota Studio** side, you need to send the downloaded **Federation Metadata XML** and the appropriate URLs copied from the Azure portal to the [Neota Studio support team](https://www.neotalogic.com/contact-us/). The support team uses these values to configure the SAML SSO connection properly on both sides.
-### Create Neota Logic Studio test user
+### Create Neota Studio test user
-In this section, you create a user called Britta Simon in Neota Logic Studio. Work with [Neota Logic Studio support team](https://www.neotalogic.com/contact-us/) to add the users in the Neota Logic Studio platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Neota Studio. Work with the [Neota Studio support team](https://www.neotalogic.com/contact-us/) to add the users to the Neota Studio platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Neota Logic Studio tile in the Access Panel, you should be automatically signed in to the Neota Logic Studio for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click **Test this application** in the Azure portal. This redirects to the Neota Studio Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Neota Studio Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Neota Studio tile in the My Apps, this redirects to the Neota Studio Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Neota Studio, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Surveymonkey Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/surveymonkey-enterprise-tutorial.md
Previously updated : 01/27/2022 Last updated : 03/22/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure SurveyMonkey Enterprise SSO
-To configure single sign-on on **SurveyMonkey Enterprise** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [SurveyMonkey Enterprise support team](mailto:support@selerix.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **SurveyMonkey Enterprise** side, see [this article](https://help.surveymonkey.com/teams/single-sign-on/#set-up).
### Create SurveyMonkey Enterprise test user
active-directory Webcargo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webcargo-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Webcargo | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Webcargo'
description: Learn how to configure single sign-on between Azure Active Directory and Webcargo.
Previously updated : 06/23/2021 Last updated : 03/21/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Webcargo
+# Tutorial: Azure AD SSO integration with Webcargo
In this tutorial, you'll learn how to integrate Webcargo with Azure Active Directory (Azure AD). When you integrate Webcargo with Azure AD, you can:
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Webcargo single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) drivers for Azure Files on Azure Ku
description: Learn how to use the Container Storage Interface (CSI) drivers for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 12/10/2021 Last updated : 03/22/2022
kind: StorageClass
metadata: name: azurefile-csi-nfs provisioner: file.csi.azure.com
+allowVolumeExpansion: true
parameters: protocol: nfs
+mountOptions:
+ - nconnect=8
``` After editing and saving the file, create the storage class with the [kubectl apply][kubectl-apply] command:
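For example, assuming you saved the manifest as *azurefile-csi-nfs-sc.yaml*:

```bash
# File name is illustrative; use the manifest you edited above
kubectl apply -f azurefile-csi-nfs-sc.yaml

# Optional check: the class should now report ALLOWVOLUMEEXPANSION as true
kubectl get storageclass azurefile-csi-nfs
```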
aks Azure Files Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-dynamic-pv.md
description: Learn how to dynamically create a persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 07/01/2020 Last updated : 03/22/2022 #Customer intent: As a developer, I want to learn how to dynamically create and attach storage using Azure Files to pods in AKS.
apiVersion: storage.k8s.io/v1
metadata: name: my-azurefile provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
+allowVolumeExpansion: true
mountOptions: - dir_mode=0777 - file_mode=0777
mountOptions:
- cache=strict - actimeo=30 parameters:
- skuName: Standard_LRS
+ skuName: Premium_LRS
``` Create the storage class with the [kubectl apply][kubectl-apply] command:
kubectl apply -f azure-file-sc.yaml
## Create a persistent volume claim
-A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure file share. The following YAML can be used to create a persistent volume claim *5 GB* in size with *ReadWriteMany* access. For more information on access modes, see the [Kubernetes persistent volume][access-modes] documentation.
+A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure file share. The following YAML can be used to create a persistent volume claim *100 GB* in size with *ReadWriteMany* access. For more information on access modes, see the [Kubernetes persistent volume][access-modes] documentation.
Now create a file named `azure-file-pvc.yaml` and copy in the following YAML. Make sure that the *storageClassName* matches the storage class created in the last step:
spec:
storageClassName: my-azurefile resources: requests:
- storage: 5Gi
+ storage: 100Gi
``` > [!NOTE]
Once completed, the file share will be created. A Kubernetes secret is also crea
$ kubectl get pvc my-azurefile NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-my-azurefile Bound pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477 5Gi RWX my-azurefile 5m
+my-azurefile Bound pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477 100Gi RWX my-azurefile 5m
``` ## Use the persistent volume
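As a quick illustration, a pod along the following lines mounts the claim (the pod name, image, and mount path are illustrative):

```bash
# Create a pod that mounts the dynamically provisioned file share
kubectl apply -f - <<'EOF'
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      volumeMounts:
        - mountPath: /mnt/azure
          name: azure
  volumes:
    - name: azure
      persistentVolumeClaim:
        claimName: my-azurefile
EOF
```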
apiVersion: storage.k8s.io/v1
metadata: name: my-azurefile provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
+allowVolumeExpansion: true
mountOptions: - dir_mode=0777 - file_mode=0777
mountOptions:
- cache=strict - actimeo=30 parameters:
- skuName: Standard_LRS
+ skuName: Premium_LRS
``` ## Using Azure tags
aks Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/devops-pipeline.md
In this article, you'll learn how to create a pipeline that continuously builds
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An Azure Resource Manager service connection. [Create an Azure Resource Manager service connection](/azure/devops/library/connect-to-azure#create-an-azure-resource-manager-service-connection-using-automated-security).
+* An Azure Resource Manager service connection. [Create an Azure Resource Manager service connection](/azure/devops/pipelines/library/connect-to-azure#create-an-azure-resource-manager-service-connection-using-automated-security).
* A GitHub account. Create a free [GitHub account](https://github.com/join) if you don't have one already. ## Get the code
If you're building our sample app, then _Hello world_ appears in your browser.
When you finished selecting options and then proceeded to validate and configure the pipeline Azure Pipelines created a pipeline for you, using the _Deploy to Azure Kubernetes Service_ template.
-The build stage uses the [Docker task](/azure/devops/tasks/build/docker) to build and push the image to the Azure Container Registry.
+The build stage uses the [Docker task](/azure/devops/pipelines/tasks/build/docker) to build and push the image to the Azure Container Registry.
```YAML - stage: Build
In this article, you'll learn how to create a pipeline that continuously builds
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An Azure Resource Manager service connection. [Create an Azure Resource Manager service connection](/azure/devops/library/connect-to-azure#create-an-azure-resource-manager-service-connection-using-automated-security).
+* An Azure Resource Manager service connection. [Create an Azure Resource Manager service connection](/azure/devops/pipelines/library/connect-to-azure#create-an-azure-resource-manager-service-connection-using-automated-security).
* A GitHub account. Create a free [GitHub account](https://github.com/join) if you don't have one already. ## Get the code
you must establish an authentication mechanism. This can be achieved in two ways
1. Grant AKS access to ACR. See [Authenticate with Azure Container Registry from Azure Kubernetes Service](/azure/container-registry/container-registry-auth-aks). 1. Use a [Kubernetes image pull secret](/azure/container-registry/container-registry-auth-aks).
- An image pull secret can be created by using the [Kubernetes deployment task](/azure/devops/tasks/deploy/kubernetes).
+ An image pull secret can be created by using the [Kubernetes deployment task](/azure/devops/pipelines/tasks/deploy/kubernetes).
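For reference, an equivalent secret can also be created manually with kubectl (the secret name and credentials are illustrative):

```bash
# Create a registry pull secret that a deployment can reference via imagePullSecrets
kubectl create secret docker-registry acr-pull-secret \
  --docker-server=<registry-name>.azurecr.io \
  --docker-username=<service-principal-client-id> \
  --docker-password=<service-principal-secret>
```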
## Create a release pipeline
It also packaged and published a Helm chart as an artifact. In the release pipel
- **Azure subscription**: Select a connection from the list under **Available Azure Service Connections** or create a more restricted permissions connection to your Azure subscription. If you see an **Authorize** button next to the input, use it to authorize the connection to your Azure subscription.
- If you don't see the required Azure subscription in the list of subscriptions, see [Create an Azure service connection](/azure/devops/library/connect-to-azure) to manually set up the connection.
+ If you don't see the required Azure subscription in the list of subscriptions, see [Create an Azure service connection](/azure/devops/pipelines/library/connect-to-azure) to manually set up the connection.
- **Resource group**: Enter or select the resource group containing your AKS cluster.
api-management Api Management Cross Domain Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-cross-domain-policies.md
The `cors` policy adds cross-origin resource sharing (CORS) support to an operat
> [!NOTE] > If a request matches an operation with an OPTIONS method defined in the API, pre-flight request processing logic associated with CORS policies will not be executed. Therefore, such operations can be used to implement custom pre-flight processing logic.
+> [!IMPORTANT]
+> If you configure the CORS policy at the product scope, and your API uses subscription key authentication, the policy will only work when requests include a subscription key as a query parameter.
+ CORS allows a browser and a server to interact and determine whether or not to allow specific cross-origin requests (i.e. XMLHttpRequests calls made from JavaScript on a web page to other domains). This allows for more flexibility than only allowing same-origin requests, but is more secure than allowing all cross-origin requests. You need to apply the CORS policy to enable the interactive console in the developer portal. Refer to the [developer portal documentation](./developer-portal-faq.md#cors) for details.
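For example, a cross-origin request that satisfies the product-scope requirement above might look like the following (the host, path, and key are placeholders):

```bash
# The subscription key travels as a query parameter, not a header, so the
# product-scope CORS policy is matched for this request
curl -i "https://<apim-name>.azure-api.net/<api-path>?subscription-key=<key>" \
  -H "Origin: https://www.contoso.com"
```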
api-management Diagnose Solve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/diagnose-solve-problems.md
When you build and manage an API in Azure API Management, you want to be prepare
Although this experience is most helpful when you re having issues with your API within the last 24 hours, all the diagnostic graphs are always available for you to analyze.
-[!NOTE] Diagnose and solve problems is currenty not supported for Consumption Tier.
## Open API Management Diagnostics
app-service App Service Configure Premium Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configure-premium-tier.md
description: Learn how to better performance for your web, mobile, and API app i
keywords: app service, azure app service, scale, scalable, app service plan, app service cost ms.assetid: ff00902b-9858-4bee-ab95-d3406018c688 Previously updated : 10/01/2020 Last updated : 03/21/2022
If your app runs in an App Service deployment where **PremiumV3** isn't availabl
In the **Clone app** page, you can create an App Service plan using **PremiumV3** in the region you want, and specify the app settings and configuration that you want to clone.
+ ## Moving from Premium Container to Premium V3 SKU
-If you have an app which is using the preview Premium Container SKU and you would like to move to the new Premium V3 SKU, you need to redeploy your app to take advantage of **PremiumV3**. To do this, see the first option in [Scale up from an unsupported resource group and region combination](#scale-up-from-an-unsupported-resource-group-and-region-combination)
+The Premium Container SKU will be retired on **30th June 2022**. You should move your applications to the **Premium V3 SKU** ahead of this date. Use the clone functionality in the Azure App Service CLI experience to [move your application from your Premium Container App Service Plan to a new Premium V3 App Service plan](https://aka.ms/pcsku).
## Automate with scripts
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-ip-restrictions.md
To add an access restriction rule to your app, do the following:
1. Sign in to the Azure portal.
+1. Select the app that you want to add access restrictions to.
+ 1. On the left pane, select **Networking**. 1. On the **Networking** pane, under **Access Restrictions**, select **Configure Access Restrictions**.
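The same kind of rule can also be added from the CLI, for example (rule name, address range, and priority are illustrative):

```azurecli-interactive
az webapp config access-restriction add \
  --resource-group <group-name> --name <app-name> \
  --rule-name allow-office --action Allow \
  --ip-address 203.0.113.0/24 --priority 100
```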
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
Set-AzWebAppSlotConfigName -ResourceGroupName <group-name> -Name <app-name> -Con
--
-### Edit app settings in bulk
+### Edit connection strings in bulk
# [Azure portal](#tab/portal)
-Click the **Advanced edit** button. Edit the settings in the text area. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
+Click the **Advanced edit** button. Edit the connection strings in the text area. When finished, click **Update**. Don't forget to click **Save** back in the **Configuration** page.
Connection strings have the following JSON formatting:
Connection strings have the following JSON formatting:
Run [az webapp config connection-string set](/cli/azure/webapp/config/connection-string#az_webapp_config_connection_string_set) with the name of the JSON file. ```azurecli-interactive
-az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings "@fileName.json"
+az webapp config connection-string set --resource-group <group-name> --name <app-name> --settings "@fileName.json"
``` > [!TIP] > Wrapping the file name with quotes is only required in PowerShell.
-The file format needed is a JSON array of settings where the slot setting field is optional. For example:
+The file format needed is a JSON array of connection strings where the slot setting field is optional. For example:
```json [
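  {
    "name": "name-1",
    "value": "conn-string-1",
    "type": "SQLServer",
    "slotSetting": false
  }
]
```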
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
If the app changes compute instances for any reason, such as scaling up and down
## Configure port number
-By default, App Service assumes your custom container is listening on port 80. If your container listens to a different port, set the `WEBSITES_PORT` app setting in your App Service app. You can set it via the [Cloud Shell](https://shell.azure.com). In Bash:
+By default, App Service assumes your custom container is listening on either port 80 or port 8080. If your container listens to a different port, set the `WEBSITES_PORT` app setting in your App Service app. You can set it via the [Cloud Shell](https://shell.azure.com). In Bash:
```azurecli-interactive az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_PORT=8000
app-service Manage Create Arc Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md
az extension add --upgrade --yes --name appservice-kube
az group create -g $aksClusterGroupName -l $resourceLocation az aks create --resource-group $aksClusterGroupName --name $aksName --enable-aad --generate-ssh-keys
- infra_rg=$(az aks show --resource-group $aksClusterGroupName --name $aksName --output tsv --query nodeResourceGroup)
``` # [PowerShell](#tab/powershell)
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
This article uses Health check in the Azure portal to monitor App Service instan
## What App Service does with Health checks - When given a path on your app, Health check pings this path on all instances of your App Service app at 1-minute intervals.-- If an instance doesn't respond with a status code between 200-299 (inclusive) after two or more requests, or fails to respond to the ping, the system determines it's unhealthy and removes it.
+- If an instance doesn't respond with a status code between 200-299 (inclusive) after ten requests, App Service determines it is unhealthy and removes it. (The required number of failed requests for an instance to be deemed unhealthy is configurable to a minimum of 2 requests.)
- After removal, Health check continues to ping the unhealthy instance. If the instance begins to respond with a healthy status code (200-299), then the instance is returned to the load balancer. - If an instance remains unhealthy for one hour, it will be replaced with a new instance.-- Furthermore, when scaling up or out, App Service pings the Health check path to ensure new instances are ready.
+- When scaling out, App Service pings the Health check path to ensure new instances are ready.
> [!NOTE] >- Health check doesn't follow 302 redirects.
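The probed path itself can be set from the CLI, for example (the path is illustrative):

```azurecli-interactive
az webapp config set --resource-group <group-name> --name <app-name> \
  --generic-configurations '{"healthCheckPath": "/api/health"}'
```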
app-service Overview Local Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-local-cache.md
As part of the step that copies the storage content, any folder that is named re
### How to flush the local cache logs after a site management operation? To flush the local cache logs, stop and restart the app. This action clears the old cache.
+### Why does App Service start showing previously deployed files after a restart when Local Cache is enabled?
+If App Service starts showing previously deployed files after a restart, check for the presence of the app setting '[WEBSITE_DISABLE_SCM_SEPARATION=true](https://github.com/projectkudu/kudu/wiki/Configurable-settings#use-the-same-process-for-the-user-site-and-the-scm-site)'. With this setting, any deployments via Kudu are written to the local VM instead of to persistent storage. Follow the best practices mentioned earlier in this article: always deploy to a staging slot that doesn't have Local Cache enabled.
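One way to check whether the setting is present is with the CLI:

```azurecli-interactive
az webapp config appsettings list --resource-group <group-name> --name <app-name> \
  --query "[?name=='WEBSITE_DISABLE_SCM_SEPARATION']"
```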
+ ## More resources
-[Environment variables and app settings reference](reference-app-settings.md)
+[Environment variables and app settings reference](reference-app-settings.md)
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
First, you'll need to create a user-assigned identity resource.
1. Search for the identity you created earlier and select it. Click **Add**. ![Managed identity in App Service](media/app-service-managed-service-identity/user-assigned-managed-identity-in-azure-portal.png)
+
+> [!IMPORTANT]
+> If you select **Add** after you select a user-assigned identity to add, your application will restart.
# [Azure CLI](#tab/cli)
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
App Service can also host web apps natively on Linux for supported application s
### Built-in languages and frameworks
-App Service on Linux supports a number of language specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (JRE 8 & JRE 11), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az_webapp_list_runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
+App Service on Linux supports a number of language specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (JRE 8 & JRE 11), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --linux`](/cli/azure/webapp#az_webapp_list_runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
Outdated runtimes are periodically removed from the Web Apps Create and Configuration blades in the Portal. These runtimes are hidden from the Portal when they are deprecated by the maintaining organization or found to have significant vulnerabilities. These options are hidden to guide customers to the latest runtimes where they will be the most successful.
app-service Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-nodejs.md
Title: 'Quickstart: Create a Node.js web app'
description: Deploy your first Node.js Hello World to Azure App Service in minutes. ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a Previously updated : 03/10/2022+
+#zone_pivot_groups: app-service-platform-windows-linux
Last updated : 03/22/2022 ms.devlang: javascript- #zone_pivot_groups: app-service-ide-oss zone_pivot_groups: app-service-vscode-cli-portal
This quickstart configures an App Service app in the **Free** tier and incurs no
::: zone-end ## Create your Node.js application
-In this step, you create a starter Node.js application and make sure it runs on your computer.
+In this step, you create a basic Node.js application and ensure it runs on your computer.
> [!TIP] > If you have already completed the [Node.js tutorial](https://code.visualstudio.com/docs/nodejs/nodejs-tutorial), you can skip ahead to [Deploy to Azure](#deploy-to-azure).
-1. Create a simple Node.js application using the [Express Generator](https://expressjs.com/starter/generator.html), which is installed by default with Node.js and NPM.
+1. Create a Node.js application using the [Express Generator](https://expressjs.com/starter/generator.html), which is installed by default with Node.js and NPM.
```bash npx express-generator myExpressApp --view ejs
Before you continue, ensure that you have all the prerequisites installed and co
#### Sign in to Azure
-1. In the terminal, make sure you're in the *myExpressApp* directory, then start Visual Studio Code with the following command:
+1. In the terminal, ensure you're in the *myExpressApp* directory, then start Visual Studio Code with the following command:
```bash code .
Before you continue, ensure that you have all the prerequisites installed and co
:::image type="content" source="media/quickstart-nodejs/deploy.png" alt-text="Screenshot of the Azure App service in Visual Studio Code showing the blue arrow icon selected.":::
-1. Choose the *myExpressApp* folder.
+1. Select the *myExpressApp* folder.
# [Deploy to Linux](#tab/linux)
-3. Choose **Create new Web App**. A Linux container is used by default.
+3. Select **Create new Web App**. A Linux container is used by default.
1. Type a globally unique name for your web app and press **Enter**. The name must be unique across all of Azure and use only alphanumeric characters ('A-Z', 'a-z', and '0-9') and hyphens ('-'). 1. In Select a runtime stack, select the Node.js version you want. An **LTS** version is recommended.
-1. In Select a pricing tier, select **Free (F1)** and wait for the the resources to be provisioned in Azure.
+1. In Select a pricing tier, select **Free (F1)** and wait for the resources to be provisioned in Azure.
1. In the popup **Always deploy the workspace "myExpressApp" to \<app-name>"**, select **Yes**. This way, as long as you're in the same workspace, Visual Studio Code deploys to the same App Service app each time. While Visual Studio Code provisions the Azure resources and deploys the code, it shows [progress notifications](https://code.visualstudio.com/api/references/extension-guidelines#notifications).
Before you continue, ensure that you have all the prerequisites installed and co
# [Deploy to Windows](#tab/windows)
-3. Choose **Create new Web App... Advanced**.
+3. Select **Create new Web App... Advanced**.
1. Type a globally unique name for your web app and press **Enter**. The name must be unique across all of Azure and use only alphanumeric characters ('A-Z', 'a-z', and '0-9') and hyphens ('-'). 1. Select **Create a new resource group**, then enter a name for the resource group, such as *AppServiceQS-rg*. 1. Select the Node.js version you want. An **LTS** version is recommended.
Before you continue, ensure that you have all the prerequisites installed and co
:::zone target="docs" pivot="development-environment-cli"
-In the terminal, make sure you're in the *myExpressApp* directory, and deploy the code in your local folder (*myExpressApp*) using the [az webapp up](/cli/azure/webapp#az-webapp-up) command:
+In the terminal, ensure you're in the *myExpressApp* directory, and deploy the code in your local folder (*myExpressApp*) using the [az webapp up](/cli/azure/webapp#az-webapp-up) command:
# [Deploy to Linux](#tab/linux)
az webapp up --sku F1 --name <app-name> --os-type Windows
-- -- If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment).
+- If the `az` command isn't recognized, ensure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment).
- Replace `<app_name>` with a name that's unique across all of Azure (*valid characters are `a-z`, `0-9`, and `-`*). A good pattern is to use a combination of your company name and an app identifier. - The `--sku F1` argument creates the web app on the Free pricing tier, which incurs a no cost. - You can optionally include the argument `--location <location-name>` where `<location_name>` is an available Azure region. You can retrieve a list of allowable regions for your Azure account by running the [`az account list-locations`](/cli/azure/appservice#az_appservice_list_locations) command. - The command creates a Linux app for Node.js by default. To create a Windows app instead, use the `--os-type` argument. -- If you see the error, "Could not auto-detect the runtime stack of your app," make sure you're running the command in the *myExpressApp* directory (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md)).
+- If you see the error, "Could not auto-detect the runtime stack of your app," ensure you're running the command in the *myExpressApp* directory (See [Troubleshooting auto-detect issues with az webapp up](https://github.com/Azure/app-service-linux-docs/blob/master/AzWebAppUP/runtime_detection.md)).
The command may take a few minutes to complete. While running, it provides messages about creating the resource group, the App Service plan, and the app resource, configuring logging, and doing ZIP deployment. It then gives the message, "You can launch the app at http://&lt;app-name&gt;.azurewebsites.net", which is the app's URL on Azure.
Sign in to the Azure portal at https://portal.azure.com.
:::image type="content" source="./media/quickstart-nodejs/portal-search.png?text=Azure portal search details" alt-text="Screenshot of portal search"::: 1. In the **App Services** page, select **Create**.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type *myResourceGroup* for the name.
+1. In the **Basics** tab, under **Project details**, ensure the correct subscription is selected and then select to **Create new** resource group. Type *myResourceGroup* for the name.
:::image type="content" source="./media/quickstart-nodejs/project-details.png" alt-text="Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the web app":::
-1. Under **Instance details**, type a globally unique name for your web app and select **Code**. Choose *Node 14 LTS* **Runtime stack**, an **Operating System**, and a **Region** you want to serve your app from.
+1. Under **Instance details**, type a globally unique name for your web app and select **Code**. Select *Node 14 LTS* **Runtime stack**, an **Operating System**, and a **Region** you want to serve your app from.
:::image type="content" source="./media/quickstart-nodejs/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size":::
-1. Under **App Service Plan**, choose to **Create new** App Service Plan. Type *myAppServicePlan* for the name. To change to the Free tier, click **Change size**, select **Dev/Test** tab, select **F1**, and select the **Apply** button at the bottom of the page.
+1. Under **App Service Plan**, select **Create new** App Service Plan. Type *myAppServicePlan* for the name. To change to the Free tier, select **Change size**, select **Dev/Test** tab, select **F1**, and select the **Apply** button at the bottom of the page.
:::image type="content" source="./media/quickstart-nodejs/app-service-plan-details.png" alt-text="Screenshot of the Administrator account section where you provide the administrator username and password":::
Sign in to the Azure portal at https://portal.azure.com.
### Get FTP credentials
-Azure App Service supports [**two types of credentials**](deploy-configure-credentials.md) for FTP/S deployment. These credentials are not the same as your Azure subscription credentials. In this section, you get the *application-scope credentials* to use with FileZilla.
+Azure App Service supports [**two types of credentials**](deploy-configure-credentials.md) for FTP/S deployment. These credentials aren't the same as your Azure subscription credentials. In this section, you get the *application-scope credentials* to use with FileZilla.
-1. From the App Service app page, click **Deployment Center** in the left-hand menu and select **FTPS credentials** tab.
+1. From the App Service app page, select **Deployment Center** in the left-hand menu and select **FTPS credentials** tab.
:::image type="content" source="./media/quickstart-nodejs/ftps-deployment-credentials.png" alt-text="FTPS deployment credentials":::
Azure App Service supports [**two types of credentials**](deploy-configure-crede
:::image type="content" source="./media/quickstart-nodejs/filezilla-ftps-connection.png" alt-text="FTPS connection details":::
-1. Click **Connect** in FileZilla.
+1. Select **Connect** in FileZilla.
### Deploy files with FTP
You can deploy changes to this app by making edits in Visual Studio Code, saving
You can stream log output (calls to `console.log()`) from the Azure app directly in the Visual Studio Code output window.
-1. In the **App Service** explorer, right-click the app node and choose **Start Streaming Logs**.
+1. In the **App Service** explorer, right-click the app node and select **Start Streaming Logs**.
![Start Streaming Logs](./media/quickstart-nodejs/view-logs.png)
-1. If asked to restart the app, click **Yes**. Once the app is restarted, the Visual Studio Code output window opens with a connection to the log stream.
+1. If asked to restart the app, select **Yes**. Once the app is restarted, the Visual Studio Code output window opens with a connection to the log stream.
1. After a few seconds, the output window shows a message indicating that you're connected to the log-streaming service. You can generate more output activity by refreshing the page in the browser.
To stop log streaming at any time, press **Ctrl**+**C** in the terminal.
You can access the console logs generated from inside the app and the container in which it runs. You can stream log output (calls to `console.log()`) from the Node.js app directly in the Azure portal.
-1. In the same **App Service** page for your app, use the left menu to scroll to the *Monitoring* section and click **Log stream**.
+1. In the same **App Service** page for your app, use the left menu to scroll to the *Monitoring* section and select **Log stream**.
:::image type="content" source="./media/quickstart-nodejs/log-stream.png" alt-text="Screenshot of Log stream in Azure App service.":::
You can access the console logs generated from inside the app and the container
2021-10-26T21:04:08.614403210Z: [INFO] 2021-10-26T21:04:08.614407110Z: [INFO] export NODE_PATH=/usr/local/lib/node_modules:$NODE_PATH 2021-10-26T21:04:08.614411210Z: [INFO] if [ -z "$PORT" ]; then
- 2021-10-26T21:04:08.614415310Z: [INFO] export PORT=8080
+ 2021-10-26T21:04:08.614415310Z: [INFO] export PORT=8080
2021-10-26T21:04:08.614419610Z: [INFO] fi 2021-10-26T21:04:08.614423411Z: [INFO] 2021-10-26T21:04:08.614427211Z: [INFO] node /opt/startup/default-static-site.js
The `--no-wait` argument allows the command to return before the operation is co
When no longer needed, you can delete the resource group, App service, and all related resources.
-1. From your App Service *overview* page, click the *resource group* you created in the [Create Azure resources](#create-azure-resources) step.
+1. From your App Service *overview* page, select the *resource group* you created in the [Create Azure resources](#create-azure-resources) step.
:::image type="content" source="./media/quickstart-nodejs/resource-group.png" alt-text="Resource group in App Service overview page":::
Check out the other Azure extensions.
* [Azure Resource Manager Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools) Or get them all by installing the
-[Node Pack for Azure](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) extension pack.
+[Node Pack for Azure](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) extension pack.
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure' description: Get started with Azure App Service by deploying your first Python app to Azure App Service. Previously updated : 01/28/2022 Last updated : 03/22/2022 ms.devlang: python-+ # Quickstart: Deploy a Python (Django or Flask) web app to Azure App Service
-In this quickstart, you will deploy a Python web app (Django or Flask) to [Azure App Service](./overview.md#app-service-on-linux). Azure App Service is a fully managed web hosting service that supports Python 3.7 and higher apps hosted in a Linux server environment.
+In this quickstart, you'll deploy a Python web app (Django or Flask) to [Azure App Service](./overview.md#app-service-on-linux). Azure App Service is a fully managed web hosting service that supports Python 3.7 and higher apps hosted in a Linux server environment.
To complete this quickstart, you need: 1. An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
To run the application locally:
### [Flask](#tab/flask)
-1. Navigate into in the application folder:
+1. Go to the application folder:
```Console cd msdocs-python-flask-webapp-quickstart
Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
### [Django](#tab/django)
-1. Navigate into in the application folder:
+1. Go to the application folder:
```Console cd msdocs-python-django-webapp-quickstart
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
| Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot showing the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot showing how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot showing how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: |
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot showing the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot of how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot of how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: |
### [VS Code](#tab/vscode-aztools)
To create Azure resources in VS Code, you must have the [Azure Tools extension p
| Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-1-240px.png" alt-text="A screenshot showing the location of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-1.png"::: |
-| [!INCLUDE [Create app service step 2](<./includes/quickstart-python/create-app-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-2-240px.png" alt-text="A screenshot showing the App Service section of Azure Tools extension and the context menu used to create a new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-2.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-1-240px.png" alt-text="A Screenshot of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-1.png"::: |
+| [!INCLUDE [Create app service step 2](<./includes/quickstart-python/create-app-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-2-240px.png" alt-text="A screenshot of the App Service section of Azure Tools extension and the context menu used to create a new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-2.png"::: |
| [!INCLUDE [Create app service step 4](<./includes/quickstart-python/create-app-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-3-240px.png" alt-text="A screenshot of dialog box used to enter the name of the new web app in Visual Studio Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-3.png"::: | | [!INCLUDE [Create app service step 5](<./includes/quickstart-python/create-app-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-4-240px.png" alt-text="A screenshot of the dialog box in VS Code used to select the runtime for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-4.png"::: | | [!INCLUDE [Create app service step 6](<./includes/quickstart-python/create-app-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-5-240px.png" alt-text="A screenshot of the dialog in VS Code used to select the App Service plan for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-5.png"::: |
To deploy a web app from VS Code, you must have the [Azure Tools extension pack]
| Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [VS Code deploy step 1](<./includes/quickstart-python/deploy-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-1-240px.png" alt-text="A screenshot showing the location of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/deploy-visual-studio-code-1.png"::: |
-| [!INCLUDE [VS Code deploy step 2](<./includes/quickstart-python/deploy-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-2-240px.png" alt-text="A screenshot showing the context menu of an App Service and the deploy to web app menu option." lightbox="./media/quickstart-python/deploy-visual-studio-code-2.png"::: |
+| [!INCLUDE [VS Code deploy step 1](<./includes/quickstart-python/deploy-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-1-240px.png" alt-text="A screenshot of the location of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/deploy-visual-studio-code-1.png"::: |
+| [!INCLUDE [VS Code deploy step 2](<./includes/quickstart-python/deploy-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-2-240px.png" alt-text="A screenshot of the context menu of an App Service and the deploy to web app menu option." lightbox="./media/quickstart-python/deploy-visual-studio-code-2.png"::: |
| [!INCLUDE [VS Code deploy step 3](<./includes/quickstart-python/deploy-visual-studio-code-3.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-3-240px.png" alt-text="A screenshot dialog in VS Code used to choose the app to deploy." lightbox="./media/quickstart-python/deploy-visual-studio-code-3.png"::: | | [!INCLUDE [VS Code deploy step 4](<./includes/quickstart-python/deploy-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-4-240px.png" alt-text="A screenshot of a dialog box in VS Code asking if you want to update your workspace to run build commands." lightbox="./media/quickstart-python/deploy-visual-studio-code-4.png"::: | | [!INCLUDE [VS Code deploy step 5](<./includes/quickstart-python/deploy-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-5-240px.png" alt-text="A screenshot showing the confirmation dialog when the app code has been deployed to Azure." lightbox="./media/quickstart-python/deploy-visual-studio-code-5.png"::: |
Having issues? Refer first to the [Troubleshooting guide](./configure-language-p
## 4 - Browse to the app
-Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. It can take a minute or two for the app to start, so if you see a default app page, wait a minute and refresh the browser.
+Browse to the deployed application in your web browser at the URL `http://<app-name>.azurewebsites.net`. If you see a default app page, wait a minute and refresh the browser.
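You can also verify from a terminal that the app responds (substitute your app name):

```bash
# Expect an HTTP 200 once the container has started
curl -I https://<app-name>.azurewebsites.net
```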
The Python sample code is running a Linux container in App Service using a built-in image. :::image type="content" source="./media/quickstart-python/run-app-azure.png" alt-text="Screenshot of the app running in Azure":::
-**Congratulations!** You have deployed your Python app to App Service.
+**Congratulations!** You've deployed your Python app to App Service.
Having issues? Refer first to the [Troubleshooting guide](./configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
The contents of the App Service diagnostic logs can be reviewed in the Azure por
| Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Stream logs from Azure portal 1](<./includes/quickstart-python/stream-logs-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-1-240px.png" alt-text="A screenshot showing the location in the Azure portal where to enable streaming logs." lightbox="./media/quickstart-python/stream-logs-azure-portal-1.png"::: |
-| [!INCLUDE [Stream logs from Azure portal 2](<./includes/quickstart-python/stream-logs-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-2-240px.png" alt-text="A screenshot showing how to view logs in the Azure portal." lightbox="./media/quickstart-python/stream-logs-azure-portal-2.png"::: |
+| [!INCLUDE [Stream logs from Azure portal 1](<./includes/quickstart-python/stream-logs-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-1-240px.png" alt-text="A screenshot of the location in the Azure portal where to enable streaming logs." lightbox="./media/quickstart-python/stream-logs-azure-portal-1.png"::: |
+| [!INCLUDE [Stream logs from Azure portal 2](<./includes/quickstart-python/stream-logs-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-2-240px.png" alt-text="A screenshot of how to view logs in the Azure portal." lightbox="./media/quickstart-python/stream-logs-azure-portal-2.png"::: |
### [VS Code](#tab/vscode-aztools) | Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Stream logs from VS Code 1](<./includes/quickstart-python/stream-logs-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-1-240px.png" alt-text="A screenshot showing how to start streaming logs with the VS Code extension." lightbox="./media/quickstart-python/stream-logs-vs-code-1.png"::: |
-| [!INCLUDE [Stream logs from VS Code 2](<./includes/quickstart-python/stream-logs-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-2-240px.png" alt-text="A screenshot showing an example of streaming logs in the VS Code Output window." lightbox="./media/quickstart-python/stream-logs-vs-code-2.png"::: |
+| [!INCLUDE [Stream logs from VS Code 1](<./includes/quickstart-python/stream-logs-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-1-240px.png" alt-text="A screenshot of how to start streaming logs with the VS Code extension." lightbox="./media/quickstart-python/stream-logs-vs-code-1.png"::: |
+| [!INCLUDE [Stream logs from VS Code 2](<./includes/quickstart-python/stream-logs-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-2-240px.png" alt-text="A screenshot of an example of streaming logs in the VS Code Output window." lightbox="./media/quickstart-python/stream-logs-vs-code-2.png"::: |
### [Azure CLI](#tab/azure-cli)
Having issues? Refer first to the [Troubleshooting guide](./configure-language-p
## Clean up resources
-When you are finished with the sample app, you can remove all of the resources for the app from Azure to ensure you do not incur additional charges and keep your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app.
+When you're finished with the sample app, you can remove all of the resources for the app from Azure. Removing them ensures you don't incur extra charges and keeps your Azure subscription uncluttered. Removing the resource group also removes all resources in the resource group and is the fastest way to remove all Azure resources for your app.
### [Azure portal](#tab/azure-portal)
Follow these steps while signed-in to the Azure portal to delete a resource grou
| Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Remove resource group Azure portal 1](<./includes/quickstart-python/remove-resource-group-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-1-240px.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-1.png"::: |
-| [!INCLUDE [Remove resource group Azure portal 2](<./includes/quickstart-python/remove-resource-group-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-2-240px.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-2.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 1](<./includes/quickstart-python/remove-resource-group-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-1-240px.png" alt-text="A screenshot of how to search for and navigate to a resource group in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-1.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 2](<./includes/quickstart-python/remove-resource-group-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Delete Resource Group button in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-2.png"::: |
| [!INCLUDE [Remove resource group Azure portal 3](<./includes/quickstart-python/remove-resource-group-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-3-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-3.png"::: | ### [VS Code](#tab/vscode-aztools) | Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Remove resource group VS Code 1](<./includes/quickstart-python/remove-resource-group-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-visual-studio-code-1-240px.png" alt-text="A screenshot showing how to delete a resource group in VS Code using the Azure Tools extension." lightbox="./media/quickstart-python/remove-resource-group-visual-studio-code-1.png"::: |
+| [!INCLUDE [Remove resource group VS Code 1](<./includes/quickstart-python/remove-resource-group-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-visual-studio-code-1-240px.png" alt-text="A screenshot of how to delete a resource group in VS Code using the Azure Tools extension." lightbox="./media/quickstart-python/remove-resource-group-visual-studio-code-1.png"::: |
| [!INCLUDE [Remove resource group VS Code 2](<./includes/quickstart-python/remove-resource-group-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-visual-studio-code-2-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group from VS Code." lightbox="./media/quickstart-python/remove-resource-group-visual-studio-code-2.png"::: | ### [Azure CLI](#tab/azure-cli)
app-service Cli Deploy Privateendpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-privateendpoint.md
az appservice plan create \
## Create a Web App Now that you have an App Service Plan you can deploy a Web App.
-Create a Web App with [az appservice plan create](/cli/azure/webapp#az_webapp_create.
+Create a Web App with [az webapp create](/cli/azure/webapp#az_webapp_create).
This example creates a Web App named *mySiteName* in the Plan named *myAppServicePlan* ```azurecli-interactive
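# Sketch: the resource group name is assumed from the group created earlier in this article
az webapp create --name mySiteName --resource-group myResourceGroup --plan myAppServicePlan
```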
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
To host your application in Azure, you need to create Azure App Service web app. ### [Azure portal](#tab/azure-portal)
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure Database for PostgreSQL resource.
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resource.
| Instructions | Screenshot | |:-|--:|
You can create a PostgreSQL database in Azure using the [Azure portal](https://p
### [Azure portal](#tab/azure-portal)
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure Database for PostgreSQL resource.
| Instructions | Screenshot | |:-|--:|
application-gateway Application Gateway End To End Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-end-to-end-ssl-powershell.md
All configuration items are set before creating the application gateway. The fol
``` > [!NOTE]
- > This sample configures the certificate used for the TLS connection. The certificate needs to be in .pfx format, and the password must be 4 to 12 characters.
+ > This sample configures the certificate used for the TLS connection. The certificate needs to be in .pfx format.
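If you have a separate key and certificate, one way to produce the required .pfx file is with OpenSSL (file names are illustrative):

```bash
# Bundle the private key and certificate into a single .pfx for the listener
openssl pkcs12 -export -out appgwcert.pfx -inkey private.key -in cert.crt
```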
6. Create the HTTP listener for the application gateway. Assign the front-end IP configuration, port, and TLS/SSL certificate to use.
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
Title: Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in
description: This article describes how to deploy an extension-based Windows or Linux Hybrid Runbook Worker that you can use to run runbooks on Windows-based machines in your local datacenter or cloud environment. Previously updated : 09/28/2021 Last updated : 03/17/2022 #Customer intent: As a developer, I want to learn about extensions so that I can efficiently deploy Hybrid Runbook Workers.
To install and use Hybrid Worker extension using REST API, follow these steps. T
{ "properties": {"vmResourceId": "{VmResourceId}"} }- ``` Response of PUT call confirms if the Hybrid worker is created or not. To reconfirm, you would have to make another GET call on Hybrid worker as follows.
To install and use Hybrid Worker extension using REST API, follow these steps. T
The API call will provide the value with the key: `AutomationHybridServiceUrl`. Use the URL in the next step to enable extension on the VM.
-1. Install the Hybrid Worker Extension on the VM by running the following PowerShell cmdlet
- `(Required module: Az.Compute)`. Use the *properties.automationHybridServiceUrl* provided by the above API call.
- ```azurepowershell-interactive
+1. Install the Hybrid Worker Extension on the Azure VM by using the following API call.
+
+ ```http
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}?api-version=2021-11-01
- $settings = @{
- "AutomationAccountURL" = <AutomationHybridServiceUrl>;
- };
- Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType <HybridWorkerForWindows/HybridWorkerForLinux> -TypeHandlerVersion 0.1 -Settings $settings
- ```
+ ```
+
+ The request body should contain the following information:
- For ARC VMs, use the below cmdlet for enabling the extension.
+ ```json
+ {
+ "location": "<VMLocation>",
+ "properties": {
+ "publisher": "Microsoft.Azure.Automation.HybridWorker",
+ "type": "<HybridWorkerForWindows/HybridWorkerForLinux>",
+ "typeHandlerVersion": "<version>",
+ "settings": {
+ "AutomationAccountURL": "<AutomationHybridServiceUrl>"
+ }
+ }
+ }
- ```azurepowershell-interactive
- New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType <HybridWorkerForWindows/HybridWorkerForLinux> -TypeHandlerVersion 0.1 -Settings $settings
- ```
- The Output of the above commands will confirm if the extension is successfully installed or not on the targeted VM. You can also go to the VM in the Azure portal, and check status of extensions installed on the target VM under **Extensions** tab.
+ ```
+
+ For Azure Arc-enabled VMs, use the following API call to enable the extension:
-1. Go to the **Portal** page of the VM and in the **Extensions** tab, you can check the status of the Hybrid Worker Extension installation.
-
-1. Run the below API call to start executing jobs on the new Hybrid worker.
-
- ```http
- PUT
- https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}/jobs/{jobName}?api-version=2017-05-15-preview
-
- ```
+ ```http
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HybridCompute/machines/{machineName}/extensions/{extensionName}?api-version=2021-05-20
+ ```
+
The request body should contain the following information:
- ```
- {
- "properties": {
- "runbook": {
- "name": "{RunbookName}"
- },
- "parameters": {
- "key01": "value01",
- "key02": "value02"
- },
- "runOn": "{hybridRunbookWorkerGroupName}"
+ ```json
+ {
+ "location": "<VMLocation>",
+ "properties": {
+ "publisher": "Microsoft.Azure.Automation.HybridWorker",
+ "type": "<HybridWorkerForWindows/HybridWorkerForLinux>",
+ "typeHandlerVersion": "<version>",
+ "settings": {
+ "AutomationAccountURL": "<AutomationHybridServiceUrl>"
+ }
}
- }
- ```
+ }
+ ```
+ The response of the *PUT* call confirms whether the extension was successfully installed on the target VM. You can also go to the VM in the Azure portal and check the status of installed extensions on the **Extensions** tab.
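If you prefer the CLI for this check, the following is a minimal sketch using `az vm extension show`; the resource group and VM names are placeholders:

```azurecli
# Shows the provisioning state of the Hybrid Worker extension on the target VM
az vm extension show \
    --resource-group <VMResourceGroupName> \
    --vm-name <VMName> \
    --name HybridWorkerExtension \
    --query provisioningState
```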
+ ## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers
azure-app-configuration Concept Point Time Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-point-time-snapshot.md
Title: Retrieve key-value pairs from a point-in-time
+ Title: Retrieve key-values from a point-in-time
description: Retrieve old key-value pairs using point-in-time snapshots in Azure App Configuration, which maintains a record of changes to key-values. --++ Previously updated : 08/05/2020 Last updated : 03/14/2022 # Point-in-time snapshot
-Azure App Configuration maintains a record of changes made to key-values. This record provides a timeline of key-value changes. You can reconstruct the history of any key-value and provide its past value at any moment within the key history period (7 days for Free tier stores, or 30 days for Standard tier stores). Using this feature, you can "time-travel" backward and retrieve an old key-value. For example, you can recover configuration settings used before the most recent deployment in order to roll back the application to the previous configuration.
+Azure App Configuration maintains a record of changes made to key-values. This record provides a timeline of key-value changes. You can reconstruct the history of any key and provide its past value at any moment within the key history period (7 days for Free tier stores, or 30 days for Standard tier stores). Using this feature, you can "time-travel" backward and retrieve an old key-value. For example, you can recover configuration settings used before the most recent deployment in order to roll back the application to the previous configuration.
-## Key-value retrieval
+## Restore key-values
-You can use Azure portal or CLI to retrieve past key-values. In Azure CLI, use `az appconfig revision list`, adding appropriate parameters to retrieve the required values. Specify the Azure App Configuration instance by providing either the store name (`--name <app-config-store-name>`) or by using a connection string (`--connection-string <your-connection-string>`). Restrict the output by specifying a specific point in time (`--datetime`) and by specifying the maximum number of items to return (`--top`).
+You can use the Azure portal or the Azure CLI to retrieve past key-values.
-If you don't have Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md).
+### [Portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **All resources**, and select the App Configuration store instance where your key-values are stored.
+
+2. In the **Operations** menu, select **Restore**.
+
+ :::image type="content" source="media/restore-key-value-portal.png" alt-text="Screenshot of the Azure portal, selecting restore":::
+
+3. Select **Date: Select date** to select a date and time you want to revert to.
+4. Click outside the date and time fields or press **Tab** to validate your choice. You can now see which key-values have changed between your selected date and time and the current time. This step helps you understand which keys and values you're preparing to revert to.
+
+ :::image type="content" source="media/restore-key-value-past-values.png" alt-text="Screenshot of the Azure portal with saved key-values":::
+
+ The portal displays a table of key-values. The first column includes symbols indicating what will happen if you restore the data for the chosen date and time:
+ - The red minus sign (–) means that the key-value didn't exist at your selected date and time and will be deleted.
+ - The green plus sign (+) means that the key-value existed at your selected date and time and doesn't exist now. If you revert to the selected date and time, it will be added back to your configuration.
+ - The orange bullet sign (•) means that the key-value was modified since your selected date and time. The key will revert to the value it had at the selected date and time.
+
+5. Select the checkbox in a row to select or deselect that key-value. When a key-value is selected, the table displays the difference between its current value and its value at the selected date and time.
+
+ :::image type="content" source="media/restore-key-value-compare.png" alt-text="Screenshot of the Azure portal with compared keys-values":::
+
+ In the above example, the preview shows the key TestApp:Settings:BackgroundColor, which currently has a value of #FFF. This value will change to #45288E if you proceed with the restore.
+
+ You can select one or more checkboxes in the table to take action on the key-value of your choice. You can also use the select-all checkbox at the very top of the list to select/deselect all key-values.
+
+6. Select **Restore** to restore the selected key-values to the selected date and time.
+
+ :::image type="content" source="media/restore-key-value-confirm.png" alt-text="Screenshot of the Azure portal selecting Restore":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI as explained below to retrieve and restore past key-values. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md).
+
+In the CLI, use `az appconfig revision list` to view changes or `az appconfig kv restore` to restore key-values, adding appropriate parameters. Specify the Azure App Configuration instance by providing either the store name (`--name <app-config-store-name>`) or by using a connection string (`--connection-string <your-connection-string>`). Restrict the output by specifying a specific point in time (`--datetime`), a label (`--label`), and the maximum number of items to return (`--top`).
Retrieve all recorded changes to your key-values. ```azurecli
-az appconfig revision list --name <your-app-config-store-name>.
+az appconfig revision list --name <your-app-config-store-name>
+```
+
+Restore all key-values to a specific point in time.
+
+```azurecli
+az appconfig kv restore --name <app-config-store-name> --datetime "2019-05-01T11:24:12Z"
+```
+
+Restore key-values with any label starting with `v1.` to a specific point in time.
+
+```azurecli
+az appconfig kv restore --name <app-config-store-name> --label v1.* --datetime "2019-05-01T11:24:12Z"
+```
+
+For more examples of CLI commands and optional parameters to restore key-values, go to the [Azure CLI documentation](/cli/azure/appconfig/kv).
+
+You can also access the history of a specific key-value. This feature allows you to check the value of a specific key at a chosen point in time and to revert to a past value without updating any other key-value.
+++
+## Historical/Timeline view of key-value
+
+ > [!TIP]
+ > This method is convenient if you have no more than a couple of changes to make, as Configuration explorer only lets you make changes key by key. If you need to restore multiple key-values at once, use the **Restore** menu instead.
+
+### [Portal](#tab/azure-portal)
+
+You can also access the revision history of a specific key-value in the portal.
+
+1. In the **Operations** menu, select **Configuration explorer**.
+1. Select **More actions** for the key you want to explore, and then select **History**.
+
+ :::image type="content" source="media/explorer-key-history.png" alt-text="Screenshot of the Azure portal selecting key-value history":::
+
+ You can now see the revision history for the selected key and information about the changes.
+
+1. Select **Restore** to restore the key and value to this point in time.
+
+ :::image type="content" source="media/explorer-key-day-restore.png" alt-text="Screenshot of the Azure portal viewing key-value data for a specific date":::
++
+### [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI as explained below to retrieve and restore a single key-value. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md).
+
+In the CLI, use `az appconfig revision list` to view changes to a key-value or use `az appconfig kv restore` to restore a key-value, adding appropriate parameters. Specify the Azure App Configuration instance by providing either the store name (`--name <app-config-store-name>`) or by using a connection string (`--connection-string <your-connection-string>`). Restrict the output by specifying a specific key (`--key`). Optionally, specify a label (`--label`), a point in time (`--datetime`) and the maximum number of items to return (`--top`).
+
+List revision history for key "color" with any labels.
+
+```azurecli
+az appconfig revision list --name <app-config-store-name> --key color
```
-Retrieve all recorded changes for the key `environment` and the labels `test` and `prod`.
+List revision history of a specific key-value with a label.
```azurecli
-az appconfig revision list --name <your-app-config-store-name> --key environment --label test,prod
+az appconfig revision list --name <app-config-store-name> --key color --label test
```
-Retrieve all recorded changes in the hierarchical key space `environment:prod`.
+List revision history of a key-value with multiple labels.
```azurecli
-az appconfig revision list --name <your-app-config-store-name> --key environment:prod:*
+az appconfig revision list --name <app-config-store-name> --key color --label test,prod,\0
```

Retrieve all recorded changes for the key `color` at a specific point-in-time.

```azurecli
-az appconfig revision list --connection-string <your-app-config-connection-string> --key color --datetime "2019-05-01T11:24:12Z"
+az appconfig revision list --name <app-config-store-name> --key color --datetime "2019-05-01T11:24:12Z"
```
-Retrieve the last 10 recorded changes to your key-values and return only the values for `key`, `label`, and `last_modified` time stamp.
+Retrieve the last 10 recorded changes for the key `color` at a specific point-in-time.
-```azurecli-interactive
-az appconfig revision list --name <your-app-config-store-name> --top 10 --fields key label last_modified
+```azurecli
+az appconfig revision list --name <app-config-store-name> --key color --top 10 --datetime "2019-05-01T11:24:12Z"
```
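The restore side works the same way for a single key. A minimal sketch, assuming the same placeholder store and key names used above:

```azurecli
az appconfig kv restore --name <app-config-store-name> --key color --datetime "2019-05-01T11:24:12Z"
```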
+For more examples and optional parameters, go to the [Azure CLI documentation](/cli/azure/appconfig/revision).
+++

## Next steps

> [!div class="nextstepaction"]
-> [Create an ASP.NET Core web app](./quickstart-aspnet-core-app.md)
+> [Create an ASP.NET Core web app](./quickstart-aspnet-core-app.md)
azure-app-configuration Quickstart Feature Flag Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md
dotnet new mvc --no-https --output TestFeatureFlags
} ``` #### [.NET Core 3.x](#tab/core3x)-
+
+ Add the following code:
```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
+ services.AddAzureAppConfiguration();
    services.AddFeatureManagement();
}
```
+ Then add the following:
+
+ ```csharp
+ public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
+ {
+ // ...
+ app.UseAzureAppConfiguration();
+ }
+ ```
+
#### [.NET Core 2.x](#tab/core2x) ```csharp
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-development.md
Last updated 02/25/2022 + # Development
We recommend performance testing to choose the right tier and validate connectio
Locate your cache instance and your application in the same region. Connecting to a cache in a different region can significantly increase latency and reduce reliability.
-While you can connect from outside of Azure, it is not recommended *especially when using Redis as a cache*. If you're using Redis server as just a key/value store, latency may not be the primary concern.
+While you can connect from outside of Azure, it isn't recommended, *especially when using Redis as a cache*. If you're using Redis server as just a key/value store, latency may not be the primary concern.
## Use TLS encryption
To continue to pin intermediate certificates, add the following to the pinned in
| [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 6c3af02e7f269aa73afd0eff2a88a4a1f04ed1e5 | | [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 30e01761ab97e59a06b41ef20af6f2de7ef4f7b0 |
-If your application validates certificate in code, you need to modify it to recognize the properties for example, Issuers, Thumbprint of the newly pinned certificates. This extra verification should cover all pinned certificates to be more future-proof.
+If your application validates certificates in code, you need to modify it to recognize the properties (for example, Issuer and Thumbprint) of the newly pinned certificates. This extra verification should cover all pinned certificates to be more future-proof.
+
+#### Rely on hostname not public IP address
+
+The public IP address assigned to your cache can change as a result of a scale operation or backend improvement. We recommend relying on the hostname, in the form `<cachename>.redis.cache.windows.net`, instead of an explicit public IP address.
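If you need to look up the hostname of an existing cache, one option is the CLI. A minimal sketch, assuming placeholder resource names:

```azurecli
# Prints the cache's DNS hostname, for example mycache.redis.cache.windows.net
az redis show --name <cachename> --resource-group <resource-group> --query hostName --output tsv
```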
## Client library-specific guidance
azure-cache-for-redis Cache Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-troubleshoot-connectivity.md
In this article, we provide troubleshooting help for connecting your client appl
- [Virtual network configuration](#virtual-network-configuration) - [Private endpoint configuration](#private-endpoint-configuration) - [Firewall rules](#third-party-firewall-or-external-proxy)
+ - [Public IP address change](#public-ip-address-change)
## Intermittent connectivity issues
Check if the Max aggregate for `Connected Clients` metric is close or higher tha
### Kubernetes hosted applications - If your client application is hosted on Kubernetes, check that the pod running the client application or the cluster nodes aren't under memory/CPU/Network pressure. A pod running the client application can be affected by other pods running on the same node and throttle Redis connections or IO operations.-- If you're using *Istio* or any other service mesh, check that your service mesh proxy reserves port 13000-13019 or 15000-15019. These ports are used by clients to communicate with a clustered Azure Cache For Redis nodes and could cause connectivity issues on those ports.
+- If you're using _Istio_ or any other service mesh, check that your service mesh proxy reserves ports 13000-13019 or 15000-15019. These ports are used by clients to communicate with clustered Azure Cache for Redis nodes and could cause connectivity issues on those ports.
### Linux-based client application
Steps to check your virtual network configuration:
1. Check if a virtual network is assigned to your cache from the "**Virtual Network**" section under the **Settings** on the Resource menu of the Azure portal. 1. Ensure that the client host machine is in the same virtual network as the Azure Cache For Redis.
-1. When the client application is in a different VNet than your Azure Cache For Redis, both VNets must have VNet peering enabled within the same Azure region.
+1. When the client application is in a different VNet from your Azure Cache For Redis, both VNets must have VNet peering enabled within the same Azure region.
1. Validate that the [Inbound](cache-how-to-premium-vnet.md#inbound-port-requirements) and [Outbound](cache-how-to-premium-vnet.md#outbound-port-requirements) rules meet the requirement. 1. For more information, see [Configure a virtual network - Premium-tier Azure Cache for Redis instance](cache-how-to-premium-vnet.md#how-can-i-verify-that-my-cache-is-working-in-a-virtual-network).
If you have a firewall configured for your Azure Cache For Redis, ensure that yo
When you use a third-party firewall or proxy in your network, check that the endpoint for Azure Cache for Redis, `*.redis.cache.windows.net`, is allowed along with the ports `6379` and `6380`. You might need to allow more ports when using a clustered cache or geo-replication.
+### Public IP address change
+
+If you've configured any networking or security resource to use your cache's public IP address, check to see if your cache's public IP address changed. For more information, see [Rely on hostname not public IP address for your cache](cache-best-practices-development.md#rely-on-hostname-not-public-ip-address).
+
## Next steps

These articles provide more information on connectivity and resilience:
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
See the [Example section](#example) for complete examples.
## Usage
-The parameter type supported by the Event Grid trigger depends on the Functions runtime version, the extension package version, and the C# modality used.
+The parameter type supported by the Azure Cosmos DB trigger depends on the Functions runtime version, the extension package version, and the C# modality used.
The trigger requires a second collection that it uses to store _leases_ over the partitions. Both the collection being monitored and the collection that contains the leases must be available for the trigger to work.
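For illustration, the lease collection can be created ahead of time with the CLI. This is a sketch for a Core (SQL) API account with assumed placeholder names; the lease container conventionally uses `/id` as its partition key:

```azurecli
az cosmosdb sql container create \
    --resource-group <resource-group> \
    --account-name <cosmos-account> \
    --database-name <database> \
    --name leases \
    --partition-key-path "/id"
```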
The trigger doesn't indicate whether a document was updated or inserted, it just
- [Read an Azure Cosmos DB document (Input binding)](./functions-bindings-cosmosdb-v2-input.md) - [Save changes to an Azure Cosmos DB document (Output binding)](./functions-bindings-cosmosdb-v2-output.md)
-[version 4.x of the extension]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4
+[version 4.x of the extension]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
This guide contains detailed information to help you succeed developing Azure Fu
As a Java developer, if you're new to Azure Functions, please consider first reading one of the following articles:
-| Getting started | Concepts|
-| -- | -- |
-| <ul><li>[Java function using Visual Studio Code](./create-first-function-vs-code-java.md)</li><li>[Jav)</li></ul> |
+| Getting started | Concepts| Scenarios/samples |
+| -- | -- | -- |
+| <ul><li>[Java function using Visual Studio Code](./create-first-function-vs-code-java.md)</li><li>[Jav)</li></ul> | <ul><li>[Java samples with different triggers](/samples/azure-samples/azure-functions-samples-java/azure-functions-java/)</li><li>[Event Hub trigger and Cosmos DB output binding](/samples/azure-samples/java-functions-eventhub-cosmosdb/sample/)</li></ul> |
## Java function basics
For more information about Azure Functions Java development, see the following r
* Local development and debug with [Visual Studio Code](https://code.visualstudio.com/docs/jav) * [Remote Debug Java functions using Visual Studio Code](https://code.visualstudio.com/docs/java/java-serverless#_remote-debug-functions-running-in-the-cloud) * [Maven plugin for Azure Functions](https://github.com/Microsoft/azure-maven-plugins/blob/develop/azure-functions-maven-plugin/README.md)
-* Streamline function creation through the `azure-functions:add` goal, and prepare a staging directory for [ZIP file deployment](deployment-zip-push.md).
+* Streamline function creation through the `azure-functions:add` goal, and prepare a staging directory for [ZIP file deployment](deployment-zip-push.md).
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 03/07/2022 Last updated : 03/21/2022 # Compare Azure Government and global Azure
Azure Government services operate the same way as the corresponding services in
You can use AzureCLI or PowerShell to obtain Azure Government endpoints for services you provisioned: -- Use **Azure CLI** to run the [az cloud show](/cli/azure/cloud#az_cloud_show) command and provide `AzureUSGovernment` as the name of the target cloud environment. For example,
+- Use **Azure CLI** to run the [az cloud show](/cli/azure/cloud#az_cloud_show) command and provide `AzureUSGovernment` as the name of the target cloud environment. For example,
```azurecli
az cloud show --name AzureUSGovernment
```
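To make Azure Government the active cloud for subsequent commands, you can then run `az cloud set`:

```azurecli
az cloud set --name AzureUSGovernment
```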
The following Azure Database for MySQL **features are not currently available**
The following Azure Database for PostgreSQL **features are not currently available** in Azure Government: -- Hyperscale (Citus) and Flexible server deployment options
+- Hyperscale (Citus) deployment option
- The following features of the Single server deployment option - Advanced Threat Protection - Backup with long-term retention
The following Azure Advisor recommendation **features are not currently availabl
- (Preview) Consider App Service stamp fee reserved capacity to save over your on-demand costs. - (Preview) Consider Azure Data Explorer reserved capacity to save over your pay-as-you-go costs. - (Preview) Consider Azure Synapse Analytics (formerly SQL DW) reserved capacity to save over your pay-as-you-go costs.
- - (Preview) Consider Blob storage reserved capacity to save on Blob v2 and and Data Lake Storage Gen2 costs.
- - (Preview) Consider Blob storage reserved instance to save on Blob v2 and and Data Lake Storage Gen2 costs.
+ - (Preview) Consider Blob storage reserved capacity to save on Blob v2 and Data Lake Storage Gen2 costs.
+ - (Preview) Consider Blob storage reserved instance to save on Blob v2 and Data Lake Storage Gen2 costs.
- (Preview) Consider Cache for Redis reserved capacity to save over your pay-as-you-go costs. - (Preview) Consider Cosmos DB reserved capacity to save over your pay-as-you-go costs. - (Preview) Consider Database for MariaDB reserved capacity to save over your pay-as-you-go costs.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 03/07/2022 Last updated : 03/21/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| Service | FedRAMP High | DoD IL2 | | - |::|:-:|
+| [Advisor](../../advisor/index.yml) | &#x2705; | &#x2705; |
| [AI Builder](/ai-builder/) | &#x2705; | &#x2705; |
+| [Analysis Services](../../analysis-services/index.yml) | &#x2705; | &#x2705; |
| [API Management](../../api-management/index.yml) | &#x2705; | &#x2705; |
+| [App Configuration](../../azure-app-configuration/index.yml) | &#x2705; | &#x2705; |
| [Application Gateway](../../application-gateway/index.yml) | &#x2705; | &#x2705; | | [Automation](../../automation/index.yml) | &#x2705; | &#x2705; | | [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Active Directory Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | | [Azure Active Directory Provisioning Service](../../active-directory/app-provisioning/how-provisioning-works.md)| &#x2705; | &#x2705; | | [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
-| [Azure Advisor](../../advisor/index.yml) | &#x2705; | &#x2705; |
-| [Azure Analysis Services](../../analysis-services/index.yml) | &#x2705; | &#x2705; |
-| [Azure App Configuration](../../azure-app-configuration/index.yml) | &#x2705; | &#x2705; |
+| [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; |
| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; |
-| [Azure Archive Storage](../../storage/blobs/access-tiers-overview.md) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
-| [Azure Backup](../../backup/index.yml) | &#x2705; | &#x2705; |
-| [Azure Bastion](../../bastion/index.yml) | &#x2705; | &#x2705; |
-| [Azure Blueprints](../../governance/blueprints/index.yml) | &#x2705; | &#x2705; |
-| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; |
| [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; |
-| [Azure Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; |
-| [Azure Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; |
| [Azure Cosmos DB](../../cosmos-db/index.yml) | &#x2705; | &#x2705; |
-| [Azure Cost Management and Billing](../../cost-management-billing/index.yml) | &#x2705; | &#x2705; |
-| [Azure Data Box](../../databox/index.yml) **&ast;** | &#x2705; | &#x2705; |
-| [Azure Data Explorer](/azure/data-explorer/) | &#x2705; | &#x2705; |
-| [Azure Data Share](../../data-share/index.yml) | &#x2705; | &#x2705; |
| [Azure Database for MariaDB](../../mariadb/index.yml) | &#x2705; | &#x2705; | | [Azure Database for MySQL](../../mysql/index.yml) | &#x2705; | &#x2705; | | [Azure Database for PostgreSQL](../../postgresql/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
-| [Azure Database Migration Service](../../dms/index.yml) | &#x2705; | &#x2705; |
| [Azure Databricks](/azure/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; |
-| [Azure DDoS Protection](../../ddos-protection/index.yml) | &#x2705; | &#x2705; |
-| [Azure Dedicated HSM](../../dedicated-hsm/index.yml) | &#x2705; | &#x2705; |
-| [Azure DevTest Labs](../../devtest-labs/index.yml) | &#x2705; | &#x2705; |
-| [Azure DNS](../../dns/index.yml) | &#x2705; | &#x2705; |
-| [Azure ExpressRoute](../../expressroute/index.yml) | &#x2705; | &#x2705; |
-| [Azure Firewall](../../firewall/index.yml) | &#x2705; | &#x2705; |
-| [Azure Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; |
| [Azure for Education](https://azureforeducation.microsoft.com/) | &#x2705; | &#x2705; |
-| [Azure Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; |
-| [Azure Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; |
-| [Azure Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; |
-| [Azure Health Bot](/healthbot/) | &#x2705; | &#x2705; |
-| [Azure HDInsight](../../hdinsight/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
-| [Azure Healthcare APIs](../../healthcare-apis/index.yml) (formerly Azure API for FHIR) | &#x2705; | &#x2705; |
-| [Azure HPC Cache](../../hpc-cache/index.yml) | &#x2705; | &#x2705; |
-| [Azure Immersive Reader](../../applied-ai-services/immersive-reader/index.yml) | &#x2705; | &#x2705; |
| [Azure Information Protection](/azure/information-protection/) | &#x2705; | &#x2705; |
-| [Azure Internet Analyzer](../../internet-analyzer/index.yml) | &#x2705; | &#x2705; |
-| [Azure IoT Hub](../../iot-hub/index.yml) | &#x2705; | &#x2705; |
| [Azure Kubernetes Service (AKS)](../../aks/index.yml) | &#x2705; | &#x2705; |
-| [Azure Lab Services](../../lab-services/index.yml) | &#x2705; | &#x2705; |
-| [Azure Lighthouse](../../lighthouse/index.yml) | &#x2705; | &#x2705; |
-| [Azure Logic Apps](../../logic-apps/index.yml) | &#x2705; | &#x2705; |
-| [Azure Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; |
-| [Azure Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; |
| [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | &#x2705; | &#x2705; | | [Azure Maps](../../azure-maps/index.yml) | &#x2705; | &#x2705; |
-| [Azure Media Services](../../media-services/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
-| [Azure Migrate](../../migrate/index.yml) | &#x2705; | &#x2705; |
| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | | [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; |
-| [Azure Open Datasets](../../open-datasets/index.yml) | &#x2705; | &#x2705; |
-| [Azure Peering Service](../../peering-service/index.yml) | &#x2705; | &#x2705; |
| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | | [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; |
-| [Azure Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Azure Red Hat OpenShift](../../openshift/index.yml) | &#x2705; | &#x2705; |
-| [Azure Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; |
| [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; |
-| [Azure Scheduler](../../scheduler/index.yml) (replaced by [Azure Logic Apps](../../logic-apps/index.yml)) | &#x2705; | &#x2705; |
-| [Azure Service Fabric](../../service-fabric/index.yml) | &#x2705; | &#x2705; |
-| [Azure Service Health](../../service-health/index.yml) | &#x2705; | &#x2705; |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; |
-| [Azure SignalR Service](../../azure-signalr/index.yml) | &#x2705; | &#x2705; |
-| [Azure Site Recovery](../../site-recovery/index.yml) | &#x2705; | &#x2705; |
| [Azure Sphere](/azure-sphere/) | &#x2705; | &#x2705; |
-| [Azure Spring Cloud](../../spring-cloud/index.yml) | &#x2705; | &#x2705; |
-| [Azure SQL Database](../../azure-sql/database/sql-database-paas-overview.md) | &#x2705; | &#x2705; |
| [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; |
-| [Azure Stream Analytics](../../stream-analytics/index.yml) | &#x2705; | &#x2705; |
-| [Azure Synapse Analytics](../../synapse-analytics/index.yml) | &#x2705; | &#x2705; |
-| [Azure Time Series Insights](../../time-series-insights/index.yml) | &#x2705; | &#x2705; |
| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | | [Azure VMware Solution](../../azure-vmware/index.yml) | &#x2705; | &#x2705; |
-| [Azure Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; |
+| [Backup](../../backup/index.yml) | &#x2705; | &#x2705; |
+| [Bastion](../../bastion/index.yml) | &#x2705; | &#x2705; |
| [Batch](../../batch/index.yml) | &#x2705; | &#x2705; |
+| [Blueprints](../../governance/blueprints/index.yml) | &#x2705; | &#x2705; |
+| [Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; |
+| [Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; |
| [Cloud Shell](../../cloud-shell/overview.md) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** |
+| [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; |
| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Container Instances](../../container-instances/index.yml) | &#x2705; | &#x2705; | | [Container Registry](../../container-registry/index.yml) | &#x2705; | &#x2705; | | [Content Delivery Network](../../cdn/index.yml) | &#x2705; | &#x2705; |
-| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Cost Management and Billing](../../cost-management-billing/index.yml) | &#x2705; | &#x2705; |
+| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; |
+| [Data Box](../../databox/index.yml) **&ast;** | &#x2705; | &#x2705; |
+| [Data Explorer](/azure/data-explorer/) | &#x2705; | &#x2705; |
| [Data Factory](../../data-factory/index.yml) | &#x2705; | &#x2705; |
+| [Data Share](../../data-share/index.yml) | &#x2705; | &#x2705; |
+| [Database Migration Service](../../dms/index.yml) | &#x2705; | &#x2705; |
| [Dataverse](/powerapps/maker/data-platform/) (incl. [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake)) | &#x2705; | &#x2705; |
+| [DDoS Protection](../../ddos-protection/index.yml) | &#x2705; | &#x2705; |
+| [Dedicated HSM](../../dedicated-hsm/index.yml) | &#x2705; | &#x2705; |
+| [DevTest Labs](../../devtest-labs/index.yml) | &#x2705; | &#x2705; |
+| [DNS](../../dns/index.yml) | &#x2705; | &#x2705; |
| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | | [Dynamics 365 Commerce](/dynamics365/commerce/)| &#x2705; | &#x2705; | | [Dynamics 365 Customer Service](/dynamics365/customer-service/overview)| &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Dynamics 365 Field Service](/dynamics365/field-service/overview)| &#x2705; | &#x2705; | | [Dynamics 365 Finance](/dynamics365/finance/)| &#x2705; | &#x2705; | | [Dynamics 365 Guides](/dynamics365/mixed-reality/guides/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Dynamics 365 Supply Chain Management](/dynamics365/supply-chain/)| &#x2705; | &#x2705; | | [Event Grid](../../event-grid/index.yml) | &#x2705; | &#x2705; | | [Event Hubs](../../event-hubs/index.yml) | &#x2705; | &#x2705; |
+| [ExpressRoute](../../expressroute/index.yml) | &#x2705; | &#x2705; |
+| [File Sync](../../storage/file-sync/index.yml) | &#x2705; | &#x2705; |
+| [Firewall](../../firewall/index.yml) | &#x2705; | &#x2705; |
+| [Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; |
+| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; |
+| [Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; |
+| [Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; |
+| [Health Bot](/healthbot/) | &#x2705; | &#x2705; |
+| [HDInsight](../../hdinsight/index.yml) | &#x2705; | &#x2705; |
+| [HPC Cache](../../hpc-cache/index.yml) | &#x2705; | &#x2705; |
+| [Immersive Reader](../../applied-ai-services/immersive-reader/index.yml) | &#x2705; | &#x2705; |
| [Import/Export](../../import-export/index.yml) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Internet Analyzer](../../internet-analyzer/index.yml) | &#x2705; | &#x2705; |
+| [IoT Hub](../../iot-hub/index.yml) | &#x2705; | &#x2705; |
| [Key Vault](../../key-vault/index.yml) | &#x2705; | &#x2705; |
+| [Lab Services](../../lab-services/index.yml) | &#x2705; | &#x2705; |
+| [Lighthouse](../../lighthouse/index.yml) | &#x2705; | &#x2705; |
| [Load Balancer](../../load-balancer/index.yml) | &#x2705; | &#x2705; |
+| [Logic Apps](../../logic-apps/index.yml) | &#x2705; | &#x2705; |
+| [Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; |
+| [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Media Services](../../media-services/index.yml) | &#x2705; | &#x2705; |
| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | | [Microsoft Azure Attestation](../../attestation/index.yml)| &#x2705; | &#x2705; | | [Microsoft Azure Marketplace portal](https://azuremarketplace.microsoft.com/marketplace/)| &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Intune](/mem/intune/) | &#x2705; | &#x2705; | | [Microsoft Sentinel](../../sentinel/index.yml) | &#x2705; | &#x2705; | | [Microsoft Stream](/stream/) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Migrate](../../migrate/index.yml) | &#x2705; | &#x2705; |
| [Network Watcher](../../network-watcher/index.yml) (incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md)) | &#x2705; | &#x2705; | | [Notification Hubs](../../notification-hubs/index.yml) | &#x2705; | &#x2705; |
+| [Open Datasets](../../open-datasets/index.yml) | &#x2705; | &#x2705; |
+| [Peering Service](../../peering-service/index.yml) | &#x2705; | &#x2705; |
| [Power Apps](/powerapps/) | &#x2705; | &#x2705; | | [Power Apps Portal](https://powerapps.microsoft.com/portals/) | &#x2705; | &#x2705; | | [Power Automate](/power-automate/) (formerly Microsoft Flow) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | | [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; | | [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; |
+| [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; |
+| [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
+| [Scheduler](../../scheduler/index.yml) (replaced by [Logic Apps](../../logic-apps/index.yml)) | &#x2705; | &#x2705; |
| [Service Bus](../../service-bus-messaging/index.yml) | &#x2705; | &#x2705; |
+| [Service Fabric](../../service-fabric/index.yml) | &#x2705; | &#x2705; |
+| [Service Health](../../service-health/index.yml) | &#x2705; | &#x2705; |
+| [SignalR Service](../../azure-signalr/index.yml) | &#x2705; | &#x2705; |
+| [Site Recovery](../../site-recovery/index.yml) | &#x2705; | &#x2705; |
+| [Spring Cloud](../../spring-cloud/index.yml) | &#x2705; | &#x2705; |
+| [SQL Database](../../azure-sql/database/sql-database-paas-overview.md) | &#x2705; | &#x2705; |
| [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | &#x2705; | &#x2705; | | [SQL Server Stretch Database](../../sql-server-stretch-database/index.yml) | &#x2705; | &#x2705; |
+| [Storage: Archive](../../storage/blobs/access-tiers-overview.md) | &#x2705; | &#x2705; |
| [Storage: Blobs](../../storage/blobs/index.yml) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** |
| [Storage: Data Movement](../../storage/common/storage-use-data-movement-library.md) | &#x2705; | &#x2705; | | [Storage: Disks (incl. managed disks)](../../virtual-machines/managed-disks-overview.md) | &#x2705; | &#x2705; |
-| [Storage: Files](../../storage/files/index.yml) (incl. [Azure File Sync](../../storage/file-sync/index.yml)) | &#x2705; | &#x2705; |
+| [Storage: Files](../../storage/files/index.yml) | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** |
| [Storage: Queues](../../storage/queues/index.yml) | &#x2705; | &#x2705; | | [Storage: Tables](../../storage/tables/index.yml) | &#x2705; | &#x2705; | | [StorSimple](../../storsimple/index.yml) | &#x2705; | &#x2705; |
+| [Stream Analytics](../../stream-analytics/index.yml) | &#x2705; | &#x2705; |
+| [Synapse Analytics](../../synapse-analytics/index.yml) | &#x2705; | &#x2705; |
+| [Time Series Insights](../../time-series-insights/index.yml) | &#x2705; | &#x2705; |
| [Traffic Manager](../../traffic-manager/index.yml) | &#x2705; | &#x2705; | | [Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/index.yml) (formerly Video Indexer) | &#x2705; | &#x2705; | | [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | &#x2705; | &#x2705; | | [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; | | [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; |
+| [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; |
| [Web Apps (App Service)](../../app-service/index.yml) | &#x2705; | &#x2705; | | [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: February 2022*
+*Last updated: March 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| Service | FedRAMP High | DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 | | - |::|:-:|:-:|:-:|:-:|
+| [Advisor](../../advisor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [AI Builder](/ai-builder/) | &#x2705; | &#x2705; | | | |
+| [Analysis Services](../../analysis-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [API Management](../../api-management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [App Configuration](../../azure-app-configuration/index.yml) | &#x2705; | &#x2705; | &#x2705; |&#x2705; | |
| [Application Gateway](../../application-gateway/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Automation](../../automation/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Automation](../../automation/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Active Directory (Premium P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Active Directory Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure AD access reviews](../../active-directory/governance/access-reviews-overview.md) | | | | | &#x2705; |
| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Advisor](../../advisor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Analysis Services](../../analysis-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure App Configuration](../../azure-app-configuration/index.yml) | &#x2705; | &#x2705; | &#x2705; |&#x2705; | |
+| [Azure AD Privileged Identity Management](../../active-directory/privileged-identity-management/index.yml) | | | | | &#x2705; |
+| [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; | | | |
-| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | | | |
-| [Azure Archive Storage](../../storage/blobs/access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Backup](../../backup/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
-| [Azure Bastion](../../bastion/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Blueprints](../../governance/blueprints/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | | | |
| [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Cosmos DB](../../cosmos-db/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Cost Management and Billing](../../cost-management-billing/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure CXP Nomination Portal](https://cxp.azure.com/nominationportal/nominationform/fasttrack)| &#x2705; | &#x2705; | | | |
-| [Azure Data Box](../../databox/index.yml) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Data Explorer](/azure/data-explorer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Data Share](../../data-share/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Database for MariaDB](../../mariadb/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Database for MySQL](../../mysql/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Database for PostgreSQL](../../postgresql/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
-| [Azure Database Migration Service](../../dms/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Databricks](/azure/databricks/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure DDoS Protection](../../ddos-protection/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Dedicated HSM](../../dedicated-hsm/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure DevTest Labs](../../devtest-labs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure DNS](../../dns/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure ExpressRoute](../../expressroute/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Firewall](../../firewall/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; | | | |
-| [Azure Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure HDInsight](../../hdinsight/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Healthcare APIs](../../healthcare-apis/index.yml) (formerly Azure API for FHIR) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure HPC Cache](../../hpc-cache/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Information Protection](/azure/information-protection/) **&ast;&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure IoT Hub](../../iot-hub/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Kubernetes Service (AKS)](../../aks/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Lab Services](../../lab-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Lighthouse](../../lighthouse/index.yml)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Logic Apps](../../logic-apps/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Maps](../../azure-maps/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Media Services](../../media-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Migrate](../../migrate/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Log Analytics](../../azure-monitor/logs/data-platform-logs.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Monitor](../../azure-monitor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| Azure Monitor [Application Insights](../../azure-monitor/app/app-insights-overview.md) | | | | | &#x2705; |
+| Azure Monitor [Log Analytics](../../azure-monitor/logs/data-platform-logs.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Peering Service](../../peering-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Policy's guest configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Resource Manager](../../azure-resource-manager/management/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Resource Mover](../../resource-mover/index.yml) | &#x2705; | &#x2705; | | | |
-| [Azure Route Server](../../route-server/index.yml) | &#x2705; | &#x2705; | | | |
-| [Azure Scheduler](../../scheduler/index.yml) (replaced by [Azure Logic Apps](../../logic-apps/index.yml)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Service Fabric](../../service-fabric/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Service Health](../../service-health/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; | | | |
-| [Azure SignalR Service](../../azure-signalr/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Site Recovery](../../site-recovery/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure SQL Database](../../azure-sql/database/sql-database-paas-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Stream Analytics](../../stream-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Synapse Analytics](../../synapse-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Azure Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Backup](../../backup/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Bastion](../../bastion/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Batch](../../batch/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Blueprints](../../governance/blueprints/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Cloud Shell](../../cloud-shell/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; | | | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Container Instances](../../container-instances/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Container Registry](../../container-registry/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Content Delivery Network](../../cdn/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Cost Management and Billing](../../cost-management-billing/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Data Factory](../../data-factory/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Data Box](../../databox/index.yml) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Data Explorer](/azure/data-explorer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Data Factory](../../data-factory/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Data Share](../../data-share/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Database Migration Service](../../dms/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Dataverse](/powerapps/maker/data-platform/) (formerly Common Data Service) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [DDoS Protection](../../ddos-protection/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Dedicated HSM](../../dedicated-hsm/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [DevTest Labs](../../devtest-labs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [DNS](../../dns/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Dynamics 365 Customer Insights](/dynamics365/customer-insights/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Dynamics 365 Customer Voice](/dynamics365/customer-voice/about) (formerly Forms Pro) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Dynamics 365 Finance](/dynamics365/finance/) | &#x2705; | &#x2705; | | | |
| [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Dynamics 365 Sales](/dynamics365/sales/help-hub) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Dynamics 365 Supply Chain Management](/dynamics365/supply-chain/) | &#x2705; | &#x2705; | | | |
| [Event Grid](../../event-grid/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Event Hubs](../../event-hubs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [ExpressRoute](../../expressroute/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [File Sync](../../storage/file-sync/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Firewall](../../firewall/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Firewall Manager](../../firewall-manager/index.yml) | &#x2705; | &#x2705; | | | |
+| [Form Recognizer](../../applied-ai-services/form-recognizer/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Front Door](../../frontdoor/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Functions](../../azure-functions/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | &#x2705; | | |
+| [HDInsight](../../hdinsight/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [HPC Cache](../../hpc-cache/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Import/Export](../../import-export/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [IoT Hub](../../iot-hub/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Key Vault](../../key-vault/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Lab Services](../../lab-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Lighthouse](../../lighthouse/index.yml)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Load Balancer](../../load-balancer/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Logic Apps](../../logic-apps/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Machine Learning](../../machine-learning/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Managed Applications](../../azure-resource-manager/managed-applications/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Media Services](../../media-services/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; |
-| [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; |
+| [Microsoft Azure portal](../../azure-portal/index.yml) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; | |
| [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) (formerly Azure Security Center) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Microsoft Defender for Cloud Apps](/defender-cloud-apps/) (formerly Microsoft Cloud App Security) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Microsoft Defender for Identity](/defender-for-identity/) (formerly Azure Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Microsoft Defender for IoT](../../defender-for-iot/index.yml) (formerly Azure Security for IoT) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Microsoft Intune](/mem/intune/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Microsoft Sentinel](../../sentinel/index.yml) (formerly Azure Sentinel) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Microsoft Stream](/stream/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Migrate](../../migrate/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Network Watcher](../../network-watcher/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Notification Hubs](../../notification-hubs/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Peering Service](../../peering-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Planned Maintenance for VMs](../../virtual-machines/maintenance-and-updates.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Power Apps](/powerapps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Power Automate](/power-automate/) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Power Query Online](/power-query/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Resource Mover](../../resource-mover/index.yml) | &#x2705; | &#x2705; | | | |
+| [Route Server](../../route-server/index.yml) | &#x2705; | &#x2705; | | | |
+| [Scheduler](../../scheduler/index.yml) (replaced by [Logic Apps](../../logic-apps/index.yml)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Service Bus](../../service-bus-messaging/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Service Fabric](../../service-fabric/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Service Health](../../service-health/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [SignalR Service](../../azure-signalr/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Site Recovery](../../site-recovery/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [SQL Database](../../azure-sql/database/sql-database-paas-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [SQL Server Stretch Database](../../sql-server-stretch-database/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Storage: Archive](../../storage/blobs/access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Storage: Blobs](../../storage/blobs/index.yml) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage: Data Movement](../../storage/common/storage-use-data-movement-library.md) | | | | | &#x2705; |
| [Storage: Disks (incl. managed disks)](../../virtual-machines/managed-disks-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Storage: Files](../../storage/files/index.yml) (incl. [Azure File Sync](../../storage/file-sync/index.yml)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage: Files](../../storage/files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Storage: Queues](../../storage/queues/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Storage: Tables](../../storage/tables/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [StorSimple](../../storsimple/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Stream Analytics](../../stream-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Synapse Analytics](../../synapse-analytics/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Traffic Manager](../../traffic-manager/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Virtual Machines](../../virtual-machines/index.yml) (incl. [Reserved VM Instances](../../virtual-machines/prepay-reserved-vm-instances.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Virtual Network](../../virtual-network/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Virtual Network NAT](../../virtual-network/nat-gateway/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Virtual WAN](../../virtual-wan/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [VPN Gateway](../../vpn-gateway/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Web Application Firewall](../../web-application-firewall/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Web Apps (App Service)](../../app-service/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |

**&ast;** Authorizations for edge devices (such as Azure Data Box and Azure Stack Edge) apply only to Azure services that support on-premises, customer-managed devices. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
azure-government Documentation Government Plan Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-plan-security.md
recommendations: false Previously updated : 03/12/2022 Last updated : 03/21/2022 # Azure Government security
These principles are applicable to both Azure and Azure Government. As described
Mitigating risk and meeting regulatory obligations are driving the increasing focus and importance of data encryption. Use an effective encryption implementation to enhance current network and application security measures and decrease the overall risk of your cloud environment. Azure has extensive support to safeguard customer data using [data encryption](../security/fundamentals/encryption-overview.md), including various encryption models:

- Server-side encryption that uses service-managed keys, customer-managed keys (CMK) in Azure, or CMK in customer-controlled hardware.
-- Client-side encryption that enables customers to manage and store keys on-premises or in another secure location. Client-side encryption is built into the Java and .NET storage client libraries, which can use Azure Key Vault APIs, making the implementation straightforward. You can use Azure Active Directory to provide specific individuals with access to Azure Key Vault secrets.
+- Client-side encryption that enables you to manage and store keys on-premises or in another secure location. Client-side encryption is built into the Java and .NET storage client libraries, which can use Azure Key Vault APIs, making the implementation straightforward. You can use Azure Active Directory to provide specific individuals with access to Azure Key Vault secrets.
Data encryption provides isolation assurances that are tied directly to encryption key access. Since Azure uses strong ciphers for data encryption, only entities with access to encryption keys can have access to data. Deleting or revoking encryption keys renders the corresponding data inaccessible.
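As a hedged sketch of that access pattern, the following grants an individual access to Key Vault secrets and reads one back for client-side use; the vault, user, and secret names are hypothetical:

```powershell
# A sketch only - vault, user, and secret names below are hypothetical.
Set-AzKeyVaultAccessPolicy -VaultName "contoso-vault" `
    -UserPrincipalName "alice@contoso.com" `
    -PermissionsToSecrets get,list

# The client application retrieves the secret and keeps key material under its control.
$secret = Get-AzKeyVaultSecret -VaultName "contoso-vault" -Name "client-encryption-key" -AsPlainText
```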
We're now screening all our operators at a Tier 3 Investigation (formerly Nation
|Cloud screen|Azure </br>Azure Gov|Every two years|- Social Security Number search </br>- Criminal history check (7-yr history) </br>- Office of Foreign Assets Control (OFAC) list </br>- Bureau of Industry and Security (BIS) list </br>- Office of Defense Trade Controls (DDTC) debarred list|
|US citizenship|Azure Gov|Upon employment|- Verification of US citizenship|
|Criminal Justice Information Services (CJIS)|Azure Gov|Upon signed CJIS agreement with State|- Adds fingerprint background check against FBI database </br>- Criminal records check and credit check|
-|Tier 3 Investigation|Azure Gov|Upon signed contract with sponsoring agency|- Detailed background and criminal history investigation (Form SF 86 required)|
+|Tier 3 Investigation|Azure Gov|Upon signed contract with sponsoring agency|- Detailed background and criminal history investigation ([SF 86](https://www.opm.gov/forms/pdf_fill/SF86.pdf))|
For Azure operations personnel, the following access principles apply:
azure-monitor Alerts Log Create Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-create-templates.md
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
## Simple template (up to API version 2018-04-16)
-[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update) template based on [number of results log alert](./alerts-unified-log.md#count-of-the-results-table-rows) (sample data set as variables):
+[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update) template based on [number of results log alert](./alerts-unified-log.md#result-count) (sample data set as variables):
```json
{
This JSON can be saved and deployed using [Azure Resource Manager in Azure porta
## Template with cross-resource query (up to API version 2018-04-16)
-[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update) template based on [metric measurement](./alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) that queries [cross-resources](../logs/cross-workspace-query.md) (sample data set as variables):
+[Scheduled Query Rules creation](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules/create-or-update) template based on [metric measurement](./alerts-unified-log.md#calculation-of-a-value) that queries [cross-resources](../logs/cross-workspace-query.md) (sample data set as variables):
```json
{
azure-monitor Alerts Log Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-webhook.md
The following sample payload is for a standard webhook when it's used for log al
"alertContextVersion": "1.0" }, "alertContext": {
- "properties": null,
+ "properties": {
+ "name1": "value1",
+ "name2": "value2"
+ },
"conditionType": "LogQueryCriteria", "condition": { "windowSize": "PT10M", "allOf": [ { "searchQuery": "Heartbeat",
- "metricMeasure": null,
+ "metricMeasureColumn": "CounterValue",
"targetResourceTypes": "['Microsoft.Compute/virtualMachines']", "operator": "LowerThan", "threshold": "1",
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
You can also [create log alert rules using Azure Resource Manager templates](../
> - For more advanced customizations, use Logic Apps.
-1. In the [portal](https://portal.azure.com/), select the relevant resource.
-1. In the Resource menu, under **Monitor**, select **Logs**.
+1. In the [portal](https://portal.azure.com/), select the relevant resource. We recommend monitoring at scale by using a subscription or resource group for the alert rule.
+1. In the Resource menu, select **Logs**.
1. Write a query that will find the log events for which you want to create an alert. You can use the [alert query examples topic](../logs/queries.md) to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md). Also, [learn how to create optimized alert queries](alerts-log-query.md).
1. From the top command bar, select **+ New Alert rule**.
You can also [create log alert rules using Azure Resource Manager templates](../
1. When validation passes and you have reviewed the settings, click the **Create** button.

   :::image type="content" source="media/alerts-log/alerts-rule-review-create.png" alt-text="Review and create tab.":::
-
-> [!NOTE]
-> We recommend that you create alerts at scale when using resource access mode for log running on multiple resources using a resource group or subscription scope. Alerting at scale reduces rule management overhead. To be able to target the resources, include the resource ID column in the results. [Learn more about splitting alerts by dimensions](./alerts-unified-log.md#split-by-alert-dimensions).
## Manage alert rules in the Alerts portal

> [!NOTE]
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
Use the PowerShell cmdlets listed below to manage rules with the [Scheduled Quer
- [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) : PowerShell cmdlet to create or update object specifying action parameters for a log alert. Used as input by [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlet.
- [New-AzScheduledQueryRuleAznsActionGroup](/powershell/module/az.monitor/new-azscheduledqueryruleaznsactiongroup) : PowerShell cmdlet to create or update object specifying action groups parameters for a log alert. Used as input by [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.
- [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition) : PowerShell cmdlet to create or update object specifying trigger condition parameters for log alert. Used as input by [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.
+- [New-AzScheduledQueryRuleLogMetricTrigger](/powershell/module/az.monitor/new-azscheduledqueryrulelogmetrictrigger) : PowerShell cmdlet to create or update object specifying metric trigger condition parameters for a 'metric measurement' log alert. Used as input by [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition) cmdlet.
- [Get-AzScheduledQueryRule](/powershell/module/az.monitor/get-azscheduledqueryrule) : PowerShell cmdlet to list existing log alert rules or a specific log alert rule
- [Update-AzScheduledQueryRule](/powershell/module/az.monitor/update-azscheduledqueryrule) : PowerShell cmdlet to enable or disable log alert rule
- [Remove-AzScheduledQueryRule](/powershell/module/az.monitor/remove-azscheduledqueryrule): PowerShell cmdlet to delete an existing log alert rule
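As a hedged sketch of how these cmdlets compose, the following creates a 'metric measurement' rule end to end; every subscription ID, resource name, and threshold below is a placeholder, not a recommended value:

```powershell
# A sketch only - subscription, resource group, workspace, and action group IDs are placeholders.
$source = New-AzScheduledQueryRuleSource `
    -Query "Heartbeat | summarize AggregatedValue = count() by bin(TimeGenerated, 5m), Computer" `
    -DataSourceId "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

$schedule = New-AzScheduledQueryRuleSchedule -FrequencyInMinutes 15 -TimeWindowInMinutes 30

# Fire when, for a given computer, the condition below is breached in more than
# two consecutive time bins.
$metricTrigger = New-AzScheduledQueryRuleLogMetricTrigger `
    -ThresholdOperator "GreaterThan" -Threshold 2 `
    -MetricTriggerType "Consecutive" -MetricColumn "Computer"

# The per-bin condition: fewer than 1 heartbeat record.
$triggerCondition = New-AzScheduledQueryRuleTriggerCondition `
    -ThresholdOperator "LessThan" -Threshold 1 -MetricTrigger $metricTrigger

$aznsAction = New-AzScheduledQueryRuleAznsActionGroup `
    -ActionGroup "/subscriptions/<subId>/resourceGroups/<rg>/providers/microsoft.insights/actionGroups/<actionGroup>" `
    -EmailSubject "Missing heartbeat"

$alertingAction = New-AzScheduledQueryRuleAlertingAction `
    -AznsAction $aznsAction -Severity "3" -Trigger $triggerCondition

New-AzScheduledQueryRule -Name "Missing heartbeat alert" -ResourceGroupName "<rg>" `
    -Location "eastus" -Enabled $true `
    -Description "Alert when a computer stops sending heartbeats" `
    -Schedule $schedule -Source $source -Action $alertingAction
```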
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
When you author an alert rule, Log Analytics creates a permission snapshot for y
### Metric measurement alert rule with splitting using the legacy Log Analytics API
-[Metric measurement](alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) is a type of log alert that's based on summarized time series results. You can use these rules to group by columns to [split alerts](alerts-unified-log.md#split-by-alert-dimensions). If you're using the legacy Log Analytics API, splitting doesn't work as expected because it doesn't support grouping.
+[Metric measurement](alerts-unified-log.md#calculation-of-a-value) is a type of log alert that's based on summarized time series results. You can use these rules to group by columns to [split alerts](alerts-unified-log.md#split-by-alert-dimensions). If you're using the legacy Log Analytics API, splitting doesn't work as expected because it doesn't support grouping.
-You can use the current ScheduledQueryRules API to set **Aggregate On** in [Metric measurement](alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) rules, which work as expected. To learn more about switching to the current ScheduledQueryRules API, see [Upgrade to the current Log Alerts API from legacy Log Analytics Alert API]](../alerts/alerts-log-api-switch.md).
+You can use the current ScheduledQueryRules API to set **Aggregate On** in [Metric measurement](alerts-unified-log.md#calculation-of-a-value) rules, which work as expected. To learn more about switching to the current ScheduledQueryRules API, see [Upgrade to the current Log Alerts API from legacy Log Analytics Alert API](../alerts/alerts-log-api-switch.md).
## Log alert fired unnecessarily
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-unified-log.md
If you use **ago** command in the query, the range is automatically set to two d
### Measure

Log alerts turn logs into numeric values that can be evaluated. You can measure two different things:
+* Result count
+* Calculation of a value
-#### Count of the results table rows
+#### Result count
Count of results is the default measure and is used when you set a **Measure** with a selection of **Table rows**. Ideal for working with events such as Windows event logs, syslog, and application exceptions. Triggers when log records occur or don't occur in the evaluated time window.
Log alerts work best when you try to detect data in the log. It works less well
> [!NOTE] > Since logs are semi-structured data, they are inherently more latent than metric, you may experience misfires when trying to detect lack of data in the logs, and you should consider using [metric alerts](alerts-metric-overview.md). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
-##### Example of results table rows count use case
+##### Example of result count use case
You want to know when your application responded with error code 500 (Internal Server Error). You would create an alert rule with the following details:
requests
The alert rule then monitors for any requests ending with a 500 error code. The query runs every 15 minutes, over the last 15 minutes. If even one record is found, it fires the alert and triggers the configured actions.
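To sanity-check such a query before saving the rule, a hedged sketch using the Az.OperationalInsights module follows; the workspace GUID is a placeholder, and the table and column names assume the classic Application Insights schema used in the example (a workspace-based resource may use `AppRequests`/`ResultCode` instead):

```powershell
# A sketch only - the workspace GUID is a placeholder; table/column names assume
# the classic Application Insights schema from the example above.
$query = @'
requests
| where resultCode == "500"
'@
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" `
    -Query $query -Timespan (New-TimeSpan -Minutes 15)

# A result-count rule fires when this number crosses the configured threshold.
($result.Results | Measure-Object).Count
```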
-#### Calculation of measure based on a numeric column (such as CPU counter value)
+#### Calculation of a value
-Calculation of measure based on a numeric column is used when the **Measure** has a selection of any number column name.
+Calculation of a value is used when you select the name of a numeric column for the **Measure**; the result is a calculation performed on the values in that column. This would be used, for example, with a CPU counter value.
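A hedged sketch of the query shape behind this measure, using the standard `Perf` table; the workspace GUID is a placeholder:

```powershell
# A sketch only - the numeric CounterValue column is averaged per computer over
# 5-minute bins, the shape a 'calculation of a value' measure evaluates.
$query = @'
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer
'@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```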
### Aggregation type

The calculation performed on multiple records to aggregate them to one numeric value, using the defined [**Aggregation granularity**](#aggregation-granularity). For example:
azure-monitor Resource Manager Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-log.md
The following sample creates a rule that can target any resource.
``` ## Number of results template (up to version 2018-04-16)
-The following sample creates a [number of results alert rule](../alerts/alerts-unified-log.md#count-of-the-results-table-rows).
+The following sample creates a [number of results alert rule](../alerts/alerts-unified-log.md#result-count).
### Notes
The following sample creates a [number of results alert rule](../alerts/alerts-u
``` ## Metric measurement template (up to version 2018-04-16)
-The following sample creates a [metric measurement alert rule](../alerts/alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value).
+The following sample creates a [metric measurement alert rule](../alerts/alerts-unified-log.md#calculation-of-a-value).
### Template file
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service, to track usage and diagnose issues. Previously updated : 05/11/2020 Last updated : 05/11/2020 ms.devlang: csharp, java, javascript, vb # Application Insights API for custom events and metrics
-Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Azure Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics, and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use.
+Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Azure Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics, and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use.
## API summary
To learn how to effectively use the GetMetric() call to capture locally pre-aggr
## TrackMetric

> [!NOTE]
-> Microsoft.ApplicationInsights.TelemetryClient.TrackMetric is not the preferred method for sending metrics. Metrics should always be pre-aggregated across a time period before being sent. Use one of the GetMetric(..) overloads to get a metric object for accessing SDK pre-aggregation capabilities. If you are implementing your own pre-aggregation logic, you can
-use the TrackMetric() method to send the resulting aggregates. If your application requires sending a separate telemetry item at every occasion without aggregation across time, you likely have a use case for event telemetry; see TelemetryClient.TrackEvent
+> Microsoft.ApplicationInsights.TelemetryClient.TrackMetric is not the preferred method for sending metrics. Metrics should always be pre-aggregated across a time period before being sent. Use one of the GetMetric(..) overloads to get a metric object for accessing SDK pre-aggregation capabilities. If you are implementing your own pre-aggregation logic, you can use the TrackMetric() method to send the resulting aggregates. If your application requires sending a separate telemetry item at every occasion without aggregation across time, you likely have a use case for event telemetry; see TelemetryClient.TrackEvent
(Microsoft.ApplicationInsights.DataContracts.EventTelemetry).

Application Insights can chart metrics that are not attached to particular events. For example, you could monitor a queue length at regular intervals. With metrics, the individual measurements are of less interest than the variations and trends, and so statistical charts are useful.
To associate page views to AJAX calls, join with dependencies:
```kusto
pageViews
-| join (dependencies) on operation_Id
+| join (dependencies) on operation_Id
```

## TrackRequest
telemetry.trackTrace({
```javascript
trackTrace({
- message: string,
- properties?: {[string]:string},
+ message: string,
+ properties?: {[string]:string},
    severityLevel?: SeverityLevel
})
```
try
{
    success = dependency.Call();
}
-catch(Exception ex)
+catch(Exception ex)
{
    success = false;
    telemetry.TrackException(ex);
telemetry.flush();
The function is asynchronous for the [server telemetry channel](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel/).

> [!NOTE]
-> The Java and Javascript SDKs automatically flush on application shutdown.
+> The Java and JavaScript SDKs automatically flush on application shutdown.
## Authenticated users
In webpages, you might want to set it from the web server's state, rather than c
// Standard Application Insights webpage script:
var appInsights = window.appInsights || function(config){ ...
// Modify this part:
-}({instrumentationKey:
+}({instrumentationKey:
// Generate from server property:
@Microsoft.ApplicationInsights.Extensibility.
TelemetryConfiguration.Active.InstrumentationKey;
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
For the latest updates and bug fixes, see the [release notes](./release-notes.md
* [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown.
* [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage.
* Use [availability tests](./monitor-web-app-availability.md) to check your app constantly from around the world.
-* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection)
+* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection)
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
We still provide full backwards compatibility for your Application Insights clas
To write queries against the [new workspace-based table structure/schema](apm-tables.md), you must first navigate to your Log Analytics workspace.
+To ensure the queries successfully run, validate that the query's fields align with the [new schema fields](apm-tables.md#appmetrics).
+
When you query directly from the Log Analytics UI within your workspace, you'll only see the data that is ingested post migration. To see both your classic Application Insights data and new data ingested after migration in a unified query experience, use the Logs (Analytics) query view from within your migrated Application Insights resource.

> [!NOTE]
You can check your current retention settings for Log Analytics under **General*
## Next steps

* [Explore metrics](../essentials/metrics-charts.md)
-* [Write Analytics queries](../logs/log-query-overview.md)
+* [Write Analytics queries](../logs/log-query-overview.md)
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Connection strings provide a single configuration setting and eliminate the need
- NodeJS v1.5.0+
- Python v1.0.0+

## Troubleshooting
+### Alert: "Transition to using connection strings for data ingestion"
+Follow the [migration steps](#migration) in this article to resolve this alert.
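As a hedged illustration of that migration for an App Service app, the sketch below swaps the instrumentation key app setting for a connection string; the app name, resource group, and connection string values are placeholders:

```powershell
# A sketch only - app name, resource group, and the connection string are placeholders.
$app = Get-AzWebApp -ResourceGroupName "my-rg" -Name "my-webapp"

# Copy existing settings into a hashtable, since Set-AzWebApp replaces the whole set.
$settings = @{}
foreach ($s in $app.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }

# Replace the legacy instrumentation key with a connection string.
$settings.Remove("APPINSIGHTS_INSTRUMENTATIONKEY")
$settings["APPLICATIONINSIGHTS_CONNECTION_STRING"] = "InstrumentationKey=<guid>;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/"

Set-AzWebApp -ResourceGroupName "my-rg" -Name "my-webapp" -AppSettings $settings
```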
### Missing data

- Confirm you're using a [supported SDK version](#supported-sdk-versions). If you use Application Insights integration in another Azure product offering, check its documentation on how to properly configure a connection string.
azure-monitor Change Analysis Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-troubleshoot.md
Title: Troubleshoot Application Change Analysis - Azure Monitor
-description: Learn how to troubleshoot problems in Application Change Analysis.
+ Title: Troubleshoot Azure Monitor's Change Analysis
+description: Learn how to troubleshoot problems in Azure Monitor's Change Analysis.
ms.contributor: cawa Previously updated : 03/11/2022 Last updated : 03/21/2022
-# Troubleshoot Application Change Analysis (preview)
+# Troubleshoot Azure Monitor's Change Analysis (preview)
## Trouble registering Microsoft.ChangeAnalysis resource provider from Change history tab.
-If you're viewing Change history after its first integration with Application Change Analysis, you will see it automatically registering the **Microsoft.ChangeAnalysis** resource provider. The resource may fail and incur the following error messages:
+If you're viewing Change history after its first integration with Azure Monitor's Change Analysis, you will see it automatically registering the **Microsoft.ChangeAnalysis** resource provider. The resource may fail and incur the following error messages:
### You don't have enough permissions to register Microsoft.ChangeAnalysis resource provider.

You're receiving this error message because your role in the current subscription is not associated with the **Microsoft.Support/register/action** scope. For example, you are not the owner of your subscription and instead received shared access permissions through a coworker (like view access to a resource group).
If this is a blocking issue for you, we can provide a workaround that involves c
## An error occurred while getting changes. Please refresh this page or come back later to view changes.
-When changes can't be loaded, Application Change Analysis service presents this general error message. A few known causes are:
+When changes can't be loaded, Azure Monitor's Change Analysis service presents this general error message. A few known causes are:
- Internet connectivity error from the client device.
- Change Analysis service being temporarily unavailable.
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
Title: Visualizations for Application Change Analysis - Azure Monitor
-description: Learn how to use visualizations in Application Change Analysis in Azure Monitor.
+ Title: Visualizations for Change Analysis in Azure Monitor
+description: Learn how to use visualizations in Azure Monitor's Change Analysis.
ms.contributor: cawa Previously updated : 03/11/2022 Last updated : 03/21/2022
-# Visualizations for Application Change Analysis (preview)
+# Visualizations for Change Analysis in Azure Monitor (preview)
## Standalone UI
The UI supports selecting multiple subscriptions to view resource changes. Use t
## Diagnose and solve problems tool
-Application Change Analysis is:
+Azure Monitor's Change Analysis is:
- A standalone detector in the Web App **Diagnose and solve problems** tool.
- Aggregated in **Application Crashes** and **Web App Down detectors**.
From your resource's overview page in Azure portal, select **Diagnose and solve
:::image type="content" source="./media/change-analysis/application-changes.png" alt-text="Screenshot of the Application Crashes button":::
- The link leads to Application Change Analysis UI scoped to the web app.
+ The link leads to Azure Monitor's Change Analysis UI scoped to the web app.
3. Enable web app in-guest change tracking if you haven't already.
You can view Change Analysis data for [multiple Azure resources](./change-analys
## Activity Log change history
-Use the [View change history](../essentials/activity-log.md#view-change-history) feature to call the Application Change Analysis service backend to view changes associated with an operation. Changes returned include:
+Use the [View change history](../essentials/activity-log.md#view-change-history) feature to call the Azure Monitor Change Analysis service backend to view changes associated with an operation. Changes returned include:
- Resource level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md).
- Resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md).
- In-guest changes from PaaS services, such as App Services web app.
Use the [View change history](../essentials/activity-log.md#view-change-history)
1. From within your resource, select **Activity Log** from the side menu.
1. Select a change from the list.
1. Select the **Change history (Preview)** tab.
-1. For the Application Change Analysis service to scan for changes in users' subscriptions, a resource provider needs to be registered. Upon selecting the **Change history (Preview)** tab, the tool will automatically register **Microsoft.ChangeAnalysis** resource provider.
+1. For the Azure Monitor Change Analysis service to scan for changes in users' subscriptions, a resource provider needs to be registered. Upon selecting the **Change history (Preview)** tab, the tool will automatically register **Microsoft.ChangeAnalysis** resource provider.
1. Once registered, you can view changes from **Azure Resource Graph** immediately from the past 14 days.
   - Changes from other sources will be available ~4 hours after the subscription is onboarded.
If you've enabled [VM Insights](../vm/vminsights-overview.md), you can view chan
   :::image type="content" source="./media/change-analysis/vm-insights.png" alt-text="Virtual machine insights performance and property panel.":::

1. Select the **Changes** tab.
-1. Select the **Investigate Changes** button to view change details in the Application Change Analysis standalone UI.
+1. Select the **Investigate Changes** button to view change details in the Azure Monitor Change Analysis standalone UI.
:::image type="content" source="./media/change-analysis/vm-insights-2.png" alt-text="View of the property panel, selecting Investigate Changes button.":::
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
Title: Use Application Change Analysis in Azure Monitor to find web-app issues | Microsoft Docs
-description: Use Application Change Analysis in Azure Monitor to troubleshoot application issues on live sites on Azure App Service.
+ Title: Use Change Analysis in Azure Monitor to find web-app issues | Microsoft Docs
+description: Use Change Analysis in Azure Monitor to troubleshoot issues on live sites.
ms.contributor: cawa Previously updated : 03/11/2022 Last updated : 03/21/2022
-# Use Application Change Analysis in Azure Monitor (preview)
+# Use Change Analysis in Azure Monitor (preview)
While standard monitoring solutions might alert you to a live site issue, outage, or component failure, they often don't explain the cause. For example, your site worked five minutes ago, and now it's broken. What changed in the last five minutes?
-We've designed Application Change Analysis to answer that question in Azure Monitor.
+We've designed Change Analysis to answer that question in Azure Monitor.
Building on the power of [Azure Resource Graph](../../governance/resource-graph/overview.md), Change Analysis:

- Provides insights into your Azure application changes.
The following diagram illustrates the architecture of Change Analysis:
## Supported resource types
-Application Change Analysis service supports resource property level changes in all Azure resource types, including common resources like:
+Azure Monitor Change Analysis service supports resource property level changes in all Azure resource types, including common resources like:
- Virtual Machine
- Virtual machine scale set
- App Service
Application Change Analysis service supports resource property level changes in
## Data sources
-Application Change Analysis queries for:
+Azure Monitor's Change Analysis queries for:
- Azure Resource Manager tracked properties.
- Proxied configurations.
- Web app in-guest changes.
Change Analysis detects related resources. Common examples are:
- Network Security Group
- Virtual Network
- Application Gateway
- Load Balancer related to a Virtual Machine.

Network resources are usually provisioned in the same resource group as the resources using it. Filter the changes by resource group to show all changes for the virtual machine and its related networking resources.

:::image type="content" source="./media/change-analysis/network-changes.png" alt-text="Screenshot of Networking changes":::
-## Application Change Analysis service enablement
+## Azure Monitor's Change Analysis service enablement
-The Application Change Analysis service:
+The Change Analysis service:
- Computes and aggregates change data from the data sources mentioned earlier.
- Provides a set of analytics for users to:
  - Easily navigate through all resource changes.
If you don't see changes within 30 minutes, refer to [the troubleshooting guide]
## Cost
-Application Change Analysis is a free service. Once enabled, the Change Analysis **Diagnose and solve problems** tool does not:
+Azure Monitor's Change Analysis is a free service. Once enabled, the Change Analysis **Diagnose and solve problems** tool does not:
- Incur any billing cost to subscriptions.
- Have any performance impact for scanning Azure Resource properties changes.
+### Data retention
+Change Analysis provides 14 days of data retention.
+
## Enable Change Analysis at scale for Web App in-guest file and environment variable changes

If your subscription includes several web apps, enabling the service at the web app level would be inefficient. Instead, run the following script to enable all web apps in your subscription.
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md
If you're not familiar with Azure Monitor alerts, see [Overview of alerts in Mic
## Log query measurements

Log query alerts can perform two different measurements of the result of a log query, each of which supports distinct scenarios for monitoring virtual machines.
-[Metric measurement](../alerts/alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) create a separate alert for each record in the query results that has a numeric value that exceeds a threshold defined in the alert rule. These are ideal for numeric data such as CPU.
+[Metric measurement](../alerts/alerts-unified-log.md#calculation-of-a-value) rules create a separate alert for each record in the query results that has a numeric value exceeding the threshold defined in the alert rule. These are ideal for numeric data such as CPU.
-[Number of results](../alerts/alerts-unified-log.md#count-of-the-results-table-rows) create a single alert when a query returns at least a specified number of records. These are ideal for non-numeric data such or for analyzing performance trends across multiple computers. You may also choose this strategy if you want to minimize your number of alerts or possibly create an alert only when multiple components have the same error condition.
+[Number of results](../alerts/alerts-unified-log.md#result-count) rules create a single alert when a query returns at least a specified number of records. These are ideal for non-numeric data or for analyzing performance trends across multiple computers. You may also choose this strategy if you want to minimize your number of alerts or possibly create an alert only when multiple components have the same error condition.
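For AKS clusters, a hedged sketch of previewing a metric measurement style query before building the rule; the workspace GUID is a placeholder, and the schema assumes Container insights writes node CPU counters to the `Perf` table:

```powershell
# A sketch only - the workspace GUID is a placeholder; assumes Container insights
# records node CPU in Perf as ObjectName 'K8SNode' / CounterName 'cpuUsageNanoCores'.
$query = @'
Perf
| where ObjectName == "K8SNode" and CounterName == "cpuUsageNanoCores"
| summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer
'@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```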
> [!NOTE]
> Resource-centric log alert rules, currently in public preview, will simplify log query alerts and replace the functionality currently provided by metric measurement queries. You can use the AKS cluster as a target for the rule, which will better identify it as the affected resource. When resource-centric log query alerts become generally available, the guidance in this scenario will be updated.

## Create a log query alert rule
-[Comparison of log query alert measures](../vm/monitor-virtual-machine-alerts.md#comparison-of-log-query-alert-measures) provides a complete walkthrough of log query alert rules for each type of measurement, including a comparison of the log queries supporting each. You can use these same processes to create alert rules for AKS clusters using queries similar to the ones in this article.
+[Comparison of log query alert measures](../vm/monitor-virtual-machine-alerts.md#example-log-query-alert) provides a complete walkthrough of log query alert rules for each type of measurement, including a comparison of the log queries supporting each. You can use these same processes to create alert rules for AKS clusters using queries similar to the ones in this article.
## Resource utilization
azure-monitor Data Collection Rule Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md
Set of destinations where the data should be sent. Examples include Log Analytic
### Data flows

Definition of which streams should be sent to which destinations.
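To make the relationship concrete, here's a hedged sketch of how destinations and data flows pair up inside a DCR's properties; the stream, workspace ID, and destination name are placeholders:

```powershell
# A sketch only - stream, workspace ID, and destination name are placeholders.
$dcrProperties = @{
    destinations = @{
        logAnalytics = @(
            @{
                name                = "centralWorkspace"
                workspaceResourceId = "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
            }
        )
    }
    # Each data flow routes one or more streams to destinations named above.
    dataFlows = @(
        @{
            streams      = @("Microsoft-Syslog")
            destinations = @("centralWorkspace")
        }
    )
}
$dcrProperties | ConvertTo-Json -Depth 10
```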
-### Endpoint
-HTTPS endpoint for DCR used for custom logs API. The DCR is applied to any data sent to that endpoint.
-
-
## Next steps
-- [Overview of data collection rules including methods for creating them.](data-collection-rule-overview.md)
+- [Overview of data collection rules including methods for creating them.](data-collection-rule-overview.md)
azure-monitor Data Collection Rule Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-transformations.md
The following [Bitwise operators](/azure/data-explorer/kusto/query/binoperators)
- [pack_array](/azure/data-explorer/kusto/query/packarrayfunction)
- [pack](/azure/data-explorer/kusto/query/packfunction)
- [parse_json](/azure/data-explorer/kusto/query/parsejsonfunction)
-- [parse_xml](/azure/data-explorer/kusto/query/parse-xmlfunction.html)
+- [parse_xml](/azure/data-explorer/kusto/query/parse-xmlfunction)
- [zip](/azure/data-explorer/kusto/query/zipfunction)

#### Mathematical functions
azure-monitor App Insights Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/app-insights-connector.md
description: You can use the Application Insights Connector solution to diagnose
Previously updated : 02/13/2019 + Last updated : 03/22/2022
ApplicationInsights | summarize by ApplicationName
## Next steps

-- Use [Log Search](./log-query-overview.md) to view detailed information for your Application Insights apps.
+- Use [Log Search](./log-query-overview.md) to view detailed information for your Application Insights apps.
azure-monitor Azure Data Explorer Query Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-query-storage.md
Title: Query exported data from Azure Monitor using Azure Data Explorer description: Use Azure Data Explorer to query data that was exported from your Log Analytics workspace to an Azure storage account. --+ Previously updated : 10/13/2020 + Last updated : 03/22/2022
write-host -ForegroundColor Green $CreateExternal
Write-Host -ForegroundColor Green $createMapping
```
-The following image shows and example of the output.
+The following image shows an example of the output.
:::image type="content" source="media/azure-data-explorer-query-storage/external-table-create-command-output.png" alt-text="ExternalTable create command output.":::
-[![Example output](media/azure-data-explorer-query-storage/external-table-create-command-output.png)](media/azure-data-explorer-query-storage/external-table-create-command-output.png#lightbox)
-
>[!TIP]
>* Copy, paste, and then run the output of the script in your Azure Data Explorer client tool to create the table and mapping.
>* To use all of the data inside the container, alter the script and change the URL to be 'https://your.blob.core.windows.net/containername;SecKey'
azure-monitor Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/delete-workspace.md
description: Learn how to delete your Log Analytics workspace if you created one
Previously updated : 12/20/2020 + Last updated : 03/22/2022
azure-monitor Logicapp Flow Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logicapp-flow-connector.md
Previously updated : 03/13/2020+ Last updated : 03/22/2022
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
Previously updated : 10/02/2020+ Last updated : 03/22/2022
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
Title: Manage Log Analytics workspaces in Azure Monitor | Microsoft Docs description: You can manage access to data stored in a Log Analytics workspace in Azure Monitor using resource, workspace, or table-level permissions. This article details how to complete. Previously updated : 04/10/2019 + Last updated : 03/22/2022
Sometimes custom logs come from sources that are not directly associated to a sp
* See [Log Analytics agent overview](../agents/log-analytics-agent.md) to gather data from computers in your datacenter or other cloud environment.
-* See [Collect data about Azure virtual machines](../vm/monitor-virtual-machine.md) to configure data collection from Azure VMs.
+* See [Collect data about Azure virtual machines](../vm/monitor-virtual-machine.md) to configure data collection from Azure VMs.
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-cost-storage.md
Changes to a workspace's pricing tier are recorded in the [Activity Log](../esse
## Legacy pricing tiers
-Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019 and is still active, will continue to have access to use the legacy pricing tiers: **Free Trial**, **Standalone (Per GB)**, and **Per Node (OMS)**. Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)) and the data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free tier. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days.
+Subscriptions that contained a Log Analytics workspace or Application Insights resource on April 2, 2018, or are linked to an Enterprise Agreement that started before February 1, 2019 and is still active, will continue to have access to use the legacy pricing tiers: **Free Trial**, **Standalone (Per GB)**, and **Per Node (OMS)**. Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)) and the data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free tier. Workspaces in the Standalone or Per Node pricing tiers have user-configurable retention from 30 to 730 days. Creating new workspaces in (or moving existing workspaces into) the Free Trial pricing tier is possible until July 1, 2022.
Usage on the Standalone pricing tier is billed by the ingested data volume. It is reported in the **Log Analytics** service and the meter is named "Data Analyzed".
azure-monitor Move Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace.md
Title: Move a Log Analytics workspace in Azure Monitor | Microsoft Docs description: Learn how to move your Log Analytics workspace to another subscription or resource group. Previously updated : 11/12/2020+ Last updated : 03/22/2022
azure-monitor Query Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-optimization.md
description: Best practices for optimizing log queries in Azure Monitor.
Previously updated : 03/30/2019+ Last updated : 03/22/2022+
azure-monitor Tutorial Custom Logs Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs-api.md
Once the data collection rule has been created, the application needs to be give
## Send sample data The following PowerShell code sends data to the endpoint using HTTP REST fundamentals.
+> [!NOTE]
+> This tutorial uses commands that require PowerShell v7.0 or later. Make sure your local installation of PowerShell is up to date, or run this script in Azure Cloud Shell.
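Before running the script, you can verify your local version with a minimal check such as the following (no assumptions beyond a working PowerShell installation):

```powershell
# The script below requires PowerShell 7.0 or later.
if ($PSVersionTable.PSVersion.Major -lt 7) {
    Write-Warning "PowerShell $($PSVersionTable.PSVersion) detected; upgrade to 7.0+ or use Azure Cloud Shell."
}
```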
+ 1. Run the following PowerShell command, which adds a required assembly for the script. ```powershell
The following PowerShell code sends data to the endpoint using HTTP REST fundame
$headers = @{"Authorization"="Bearer $bearerToken";"Content-Type"="application/json"}; $uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/Custom-MyTableRawData?api-version=2021-11-01-preview"
- $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers -TransferEncoding "GZip"
+ $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers
``` > [!NOTE] > If you receive an `Unable to find type [System.Web.HttpUtility].` error, run the last line in section 1 of the script to apply the fix. Executing it uncommented as part of the script will not resolve the issue; the command must be executed separately.
-2. After executing this script, you should see a `HTTP - 200 OK` response, and in just a few minutes, the data arrive to your Log Analytics workspace.
+2. After executing this script, you should see a `HTTP - 204` response, and in just a few minutes, the data should arrive in your Log Analytics workspace.
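If you want to confirm the status code explicitly rather than rely on the absence of an error, here's a minimal variant of the upload call, assuming the `$uri`, `$body`, and `$headers` variables from the script above; `Invoke-WebRequest` exposes the raw HTTP response, unlike `Invoke-RestMethod`:

```powershell
# Send the same payload but capture the full HTTP response object.
$response = Invoke-WebRequest -Uri $uri -Method "Post" -Body $body -Headers $headers
# 204 (No Content) indicates the ingestion endpoint accepted the batch.
$response.StatusCode
```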
## Troubleshooting This section describes different error conditions you may receive and how to correct them.
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
The type of alert rule that you create for a particular scenario depends on wher
Typically, the best strategy is to use metric alerts instead of log alerts when possible because they're more responsive and stateful. To use metric alerts, the data you're alerting on must be available in Metrics. VM insights currently sends all of its data to Logs, so you must install the Azure Monitor agent to use metric alerts with data from the guest operating system. Use Log query alerts with metric data when it's unavailable in Metrics or if you require logic beyond the relatively simple logic for a metric alert rule.
-### Metric alert rules
+### Metric alerts
[Metric alert rules](../alerts/alerts-metric.md) are useful for alerting when a particular metric exceeds a threshold. An example is when the CPU of a machine is running high. The target of a metric alert rule can be a specific machine, a resource group, or a subscription. In this instance, you can create a single rule that applies to a group of machines. Metric rules for virtual machines can use the following data:
Metric rules for virtual machines can use the following data:
> When VM insights supports the Azure Monitor agent, which is currently in public preview, it sends performance data from the guest operating system to Metrics so that you can use metric alerts. ### Log alerts
-[Log alerts](../alerts/alerts-metric.md) can perform two different measurements of the result of a log query, each of which supports distinct scenarios for monitoring virtual machines:
+[Log alerts](../alerts/alerts-unified-log.md) can measure two different things, each of which supports distinct scenarios for monitoring virtual machines:
-- [Metric measurements](../alerts/alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value): Creates a separate alert for each record in the query results that has a numeric value that exceeds a threshold defined in the alert rule. Metric measurements are ideal for non-numeric data such as Windows and Syslog events collected by the Log Analytics agent or for analyzing performance trends across multiple computers.-- [Number of results](../alerts/alerts-unified-log.md#count-of-the-results-table-rows): Creates a single alert when a query returns at least a specified number of records. Number of results measurements are ideal for non-numeric data such as Windows and Syslog events collected by the [Log Analytics agent](../agents/log-analytics-agent.md) or for analyzing performance trends across multiple computers. You might also choose this strategy if you want to minimize your number of alerts or possibly create an alert only when multiple machines have the same error condition.
+- [Result count](../alerts/alerts-unified-log.md#result-count): This measure counts the number of rows returned by the query, and can be used to work with events such as Windows event logs, Syslog, and application exceptions.
+- [Calculation of a value](../alerts/alerts-unified-log.md#calculation-of-a-value): This measure is based on a numeric column, such as CPU percentage, and can cover any number of resources.
-### Target resource and impacted resource
+### Targeting resources and dimensions
-> [!NOTE]
-> Resource-centric log alert rules, currently in public preview, simplify log query alerts for virtual machines and replace the functionality currently provided by metric measurement queries. You can use the machine as a target for the rule, which better identifies it as the affected resource. You can also apply a single alert rule to all machines in a particular resource group or subscription. When resource-center log query alerts become generally available, the guidance in this scenario will be updated.
->
-Each alert in Azure Monitor has an **Affected resource** property, which is defined by the target of the rule. For metric alert rules, the affected resource is the computer, which allows you to easily identify it in the standard alert view. Log query alerts are associated with the workspace resource instead of the machine, even when you use a metric measurement alert that creates an alert for each computer. You need to view the details of the alert to view the computer that was affected.
+You can monitor the values of multiple instances with one rule by using dimensions. You would use dimensions if, for example, you want to alert when CPU usage exceeds 80% on any of the instances running your web site or app.
+
+To create resource-centric alerts at scale for a subscription or resource group, you can use the **Split by dimensions** section of the condition to split alerts into separate alerts by grouping unique combinations using numerical or string columns. When you want to monitor the same condition on multiple Azure resources, splitting on the Azure Resource ID column changes the target of the alert to the specified resource.
-The computer name is stored in the **Impacted resource** property, which you can view in the details of the alert. It's also displayed as a dimension in emails that are sent from the alert.
+You may also decide not to split when you want a condition on multiple resources in the scope, for example, if you want to alert if at least five machines in the resource group scope have CPU usage over 80%.
-You might want to have a view that lists the alerts with the affected computer. You can use a custom workbook that uses a custom [Resource Graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook.
+You might want to see a list of the alerts with the affected computers. You can use a custom workbook that uses a custom [Resource Graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook.
```kusto alertsmanagementresources
The most basic requirement is to send an alert when a machine is unavailable. It
#### Log query alert rules Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine.
-**Separate alerts**
-
-Use a metric measurement rule with the following query.
+Use a rule with the following query.
```kusto Heartbeat | summarize TimeGenerated=max(TimeGenerated) by Computer | extend Duration = datetime_diff('minute',now(),TimeGenerated)
-| summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated,5m)
+| summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated,5m), _ResourceId
```-
-**Single alert**
-
-Use a number of results alert with the following query.
-
-```kusto
-Heartbeat
-| summarize LastHeartbeat=max(TimeGenerated) by Computer
-| where LastHeartbeat < ago(5m)
-```
- #### Metric alert rules A metric called *Heartbeat* is included in each Log Analytics workspace. Each virtual machine connected to that workspace sends a heartbeat metric value each minute. Because the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to **Count** and the **Threshold** value to match the **Evaluation granularity**.
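As a sketch of how the same heartbeat rule might be created programmatically, the following assumes the Az.Monitor module; the resource group, rule name, and workspace resource ID are placeholders, and the five-minute window with a Count threshold of 5 mirrors the one-heartbeat-per-minute guidance above:

```powershell
# Split by the Computer dimension so each machine that stops sending heartbeats fires its own alert.
$dimension = New-AzMetricAlertRuleV2DimensionSelection -DimensionName "Computer" -ValuesToInclude "*"

# Condition: fewer than 5 heartbeats counted in a 5-minute window (one is expected per minute).
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Heartbeat" -DimensionSelection $dimension `
    -TimeAggregation Count -Operator LessThan -Threshold 5

New-AzMetricAlertRuleV2 -Name "vm-heartbeat-missing" -ResourceGroupName "my-resource-group" `
    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" `
    -WindowSize (New-TimeSpan -Minutes 5) -Frequency (New-TimeSpan -Minutes 5) `
    -Condition $criteria -Severity 2
```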
A metric called *Heartbeat* is included in each Log Analytics workspace. Each vi
InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Processor" and Name == "UtilizationPercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
```-
-**CPU utilization for all compute resources in a subscription**
-
-```kusto
- InsightsMetrics
- | where Origin == "vm.azm.ms"
- | where _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" and (_ResourceId contains "/providers/Microsoft.Compute/virtualMachines/" or _ResourceId contains "/providers/Microsoft.Compute/virtualMachineScaleSets/")
- | where Namespace == "Processor" and Name == "UtilizationPercentage" | summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), _ResourceId
-```
-
-**CPU utilization for all compute resources in a resource group**
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/" or _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachineScaleSets/"
-| where Namespace == "Processor" and Name == "UtilizationPercentage" | summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), _ResourceId
-```
- ### Memory alerts #### Metric alert rules
InsightsMetrics
InsightsMetrics | where Origin == "vm.azm.ms" | where Namespace == "Memory" and Name == "AvailableMB"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer
+| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
``` **Available memory in percentage**
InsightsMetrics
| where Origin == "vm.azm.ms" | where Namespace == "Memory" and Name == "AvailableMB" | extend TotalMemory = toreal(todynamic(Tags)["vm.azm.ms/memorySizeMB"]) | extend AvailableMemoryPercentage = (toreal(Val) / TotalMemory) * 100.0
-| summarize AggregatedValue = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer
+| summarize AggregatedValue = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer, _ResourceId
``` ### Disk alerts
InsightsMetrics
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface ```
-## Comparison of log query alert measures
-To compare the behavior of the two log alert measures, here's a walk-through of each to create an alert when the CPU of a virtual machine exceeds 80 percent. The data you need is in the [InsightsMetrics table](/azure/azure-monitor/reference/tables/insightsmetrics). The following query returns the records that need to be evaluated for the alert. Each type of alert rule uses a variant of this query.
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Processor" and Name == "UtilizationPercentage"
-```
-
-### Metric measurement
-The **metric measurement** measure creates a separate alert for each record in a query that has a value that exceeds a threshold defined in the alert rule. These alert rules are ideal for virtual machine performance data because they create individual alerts for each computer. The log query for this measure needs to return a value for each machine. The threshold in the alert rule determines if the value should fire an alert.
-
-> [!NOTE]
-> Resource-centric log alert rules, currently in public preview, simplify log query alerts for virtual machines and replace the functionality currently provided by metric measurement queries. You can use the machine as a target for the rule, which better identifies it as the affected resource. You can also apply a single alert rule to all machines in a particular resource group or description. When resource-center log query alerts become generally available, the guidance in this scenario will be updated.
-
-#### Query
-The query for rules using metric measurement must include a record for each machine with a numeric property called **AggregatedValue**. This value is compared to the threshold in the alert rule. The query doesn't need to compare this value to a threshold because the threshold is defined in the alert rule.
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Processor" and Name == "UtilizationPercentage"
-| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer
-```
-
-#### Alert rule
-On the Azure Monitor menu, select **Logs** to open Log Analytics. Make sure that the correct workspace is selected for your scope. If not, click **Select scope** in the upper left and select the correct workspace. Paste in the query that has the logic you want, and select **Run** to verify that it returns the correct results.
--
-Select **New alert rule** to create a rule with the current query. The rule uses your workspace for the **Resource**.
-
-Select **Condition** to view the configuration. The query is already filled in with a graphical view of the value returned from the query for each computer. Select the computer from the **Pivoted on** dropdown list.
-
-Scroll down to **Alert logic**, and select **Metric measurement** for the **Based on** property. Because you want to alert when the utilization exceeds 80 percent, set **Aggregate value** to **Greater than** and **Threshold value** to **80**.
-
-Scroll down to **Alert logic**, and select **Metric measurement** for the **Based on** property. Provide a **Threshold** value to compare to the value returned from the query. In this example, use **80**. In **Trigger Alert Based On**, specify how many times the threshold must be exceeded before an alert is created. For example, you might not care if the processor exceeds a threshold once and then returns to normal, but you do care if it continues to exceed the threshold over multiple consecutive measurements. For this example, set **Consecutive breaches** to **3**.
-
-Scroll down to **Evaluated based on**. **Period** specifies the time span for the query. Specify a value of **15** minutes, which means that the query only uses data collected in the last 15 minutes. **Frequency** specifies how often the query is run. A lower value makes the alert rule more responsive but also has a higher cost. Specify **15** to run the query every 15 minutes.
--
-### Number of results rule
-The **number of results** rule creates a single alert when a query returns at least a specified number of records. The log query in this type of alert rule typically identifies the alerting condition, while the threshold for the alert rule determines if a sufficient number of records are returned.
-
-#### Query
-In this example, the threshold for the CPU utilization is included in the query. The number of records returned from the query is the number of machines exceeding that threshold. The threshold for the alert rule is the minimum number of machines required to fire the alert. If you want an alert when a single machine is in error, the threshold for the alert rule is zero.
-
-```kusto
-InsightsMetrics
-| where Origin == "vm.azm.ms"
-| where Namespace == "Processor" and Name == "UtilizationPercentage"
-| summarize AverageUtilization = avg(Val) by Computer
-| where AverageUtilization > 80
-```
-
-#### Alert rule
-On the Azure Monitor menu, select **Logs** to open Log Analytics. Make sure that the correct workspace is selected for your scope. If not, click **Select scope** in the upper left and select the correct workspace. Paste in the query that has the logic you want, and select **Run** to verify that it returns the correct results. You probably don't have a machine currently over threshold, so change to a lower threshold temporarily to verify results. Then set the appropriate threshold before you create the alert rule.
--
-Select **New alert rule** to create a rule with the current query. The rule uses your workspace for the **Resource**.
-
-Select the **Condition** to view the configuration. The query is already filled in with a graphical view of the number of records that have been returned from that query over the past several minutes.
-
-Scroll down to **Alert logic**, and select **Number of results** for the **Based on** property. For this example, you want an alert if any records are returned, which means that at least one virtual machine has a processor above 80 percent. Select **Greater than** for the **Operator** and **0** for the **Threshold value**.
-
-Scroll down to **Evaluated based on**. **Period** specifies the time span for the query. Specify a value of **15** minutes, which means that the query only uses data collected in the last 15 minutes. **Frequency** specifies how often the query is run. A lower value makes the alert rule more responsive but also has a higher cost. Specify **15** to run the query every 15 minutes.
--
+## Example log query alert
+Here's a walk-through of creating a log alert for when the CPU of a virtual machine exceeds 80 percent. The data you need is in the [InsightsMetrics table](/azure/azure-monitor/reference/tables/insightsmetrics). The following query returns the records that need to be evaluated for the alert. Each type of alert rule uses a variant of this query.
+### Create the log alert rule
+ 1. In the portal, select the relevant resource. We recommend monitoring at scale by using subscriptions or resource groups.
+ 1. In the Resource menu, select **Logs**.
+ 1. Use this query to monitor CPU usage for your virtual machines:
+
+ ```kusto
+ InsightsMetrics
+ | where Origin == "vm.azm.ms"
+ | where Namespace == "Processor" and Name == "UtilizationPercentage"
+ | summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+ ```
+ 1. Run the query to make sure you get the results you were expecting. (You can also run the same query from PowerShell; see the sketch after these steps.)
+ 1. From the top command bar, select **+ New alert rule** to create a rule using the current query.
+ 1. The **Create an alert rule** page opens with your query. We try to detect summarized data from the query results automatically. If detected, the appropriate values are automatically selected.
+ :::image type="content" source="media/monitor-virtual-machines/log-alert-rule-query.png" alt-text="Screenshot of new log alert rule query.":::
+ 1. In the **Measurement** section, select the values for these fields if they are not already automatically selected.
+
+ |Field |Description |Value for this scenario |
+ ||||
+ |Measure| The number of table rows or a numeric column to aggregate |AggregatedValue|
+ |Aggregation type|The type of aggregation to apply to the data points in aggregation granularity|Average|
+ |Aggregation granularity|The interval over which data points are grouped by the aggregation type|15 minutes|
+
+ :::image type="content" source="media/monitor-virtual-machines/log-alert-rule-measurement.png" alt-text="Screenshot of new log alert rule measurement. ":::
+ 1. In the **Split by dimensions** section, select the values for these fields if they are not already automatically selected.
+
+ |Field|Description |Value for this scenario |
+ ||||
+ |Resource ID column|An Azure Resource ID column that will split the alerts and set the fired alert target scope.|_ResourceId|
+ |Dimension name|Dimensions monitor specific time series and provide context to the fired alert. Dimensions can be either numeric or string columns. If you select more than one dimension value, each time series that results from the combination will trigger its own alert and will be charged separately. The displayed dimension values are based on data from the last 48 hours. Custom dimension values can be added by clicking 'Add custom value'.|Computer|
+ |Operator|The operator to compare the dimension value|=|
+ |Dimension value| The list of dimension column values |All current and future values|
+
+ :::image type="content" source="media/monitor-virtual-machines/log-alert-rule-dimensions.png" alt-text="Screenshot of new log alert rule with dimensions. ":::
+ 1. In the **Alert Logic** section, select the values for these fields if they are not already automatically selected.
+
+ |Field |Description |Value for this scenario |
+ ||||
+ |Operator |The operator to compare the metric value against the threshold|Greater than|
+ |Threshold value| The value that the result is measured against.|80|
+ |Frequency of evaluation|How often the alert rule should run. A frequency smaller than the aggregation granularity results in a sliding window evaluation.|15 minutes|
+ 1. (Optional) In the **Advanced options** section, set the [Number of violations to trigger alert](../alerts/alerts-unified-log.md#number-of-violations-to-trigger-alert).
+ :::image type="content" source="../alerts/media/alerts-log/alerts-rule-preview-advanced-options.png" alt-text="Screenshot of alerts rule preview advanced options.":::
+
+ 1. The **Preview** chart shows query evaluations results over time. You can change the chart period or select different time series that resulted from unique alert splitting by dimensions.
+ :::image type="content" source="../alerts/media/alerts-log/alerts-create-alert-rule-preview.png" alt-text="Screenshot of alerts rule preview.":::
+
+ 1. From this point on, you can select the **Review + create** button at any time.
+ 1. In the **Actions** tab, select or create the required [action groups](../alerts/action-groups.md).
+ :::image type="content" source="../alerts/media/alerts-log/alerts-rule-actions-tab.png" alt-text="Screenshot of alerts rule preview actions tab.":::
+
+ 1. In the **Details** tab, define the **Project details** and the **Alert rule details**.
+ 1. (Optional) In the **Advanced options** section, you can set several options, including whether to **Enable upon creation**, or to [**mute actions**](../alerts/alerts-unified-log.md#state-and-resolving-alerts) for a period after the alert rule fires.
+ :::image type="content" source="../alerts/media/alerts-log/alerts-rule-details-tab.png" alt-text="Screenshot of alerts rule preview details tab.":::
+ > [!NOTE]
+ > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select the **Check workspace linked storage** option in **Advanced options**, or the rule creation will fail because it won't meet the policy requirements.
+
+1. In the **Tags** tab, set any required tags on the alert rule resource.
+ :::image type="content" source="../alerts/media/alerts-log/alerts-rule-tags-tab.png" alt-text="Screenshot of alerts rule preview tags tab.":::
+
+1. In the **Review + create** tab, a validation will run and inform you of any issues.
+1. When validation passes and you have reviewed the settings, click the **Create** button.
+ :::image type="content" source="../alerts/media/alerts-log/alerts-rule-review-create.png" alt-text="Screenshot of alerts rule preview review and create tab.":::
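As referenced in the steps above, you can also validate the alert query outside the portal before creating the rule. A minimal sketch, assuming the Az.OperationalInsights module and a placeholder workspace ID:

```powershell
$query = @'
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
'@

# -WorkspaceId takes the workspace (customer) ID GUID, not the Azure resource ID.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
$result.Results | Format-Table
```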
## Next steps * [Monitor workloads running on virtual machines.](monitor-virtual-machine-workloads.md)
azure-sql Authentication Aad Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-aad-overview.md
With Azure AD authentication, you can centrally manage the identities of databas
- Azure AD supports similar connections from SQL Server Data Tools (SSDT) that use Active Directory Interactive Authentication. For more information, see [Azure Active Directory support in SQL Server Data Tools (SSDT)](/sql/ssdt/azure-active-directory) > [!NOTE]
-> Connecting to a SQL Server instance that's running on an Azure virtual machine (VM) is not supported using an Azure Active Directory account. Use a domain Active Directory account instead.
+> Connecting to a SQL Server instance that's running on an Azure virtual machine (VM) is not supported using Azure Active Directory or Azure Active Directory Domain Services. Use an Active Directory domain account instead.
The configuration steps include the following procedures to configure and use Azure Active Directory authentication.
azure-sql How To Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/how-to-content-reference-guide.md
ms.devlang:
Previously updated : 04/16/2019+ Last updated : 03/22/2022 # Azure SQL Managed Instance content reference [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
In this article you can find a content reference to various guides, scripts, and
- [Configure multi-factor auth](../database/authentication-mfa-ssms-configure.md) - [Configure auto-failover group](auto-failover-group-configure-sql-mi.md) to automatically fail over all databases on an instance to a secondary instance in another region in the event of a disaster. - [Configure a temporal retention policy](../database/temporal-tables-retention-policy.md)-- [Configure TDE with BYOK](../database/transparent-data-encryption-byok-configure.md)-- [Rotate TDE BYOK keys](../database/transparent-data-encryption-byok-key-rotation.md)-- [Remove a TDE protector](../database/transparent-data-encryption-byok-remove-tde-protector.md) - [Configure In-Memory OLTP](../in-memory-oltp-configure.md) - [Configure Azure Automation](../database/automation-manage.md) - [Transactional replication](replication-between-two-instances-configure-tutorial.md) enables you to replicate your data between managed instances, or from SQL Server on-premises to SQL Managed Instance, and vice versa. - [Configure threat detection](threat-detection-configure.md) – [threat detection](../database/threat-detection-overview.md) is a built-in Azure SQL Managed Instance feature that detects various potential attacks such as SQL injection or access from suspicious locations. - [Creating alerts](alerts-create.md) enables you to set up alerts on monitored metrics such as CPU utilization, storage space consumption, IOPS and others for SQL Managed Instance.
+### Transparent Data Encryption
+
+- [Configure TDE with BYOK](../database/transparent-data-encryption-byok-configure.md)
+- [Rotate TDE BYOK keys](../database/transparent-data-encryption-byok-key-rotation.md)
+- [Remove a TDE protector](../database/transparent-data-encryption-byok-remove-tde-protector.md)
+
+### Managed Instance link feature
+
+- [Prepare environment for link feature](managed-instance-link-preparation.md)
+- [Replicate database with link feature in SSMS](managed-instance-link-use-ssms-to-replicate-database.md)
+- [Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts](managed-instance-link-use-scripts-to-replicate-database.md)
+- [Failover database with link feature in SSMS - Azure SQL Managed Instance](managed-instance-link-use-ssms-to-failover-database.md)
+- [Failover (migrate) database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts](managed-instance-link-use-scripts-to-failover-database.md)
+- [Best practices with link feature for Azure SQL Managed Instance](link-feature-best-practices.md)
++ ## Monitoring and tuning - [Manual tuning](../database/performance-guidance.md)
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/log-replay-service-migrate.md
Previously updated : 01/04/2022 Last updated : 03/22/2022 # Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)
Functional limitations of LRS are:
- LRS must be started separately for each database pointing to the full URI path containing an individual database folder. - LRS can support up to 100 simultaneous restore processes per single managed instance.
+> [!NOTE]
+> If you require the database to be accessible in read-only mode during the migration, or if you require a migration window longer than 36 hours, consider the [link feature for Managed Instance](link-feature.md), an alternative online migration solution that provides these capabilities.
+ ## Troubleshooting After you start LRS, use the monitoring cmdlet (`get-azsqlinstancedatabaselogreplay` or `az_sql_midb_log_replay_show`) to see the status of the operation. If LRS fails to start after some time and you get an error, check for the most common issues:
After you start LRS, use the monitoring cmdlet (`get-azsqlinstancedatabaselogrep
- If you started LRS in autocomplete mode, was a valid filename for the last backup file specified? ## Next steps-- Learn more about [migrating SQL Server to SQL Managed instance](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
+- Learn more about [migrating to Managed Instance using the link feature](link-feature.md).
+- Learn more about [migrating from SQL Server to SQL Managed instance](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
- Learn more about [differences between SQL Server and SQL Managed Instance](transact-sql-tsql-differences-sql-server.md). - Learn more about [best practices to cost and size workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs).
azure-sql Managed Instance Link Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-preparation.md
Previously updated : 03/10/2022 Last updated : 03/22/2022 # Prepare environment for link feature - Azure SQL Managed Instance
If the connection is unsuccessful, verify the following items:
> [!CAUTION] > Proceed with the next steps only if there is validated network connectivity between your source and target environments. Otherwise, please troubleshoot network connectivity issues before proceeding any further.
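One way to validate connectivity from the SQL Server host is a simple TCP probe. This is a sketch under stated assumptions: the managed instance FQDN is a placeholder, and port 5022 is assumed here as the typical database mirroring endpoint port used by the link; substitute the port configured in your environment:

```powershell
# Test TCP reachability of the managed instance's link endpoint from the SQL Server host.
Test-NetConnection -ComputerName "<managed-instance-name>.<dns-zone>.database.windows.net" -Port 5022
```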
+## Migrate a certificate of a TDE-protected database
+
+If you are migrating a database on SQL Server protected by Transparent Data Encryption to a managed instance, the corresponding encryption certificate from the on-premises or Azure VM SQL Server needs to be migrated to managed instance before using the link. For detailed steps, see [Migrate a TDE cert to a managed instance](tde-certificate-migrate.md).
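As a minimal PowerShell sketch of the certificate upload step, assuming the Az.Sql module and that the certificate has already been exported from the source SQL Server as a password-protected .pfx file (the path and instance names below are placeholders; see the linked article for the full procedure):

```powershell
# Read the exported certificate and base64-encode it for upload.
$certBytes   = [System.IO.File]::ReadAllBytes("C:\certs\TDECert.pfx")
$privateBlob = [System.Convert]::ToBase64String($certBytes)
$password    = Read-Host -AsSecureString -Prompt "Certificate password"

# Upload the certificate to the target managed instance.
Add-AzSqlManagedInstanceTransparentDataEncryptionCertificate `
    -ResourceGroupName "my-resource-group" -ManagedInstanceName "my-managed-instance" `
    -PrivateBlob $privateBlob -Password $password
```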
## Install SSMS
azure-sql Managed Instance Link Use Scripts To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-replicate-database.md
Previously updated : 03/15/2022 Last updated : 03/22/2022 # Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
To replicate your databases to Azure SQL Managed Instance, you need the followin
- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms). - A properly [prepared environment](managed-instance-link-preparation.md).
+## Replicate database
+
+Use the instructions below to manually set up the link between your instance of SQL Server and your instance of SQL Managed Instance. Once the link is created, your source database gets a read-only replica copy on your target Azure SQL Managed Instance.
+
+> [!NOTE]
+> The link supports replication of user databases only. Replication of system databases is not supported. To replicate instance-level objects (stored in the master or msdb databases), we recommend scripting them out and running the T-SQL scripts on the destination instance.
+ ## Terminology and naming conventions In executing scripts from this user guide, it's important not to mistake, for example, the SQL Server or Managed Instance name for their fully qualified domain names.
azure-sql Managed Instance Link Use Ssms To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-replicate-database.md
Previously updated : 03/10/2022 Last updated : 03/22/2022 # Replicate database with link feature in SSMS - Azure SQL Managed Instance
To replicate your databases to Azure SQL Managed Instance, you need the followin
- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms). - A properly [prepared environment](managed-instance-link-preparation.md). - ## Replicate database Use the **New Managed Instance link** wizard in SQL Server Management Studio (SSMS) to setup the link between your instance of SQL Server and your instance of SQL Managed Instance. The wizard takes you through the process of creating the Managed Instance link. Once the link is created, your source database gets a read-only replica copy on your target Azure SQL Managed Instance.
+> [!NOTE]
+> The link supports replication of user databases only. Replication of system databases is not supported. To replicate instance-level objects (stored in the master or msdb databases), we recommend scripting them out and running the T-SQL scripts on the destination instance.
+ To set up the Managed Instance link, follow these steps: 1. Open SQL Server Management Studio (SSMS) and connect to your instance of SQL Server.
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
Some key differences:
- SQL Managed Instance supports [Azure AD authentication](../database/authentication-aad-overview.md) and [Windows Authentication for Azure Active Directory principals (Preview)](winauth-azuread-overview.md). - SQL Managed Instance automatically manages XTP filegroups and files for databases containing In-Memory OLTP objects. - SQL Managed Instance supports SQL Server Integration Services (SSIS) and can host an SSIS catalog (SSISDB) that stores SSIS packages, but they are executed on a managed Azure-SSIS Integration Runtime (IR) in Azure Data Factory. See [Create Azure-SSIS IR in Data Factory](../../data-factory/create-azure-ssis-integration-runtime.md). To compare the SSIS features, see [Compare SQL Database to SQL Managed Instance](../../data-factory/create-azure-ssis-integration-runtime.md#comparison-of-sql-database-and-sql-managed-instance).
+- SQL Managed Instance supports connectivity only through the TCP protocol. It does not support connectivity through named pipes.
### Administration features
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
ms.devlang:
- Previously updated : 06/25/2021+ Last updated : 03/22/2022 # Migration guide: SQL Server to Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]
To learn more about this migration option, see [Restore a database to Azure SQL
> [!NOTE] > A database restore operation is asynchronous and retryable. You might get an error in SQL Server Management Studio if the connection breaks or a time-out expires. Azure SQL Database will keep trying to restore database in the background, and you can track the progress of the restore using the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) and [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) views.
+## Migration tools
+
+In addition to [Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) and [native backup and restore](../../managed-instance/restore-sample-database-quickstart.md), consider the following migration tools when migrating a database to Managed Instance:
+
+|Migration option |When to use |Considerations |
+||||
+|[Azure SQL Migration extension for Azure Data Studio](../../../dms/migration-using-azure-data-studio.md) | - Migrate single databases or multiple databases at scale. </br> - Can run in both online (minimal downtime) and offline (acceptable downtime) modes. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Easy to set up and get started. </br> - Requires setup of a self-hosted integration runtime to access on-premises SQL Server and backups. </br> - Includes both assessment and migration capabilities. |
+|[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> </br> Supported sources: </br> - SQL Server (2008 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. </br> - Databases being restored during the migration process will be in a restoring mode and can't be used to read or write until the process has finished.|
+|[Link feature for Managed Instance](../../managed-instance/link-feature.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> - Minimum downtime migration is needed. </br> </br> Supported sources: </br> - SQL Server (2016 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - GCP Compute SQL Server VM | - The migration entails establishing VPN connectivity between SQL Server and Managed Instance, and opening inbound communication ports. </br> - Always On technology is used to replicate the database in near real time, making an exact replica of the SQL Server database on Managed Instance. </br> - The database can be used for R/O access on Managed Instance while migration is in progress. </br> - Provides the most performant minimum-downtime migration. |
## Data sync and cutover
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
ms.devlang:
- Previously updated : 09/07/2021+ Last updated : 03/22/2022 # Migration overview: SQL Server to Azure SQL Managed Instance [!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlmi.md)]
We recommend the following migration tools:
|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | This Azure service supports migration in the offline mode for applications that can afford downtime during the migration process. Unlike the continuous migration in online mode, offline mode migration runs a one-time restore of a full database backup from the source to the target. | |[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | SQL Managed Instance supports restore of native SQL Server database backups (.bak files). It's the easiest migration option for customers who can provide full database backups to Azure Storage.| |[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | This cloud service is enabled for SQL Managed Instance based on SQL Server log-shipping technology. It's a migration option for customers who can provide full, differential, and log database backups to Azure Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.|
+|[Link feature for Managed Instance](../../managed-instance/link-feature.md) | This feature enables online migration to Managed Instance using Always On technology. It's a migration option for customers who require the database on Managed Instance to be accessible in R/O mode while migration is in progress, who need to keep the migration running for prolonged periods of time (weeks or months at a time), who require true online replication to the Business Critical service tier, and who require the most performant minimum-downtime migration. |
+ The following table lists alternative migration tools:
The following table lists alternative migration tools:
|[Transactional replication](../../managed-instance/replication-transactional-overview.md) | Replicate data from source SQL Server database tables to SQL Managed Instance by providing a publisher-subscriber type migration option while maintaining transactional consistency. | |[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| The [bulk copy program (bcp) tool](/sql/tools/bcp-utility) copies data from an instance of SQL Server into a data file. Use the tool to export the data from your source and import the data file into the target SQL managed instance. </br></br> For high-speed bulk copy operations to move data to Azure SQL Managed Instance, you can use the [Smart Bulk Copy tool](/samples/azure-samples/smartbulkcopy/smart-bulk-copy/) to maximize transfer speed by taking advantage of parallel copy tasks. | |[Import Export Wizard/BACPAC](../../database/database-import.md?tabs=azure-powershell)| [BACPAC](/sql/relational-databases/data-tier-applications/data-tier-applications#bacpac) is a Windows file with a .bacpac extension that encapsulates a database's schema and data. You can use BACPAC to both export data from a SQL Server source and import the data back into Azure SQL Managed Instance. |
-|[Azure Data Factory](../../../data-factory/connector-azure-sql-managed-instance.md)| The [Copy activity](../../../data-factory/copy-activity-overview.md) in Azure Data Factory migrates data from source SQL Server databases to SQL Managed Instance by using built-in connectors and an [integration runtime](../../../data-factory/concepts-integration-runtime.md).</br> </br> Data Factory supports a wide range of [connectors](../../../data-factory/connector-overview.md) to move data from SQL Server sources to SQL Managed Instance. |
+|[Azure Data Factory](../../../data-factory/connector-azure-sql-managed-instance.md)| The [Copy activity](../../../data-factory/copy-activity-overview.md) in Azure Data Factory migrates data from source SQL Server databases to SQL Managed Instance by using built-in connectors and an [integration runtime](../../../data-factory/concepts-integration-runtime.md).</br> </br> Data Factory supports a wide range of [connectors](../../../data-factory/connector-overview.md) to move data from SQL Server sources to SQL Managed Instance. |
## Compare migration options
The following table compares the migration options that we recommend:
|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | - Migrate single databases or multiple databases at scale. </br> - Can accommodate downtime during the migration process. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Migrations at scale can be automated via [PowerShell](../../../dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md). </br> - Time to complete migration depends on database size and is affected by backup and restore time. </br> - Sufficient downtime might be required. | |[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | - Migrate individual line-of-business application databases. </br> - Quick and easy migration without a separate migration service or tool. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Database backup uses multiple threads to optimize data transfer to Azure Blob Storage, but partner bandwidth and database size can affect transfer rate. </br> - Downtime should accommodate the time required to perform a full backup and restore (which is a size of data operation).| |[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> </br> Supported sources: </br> - SQL Server (2008 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. </br> - Databases being restored during the migration process will be in a restoring mode and can't be used to read or write until the process has finished.|
+|[Link feature for Managed Instance](../../managed-instance/link-feature.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> - Minimum downtime migration is needed. </br> </br> Supported sources: </br> - SQL Server (2016 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - GCP Compute SQL Server VM | - The migration entails establishing VPN connectivity between SQL Server and Managed Instance, and opening inbound communication ports. </br> - Always On technology is used to replicate the database in near real time, making an exact replica of the SQL Server database on Managed Instance. </br> - The database can be used for R/O access on Managed Instance while migration is in progress. </br> - Provides the most performant minimum-downtime migration. |
The following table compares the alternative migration options:
azure-sql Availability Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/availability-group-overview.md
This article introduces Always On availability groups (AG) for SQL Server on Azure Virtual Machines (VMs).
+To get started, see the [availability group tutorial](availability-group-manually-configure-prerequisites-tutorial-multi-subnet.md).
+ ## Overview Always On availability groups on Azure Virtual Machines are similar to [Always On availability groups on-premises](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server), and rely on the underlying [Windows Server Failover Cluster](hadr-windows-server-failover-cluster-overview.md). However, since the virtual machines are hosted in Azure, there are a few additional considerations as well, such as VM redundancy, and routing traffic on the Azure network.
The following table provides a comparison of the options available:
## Next steps
-Review the [HADR best practices](hadr-cluster-best-practices.md) and then get started with deploying your availability group using the [Azure portal](availability-group-azure-portal-configure.md), [Azure CLI / PowerShell](./availability-group-az-commandline-configure.md), [Quickstart Templates](availability-group-quickstart-template-configure.md) or [manually](availability-group-manually-configure-prerequisites-tutorial-single-subnet.md).
+To get started, review the [HADR best practices](hadr-cluster-best-practices.md), and then deploy your availability group manually with the [availability group tutorial](availability-group-manually-configure-prerequisites-tutorial-multi-subnet.md).
-Alternatively, you can deploy a [clusterless availability group](availability-group-clusterless-workgroup-configure.md) or an availability group in [multiple regions](availability-group-manually-configure-multiple-regions.md).
To learn more, see:
azure-sql Business Continuity High Availability Disaster Recovery Hadr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/business-continuity-high-availability-disaster-recovery-hadr-overview.md
Geo-redundant storage (GRS) in Azure is implemented with a feature called geo-re
## Deployment architectures+ Azure supports these SQL Server technologies for business continuity: * [Always On availability groups](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server)
You can have a high-availability solution for SQL Server at a database level wit
| Technology | Example architectures | | | |
-| **Availability groups** |Availability replicas running in Azure VMs in the same region provide high availability. You need to configure a domain controller VM, because Windows failover clustering requires an Active Directory domain.<br/><br/> For higher redundancy and availability, the Azure VMs can be deployed in different [availability zones](../../../availability-zones/az-overview.md) as documented in the [availability group overview](availability-group-overview.md). If the SQL Server VMs in an availability group are deployed in availability zones, then use [Azure Standard Load Balancer](../../../load-balancer/load-balancer-overview.md) for the listener, as documented in the [Azure SQL VM CLI](./availability-group-az-commandline-configure.md) and [Azure Quickstart templates](availability-group-quickstart-template-configure.md) articles.<br/> ![Diagram that shows the "Domain Controller" above the "WSFC Cluster" made of the "Primary Replica", "Secondary Replica", and "File Share Witness".](./medi). |
-| **Failover cluster instances** |Failover cluster instances are supported on SQL Server VMs. Because the FCI feature requires shared storage, five solutions will work with SQL Server on Azure VMs: <br/><br/> - Using [Azure shared disks](failover-cluster-instance-azure-shared-disks-manually-configure.md) for Windows Server 2019. Shared managed disks are an Azure product that allow attaching a managed disk to multiple virtual machines simultaneously. VMs in the cluster can read or write to your attached disk based on the reservation chosen by the clustered application through SCSI Persistent Reservations (SCSI PR). SCSI PR is an industry-standard storage solution that's used by applications running on a storage area network (SAN) on-premises. Enabling SCSI PR on a managed disk allows you to migrate these applications to Azure as is. <br/><br/>- Using [Storage Spaces Direct \(S2D\)](failover-cluster-instance-storage-spaces-direct-manually-configure.md) to provide a software-based virtual SAN for Windows Server 2016 and later.<br/><br/>- Using a [Premium file share](failover-cluster-instance-premium-file-share-manually-configure.md) for Windows Server 2012 and later. Premium file shares are SSD backed, have consistently low latency, and are fully supported for use with FCI.<br/><br/>- Using storage supported by a partner solution for clustering. For a specific example that uses SIOS DataKeeper, see the blog entry [Failover clustering and SIOS DataKeeper](https://azure.microsoft.com/blog/high-availability-for-a-file-share-using-wsfc-ilb-and-3rd-party-software-sios-datakeeper/).<br/><br/>- Using shared block storage for a remote iSCSI target via Azure ExpressRoute. For example, NetApp Private Storage (NPS) exposes an iSCSI target via ExpressRoute with Equinix to Azure VMs.<br/><br/>For shared storage and data replication solutions from Microsoft partners, contact the vendor for any issues related to accessing data on failover.<br/><br/>|
+| [**Availability groups**](availability-group-overview.md) |Availability replicas running in Azure VMs in the same region provide high availability. You need to configure a domain controller VM, because Windows failover clustering requires an Active Directory domain.<br/><br/> For higher redundancy and availability, the Azure VMs can be deployed in different [availability zones](../../../availability-zones/az-overview.md) as documented in the [availability group overview](availability-group-overview.md). ![Diagram that shows the "Domain Controller" above the "WSFC Cluster" made of the "Primary Replica", "Secondary Replica", and "File Share Witness".](./medi). |
+| [**Failover cluster instances**](failover-cluster-instance-overview.md) |Failover cluster instances are supported on SQL Server VMs. Because the FCI feature requires shared storage, five solutions will work with SQL Server on Azure VMs: <br/><br/> - Using [Azure shared disks](failover-cluster-instance-azure-shared-disks-manually-configure.md) for Windows Server 2019. Shared managed disks are an Azure product that allows attaching a managed disk to multiple virtual machines simultaneously. VMs in the cluster can read or write to your attached disk based on the reservation chosen by the clustered application through SCSI Persistent Reservations (SCSI PR). SCSI PR is an industry-standard storage solution that's used by applications running on a storage area network (SAN) on-premises. Enabling SCSI PR on a managed disk allows you to migrate these applications to Azure as is. <br/><br/>- Using [Storage Spaces Direct \(S2D\)](failover-cluster-instance-storage-spaces-direct-manually-configure.md) to provide a software-based virtual SAN for Windows Server 2016 and later.<br/><br/>- Using a [Premium file share](failover-cluster-instance-premium-file-share-manually-configure.md) for Windows Server 2012 and later. Premium file shares are SSD backed, have consistently low latency, and are fully supported for use with FCI.<br/><br/>- Using storage supported by a partner solution for clustering. For a specific example that uses SIOS DataKeeper, see the blog entry [Failover clustering and SIOS DataKeeper](https://azure.microsoft.com/blog/high-availability-for-a-file-share-using-wsfc-ilb-and-3rd-party-software-sios-datakeeper/).<br/><br/>- Using shared block storage for a remote iSCSI target via Azure ExpressRoute. For example, NetApp Private Storage (NPS) exposes an iSCSI target via ExpressRoute with Equinix to Azure VMs.<br/><br/>For shared storage and data replication solutions from Microsoft partners, contact the vendor for any issues related to accessing data on failover.<br/><br/> To get started, [prepare your VM for FCI](failover-cluster-instance-prepare-vm.md)|
## Azure only: Disaster recovery solutions You can have a disaster recovery solution for your SQL Server databases in Azure by using availability groups, database mirroring, or backup and restore with storage blobs. | Technology | Example architectures | | | |
-| **Availability groups** |Availability replicas running across multiple datacenters in Azure VMs for disaster recovery. This cross-region solution helps protect against a complete site outage. <br/> ![Diagram that shows two regions with a "Primary Replica" and "Secondary Replica" connected by an "Asynchronous Commit".](./medi).|
+| [**Availability groups**](availability-group-overview.md) |Availability replicas running across multiple datacenters in Azure VMs for disaster recovery. This cross-region solution helps protect against a complete site outage. <br/> ![Diagram that shows two regions with a "Primary Replica" and "Secondary Replica" connected by an "Asynchronous Commit".](./medi).|
| **Database mirroring** |Principal and mirror servers running in different datacenters for disaster recovery. You must deploy them by using server certificates. SQL Server database mirroring is not supported for SQL Server 2008 or SQL Server 2008 R2 on an Azure VM. <br/>![Diagram that shows the "Principal" in one region connected to the "Mirror" in another region with "High Performance".](./media/business-continuity-high-availability-disaster-recovery-hadr-overview/azure-only-dr-dbmirroring.png) |
| **Backup and restore with Azure Blob storage** |Production databases backed up directly to Blob storage in a different datacenter for disaster recovery.<br/>![Diagram that shows a "Database" in one region backing up to "Blob Storage" in another region.](./medi). |
| **Replicate and fail over SQL Server to Azure with Azure Site Recovery** |Production SQL Server instance in one Azure datacenter replicated directly to Azure Storage in a different Azure datacenter for disaster recovery.<br/>![Diagram that shows a "Database" in one Azure datacenter using "ASR Replication" for disaster recovery in another datacenter. ](./medi). |
You can have a disaster recovery solution for your SQL Server databases in a hyb
| Technology | Example Architectures |
| --- | --- |
-| **Availability groups** |Some availability replicas running in Azure VMs and other replicas running on-premises for cross-site disaster recovery. The production site can be either on-premises or in an Azure datacenter.<br/>![Availability groups](./media/business-continuity-high-availability-disaster-recovery-hadr-overview/hybrid-dr-alwayson.png)<br/>Because all availability replicas must be in the same failover cluster, the cluster must span both networks (a multi-subnet failover cluster). This configuration requires a VPN connection between Azure and the on-premises network.<br/><br/>For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site.|
+| [**Availability groups**](availability-group-overview.md) |Some availability replicas running in Azure VMs and other replicas running on-premises for cross-site disaster recovery. The production site can be either on-premises or in an Azure datacenter.<br/>![Availability groups](./medi).|
| **Database mirroring** |One partner running in an Azure VM and the other running on-premises for cross-site disaster recovery by using server certificates. Partners don't need to be in the same Active Directory domain, and no VPN connection is required.<br/>![Database mirroring](./medi) is required.<br/><br/>For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site. SQL Server database mirroring is not supported for SQL Server 2008 or SQL Server 2008 R2 on an Azure VM. |
| **Log shipping** |One server running in an Azure VM and the other running on-premises for cross-site disaster recovery. Log shipping depends on Windows file sharing, so a VPN connection between the Azure virtual network and the on-premises network is required.<br/>![Log shipping](./media/business-continuity-high-availability-disaster-recovery-hadr-overview/hybrid-dr-log-shipping.png)<br/>For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site. |
| **Backup and restore with Azure Blob storage** |On-premises production databases backed up directly to Azure Blob storage for disaster recovery.<br/>![Backup and restore](./medi). |
azure-sql Failover Cluster Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-overview.md
This article introduces feature differences when you're working with failover cluster instances (FCI) for SQL Server on Azure Virtual Machines (VMs).
+To get started, [prepare your VM](failover-cluster-instance-prepare-vm.md).
+
## Overview

SQL Server on Azure VMs uses [Windows Server Failover Clustering (WSFC)](hadr-windows-server-failover-cluster-overview.md) functionality to provide local high availability through redundancy at the server-instance level: a failover cluster instance. An FCI is a single instance of SQL Server that's installed across WSFC (or simply the cluster) nodes and, possibly, across multiple subnets. On the network, an FCI appears to be a single instance of SQL Server running on a single computer. But the FCI provides failover from one WSFC node to another if the current node becomes unavailable.
azure-sql Security Considerations Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/security-considerations-best-practices.md
SQL Server features and capabilities provide a method of security at the data le
- Use [Azure Security Center](../../../defender-for-cloud/defender-for-cloud-introduction.md) to evaluate and take action to improve the security posture of your data environment. Capabilities such as [Azure Advanced Threat Protection (ATP)](../../database/threat-detection-overview.md) can be leveraged across your hybrid workloads to improve security evaluation and give the ability to react to risks. Registering your SQL Server VM with the [SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md) surfaces Azure Security Center assessments within the [SQL virtual machine resource](manage-sql-vm-portal.md) of the Azure portal.
- Leverage [Microsoft Defender for SQL](../../../defender-for-cloud/defender-for-sql-introduction.md) to discover and mitigate potential database vulnerabilities, as well as detect anomalous activities that could indicate a threat to your SQL Server instance and database layer.
- [Vulnerability Assessment](../../database/sql-vulnerability-assessment.md) is a part of [Microsoft Defender for SQL](../../../defender-for-cloud/defender-for-sql-introduction.md) that can discover and help remediate potential risks to your SQL Server environment. It provides visibility into your security state, and includes actionable steps to resolve security issues.
-- [Azure Advisor](../../../advisor/advisor-security-recommendations.md) analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, high availability, and security of your Azure resources.. Leverage Azure Advisor at the virtual machine, resource group, or subscription level to help identify and apply best practices to optimize your Azure deployments.
+- [Azure Advisor](../../../advisor/advisor-security-recommendations.md) analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, high availability, and security of your Azure resources. Leverage Azure Advisor at the virtual machine, resource group, or subscription level to help identify and apply best practices to optimize your Azure deployments.
- Use [Azure Disk Encryption](../../../virtual-machines/windows/disk-encryption-windows.md) when your compliance and security needs require you to encrypt the data end-to-end using your encryption keys, including encryption of the ephemeral (locally attached temporary) disk. A minimal CLI sketch follows this list.
- [Managed Disks are encrypted](../../../virtual-machines/disk-encryption.md) at rest by default using Azure Storage Service Encryption, where the encryption keys are Microsoft-managed keys stored in Azure.
- For a comparison of the managed disk encryption options, review the [managed disk encryption comparison chart](../../../virtual-machines/disk-encryption-overview.md#comparison).
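
A minimal Azure CLI sketch of enabling Azure Disk Encryption on a SQL Server VM; the resource group, VM, and key vault names are hypothetical:

```bash
# Hypothetical names; assumes the key vault is in the same region and enabled for disk encryption.
az vm encryption enable \
  --resource-group myResourceGroup \
  --name mySqlVm \
  --disk-encryption-keyvault myKeyVault \
  --volume-type ALL
```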
You don't want attackers to easily guess account names or passwords. Use the fol
- If you must use the **SA** login, enable the login after provisioning and assign a new strong password.
+> [!NOTE]
+> Azure Active Directory and Azure Active Directory Domain Services authentication aren't supported for connecting to a SQL Server instance that's running on an Azure virtual machine (VM). Use an Active Directory domain account instead.
+
## Auditing and reporting

[Auditing with Log Analytics](../../../azure-monitor/agents/data-sources-windows-events.md#configuring-windows-event-logs) documents events and writes to an audit log in a secure Azure Blob storage account. Log Analytics can be used to decipher the details of the audit logs. Auditing gives you the ability to save data to a separate storage account and create an audit trail of all events you select. You can also leverage Power BI against the audit log for quick analytics of and insights about your data, as well as to provide a view for regulatory compliance. To learn more about auditing at the VM and Azure levels, see [Azure security logging and auditing](../../../security/fundamentals/log-audit.md).
azure-sql Sql Server On Azure Vm Iaas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md
The following table details the benefits unlocked by the extension:
[!INCLUDE [SQL VM feature benefits](../../includes/sql-vm-feature-benefits.md)]
-
-
## Getting started
To get started with SQL Server on Azure VMs, review the following resources:
- **Pricing**: For information about the pricing structure of your SQL Server on Azure VM, review the [Pricing guidance](pricing-guidance.md).
- **Frequently asked questions**: For commonly asked questions and scenarios, review the [FAQ](frequently-asked-questions-faq.yml).
+## High availability & disaster recovery
+
+On top of the built-in [high availability provided by Azure virtual machines](../../../virtual-machines/availability.md), you can also leverage the high availability and disaster recovery features provided by SQL Server.
+
+To learn more, see the overviews of [Always On availability groups](availability-group-overview.md) and [Always On failover cluster instances](failover-cluster-instance-overview.md). For more details, see the [business continuity overview](business-continuity-high-availability-disaster-recovery-hadr-overview.md).
+
+To get started, see the tutorials for [availability groups](availability-group-manually-configure-prerequisites-tutorial-multi-subnet.md) or [preparing your VM for a failover cluster instance](failover-cluster-instance-prepare-vm.md).
+
## Licensing

To get started, choose a SQL Server virtual machine image with your required version, edition, and operating system. The following sections provide direct links to the Azure portal for the SQL Server virtual machine gallery images.
azure-web-pubsub Howto Local Debug Event Handler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-local-debug-event-handler.md
+
+ Title: How to troubleshoot and debug Azure Web PubSub event handler locally
+description: Guidance about debugging the event handler locally when developing with the Azure Web PubSub service.
+Last updated : 3/13/2022
+# How to troubleshoot and debug Azure Web PubSub event handler locally
+
+When a WebSocket connection connects to the Web PubSub service, the service formulates an HTTP POST request to the registered upstream and expects an HTTP response. We call this upstream the **event handler**, and it's responsible for handling incoming events by following the [Web PubSub CloudEvents specification](./reference-cloud-events.md).
+
+When the **event handler** runs locally, the local server isn't publicly accessible, so you need a tunnel tool to expose localhost to the internet so that the Web PubSub service can reach it.
+
+## Use localtunnel to expose localhost
+
+[localtunnel](https://github.com/localtunnel/localtunnel) is an open-source project that helps expose your localhost publicly. [Install the tool](https://github.com/localtunnel/localtunnel#installation) and run the following command (update the `<port>` value to the port your **event handler** listens to):
+
+```bash
+lt --port <port> --print-requests
+```
+
+localtunnel prints a URL (`https://<domain-name>.loca.lt`) that can be accessed from the internet, for example, `https://xxx.loca.lt`. The `--print-requests` option prints all incoming requests so that you can later verify whether the event handler is successfully invoked.
+
+> [!Tip]
+>
+> There's a known issue where [localtunnel goes offline when the server restarts](https://github.com/localtunnel/localtunnel/issues/466); see [the workaround](https://github.com/localtunnel/localtunnel/issues/466#issuecomment-1030599216).
+
+There are also other tools to choose from when debugging the webhook locally, for example, [ngrok](https://ngrok.com), [loophole](https://loophole.cloud/docs), and [TunnelRelay](https://github.com/OfficeDev/microsoft-teams-tunnelrelay).
+
+## Test if the event handler is working publicly
+
+Some tools might have issues returning response headers correctly. Try the following command to see if the tool is working properly:
+
+```bash
+curl https://<domain-name>.loca.lt/eventhandler -X OPTIONS -H "WebHook-Request-Origin: *" -H "ce-awpsversion: 1.0" --ssl-no-revoke -i
+```
+`https://<domain-name>.loca.lt/eventhandler` is the path that your **event handler** listens to. Update it if your **event handler** listens on a different path.
+
+Check whether the response headers contain the `webhook-allowed-origin` header. This curl command checks whether the WebHook [abuse protection request](./reference-cloud-events.md#webhook-validation) responds with the expected header.
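+
+If you want to script this check, here's a minimal bash sketch (using the same placeholder URL and `/eventhandler` path as above) that greps the response headers for `webhook-allowed-origin`:
+
+```bash
+# -s silences progress, -o /dev/null discards the body, -D - dumps the response headers to stdout.
+curl -s -o /dev/null -D - -X OPTIONS \
+  -H "WebHook-Request-Origin: *" -H "ce-awpsversion: 1.0" \
+  --ssl-no-revoke "https://<domain-name>.loca.lt/eventhandler" \
+  | grep -i "webhook-allowed-origin" \
+  && echo "Abuse protection check passed" \
+  || echo "Missing webhook-allowed-origin header"
+```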
+
+## Next steps
+
azure-web-pubsub Tutorial Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-subprotocol.md
Last updated 11/01/2021
# Tutorial: Publish and subscribe messages between WebSocket clients using subprotocol
-In [Build a chat app tutorial](./tutorial-build-chat.md), you've learned how to use WebSocket APIs to send and receive data with Azure Web PubSub. You can see there's no protocol needed when client is communicating with the service. For example, you can use `WebSocket.send()` to send any data and server will receive the data as is. This is easy to use, but the functionality is also limited. You can't, for example, specify the event name when sending the event to server, or publish message to other clients instead of sending it to server. In this tutorial, you'll learn how to use subprotocol to extend the functionality of client.
+In [Build a chat app tutorial](./tutorial-build-chat.md), you've learned how to use WebSocket APIs to send and receive data with Azure Web PubSub. You can see that no protocol is needed when the client is communicating with the service. For example, you can use `WebSocket.send()` to send any data, and the server receives the data as is. This is easy to use, but the functionality is also limited. You can't, for example, specify the event name when sending the event to your server, or publish a message to other clients instead of sending it to your server. In this tutorial, you'll learn how to use a subprotocol to extend the functionality of the client.
In this tutorial, you learn how to:
Copy the fetched **ConnectionString** and it will be used later in this tutorial
## Using a subprotocol
-The client can start a WebSocket connection using a specific [subprotocol](https://datatracker.ietf.org/doc/html/rfc6455#section-1.9). Azure Web PubSub service supports a subprotocol called `json.webpubsub.azure.v1` to empower the clients to do publish/subscribe directly instead of a round trip to the upstream server. Check [Azure Web PubSub supported JSON WebSocket subprotocol](./reference-json-webpubsub-subprotocol.md) for details about the subprotocol.
+The client can start a WebSocket connection using a specific [subprotocol](https://datatracker.ietf.org/doc/html/rfc6455#section-1.9). Azure Web PubSub service supports a subprotocol called `json.webpubsub.azure.v1` to empower the clients to do publish/subscribe directly through the Web PubSub service instead of a round trip to the upstream server. Check [Azure Web PubSub supported JSON WebSocket subprotocol](./reference-json-webpubsub-subprotocol.md) for details about the subprotocol.
> If you use other protocol names, they will be ignored by the service and passed through to the server in the connect event handler, so you can build your own protocols.
Also note that, instead of a plain text, client now receives a JSON message that
## Publish messages from client
-In the [Build a chat app](./tutorial-build-chat.md) tutorial, when client sends a message through WebSocket connection, it will trigger a user event at the server side. With subprotocol, client will have more functionalities by sending a JSON message. For example, you can publish message directly from client to other clients.
+In the [Build a chat app](./tutorial-build-chat.md) tutorial, when the client sends a message through the WebSocket connection to the Web PubSub service, the service triggers a user event at your server side. With the subprotocol, the client gets more functionality by sending a JSON message. For example, you can publish messages directly from a client through the Web PubSub service to other clients.
This will be useful if you want to stream a large amount of data to other clients in real time. Let's use this feature to build a log streaming application, which can stream console logs to the browser in real time.
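
As a sketch of what such a subprotocol message looks like, a client that has joined a group could publish to it with a JSON frame like the following; the group name `stream` is an assumption for this log-streaming example:

```json
{
    "type": "sendToGroup",
    "group": "stream",
    "data": "Hello from a client"
}
```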
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 03/02/2022 Last updated : 03/22/2022
# Azure Bastion FAQ
The browser must support HTML 5. Use the Microsoft Edge browser or Google Chrome
For more information, see the [pricing page](https://aka.ms/BastionHostPricing).
-### Is IPv6 supported?
+### <a name="ipv6"></a>Is IPv6 supported?
At this time, IPv6 isn't supported. Azure Bastion supports IPv4 only. This means that you can only assign an IPv4 public IP address to your Bastion resource, and that you can use your Bastion to connect to IPv4 target VMs. You can also use your Bastion to connect to dual-stack target VMs, but you'll only be able to send and receive IPv4 traffic via Azure Bastion.
At this time, IPv6 isn't supported. Azure Bastion supports IPv4 only. This means
Azure Bastion doesn't move or store customer data out of the region it's deployed in.
-### Can I use Azure Bastion with Azure Private DNS Zones?
+### <a name="dns"></a>Can I use Azure Bastion with Azure Private DNS Zones?
Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select doesn't overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, please make sure that the host virtual network is not linked to a private DNS zone with the following exact names:

* blob.core.windows.net
-* vault.azure.com
* core.windows.net
+* vaultcore.windows.net
+* vault.azure.com
* azure.com
-You may use a private DNS zone ending with one of the names listed above (ex: dummy.blob.core.windows.net) as long as it is not one of the recommended DNS zone names for an Azure service listed [here](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
+You may use a private DNS zone ending with one of the names listed above (for example, dummy.blob.core.windows.net).
The use of Azure Bastion is also not supported with Azure Private DNS Zones in national clouds.
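
Before you deploy, you can check which private DNS zones are linked to the host virtual network. A minimal Azure CLI sketch, assuming hypothetical resource group and zone names:

```bash
# List private DNS zones in a resource group, then list the virtual-network links for one zone.
az network private-dns zone list --resource-group myResourceGroup --query "[].name" --output tsv
az network private-dns link vnet list --resource-group myResourceGroup --zone-name blob.core.windows.net --output table
```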
Review any error messages and [raise a support request in the Azure portal](../a
Azure Bastion is deployed within VNets or peered VNets, and is associated to an Azure region. You're responsible for deploying Azure Bastion to a Disaster Recovery (DR) site VNet. In the event of an Azure region failure, perform a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
-## <a name="vm"></a>VMs and connections
+## <a name="vm"></a>VM features and connections
### <a name="roles"></a>Are any roles required to access a virtual machine?
No. When you connect to a VM using Azure Bastion, you don't need a public IP on
### <a name="rdpssh"></a>Do I need an RDP or SSH client?
-No. You don't need an RDP or SSH client to access the RDP/SSH to your Azure virtual machine in your Azure portal. Use the [Azure portal](https://portal.azure.com) to let you get RDP/SSH access to your virtual machine directly in the browser.
+No. You can access your virtual machine from the Azure portal using your browser. For available connections and methods, see [About VM connections and features](vm-about.md).
+
+### <a name="native-client"></a>Can I connect to my VM using a native client?
+
+Yes. You can connect to a VM from your local computer using a native client. See [Connect to a VM using a native client](connect-native-client-windows.md).
### <a name="agent"></a>Do I need an agent running in the Azure virtual machine? No. You don't need to install an agent or any software on your browser or your Azure virtual machine. The Bastion service is agentless and doesn't require any additional software for RDP/SSH.
-### <a name="rdpfeaturesupport"></a>What features are supported in an RDP session?
+### <a name="rdpfeaturesupport"></a>What features are supported for VM sessions?
+
+See [About VM connections and features](vm-about.md) for supported features.
+
+### <a name="audio"></a>Is remote audio available for VMs?
-At this time, only text copy/paste is supported. Feel free to share your feedback about new features on the [Azure Bastion Feedback page](https://feedback.azure.com/d365community/forum/8ae9bf04-8326-ec11-b6e6-000d3a4f0789?c=c109f019-8326-ec11-b6e6-000d3a4f0789).
+Yes. See [About VM connections and features](vm-about.md#audio).
-### Does Azure Bastion support file transfer?
+### <a name="file-transfer"></a>Does Azure Bastion support file transfer?
-Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or SSH client. File transfer is supported using the native client only. At this time, you canΓÇÖt upload or download files using PowerShell or via the Azure portal. To learn more, see [Upload and download files using the native client](vm-upload-download-native.md).
+Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or SSH client. At this time, you canΓÇÖt upload or download files using PowerShell or via the Azure portal. For more information, see [Upload and download files using the native client](vm-upload-download-native.md).
### <a name="aadj"></a>Does Bastion hardening work with AADJ VM extension-joined VMs?
Azure Bastion currently supports the following keyboard layouts inside the VM:
To establish the correct key mappings for your target language, you must set either the keyboard layout on your local computer to English (United States) or the keyboard layout inside the target VM to English (United States). That is, the keyboard layout on your local computer must be set to English (United States) while the keyboard layout on your target VM is set to your target language, or vice versa.
-To set English (United States) as your keyboard layout on a Windows workstation, navigate to Settings > Time & Language > Lanugage & Region. Under "Preferred languages," select "Add a language" and add English (United States). You will then be able to see your keyboard layouts on your toolbar. To set English (United States) as your keyboard layout, select "ENG" on your toolbar or click Windows + Spacebar to open keyboard layouts.
+To set English (United States) as your keyboard layout on a Windows workstation, navigate to Settings > Time & Language > Language & Region. Under "Preferred languages," select "Add a language" and add English (United States). You'll then be able to see your keyboard layouts on your toolbar. To set English (United States) as your keyboard layout, select "ENG" on your toolbar or click Windows + Spacebar to open keyboard layouts.
### <a name="res"></a>What is the maximum screen resolution supported via Bastion?
bastion Bastion Vm Copy Paste https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-copy-paste.md
After you connect to the virtual machine using the [Azure portal ](https://porta
## Next steps
-Read the [Bastion FAQ](bastion-faq.md).
+For more VM features, see [About VM connections and features](vm-about.md).
bastion Bastion Vm Full Screen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-full-screen.md
Select the **Fullscreen** button to switch the session to a full screen experien
## Next steps
-Read the [Bastion FAQ](bastion-faq.md).
-Learn how to [Copy and paste](bastion-vm-copy-paste.md) to and from an Azure VM.
+For more VM features, see [About VM connections and features](vm-about.md).
bastion Vm About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-about.md
For steps and more information, see [Upload or download files to a VM using a na
## <a name="audio"></a>Remote audio
-You can enable remote audio output for your VM. Some VMs automatically enable this setting, others require you to enable audio settings manually. The settings are changed on the VM itself. Your Bastion deployment doesn't need any special configuration settings to enable remote audio output.
-For steps, see the [Deploy Bastion](tutorial-create-host-portal.md#audio) tutorial.
+## <a name="faq"></a>FAQ
+
+For FAQs, see [Bastion FAQ - VM connections and features](bastion-faq.md#vm).
## Next steps
-For frequently asked questions, see the VM section of the [Azure Bastion FAQ](bastion-faq.md).
+[Quickstart: Deploy Azure Bastion with default settings](quickstart-host-portal.md)
bastion Vm Upload Download Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md
This section helps you upload files from your local computer to your target VM o
## Next steps
-* Read the [Bastion FAQ](bastion-faq.md)
+For more VM features, see [About VM connections and features](vm-about.md).
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
Previously updated : 02/14/2022 Last updated : 03/22/2022
See below for information about changes to Speech services and resources.
## What's new?
+* Custom speech-to-text container v3.1.0 released in March 2022, with support for getting display models.
* STT Service January 2022, added 18 new locales.
* Speech SDK 1.20.0 released January 2022. Updates include extended programming language support for DialogServiceConnector, Unity on Linux, enhancements to IntentRecognizer, added support for Python 3.10, and a fix to remove a 10-second delay while stopping a speech recognizer (when using a PushAudioInputStream, and no new audio is pushed in after StopContinuousRecognition is called).
* Speech CLI 1.20.0 released January 2022. Updates include microphone input for Speaker recognition and expanded support for Intent recognition.
* TTS Service January 2022, added 10 new languages and variants for Neural text-to-speech and new voices in preview for en-GB, fr-FR and de-DE.
-* Containers v3.0.0 released January 2022, with support for using containers in disconnected environments.
## Release notes
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Container | Features | Latest | Release status |
|--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.0.0 | Generally available |
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.0.0 | Generally available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.1.0 | Generally available |
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.1.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.0.0 | Generally available |
This command:
#### Base model download on the custom speech-to-text container
-Starting in v2.6.0 of the custom-speech-to-text container, you can get the available base model information by using option `BaseModelLocale=<locale>`. This option gives you a list of available base models on that locale under your billing account. For example:
+Starting in v2.6.0 of the custom-speech-to-text container, you can get the available base model information by using option `BaseModelLocale={LOCALE}`. This option gives you a list of available base models on that locale under your billing account. For example:
```bash
docker run --rm -it \
Checking available base model for en-us
2020/10/30 21:54:21 [Fatal] Please run this tool again and assign --modelId '<one above base model id>'. If no model id listed above, it means currently there is no available base model for en-us
```
+#### Display model download on the custom speech-to-text container
+Starting in v3.1.0 of the custom-speech-to-text container, you can get information about the available display models and choose to download those models into your custom speech-to-text container for greatly improved final display output.
+
+You can query or download any or all of these display model types: rescoring (`Rescore`), punctuation (`Punct`), resegmentation (`Resegment`), and wfstitn (`Wfstitn`). Alternatively, you can use the `FullDisplay` option (with or without the other types) to query or download all types of display models.
+
+Set the `BaseModelLocale` to query the latest available display model on the target locale. If you include multiple display model types, the command will return the latest available display models for each type. For example:
+
+```bash
+# Replace "Punct Rescore Resegment Wfstitn" with FullDisplay or a space-separated subset of display model types.
+# (A trailing comment after a backslash would break the line continuation, so the note goes above the command.)
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+Punct Rescore Resegment Wfstitn \
+BaseModelLocale={LOCALE} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+Set the `DisplayLocale` to download the latest available display model on the target locale. When you set `DisplayLocale`, you must also specify `FullDisplay` or a space-separated subset of display models. The command will download the latest available display model for each specified type. For example:
+
+```bash
+# Replace "Punct Rescore Resegment Wfstitn" with FullDisplay or a space-separated subset of display model types.
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+Punct Rescore Resegment Wfstitn \
+DisplayLocale={LOCALE} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+Set one model ID parameter to download a specific display model: rescoring (`RescoreId`), punctuation (`PunctId`), resegmentation (`ResegmentId`), or wfstitn (`WfstitnId`). This is similar to how you would download a base model via the `ModelId` parameter. For example, to download a rescoring display model, you can use the following command with the `RescoreId` parameter:
+
+```bash
+docker run --rm -it \
+mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text \
+RescoreId={RESCORE_MODEL_ID} \
+Eula=accept \
+Billing={ENDPOINT_URI} \
+ApiKey={API_KEY}
+```
+
+> [!NOTE]
+> If you set more than one query or download parameter, the command will prioritize in this order: `BaseModelLocale`, model ID, and then `DisplayLocale` (only applicable for display models).
+
#### Custom pronunciation on the custom speech-to-text container

Starting in v2.5.0 of the custom-speech-to-text container, you can get custom pronunciation results in the output. All you need to do is have your own custom pronunciation rules set up in your custom model and mount the model to a custom-speech-to-text container.
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)
-Release note for `3.0.0-amd64`:
+Release note for `3.1.0-amd64`:
**Features**
-* Support for using containers in [disconnected environments](disconnected-containers.md).
+* Support for the full display process. The final display output is expected to be greatly improved when this feature is enabled.
+* Security upgrade.
Note that due to the phrase lists feature, the size of this container image has increased.

| Image Tags | Notes | Digest |
|-|:-|:-|
-| `latest` | | `sha256:7eff5d7610f20622b5c5cae6235602774108f2de7aeebe2148016b6d232f7c42`|
-| `3.0.0-amd64` | | `sha256:7eff5d7610f20622b5c5cae6235602774108f2de7aeebe2148016b6d232f7c42`|
+| `latest` | | `sha256:7a3d08885cb65eb42d4ad085a54853fb057cca0cffecff7708a241afcb2c110a`|
+| `3.1.0-amd64` | | `sha256:7a3d08885cb65eb42d4ad085a54853fb057cca0cffecff7708a241afcb2c110a`|
# [Previous version](#tab/previous)
+Release note for `3.0.0-amd64`:
+
+**Features**
+* Support for using containers in [disconnected environments](disconnected-containers.md).
+
Release note for `2.18.0-amd64`:

Regular monthly upgrade
Release note for `2.5.0-amd64`:
| Image Tags | Notes |
|-|:--|
+| `3.0.0-amd64` | |
| `2.18.0-amd64` | |
| `2.17.0-amd64` | |
| `2.16.0-amd64` | |
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia
# [Latest version](#tab/current)
-Release note for `3.0.1-amd64-<locale>`:
-
-**Features**
-* Support new locale `uk-UA`.
-
-Release note for `3.0.0-amd64-<locale>`:
+Release note for `3.1.0-amd64-<locale>`:
**Features**
-* Support for using containers in [disconnected environments](disconnected-containers.md).
+* Support for the full display process, enabled by default on all listed locales. The size of the images increases by 2-5 GB because of the full display models, and for `en-US` the container uses an extra 150 MB of memory.
+* Security upgrade.
Note that due to the phrase lists feature, the size of this container image has increased.

| Image Tags | Notes |
|-|:--|
| `latest` | Container image with the `en-US` locale. |
-| `3.0.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.0.0-amd64-en-us`. |
+| `3.1.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.1.0-amd64-en-us`. |
This container has the following locales available.
-| Locale for v3.0.1 | Notes | Digest |
+| Locale for v3.1.0 | Notes | Digest |
|--|:--|:--|
-| `uk-ua` | Container image with the `uk-UA` locale. | `sha256:af8c370a7ed3e231a611ea37053e809fa7e52ea514c70f4c85f133c7b28a4fba` |
-
-| Locale for v3.0.0 | Notes | Digest |
-|--|:--|:--|
-| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
-| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:86ed164f98f1d1776faa9bda4a7846bc0ad9232dd0613ae506fd5698d4823787` |
-| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:43fa641504d6e8b89e31f6eaa033ad680bb586b93fa3853747051a570fbf05ca` |
-| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:001c0d3ac2e3fec59993e001a8278696b847b14a1bd1ed5c843d18959b3d3d4e` |
-| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:1707f21fa9cbe5bd2275023620718f1a98429e5f5fb7279211951500d30a6e65` |
-| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
-| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:d237ecf21770b493c5aaf3bbab5ae9260aba121518996192d13924b4c5e999f4` |
-| `ar-om` | Container image with the `ar-OM` locale. | `sha256:d1e4e45ba5df3a9307433e8a631f02142c246e5a2fbf9c25edf97e290008c63a` |
-| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
-| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
-| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:a51c67916deac54a73ea1bb5084b511c34cd649764bd5935aac9f527bf33baf0` |
-| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:f0d70b8ab0e324ee42f0ca7fe57fa828c29ac1df11261676f7168b60139a0e3c` |
-| `ca-es` | Container image with the `ca-ES` locale. | `sha256:b876d37460b96cddb76fd74f0dfa64ad97399681eda27969e30f74d703a16b05` |
-| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:73bb40181bae4da3d3aaa1f77f5b831156ca496fbd065b4944b6e49f0807d9e9` |
-| `da-dk` | Container image with the `da-DK` locale. | `sha256:a0b65559390af1100941983d850bf549f1aefe3ce56574de1a8cab63d5c52694` |
-| `de-at` | Container image with the `de-AT` locale. | `sha256:78030695ef9ff10e5a465340e211f1ca76dce569b9e8bd8c7758d28d2139965e` |
-| `de-ch` | Container image with the `de-CH` locale. | `sha256:7705a78e3ea3d05bdf1a09876b9cd4c03a8734463f350e0eed81cc989710bcd5` |
-| `de-de` | Container image with the `de-DE` locale. | `sha256:d10066583f94bc3db96d2afd28fa42e880bd71e3f6195cc764bda79d039a58c7` |
-| `el-gr` | Container image with the `el-GR` locale. | `sha256:d8b7d28287e016baacb4df7e3bf2d5cd7f6124ec136446200ad70b9472ee8207` |
-| `en-au` | Container image with the `en-AU` locale. | `sha256:493742b671c10b6767b371b8bb687241cbf38f53929a2ecc18979d531be136b4` |
-| `en-ca` | Container image with the `en-CA` locale. | `sha256:61fa4cb2a671b504f06fa89f4d90ade6ccfbc378d93d1eada0cc47434b45601f` |
-| `en-gb` | Container image with the `en-GB` locale. | `sha256:3b0f47356aab046c176bf2a5a5187404e3e5a9a50387bd29d35ce2371d49beff` |
-| `en-hk` | Container image with the `en-HK` locale. | `sha256:bf98a2553b9555254968f6deeeee85e83462cb45a04adcd9c35be62c1cf51924` |
-| `en-ie` | Container image with the `en-IE` locale. | `sha256:952a611a3911faf893313b51529d392eeac82a4a8abe542c49ca7aa9c89e8a48` |
-| `en-in` | Container image with the `en-IN` locale. | `sha256:6ad1168ac4e278ed65d66a8a5de441327522b27619dfdf6ecae52a68ab04b214` |
-| `en-nz` | Container image with the `en-NZ` locale. | `sha256:03174464fab551c34df402102cac3b4f4b4efc0a4250a14c07f35318787ad9e2` |
-| `en-ph` | Container image with the `en-PH` locale. | `sha256:e38bbe4ae16dc792be3b6e9e2e488588fdd9d12eed08f330cd5dfc5d318b74e0` |
-| `en-sg` | Container image with the `en-SG` locale. | `sha256:58476a88fb548b0ad18d54a118024c628a555a67d75fa5fdf7e860cc43b25272` |
-| `en-us` | Container image with the `en-US` locale. | `sha256:e1ea7a52fd45ab959d10b597dc7f455f02100973f3edc8a67d25dd8cb373bac3` |
-| `en-za` | Container image with the `en-ZA` locale. | `sha256:e5eabe477da8f6fb11a8083c67723026f268ba1a35372d1dffde85cc9d09bae9` |
-| `es-ar` | Container image with the `es-AR` locale. | `sha256:b5c1279f30ee301d7e8d28cb084262da50a5c495feca36f04489a29ecd24f24f` |
-| `es-bo` | Container image with the `es-BO` locale. | `sha256:d2e70e3fe109c6dcf02d75830efae3ea13955a1e68f590eeaf2c42239cd4a00a` |
-| `es-cl` | Container image with the `es-CL` locale. | `sha256:70c5975df4b4ae2f301e73e35e21eaef306c50ee62a51526c1c29a0487ef8f0c` |
-| `es-co` | Container image with the `es-CO` locale. | `sha256:b81dd737747421ebb71b8f02cd16534a80809f2c792425d04f78388b4e9b10f1` |
-| `es-cr` | Container image with the `es-CR` locale. | `sha256:2b5a469f630a647626a99a78d5bfe9afec331a18ea895b42bd5aa68bebdca73e` |
-| `es-cu` | Container image with the `es-CU` locale. | `sha256:5c5c54cfa3da78579e872fec36c49902e402ddb14ffbe4ef4c273e6767219ccf` |
-| `es-do` | Container image with the `es-DO` locale. | `sha256:d417cedae4b7eb455700084e3e305552bbd6b2c20d0bba3d03d4a95052002dbc` |
-| `es-ec` | Container image with the `es-EC` locale. | `sha256:82258abbba72a1238dfa334da0046ffd760480d793f18cbea1441c3fdb596255` |
-| `es-es` | Container image with the `es-ES` locale. | `sha256:efad3474a24ba7662e3d10808e31e2641580e206aa381f5d43af79604b367fc0` |
-| `es-gt` | Container image with the `es-GT` locale. | `sha256:86dc0a12fdd237abc00e14e26714440e311e9945dd07ff662ca24881f23a5b2f` |
-| `es-hn` | Container image with the `es-HN` locale. | `sha256:52139db949594a13a1c6f98f49b30d880d9426ce2f239bfde6090e3532fd7351` |
-| `es-mx` | Container image with the `es-MX` locale. | `sha256:0ab8ea9a70f378f6684e4fc7d9d4db0596e8790badf0217b4c415f4857bce38f` |
-| `es-ni` | Container image with the `es-NI` locale. | `sha256:512853c5af3b374b82848d3c5117d69264473a08d460b85d072829e36e3bd92f` |
-| `es-pa` | Container image with the `es-PA` locale. | `sha256:c3a871d1f4b6c22e78e92f96ac3af435129ea2cfbe80cfef97d10d88e68ac763` |
-| `es-pe` | Container image with the `es-PE` locale. | `sha256:bd1ea7e260276d0ea29506270bc790c4eabb76b6d6026776b523628eb7806b08` |
-| `es-pr` | Container image with the `es-PR` locale. | `sha256:005e23623966802ed801373457ad57bf19aed5031f5fcd197cacb387082c7d95` |
-| `es-py` | Container image with the `es-PY` locale. | `sha256:fb0c71003d5dd73d93e10c04b7316d13129152ca293f16ac2b8b91361ecde1ca` |
-| `es-sv` | Container image with the `es-SV` locale. | `sha256:23d1e068a418845a1783e6f9beb365782dc95baea21304780ea4023444d63352` |
-| `es-us` | Container image with the `es-US` locale. | `sha256:268ef7cec34fd0e2449f15d924a263566dcfb147b66f1596c3b593cdc9080119` |
-| `es-uy` | Container image with the `es-UY` locale. | `sha256:229e68ab16658646556f76d61e1e675aa39751130b8e87f1aba1d723036230e2` |
-| `es-ve` | Container image with the `es-VE` locale. | `sha256:764337c9d5145986a1e292dfd6b69fa2a2cc335e0bd9e53c4d4f45b8dff05cc4` |
-| `et-ee` | Container image with the `et-EE` locale. | `sha256:4ba59e9b68686386055771d240d8b5ca8e5e12723c7017b15e2674f525c46395` |
-| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:aa8040e8467194f654cb7c8444e027757053e0322e87940b2f4434e09686cec3` |
-| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:b213da609a2f2c8631a71d3e74f6d155e237ddbf1367574a3e6f0fc2144c4b73` |
-| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:6b5f98a5c8573dc03557b62ccda6ce9a1426b0ad6f2d432788294c1e41cd9deb` |
-| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:b5f5955b4baf9d431fc46c1a8c1afe994e6811ff9ae575106954b1c40821a7d3` |
-| `gu-in` | Container image with the `gu-IN` locale. | `sha256:a1bc229571563ca5769664a2457e42cce82216dfee5820f871b6a870e29f6d26` |
-| `hi-in` | Container image with the `hi-IN` locale. | `sha256:f28b07751cbebcd020e0fba17811fc97ee1f49e53e5584e970d6db30f60e34e9` |
-| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:c4bea85be0d7236b86b1a2315de764cb094ab1e567699b90a86e53716ed467f6` |
-| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:189bc20605d93b17c739361364b94462d5007ff237ec8b28c0aa0f7aadc495ab` |
-| `it-it` | Container image with the `it-IT` locale. | `sha256:572887127159990a3d44f6f5c3e5616d3df5b9f7b4696487407dcda619570d72` |
-| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:4b961e96614ce3845d5456db84163ae3a14acca6a8d7adf1ebded8a242f59be8` |
-| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:1b2ca4c7ff3361241c6eb5090fd739f9d72c52a6ffcaf05b1d305ae9cac76661` |
-| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:4733f6390a776707fc082fd025a73b5e73532c859c6add626640b1705decaa8b` |
-| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:84ebb7ab14e9ccd04da97747bc2611bff3f5d7bb2494e16c7ca257f1dacf3742` |
-| `mr-in` | Container image with the `mr-IN` locale. | `sha256:ca3edf97d26ff985cfe10b1bdcec2f65825758884cf706caca6855c6b865f4fd` |
-| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:f3f9e5ee72abed81d93dae46a13be28491f833e96e43312d685388deee39af67` |
-| `nb-no` | Container image with the `nb-NO` locale. | `sha256:e0f5df9b49ebcd341fa4de899d4840c7b9e0cb291d5d6b3c8269f5e40420933c` |
-| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:895ce0059b0fafe145053e1521fb63188a6d856753467ab85bd24aa8926102c1` |
-| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:f74afc0b64860b97db449a8c6892fb1cb484e0ab9a02b15ab4e984a0f3a7c62d` |
-| `pt-br` | Container image with the `pt-BR` locale. | `sha256:963c4cca989f14861d56aafa1a58ad14f489f7b5ba2ac6052a617d8950ee507c` |
-| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:468d4511672566d7d3de35a1c6150fdaa70634664a2553ae871c11806b024cb8` |
-| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:4de5d11d77c1e7015090da0a82b81b3328973a389d99afeb2c188e70464bc544` |
-| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:8a643ce653efcbf7e8042d87d89e456cd44ab6f100970ed4a38a1d6b5491a6c0` |
-| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:8b11c142024cee74d395a5bf0d8e6ed980304ac7a127302b08ad248fb66d82ea` |
-| `sl-si` | Container image with the `sl-SI` locale. | `sha256:bd140766406a58c679e4cf7f4c48448d2cd9f9cacd005c1f5bfd4bf4642b4193` |
-| `sv-se` | Container image with the `sv-SE` locale. | `sha256:a47258027fdaf47b776e1e6f58d71a130574f42a0bccf14ba0a1d215d4546add` |
-| `ta-in` | Container image with the `ta-IN` locale. | `sha256:376cb98f99e733c4f6015cb283923bb07f3c126341959b0ba1cb5472619a2836` |
-| `te-in` | Container image with the `te-IN` locale. | `sha256:d0ae77a2e5539dbdd809d895eea123320fb4aab24932af38b769d26968a4150c` |
-| `th-th` | Container image with the `th-TH` locale. | `sha256:522c14b9cbb6a218839942bf7c36b3fc207f26cf6ce4068bc883e8dd7890237b` |
-| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:c5f1ef181cb8287c917b9db2ee68eaa24b4f05e59372a00081bec70797bd54d1` |
-| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:110e1e79bbb10254f9bd735b1c9cb70b0bf5a88f73da7a68985d2c861a40f201` |
-| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:c1e0830d3cb04c8151c2e9c8c6eb0fb97036a09829fc8539a06bb07ca68a8e5e` |
-| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:dd1ef4db3784594ba8c7c211f6196714690fbd360a8f81f5b109e8a023585b3d` |
+| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:e01ede581475934ee2873fd28cfabc500aa7334e1a0994bda5f9748f30a85fb5` |
+| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:644e871429b75a0168e6a3d893a12de719e58a1a95d99c4a82bfeced634eeefd` |
+| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:82a9affeae9379b0503be0fce64f6844e0afffa7b24ad8813c2c8ce8390c452a` |
+| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:161278801bce4fdf34eb4d9569d814266ec59f5e951b625d39d493d1208f12d0` |
+| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:9d833b3f3d3fc773a25e667b15f41856922c6ae862cc09ae9b64857a4fb23824` |
+| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:e01ede581475934ee2873fd28cfabc500aa7334e1a0994bda5f9748f30a85fb5` |
+| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:bc8e49cbb6eef0bc268e8f8bfa0f41a1730aa9e38d79d91232226ddcc41a417e` |
+| `ar-om` | Container image with the `ar-OM` locale. | `sha256:2bbe8d69f1042f71d92fa66a768b538d116a2db87e0f024d0a61999325507a9f` |
+| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:e01ede581475934ee2873fd28cfabc500aa7334e1a0994bda5f9748f30a85fb5` |
+| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:e01ede581475934ee2873fd28cfabc500aa7334e1a0994bda5f9748f30a85fb5` |
+| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:cc7d1d5272637104966e0acd290881a19e17d6466599d2dde853cf81c5dd249b` |
+| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:9af266e51be11dbcf116e19ded5fd39848aeee800f0366125c4b635535dc2b2b` |
+| `ca-es` | Container image with the `ca-ES` locale. | `sha256:ab8c80b66404d2720d086b32a413b87064b7884240e8fbfbc5ee2f8c9ca9b174` |
+| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:c24b56ad144600bf301b3960b4b829afffec2d7d7cc64e707aa1f34de8e48e79` |
+| `da-dk` | Container image with the `da-DK` locale. | `sha256:52cb5fc8660adb4e92e7b1f886f19c529367733543c8df05c9db5f87e1a8c08f` |
+| `de-at` | Container image with the `de-AT` locale. | `sha256:cb8a8febd76d831d9505c7f61c841499f097c0a2c09aae1374668f9bc38fb952` |
+| `de-ch` | Container image with the `de-CH` locale. | `sha256:d9d6624f14dc0f35fe9b20da5fe601b40abfc98b89092b40a934532d3cfe6dc9` |
+| `de-de` | Container image with the `de-DE` locale. | `sha256:0c9da57b2c5312de859de1ad53e244bdb68103ce072e0ea42ab3078ff0339132` |
+| `el-gr` | Container image with the `el-GR` locale. | `sha256:f9c2a3172bd5671c590ae309d4ae49ea45ec696c112f6d9bf3b6007e94f0bdc5` |
+| `en-au` | Container image with the `en-AU` locale. | `sha256:6df7313d6900b847ef29a44e967cac4e9a013e7b89493041b7a2e45f1e3fbba8` |
+| `en-ca` | Container image with the `en-CA` locale. | `sha256:39896bc91b15c5fe6c1f17cc79ff6c391c0b2051135a206ac64ae2a590b37fe9` |
+| `en-gb` | Container image with the `en-GB` locale. | `sha256:c251b61c54fc9e0b269901e8c9b12447f3a4facc8688aded07eed378b35b2e48` |
+| `en-hk` | Container image with the `en-HK` locale. | `sha256:767fa5f3668692de9f1913786b1a7e343dc359c4c0c3275f6823b3bb4934e278` |
+| `en-ie` | Container image with the `en-IE` locale. | `sha256:24dabb8ecf0d06e0a4d67b72dda7e436bd31ee27e28370fc2f9cbed790f716ff` |
+| `en-in` | Container image with the `en-IN` locale. | `sha256:662f6a8119eee2ed3093b65d70b0bb2301723dece4bfa270ecb8e007ac102a76` |
+| `en-nz` | Container image with the `en-NZ` locale. | `sha256:3ba5b5d4f6fd68f242ceea8640545ab520f704d315672c9631e7ad151d75f1b7` |
+| `en-ph` | Container image with the `en-PH` locale. | `sha256:374e48220c590cc8d89805d35a10b6a8e8a46ae16b5199775b75289b8f58c636` |
+| `en-sg` | Container image with the `en-SG` locale. | `sha256:c084a1ccd2efcabcd260be4c0c22ced90f8666f56f16e31ea039d677b74597d7` |
+| `en-us` | Container image with the `en-US` locale. | `sha256:1aab9d20a7bef256e5140a94c7fb6662a4ada2c8c5cb5ed75836f4e9101f2d13` |
+| `en-za` | Container image with the `en-ZA` locale. | `sha256:8dc4c73b7100b9f115f1707a2021874866ae4e1024d2ea2db7b1f3734b1121fc` |
+| `es-ar` | Container image with the `es-AR` locale. | `sha256:8f0735b84d9e8aef4e8fc9eb30d263c871b2de93080744b086bf97de1c0aa21b` |
+| `es-bo` | Container image with the `es-BO` locale. | `sha256:b6e1d56befb6279c6937b9eb0530426ca2fbb2b94e0b06bf442a72ca7d2db1f8` |
+| `es-cl` | Container image with the `es-CL` locale. | `sha256:74bc32f5853a70bf26707bea6815f31b593335f73693c940fac66cc00a17b4ea` |
+| `es-co` | Container image with the `es-CO` locale. | `sha256:58c12bd3ed9876665a08b221e483f233e2096e9f4364fffd24487412dde3939a` |
+| `es-cr` | Container image with the `es-CR` locale. | `sha256:12da366cc0d126bd5831994348acb255a9cd05a3929af7affa513458d981352d` |
+| `es-cu` | Container image with the `es-CU` locale. | `sha256:edb533d5cddb0188b900b70a71b95c759b8f4f2d17da12c4959621644af0a92e` |
+| `es-do` | Container image with the `es-DO` locale. | `sha256:cdf27024a777d00c776df3c3175f5fe4778de8fa6da77658d6dc538a6bf64f57` |
+| `es-ec` | Container image with the `es-EC` locale. | `sha256:20c6c5cf52f75b178c5ff9ef2d5d1990ca8fcce0056ff3939ccbb7a10fbdca1f` |
+| `es-es` | Container image with the `es-ES` locale. | `sha256:0de092a826f3e847ff60629837cb06949afde6690cbe8da8805998d114f477c2` |
+| `es-gt` | Container image with the `es-GT` locale. | `sha256:1e3d2dad7350fcd91c6db64fde8375c83fff2ed34ad2e1758773400d6b42c0b3` |
+| `es-hn` | Container image with the `es-HN` locale. | `sha256:0bd284e5edbdb93ab2e802d2f05ab44cf484e884b3bca72171ee8f72b19cdb4a` |
+| `es-mx` | Container image with the `es-MX` locale. | `sha256:aa033f75eb4abf13e9f40b452140780cb8c416881f512a2b9c18ce7d46032bc0` |
+| `es-ni` | Container image with the `es-NI` locale. | `sha256:6f15f6fd2ef3d029bf8f508268cf26e22b67b4f02fcde4c06414a47c57817f6d` |
+| `es-pa` | Container image with the `es-PA` locale. | `sha256:4d99d615ea7b0e33acbf69c593c4cfd03a83bb15f30e7d71ee9d9c606870408e` |
+| `es-pe` | Container image with the `es-PE` locale. | `sha256:9ba5fbb172e32c526a9662190a1c106a635f0637dd44a00bf0b4927b7388c31f` |
+| `es-pr` | Container image with the `es-PR` locale. | `sha256:7419eed485084b99dc527a5ecc48b130bbf9a13807c9dff0fe9097655fcb6ea7` |
+| `es-py` | Container image with the `es-PY` locale. | `sha256:485a0b0c5831973358249e084dee285e0ae9c960e3f550f7d971b89e3d1e3d91` |
+| `es-sv` | Container image with the `es-SV` locale. | `sha256:56108b647909b60053053369907eb46f23049fbf7296135165f58e10dc218214` |
+| `es-us` | Container image with the `es-US` locale. | `sha256:eb68916713f83c4bc8ae9a621b022d73f7737b9c9479ad79acb76657fd8f6c0f` |
+| `es-uy` | Container image with the `es-UY` locale. | `sha256:779199a5133d8690d5c983baa8aac8088a30eb746ba80d90d91b7c54eb4396de` |
+| `es-ve` | Container image with the `es-VE` locale. | `sha256:c3bc655c9bc2d4d44aca7a0c9bfe0be68f2640fd48c3112ccde4667553282a79` |
+| `et-ee` | Container image with the `et-EE` locale. | `sha256:bd11b40b81920c3796ceb9ba8d6bf61be1aff4f16bffb6100d8ca860d40d8552` |
+| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:6df492b55b7934f207b1c6d047e57c2cb13a10147e26c2b4864f4d02f8a4e786` |
+| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:75e756cfc34e335eaed8bcf12aaaad54043e3727129fddd0e260818680fbb987` |
+| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:2e88bcbc57355244eb8a543f32805252ffee8d20f8724e106a7eeae6d9c41d33` |
+| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:e77f9716f2db3ea8dc98dfb8edbb2a6c9cbb11137d0a46d592dc4079b4797c8c` |
+| `gu-in` | Container image with the `gu-IN` locale. | `sha256:1c1ba6d7af9d69e9b5beeb3a1a2502e2747bf7f5350a4dbee14508698806542b` |
+| `hi-in` | Container image with the `hi-IN` locale. | `sha256:29f270a67e87e57643cf6b3e6407547becc94ffd7d7c32f5c466feab9f43df9e` |
+| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:6c3b8b8ea8d0f04b486efce72c061569dd993c251ca1c874ee7b75cfe28e078b` |
+| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:0db167fa90de896318b56d6a534f7ea5241f6e4414081b0b1f07b60aaeacd211` |
+| `it-it` | Container image with the `it-IT` locale. | `sha256:1bcca1c51d282f5f78a5323df0d927cd6161a49fded1e0c0ce83a999a60a0cec` |
+| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:4b9ba7b007d9dc0872976768a44604c31e5f671648082cfdb3f395bd8d52e3a8` |
+| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:1fd625c589dde9547db32232f7151f073851e84764e439bb48928bf3697dc280` |
+| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:a6aaacb60377fd137d52251b6855cbf6f584b6c2407cde7ef4be9f0315293ddf` |
+| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:3d9feeb98a4650164704161b9c38aa4eb441505b7ef45b227fe471026224c590` |
+| `mr-in` | Container image with the `mr-IN` locale. | `sha256:03fe81b7a7da4573a012400955e3f9505e2cfee5b40b1e59c32abd7d65a84636` |
+| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:0c76e4a8695421a80b87b5ed348c7ba4771486dc7befcb8c13b0f8216018e360` |
+| `nb-no` | Container image with the `nb-NO` locale. | `sha256:c617c263f7b090591d347148b3c9970cbde3b5864e602f7eb54361bcdbad80a8` |
+| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:db8c0341e7d7995dda288fe7232f7ce810994317738759bbc7adb0ed93050701` |
+| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:5393a0c72d49b50836a8820f2a7fa7c381606cbf1ea03e19b60c48cad320dd29` |
+| `pt-br` | Container image with the `pt-BR` locale. | `sha256:fc888ddc056bba30be9aebbf94fbcdf322ee35f20805d80914ddd1b9fe146510` |
+| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:10ac67830ef5fa49c8bbeef1bb6becbbb34875d6fdfcc8bd78abef030b71853b` |
+| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:44195bab625e430089a333ae25348ddbb7052e1ce9c223e2bb1c444a4add25ea` |
+| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:bb5ead4296e76463b2f54108afa8c4cb5a12f65470012763fa973af466960400` |
+| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:d6922755249be4c85a1e119366e5044e62986c0e065abfc4cec8ec5d788b1f28` |
+| `sl-si` | Container image with the `sl-SI` locale. | `sha256:5f3225b01d023eb99711033f1531d32b8627207ebf9cdb560f3429106677eb85` |
+| `sv-se` | Container image with the `sv-SE` locale. | `sha256:9fd69d7b18579e9813d69a53fa008e4e43521e81c3b8db0f91df047f7f85d53f` |
+| `ta-in` | Container image with the `ta-IN` locale. | `sha256:a2c4ced531ebcf41e6653d72eaa689b665748d1fbce428ed1542d2183a0c23c3` |
+| `te-in` | Container image with the `te-IN` locale. | `sha256:8e9df83624627f58cebb0b0654035e746b60f4fd30eeb44d9bf30733ace02886` |
+| `th-th` | Container image with the `th-TH` locale. | `sha256:060ea9db2f153e5c3296e3280616f6c4c24423a352237e91db8ffc5939db5038` |
+| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:fffe495ae24885a69447f553ad0c8a8fec7fc805163f0aae6f27374c9c5f7ae6` |
+| `uk-ua` | Container image with the `uk-UA` locale. | `sha256:8ee7118a2a4e3aa9d7d9fd21770d274dd4081a6c7290374db8892dbc604ea819` |
+| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:fea665892e189ff3778cb2ff35865da9c7aa5c62a2711d359203ae0f288dadc0` |
+| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:b46dd6bbcf01ce28d10279bd4c6a749f0f5871b293c059f35db43314b4e7a4a8` |
+| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:ff98176465d574d4eede6339605dbd21d1197ca82d33b86b961d86d114283927` |
# [Previous version](#tab/previous)
+Release note for `3.0.1-amd64-<locale>`:
+
+**Features**
+* Support new locale `uk-UA`.
+
+Release note for `3.0.0-amd64-<locale>`:
+
+**Features**
+* Support for using containers in [disconnected environments](disconnected-containers.md).
+
Release note for `2.18.0-amd64-<locale>`: Regular monthly release
Release note for `2.5.0-amd64-<locale>`:
**Fixes**
* Fixes an issue with running as a non-root user in Diarization mode
+| Locale for v3.0.1 | Notes | Digest |
+|--|:--|:--|
+| `uk-ua` | Container image with the `uk-UA` locale. | `sha256:af8c370a7ed3e231a611ea37053e809fa7e52ea514c70f4c85f133c7b28a4fba` |
+
+| Locale for v3.0.0 | Notes | Digest |
+|--|:--|:--|
+| `ar-ae` | Container image with the `ar-AE` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
+| `ar-bh` | Container image with the `ar-BH` locale. | `sha256:86ed164f98f1d1776faa9bda4a7846bc0ad9232dd0613ae506fd5698d4823787` |
+| `ar-eg` | Container image with the `ar-EG` locale. | `sha256:43fa641504d6e8b89e31f6eaa033ad680bb586b93fa3853747051a570fbf05ca` |
+| `ar-iq` | Container image with the `ar-IQ` locale. | `sha256:001c0d3ac2e3fec59993e001a8278696b847b14a1bd1ed5c843d18959b3d3d4e` |
+| `ar-jo` | Container image with the `ar-JO` locale. | `sha256:1707f21fa9cbe5bd2275023620718f1a98429e5f5fb7279211951500d30a6e65` |
+| `ar-kw` | Container image with the `ar-KW` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
+| `ar-lb` | Container image with the `ar-LB` locale. | `sha256:d237ecf21770b493c5aaf3bbab5ae9260aba121518996192d13924b4c5e999f4` |
+| `ar-om` | Container image with the `ar-OM` locale. | `sha256:d1e4e45ba5df3a9307433e8a631f02142c246e5a2fbf9c25edf97e290008c63a` |
+| `ar-qa` | Container image with the `ar-QA` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
+| `ar-sa` | Container image with the `ar-SA` locale. | `sha256:3ce4ab5141dd46d2fce732b3659cba8fc70ab83fa5b37bf43f4dfa2efca5aef7` |
+| `ar-sy` | Container image with the `ar-SY` locale. | `sha256:a51c67916deac54a73ea1bb5084b511c34cd649764bd5935aac9f527bf33baf0` |
+| `bg-bg` | Container image with the `bg-BG` locale. | `sha256:f0d70b8ab0e324ee42f0ca7fe57fa828c29ac1df11261676f7168b60139a0e3c` |
+| `ca-es` | Container image with the `ca-ES` locale. | `sha256:b876d37460b96cddb76fd74f0dfa64ad97399681eda27969e30f74d703a16b05` |
+| `cs-cz` | Container image with the `cs-CZ` locale. | `sha256:73bb40181bae4da3d3aaa1f77f5b831156ca496fbd065b4944b6e49f0807d9e9` |
+| `da-dk` | Container image with the `da-DK` locale. | `sha256:a0b65559390af1100941983d850bf549f1aefe3ce56574de1a8cab63d5c52694` |
+| `de-at` | Container image with the `de-AT` locale. | `sha256:78030695ef9ff10e5a465340e211f1ca76dce569b9e8bd8c7758d28d2139965e` |
+| `de-ch` | Container image with the `de-CH` locale. | `sha256:7705a78e3ea3d05bdf1a09876b9cd4c03a8734463f350e0eed81cc989710bcd5` |
+| `de-de` | Container image with the `de-DE` locale. | `sha256:d10066583f94bc3db96d2afd28fa42e880bd71e3f6195cc764bda79d039a58c7` |
+| `el-gr` | Container image with the `el-GR` locale. | `sha256:d8b7d28287e016baacb4df7e3bf2d5cd7f6124ec136446200ad70b9472ee8207` |
+| `en-au` | Container image with the `en-AU` locale. | `sha256:493742b671c10b6767b371b8bb687241cbf38f53929a2ecc18979d531be136b4` |
+| `en-ca` | Container image with the `en-CA` locale. | `sha256:61fa4cb2a671b504f06fa89f4d90ade6ccfbc378d93d1eada0cc47434b45601f` |
+| `en-gb` | Container image with the `en-GB` locale. | `sha256:3b0f47356aab046c176bf2a5a5187404e3e5a9a50387bd29d35ce2371d49beff` |
+| `en-hk` | Container image with the `en-HK` locale. | `sha256:bf98a2553b9555254968f6deeeee85e83462cb45a04adcd9c35be62c1cf51924` |
+| `en-ie` | Container image with the `en-IE` locale. | `sha256:952a611a3911faf893313b51529d392eeac82a4a8abe542c49ca7aa9c89e8a48` |
+| `en-in` | Container image with the `en-IN` locale. | `sha256:6ad1168ac4e278ed65d66a8a5de441327522b27619dfdf6ecae52a68ab04b214` |
+| `en-nz` | Container image with the `en-NZ` locale. | `sha256:03174464fab551c34df402102cac3b4f4b4efc0a4250a14c07f35318787ad9e2` |
+| `en-ph` | Container image with the `en-PH` locale. | `sha256:e38bbe4ae16dc792be3b6e9e2e488588fdd9d12eed08f330cd5dfc5d318b74e0` |
+| `en-sg` | Container image with the `en-SG` locale. | `sha256:58476a88fb548b0ad18d54a118024c628a555a67d75fa5fdf7e860cc43b25272` |
+| `en-us` | Container image with the `en-US` locale. | `sha256:e1ea7a52fd45ab959d10b597dc7f455f02100973f3edc8a67d25dd8cb373bac3` |
+| `en-za` | Container image with the `en-ZA` locale. | `sha256:e5eabe477da8f6fb11a8083c67723026f268ba1a35372d1dffde85cc9d09bae9` |
+| `es-ar` | Container image with the `es-AR` locale. | `sha256:b5c1279f30ee301d7e8d28cb084262da50a5c495feca36f04489a29ecd24f24f` |
+| `es-bo` | Container image with the `es-BO` locale. | `sha256:d2e70e3fe109c6dcf02d75830efae3ea13955a1e68f590eeaf2c42239cd4a00a` |
+| `es-cl` | Container image with the `es-CL` locale. | `sha256:70c5975df4b4ae2f301e73e35e21eaef306c50ee62a51526c1c29a0487ef8f0c` |
+| `es-co` | Container image with the `es-CO` locale. | `sha256:b81dd737747421ebb71b8f02cd16534a80809f2c792425d04f78388b4e9b10f1` |
+| `es-cr` | Container image with the `es-CR` locale. | `sha256:2b5a469f630a647626a99a78d5bfe9afec331a18ea895b42bd5aa68bebdca73e` |
+| `es-cu` | Container image with the `es-CU` locale. | `sha256:5c5c54cfa3da78579e872fec36c49902e402ddb14ffbe4ef4c273e6767219ccf` |
+| `es-do` | Container image with the `es-DO` locale. | `sha256:d417cedae4b7eb455700084e3e305552bbd6b2c20d0bba3d03d4a95052002dbc` |
+| `es-ec` | Container image with the `es-EC` locale. | `sha256:82258abbba72a1238dfa334da0046ffd760480d793f18cbea1441c3fdb596255` |
+| `es-es` | Container image with the `es-ES` locale. | `sha256:efad3474a24ba7662e3d10808e31e2641580e206aa381f5d43af79604b367fc0` |
+| `es-gt` | Container image with the `es-GT` locale. | `sha256:86dc0a12fdd237abc00e14e26714440e311e9945dd07ff662ca24881f23a5b2f` |
+| `es-hn` | Container image with the `es-HN` locale. | `sha256:52139db949594a13a1c6f98f49b30d880d9426ce2f239bfde6090e3532fd7351` |
+| `es-mx` | Container image with the `es-MX` locale. | `sha256:0ab8ea9a70f378f6684e4fc7d9d4db0596e8790badf0217b4c415f4857bce38f` |
+| `es-ni` | Container image with the `es-NI` locale. | `sha256:512853c5af3b374b82848d3c5117d69264473a08d460b85d072829e36e3bd92f` |
+| `es-pa` | Container image with the `es-PA` locale. | `sha256:c3a871d1f4b6c22e78e92f96ac3af435129ea2cfbe80cfef97d10d88e68ac763` |
+| `es-pe` | Container image with the `es-PE` locale. | `sha256:bd1ea7e260276d0ea29506270bc790c4eabb76b6d6026776b523628eb7806b08` |
+| `es-pr` | Container image with the `es-PR` locale. | `sha256:005e23623966802ed801373457ad57bf19aed5031f5fcd197cacb387082c7d95` |
+| `es-py` | Container image with the `es-PY` locale. | `sha256:fb0c71003d5dd73d93e10c04b7316d13129152ca293f16ac2b8b91361ecde1ca` |
+| `es-sv` | Container image with the `es-SV` locale. | `sha256:23d1e068a418845a1783e6f9beb365782dc95baea21304780ea4023444d63352` |
+| `es-us` | Container image with the `es-US` locale. | `sha256:268ef7cec34fd0e2449f15d924a263566dcfb147b66f1596c3b593cdc9080119` |
+| `es-uy` | Container image with the `es-UY` locale. | `sha256:229e68ab16658646556f76d61e1e675aa39751130b8e87f1aba1d723036230e2` |
+| `es-ve` | Container image with the `es-VE` locale. | `sha256:764337c9d5145986a1e292dfd6b69fa2a2cc335e0bd9e53c4d4f45b8dff05cc4` |
+| `et-ee` | Container image with the `et-EE` locale. | `sha256:4ba59e9b68686386055771d240d8b5ca8e5e12723c7017b15e2674f525c46395` |
+| `fi-fi` | Container image with the `fi-FI` locale. | `sha256:aa8040e8467194f654cb7c8444e027757053e0322e87940b2f4434e09686cec3` |
+| `fr-ca` | Container image with the `fr-CA` locale. | `sha256:b213da609a2f2c8631a71d3e74f6d155e237ddbf1367574a3e6f0fc2144c4b73` |
+| `fr-fr` | Container image with the `fr-FR` locale. | `sha256:6b5f98a5c8573dc03557b62ccda6ce9a1426b0ad6f2d432788294c1e41cd9deb` |
+| `ga-ie` | Container image with the `ga-IE` locale. | `sha256:b5f5955b4baf9d431fc46c1a8c1afe994e6811ff9ae575106954b1c40821a7d3` |
+| `gu-in` | Container image with the `gu-IN` locale. | `sha256:a1bc229571563ca5769664a2457e42cce82216dfee5820f871b6a870e29f6d26` |
+| `hi-in` | Container image with the `hi-IN` locale. | `sha256:f28b07751cbebcd020e0fba17811fc97ee1f49e53e5584e970d6db30f60e34e9` |
+| `hr-hr` | Container image with the `hr-HR` locale. | `sha256:c4bea85be0d7236b86b1a2315de764cb094ab1e567699b90a86e53716ed467f6` |
+| `hu-hu` | Container image with the `hu-HU` locale. | `sha256:189bc20605d93b17c739361364b94462d5007ff237ec8b28c0aa0f7aadc495ab` |
+| `it-it` | Container image with the `it-IT` locale. | `sha256:572887127159990a3d44f6f5c3e5616d3df5b9f7b4696487407dcda619570d72` |
+| `ja-jp` | Container image with the `ja-JP` locale. | `sha256:4b961e96614ce3845d5456db84163ae3a14acca6a8d7adf1ebded8a242f59be8` |
+| `ko-kr` | Container image with the `ko-KR` locale. | `sha256:1b2ca4c7ff3361241c6eb5090fd739f9d72c52a6ffcaf05b1d305ae9cac76661` |
+| `lt-lt` | Container image with the `lt-LT` locale. | `sha256:4733f6390a776707fc082fd025a73b5e73532c859c6add626640b1705decaa8b` |
+| `lv-lv` | Container image with the `lv-LV` locale. | `sha256:84ebb7ab14e9ccd04da97747bc2611bff3f5d7bb2494e16c7ca257f1dacf3742` |
+| `mr-in` | Container image with the `mr-IN` locale. | `sha256:ca3edf97d26ff985cfe10b1bdcec2f65825758884cf706caca6855c6b865f4fd` |
+| `mt-mt` | Container image with the `mt-MT` locale. | `sha256:f3f9e5ee72abed81d93dae46a13be28491f833e96e43312d685388deee39af67` |
+| `nb-no` | Container image with the `nb-NO` locale. | `sha256:e0f5df9b49ebcd341fa4de899d4840c7b9e0cb291d5d6b3c8269f5e40420933c` |
+| `nl-nl` | Container image with the `nl-NL` locale. | `sha256:895ce0059b0fafe145053e1521fb63188a6d856753467ab85bd24aa8926102c1` |
+| `pl-pl` | Container image with the `pl-PL` locale. | `sha256:f74afc0b64860b97db449a8c6892fb1cb484e0ab9a02b15ab4e984a0f3a7c62d` |
+| `pt-br` | Container image with the `pt-BR` locale. | `sha256:963c4cca989f14861d56aafa1a58ad14f489f7b5ba2ac6052a617d8950ee507c` |
+| `pt-pt` | Container image with the `pt-PT` locale. | `sha256:468d4511672566d7d3de35a1c6150fdaa70634664a2553ae871c11806b024cb8` |
+| `ro-ro` | Container image with the `ro-RO` locale. | `sha256:4de5d11d77c1e7015090da0a82b81b3328973a389d99afeb2c188e70464bc544` |
+| `ru-ru` | Container image with the `ru-RU` locale. | `sha256:8a643ce653efcbf7e8042d87d89e456cd44ab6f100970ed4a38a1d6b5491a6c0` |
+| `sk-sk` | Container image with the `sk-SK` locale. | `sha256:8b11c142024cee74d395a5bf0d8e6ed980304ac7a127302b08ad248fb66d82ea` |
+| `sl-si` | Container image with the `sl-SI` locale. | `sha256:bd140766406a58c679e4cf7f4c48448d2cd9f9cacd005c1f5bfd4bf4642b4193` |
+| `sv-se` | Container image with the `sv-SE` locale. | `sha256:a47258027fdaf47b776e1e6f58d71a130574f42a0bccf14ba0a1d215d4546add` |
+| `ta-in` | Container image with the `ta-IN` locale. | `sha256:376cb98f99e733c4f6015cb283923bb07f3c126341959b0ba1cb5472619a2836` |
+| `te-in` | Container image with the `te-IN` locale. | `sha256:d0ae77a2e5539dbdd809d895eea123320fb4aab24932af38b769d26968a4150c` |
+| `th-th` | Container image with the `th-TH` locale. | `sha256:522c14b9cbb6a218839942bf7c36b3fc207f26cf6ce4068bc883e8dd7890237b` |
+| `tr-tr` | Container image with the `tr-TR` locale. | `sha256:c5f1ef181cb8287c917b9db2ee68eaa24b4f05e59372a00081bec70797bd54d1` |
+| `zh-cn` | Container image with the `zh-CN` locale. | `sha256:110e1e79bbb10254f9bd735b1c9cb70b0bf5a88f73da7a68985d2c861a40f201` |
+| `zh-hk` | Container image with the `zh-HK` locale. | `sha256:c1e0830d3cb04c8151c2e9c8c6eb0fb97036a09829fc8539a06bb07ca68a8e5e` |
+| `zh-tw` | Container image with the `zh-TW` locale. | `sha256:dd1ef4db3784594ba8c7c211f6196714690fbd360a8f81f5b109e8a023585b3d` |
+
| Image Tags | Notes |
|--|:--|
| `2.18.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `2.18.0-amd64-en-us`.|
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
-* Release notes for version `1.5.0`:
- * Added new languages: Bulgarian, Croatian, Czech, Hungarian, Indonesian, Latvian, Lithuanian, Slovak, Slovenian, Tamil, and Turkish.
- * Improved performance on short audios.
+* Release notes for version `1.6.1`:
+ * Added new language: Ukrainian.
| Image Tags | Notes |
|--|:--|
| `latest` | |
-| `1.5.0-amd64-preview` | |
+| `1.6.1-amd64-preview` | |
# [Previous versions](#tab/previous)

| Image Tags | Notes |
|--|:--|
+| `1.5.0-amd64-preview` | |
| `1.3.0-amd64-preview` | |
| `1.2.0-amd64-preview` | |
| `1.1.0-amd64-preview` | |
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/create-project.md
Previously updated : 02/10/2022 Last updated : 03/21/2022
Before you start using custom text classification, you will need several things:
## Azure resources
-Before you start using custom classification, you will need a Azure Language resource. We recommend following the steps below for creating your resource in the Azure portal . Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text classification.
+Before you start using custom classification, you will need an Azure Language resource. We recommend following the steps below for creating your resource in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom text classification.
You also will need an Azure storage account where you will upload your `.txt` files that will be used to train a model to classify text.
You also will need an Azure storage account where you will upload your `.txt` fi
### Create a new resource from Language Studio
-If it's your first time logging in, you'll see a window appear in [Language Studio](https://aka.ms/languageStudio) that will let you choose a language resource or create a new one. You can also create a resource by clicking the settings icon in the top-right corner, selecting **Resources**, then clicking **Create a new resource**.
+If it's your first time logging in, you'll see a window in [Language Studio](https://aka.ms/languageStudio) that will let you choose a language resource or create a new one. You can also create a resource by clicking the settings icon in the top-right corner, selecting **Resources**, then clicking **Create a new resource**.
> [!IMPORTANT]
> * To use Custom Text Classification, you'll need a Language resource in **West US 2** or **West Europe** with the Standard (**S**) pricing tier.
To set proper roles on your storage account:
[!INCLUDE [Storage connection note](../includes/storage-account-note.md)]
+### Enable CORS for your storage account
+
+Make sure to allow (**GET, PUT, DELETE**) methods when enabling Cross-Origin Resource Sharing (CORS). Add an asterisk (`*`) to the fields, and set the recommended value of 500 for the maximum age.
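If you'd rather apply these CORS settings programmatically than through the portal, here's a minimal sketch using the `azure-storage-blob` Python package. The connection string is a placeholder, and allowing all origins and headers mirrors the asterisk guidance above; adjust to your environment.

```python
from azure.storage.blob import BlobServiceClient, CorsRule

# Placeholder: use your own storage account connection string.
service = BlobServiceClient.from_connection_string("<connection-string>")

rule = CorsRule(
    allowed_origins=["*"],                     # asterisk, per the guidance above
    allowed_methods=["GET", "PUT", "DELETE"],  # the three required methods
    allowed_headers=["*"],
    exposed_headers=["*"],
    max_age_in_seconds=500,                    # recommended maximum age
)

# Apply the CORS rule to the storage account's blob service.
service.set_service_properties(cors=[rule])
```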
++
## Prepare training data

* As a prerequisite for creating a custom text classification project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training files from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data in less time.
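For illustration, a minimal upload sketch using the `azure-storage-blob` package; the connection string, container name, and local folder are placeholders to replace with your own values.

```python
from pathlib import Path
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("<container-name>")

# Upload every .txt training file from a local folder to the blob container.
for path in Path("./training-files").glob("*.txt"):
    with open(path, "rb") as data:
        container.upload_blob(name=path.name, data=data, overwrite=True)
```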
To set proper roles on your storage account:
## Next steps
-After your project is created, you can start [tagging your data](tag-data.md), which will inform your text classification model how to interpret text, and is used for training and evaluation.
+After your project is created, you can start [tagging your data](tag-data.md), which will inform your text classification model how to interpret text, and is used for training and evaluation.
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/create-project.md
Previously updated : 02/10/2022 Last updated : 03/21/2022
You should have an idea of the [project schema](design-schema.md) you will use f
## Azure resources
-Before you start using custom NER, you will need a Azure Language resource. We recommend the steps in the [quickstart](../quickstart.md) for creating one in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom NER.
+Before you start using custom NER, you will need an Azure Language resource. We recommend the steps in the [quickstart](../quickstart.md) for creating one in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom NER.
# [Azure portal](#tab/portal)
Before you start using custom NER, you will need a Azure Language resource. We r
### Create a new resource from Language Studio
-If it's your first time logging in, you'll see a window appear in [Language Studio](https://aka.ms/languageStudio) that will let you choose a language resource or create a new one. You can also create a resource by clicking the settings icon in the top-right corner, selecting **Resources**, then clicking **Create a new resource**.
+If it's your first time logging in, you'll see a window in [Language Studio](https://aka.ms/languageStudio) that will let you choose a language resource or create a new one. You can also create a resource by clicking the settings icon in the top-right corner, selecting **Resources**, then clicking **Create a new resource**.
> [!IMPORTANT]
> * To use Custom NER, you'll need a Language resource in **West US 2** or **West Europe** with the Standard (**S**) pricing tier.
To set proper roles on your storage account:
For information on authorizing access to your Azure blob storage account and data, see [Authorize access to data in Azure storage](../../../../storage/common/authorize-data-access.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+### Enable CORS for your storage account
+
+Make sure to allow (**GET, PUT, DELETE**) methods when enabling Cross-Origin Resource Sharing (CORS). Then, add an asterisk (`*`) to the fields and add the recommended value of 500 for the maximum age.
++
## Prepare training data

* As a prerequisite for creating a custom NER project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training files from Azure directly, or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data in less time.
Review the data you entered and select **Create Project**.
## Next steps
-After your project is created, you can start [tagging your data](tag-data.md), which will inform your entity extraction model how to interpret text, and is used for training and evaluation.
+After your project is created, you can start [tagging your data](tag-data.md), which will inform your entity extraction model how to interpret text, and is used for training and evaluation.
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
Azure Communication Services supports various communication formats:
You can connect custom client apps, custom services, and the public switched telephone network (PSTN) to your communications experience. You can acquire [phone numbers](./concepts/telephony/plan-solution.md) directly through Azure Communication Services REST APIs, SDKs, or the Azure portal; and use these numbers for SMS or calling applications. Azure Communication Services [direct routing](./concepts/telephony/plan-solution.md) allows you to use SIP and session border controllers to connect your own PSTN carriers and bring your own phone numbers.
-In addition to REST APIs, [Azure Communication Services client libraries](./concepts/sdk-options.md) are available for various platforms and languages, including Web browsers (JavaScript), iOS (Swift), Java (Android), Windows (.NET). A [UI library](https://aka.ms/acsstorybook) can accelerate development for Web, iOS, and Android apps. Azure Communication Services is identity agnostic and you control how end users are identified and authenticated.
+In addition to REST APIs, [Azure Communication Services client libraries](./concepts/sdk-options.md) are available for various platforms and languages, including Web browsers (JavaScript), iOS (Swift), Android (Java), Windows (.NET). A [UI library](https://aka.ms/acsstorybook) can accelerate development for Web, iOS, and Android apps. Azure Communication Services is identity agnostic and you control how end users are identified and authenticated.
Scenarios for Azure Communication Services include:
cosmos-db Sql Api Sdk Dotnet Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-dotnet-standard.md
ms.devlang: csharp Previously updated : 04/06/2021 Last updated : 03/22/2022
Release history is maintained in the Azure Cosmos DB .NET SDK source repo. For a
## <a name="recommended-version"></a> Recommended version
-Different sub versions of .NET SDKs are available under the 3.x.x version. **The minimum recommended version is 3.20.1**.
+Different sub versions of .NET SDKs are available under the 3.x.x version. **The minimum recommended version is 3.25.0**.
## <a name="known-issues"></a> Known issues
cosmos-db Unique Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/unique-keys.md
You can define unique keys only when you create an Azure Cosmos container. A uni
* To set a unique key for an existing container, create a new container with the unique key constraint. Use the appropriate data migration tool to move the data from the existing container to the new container. For SQL containers, use the [Data Migration tool](import-data.md) to move data. For MongoDB containers, use [mongoimport.exe or mongorestore.exe](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) to move data.
-* A unique key policy can have a maximum of 16 path values. For example, the values can be `/firstName`, `/lastName`, and `/address/zipCode`. Each unique key policy can have a maximum of 10 unique key constraints or combinations. The combined paths for each unique index constraint must not exceed 60 bytes. In the previous example, first name, last name, and email address together are one constraint. This constraint uses 3 out of the 16 possible paths.
+* A unique key policy can have a maximum of 16 path values. For example, the values can be `/firstName`, `/lastName`, and `/address/zipCode`. Each unique key policy can have a maximum of 10 unique key constraints or combinations. In the previous example, first name, last name, and email address together are one constraint. This constraint uses 3 out of the 16 possible paths.
* When a container has a unique key policy, [Request Unit (RU)](request-units.md) charges to create, update, and delete an item are slightly higher.
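To illustrate the constraint from the example above, here's a minimal sketch using the `azure-cosmos` Python package; the endpoint, key, database, container, and partition key paths are placeholders.

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.get_database_client("<database-name>")

# The unique key policy must be set when the container is created; it can't be
# added to an existing container.
container = database.create_container(
    id="<container-name>",
    partition_key=PartitionKey(path="/partitionKey"),
    unique_key_policy={
        "uniqueKeys": [
            # One combined constraint using 3 of the 16 possible paths.
            {"paths": ["/firstName", "/lastName", "/address/zipCode"]}
        ]
    },
)
```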
cost-management-billing Export Cost Data Storage Account Sas Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/export-cost-data-storage-account-sas-key.md
Title: Export cost data with an Azure Storage account SAS key
description: This article helps partners create a SAS key and configure Cost Management exports. Previously updated : 03/08/2021 Last updated : 03/22/2022
cost-management-billing Migrate Cost Management Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/migrate-cost-management-api.md
Title: Migrate EA to Microsoft Customer Agreement APIs - Azure
description: This article helps you understand the consequences of migrating a Microsoft Enterprise Agreement (EA) to a Microsoft Customer Agreement. Previously updated : 10/07/2021 Last updated : 03/22/2022
The following items help you transition to MCA APIs.
- Determine which APIs you use and see which ones are replaced in the following section.
- Familiarize yourself with [Azure Resource Manager REST APIs](/rest/api/azure).
- If you're not already using Azure Resource Manager APIs, [register your client app with Azure AD](/rest/api/azure/#register-your-client-application-with-azure-ad).
+- Grant the application that was created during Azure AD app registration read access to the billing account using Access control (IAM).
- Update any programming code to [use Azure AD authentication](/rest/api/azure/#create-the-request).
- Update any programming code to replace EA API calls with MCA API calls.
- Update error handling to use new error codes.
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
Title: Quickstart - Explore Azure costs with cost analysis
description: This quickstart helps you use cost analysis to explore and analyze your Azure organizational costs. Previously updated : 02/17/2021 Last updated : 03/22/2022
cost-management-billing Save Share Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/save-share-views.md
description: This article explains how to save and share a customized view with others. Previously updated : 01/18/2021 Last updated : 03/22/2022
You can also pin the current view to an Azure portal dashboard. This only includ
:::image type="content" source="./media/save-share-views/save-box.png" alt-text="Screen shot showing Save box where you enter a name to save." lightbox="./media/save-share-views/save-box.png" :::
1. After you save a view, it's available to select from the **View** menu.
:::image type="content" source="./media/save-share-views/view-list.png" alt-text="Screen shot showing the View list." lightbox="./media/save-share-views/view-list.png" :::
+
+You can save up to 100 private views across all scopes for yourself and up to 100 shared views per scope that anyone with Cost Management Reader or greater access can use.
### To share a view
Keep in mind that if you choose to include a link to data, anyone who receives t
## Next steps

- For more information about creating dashboards, see [Create a dashboard in the Azure portal](../../azure-portal/azure-portal-dashboards.md).
-- To learn more about Cost Management, see [Cost Management + Billing documentation](../index.yml).
+- To learn more about Cost Management, see [Cost Management + Billing documentation](../index.yml).
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Title: Tutorial - Create and manage Azure budgets
description: This tutorial helps you plan and account for the costs of Azure services that you consume. Previously updated : 06/17/2021 Last updated : 03/22/2022
cost-management-billing Avoid Charges Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/avoid-charges-free-account.md
tags: billing
Previously updated : 03/30/2021 Last updated : 03/22/2022
cost-management-billing Billing Troubleshoot Azure Payment Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-troubleshoot-azure-payment-issues.md
tags: billing
Previously updated : 05/13/2021 Last updated : 03/22/2022
cost-management-billing Change Azure Account Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-azure-account-profile.md
tags: billing
Previously updated : 04/08/2021 Last updated : 03/22/2022
cost-management-billing Ea Portal Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-troubleshoot.md
Title: Troubleshoot Azure EA portal access
description: This article describes some common issues that can occur with an Azure Enterprise Agreement (EA) in the Azure EA portal. Previously updated : 07/26/2021 Last updated : 03/22/2022
cost-management-billing Manage Billing Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-access.md
tags: billing
Previously updated : 06/27/2021 Last updated : 03/22/2022
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 09/01/2021 Last updated : 03/22/2022
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
Previously updated : 06/09/2021 Last updated : 03/22/2022
cost-management-billing Programmatically Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription.md
Previously updated : 03/11/2021 Last updated : 03/22/2022
cost-management-billing Subscription Disabled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-disabled.md
tags: billing
Previously updated : 07/15/2021 Last updated : 03/22/2022
cost-management-billing Switch Azure Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/switch-azure-offer.md
tags: billing,top-support-issue
Previously updated : 08/27/2021 Last updated : 03/22/2022
cost-management-billing Transfer Subscriptions Subscribers Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md
Previously updated : 04/15/2021 Last updated : 03/22/2022
cost-management-billing Troubleshoot Cant Find Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-cant-find-invoice.md
tags: billing
Previously updated : 06/25/2021 Last updated : 03/22/2022
cost-management-billing Troubleshoot Sign In Issue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-sign-in-issue.md
tags: billing
Previously updated : 07/16/2021 Last updated : 03/22/2022
cost-management-billing Upgrade Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/upgrade-azure-subscription.md
tags: billing
Previously updated : 07/15/2021 Last updated : 03/22/2022
cost-management-billing Withholding Tax Credit India https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/withholding-tax-credit-india.md
tags: billing
Previously updated : 05/06/2021 Last updated : 03/22/2022
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
tags: billing
Previously updated : 05/05/2021 Last updated : 03/22/2022
cost-management-billing Microsoft Customer Agreement Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md
tags: billing
Previously updated : 06/14/2021 Last updated : 03/22/2022
cost-management-billing Troubleshoot Subscription Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/troubleshoot-subscription-access.md
tags: billing
Previously updated : 04/07/2021 Last updated : 03/22/2022
cost-management-billing Find Reservation Purchaser From Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/find-reservation-purchaser-from-logs.md
Previously updated : 03/13/2021 Last updated : 03/22/2022
cost-management-billing Prepay Hana Large Instances Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-hana-large-instances-reserved-capacity.md
Previously updated : 03/19/2021 Last updated : 03/22/2022
cost-management-billing Reservation Discount App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-app-service.md
Previously updated : 05/13/2021 Last updated : 03/22/2022
cost-management-billing Reserved Instance Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reserved-instance-purchase-recommendations.md
Previously updated : 01/27/2021 Last updated : 03/22/2022

# Reservation recommendations
cost-management-billing Synapse Analytics Pre Purchase Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/synapse-analytics-pre-purchase-plan.md
Any Azure Synapse Analytics use deducts from the prepurchased SCUs automatically
## Determine the right size to buy
-A synapse prepurchase applies to all Synapse workloads and tiers. You can think of the Pre-Purchase Plan as a pool of prepaid Synapse commit units. Usage is deducted from the pool, regardless of the workload or tier. Other charges such as compute, storage, and networking are charged separately.
+A Synapse prepurchase applies to all Synapse workloads and tiers. You can think of the Pre-Purchase Plan as a pool of prepaid Synapse commit units. Usage is deducted from the pool, regardless of the workload or tier. Integrated services such as VMs for SHIR, Azure Storage accounts, and networking components are charged separately.
The Synapse prepurchase discount applies to usage from the following products:
To learn more about Azure Reservations, see the following articles:
- [Manage Azure Reservations](manage-reserved-vm-instance.md)
- [Understand Azure Reservations discount](understand-reservation-charges.md)
- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
-- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
+- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
cost-management-billing Troubleshoot No Eligible Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/troubleshoot-no-eligible-subscriptions.md
Previously updated : 06/27/2021 Last updated : 03/22/2022

# Troubleshoot no eligible subscriptions
cost-management-billing Troubleshoot Reservation Recommendation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/troubleshoot-reservation-recommendation.md
Previously updated : 09/15/2021 Last updated : 03/22/2022

# Troubleshoot Azure reservation recommendations
cost-management-billing Understand Reserved Instance Usage Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reserved-instance-usage-ea.md
tags: billing
Previously updated : 04/20/2021 Last updated : 03/22/2022
cost-management-billing Understand Storage Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-storage-charges.md
Previously updated : 03/08/2021 Last updated : 03/22/2022
cost-management-billing Mca Understand Your Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-understand-your-invoice.md
tags: billing
Previously updated : 04/08/2021 Last updated : 03/22/2022
cost-management-billing Review Individual Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-individual-bill.md
tags: billing
Previously updated : 05/17/2021 Last updated : 03/22/2022
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
Previously updated : 09/09/2021 Last updated : 03/22/2022

# Data Flow activity in Azure Data Factory and Azure Synapse Analytics
A minimum compute type of General Purpose with an 8+8 (16 total v-cores) configu
If you're using Azure Synapse Analytics as a sink or source, you must choose a staging location for your PolyBase batch load. PolyBase allows for batch loading in bulk instead of loading the data row-by-row. PolyBase drastically reduces the load time into Azure Synapse Analytics.
+## Checkpoint key
+
+When using the change capture option for data flow sources, ADF will maintain and manage the checkpoint for you automatically. The default checkpoint key is a hash of the data flow name and the pipeline name. If you are using a dynamic pattern for your source tables or folders, you may wish to override this hash and set your own checkpoint key value here.
+
## Logging level

If you do not require every pipeline execution of your data flow activities to fully log all verbose telemetry logs, you can optionally set your logging level to "Basic" or "None". When executing your data flows in "Verbose" mode (default), you are requesting the service to fully log activity at each individual partition level during your data transformation. This can be an expensive operation, so only enabling verbose when troubleshooting can improve your overall data flow and pipeline performance. "Basic" mode will only log transformation durations while "None" will only provide a summary of durations.
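For reference, both settings above surface in the Execute Data Flow activity's pipeline JSON, sketched here as a Python dict. The property names `checkpointKey` and `traceLevel`, and the trace level values, are assumptions to verify against your own pipeline's JSON.

```python
# A sketch of an Execute Data Flow activity payload (assumed property names).
execute_data_flow_activity = {
    "name": "RunChangeCaptureFlow",
    "type": "ExecuteDataFlow",
    "typeProperties": {
        "dataflow": {"referenceName": "MyDataFlow", "type": "DataFlowReference"},
        # Override the default checkpoint key (a hash of data flow + pipeline name).
        "checkpointKey": "my-stable-checkpoint-key",
        # Assumed values: "Fine" (verbose, default), "Coarse" (basic), "None".
        "traceLevel": "Coarse",
    },
}
```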
data-factory Data Flow Exists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-exists.md
Previously updated : 09/09/2021 Last updated : 03/22/2022 # Exists transformation in mapping data flow
To create a free-form expression that contains operators other than "and" and "e
:::image type="content" source="media/data-flow/exists1.png" alt-text="Exists custom settings":::
+If you are building dynamic patterns in your data flows by using "late binding" of columns via schema drift, you can use the ```byName()``` expression function to use the exists transformation without hardcoding (i.e. early binding) the column names. Example: ```toString(byName('ProductNumber','source1')) == toString(byName('ProductNumber','source2'))```
+
## Broadcast optimization

:::image type="content" source="media/data-flow/broadcast.png" alt-text="Broadcast Join":::
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expression-functions.md
In Data Factory and Synapse pipelines, use the expression language of the mappin
| [hasColumn](data-flow-expressions-usage.md#hasColumn) | Checks for a column value by name in the stream. You can pass an optional stream name as the second argument. Column names known at design time should be addressed just by their name. Computed inputs aren't supported but you can use parameter substitutions. |
| [hasError](data-flow-expressions-usage.md#hasError) | Checks if the assert with provided ID is marked as error. |
| [iif](data-flow-expressions-usage.md#iif) | Based on a condition applies one value or the other. If other is unspecified, it's considered NULL. Both the values must be compatible (numeric, string...). |
-| [iifNull](data-flow-expressions-usage.md#iifNull) | Checks if the first parameter is null. If not null, the first parameter is returned. If null, the second parameter is returned. If three parameters are specified, the behavior is the same as iif(isNull(value1), value2, value3) and the third parameter is returned if the first value isn't null. |
+| [iifNull](data-flow-expressions-usage.md#iifNull) | Given two or more inputs, returns the first not null item. This function is equivalent to coalesce. |
| [initCap](data-flow-expressions-usage.md#initCap) | Converts the first letter of every word to uppercase. Words are identified as separated by whitespace. |
| [instr](data-flow-expressions-usage.md#instr) | Finds the position (1 based) of the substring within a string. 0 is returned if not found. |
| [isDelete](data-flow-expressions-usage.md#isDelete) | Checks if the row is marked for delete. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
$body = @"
}
}
"@
-$response = Invoke-RestMethod -Method PUT -Uri $request -Header $authHeader -Body $body
+$response = Invoke-AzRestMethod -Path ${path} -Method PUT -Payload $body
$response.content
```
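For reference, `Invoke-AzRestMethod -Path` expects the full Azure Resource Manager path of the resource, including the API version query string. An illustrative value for `${path}` here would be `/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>?api-version=2018-06-01`.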
data-lake-analytics Data Lake Analytics Manage Use Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-python-sdk.md
This document was written using `pip version 9.0.1`.
Use the following `pip` commands to install the modules from the commandline:

```console
+pip install azure-identity
pip install azure-mgmt-resource
pip install azure-datalake-store
pip install azure-mgmt-datalake-store
Paste the following code into the script:
# Use this only for Azure AD end-user authentication
#from azure.common.credentials import UserPassCredentials
+# Required for Azure Identity
+from azure.identity import DefaultAzureCredential
+
# Required for Azure Resource Manager
from azure.mgmt.resource.resources import ResourceManagementClient
from azure.mgmt.resource.resources.models import ResourceGroup
from azure.mgmt.datalake.analytics.job.models import JobInformation, JobState, U
# Required for Azure Data Lake Analytics catalog management
from azure.mgmt.datalake.analytics.catalog import DataLakeAnalyticsCatalogManagementClient
+# Required for Azure Data Lake Analytics Model
+from azure.mgmt.datalake.analytics.account.models import CreateOrUpdateComputePolicyParameters
+
# Use these as needed for your application
import logging
import getpass
credentials = UserPassCredentials(user, password)
### Noninteractive authentication with SPI and a secret

```python
-credentials = ServicePrincipalCredentials(
- client_id='FILL-IN-HERE', secret='FILL-IN-HERE', tenant='FILL-IN-HERE')
+# Acquire a credential object for the app identity. When running in the cloud,
+# DefaultAzureCredential uses the app's managed identity (MSI) or user-assigned service principal.
+# When run locally, DefaultAzureCredential relies on environment variables named
+# AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID.
+
+credentials = DefaultAzureCredential()
```

### Noninteractive authentication with API and a certificate
armGroupResult = resourceClient.resource_groups.create_or_update(
First create a store account.

```python
-adlsAcctResult = adlsAcctClient.account.create(
+adlsAcctResult = adlsAcctClient.account.begin_create(
rg, adls, DataLakeStoreAccount(
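    # Note: begin_create starts a long-running operation; calling
    # adlsAcctResult.result() blocks until provisioning completes (assumed
    # standard Azure SDK poller pattern).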
The DataLakeAnalyticsAccountManagementClient object provides methods for managin
The following code retrieves a list of compute policies for a Data Lake Analytics account.

```python
-policies = adlaAccountClient.computePolicies.listByAccount(rg, adla)
+policies = adlaAcctClient.compute_policies.list_by_account(rg, adla)
for p in policies:
- print('Name: ' + p.name + 'Type: ' + p.objectType + 'Max AUs / job: ' +
- p.maxDegreeOfParallelismPerJob + 'Min priority / job: ' + p.minPriorityPerJob)
+    print('Name: ' + p.name + ', Type: ' + p.object_type + ', Max AUs / job: ' +
+        str(p.max_degree_of_parallelism_per_job) + ', Min priority / job: ' + str(p.min_priority_per_job))
```

### Create a new compute policy
The following code creates a new compute policy for a Data Lake Analytics accoun
```python
userAadObjectId = "3b097601-4912-4d41-b9d2-78672fc2acde"
-newPolicyParams = ComputePolicyCreateOrUpdateParameters(
+newPolicyParams = CreateOrUpdateComputePolicyParameters(
userAadObjectId, "User", 50, 250)
-adlaAccountClient.computePolicies.createOrUpdate(
+adlaAcctClient.compute_policies.create_or_update(
    rg, adla, "GaryMcDaniel", newPolicyParams)
```
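As a quick check, you can list the policies again to confirm the new one exists (a sketch reusing the client and variables above):

```python
# Confirm the new compute policy is present.
for p in adlaAcctClient.compute_policies.list_by_account(rg, adla):
    print(p.name)
```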
databox Data Box Disk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-overview.md
Previously updated : 06/18/2019 Last updated : 03/21/2022

# Customer intent: As an IT admin, I need to understand what Data Box Disk is and how it works so I can use it to import on-premises data into Azure.
A typical flow includes the following steps:
Throughout this process, you are notified through email on all status changes. For more information about the detailed flow, go to [Deploy Data Box Disks in Azure portal](data-box-disk-quickstart-portal.md).
+> [!NOTE]
+> Only import is supported for Data Box Disk. Export functionality is not available. If you want to export data from Azure, you can use [Azure Data Box](data-box-overview.md).
+
## Benefits

Data Box Disk is designed to move large amounts of data to Azure with no impact to the network. The solution has the following benefits:
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
No enforcement options are currently available. Adaptive application controls ar
|Supported machines:|:::image type="icon" source="./media/icons/yes-icon.png"::: Azure and non-Azure machines running Windows and Linux<br>:::image type="icon" source="./media/icons/yes-icon.png"::: [Azure Arc](../azure-arc/index.yml) machines|
|Required roles and permissions:|**Security Reader** and **Reader** roles can both view groups and the lists of known-safe applications<br>**Contributor** and **Security Admin** roles can both edit groups and the lists of known-safe applications|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
-|||
+
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
This page explains how to configure and manage adaptive network hardening in Def
|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
|Required roles and permissions:|Write permissions on the machine's NSGs|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
-|||
+
## What is adaptive network hardening?

Applying [network security groups (NSG)](../virtual-network/network-security-groups-overview.md) to filter traffic to and from resources improves your network security posture. However, there can still be some cases in which the actual traffic flowing through the NSG is a subset of the NSG rules defined. In these cases, further improving the security posture can be achieved by hardening the NSG rules, based on the actual traffic patterns.
defender-for-cloud Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md
The severity is based on how confident Defender for Cloud is in the finding or t
| **Medium** | This is probably a suspicious activity that might indicate a compromised resource. Defender for Cloud's confidence in the analytic or finding is medium and the confidence of the malicious intent is medium to high. These would usually be machine learning or anomaly-based detections. For example, a sign-in attempt from an anomalous location. |
| **Low** | This might be a benign positive or a blocked attack. Defender for Cloud isn't confident enough that the intent is malicious and the activity might be innocent. For example, log clear is an action that might happen when an attacker tries to hide their tracks, but in many cases is a routine operation performed by admins. Defender for Cloud doesn't usually tell you when attacks were blocked, unless it's an interesting case that we suggest you look into. |
| **Informational** | An incident is typically made up of a number of alerts, some of which might appear on their own to be only informational, but in the context of the other alerts might be worthy of a closer look. |
-| | |
+
## Export alerts
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
At the bottom of this page, there's a table describing the Microsoft Defender fo
| **Unusual user SSH key reset in your virtual machine**<br>(VM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium |
| **VBScript HTTP object allocation detected** | Creation of a VBScript file using Command Prompt has been detected. The following script contains HTTP object allocation command. This action can be used to download malicious files. | - | High |
| **Windows registry persistence method detected**<br>(VM_RegistryPersistencyKey) | Analysis of host data has detected an attempt to persist an executable in the Windows registry. Malware often uses such a technique to survive a boot. | Persistence | Low |
-| | | | |
+
## <a name="alerts-linux"></a>Alerts for Linux machines
At the bottom of this page, there's a table describing the Microsoft Defender fo
|**Unusual execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualExecution) | Unusual execution of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
|**Unusual user password reset in your virtual machine**<br>(VM_VMAccessUnusualPasswordReset) | An unusual user password reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the credentials of a local user in your virtual machine and compromise it. | Credential Access | Medium |
|**Unusual user SSH key reset in your virtual machine**<br>(VM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium |
-|||||
+
## <a name="alerts-azureappserv"></a>Alerts for Azure App Service
At the bottom of this page, there's a table describing the Microsoft Defender fo
| **Vulnerability scanner detected**<br>(AppServices_WpScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting WordPress applications.<br>If your App Service resource isn't hosting a WordPress site, it isn't vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
| **Web fingerprinting detected**<br>(AppServices_WebFingerprinting) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with a tool called Blind Elephant. The tool fingerprints web servers and tries to detect the installed applications and version.<br>Attackers often use this tool for probing the web application to find vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
| **Website is tagged as malicious in threat intelligence feed**<br>(AppServices_SmartScreen) | Your website as described below is marked as a malicious site by Windows SmartScreen. If you think this is a false positive, contact Windows SmartScreen via report feedback link provided.<br>(Applies to: App Service on Windows and App Service on Linux) | Collection | Medium |
-| | | | |
+
## <a name="alerts-k8scluster"></a>Alerts for containers - Kubernetes clusters
Microsoft Defender for Containers provides security alerts on the cluster level
| **Suspicious password access (Preview)**<br>(K8S.NODE_SuspectPasswordFileAccess) | Analysis of processes running within a container detected suspicious access to encrypted user passwords. | Persistence | Informational |
| **Suspicious use of DNS over HTTPS (Preview)**<br>(K8S.NODE_SuspiciousDNSOverHttps) | Analysis of processes running within a container indicates the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
| **A possible connection to malicious location has been detected. (Preview)**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) | Analysis of processes running within a container detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
-| | | | |
+
<sup><a name="footnote1"></a>1</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, is not supported for GKE clusters.
Microsoft Defender for Containers provides security alerts on the cluster level
| **Suspected brute force attack** | A potential brute force attack has been detected on your SQL server '{name}'. | PreAttack | High |
| **Suspected successful brute force attack**<br>(SQL.DB_BruteForce<br>SQL.VM_BruteForce<br>SQL.DW_BruteForce<br>SQL.MI_BruteForce) | A successful login occurred after an apparent brute force attack on your resource | PreAttack | High |
| **Unusual export location** | Someone has extracted a massive amount of data from your SQL Server '{name}' to an unusual location. | Exfiltration | High |
-| | | | |
+
| **Logon from an unusual cloud provider**<br>(SQL.PostgreSQL_CloudProviderAnomaly<br>SQL.MariaDB_CloudProviderAnomaly<br>SQL.MySQL_CloudProviderAnomaly) | Someone logged on to your resource from a cloud provider not seen in the last 60 days. It's quick and easy for threat actors to obtain disposable compute power for use in their campaigns. If this is expected behavior caused by the recent adoption of a new cloud provider, Defender for Cloud will learn over time and attempt to prevent future false positives. | Exploitation | Medium |
| **Log on from an unusual location**<br>(SQL.MariaDB_GeoAnomaly<br>SQL.PostgreSQL_GeoAnomaly<br>SQL.MySQL_GeoAnomaly) | Someone logged on to your resource from an unusual Azure Data Center. | Exploitation | Medium |
| **Login from a suspicious IP**<br>(SQL.PostgreSQL_SuspiciousIpAnomaly<br>SQL.MariaDB_SuspiciousIpAnomaly<br>SQL.MySQL_SuspiciousIpAnomaly) | Your resource has been accessed successfully from an IP address that Microsoft Threat Intelligence has associated with suspicious activity. | PreAttack | Medium |
-| | | | |
+
| **Usage of NetSPI techniques to maintain persistence in your Azure environment**<br>(ARM_NetSPI.MaintainPersistence) | Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
| **Usage of PowerZure exploitation toolkit to run arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_PowerZure.RunCodeOnBehalf) | PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
| **Usage of PowerZure function to maintain persistence in your Azure environment**<br>(ARM_PowerZure.MaintainPersistence) | PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| | | | |
+ ## <a name="alerts-dns"></a>Alerts for DNS
| **Possible data download via DNS tunnel**<br>(AzureDNS_DataInfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
| **Possible data exfiltration via DNS tunnel**<br>(AzureDNS_DataExfiltration) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
| **Possible data transfer via DNS tunnel**<br>(AzureDNS_DataObfuscation) | Analysis of DNS transactions from %{CompromisedEntity} detected a possible DNS tunnel. Such activity, while possibly legitimate user behavior, is frequently performed by attackers to evade network monitoring and filtering. Typical related attacker activity is likely to include the download and execution of malicious software or remote administration tools. | Exfiltration | - |
-| | | | |
+ ## <a name="alerts-azurestorage"></a>Alerts for Azure Storage
| **Unusual deletion in a storage account**<br>(Storage.Blob_DeletionAnomaly<br>Storage.Files_DeletionAnomaly) | Indicates that one or more unexpected delete operations have occurred in a storage account, compared to recent activity on this account. A potential cause is that an attacker has deleted data from your storage account.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | Medium |
| **Unusual upload of .cspkg to a storage account**<br>(Storage.Blob_CspkgUploadAnomaly) | Indicates that an Azure Cloud Services package (.cspkg file) has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has been preparing to deploy malicious code from your storage account to an Azure cloud service.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | Lateral Movement, Execution | Medium |
| **Unusual upload of .exe to a storage account**<br>(Storage.Blob_ExeUploadAnomaly<br>Storage.Files_ExeUploadAnomaly) | Indicates that an .exe file has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has uploaded a malicious executable file to your storage account, or that a legitimate user has uploaded an executable file.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Lateral Movement, Execution | Medium |
-| | | | |
+ ## <a name="alerts-azurecosmos"></a>Alerts for Azure Cosmos DB (Preview)
| **PREVIEW - Extraction of Azure Cosmos DB accounts keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High |
| **PREVIEW - SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
| **PREVIEW - SQL injection: fuzzing attempt**<br>(CosmosDB_SqlInjection.FailedFuzzingAttempt) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries (see the sketch below). | PreAttack | Low |
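The parameterized queries recommended in the last row look like this in practice. A minimal sketch using the azure-cosmos Python SDK; the account, key, database, container, and field names are placeholders, not values from this article:

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint and key -- substitute your own account details.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<database>").get_container_client("<container>")

user_input = "alice@contoso.com"  # untrusted value arriving from a request

# The untrusted value is bound as a parameter, never concatenated into the
# query text, so injected SQL fragments are treated as data rather than syntax.
items = container.query_items(
    query="SELECT * FROM c WHERE c.owner = @owner",
    parameters=[{"name": "@owner", "value": user_input}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"])
```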
-| | | | |
+ ## <a name="alerts-azurenetlayer"></a>Alerts for Azure network layer
| **Suspicious outgoing SSH network activity to multiple destinations**<br>(SSH_Outgoing_BF_OneToMany) | Network traffic analysis detected anomalous outgoing SSH communication to multiple destinations originating from %{Compromised Host} (%{Attacker IP}), a resource in your deployment. When the compromised resource is a load balancer or an application gateway, the suspected outgoing traffic originated from one or more of the resources in the backend pool (of the load balancer or application gateway). Specifically, sampled network data shows your resource connecting to %{Number of Attacked IPs} unique IPs, which is considered abnormal for this environment. This activity may indicate that your resource was compromised and is now used to brute force external SSH end points. Note that this type of activity could possibly cause your IP to be flagged as malicious by external entities. | Discovery | Medium |
| **Suspicious outgoing SSH network activity**<br>(SSH_Outgoing_BF_OneToOne) | Network traffic analysis detected anomalous outgoing SSH communication to %{Victim IP} originating from %{Compromised Host} (%{Attacker IP}), a resource in your deployment. When the compromised resource is a load balancer or an application gateway, the suspected outgoing traffic originated from one or more of the resources in the backend pool (of the load balancer or application gateway). Specifically, sampled network data shows %{Number of Connections} outgoing connections from your resource, which is considered abnormal for this environment. This activity may indicate that your resource was compromised and is now used to brute force external SSH end points. Note that this type of activity could possibly cause your IP to be flagged as malicious by external entities. | Lateral Movement | Medium |
| **Traffic detected from IP addresses recommended for blocking** | Microsoft Defender for Cloud detected inbound traffic from IP addresses that are recommended to be blocked. This typically occurs when this IP address doesn't communicate regularly with this resource. Alternatively, the IP address has been flagged as malicious by Defender for Cloud's threat intelligence sources. | Probing | Low |
-| | | | |
+ ## <a name="alerts-azurekv"></a>Alerts for Azure Key Vault
| **Unusual user accessed a key vault**<br>(KV_UserAnomaly) | A key vault has been accessed by a user that does not normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigation. | Credential Access | Medium |
| **Unusual user-application pair accessed a key vault**<br>(KV_UserAppAnomaly) | A key vault has been accessed by a user-service principal pair that does not normally access it. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to the key vault in an attempt to access the secrets contained within it. We recommend further investigation. | Credential Access | Medium |
| **User accessed high volume of key vaults**<br>(KV_AccountVolumeAnomaly) | A user or service principal has accessed an anomalously high volume of key vaults. This anomalous access pattern may be legitimate activity, but it could be an indication that a threat actor has gained access to multiple key vaults in an attempt to access the secrets contained within them. We recommend further investigation. | Credential Access | Medium |
-| | | | |
+ ## <a name="alerts-azureddos"></a>Alerts for Azure DDoS Protection
|--|-|:--:|-|
| **DDoS Attack detected for Public IP** | DDoS Attack detected for Public IP (IP address) and being mitigated. | Probing | High |
| **DDoS Attack mitigated for Public IP** | DDoS Attack mitigated for Public IP (IP address). | Probing | Low |
-| | | | |
+ ## <a name="alerts-fusion"></a>Security incident alerts
| **Security incident detected on multiple resources** | The incident that started on {Start Time (UTC)} and was recently detected on {Detected Time (UTC)} indicates that similar attack methods were performed on your cloud resources {Host} | - | Medium |
| **Security incident detected from same source** | The incident that started on {Start Time (UTC)} and was recently detected on {Detected Time (UTC)} indicates that an attacker has {Action taken} your resource {Host} | - | High |
| **Security incident detected on multiple machines** | The incident that started on {Start Time (UTC)} and was recently detected on {Detected Time (UTC)} indicates that an attacker has {Action taken} your resources {Host} | - | Medium |
-| | | | |
+ ## MITRE ATT&CK tactics <a name="intentions"></a>
Defender for Cloud's supported kill chain intents are based on [version 7 of the
| **Exfiltration** | Exfiltration refers to techniques and attributes that result or aid in the adversary removing files and information from a target network. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. |
| **Command and Control** | The command and control tactic represents how adversaries communicate with systems under their control within a target network. |
| **Impact** | Impact events primarily try to directly reduce the availability or integrity of a system, service, or network, including manipulation of data to impact a business or operational process. This would often refer to techniques such as ransomware, defacement, data manipulation, and others. |
-| | |
+ > [!NOTE] > For alerts that are in preview: [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
defender-for-cloud Alerts Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-schemas.md
You can view the security alert events in Activity Log by searching for the Act
|**subscriptionId**|The subscription ID of the compromised resource|
|**properties**|A JSON bag of additional properties pertaining to the alert. These can change from one alert to another; however, the following fields will appear in all alerts:<br>- severity: The severity of the attack<br>- compromisedEntity: The name of the compromised resource<br>- remediationSteps: Array of remediation steps to be taken<br>- intent: The kill-chain intent of the alert. Possible intents are documented in the [Intentions table](alerts-reference.md#intentions)|
|**relatedEvents**|Constant - empty array|
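For example, a script that consumes these events might pull out just the fields guaranteed to be present. A minimal sketch; how the `event` dict is retrieved from the Activity Log is omitted, and only the field names documented in the table above are assumed:

```python
# Summarize one Activity Log security alert event using the documented
# 'properties' fields: severity, compromisedEntity, remediationSteps, intent.
def summarize_alert(event: dict) -> str:
    props = event.get("properties", {})
    remediation = "; ".join(props.get("remediationSteps", []))
    return (
        f"[{props.get('severity', 'Unknown')}] "
        f"{props.get('compromisedEntity', 'unknown entity')} "
        f"(intent: {props.get('intent', 'n/a')}) -> {remediation}"
    )
```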
-|||
+ ### [Workflow automation](#tab/schema-workflow-automation)
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
This page explains how you can use alerts suppression rules to suppress false po
|Pricing:|Free<br>(Most security alerts are only available with [enhanced security features](enable-enhanced-security.md))|
|Required roles and permissions:|**Security admin** and **Owner** can create/delete rules.<br>**Security reader** and **Reader** can view rules.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## What are suppression rules?
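Suppression rules can also be managed programmatically. A hedged sketch of a call against the `Microsoft.Security/alertsSuppressionRules` resource type, suppressing the WordPress-scanner alert discussed earlier; the subscription ID, rule name, token acquisition, and api-version are assumptions to verify against the current REST API reference:

```python
import requests

sub = "<subscription-id>"
rule = "SuppressWpScannerOnTestSite"  # illustrative rule name
url = (f"https://management.azure.com/subscriptions/{sub}/providers/"
       f"Microsoft.Security/alertsSuppressionRules/{rule}"
       "?api-version=2019-01-01-preview")
body = {
    "properties": {
        "alertType": "AppServices_WpScanner",  # alert type to suppress
        "state": "Enabled",
        "reason": "FalsePositive",
        "comment": "Test site, not running WordPress",
    }
}
resp = requests.put(url, headers={"Authorization": "Bearer <token>"}, json=body)
resp.raise_for_status()
```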
defender-for-cloud Apply Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md
Use the security recommendations described in this article to assess the machine
|Prerequisites:|Machines must (1) be members of a workgroup, (2) have the Guest Configuration extension, (3) have a system-assigned managed-identity, and (4) be running a supported OS:<br>• Windows Server 2012, 2012r2, 2016 or 2019<br>• Ubuntu 14.04, 16.04, 17.04, 18.04 or 20.04<br>• Debian 7, 8, 9, or 10<br>• CentOS 7 or 8<br>• Red Hat Enterprise Linux (RHEL) 7 or 8<br>• Oracle Linux 7 or 8<br>• SUSE Linux Enterprise Server 12|
|Required roles and permissions:|To install the Guest Configuration extension and its prerequisites, **write** permission is required on the relevant machines.<br>To **view** the recommendations and explore the OS baseline data, **read** permission is required at the subscription level.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## What are the hardening recommendations?
The list of resources in the **Not applicable** tab includes a **Reason** column
| **Guest Configuration extension is not installed on the machine** | The machine is missing the Guest Configuration extension, which is a prerequisite for assessing the compliance with the Azure security baseline. |
| **System managed identity is not configured on the machine** | A system-assigned, managed identity must be deployed on the machine. |
| **The recommendation is disabled in policy** | The policy definition that assesses the OS baseline is disabled on the scope that includes the relevant machine. |
-| | |
+ ## Next steps
In this document, you learned how to use Defender for Cloud's guest configuration recommendations to compare the hardening of your OS with the Azure security baseline.
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
The asset management possibilities for this tool are substantial and continue to
|Pricing:|Free*<br>* Some features of the inventory page, such as the [software inventory](#access-a-software-inventory) require paid solutions to be in-place|
|Required roles and permissions:|All users|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## What are the key features of asset inventory?
defender-for-cloud Configure Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-email-notifications.md
To avoid alert fatigue, Defender for Cloud limits the volume of outgoing mails.
|Pricing:|Email notifications are free; for security alerts, enable the enhanced security plans ([plan pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/)) |
|Required roles and permissions:|**Security Admin**<br>**Subscription Owner** |
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## Customize the security alerts email notifications via the portal<a name="email"></a>
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
This article describes how to configure continuous export to Log Analytics works
|Pricing:|Free|
|Required roles and permissions:|<ul><li>**Security admin** or **Owner** on the resource group</li><li>Write permissions for the target resource.</li><li>If you're using the Azure Policy 'DeployIfNotExist' policies described below, you'll also need permissions for assigning policies</li><li>To export data to Event Hub, you'll need Write permission on the Event Hub Policy.</li><li>To export to a Log Analytics workspace:<ul><li>if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read`</li><li>if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action`</li><li>Learn more about [Azure Monitor and Log Analytics workspace solutions](../azure-monitor/insights/solutions.md)</li></ul></li></ul>|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## What data types can be exported?
To deploy your continuous export configurations across your organization, use th
||||
|Continuous export to Event Hub|[Deploy export to Event Hub for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fcdfcce10-4578-4ecd-9703-530938e4abcb)|cdfcce10-4578-4ecd-9703-530938e4abcb|
|Continuous export to Log Analytics workspace|[Deploy export to Log Analytics workspace for Microsoft Defender for Cloud alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fffb6f416-7bd2-4488-8828-56585fef2be9)|ffb6f416-7bd2-4488-8828-56585fef2be9|
- ||||
+ > [!TIP] > You can also find these by searching Azure Policy:
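If you'd rather look a definition up by the ID in the table than search the portal, something like the following should work. A hedged sketch; token acquisition and the api-version are assumptions:

```python
import requests

# Built-in policy definitions live at the tenant scope under
# Microsoft.Authorization/policyDefinitions, addressable by their GUID.
definition_id = "cdfcce10-4578-4ecd-9703-530938e4abcb"  # export to Event Hub
url = ("https://management.azure.com/providers/Microsoft.Authorization/"
       f"policyDefinitions/{definition_id}?api-version=2021-06-01")
resp = requests.get(url, headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()
print(resp.json()["properties"]["displayName"])
```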
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Within Microsoft Defender for Cloud, you can access the built-in workbooks to tr
| Pricing: | Free |
| Required roles and permissions: | To save workbooks, you must have at least [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) permissions on the target resource group |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) |
-| | |
+ ## Workbooks gallery in Microsoft Defender for Cloud
The secure score over time workbook has five graphs for the subscriptions report
|**Recommendations with the most unhealthy resources**<br>This table helps you triage the recommendations that have had the most resources changed to unhealthy over the selected period.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-3.png" alt-text="Recommendations with the most unhealthy resources.":::|
|**Scores for specific security controls**<br>Defender for Cloud's security controls are logical groupings of recommendations. This chart shows you, at a glance, the weekly scores for all of your controls.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-4.png" alt-text="Scores for your security controls over the selected time period.":::|
|**Resources changes**<br>Recommendations with the most resources that have changed state (healthy, unhealthy, or not applicable) during the selected period are listed here. Select any recommendation from the list to open a new table listing the specific resources.|:::image type="content" source="media/custom-dashboards-azure-workbooks/secure-score-over-time-table-5.png" alt-text="Recommendations with the most resources that have changed health state.":::|
-|||
+ ### Use the 'System Updates' workbook
defender-for-cloud Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md
You can specify the workspace and region where data collected from your machines
| Japan | Japan |
| China | China |
| Australia | Australia |
-| | |
+ > [!NOTE] > **Microsoft Defender for Storage** stores artifacts regionally according to the location of the related Azure resource. Learn more in [Introduction to Microsoft Defender for Storage](defender-for-storage-introduction.md).
Customers can access Defender for Cloud related data from the following data str
| [Azure Monitor logs](../azure-monitor/data-platform.md) | All security alerts. |
| [Azure Resource Graph](../governance/resource-graph/overview.md) | Security alerts, security recommendations, vulnerability assessment results, secure score information, status of compliance checks, and more. |
| [Microsoft Defender for Cloud REST API](/rest/api/securitycenter/) | Security alerts, security recommendations, and more. |
-| | |
+ ## Next steps
defender-for-cloud Defender For App Service Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-app-service-introduction.md
To protect your Azure App Service plan with Microsoft Defender for App Service,
| Pricing: | Microsoft Defender for App Service is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)<br>Billing is according to total compute instances in all plans |
| Supported App Service plans: | [The supported App Service plans](https://azure.microsoft.com/pricing/details/app-service/plans/) are:<br>• Free plan<br>• Basic Service plan<br>• Standard Service plan<br>• Premium v2 Service Plan<br>• Premium v3 Service Plan<br>• App Service Environment v1<br>• App Service Environment v2<br>• App Service Environment v3|
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
-| | |
+ ## What are the benefits of Microsoft Defender for App Service?
defender-for-cloud Defender For Container Registries Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-cicd.md
You'll get traceability information such as the GitHub workflow and the GitHub
|Release state:| **This CI/CD integration is in preview.**<br>We recommend that you experiment with it on non-production workflows only.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## Prerequisites
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
To protect the Azure Resource Manager based registries in your subscription, ena
|Unsupported registries and images:|Windows images<br>'Private' registries (unless access is granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services))<br>Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images, or "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br>Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md)|
|Required roles and permissions:|**Security reader** and [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png" border="false"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png" border="false"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## What are the benefits of Microsoft Defender for container registries?
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
On this page, you'll learn how you can use Defender for Containers to improve, m
| Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
| Required roles and permissions: | • To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
| Clouds: | **Azure**:<br>:::image type="icon" source="./medi#defender-for-containers-feature-availability). |
-| | |
+ ## What are the benefits of Microsoft Defender for Containers?
defender-for-cloud Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-introduction.md
From within Azure DNS, Defender for DNS monitors the queries from these resource
|Release state:|General availability (GA)|
|Pricing:|**Microsoft Defender for DNS** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government|
-|||
+ ## What are the benefits of Microsoft Defender for DNS?
defender-for-cloud Defender For Key Vault Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-key-vault-introduction.md
Enable **Microsoft Defender for Key Vault** for Azure-native, advanced threat pr
|Release state:|General availability (GA)|
|Pricing:|**Microsoft Defender for Key Vault** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## What are the benefits of Microsoft Defender for Key Vault?
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Host-level threat detection for your Linux AKS nodes is available if you enable
|Pricing:|**Microsoft Defender for Kubernetes** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).|
|Required roles and permissions:|**Security admin** can dismiss alerts.<br>**Security reader** can view findings.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## What are the benefits of Microsoft Defender for Kubernetes?
defender-for-cloud Defender For Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-introduction.md
Microsoft Defender for Resource Manager automatically monitors the resource mana
|Release state:|General availability (GA)|
|Pricing:|**Microsoft Defender for Resource Manager** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet|
-|||
+ ## What are the benefits of Microsoft Defender for Resource Manager?
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
Microsoft Defender for SQL includes two Microsoft Defender plans that extend Mic
|Pricing:|The two plans that form **Microsoft Defender for SQL** are billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected SQL versions:|[SQL on Azure virtual machines](../azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md)<br>[SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>On-premises SQL servers on Windows machines without Azure Arc<br>Azure SQL [single databases](../azure-sql/database/single-database-overview.md) and [elastic pools](../azure-sql/database/elastic-pool-overview.md)<br>[Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md)<br>[Azure Synapse Analytics (formerly SQL DW) dedicated SQL pool](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet (**Partial**: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.)|
-|||
+ ## What does Microsoft Defender for SQL protect?
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
You'll see alerts when there are suspicious database activities, potential vulne
|Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected SQL versions:|SQL Server (versions currently [supported by Microsoft](/mem/configmgr/core/plan-design/configs/support-for-sql-server-versions))|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet|
-|||
+
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Defender for Storage doesn't access the Storage account data and has no impact o
|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected storage types:|[Blob Storage](https://azure.microsoft.com/services/storage/blobs/) (Standard/Premium StorageV2, Block Blobs) <br>[Azure Files](../storage/files/storage-files-introduction.md) (over REST API and SMB)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (Standard/Premium accounts with hierarchical namespaces enabled)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
-|||
+ ## What are the benefits of Microsoft Defender for Storage?
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
Alternatively, you might want to deploy your own privately licensed vulnerabilit
|Pricing:|Free|
|Required roles and permissions:|**Resource owner** can deploy the scanner<br>**Security reader** can view findings|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## Deploy a BYOL solution from the Azure portal
To run the script, you'll need the relevant information for the parameters below
|**licenseCode**|✔|Vendor provided license string.|
|**publicKey**|✔|Vendor provided public key.|
|**autoUpdate**|-|Enable (true) or disable (false) auto deploy for this VA solution. When enabled, every new VM on the subscription will automatically attempt to link to the solution.<br/>(Default: False)|
-||||
+ Syntax:
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
For a quick overview of threat and vulnerability management, watch this video:
|Prerequisites:|Enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)|
|Required roles and permissions:|[Owner](../role-based-access-control/built-in-roles.md#owner) (resource group level) can deploy the scanner<br>[Security Reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## Onboarding your machines to threat and vulnerability management
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Deploy the vulnerability assessment solution that best meets your needs and bud
|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
|Required roles and permissions:|[Owner](../role-based-access-control/built-in-roles.md#owner) (resource group level) can deploy the scanner<br>[Security Reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
-|||
+ ## Overview of the integrated vulnerability scanner
The vulnerability scanner extension works as follows:
| Oracle | Enterprise Linux | 5.11, 6, 7-7.9, 8-8.4 |
| Debian | Debian | 7.x-10.x |
| Ubuntu | Ubuntu | 12.04 LTS, 14.04 LTS, 15.x, 16.04 LTS, 18.04 LTS, 19.10, 20.04 LTS |
- ||||
+ 1. From the list of unhealthy machines, select the ones to receive a vulnerability assessment solution and select **Remediate**.
Your machine might be in this tab because:
| Oracle | Enterprise Linux | 5.11, 6, 7-7.9, 8-8.4 |
| Debian | Debian | 7.x-10.x |
| Ubuntu | Ubuntu | 12.04 LTS, 14.04 LTS, 15.x, 16.04 LTS, 18.04 LTS, 19.10, 20.04 LTS |
- ||||
+ ### What is scanned by the built-in vulnerability scanner?
The scanner runs on your machine to look for vulnerabilities of the machine itself. From the machine, it can't scan your network.
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
To enable auto provisioning of the Log Analytics agent:
|||
|Policy Add-on for Kubernetes |[Deploy Azure Policy Add-on to Azure Kubernetes Service clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fa8eff44f-8c92-45c3-a3fb-9880802d67a7)|
|Guest Configuration agent (preview) |[Deploy prerequisites to enable Guest Configuration policies on virtual machines](https://github.com/Azure/azure-policy/blob/64dcfa3033a3ff231ec4e73d2c1dad4db4e3b5dd/built-in-policies/policySetDefinitions/Guest%20Configuration/GuestConfiguration_Prerequisites.json)|
- |||
+ 1. Select **Save**. If a workspace needs to be provisioned, agent installation might take up to 25 minutes.
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
You can use any of the following ways to enable enhanced security for your subsc
| Azure CLI | [az security pricing](/cli/azure/security/pricing) |
| PowerShell | [Set-AzSecurityPricing](/powershell/module/az.security/set-azsecuritypricing) |
| Azure Policy | [Bundle Pricings](https://github.com/Azure/Azure-Security-Center/blob/master/Pricing%20%26%20Settings/ARM%20Templates/Set-ASC-Bundle-Pricing.json) |
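All three entry points in the table drive the same underlying Pricings API. A hedged sketch of that call in Python; the plan name, token acquisition, and api-version are assumptions, so treat the CLI and PowerShell commands above as the supported path:

```python
import requests

sub = "<subscription-id>"
plan = "VirtualMachines"  # e.g. the Defender for servers plan
url = (f"https://management.azure.com/subscriptions/{sub}/providers/"
       f"Microsoft.Security/pricings/{plan}?api-version=2018-06-01")
# Setting the pricing tier to Standard enables the enhanced security plan.
resp = requests.put(
    url,
    headers={"Authorization": "Bearer <token>"},
    json={"properties": {"pricingTier": "Standard"}},
)
resp.raise_for_status()
```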
-| | |
+ ### Can I enable Microsoft Defender for servers on a subset of servers in my subscription?
No. When you enable [Microsoft Defender for servers](defender-for-servers-introduction.md) on a subscription, all the machines in the subscription will be protected by Defender for servers.
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
In such cases, you can create an exemption for a recommendation to:
| Required roles and permissions: | **Owner** or **Resource Policy Contributor** to create an exemption<br>To create a rule, you need permissions to edit policies in Azure Policy.<br>Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy). |
| Limitations: | Exemptions can be created only for recommendations included in Defender for Cloud's default initiative, [Azure Security Benchmark](/security/benchmark/azure/introduction), or any of the supplied regulatory standard initiatives. Recommendations that are generated from custom initiatives cannot be exempted. Learn more about the relationships between [policies, initiatives, and recommendations](security-policy-concept.md). |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
-| | |
+ ## Define an exemption
defender-for-cloud File Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-overview.md
Learn how to configure file integrity monitoring (FIM) in Microsoft Defender for
|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md).<br>Using the Log Analytics agent, FIM uploads data to the Log Analytics workspace. Data charges apply, based on the amount of data you upload. See [Log Analytics pricing](https://azure.microsoft.com/pricing/details/log-analytics/) to learn more.|
|Required roles and permissions:|**Workspace owner** can enable/disable FIM (for more information, see [Azure Roles for Log Analytics](/services-hub/health/azure-roles#azure-roles)).<br>**Reader** can view results.|
|Clouds:|:::image type="icon" source="./medi).<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
-|||
+ ## What is FIM in Defender for Cloud?
File integrity monitoring (FIM), also known as change monitoring, examines operating system files, Windows registries, application software, Linux system files, and more, for changes that might indicate an attack.
defender-for-cloud Harden Docker Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/harden-docker-hosts.md
When vulnerabilities are found, they're grouped inside a single recommendation.
|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
|Required roles and permissions:|**Reader** on the workspace to which the host connects|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
-|||
+ ## Identify and remediate security vulnerabilities in your Docker configuration
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/information-protection.md
This page explains the integration of Azure Purview's data sensitivity classific
|Pricing:|You'll need an Azure Purview account to create the data sensitivity classifications and run the scans. Viewing the scan results and using the output is free for Defender for Cloud users|
|Required roles and permissions:|**Security admin** and **Security contributor**|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet (**Partial**: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.)|
-|||
+ ## The triage problem and Defender for Cloud's solution
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
Title: Using Microsoft Defender for Endpoint in Microsoft Defender for Cloud to protect native, on-premises, and AWS machines. description: Learn about deploying Microsoft Defender for Endpoint from Microsoft Defender for Cloud to protect Azure, hybrid, and multi-cloud machines. Previously updated : 01/10/2022 Last updated : 03/22/2022 # Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
Microsoft Defender for Endpoint is a holistic, cloud-delivered, endpoint securit
| Supported environments: | :::image type="icon" source="./medi) (formerly Windows Virtual Desktop), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 11 or Windows 10 (except if running Azure Virtual Desktop or Windows 10 Enterprise multi-session) |
| Required roles and permissions: | * To enable/disable the integration: **Security admin** or **Owner**<br>* To view Defender for Endpoint alerts in Defender for Cloud: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts |
-| | |
+ ## Benefits of integrating Microsoft Defender for Endpoint with Defender for Cloud
To remove the Defender for Endpoint solution from your machines:
1. Remove the MDE.Windows/MDE.Linux extension from the machine.
-1. Follow the steps in [Offboard devices from the Microsoft Defender for Endpoint service](/microsoft-365/security/defender-endpoint/offboard-machines?view=o365-worldwide) from the Defender for Endpoint documentation.
+1. Follow the steps in [Offboard devices from the Microsoft Defender for Endpoint service](/microsoft-365/security/defender-endpoint/offboard-machines?view=o365-worldwide&preserve-view=true) from the Defender for Endpoint documentation.
## FAQ - Microsoft Defender for Cloud integration with Microsoft Defender for Endpoint
Defender for Cloud automatically deploys the extension to machines running:
> If you delete the MDE.Windows/MDE.Linux extension, it will not remove Microsoft Defender for Endpoint. To 'offboard', see [Offboard Windows servers](/microsoft-365/security/defender-endpoint/configure-server-endpoints).
-### I've enabled the solution by the "MDE.Windows" / "MDE.Linux" extension isn't showing on my machine
+### I've enabled the solution but the "MDE.Windows" / "MDE.Linux" extension isn't showing on my machine
If you've enabled the integration, but still don't see the extension running on your machines, check the following:
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
If you want to create custom roles that can work with JIT, you'll need the detai
|Configure or edit a JIT policy for a VM | *Assign these actions to the role:* <ul><li>On the scope of a subscription or resource group that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/write` </li><li> On the scope of a subscription or resource group of VM: <br/>`Microsoft.Compute/virtualMachines/write`</li></ul> |
|Request JIT access to a VM | *Assign these actions to the user:* <ul><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action` </li><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/*/read` </li><li> `Microsoft.Compute/virtualMachines/read` </li><li> `Microsoft.Network/networkInterfaces/*/read` </li> <li> `Microsoft.Network/publicIPAddresses/read` </li></ul> |
|Read JIT policies| *Assign these actions to the user:* <ul><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/read`</li><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action`</li><li>`Microsoft.Security/policies/read`</li><li>`Microsoft.Security/pricings/read`</li><li>`Microsoft.Compute/virtualMachines/read`</li><li>`Microsoft.Network/*/read`</li></ul>|
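Taken together, the "Request JIT access to a VM" actions are enough to build a least-privileged requester role, the same idea as the Set-JitLeastPrivilegedRole community script mentioned below. A hedged sketch of such a role definition; the role name and scope are illustrative, and you'd create the role with your preferred tool (portal, CLI, or SDK):

```python
# A custom-role definition built only from the JIT-request actions listed above.
jit_requester_role = {
    "Name": "JIT VM Access Requester (example)",
    "IsCustom": True,
    "Description": "Can request just-in-time VM access, and nothing else.",
    "Actions": [
        "Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action",
        "Microsoft.Security/locations/jitNetworkAccessPolicies/*/read",
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Network/networkInterfaces/*/read",
        "Microsoft.Network/publicIPAddresses/read",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}
```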
-|||
+ ## Next steps
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
This page teaches you how to include JIT in your security program. You'll learn
| Supported VMs: | :::image type="icon" source="./medi). |
| Required roles and permissions: | **Reader** and **SecurityReader** roles can both view the JIT status and parameters.<br>To create custom roles that can work with JIT, see [What permissions are needed to configure and use JIT?](just-in-time-access-overview.md#what-permissions-are-needed-to-configure-and-use-jit).<br>To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages. |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts |
-|||
+ <sup><a name="footnote1"></a>1</sup> For any VM protected by Azure Firewall, JIT will only fully protect the machine if it's in the same VNET as the firewall. VMs using VNET peering will not be fully protected.
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
You can manually configure the Kubernetes data plane hardening add-on, or extens
| Kubernetes clusters should not grant CAPSYSADMIN security capabilities | Manage access and permissions | No |
| Privileged containers should be avoided | Manage access and permissions | No |
| Running containers as root user should be avoided | Manage access and permissions | No |
- ||||
+ For recommendations with parameters that need to be customized, you will need to set the parameters:
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
The following table displays roles and allowed actions in Defender for Cloud.
| Dismiss alerts | - | ✔ | - | ✔ | ✔ |
| Apply security recommendations for a resource</br> (and use [Fix](implement-security-recommendations.md#fix-button)) | - | - | ✔ | ✔ | ✔ |
| View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ |
-| | | | | |
+ For **auto provisioning**, the specific role required depends on the extension you're deploying. For full details, check the tab for the specific extension in the [availability table on the auto provisioning quick start page](enable-data-collection.md#availability).
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
|Pricing:|The **CSPM plan** is free.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After the preview, it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for servers** plan is billed at the same price as the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## Prerequisites
Defender for Cloud will immediately start scanning your AWS resources and you'll
|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
|Required roles and permissions:|**Owner** on the relevant Azure subscription<br>**Contributor** can also connect an AWS account if an owner provides the service principal details|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## Connect your AWS account
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To protect your GCP-based resources, you can connect an account in two different
|Pricing:|The **CSPM plan** is free.<br> The **Defender for servers** plan is billed at the same price as the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After the preview, it will be billed for GCP at the same price as for Azure resources.|
|Required roles and permissions:| **Contributor** on the relevant Azure Subscription|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet, Other Gov)|
-|||
+ ## Remove 'classic' connectors
If you choose to disable all of the available configuration options, no agents, or c
|Pricing:|Requires [Microsoft Defender for servers](defender-for-servers-introduction.md)|
|Required roles and permissions:|**Owner** or **Contributor** on the relevant Azure Subscription|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+ ## Connect your GCP project
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
Title: Reference table for all Microsoft Defender for Cloud recommendations description: This article lists Microsoft Defender for Cloud's security recommendations that help you harden and protect your resources. Last updated 03/13/2022 # Security recommendations - a reference guide
impact on your secure score.
|Install Azure Security Center for IoT security module to get more visibility into your IoT devices|Install Azure Security Center for IoT security module to get more visibility into your IoT devices.|Low|
|Your machines should be restarted to apply system updates|Restart your machines to apply the system updates and secure the machine from vulnerabilities. (Related policy: System updates should be installed on your machines)|Medium|
|Monitoring agent should be installed on your machines|This action installs a monitoring agent on the selected virtual machines. Select a workspace for the agent to report to. (No related policy)|High|
-||||
+ ## Next steps
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
We've added two **preview** recommendations to deploy and maintain the endpoint
||||
|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. <br> <a href="/azure/defender-for-cloud/endpoint-protection-recommendations-technical">Learn more about how Endpoint Protection for machines is evaluated.</a><br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |High |
|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented [here](./supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows). Endpoint protection assessment is documented <a href='/azure/defender-for-cloud/endpoint-protection-recommendations-technical'>here</a>.<br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |Medium |
-|||
+ > [!NOTE]
> The recommendations show their freshness interval as 8 hours, but there are some scenarios in which this might take significantly longer. For example, when an on-premises machine is deleted, it takes 24 hours for Security Center to identify the deletion. After that, the assessment will take up to 8 hours to return the information. In that specific situation, therefore, it may take 32 hours for the machine to be removed from the list of affected resources.
The description has also been updated to better explain the purpose of this hard
| Recommendation | Description | Severity |
|--|--|:--:|
| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. For a comparison of different disk encryption technologies in Azure, see https://aka.ms/diskencryptioncomparison.<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) you're using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
-| | | |
+ ### Continuous export of secure score and regulatory compliance data released for general availability (GA)
To expand the threat protections provided by Azure Defender for Key Vault, we've
| Alert (alert type) | Description | MITRE tactic | Severity |
|||:--:|-|
| Access from a suspicious IP address to a key vault<br>(KV_SuspiciousIPAccess) | A key vault has been successfully accessed by an IP that has been identified by Microsoft Threat Intelligence as a suspicious IP address. This may indicate that your infrastructure has been compromised. We recommend further investigation. Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | Credential Access | Medium |
-|||
+ For more information, see: - [Introduction to Azure Defender for Key Vault](defender-for-resource-manager-introduction.md)
To reflect the fact that the security alerts provided by Azure Defender for Kube
|Alert (alert type)|Description|
|-|-|
|Kubernetes penetration testing tool detected<br>(**AKS**_PenTestToolsKubeHunter)|Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the **AKS** cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes.|
-|||
was changed to:

|Alert (alert type)|Description|
|-|-|
|Kubernetes penetration testing tool detected<br>(**K8S**_PenTestToolsKubeHunter)|Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the **Kubernetes** cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes.|
-|||
Any suppression rules that refer to alerts beginning "AKS_" were automatically converted. If you've set up SIEM exports, or custom automation scripts that refer to Kubernetes alerts by alert type, you'll need to update them with the new alert types, as sketched below.
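For scripts that filter on alert types, the rename is a mechanical prefix swap. Here's a minimal, hypothetical sketch; the `alert_types` list is illustrative, not part of any Defender for Cloud API:

```python
# Hypothetical example: migrate stored alert-type filters from the old
# "AKS_" prefix to the new "K8S_" prefix. Substitute the alert types
# your own automation references.
alert_types = [
    "AKS_PenTestToolsKubeHunter",
    "VM_VMAccessUnusualConfigReset",  # non-AKS types pass through unchanged
]

def migrate_alert_type(alert_type: str) -> str:
    """Return the post-rename alert type for a pre-rename one."""
    if alert_type.startswith("AKS_"):
        return "K8S_" + alert_type[len("AKS_"):]
    return alert_type

migrated = [migrate_alert_type(t) for t in alert_types]
print(migrated)  # ['K8S_PenTestToolsKubeHunter', 'VM_VMAccessUnusualConfigReset']
```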
To expand the threat protections provided by Azure Defender for Resource Manager
|**Privileged custom role created for your subscription in a suspicious way (Preview)**<br>(ARM_PrivilegedRoleDefinitionCreation)|Azure Defender for Resource Manager detected a suspicious creation of privileged custom role definition in your subscription. This operation might have been performed by a legitimate user in your organization. Alternatively, it might indicate that an account in your organization was breached, and that the threat actor is trying to create a privileged role to use in the future to evade detection.|Lateral Movement, Defense Evasion|Low|
|**Azure Resource Manager operation from suspicious IP address (Preview)**<br>(ARM_OperationFromSuspiciousIP)|Azure Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds.|Execution|Medium|
|**Azure Resource Manager operation from suspicious proxy IP address (Preview)**<br>(ARM_OperationFromSuspiciousProxyIP)|Azure Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP.|Defense Evasion|Medium|
-||||
+ For more information, see: - [Introduction to Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md)
To access this information, you can use any of the methods in the table below.
| Azure Resource Graph | `securityresources`<br>`where type == "microsoft.security/assessments"` |
| Continuous export | The two dedicated fields will be available in the Log Analytics workspace data |
| [CSV export](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations) | The two fields are included in the CSV files |
-| | |
+ Learn more about the [Assessments REST API](/rest/api/securitycenter/assessments).
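If you want to pull the same assessment data programmatically, a minimal sketch with the Azure SDK for Python follows. The exact property names for the two new date fields aren't spelled out above, so `firstEvaluationDate` and `statusChangeDate` here are assumptions for illustration:

```python
# Minimal sketch: run the Azure Resource Graph query shown above
# (pip install azure-identity azure-mgmt-resourcegraph).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

query = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query='securityresources | where type == "microsoft.security/assessments"',
)

for row in client.resources(query).data:
    # The two date fields are nested under properties.status; the
    # property names here are assumptions, not confirmed by this article.
    status = row.get("properties", {}).get("status", {})
    print(row.get("name"), status.get("firstEvaluationDate"), status.get("statusChangeDate"))
```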
The 11 Azure Defender alerts listed below have been deprecated.
|--|--|
| ARM_MicroBurstDomainInfo | PREVIEW - MicroBurst toolkit "Get-AzureDomainInfo" function run detected |
| ARM_MicroBurstRunbook | PREVIEW - MicroBurst toolkit "Get-AzurePasswords" function run detected |
- | | |
+ - These nine alerts relate to an Azure Active Directory Identity Protection connector (IPC) that has already been deprecated:
The 11 Azure Defender alerts listed below have been deprecated.
| PasswordSpray | Password Spray |
| LeakedCredentials | Azure AD threat intelligence |
| AADAI | Azure AD AI |
- | | |
> [!TIP]
> These nine IPC alerts were never Security Center alerts. They're part of the Azure Active Directory (AAD) Identity Protection connector (IPC) that was sending them to Security Center. For the last two years, the only customers who've been seeing those alerts are organizations that configured the export (from the connector to ASC) in 2019 or earlier. AAD IPC has continued to show them in its own alerts systems and they've continued to be available in Azure Sentinel. The only change is that they're no longer appearing in Security Center.
Learn which recommendations are in each security control in [Security controls a
|||
|Vulnerability assessment should be enabled on your SQL servers<br>Vulnerability assessment should be enabled on your SQL managed instances<br>Vulnerabilities on your SQL databases should be remediated new<br>Vulnerabilities on your SQL databases in VMs should be remediated |Moving from Remediate vulnerabilities (worth 6 points)<br>to Remediate security configurations (worth 4 points).<br>Depending on your environment, these recommendations will have a reduced impact on your score.|
|There should be more than one owner assigned to your subscription<br>Automation account variables should be encrypted<br>IoT Devices - Auditd process stopped sending events<br>IoT Devices - Operating system baseline validation failure<br>IoT Devices - TLS cipher suite upgrade needed<br>IoT Devices - Open Ports On Device<br>IoT Devices - Permissive firewall policy in one of the chains was found<br>IoT Devices - Permissive firewall rule in the input chain was found<br>IoT Devices - Permissive firewall rule in the output chain was found<br>Diagnostic logs in IoT Hub should be enabled<br>IoT Devices - Agent sending underutilized messages<br>IoT Devices - Default IP Filter Policy should be Deny<br>IoT Devices - IP Filter rule large IP range<br>IoT Devices - Agent message intervals and size should be adjusted<br>IoT Devices - Identical Authentication Credentials<br>IoT Devices - Audited process stopped sending events<br>IoT Devices - Operating system (OS) baseline configuration should be fixed|Moving to **Implement security best practices**.<br>When a recommendation moves to the Implement security best practices security control, which is worth no points, the recommendation no longer affects your secure score.|
-|||
+ ## March 2021
We provide three Azure Policy 'DeployIfNotExist' policies that create and config
|Workflow automation for security alerts|[Deploy Workflow Automation for Azure Security Center alerts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff1525828-9a90-4fcf-be48-268cdd02361e)|f1525828-9a90-4fcf-be48-268cdd02361e|
|Workflow automation for security recommendations|[Deploy Workflow Automation for Azure Security Center recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef|
|Workflow automation for regulatory compliance changes|[Deploy Workflow Automation for Azure Security Center regulatory compliance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f509122b9-ddd9-47ba-a5f1-d0dac20be63c)|509122b9-ddd9-47ba-a5f1-d0dac20be63c|
-||||
+ There are two updates to the features of these policies:
To increase the coverage of this benchmark, the following 35 preview recommendat
| Manage access and permissions | - Function apps should have 'Client Certificates (Incoming client certificates)' enabled |
| Protect applications against DDoS attacks | - Web Application Firewall (WAF) should be enabled for Application Gateway<br> - Web Application Firewall (WAF) should be enabled for Azure Front Door Service service |
| Restrict unauthorized network access | - Firewall should be enabled on Key Vault<br> - Private endpoint should be configured for Key Vault<br> - App Configuration should use private link<br> - Azure Cache for Redis should reside within a virtual network<br> - Azure Event Grid domains should use private link<br> - Azure Event Grid topics should use private link<br> - Azure Machine Learning workspaces should use private link<br> - Azure SignalR Service should use private link<br> - Azure Spring Cloud should use network injection<br> - Container registries should not allow unrestricted network access<br> - Container registries should use private link<br> - Public network access should be disabled for MariaDB servers<br> - Public network access should be disabled for MySQL servers<br> - Public network access should be disabled for PostgreSQL servers<br> - Storage account should use a private link connection<br> - Storage accounts should restrict network access using virtual network rules<br> - VM Image Builder templates should use private link|
-| | |
+ Related links:
Preview recommendations don't render a resource unhealthy, and they aren't inclu
| Restrict unauthorized network access | - Private endpoint should be enabled for PostgreSQL servers<br>- Private endpoint should be enabled for MariaDB servers<br>- Private endpoint should be enabled for MySQL servers |
| Enable auditing and logging | - Diagnostic logs in App Services should be enabled |
| Implement security best practices | - Azure Backup should be enabled for virtual machines<br>- Geo-redundant backup should be enabled for Azure Database for MariaDB<br>- Geo-redundant backup should be enabled for Azure Database for MySQL<br>- Geo-redundant backup should be enabled for Azure Database for PostgreSQL<br>- PHP should be updated to the latest version for your API app<br>- PHP should be updated to the latest version for your web app<br>- Java should be updated to the latest version for your API app<br>- Java should be updated to the latest version for your function app<br>- Java should be updated to the latest version for your web app<br>- Python should be updated to the latest version for your API app<br>- Python should be updated to the latest version for your function app<br>- Python should be updated to the latest version for your web app<br>- Audit retention for SQL servers should be set to at least 90 days |
-| | |
+ Related links:
To ensure a consistent experience for all users, regardless of the scanner type
|-|:-|
|**A vulnerability assessment solution should be enabled on your virtual machines**|Replaces the following two recommendations:<br> ***** Enable the built-in vulnerability assessment solution on virtual machines (powered by Qualys (now deprecated) (Included with standard tier)<br> ***** Vulnerability assessment solution should be installed on your virtual machines (now deprecated) (Standard and free tiers)|
|**Vulnerabilities in your virtual machines should be remediated**|Replaces the following two recommendations:<br>***** Remediate vulnerabilities found on your virtual machines (powered by Qualys) (now deprecated)<br>***** Vulnerabilities should be remediated by a Vulnerability Assessment solution (now deprecated)|
-|||
+ Now you'll use the same recommendation to deploy Security Center's vulnerability assessment extension or a privately licensed solution ("BYOL") from a partner such as Qualys or Rapid7.
If you have scripts, queries, or automations referring to the previous recommend
|**Remediate vulnerabilities found on your virtual machines (powered by Qualys)**<br>Key: 1195afff-c881-495e-9bc5-1486211ae03f|Built-in|
|**Vulnerability assessment solution should be installed on your virtual machines**<br>Key: 01b1ed4c-b733-4fee-b145-f23236e70cf3|BYOL|
|**Vulnerabilities should be remediated by a Vulnerability Assessment solution**<br>Key: 71992a2a-d168-42e0-b10e-6b45fa2ecddb|BYOL|
-|||
|Policy|Scope|
|-|:-|
|**Vulnerability assessment should be enabled on virtual machines**<br>Policy ID: 501541f7-f7e7-4cd6-868c-4190fdad3ac9|Built-in|
|**Vulnerabilities should be remediated by a vulnerability assessment solution**<br>Policy ID: 760a85ff-6162-42b3-8d70-698e268f648c|BYOL|
-|||
+ ##### From August 2020
If you have scripts, queries, or automations referring to the previous recommend
|-|:-|
|**A vulnerability assessment solution should be enabled on your virtual machines**<br>Key: ffff0522-1e88-47fc-8382-2a80ba848f5d|Built-in + BYOL|
|**Vulnerabilities in your virtual machines should be remediated**<br>Key: 1195afff-c881-495e-9bc5-1486211ae03f|Built-in + BYOL|
-|||
|Policy|Scope|
|-|:-|
|[**Vulnerability assessment should be enabled on virtual machines**](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f501541f7-f7e7-4cd6-868c-4190fdad3ac9)<br>Policy ID: 501541f7-f7e7-4cd6-868c-4190fdad3ac9 |Built-in + BYOL|
-|||
+ ### New AKS security policies added to ASC_default initiative – for use by private preview customers only
The policy definitions can be found in Azure Policy:
|Continuous export to Log Analytics workspace|[Deploy export to Log Analytics workspace for Azure Security Center alerts and recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fffb6f416-7bd2-4488-8828-56585fef2be9)|ffb6f416-7bd2-4488-8828-56585fef2be9|
|Workflow automation for security alerts|[Deploy Workflow Automation for Azure Security Center alerts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff1525828-9a90-4fcf-be48-268cdd02361e)|f1525828-9a90-4fcf-be48-268cdd02361e|
|Workflow automation for security recommendations|[Deploy Workflow Automation for Azure Security Center recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef|
-||||
+ Get started with [workflow automation templates](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation).
The policy definitions can be found in Azure Policy:
| [Advanced threat protection should be enabled on Azure Container Registry registries](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fc25d9a16-bc35-4e15-a7e5-9db606bf9ed4) | c25d9a16-bc35-4e15-a7e5-9db606bf9ed4 |
| [Advanced threat protection should be enabled on Azure Kubernetes Service clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f523b5cd1-3e23-492f-a539-13118b6d1e3a) | 523b5cd1-3e23-492f-a539-13118b6d1e3a |
| [Advanced threat protection should be enabled on Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f4da35fc9-c9e7-4960-aec9-797fe7d9051d) | 4da35fc9-c9e7-4960-aec9-797fe7d9051d |
-| | |
+ Learn more about [Threat protection in Azure Security Center](azure-defender.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
The new alerts for this Defender plan cover these intentions as shown in the fol
| **Suspicious invocation of a high-risk 'Lateral Movement' operation detected (Preview)**<br>(ARM_AnomalousOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise additional resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Lateral Movement | Medium |
| **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium |
| **Suspicious invocation of a high-risk 'Impact' operation detected (Preview)**<br>(ARM_AnomalousOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Impact | Medium |
-|||||
+ In addition, these two alerts from this plan have come out of preview:
In addition, these two alerts from this plan have come out of preview:
|-|--|:-:|-|
| **Azure Resource Manager operation from suspicious IP address**<br>(ARM_OperationFromSuspiciousIP) | Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium |
| **Azure Resource Manager operation from suspicious proxy IP address**<br>(ARM_OperationFromSuspiciousProxyIP) | Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |
-|||||
+ ### Recommendations to enable Microsoft Defender plans on workspaces (in preview)
The two recommendations, which both offer automated remediation (the 'Fix' actio
||||
|[Microsoft Defender for servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) |Microsoft Defender for servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for servers</a>.<br />(No related policy) |Medium |
|[Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) |Microsoft Defender for servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Introduction to Microsoft Defender for servers</a>.<br />(No related policy) |Medium |
-||||
+ ### Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)
These are the new alerts:
|||--|-|
| **Publicly accessible storage containers successfully discovered**<br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery) | A successful discovery of publicly open storage container(s) in your storage account was performed in the last hour by a scanning script or tool.<br><br> This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Medium |
| **Publicly accessible storage containers unsuccessfully scanned**<br>(Storage.Blob_OpenContainersScanning.FailedAttempt) | A series of failed attempts to scan for publicly open storage containers were performed in the last hour. <br><br>This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | Low |
-|||
+ For more information, see: - [Threat matrix for storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/)
The following alert was removed from our network layer alerts due to inefficienc
| Alert (alert type) | Description | MITRE tactics | Severity |
||-|:--:||
| **Possible outgoing port scanning activity detected**<br>(PortSweeping) | Network traffic analysis detected suspicious outgoing traffic from %{Compromised Host}. This traffic may be a result of a port scanning activity. When the compromised resource is a load balancer or an application gateway, the suspected outgoing traffic originated from one or more of the resources in the backend pool (of the load balancer or application gateway). If this behavior is intentional, please note that performing port scanning is against Azure Terms of service. If this behavior is unintentional, it may mean your resource has been compromised. | Discovery | Medium |
-||||
+
With this update, we've changed the prefixes of these alerts to match this reass
| ARM_VMAccessUnusualConfigReset | VM_VMAccessUnusualConfigReset |
| ARM_VMAccessUnusualPasswordReset | VM_VMAccessUnusualPasswordReset |
| ARM_VMAccessUnusualSSHReset | VM_VMAccessUnusualSSHReset |
-|||
+ Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md) plans.
These alerts are generated based on a new machine learning model and Kubernetes
|||:--:|-|
| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) | Kubernetes audit log analysis detected pod deployment that is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relation to one another. The features monitored by this analytic include the container image registry used, the account performing the deployment, the day of the week, how often this account performs pod deployments, the user agent used in the operation, whether this is a namespace in which pod deployments often occur, and other features. The top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. From examining role assignments, the listed permissions are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Azure Defender. | Privilege Escalation | Low |
-|||
+ For a full list of the Kubernetes alerts, see [Alerts for Kubernetes clusters](alerts-reference.md#alerts-k8scluster).
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
In this example:
| 3 | **Number of resources** | There are 35 resources affected by this control.<br>To understand the possible contribution of every resource, divide the max score by the number of resources.<br>For this example, 6/35=0.1714<br>**Every resource contributes 0.1714 points.** |
| 4 | **Current score** | The current score for this control.<br>Current score=[Score per resource]*[Number of healthy resources]<br> 0.1714 x 5 healthy resources = 0.86<br>Each control contributes towards the total score. In this example, the control is contributing 0.86 points to current total secure score. |
| 5 | **Potential score increase** | The remaining points available to you within the control. If you remediate all the recommendations in this control, your score will increase by 9%.<br>Potential score increase=[Score per resource]*[Number of unhealthy resources]<br> 0.1714 x 30 unhealthy resources = 5.14<br> |
-| | | |
+
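The arithmetic above generalizes to any control. Here's a small sketch using only the numbers from this example:

```python
# Worked example of the secure score arithmetic described above:
# a control worth 6 points, 35 affected resources, 5 of them healthy.
max_score = 6
total_resources = 35
healthy_resources = 5
unhealthy_resources = total_resources - healthy_resources       # 30

score_per_resource = max_score / total_resources                # ~0.1714
current_score = score_per_resource * healthy_resources          # ~0.86
potential_increase = score_per_resource * unhealthy_resources   # ~5.14

print(f"Score per resource:       {score_per_resource:.4f}")
print(f"Current score:            {current_score:.2f}")
print(f"Potential score increase: {potential_increase:.2f}")
```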
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-### [**On-prem/IasS (ARC)**](#tab/iass-arc)
+### [**On-prem/IaaS (Arc)**](#tab/iass-arc)
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
|--|--| -- | -- | -- | -- | --|
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔ | - | No |
| Third-party vulnerability assessment | ✔ | - | ✔ | No |
| [Network security assessment](protect-network-resources.md) | ✔ | ✔ | - | No |
-| | | | | |
+ ### [**Linux machines**](#tab/features-linux)
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔ | - | No |
| Third-party vulnerability assessment | ✔ | - | ✔ | No |
| [Network security assessment](protect-network-resources.md) | ✔ | ✔ | - | No |
-| | | | | |
+ ### [**Multi-cloud machines**](#tab/features-multi-cloud)
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) |
| Third-party vulnerability assessment | - | - |
| [Network security assessment](protect-network-resources.md) | - | - |
-| | |
+
For information about when recommendations are generated for each of these solut
| McAfee v10+ | Linux (GA) | No |
| Microsoft Defender for Endpoint for Linux<sup>[1](#footnote1)</sup> | Linux (GA) | Via extension |
| Sophos V9+ | Linux (GA) | No |
-| | | |
+ <sup><a name="footnote1"></a>1</sup> It's not enough to have Microsoft Defender for Endpoint on the Linux machine: the machine will only appear as healthy if the always-on scanning feature (also known as real-time protection (RTP)) is active. By default, the RTP feature is **disabled** to avoid clashes with other AV software.
For information about when recommendations are generated for each of these solut
| - [Microsoft Defender for Endpoint deployment and integrated license](./integration-defender-for-endpoint.md) | GA | GA | Not Available |
| - [Connect AWS account](./quickstart-onboard-aws.md) | GA | Not Available | Not Available |
| - [Connect GCP project](./quickstart-onboard-gcp.md) | GA | Not Available | Not Available |
-| | | | |
+ <sup><a name="footnote1"></a>1</sup> Partially GA: The ability to disable specific findings from vulnerability scans is in public preview.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Defender for Cloud includes multiple recommendations for improving the managemen
|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/04e7147b-0deb-9796-2e5c-0336343ceb3d)|04e7147b-0deb-9796-2e5c-0336343ceb3d|
|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e52064aa-6853-e252-a11e-dffc675689c2)|e52064aa-6853-e252-a11e-dffc675689c2|
|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00c6d40b-e990-6acf-d4f3-471e747a27c4)|00c6d40b-e990-6acf-d4f3-471e747a27c4|
- |||
+ - **Recommendations rename** - From this update, we're renaming two recommendations. We're also revising their descriptions. The assessment keys will remain unchanged.
Defender for Cloud includes multiple recommendations for improving the managemen
|Name |[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e52064aa-6853-e252-a11e-dffc675689c2) |Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions |
|Description |User accounts that have been blocked from signing in, should be removed from your subscriptions.<br>These accounts can be targets for attackers looking to find ways to access your data without being noticed.|User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed.<br>Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).|
|Related policy |[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions |
- |||
|Property |Current value | From the update|
||||
Defender for Cloud includes multiple recommendations for improving the managemen
|Name |[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00c6d40b-e990-6acf-d4f3-471e747a27c4)|Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions|
|Description |User accounts that have been blocked from signing in, should be removed from your subscriptions.<br>These accounts can be targets for attackers looking to find ways to access your data without being noticed.|User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed.<br>Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).|
|Related policy |[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474)|Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions|
- |||
+ ## Next steps
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
This article describes the workflow automation feature of Microsoft Defender for
|Pricing:|Free|
|Required roles and permissions:|**Security admin role** or **Owner** on the resource group<br>Must also have write permissions for the target resource<br><br>To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:<br> - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)<br> - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for Logic App creation and modification<br>If you want to use Logic App connectors, you may need additional credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances)|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-|||
+
To implement these policies:
|Workflow automation for security alerts |[Deploy Workflow Automation for Microsoft Defender for Cloud alerts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2ff1525828-9a90-4fcf-be48-268cdd02361e)|f1525828-9a90-4fcf-be48-268cdd02361e|
|Workflow automation for security recommendations |[Deploy Workflow Automation for Microsoft Defender for Cloud recommendations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef|
|Workflow automation for regulatory compliance changes|[Deploy Workflow Automation for Microsoft Defender for Cloud regulatory compliance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f509122b9-ddd9-47ba-a5f1-d0dac20be63c)|509122b9-ddd9-47ba-a5f1-d0dac20be63c|
- ||||
+ > [!TIP] > You can also find these by searching Azure Policy:
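To assign one of these policies programmatically, a minimal sketch with the Azure SDK for Python follows. The scope and assignment name are placeholders, and 'DeployIfNotExists' policies also need a managed identity and location for remediation, omitted here for brevity:

```python
# Minimal sketch: assign one of the workflow automation policies above
# (pip install azure-identity azure-mgmt-resource).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{subscription_id}"
definition_id = (
    "/providers/Microsoft.Authorization/policyDefinitions/"
    "f1525828-9a90-4fcf-be48-268cdd02361e"  # workflow automation for alerts
)

client = PolicyClient(DefaultAzureCredential(), subscription_id)
assignment = client.policy_assignments.create(
    scope,
    "deploy-workflow-automation-alerts",  # placeholder assignment name
    PolicyAssignment(policy_definition_id=definition_id),
)
print(assignment.id)
```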
defender-for-iot How To Manage Sensors From The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-from-the-on-premises-management-console.md
Title: Manage sensors from the on-premises management console description: Learn how to manage sensors from the management console, including updating sensor versions, pushing system settings to sensors, managing certificates, and enabling and disabling engines on sensors. Previously updated : 11/09/2021 Last updated : 03/20/2022
You can define the following sensor system settings from the management console:
You can update several sensors simultaneously from the on-premises management console.
-### Update sequence
-
-When upgrading an on-premises management console and managed sensors, first update the management console, and then update the sensors. The sensor update process will not succeed if you do not update the on-premises management console first.
+If you're upgrading an on-premises management console and managed sensors, first update the management console, and then update the sensors. The sensor update process won't succeed if you don't update the on-premises management console first.
**To update several sensors**:
-1. Verify that you have already updated the on-premises management console to the version that you are updating the sensors. For more information on-premises management console update see, [Update the software version](how-to-manage-the-on-premises-management-console.md#update-the-software-version).
-
-1. Go to the [Azure portal](https://portal.azure.com/).
-
-1. Navigate to Microsoft Defender for IoT.
-
-1. Go to the **Updates** page.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/update-screen.png" alt-text="Screenshot of the Updates dashboard view.":::
-
-1. Select **Download** from the **Sensors** section and save the file.
-
-1. Sign in to the management console, and select **System Settings**.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/admin-system-settings.png" alt-text="Screenshot of the Administration menu to select System Settings.":::
-
-1. Select the sensors to update in the **Sensor Engine Configuration** section, and then select **Automatic Updates**.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/sensors-select.png" alt-text="Two sensors showing learning mode and automatic updates.":::
-
-1. Select **Save Changes**.
-
-1. On the management console, select **System Settings**.
-1. Under the Sensor version update section, select the :::image type="icon" source="../media/how-to-manage-sensors-from-the-on-premises-management-console/add-icon.png" border="false"::: button.
-
- :::image type="content" source="../media/how-to-manage-sensors-from-the-on-premises-management-console/sendor-version-update-window.png" alt-text="In the Sensor version update window select the + icon to update all of the sensors connected to the management console.":::
-
-9. An **Upload File** dialog box opens. Upload the file that you downloaded from the **Updates** page.
-
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/upload-file.png" alt-text="Select the Browse button to upload your file.":::
-
-You can monitor the update status of each sensor in the **Site Management** window.
--
-### Update sensors from the on-premises management console
-
-You can view the update status of your sensors from the management console. If the update failed, you can reattempt to update the sensor from the on-premises management console (versions 2.3.5 and on).
+1. Verify that you've already updated the on-premises management console to the version to which you're updating the sensors. For more information, see [Update the software version](how-to-manage-the-on-premises-management-console.md#update-the-software-version).
-**To update the sensor from on-premises management console:**
+1. In the Azure portal, go to **Defender for IoT** > **Updates**. Under **Sensors**, select **Download** and save the file.
-1. Sign in to the on-premises management console, and navigate to the **Sites Management** page.
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/update-screen.png" alt-text="Screenshot of the Updates page.":::
-1. Locate any sensors that have **Failed** under the Update Progress column, and select the download button.
+1. Sign in to the on-premises management console, and select **System Settings**.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/download-update-button.png" alt-text="Select the download icon to try to download and install the update for your sensor.":::
+1. Under **Sensor Engine Configuration**, select any sensor you want to update, and then select **Automatic Version Updates** > **Save Changes**. For example:
-You can monitor the update status of each sensor in the **Site Management** window.
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png" alt-text="Screenshot of on-premises management console with Automatic Version Updates selected." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png":::
+1. On the right, select **Version** update, and then browse to and select the update file you downloaded from the Azure portal.
-If you are unable to update the sensor, contact customer support for assistance.
+Monitor the update status of each sensor connected to your on-premises management console in the **Site Management** page. For any update that failed, reattempt the update or open a support ticket for assistance.
## Update threat intelligence packages
You can manually upload this file in the Azure portal and automatically update i
The **Site Manager** window displays disconnection information if sensors disconnect from their assigned on-premises management console. The following sensor disconnection information is available:

-- "The on-premises management console cannot process data received from the sensor."
+- "The on-premises management console canΓÇÖt process data received from the sensor."
- "Times drift detected. The on-premises management console has been disconnected from sensor."
You can send alerts to third parties with information about disconnected sensors
## Enable or disable sensors
-Sensors are protected by five Defender for IoT engines. You can enable or disable the engines for connected sensors.
+Sensors are protected by Defender for IoT engines. You can enable or disable the engines for connected sensors.
| Engine | Description | Example scenario |
|--|--|--|
-| Protocol violation engine | A protocol violation occurs when the packet structure or field values don't comply with the protocol specification. | "Illegal MODBUS Operation (Function Code Zero)" alert. This alert indicates that a primary device sent a request with function code 0 to a secondary device. This is not allowed according to the protocol specification, and the secondary device might not handle the input correctly. |
-| Policy violation engine | A policy violation occurs with a deviation from baseline behavior defined in the learned or configured policy. | "Unauthorized HTTP User Agent" alert. This alert indicates that an application that was not learned or approved by the policy is used as an HTTP client on a device. This might be a new web browser or application on that device. |
+| Protocol violation engine | A protocol violation occurs when the packet structure or field values don't comply with the protocol specification. | "Illegal MODBUS Operation (Function Code Zero)" alert. This alert indicates that a primary device sent a request with function code 0 to a secondary device. This isn't allowed according to the protocol specification, and the secondary device might not handle the input correctly. |
+| Policy violation engine | A policy violation occurs with a deviation from baseline behavior defined in the learned or configured policy. | "Unauthorized HTTP User Agent" alert. This alert indicates that an application that wasn't learned or approved by the policy is used as an HTTP client on a device. This might be a new web browser or application on that device. |
| Malware engine | The malware engine detects malicious network activity. | "Suspicion of Malicious Activity (Stuxnet)" alert. This alert indicates that the sensor found suspicious network activity known to be related to the Stuxnet malware, which is an advanced persistent threat aimed at industrial control and SCADA networks. |
| Anomaly engine | The anomaly engine detects an anomaly in network behavior. | "Periodic Behavior in Communication Channel." This is a component that inspects network connections and finds periodic or cyclic behavior of data transmission, which is common in industrial networks. |
-| Operational engine | This engine detects operational incidents or malfunctioning entities. | `Device is Suspected to be Disconnected (Unresponsive)` alert. This alert triggered when a device is not responding to any requests for a predefined period. It might indicate a device shutdown, disconnection, or malfunction.
-|
+| Operational engine | This engine detects operational incidents or malfunctioning entities. | `Device is Suspected to be Disconnected (Unresponsive)` alert. This alert is triggered when a device isn't responding to any requests for a predefined period. It might indicate a device shutdown, disconnection, or malfunction. |
**To enable or disable engines for connected sensors:**
By default, sensors are automatically backed up at 3:00 AM daily. The backup sch
When the default sensor backup location is changed, the on-premises management console automatically retrieves the files from the new location on the sensor or an external location, provided that the console has permission to access the location.
-When the sensors are not registered with the on-premises management console, the **Sensor Backup Schedule** dialog box indicates that no sensors are managed.
+When the sensors aren't registered with the on-premises management console, the **Sensor Backup Schedule** dialog box indicates that no sensors are managed.
The restore process is the same regardless of where the files are stored.
The default allocation is displayed in the **Sensor Backup Schedule** dialog box
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/edit-mail-server-configuration.png" alt-text="The Edit Mail Server Configuration screen.":::
-There is no storage limit when you're backing up to an external server. You must, however, define an upper allocation limit in the **Sensor Backup Schedule** > **Custom Path** field. The following numbers and characters are supported: `/, a-z, A-Z, 0-9, and _`.
+There's no storage limit when you're backing up to an external server. You must, however, define an upper allocation limit in the **Sensor Backup Schedule** > **Custom Path** field. The following numbers and characters are supported: `/, a-z, A-Z, 0-9, and _`.
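As a quick sanity check, the allowed character set maps to a simple pattern. The following helper is this author's illustration of the documented character set, not a Defender for IoT API:

```python
# Hypothetical helper: check that a custom backup path uses only the
# characters the Custom Path field accepts (/, a-z, A-Z, 0-9, and _).
import re

ALLOWED_PATH = re.compile(r"[/a-zA-Z0-9_]+")

def is_valid_custom_path(path: str) -> bool:
    """Return True if every character in the path is in the allowed set."""
    return bool(ALLOWED_PATH.fullmatch(path))

print(is_valid_custom_path("/opt/sensor_backups"))  # True
print(is_valid_custom_path("/opt/sensor-backups"))  # False: '-' isn't allowed
```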
Here's information about exceeding allocation storage limits:

-- If you exceed the allocated storage space, the sensor is not backed up.
+- If you exceed the allocated storage space, the sensor isn't backed up.
- If you're backing up more than one sensor, the management console tries to retrieve sensor files for the managed sensors.
Sensor backup files are automatically named in the following format: `<sensor na
The **Sensor Backup Schedule** dialog box and the backup log automatically list information about backup successes and failures. Failures might occur because:
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: What's new in Microsoft Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 03/15/2022 Last updated : 03/22/2022 # What's new in Microsoft Defender for IoT?
The Defender for IoT sensor and on-premises management console update packages i
| Version | Date released | End support date |
|--|--|--|
-| 22.1.2 | 03/2022 | 11/2022 |
+| 22.1.3 | 03/2022 | 11/2022 |
| 22.1.1 | 02/2022 | 10/2022 |
| 10.5.5 | 12/2021 | 09/2022 |
| 10.5.4 | 12/2021 | 09/2022 |
The Defender for IoT sensor and on-premises management console update packages i
## March 2022
+- [Use Azure Monitor workbooks with Microsoft Defender for IoT](#use-azure-monitor-workbooks-with-microsoft-defender-for-iot-public-preview)
+- [IoT OT Threat Monitoring with Defender for IoT solution GA](#iot-ot-threat-monitoring-with-defender-for-iot-solution-ga)
- [Edit and delete devices from the Azure portal](#edit-and-delete-devices-from-the-azure-portal-public-preview)
- [Key state alert updates](#key-state-alert-updates-public-preview)
- [Sign out of a CLI session](#sign-out-of-a-cli-session)
+### Use Azure Monitor workbooks with Microsoft Defender for IoT (Public preview)
+
+[Azure Monitor workbooks](/azure/azure-monitor/visualize/workbooks-overview) provide graphs and dashboards that visually reflect your data, and are now available directly in Microsoft Defender for IoT with data from [Azure Resource Graph](/azure/governance/resource-graph/).
+
+In the Azure portal, use the new Defender for IoT **Workbooks** page to view workbooks created by Microsoft and provided out-of-the-box, or create custom workbooks of your own.
++
+For more information, see [Use Azure Monitor workbooks in Microsoft Defender for IoT](workbooks.md).
+
+### IoT OT Threat Monitoring with Defender for IoT solution GA
+
+The IoT OT Threat Monitoring with Defender for IoT solution in Microsoft Sentinel is now GA. Use this solution to help secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations.
+
+For more information, see [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) and [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
### Edit and delete devices from the Azure portal (Public preview)

The **Device inventory** page in the Azure portal now supports the ability to edit device details, such as security, classification, location, and more:
Starting in this version, CLI users are automatically signed out of their sessio
For more information, see [Work with Defender for IoT CLI commands](references-work-with-defender-for-iot-cli-commands.md).

-
## February 2022

- [New sensor installation wizard](#new-sensor-installation-wizard)
If you're on a legacy version, you may need to run a series of updates in order
After you've upgraded to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
-For more information, see [Update a standalone sensor version](how-to-manage-individual-sensors.md#update-a-standalone-sensor-version).
+For more information, see [Update a standalone sensor version](how-to-manage-individual-sensors.md#update-a-standalone-sensor-version) and [Update sensor versions from the on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md#update-sensor-versions).
> [!NOTE] > Upgrading to version 22.1.x is a large update, and you should expect the update process to require more time than previous updates.
defender-for-iot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/workbooks.md
+
+ Title: Use Azure Monitor workbooks in Microsoft Defender for IoT
+description: Learn how to view and create Azure Monitor workbooks for Defender for IoT data.
+ Last updated : 03/06/2022++
+# Use Azure Monitor workbooks in Microsoft Defender for IoT (Public preview)
+
+> [!IMPORTANT]
+>
+> The **Workbooks** page is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure Monitor workbooks provide graphs, charts, and dashboards that visually reflect data retrieved from Azure Resource Graph across your subscriptions, and are available directly in Microsoft Defender for IoT.
+
+In the Azure portal, use the Defender for IoT **Workbooks** page to view workbooks created by Microsoft and provided out-of-the-box, or created by customers and shared across the community.
+
+Each workbook graph or chart is based on an Azure Resource Graph (ARG) query running on your data. In Defender for IoT, you might use ARG queries to:
+
+- Gather sensor statuses
+- Identify new devices in your network
+- Find alerts related to specific IP addresses
+- Understand which alerts are seen by each sensor
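+
+If you want to prototype queries like these outside the portal, one option is the Az.ResourceGraph PowerShell module. The following is a minimal, hedged sketch, assuming the module is installed and you're signed in to Azure:
+
+```azurepowershell
+# Hedged sketch: run a Resource Graph query that lists sensor names and statuses.
+$query = @"
+iotsecurityresources
+| where type == 'microsoft.iotsecurity/sensors'
+| project Name = name, Status = properties.sensorStatus
+"@
+
+Search-AzGraph -Query $query
+```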
+
+## View workbooks
+
+To view out-of-the-box workbooks created by Microsoft, or other workbooks already saved to your subscription:
+
+1. In the Azure portal, go to **Defender for IoT** and select **Workbooks** on the left.
+
+ :::image type="content" source="media/release-notes/workbooks.png" alt-text="Screenshot of the new Workbooks page." lightbox="media/release-notes/workbooks.png":::
+
+1. Modify your filtering options if needed, and select a workbook to open it.
+
+Defender for IoT provides the following workbooks out-of-the-box:
+
+- **Sensor health**. Displays data about your sensor health, such as the sensor console software versions installed on your sensors.
+- **Alerts**. Displays data about alerts occurring on your sensors, including alerts by sensor, alert types, recent alerts generated, and more.
+- **Devices**. Displays data about your device inventory, including devices by vendor, subtype, and new devices identified.
++
+## Create custom workbooks
+
+Use the Defender for IoT **Workbooks** page to create custom Azure Monitor workbooks directly in Defender for IoT.
+
+1. On the **Workbooks** page, select **New**, or to start from another template, open the template workbook and select **Edit**.
+
+1. In your new workbook, select **Add**, and select the option you want to add to your workbook. If you're editing an existing workbook or template, select the options (**...**) button on the right to access the **Add** menu.
+
+ You can add any of the following elements to your workbook:
+
+ |Option |Description |
+ |||
+ |**Text** | Add text to describe the graphs shown on your workbook or any additional action required. |
+ |**Parameters** | Define parameters to use in your workbook text and queries. |
+ |**Links / tabs** | Add navigational elements to your workbook, including lists, links to other targets, extra tabs, or toolbars. |
+ |**Query** | Add a query to use when creating your workbook graphs and charts. <br><br>- Make sure to select **Azure Resource Graph** as your **Data source** and select all of your relevant subscriptions. <br>- Add a graphical representation for your data by selecting a type from the **Visualization** options. |
+ |**Metric** | Add metrics to use when creating workbook graphs and charts. |
+ |**Group** | Add groups to organize your workbooks into sub-areas. |
+ | | |
+
+ For each option, after you've defined all available settings, select the **Add...** or **Run...** button to create that workbook element. For example, **Add parameter** or **Run Query**.
+
+ > [!TIP]
+ > You can build your queries in the [Azure Resource Graph Explorer](https://ms.portal.azure.com/#blade/HubsExtension/ArgQueryBlade) and copy them into your workbook query.
+
+1. In the toolbar, select **Save** :::image type="icon" source="media/workbooks/save-icon.png" border="false"::: or **Save as** :::image type="icon" source="media/workbooks/save-as-icon.png" border="false"::: to save your workbook, and then select **Done editing**.
+
+1. Select **Workbooks** to go back to the main workbook page with the full workbook listing.
+
+### Reference parameters in your queries
+
+Once you've created a parameter, reference it in your query using the following syntax: `{ParameterName}`. For example:
+
+```kusto
+iotsecurityresources
+| where type == "microsoft.iotsecurity/sensors"
+| extend Name=name
+| extend Status= properties.sensorStatus
+| where Name=={SensorName}
+| project Name,Status
+```
+
+## Sample queries
+
+This section provides sample queries that are commonly used in Defender for IoT workbooks.
+
+### Alert queries
+
+**Distribution of alerts across sensors**
+
+```kusto
+iotsecurityresources
+| where type == "microsoft.iotsecurity/locations/devicegroups/alerts"
+| extend Sensor=properties.extendedProperties.SensorId
+| where properties.status!='Closed'
+| summarize Alerts=count() by tostring(Sensor)
+| sort by Alerts desc
+```
+
+**New alerts from the last 24 hours**
+
+```kusto
+iotsecurityresources
+| where type == "microsoft.iotsecurity/locations/devicegroups/alerts"
+| where properties.status!='Closed'
+| extend AlertTime=properties.startTimeUtc
+| extend Type=properties.displayName
+| where AlertTime > ago(1d)
+| project AlertTime, Type
+```
+
+**Alerts by source IP address**
+
+```kusto
+iotsecurityresources
+| where type == "microsoft.iotsecurity/locations/devicegroups/alerts"
+| extend Type=properties.displayName
+| extend Source_IP=properties.extendedProperties.SourceDeviceAddress
+| extend Destination_IP=properties.extendedProperties.DestinationDeviceAddress
+| where Source_IP=='192.168.10.1'
+| project Source_IP, Destination_IP, Type
+```
+
+### Device queries
+
+**OT device inventory by vendor**
+
+```kusto
+iotsecurityresources
+| extend Vendor= properties.hardware.vendor
+| where properties.deviceDataSource=='OtSensor'
+| summarize Devices=count() by tostring(Vendor)
+| sort by Devices
+```
+
+**OT device inventory by sub-type, such as PLC, embedded device, UPS, and so on**
+
+```kusto
+iotsecurityresources
+| where type == "microsoft.iotsecurity/locations/devicegroups/devices"
+| extend SubType=properties.deviceSubTypeDisplayName
+| summarize Devices=count() by tostring(SubType)
+| sort by Devices
+```
+
+**New OT devices by sensor, site, and IPv4 address**
+
+```kusto
+iotsecurityresources
+| where type == "microsoft.iotsecurity/locations/devicegroups/devices"
+| extend TimeFirstSeen=properties.firstSeen
+| where TimeFirstSeen > ago(1d)
+| extend DeviceName=properties.deviceName
+| extend Site=properties.sensor.site
+| extend Sensor=properties.sensor.name
+| extend IPv4=properties.nics[0].ipv4Address
+| where properties.deviceDataSource=='OtSensor'
+| project TimeFirstSeen, Site, Sensor, DeviceName, IPv4
+```
+
+**Summarize alerts by Purdue level**
+
+```kusto
+iotsecurityresources
+ | where type == "microsoft.iotsecurity/locations/devicegroups/alerts"
+ | project
+ resourceId = id,
+ affectedResource = tostring(properties.extendedProperties.DeviceResourceIds),
+ id = properties.systemAlertId
+ | join kind=leftouter (
+ iotsecurityresources | where type == "microsoft.iotsecurity/locations/devicegroups/devices"
+ | project
+ sensor = properties.sensor.name,
+ zone = properties.sensor.zone,
+ site = properties.sensor.site,
+ deviceProperties=properties,
+ affectedResource = tostring(id)
+ ) on affectedResource
+ | project-away affectedResource1
+ | where deviceProperties.deviceDataSource == 'OtSensor'
+ | summarize Alerts=count() by tostring(deviceProperties.purdueLevel)
+```
+
+## Next steps
+
+Learn more about viewing dashboards and reports on the sensor console:
+
+- [Run data mining queries](how-to-create-data-mining-queries.md)
+- [Risk assessment reporting](how-to-create-risk-assessment-reports.md)
+- [Create trends and statistics dashboards](how-to-create-trends-and-statistics-reports.md)
+
+Learn more about Azure Monitor workbooks and Azure Resource Graph:
+
+- [Azure Resource Graph documentation](/azure/governance/resource-graph/)
+- [Azure Monitor workbook documentation](/azure/azure-monitor/visualize/workbooks-overview)
+- [Kusto Query Language (KQL) documentation](/azure/data-explorer/kusto/query/)
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md
You can create a new IoT Hub for this purpose with Azure Digital Twins, or conne
You can also drive Azure Digital Twins from other data sources, using REST APIs or connectors to other services like [Logic Apps](../logic-apps/logic-apps-overview.md).
-### Output to ADX, Time Series Insights, storage, and analytics
+### Output data for storage and analytics
The data in your Azure Digital Twins model can be routed to downstream Azure services for more analytics or storage. This functionality is provided through *event routes*, which use [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), or [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) to drive your data flows. Some things you can do with event routes include:
-* Sending digital twin data to ADX for querying with the [Azure Digital Twins query plugin for Azure Data Explorer (ADX)](concepts-data-explorer-plugin.md)
+* Sending digital twin data to Azure Data Explorer for querying with the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md)
* [Connecting Azure Digital Twins to Time Series Insights](how-to-integrate-time-series-insights.md) to track time series history of each twin * Aligning a Time Series Model in Time Series Insights with a source in Azure Digital Twins * Storing Azure Digital Twins data in [Azure Data Lake](../storage/blobs/data-lake-storage-introduction.md)
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange/find-a-cloud-service/detail/microsoft-azure)** | Supported |Supported | Amsterdam2, Chennai, Dubai2, Frankfurt, Frankfurt2, Madrid, Marseille, Mumbai, Munich, New York, Singapore2 | | **[Devoli](https://devoli.com/expressroute)** | Supported |Supported | Auckland, Melbourne, Sydney | | **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt |
-| **[Deutsche Telekom AG](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt2 |
+| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-and-infrastructure/manage-it-efficiently/managed-azure/cloudconnect-for-azure)** | Supported |Supported | Frankfurt2 |
| **du datamena** |Supported |Supported | Dubai2 | | **[eir](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported |Dublin| | **[Epsilon Global Communications](https://www.epsilontel.com/solutions/direct-cloud-connect)** |Supported |Supported | Singapore, Singapore2 |
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-migrate.md
Previously updated : 02/22/2022 Last updated : 03/22/2022
Usage example:
> [!IMPORTANT] > The script doesn't migrate Threat Intelligence settings. You'll need to note those settings before proceeding and migrate them manually.
+This script requires the latest Azure PowerShell. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
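+
+As a minimal sketch (assuming access to the PowerShell Gallery), you can verify and update the module before running the migration script:
+
+```azurepowershell
+# List the installed Az module versions.
+Get-Module -ListAvailable Az | Select-Object Name, Version
+
+# Install, or update to, the latest Az module from the PowerShell Gallery.
+Install-Module -Name Az -Repository PSGallery -Force
+```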
+ ```azurepowershell <# .SYNOPSIS
frontdoor Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/web-application-firewall.md
+
+ Title: Web Application Firewall on Azure Front Door
+description: This article provides a list of the various features available with Web Application Firewall (WAF) on Azure Front Door.
++++ Last updated : 03/18/2022+++
+# Web Application Firewall (WAF) on Azure Front Door
+
+Azure Web Application Firewall (WAF) on Azure Front Door provides centralized protection for your web applications. WAF defends your web services against common exploits and vulnerabilities. It keeps your service highly available for your users and helps you meet compliance requirements. In this article, you'll learn about the different features of Azure Web Application Firewall on Azure Front Door. For more information, see [WAF on Azure Front Door](../web-application-firewall/afds/afds-overview.md).
++
+## Policy settings
+
+A Web Application Firewall (WAF) policy allows you to control access to your web applications by using a set of custom and managed rules. You can change the state of the policy or configure a specific mode for it. Depending on policy-level settings, you can choose to monitor incoming requests only, or to actively inspect them and take action against requests that match a rule. For more information, see [WAF policy settings](../web-application-firewall/afds/waf-front-door-policy-settings.md).
+
+## Managed rules
+
+Azure Front Door web application firewall (WAF) protects web applications from common vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Because these rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures. The Default Rule Set also includes Microsoft Threat Intelligence Collection rules, written in partnership with the Microsoft Intelligence team, that provide increased coverage, patches for specific vulnerabilities, and better false-positive reduction. For more information, see [WAF managed rules](../web-application-firewall/afds/waf-front-door-drs.md).
+
+> [!NOTE]
+> * Only Azure Front Door Premium and Azure Front Door (classic) support managed rules.
+> * Azure Front Door (classic) supports only DRS 1.1 or below.
+>
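+
+As an illustrative, hedged sketch (the policy and resource group names are placeholders, not values from this article), a managed rule set can be attached to a WAF policy with Azure PowerShell:
+
+```azurepowershell
+# Hedged sketch: enable the Microsoft Default Rule Set 1.0 on a new WAF policy.
+$managedRule = New-AzFrontDoorWafManagedRuleObject -Type DefaultRuleSet -Version 1.0
+
+New-AzFrontDoorWafPolicy `
+    -ResourceGroupName 'myResourceGroup' `
+    -Name 'myWafPolicy' `
+    -ManagedRule $managedRule `
+    -Mode Prevention `
+    -EnabledState Enabled
+```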
+
+## Custom rules
+
+Azure Web Application Firewall (WAF) with Front Door allows you to control access to your web applications based on the conditions you define. A custom WAF rule consists of a priority number, rule type, match conditions, and an action. There are two types of custom rules: match rules and rate limit rules. A match rule controls access based on a set of matching conditions, while a rate limit rule controls access based on matching conditions and the rate of incoming requests. For more information, see [WAF custom rules](../web-application-firewall/afds/waf-front-door-custom-rules.md).
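+
+As an illustrative, hedged sketch (the rule, policy, and resource group names are placeholders), a match rule can be built and attached to a policy with Azure PowerShell:
+
+```azurepowershell
+# Hedged sketch: block requests whose User-Agent header contains "evilbot".
+$matchCondition = New-AzFrontDoorWafMatchConditionObject `
+    -MatchVariable RequestHeader `
+    -OperatorProperty Contains `
+    -Selector 'UserAgent' `
+    -MatchValue 'evilbot'
+
+$customRule = New-AzFrontDoorWafCustomRuleObject `
+    -Name 'BlockEvilBot' `
+    -RuleType MatchRule `
+    -MatchCondition $matchCondition `
+    -Action Block `
+    -Priority 10
+
+# Attach the custom rule to a new WAF policy in prevention mode.
+New-AzFrontDoorWafPolicy `
+    -ResourceGroupName 'myResourceGroup' `
+    -Name 'myCustomWafPolicy' `
+    -Customrule $customRule `
+    -Mode Prevention `
+    -EnabledState Enabled
+```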
+
+## Exclusion lists
+
+Azure Web Application Firewall (WAF) can sometimes block requests that you want to allow for your application. A WAF exclusion list allows you to omit certain request attributes from a WAF evaluation and allow the rest of the request to be processed as normal. For more information, see [WAF exclusion lists](../web-application-firewall/afds/waf-front-door-exclusion.md).
+
+## Geo-filtering
+
+By default, Azure Front Door responds to all user requests regardless of where the request originates. Geo-filtering allows you to restrict access to your web application by countries/regions. For more information, see [WAF geo-filtering](../web-application-firewall/afds/waf-front-door-geo-filtering.md).
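+
+As a hedged sketch (the country code and rule name are illustrative), geo-filtering is expressed as a custom rule with a **GeoMatch** condition; negating the condition blocks traffic from everywhere except the listed countries/regions:
+
+```azurepowershell
+# Hedged sketch: block requests that don't originate from the United States.
+$geoCondition = New-AzFrontDoorWafMatchConditionObject `
+    -MatchVariable SocketAddr `
+    -OperatorProperty GeoMatch `
+    -MatchValue 'US' `
+    -NegateCondition $true
+
+$geoRule = New-AzFrontDoorWafCustomRuleObject `
+    -Name 'AllowUSOnly' `
+    -RuleType MatchRule `
+    -MatchCondition $geoCondition `
+    -Action Block `
+    -Priority 20
+```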
+
+## Bot Protection
+
+Azure Web Application Firewall (WAF) for Front Door provides bot rules to identify good bots and protect against bad bots. For more information, see [configure bot protection](../web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md).
+
+## IP restriction
+
+An IP address–based access control rule is a custom WAF rule that lets you control access to your web applications by specifying a list of IP addresses or IP address ranges. For more information, see [configure IP restriction](../web-application-firewall/afds/waf-front-door-configure-ip-restriction.md).
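+
+As a hedged sketch (the address ranges and rule name are illustrative), an IP restriction is a match rule on the **RemoteAddr** variable; negating the condition blocks every source address that isn't on the list:
+
+```azurepowershell
+# Hedged sketch: allow only the listed source addresses and block all other traffic.
+$ipCondition = New-AzFrontDoorWafMatchConditionObject `
+    -MatchVariable RemoteAddr `
+    -OperatorProperty IPMatch `
+    -MatchValue '203.0.113.0/24', '198.51.100.15' `
+    -NegateCondition $true
+
+$ipRule = New-AzFrontDoorWafCustomRuleObject `
+    -Name 'AllowListedIPsOnly' `
+    -RuleType MatchRule `
+    -MatchCondition $ipCondition `
+    -Action Block `
+    -Priority 30
+```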
+
+## Rate limiting
+
+A custom rate limit rule controls access based on matching conditions and the rates of incoming requests. For more information, see [configure rate limit](../web-application-firewall/afds/waf-front-door-rate-limit-powershell.md).
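+
+As a hedged sketch (the threshold, address range, and rule name are illustrative), a rate limit rule combines a match condition with **RateLimitThreshold** and **RateLimitDurationInMinutes**:
+
+```azurepowershell
+# Hedged sketch: block a client range that exceeds 1000 requests per minute.
+$rateCondition = New-AzFrontDoorWafMatchConditionObject `
+    -MatchVariable RemoteAddr `
+    -OperatorProperty IPMatch `
+    -MatchValue '198.51.100.0/24'
+
+$rateLimitRule = New-AzFrontDoorWafCustomRuleObject `
+    -Name 'RateLimitClients' `
+    -RuleType RateLimitRule `
+    -MatchCondition $rateCondition `
+    -RateLimitThreshold 1000 `
+    -RateLimitDurationInMinutes 1 `
+    -Action Block `
+    -Priority 40
+```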
+
+## Tuning
+
+The Microsoft-managed Default Rule Set is based on the [OWASP Core Rule Set (CRS)](https://github.com/SpiderLabs/owasp-modsecurity-crs/tree/v3.1/dev) and includes Microsoft Threat Intelligence Collection rules. Azure Web Application Firewall (WAF) lets you tune the WAF rules to suit the needs of your application and your organization's WAF requirements. Tuning features include defining rule exclusions, creating custom rules, and disabling rules. For more information, see [WAF Tuning](../web-application-firewall/afds/waf-front-door-tuning.md).
+
+## Monitoring and logging
+
+Azure Web Application Firewall (WAF) monitoring and logging are provided through integration with Azure Monitor and Azure Monitor Logs. For more information, see [Azure Web Application Firewall (WAF) logging and monitoring](../web-application-firewall/afds/waf-front-door-monitor.md).
+
+## Next steps
+
+* Learn how to [create and apply Web Application Firewall policy](../web-application-firewall/afds/waf-front-door-create-portal.md) to your Azure Front Door profile.
+* For more information, see [Web Application Firewall (WAF) FAQ](../web-application-firewall/afds/waf-faq.yml).
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
understand and accomplish remediation with Azure Policy.
When Azure Policy starts a template deployment when evaluating **deployIfNotExists** policies or modifies a resource when evaluating **modify** policies, it does so using a [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md) that is associated with the policy assignment.
-Policy assignments can either use a system assigned managed identity that is created by the policy service or a user assigned identity provided by the user. The managed identity needs to be granted the appropriate roles required for remediating resources
-to grant the managed identity. If the managed identity is missing roles, an error is displayed
+Policy assignments can either use a system assigned managed identity that is created by the policy service or a user assigned identity provided by the user. The managed identity needs to be assigned the minimum role(s) required to remediate resources.
+If the managed identity is missing roles, an error is displayed
during the assignment of the policy or an initiative. When using the portal, Azure Policy automatically grants the managed identity the listed roles once assignment starts. When using SDK, the roles must manually be granted to the managed identity. The _location_ of the managed identity
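+
+As a hedged sketch (the assignment name and scope are placeholders), granting a role to an assignment's system-assigned identity with Azure PowerShell might look like this:
+
+```azurepowershell
+# Hedged sketch: look up the policy assignment, then grant its identity a role at the same scope.
+$assignment = Get-AzPolicyAssignment -Name 'myPolicyAssignment' -Scope '/subscriptions/<subscription-id>'
+
+New-AzRoleAssignment `
+    -ObjectId $assignment.Identity.PrincipalId `
+    -RoleDefinitionName 'Contributor' `
+    -Scope '/subscriptions/<subscription-id>'
+```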
healthcare-apis Events Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md
Previously updated : 03/14/2022 Last updated : 03/21/2022 # Deploy Events in the Azure portal
-In this quickstart, you’ll learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send Fast Healthcare Interoperability Resources (FHIR®) event messages.
+In this quickstart, you’ll learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send Fast Healthcare Interoperability Resources (FHIR&#174;) event messages.
## Prerequisites It's important that you have the following prerequisites completed before you begin the steps of deploying the Events feature in Azure Health Data Services. * [An active Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc)
-* [Event Hubs namespace and an event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
-* [Workspace deployed in Azure Health Data Services](../healthcare-apis-quickstart.md)
-* [FHIR service deployed in Azure Health Data Services](../fhir/fhir-portal-quickstart.md)
+* [Microsoft Azure Event Hubs namespace and an event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
+* [Workspace deployed in Azure Health Data Services](../healthcare-apis-quickstart.md)
+* [FHIR service deployed in the workspace](../fhir/fhir-portal-quickstart.md)
+
+> [!IMPORTANT]
+> You will also need to make sure that the Microsoft.EventGrid resource provider has been successfully registered with your Azure subscription to deploy the Events feature. For more information, see [Azure resource providers and types - Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
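+
+As a minimal sketch (assuming Azure PowerShell and an authenticated session), you can check the provider's registration state and register it if needed:
+
+```azurepowershell
+# Check the registration state of the Microsoft.EventGrid resource provider.
+Get-AzResourceProvider -ProviderNamespace Microsoft.EventGrid |
+    Select-Object ProviderNamespace, RegistrationState
+
+# Register the provider if its state isn't 'Registered'.
+Register-AzResourceProvider -ProviderNamespace Microsoft.EventGrid
+```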
> [!NOTE]
-> For the purposes of this quickstart, we'll be using a basic set up and an event hub as the endpoint for Events messages.
+> For the purposes of this quickstart, we'll be using a basic Events set up and an event hub as the endpoint for Events messages. To learn how to deploy Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
## Deploy Events
-1. Browse to the Workspace that contains the FHIR service you want to send event messages from and select the **Events** blade.
+1. Browse to the workspace that contains the FHIR service you want to send Events messages from and select the **Events** button on the left-hand side of the portal.
- :::image type="content" source="media/events-deploy-in-portal/events-workspace-select.png" alt-text="Screenshot of Workspace and select Events button." lightbox="media/events-deploy-in-portal/events-workspace-select.png":::
+ :::image type="content" source="media/events-deploy-in-portal/events-workspace-select.png" alt-text="Screenshot of workspace and select Events button." lightbox="media/events-deploy-in-portal/events-workspace-select.png":::
2. Select **+ Event Subscription** to begin the creation of an event subscription.
- :::image type="content" source="media/events-deploy-in-portal/events-new-subscription-select.png" alt-text="Screenshot of Workspace and select events subscription button." lightbox="media/events-deploy-in-portal/events-new-subscription-select.png":::
-
-3. In the **Create Event Subscription** box, enter the following subscription information.
+ :::image type="content" source="media/events-deploy-in-portal/events-new-subscription-select.png" alt-text="Screenshot of workspace and select events subscription button." lightbox="media/events-deploy-in-portal/events-new-subscription-select.png":::
+
+3. In the **Create Event Subscription** box, enter the following subscription information.
* **Name**: Provide a name for your Events subscription.
+ * **System Topic Name**: Provide a name for your System Topic.
+
+ >[!NOTE]
+ > The first time you set up the Events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional Events subscriptions that you create within the workspace.
+ * **Event types**: Type of FHIR events to send messages for (for example: created, updated, and deleted).
- * **Endpoint Details**: Endpoint to send Events messages to (for example: an Event Hubs).
+ * **Endpoint Details**: Endpoint to send Events messages to (for example: an Azure Event Hubs namespace and an event hub).
>[!NOTE]
- > For the purposes of this quickstart, we'll use the **Event Schema** and the **Managed Identity Type** settings as their defaults.
+ > For the purposes of this quickstart, we'll use the **Event Schema** and the **Managed Identity Type** settings at their default values.
:::image type="content" source="media/events-deploy-in-portal/events-create-new-subscription.png" alt-text="Screenshot of the create event subscription box." lightbox="media/events-deploy-in-portal/events-create-new-subscription.png"::: 4. After the form is completed, select **Create** to begin the subscription creation.
-5. After provisioning a new Events subscription, event messages won't be sent until the System Topic deployment has successfully completed and the status of the Workspace has changed from "Updating" to "Succeeded".
+5. Event messages won't be sent until the Event Grid System Topic deployment has successfully completed. Upon successful creation of the Event Grid System Topic, the status of the workspace will change from "Updating" to "Succeeded".
:::image type="content" source="media/events-deploy-in-portal/events-new-subscription-create.png" alt-text="Screenshot of an events subscription being deployed" lightbox="media/events-deploy-in-portal/events-new-subscription-create.png"::: :::image type="content" source="media/events-deploy-in-portal/events-workspace-update.png" alt-text="Screenshot of an events subscription successfully deployed." lightbox="media/events-deploy-in-portal/events-workspace-update.png"::: - 6. After the subscription is deployed, it will require access to your message delivery endpoint. :::image type="content" source="media/events-deploy-in-portal/events-new-subscription-created.png" alt-text="Screenshot of a successfully deployed events subscription." lightbox="media/events-deploy-in-portal/events-new-subscription-created.png"::: >[!TIP]
- >For more information about providing access using an Azure Managed identity, see
- > - [Assign a system-managed identity to an Event Grid system topic](../../event-grid/enable-identity-system-topics.md)
- > - [Event delivery with a managed identity](../../event-grid/managed-service-identity.md)
+ >For more information about providing access using an Azure Managed identity, see [Assign a system-managed identity to an Event Grid system topic](../../event-grid/enable-identity-system-topics.md) and [Event delivery with a managed identity](../../event-grid/managed-service-identity.md)
>
- >For more information about managed identities, see
- > - [What are managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md)
+ >For more information about managed identities, see [What are managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md)
>
- >For more information about Azure role-based access control (Azure RBAC), see
- > - [What is Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
+ >For more information about Azure role-based access control (Azure RBAC), see [What is Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
## Next steps
To learn how to export Event Grid system diagnostic logs and metrics, see
>[!div class="nextstepaction"] >[How to export Events diagnostic logs and metrics](./events-display-metrics.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
A resource group is a logical container into which Azure resources are deployed
```azurepowershell-interactive
New-AzResourceGroup -Name "myResourceGroup" -Location "westus2"
+```
## Get your principal ID
kinect-dk Body Sdk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-sdk-download.md
description: Understand how to download each version of the Azure Kinect Sensor
ms.prod: kinect-dk Previously updated : 03/18/2021 Last updated : 03/21/2022 keywords: azure, kinect, sdk, download update, latest, available, install, body, tracking
If the command succeeds, the SDK is ready for use.
* [Feature] Added cmake support to all body tracking samples * [Feature] NuGet package returns. Developed new NuGet package that includes Microsoft developed body tracking dlls and headers, and ONNX runtime dependencies. The package no longer includes the NVIDIA CUDA and TRT dependencies. These continue to be included in the MSI package. * [Feature] Upgraded to ONNX Runtime v1.10. Recommended NVIDIA driver version is 472.12 (Game Ready) or 472.84 (Studio). There are OpenGL issues with later drivers.
+* [Bug Fix] CMake missing from offline_processor sample [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/890)
* [Bug Fix] CPU mode no longer requires NVIDIA CUDA dependencies [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1154)
-* [Bug Fix] Verified samples compile with Visual Studio 2022 and updated samples to use this release [Link] (https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1250)
-* [Bug Fix] Added const qualifier to APIs [Link] (https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1365)
-* [Bug Fix] Added check for nullptr handle in shutdown() [Link] (https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1373)
-* [Bug Fix] Improved dependencies checks [Link] (https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1510)
+* [Bug Fix] Verified samples compile with Visual Studio 2022 and updated samples to use this release [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1250)
+* [Bug Fix] Added const qualifier to APIs [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1365)
+* [Bug Fix] Added check for nullptr handle in shutdown() [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1373)
+* [Bug Fix] Improved dependencies checks [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1510)
* [Bug Fix] Updated REDIST.TXT file [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1541)
-* [Bug Fix] Improved DirectML performance [Link] (https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1546)
-* [Bug Fix] Fixed exception declaration in frame::get_body() [Link] (https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1573)
-* [Bug Fix] Fixed memory leak [Link] (https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1576)
-* [Bug Fix] Updated dependencies list [Link] (https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1644)
+* [Bug Fix] Improved DirectML performance [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1546)
+* [Bug Fix] Fixed exception declaration in frame::get_body() [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1573)
+* [Bug Fix] Fixed memory leak [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1576)
+* [Bug Fix] Updated dependencies list [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1644)
### v1.1.0 * [Feature] Add support for DirectML (Windows only) and TensorRT execution of pose estimation model. See FAQ on new execution environments.
kinect-dk Body Sdk Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-sdk-setup.md
description: In this quickstart, you will set up the body tracking SDK for Azure
ms.prod: kinect-dk Previously updated : 06/26/2019 Last updated : 03/15/2022 keywords: kinect, azure, sensor, access, depth, sdk, body, tracking, joint, setup, onnx, directml, cuda, trt, nvidia
kinect-dk Reset Azure Kinect Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/reset-azure-kinect-dk.md
ms.prod: kinect-dk Previously updated : 02/11/2020 Last updated : 03/15/2022 keywords: kinect, reset
kinect-dk Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/troubleshooting.md
description: Learn about some of the known issues and troubleshooting tips when
ms.prod: kinect-dk Previously updated : 03/05/2021 Last updated : 03/15/2022 keywords: troubleshooting, update, bug, kinect, feedback, recovery, logging, tips
load-balancer Quickstart Basic Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md
+
+ Title: "Quickstart: Create a basic internal load balancer - Azure portal"
+
+description: This quickstart shows how to create a basic internal load balancer by using the Azure portal.
++++ Last updated : 03/21/2022++
+#Customer intent: I want to create an internal load balancer so that I can load balance internal traffic to VMs.
++
+# Quickstart: Create a basic internal load balancer to load balance VMs using the Azure portal
+
+Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer and two virtual machines.
+
+>[!NOTE]
+>Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](../skus.md)**.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Sign in to Azure
+
+Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+
+## Create the virtual network
+
+When you create an internal load balancer, a virtual network is configured as the network for the load balancer.
+
+A private IP address in the virtual network is configured as the frontend for the load balancer. The frontend IP address can be **Static** or **Dynamic**.
+
+An Azure Bastion host is created to securely manage the virtual machines and install IIS.
+
+In this section, you'll create a virtual network, subnet, and Azure Bastion host.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
+
+2. In **Virtual networks**, select **+ Create**.
+
+3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **Create new**. </br> In **Name** enter **CreateIntLBQS-rg**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **West US 3** |
+
+4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+5. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
+
+6. Under **Subnet name**, select the word **default**.
+
+7. In **Edit subnet**, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **myBackendSubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
+
+8. Select **Save**.
+
+9. Select the **Security** tab.
+
+10. Under **BastionHost**, select **Enable**. Enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+
+11. Select the **Review + create** tab or select the **Review + create** button.
+
+12. Select **Create**.
+
+ > [!NOTE]
+ > The virtual network and subnet are created immediately. The Bastion host creation is submitted as a job and will complete within 10 minutes. You can proceed to the next steps while the Bastion host is created.
+
+## Create load balancer
+
+In this section, you create a load balancer that load balances virtual machines.
+
+During the creation of the load balancer, you'll configure:
+
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. In the **Load balancer** page, select **Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreateIntLBQS-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **West US 3**. |
+ | SKU | Select **Basic**. |
+ | Type | Select **Internal**. |
+
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
+
+6. Enter **myFrontend** in **Name**.
+
+7. Select **myVNet** in **Virtual network**.
+
+8. Select **myBackendSubnet** in **Subnet**.
+
+9. Select **Dynamic** for **Assignment**.
+
+10. Select **Add**.
+
+11. Select **Next: Backend pools** at the bottom of the page.
+
+12. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+13. Enter **myBackendPool** in **Name**.
+
+14. Select **Virtual machines** in **Associated to**.
+
+15. Select **IPv4** or **IPv6** for **IP version**.
+
+16. Select **Add**.
+
+17. Select the **Next: Inbound rules** button at the bottom of the page.
+
+18. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+
+19. In **Add load balancing rule**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | Floating IP | Select **Disabled**. |
+
+20. Select **Add**.
+
+21. Select the blue **Review + create** button at the bottom of the page.
+
+22. Select **Create**.
+
+## Create virtual machines
+
+In this section, you'll create two VMs (**myVM1** and **myVM2**) in an availability set.
+
+These VMs are added to the backend pool of the load balancer that was created earlier.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. In **Virtual machines**, select **+ Create** > **Azure virtual machine**.
+
+3. In **Create a virtual machine**, enter or select the values in the **Basics** tab:
+
+ | Setting | Value |
+ |--|-|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **CreateIntLBQS-rg** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1** |
+ | Region | Select **(US) West US 3** |
+ | Availability Options | Select **Availability set** |
+ | Availability set | Select **Create new**. </br> Enter **myAvailabilitySet** in **Name**. </br> Select **OK** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2** |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
+
+4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+5. In the Networking tab, select or enter:
+
+ | Setting | Value |
+ |-|-|
+ | **Network interface** | |
+ | Virtual network | **myVNet** |
+ | Subnet | **myBackendSubnet** |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced**|
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> In **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+ | **Load balancing** |
+ | Place this virtual machine behind an existing load-balancing solution? | Select the box. |
+ | **Load balancing settings** |
+ | Load-balancing options | Select **Azure load balancing** |
+ | Select a load balancer | Select **myLoadBalancer** |
+ | Select a backend pool | Select **myBackendPool** |
+
+6. Select **Review + create**.
+
+7. Review the settings, and then select **Create**.
+
+8. Follow the steps 1 through 7 to create one more VM with the following values and all the other settings the same as **myVM1**:
+
+ | Setting | VM 2 |
+ | - | -- |
+ | Name | **myVM2** |
+ | Availability set | Select the existing **myAvailabilitySet** |
+ | Network security group | Select the existing **myNSG** |
++
+## Create test virtual machine
+
+In this section, you'll create a VM named **myTestVM**. This VM will be used to test the load balancer configuration.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. In **Virtual machines**, select **+ Create** > **Azure virtual machine**.
+
+2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+
+ | Setting | Value |
+ |-- | - |
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **CreateIntLBQS-rg** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myTestVM** |
+ | Region | Select **(US) West US 3** |
+ | Availability Options | Select **No infrastructure redundancy required** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2** |
+ | Azure Spot instance | Leave the default of unselected. |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+4. In the **Networking** tab, select or enter:
+
+ | Setting | Value |
+ |-|-|
+ | **Network interface** | |
+ | Virtual network | **myVNet** |
+ | Subnet | **myBackendSubnet** |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced** |
+ | Configure network security group | Select **myNSG** created in the previous step. |
+
+5. Select **Review + create**.
+
+6. Review the settings, and then select **Create**.
+
+## Install IIS
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM1**.
+
+3. In the **Overview** page, select **Connect**, then **Bastion**.
+
+4. Enter the username and password entered during VM creation.
+
+5. Select **Connect**.
+
+6. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell** > **Windows PowerShell**.
+
+7. In the PowerShell Window, execute the following commands to:
+
+ * Install the IIS server.
+ * Remove the default iisstart.htm file.
+ * Add a new iisstart.htm file that displays the name of the VM.
+
+ ```powershell
+
+ # Install IIS server role
+ Install-WindowsFeature -name Web-Server -IncludeManagementTools
+
+ # Remove default htm file
+ Remove-Item C:\inetpub\wwwroot\iisstart.htm
+
+ # Add a new htm file that displays server name
+ Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
+ ```
+
+8. Close the Bastion session with **myVM1**.
+
+9. Repeat steps 1 through 8 to install IIS and the updated iisstart.htm file on **myVM2**.
+
+## Test the load balancer
+
+In this section, you'll test the load balancer by connecting to the **myTestVM** and verifying the webpage.
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. Select **myLoadBalancer**.
+
+3. Make a note of, or copy, the address next to **Private IP address** in the **Overview** of **myLoadBalancer**. If you can't see the **Private IP address** field, select **See more** in the information window.
+
+4. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+5. Select **myTestVM**.
+
+6. In the **Overview** page, select **Connect**, then **Bastion**.
+
+7. Enter the username and password entered during VM creation.
+
+8. Open **Internet Explorer** on **myTestVM**.
+
+9. Enter the IP address from the previous step (in this example, **10.1.0.4**) into the address bar of the browser. The custom page, displaying one of the backend server names, is displayed in the browser.
+
+To see the load balancer distribute traffic across both VMs, you can force-refresh your web browser from the client machine.
+
+## Clean up resources
+
+When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the resource group **CreateIntLBQS-rg** that contains the resources and then select **Delete**.
+
+## Next steps
+
+In this quickstart, you:
+
+* Created an internal Azure Load Balancer
+
+* Attached 2 VMs to the load balancer
+
+* Configured the load balancer traffic rule, health probe, and then tested the load balancer
+
+To learn more about Azure Load Balancer, continue to:
+> [!div class="nextstepaction"]
+> [What is Azure Load Balancer?](../load-balancer-overview.md)
load-balancer Quickstart Basic Public Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-powershell.md
+
+ Title: 'Quickstart: Create a basic public load balancer - Azure PowerShell'
+
+description: This quickstart shows how to create a basic public load balancer using Azure PowerShell
++ Last updated : 03/22/2022+++
+#Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
++
+# Quickstart: Create a basic public load balancer to load balance VMs using Azure PowerShell
+
+Get started with Azure Load Balancer by using Azure PowerShell to create a public load balancer and two virtual machines.
+
+>[!NOTE]
+>Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](../skus.md)**.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+
+- Azure PowerShell installed locally or Azure Cloud Shell
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+## Create a resource group
+
+An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
+
+```azurepowershell-interactive
+$rg = @{
+ Name = 'CreatePubLBQS-rg'
+ Location = 'westus3'
+}
+New-AzResourceGroup @rg
+```
+
+## Create a public IP address
+
+Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a public IP address.
+
+```azurepowershell-interactive
+$publicip = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Location = 'westus3'
+ Sku = 'Basic'
+ AllocationMethod = 'static'
+}
+New-AzPublicIpAddress @publicip
+```
+
+## Create a load balancer
+
+This section details how you can create and configure the following components of the load balancer:
+
+* Create a front-end IP with [New-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/new-azloadbalancerfrontendipconfig) for the frontend IP pool. This IP receives the incoming traffic on the load balancer
+
+* Create a back-end address pool with [New-AzLoadBalancerBackendAddressPoolConfig](/powershell/module/az.network/new-azloadbalancerbackendaddresspoolconfig) for traffic sent from the frontend of the load balancer. This pool is where your backend virtual machines are deployed
+
+* Create a health probe with [Add-AzLoadBalancerProbeConfig](/powershell/module/az.network/add-azloadbalancerprobeconfig) that determines the health of the backend VM instances
+
+* Create a load balancer rule with [Add-AzLoadBalancerRuleConfig](/powershell/module/az.network/add-azloadbalancerruleconfig) that defines how traffic is distributed to the VMs
+
+* Create a public load balancer with [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer)
+
+```azurepowershell-interactive
+## Place public IP created in previous steps into variable. ##
+$pip = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+}
+$publicIp = Get-AzPublicIpAddress @pip
+
+## Create load balancer frontend configuration and place in variable. ##
+$fip = @{
+ Name = 'myFrontEnd'
+ PublicIpAddress = $publicIp
+}
+$feip = New-AzLoadBalancerFrontendIpConfig @fip
+
+## Create backend address pool configuration and place in variable. ##
+$bepool = New-AzLoadBalancerBackendAddressPoolConfig -Name 'myBackEndPool'
+
+## Create the health probe and place in variable. ##
+$probe = @{
+ Name = 'myHealthProbe'
+ Protocol = 'tcp'
+ Port = '80'
+ IntervalInSeconds = '360'
+ ProbeCount = '5'
+}
+$healthprobe = New-AzLoadBalancerProbeConfig @probe
+
+## Create the load balancer rule and place in variable. ##
+$lbrule = @{
+ Name = 'myHTTPRule'
+ Protocol = 'tcp'
+ FrontendPort = '80'
+ BackendPort = '80'
+ IdleTimeoutInMinutes = '15'
+ FrontendIpConfiguration = $feip
+ BackendAddressPool = $bePool
+}
+$rule = New-AzLoadBalancerRuleConfig @lbrule
+
+## Create the load balancer resource. ##
+$loadbalancer = @{
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Name = 'myLoadBalancer'
+ Location = 'westus3'
+ Sku = 'Basic'
+ FrontendIpConfiguration = $feip
+ BackendAddressPool = $bePool
+ LoadBalancingRule = $rule
+ Probe = $healthprobe
+}
+New-AzLoadBalancer @loadbalancer
+```
+
+## Configure virtual network
+
+Before you deploy VMs and test your load balancer, create the supporting virtual network resources.
+
+Create a virtual network for the backend virtual machines.
+
+Create a network security group to define inbound connections to your virtual network.
+
+Create an Azure Bastion host to securely manage the virtual machines in the backend pool.
+
+### Create virtual network, network security group, and bastion host
+
+* Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
+
+* Create a network security group rule with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig)
+
+* Create an Azure Bastion host with [New-AzBastion](/powershell/module/az.network/new-azbastion)
+
+* Create a network security group with [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup)
+
+```azurepowershell-interactive
+## Create backend subnet config ##
+$subnet = @{
+ Name = 'myBackendSubnet'
+ AddressPrefix = '10.1.0.0/24'
+}
+$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet
+
+## Create Azure Bastion subnet. ##
+$bastsubnet = @{
+ Name = 'AzureBastionSubnet'
+ AddressPrefix = '10.1.1.0/27'
+}
+$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig @bastsubnet
+
+## Create the virtual network ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Location = 'westus3'
+ AddressPrefix = '10.1.0.0/16'
+ Subnet = $subnetConfig,$bastsubnetConfig
+}
+$vnet = New-AzVirtualNetwork @net
+
+## Create public IP address for bastion host. ##
+$ip = @{
+ Name = 'myBastionIP'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Location = 'westus3'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+}
+$publicip = New-AzPublicIpAddress @ip
+
+## Create bastion host ##
+$bastion = @{
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Name = 'myBastion'
+ PublicIpAddress = $publicip
+ VirtualNetwork = $vnet
+}
+New-AzBastion @bastion -AsJob
+
+## Create rule for network security group and place in variable. ##
+$nsgrule = @{
+ Name = 'myNSGRuleHTTP'
+ Description = 'Allow HTTP'
+ Protocol = '*'
+ SourcePortRange = '*'
+ DestinationPortRange = '80'
+ SourceAddressPrefix = 'Internet'
+ DestinationAddressPrefix = '*'
+ Access = 'Allow'
+ Priority = '2000'
+ Direction = 'Inbound'
+}
+$rule1 = New-AzNetworkSecurityRuleConfig @nsgrule
+
+## Create network security group ##
+$nsg = @{
+ Name = 'myNSG'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Location = 'westus3'
+ SecurityRules = $rule1
+}
+New-AzNetworkSecurityGroup @nsg
+```
+
+## Create virtual machines
+
+In this section, you'll create the two virtual machines for the backend pool of the load balancer.
+
+* Create two network interfaces with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
+
+* Set an administrator username and password for the VMs with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)
+
+* Use [New-AzAvailabilitySet](/powershell/module/az.compute/new-azavailabilityset) to create an availability set for the virtual machines.
+
+* Create the virtual machines with:
+
+ * [New-AzVM](/powershell/module/az.compute/new-azvm)
+
+ * [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
+
+ * [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
+
+ * [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
+
+ * [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+
+```azurepowershell-interactive
+## Set the administrator username and password for the VMs. ##
+$cred = Get-Credential
+
+## Place the virtual network into a variable. ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+## Place the load balancer into a variable. ##
+$lb = @{
+ Name = 'myLoadBalancer'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+}
+$bepool = Get-AzLoadBalancer @lb | Get-AzLoadBalancerBackendAddressPoolConfig
+
+## Place the network security group into a variable. ##
+$ns = @{
+ Name = 'myNSG'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+}
+$nsg = Get-AzNetworkSecurityGroup @ns
+
+## Create availability set for the virtual machines. ##
+$set = @{
+ Name = 'myAvSet'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Location = 'westus3'
+ Sku = 'Aligned'
+ PlatformFaultDomainCount = '2'
+ PlatformUpdateDomainCount = '2'
+}
+$avs = New-AzAvailabilitySet @set
+
+## For loop with variable to create virtual machines for load balancer backend pool. ##
+for ($i=1; $i -le 2; $i++)
+{
+ ## Command to create network interface for VMs ##
+ $nic = @{
+ Name = "myNicVM$i"
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Location = 'westus3'
+ Subnet = $vnet.Subnets[0]
+ NetworkSecurityGroup = $nsg
+ LoadBalancerBackendAddressPool = $bepool
+ }
+ $nicVM = New-AzNetworkInterface @nic
+
+ ## Create a virtual machine configuration for VMs ##
+ $vmsz = @{
+ VMName = "myVM$i"
+ VMSize = 'Standard_DS1_v2'
+ AvailabilitySetId = $avs.Id
+ }
+ $vmos = @{
+ ComputerName = "myVM$i"
+ Credential = $cred
+ }
+ $vmimage = @{
+ PublisherName = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Skus = '2019-Datacenter'
+ Version = 'latest'
+ }
+ $vmConfig = New-AzVMConfig @vmsz `
+ | Set-AzVMOperatingSystem @vmos -Windows `
+ | Set-AzVMSourceImage @vmimage `
+ | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+ ## Create the virtual machine for VMs ##
+ $vm = @{
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Location = 'westus3'
+ VM = $vmConfig
+ }
+ New-AzVM @vm -AsJob
+}
+```
+
+The deployments of the virtual machines and bastion host are submitted as PowerShell jobs. To view the status of the jobs, use [Get-Job](/powershell/module/microsoft.powershell.core/get-job):
+
+```azurepowershell-interactive
+Get-Job
+
+Id     Name                PSJobTypeName     State       HasMoreData     Location      Command
+--     ----                -------------     -----       -----------     --------      -------
+1      Long Running O…     AzureLongRunni…   Completed   True            localhost     New-AzBastion
+2      Long Running O…     AzureLongRunni…   Completed   True            localhost     New-AzVM
+3      Long Running O…     AzureLongRunni…   Completed   True            localhost     New-AzVM
+```
+
+Ensure the **State** of the VM creation is **Completed** before moving on to the next steps.
+
+## Install IIS
+
+Use [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) to install the Custom Script Extension.
+
+The extension runs `PowerShell Add-WindowsFeature Web-Server` to install the IIS webserver and then updates the Default.htm page to show the hostname of the VM:
+
+> [!IMPORTANT]
+> Ensure the virtual machine deployments have completed from the previous steps before proceeding. Use `Get-Job` to check the status of the virtual machine deployment jobs.
+
+```azurepowershell-interactive
+## For loop with variable to install custom script extension on virtual machines. ##
+for ($i=1; $i -le 2; $i++)
+{
+$ext = @{
+ Publisher = 'Microsoft.Compute'
+ ExtensionType = 'CustomScriptExtension'
+ ExtensionName = 'IIS'
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ VMName = "myVM$i"
+ Location = 'westus3'
+ TypeHandlerVersion = '1.8'
+ SettingString = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
+}
+Set-AzVMExtension @ext -AsJob
+}
+```
+
+The extensions are deployed as PowerShell jobs. To view the status of the installation jobs, use [Get-Job](/powershell/module/microsoft.powershell.core/get-job):
+
+```azurepowershell-interactive
+Get-Job
+
+Id Name PSJobTypeName State HasMoreData Location Command
+-- ----             -------------   -----     ----------- --------  -------
+8 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
+9 Long Running O… AzureLongRunni… Running True localhost Set-AzVMExtension
+```
+
+Ensure the **State** of the jobs is **Completed** before moving on to the next steps.
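+
+To inspect the output of a completed or failed extension job, you can use the built-in [Receive-Job](/powershell/module/microsoft.powershell.core/receive-job) cmdlet. A minimal sketch:
+
+```azurepowershell-interactive
+## Display job output while keeping it available for later reads. ##
+Get-Job | Receive-Job -Keep
+```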
+
+## Test the load balancer
+
+Use [Get-AzPublicIpAddress](/powershell/module/az.network/get-azpublicipaddress) to get the public IP address of the load balancer:
+
+```azurepowershell-interactive
+$ip = @{
+ ResourceGroupName = 'CreatePubLBQS-rg'
+ Name = 'myPublicIP'
+}
+Get-AzPublicIPAddress @ip | select IpAddress
+```
+
+Copy the public IP address, and then paste it into the address bar of your browser. The default page of IIS Web server is displayed on the browser.
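+
+You can also test from the command line. The following sketch reuses the `$ip` parameters from the previous step and assumes the VMs respond on port 80:
+
+```azurepowershell-interactive
+## Request the default IIS page through the load balancer frontend. ##
+$publicIp = (Get-AzPublicIPAddress @ip).IpAddress
+(Invoke-WebRequest -Uri "http://$publicIp").Content
+```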
+
+## Clean up resources
+
+When no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, load balancer, and the remaining resources.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name 'CreatePubLBQS-rg'
+```
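+
+If you want to skip the confirmation prompt, `Remove-AzResourceGroup` also supports the `-Force` switch:
+
+```azurepowershell-interactive
+## Remove the resource group without prompting for confirmation. ##
+Remove-AzResourceGroup -Name 'CreatePubLBQS-rg' -Force
+```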
+
+## Next steps
+
+In this quickstart, you:
+
+* Created an Azure Load Balancer
+
+* Attached 2 VMs to the load balancer
+
+* Tested the load balancer
+
+To learn more about Azure Load Balancer, continue to:
+> [!div class="nextstepaction"]
+> [What is Azure Load Balancer?](../load-balancer-overview.md)
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
Previously updated : 02/28/2022 Last updated : 03/18/2022 tags: connectors
This article explains how you can access your SAP resources from Azure Logic App
* If you're running your logic app workflow in a Premium-level [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), review the [ISE prerequisites](#ise-prerequisites).
-* An [SAP application server](https://wiki.scn.sap.com/wiki/display/ABAP/ABAP+Application+Server) or [SAP message server](https://help.sap.com/saphelp_nw70/helpdata/en/40/c235c15ab7468bb31599cc759179ef/frameset.htm) that you want to access from Azure Logic Apps. For information about the SAP servers that support this connector, review [SAP compatibility](#sap-compatibility).
+* An [SAP Application server](https://wiki.scn.sap.com/wiki/display/ABAP/ABAP+Application+Server) or [SAP Message server](https://help.sap.com/saphelp_nw70/helpdata/en/40/c235c15ab7468bb31599cc759179ef/frameset.htm) that you want to access from Azure Logic Apps. For information about the SAP servers that support this connector, review [SAP compatibility](#sap-compatibility).
> [!IMPORTANT] > Make sure that you set up your SAP server and user account to allow using RFC. For more information, which includes the supported
The SAP connector is compatible with the following types of SAP systems:
* Classic on-premises SAP systems, such as R/3 and ECC.
+SAP must support the SAP system version that you want to connect to. Otherwise, any issues that you encounter might not be resolvable. For more information about SAP system versions and their maintenance status, review the [SAP Product Availability Matrix (PAM)](http://support.sap.com/pam).
+ The SAP connector supports the following message and data integration types from SAP NetWeaver-based systems: * Intermediate Document (IDoc)
The SAP connector uses the [SAP .NET Connector (NCo) library](https://support.sa
To use the available [SAP trigger](#triggers) and [SAP actions](#actions), you need to first authenticate your connection. You can authenticate your connection with a username and password. The SAP connector also supports [SAP Secure Network Communications (SNC)](https://help.sap.com/doc/saphelp_nw70/7.0.31/e6/56f466e99a11d1a5b00000e835363f/content.htm?no_cache=true) for authentication. You can use SNC for SAP NetWeaver single sign-on (SSO), or for additional security capabilities from external products. If you use SNC, review the [SNC prerequisites](#snc-prerequisites) and the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
+### Network prerequisites
+
+The SAP system requires network connectivity from the host of the SAP .NET Connector (NCo) library. The multi-tenant host of the SAP .NET Connector (NCo) library is the on-premises data gateway. If you use an on-premises data gateway cluster, all nodes of the cluster require network connectivity to the SAP system. The ISE host of the SAP .NET Connector (NCo) library is within the ISE virtual network.
+
+The SAP system-required network connectivity includes the following servers and services:
+
+* SAP Application Server, Dispatcher service (for all Logon types)
+
+ Your SAP system can include multiple SAP Application Servers. The host of the SAP .NET Connector (NCo) library requires access to each server and its services.
+
+* SAP Message Server, Message service (for Logon type Group)
+
+ The Message Server and its service redirect to the Dispatcher services of one or more Application Servers. The host of the SAP .NET Connector (NCo) library requires access to each server and its services.
+
+* SAP Gateway Server, Gateway service
+
+* SAP Gateway Server, Gateway secured service
+
+ The SAP system-required network connectivity also includes this server and service for use with Secure Network Communications (SNC).
+
+Redirection of requests from Application Server, Dispatcher service to Gateway Server, Gateway service occurs automatically within the SAP .NET Connector (NCo) library. This redirection occurs even if only the Application Server, Dispatcher service information is provided in the connection parameters.
+
+If you're using a load balancer in front of your SAP system, all the services must be redirected to their respective servers.
+
+For more information about SAP services and ports, review the [TCP/IP Ports of All SAP Products](https://help.sap.com/viewer/ports).
+
+> [!NOTE]
+ > Make sure that you've enabled network connectivity from the host of the SAP .NET Connector (NCo) library and that
+ > the required ports are open on firewalls and network security groups. Otherwise, you get errors such as
+ > **partner not reached** from component **NI (network interface)** and additional error text such as **WSAECONNREFUSED: Connection refused**.
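+
+To verify reachability from the host of the SAP .NET Connector (NCo) library before you create a connection, you can use the built-in `Test-NetConnection` cmdlet. This check is a sketch only; the host name `sapserver` and the Dispatcher port `3200` (instance number 00) are placeholder assumptions:
+
+```powershell
+# Check that the SAP Dispatcher service port is reachable from this host.
+Test-NetConnection -ComputerName 'sapserver' -Port 3200
+```
+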
+ ### Migrate to current connector

The previous SAP Application Server and SAP Message server connectors were deprecated February 29, 2020. To migrate to the current SAP connector, follow these steps:
The managed SAP connector integrates with SAP systems through your [on-premises
1. Configure the network host names and service names resolution for the host machine where you installed the on-premises data gateway.
- If you intend to use host names or service names for connection from Azure Logic Apps, you must set up each SAP application, message, and gateway server along with their services for name resolution. The network host name resolution is configured in the `%windir%\System32\drivers\etc\hosts` file or in the DNS server that's available to your on-premises data gateway host machine. The service name resolution is configured in `%windir%\System32\drivers\etc\services`. If you do not intend to use network host names or service names for the connection, you can use host IP addresses and service port numbers instead.
+ If you intend to use the host names or service names for connections from Azure Logic Apps, you have to set up name resolution for each SAP Application, Message, and Gateway server along with their services:
+
+ * Set up the network host name resolution in the **%windir%\System32\drivers\etc\hosts** file or in the DNS server that's available to your on-premises data gateway host machine.
+
+ * Set up the service name resolution in the **%windir%\System32\drivers\etc\services** file.
+
+ If you don't intend to use network host names or service names for the connection, you can use host IP addresses and service port numbers instead.
- If you do not have a DNS entry for your SAP system, the following example shows a sample entry for the hosts file:
+ If you don't have a DNS entry for your SAP system, the following example shows a sample entry for the hosts file:
```text
10.0.1.9 sapserver # SAP single-instance system host IP by simple computer name
```
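
Similarly, if the service names aren't already defined, the following sample shows entries for the services file. The instance number `00` is an assumption for illustration; substitute your SAP instance number:

```text
sapdp00    3200/tcp    # SAP Dispatcher service
sapgw00    3300/tcp    # SAP Gateway service
sapgw00s   4800/tcp    # SAP Gateway secured (SNC) service
```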
An ISE provides access to resources that are protected by an Azure virtual netwo
1. Select **Create** to finish creating your ISE connector.
-1. If your SAP instance and ISE are in different virtual networks, you also need to [peer those networks](../virtual-network/tutorial-connect-virtual-networks-portal.md) so they are connected. Also review the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
+1. If your SAP instance and ISE are in different virtual networks, you also need to [peer those networks](../virtual-network/tutorial-connect-virtual-networks-portal.md) so they're connected. Also review the [SNC prerequisites for the ISE connector](#snc-prerequisites-ise).
-1. Get the IP addresses for the SAP application, message, and gateway servers that you plan to use for connecting from your logic app workflow. Network name resolution is not available for SAP connections in an ISE.
+1. Get the IP addresses for the SAP Application, Message, and Gateway servers that you plan to use for connecting from your logic app workflow. Network name resolution isn't available for SAP connections in an ISE.
-1. Get the port numbers for the SAP application, message, and gateway services that you plan you will use for connection with Logic App. Service name resolution is not available for SAP connector in ISE.
+1. Get the port numbers for the SAP Application, Message, and Gateway services that you plan to use for connections from your logic app workflow. Service name resolution isn't available for the SAP connector in an ISE.
### SAP client library prerequisites
The ISE version of the SAP connector supports SNC X.509. You can enable SNC for
> When you delete an old connector, you can still keep the logic app workflows that use this connector. After you redeploy > the connector, you can then authenticate the new connection in your SAP triggers and actions in these logic app workflows.
-First, if you have already deployed the SAP connector without the SNC or SAPGENPSE libraries, delete all the connections and the connector.
+First, if you've already deployed the SAP connector without the SNC or SAPGENPSE libraries, delete all the connections and the connector.
1. Sign in to the [Azure portal](https://portal.azure.com).
Next, deploy or redeploy the SAP connector in your ISE:
1. Copy all SNC, SAPGENPSE, and NCo libraries to the root folder of your zip archive. Don't put these binaries in subfolders.
- 1. You must use the 64-bit SNC library. There is no 32-bit support.
+ 1. You must use the 64-bit SNC library. There's no 32-bit support.
 1. Your SNC library and its dependencies must be compatible with your SAP environment. To check compatibility, see the [ISE prerequisites](#ise-prerequisites).
Last, create new connections that use SNC in all your logic apps that use the SA
Output written to connectionInput.txt ```
- If the output path parameter is not provided, the script's output to the console will have line breaks. Remove the line breaks of the base 64-encoded string for the connection input parameter.
+ If the output path parameter isn't provided, the script's output to the console will have line breaks. Remove the line breaks of the base 64-encoded string for the connection input parameter.
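+
+ You can strip those line breaks in PowerShell. A minimal sketch, assuming the console output was saved to a file (the name `output.txt` is hypothetical):
+
+ ```powershell
+ # Read the raw output and remove all line breaks to get a single-line base64 string.
+ (Get-Content -Path 'output.txt' -Raw) -replace "`r?`n", ''
+ ```
+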
> [!NOTE] > If you're using more than one SNC client certificate for your ISE, you must provide the same PSE for all connections.
Next, create an action to send your IDoc message to SAP when your [Request trigg
* For **Application Server**, these properties, which usually appear optional, are required:
- ![Screenshot that shows how to create SAP application server connection.](./media/logic-apps-using-sap-connector/create-SAP-application-server-connection.png)
+ ![Screenshot that shows how to create SAP Application server connection.](./media/logic-apps-using-sap-connector/create-SAP-application-server-connection.png)
* For **Group**, these properties, which usually appear optional, are required:
- ![Screenshot that shows how to create SAP message server connection.](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
+ ![Screenshot that shows how to create SAP Message server connection.](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
In SAP, the Logon Group is maintained by opening the **CCMS: Maintain Logon Groups** (T-Code SMLG) dialog box. For more information, review [SAP Note 26317 - Set up for LOGON group for automatic load balancing](https://service.sap.com/sap/support/notes/26317).
Next, create an action to send your IDoc message to SAP when your [Request trigg
1. Now find and select an action from your SAP server.
- 1. In the **SAP Action** box, select the folder icon. From the file list, find and select the SAP message you want to use. To navigate the list, use the arrows.
+ 1. In the **SAP Action** box, select the folder icon. From the file list, find and select the SAP message you want to use. To navigate the list, use the arrows.
This example selects an IDoc with the **Orders** type.
This example uses a logic app workflow that triggers when the app receives a mes
* For **Application Server**, these properties, which usually appear optional, are required:
- ![Screenshot that shows creating a connection to SAP application server.](./media/logic-apps-using-sap-connector/create-SAP-application-server-connection.png)
+ ![Screenshot that shows creating a connection to SAP Application server.](./media/logic-apps-using-sap-connector/create-SAP-application-server-connection.png)
* For **Group**, these properties, which usually appear optional, are required:
- ![Screenshot that shows creating a connection to SAP message server](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
+ ![Screenshot that shows creating a connection to SAP Message server](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
By default, strong typing is used to check for invalid values by performing XML validation against the schema. This behavior can help you detect issues earlier. The **Safe Typing** option is available for backward compatibility and only checks the string length. Learn more about the [Safe Typing option](#safe-typing).
If you receive a **500 Bad Gateway** or **400 Bad Request** error with a message
sapgw00 3300/tcp ```
-You might get a similar error when SAP application server or message server name resolves to the IP address. For ISE, you must specify the IP address for your SAP application server or message server. For the on-premises data gateway, you can instead add the name to the IP address mapping in `%windir%\System32\drivers\etc\hosts`, for example:
+You might get a similar error when SAP Application server or Message server name resolves to the IP address. For ISE, you must specify the IP address for your SAP Application server or Message server. For the on-premises data gateway, you can instead add the name to the IP address mapping in `%windir%\System32\drivers\etc\hosts`, for example:
```text 10.0.1.9 SAPDBSERVER01 # SAP System Server VPN IP by computer name
If you're receiving this error message and experience intermittent failures call
#### The segment or group definition E2EDK36001 was not found in the IDoc meta
-This error message means expected failures happen with other errors. For example, the failure to generate an IDoc XML payload because its segments are not released by SAP. As a result, the segment type metadata required for conversion is missing.
+This error message means that expected failures happen along with other errors. For example, an IDoc XML payload can fail to generate because its segments aren't released by SAP. As a result, the segment type metadata required for conversion is missing.
To have these segments released by SAP, contact the ABAP engineer for your SAP system.
Here's an example that shows how to extract individual IDocs from a packet by us
* **202 Accepted**, which means the request has been accepted for processing but the processing isn't complete yet.
- * **204 No Content**, which means the server has successfully fulfilled the request and there is no additional content to send in the response payload body.
+ * **204 No Content**, which means the server has successfully fulfilled the request and there's no additional content to send in the response payload body.
* **200 OK**. This status code always contains a payload, even if the server generates a payload body of zero length.
You can begin your XML schema with an optional XML prolog. The SAP connector wor
#### XML samples for RFC requests
-The following example is a basic RFC call. The RFC name is `STFC_CONNECTION`. This request uses the default namespace `xmlns=`, however, you can assign and use namespace aliases such as `xmmlns:exampleAlias=`. The namespace value is the namespace for all RFCs in SAP for Microsoft services. There is a simple input parameter in the request, `<REQUTEXT>`.
+The following example is a basic RFC call. The RFC name is `STFC_CONNECTION`. This request uses the default namespace `xmlns=`. However, you can assign and use namespace aliases such as `xmlns:exampleAlias=`. The namespace value is the namespace for all RFCs in SAP for Microsoft services. There's a simple input parameter in the request, `<REQUTEXT>`.
```xml <STFC_CONNECTION xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
This example declares the root node and namespaces. The URI in the sample code,
<ns0:idocData> ```
-You can repeat the `idocData` node to send a batch of IDocs in a single call. In the example below, there is one control record, `EDI_DC40`, and multiple data records.
+You can repeat the `idocData` node to send a batch of IDocs in a single call. In the example below, there's one control record, `EDI_DC40`, and multiple data records.
```xml <...>
The following example is an alternative method to set the transaction identifier
* For **Application Server**, these properties, which usually appear optional, are required:
- ![Screenshot that shows creating a connection for SAP application server](./media/logic-apps-using-sap-connector/create-SAP-application-server-connection.png)
+ ![Screenshot that shows creating a connection for SAP Application server](./media/logic-apps-using-sap-connector/create-SAP-application-server-connection.png)
* For **Group**, these properties, which usually appear optional, are required:
- ![Screenshot that shows creating a connection for SAP message server](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
+ ![Screenshot that shows creating a connection for SAP Message server](./media/logic-apps-using-sap-connector/create-SAP-message-server-connection.png)
1. When you're finished, select **Create**.
When you send transactions to SAP from Logic Apps, this exchange happens in two
This capability to decouple the transaction ID confirmation is useful when you don't want to duplicate transactions in SAP, for example, in scenarios where failures might happen due to causes such as network issues. By confirming the transaction ID separately, the transaction is only completed one time in your SAP system.
-Here is an example that shows this pattern:
+Here's an example that shows this pattern:
1. Create a blank logic app and add the Request trigger.
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ **azureml-core** + [Experimental feature] Add support to link synapse workspace into AML as an linked service + [Experimental feature] Add support to attach synapse spark pool into AML as a compute
- + [Experimental feature] Add support for identity based data access. Users can register datastore or datasets without providing credentials. In such case, users' Azure AD token or managed identity of compute target will be used for authentication. Learn more [here](./how-to-identity-based-data-access.md).
+ + [Experimental feature] Add support for identity-based data access. Users can register datastores or datasets without providing credentials. In this case, the user's Azure AD token or the managed identity of the compute target will be used for authentication. To learn more, see [Connect to storage by using identity-based data access](./how-to-identity-based-data-access.md).
+ **azureml-pipeline-steps** + [Experimental feature] Add support for [SynapseSparkStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.synapsesparkstep) + **azureml-synapse**
Learn more about [image instance segmentation labeling](how-to-label-data.md).
## 2020-05-04 **New Notebook Experience**
-You can now create, edit, and share machine learning notebooks and files directly inside the studio web experience of Azure Machine Learning. You can use all the classes and methods available in [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro) from inside these notebooks
-Get started [here](./how-to-run-jupyter-notebooks.md)
+You can now create, edit, and share machine learning notebooks and files directly inside the studio web experience of Azure Machine Learning. You can use all the classes and methods available in [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro) from inside these notebooks.
+To get started, visit the [Run Jupyter Notebooks in your workspace](./how-to-run-jupyter-notebooks.md) article.
**New Features Introduced:**
Access the following web-based authoring tools from the studio:
+ **Breaking changes** + **Semantic Versioning 2.0.0**
- + Starting with version 1.1 Azure ML Python SDK adopts Semantic Versioning 2.0.0. [Read more here](https://semver.org/). All subsequent versions will follow new numbering scheme and semantic versioning contract.
+ + Starting with version 1.1, the Azure ML Python SDK adopts [Semantic Versioning 2.0.0](https://semver.org/). All subsequent versions will follow the new numbering scheme and semantic versioning contract.
+ **Bug fixes and improvements** + **azure-cli-ml**
Access the following web-based authoring tools from the studio:
+ **Breaking changes** + **Semantic Versioning 2.0.0**
- + Starting with version 1.1 Azure ML Python SDK adopts Semantic Versioning 2.0.0. [Read more here](https://semver.org/). All subsequent versions will follow new numbering scheme and semantic versioning contract.
+ + Starting with version 1.1, the Azure ML Python SDK adopts [Semantic Versioning 2.0.0](https://semver.org/). All subsequent versions will follow the new numbering scheme and semantic versioning contract.
+ **Bug fixes and improvements** + **azureml-automl-runtime**
Access the following web-based authoring tools from the studio:
+ **New features** + Dataset: Add two options `on_error` and `out_of_range_datetime` for `to_pandas_dataframe` to fail when data has error values instead of filling them with `None`.
- + Workspace: Added the `hbi_workspace` flag for workspaces with sensitive data that enables further encryption and disables advanced diagnostics on workspaces. We also added support for bringing your own keys for the associated Cosmos DB instance, by specifying the `cmk_keyvault` and `resource_cmk_uri` parameters when creating a workspace, which creates a Cosmos DB instance in your subscription while provisioning your workspace. [Read more here.](./concept-data-encryption.md#azure-cosmos-db)
+ + Workspace: Added the `hbi_workspace` flag for workspaces with sensitive data that enables further encryption and disables advanced diagnostics on workspaces. We also added support for bringing your own keys for the associated Cosmos DB instance, by specifying the `cmk_keyvault` and `resource_cmk_uri` parameters when creating a workspace, which creates a Cosmos DB instance in your subscription while provisioning your workspace. To learn more, see the [Azure Cosmos DB section of data encryption article](./concept-data-encryption.md#azure-cosmos-db).
+ **Bug fixes and improvements** + **azureml-automl-runtime**
machine-learning How To Debug Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-parallel-run-step.md
def init():
"""Init once in a worker process.""" entry_script = EntryScript() logger = entry_script.logger
- logger.debug("This will show up in files under logs/user on the Azure portal.")
+ logger.info("This will show up in files under logs/user on the Azure portal.")
def run(mini_batch):
def run(mini_batch):
    # This class is in singleton pattern and will return same instance as the one in init()
    entry_script = EntryScript()
    logger = entry_script.logger
- logger.debug(f"{__file__}: {mini_batch}.")
+ logger.info(f"{__file__}: {mini_batch}.")
... return mini_batch
def run(mini_batch):
## Where does the message from Python `logging` sink to? ParallelRunStep sets a handler on the root logger, which sinks the message to `logs/user/stdout/<node_id>/processNNN.stdout.txt`.
-`logging` defaults to `WARNING` level. By default, levels below `WARNING` won't show up, such as `INFO` or `DEBUG`.
+`logging` defaults to `INFO` level. By default, levels below `INFO` won't show up, such as `DEBUG`.
## How could I write to a file to show up in the portal? Files in `logs` folder will be uploaded and show up in the portal.
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
* If you plan to use Azure Machine Learning studio and the storage account is also in the VNet, there are extra validation requirements: * If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet.
- * If the storage account uses a __private endpoint__, the workspace private endpoint and storage service endpoint must be in the same VNet. In this case, they can be in different subnets.
+ * If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same VNet. In this case, they can be in different subnets.
### Azure Container Registry
This article is part of a series on securing an Azure Machine Learning workflow.
* [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
-* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-your-marketplace-benefits.md
description: Go-To-Market Services - Describes Microsoft resources that publishe
Previously updated : 12/14/2021 Last updated : 03/21/2022
migrate Create Manage Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/create-manage-projects.md
Set up a new project in an Azure subscription.
> [!Note]
- > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](how-to-use-azure-migrate-with-private-endpoints.md#create-a-project-with-private-endpoint-connectivity)
+ > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](discover-and-assess-using-private-endpoints.md#create-a-project-with-private-endpoint-connectivity)
7. Select **Create**.
migrate Discover And Assess Using Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md
+
+ Title: Discover and assess using Azure Private Link
+description: Create an Azure Migrate project, set up the Azure Migrate appliance, and use it to discover and assess servers for migration.
+
+ Last updated : 12/29/2021
+
+# Discover and assess servers for migration using Private Link
+
+This article describes how to create an Azure Migrate project, set up the Azure Migrate appliance, and use it to discover and assess servers for migration using [Azure Private Link](../private-link/private-endpoint-overview.md). You can use the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
+
+## Create a project with private endpoint connectivity
+
+To set up a new Azure Migrate project, see [Create and manage projects](./create-manage-projects.md#create-a-project-for-the-first-time).
+
+> [!Note]
+> You can't change the connectivity method to private endpoint connectivity for existing Azure Migrate projects.
+
+In the **Advanced** configuration section, provide the following details to create a private endpoint for your Azure Migrate project.
+1. In **Connectivity method**, choose **Private endpoint**.
+1. In **Disable public endpoint access**, keep the default setting **No**. Some migration tools might not be able to upload usage data to the Azure Migrate project if public network access is disabled. Learn more about [other integrated tools](how-to-use-azure-migrate-with-private-endpoints.md#other-integrated-tools).
+1. In **Virtual network subscription**, select the subscription for the private endpoint virtual network.
+1. In **Virtual network**, select the virtual network for the private endpoint. The Azure Migrate appliance and other software components that need to connect to the Azure Migrate project must be on this network or a connected virtual network.
+1. In **Subnet**, select the subnet for the private endpoint.
+
+ ![Screenshot that shows the Advanced section on the Create project page.](./media/how-to-use-azure-migrate-with-private-endpoints/create-project.png)
+
+1. Select **Create** to create a migration project and attach a private endpoint to it. Wait a few minutes for the Azure Migrate project to deploy. Don't close this page while the project creation is in progress.
+
+> [!Note]
+> If you've already created a project, you can use that project to register more appliances to discover and assess more servers. Learn how to [manage projects](create-manage-projects.md#find-a-project).
+
+## Set up the Azure Migrate appliance
+
+1. In **Discover machines** > **Are your machines virtualized?**, select the virtualization server type.
+1. In **Generate Azure Migrate project key**, provide a name for the Azure Migrate appliance.
+1. Select **Generate key** to create the required Azure resources.
+
+ > [!Important]
+ > Don't close the **Discover machines** page during the creation of resources.
+ - At this step, Azure Migrate creates a key vault, a storage account, a Recovery Services vault (only for agentless VMware migrations), and a few internal resources. Azure Migrate attaches a private endpoint to each resource. The private endpoints are created in the virtual network selected during the project creation.
+ - After the private endpoints are created, the DNS CNAME resource records for the Azure Migrate resources are updated to an alias in a subdomain with the prefix *privatelink*. By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type and inserts DNS A records for the associated private endpoints. This action enables the Azure Migrate appliance and other software components that reside in the source network to reach the Azure Migrate resource endpoints on private IP addresses.
+ - Azure Migrate also enables a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the migrate project and the Recovery Services vault and grants permissions to the managed identity to securely access the storage account.
+
+1. After the key is successfully generated, copy the key details to configure and register the appliance.
+
+### Download the appliance installer file
+
+Azure Migrate: Discovery and assessment uses a lightweight Azure Migrate appliance. The appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate.
+
+> [!Note]
+> If you have deployed an appliance using a template (OVA for servers on a VMware environment and VHD for a Hyper-V environment), you can use the same appliance and register it with an Azure Migrate project with private endpoint connectivity.
+
+To set up the appliance:
+ 1. Download the zipped file that contains the installer script from the portal.
+ 1. Copy the zipped file to the server that will host the appliance.
+ 1. After you download the zipped file, verify the file security.
+ 1. Run the installer script to deploy the appliance.
+
+### Verify security
+
+Check that the zipped file is secure before you deploy it.
+
+1. On the server to which you downloaded the file, open an administrator command window.
+2. Run the following command to generate the hash for the zipped file:
+ - ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
+
+ **Download** | **Hash value**
+ --- | ---
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2160648) | 30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c
+
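+Alternatively, you can compute the hash with the built-in `Get-FileHash` cmdlet and compare it with the published value. A sketch, assuming the download path from the earlier example:
+
+```powershell
+# Compute the SHA256 hash; the -eq string comparison is case-insensitive in PowerShell.
+$hash = (Get-FileHash -Path 'C:\Users\administrator\Desktop\AzureMigrateInstaller.zip' -Algorithm SHA256).Hash
+$hash -eq '30d4f4e06813ceb83602a220fc5fe2278fa6aafcbaa36a40a37f3133f882ee8c'
+```
+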
+> [!NOTE]
+> The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios, such as VMware, Hyper-V, physical, or other.
+
+Make sure the server meets the [hardware requirements](./migrate-appliance.md) for the chosen scenario, such as VMware, Hyper-V, physical or other, and can connect to the [required URLs](./migrate-appliance.md#public-cloud-urls-for-private-link-connectivity).
+
+### Run the Azure Migrate installer script
+
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+
+2. Launch PowerShell on the above server with administrative (elevated) privileges.
+
+3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
+
+4. Run the script named `AzureMigrateInstaller.ps1` by running the following command:
+
+ `PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1`
+
+5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **servers running in your VMware environment** to an Azure Migrate project with **private endpoint connectivity** on **Azure public cloud**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration for private endpoint." lightbox="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-expanded.png":::
+
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
+
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
+
+## Configure the appliance and start continuous discovery
+
+Open a browser on any machine that can connect to the appliance server. Open the URL of the appliance configuration manager: `https://<appliance name or IP address>:44368`. Or, you can open the configuration manager from the appliance server desktop by selecting the shortcut for the configuration manager.
+
+### Set up prerequisites
+
+1. Read the third-party information, and accept the **license terms**.
+
+1. In the configuration manager under **Set up prerequisites**, do the following:
+ - **Connectivity**: The appliance checks for access to the required URLs. If the server uses a proxy:
+ - Select **Set up proxy** to specify the proxy address `http://ProxyIPAddress` or `http://ProxyFQDN` and listening port.
+ - Specify credentials if the proxy needs authentication. Only HTTP proxy is supported.
+ - You can add a list of URLs or IP addresses that should bypass the proxy server.
+ ![Adding to bypass proxy list](./media/how-to-use-azure-migrate-with-private-endpoints/bypass-proxy-list.png)
+ - Select **Save** to register the configuration if you've updated the proxy server details or added URLs or IP addresses to bypass proxy.
+
+ > [!Note]
+ > If you get an error with the aka.ms/* link during the connectivity check and you don't want the appliance to access this URL over the internet, disable the auto-update service on the appliance. Follow the steps in [Turn off auto-update](./migrate-appliance.md#turn-off-auto-update). After you've disabled auto-update, the aka.ms/* URL connectivity check will be skipped.
+
+ - **Time sync**: The time on the appliance should be in sync with internet time for discovery to work properly.
+ - **Install updates**: The appliance ensures that the latest updates are installed. After the check completes, select **View appliance services** to see the status and versions of the services running on the appliance server.
+ > [!Note]
+ > If you disabled auto-update on the appliance, you can update the appliance services manually to get the latest versions of the services. Follow the steps in [Manually update an older version](./migrate-appliance.md#manually-update-an-older-version).
+ - **Install VDDK**: _(Needed only for VMware appliance.)_ The appliance checks that the VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zipped contents to the specified location on the appliance, as provided in the installation instructions.
+
+### Register the appliance and start continuous discovery
+
+After the prerequisites check has completed, follow the steps to register the appliance and start continuous discovery for the respective scenarios:
+- [VMware VMs](./tutorial-discover-vmware.md#register-the-appliance-with-azure-migrate)
+- [Hyper-V VMs](./tutorial-discover-hyper-v.md#register-the-appliance-with-azure-migrate)
+- [Physical servers](./tutorial-discover-physical.md#register-the-appliance-with-azure-migrate)
+- [AWS VMs](./tutorial-discover-aws.md#register-the-appliance-with-azure-migrate)
+- [GCP VMs](./tutorial-discover-gcp.md#register-the-appliance-with-azure-migrate)
+
+>[!Note]
+> If you get DNS resolution issues during appliance registration or at the time of starting discovery, ensure that Azure Migrate resources created during the **Generate key** step in the portal are reachable from the on-premises server that hosts the Azure Migrate appliance. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
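+
+To quickly check name resolution from the appliance host, you can use the `Resolve-DnsName` cmdlet. This is a sketch; `myprojectsa.blob.core.windows.net` is a hypothetical name for the storage account that's created during the **Generate key** step:
+
+```powershell
+# The name should resolve to a private IP address from the private DNS zone, not a public IP.
+Resolve-DnsName -Name 'myprojectsa.blob.core.windows.net'
+```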
+
+## Assess your servers for migration to Azure
+After the discovery is complete, assess your servers, such as [VMware VMs](./tutorial-assess-vmware-azure-vm.md), [Hyper-V VMs](./tutorial-assess-hyper-v.md), [physical servers](./tutorial-assess-physical.md), [AWS VMs](./tutorial-assess-aws.md), and [GCP VMs](./tutorial-assess-gcp.md), for migration to Azure VMs or Azure VMware Solution by using the Azure Migrate: Discovery and assessment tool.
+
+You can also [assess your on-premises machines](./tutorial-discover-import.md#prepare-the-csv) with the Azure Migrate: Discovery and assessment tool by using an imported CSV file.
+
+## Next steps
+
+- [Migrate servers to Azure using Private Link](migrate-servers-to-azure-using-private-link.md).
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
Previously updated : 05/10/2020 Last updated : 12/29/2021
-# Use Azure Migrate with private endpoints
+# Support requirements and considerations
-This article describes how to use Azure Migrate to discover, assess, and migrate servers over a private network by using [Azure Private Link](../private-link/private-endpoint-overview.md).
+The article series describes how to use Azure Migrate to discover, assess, and migrate servers over a private network by using [Azure Private Link](../private-link/private-endpoint-overview.md). You can use the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) and [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tools to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
-You can use the [Azure Migrate: Discovery and assessment](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) and [Azure Migrate: Server Migration](./migrate-services-overview.md#azure-migrate-server-migration-tool) tools to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
+We recommend the private endpoint connectivity method when there is an organizational requirement to access Azure Migrate and other Azure resources without traversing public networks. By using Private Link, you can use your existing ExpressRoute private peering circuits for better bandwidth or latency requirements.
-We recommend the private endpoint connectivity method when there's an organizational requirement to access Azure Migrate and other Azure resources without traversing public networks. By using Private Link, you can use your existing ExpressRoute private peering circuits for better bandwidth or latency requirements.
+Before you get started, review the required permissions and the supported scenarios and tools.
## Support requirements
You must have Contributor + User Access Administrator or Owner permissions on th
**Discovery and assessment** | Perform an agentless, at-scale discovery and assessment of your servers running on any platform. Examples include hypervisor platforms such as [VMware vSphere](./tutorial-discover-vmware.md) or [Microsoft Hyper-V](./tutorial-discover-hyper-v.md), public clouds such as [AWS](./tutorial-discover-aws.md) or [GCP](./tutorial-discover-gcp.md), or even [bare metal servers](./tutorial-discover-physical.md). | Azure Migrate: Discovery and assessment
**Software inventory** | Discover apps, roles, and features running on VMware VMs. | Azure Migrate: Discovery and assessment
**Dependency visualization** | Use the dependency analysis capability to identify and understand dependencies across servers. <br/> [Agentless dependency visualization](./how-to-create-group-machine-dependencies-agentless.md) is supported natively with Azure Migrate private link support. <br/>[Agent-based dependency visualization](./how-to-create-group-machine-dependencies.md) requires internet connectivity. Learn how to use [private endpoints for agent-based dependency visualization](../azure-monitor/logs/private-link-security.md). | Azure Migrate: Discovery and assessment
-**Migration** | Perform [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md) or use the agent-based approach to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider. | Azure Migrate: Server Migration
-
->[!Note]
-> [Agentless migration of VMware VMs](./tutorial-migrate-vmware.md) currently supports replication data transfer over a private network. Other traffic (orchestration, non-voluminous traffic) will require internet access or connectivity via ExpressRoute Microsoft peering. [Learn more.](./replicate-using-expressroute.md)
+**Migration** | Perform [agentless VMware migrations](./tutorial-migrate-vmware.md), [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md), or use the agent-based approach to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider. | Azure Migrate: Server Migration
#### Other integrated tools
To enable public network access for the Azure Migrate project, sign in to the Az
**Pricing** | For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/). **Virtual network requirements** | The ExpressRoute/VPN gateway endpoint should reside in the selected virtual network or a virtual network connected to it. You might need about 15 IP addresses in the virtual network.
-## Create a project with private endpoint connectivity
-
-To set up a new Azure Migrate project, see [Create and manage projects](./create-manage-projects.md#create-a-project-for-the-first-time).
-
-> [!Note]
-> You can't change the connectivity method to private endpoint connectivity for existing Azure Migrate projects.
-
-In the **Advanced** configuration section, provide the following details to create a private endpoint for your Azure Migrate project.
-1. In **Connectivity method**, choose **Private endpoint**.
-1. In **Disable public endpoint access**, keep the default setting **No**. Some migration tools might not be able to upload usage data to the Azure Migrate project if public network access is disabled. Learn more about [other integrated tools](#other-integrated-tools).
-1. In **Virtual network subscription**, select the subscription for the private endpoint virtual network.
-1. In **Virtual network**, select the virtual network for the private endpoint. The Azure Migrate appliance and other software components that need to connect to the Azure Migrate project must be on this network or a connected virtual network.
-1. In **Subnet**, select the subnet for the private endpoint.
-
- ![Screenshot that shows the Advanced section on the Create project page.](./media/how-to-use-azure-migrate-with-private-endpoints/create-project.png)
-
-1. Select **Create** to create a migration project and attach a private endpoint to it. Wait a few minutes for the Azure Migrate project to deploy. Don't close this page while the project creation is in progress.
-
-## Discover and assess servers for migration by using Private Link
-
-This section describes how to set up the Azure Migrate appliance. Then you'll use it to discover and assess servers for migration.
-
-### Set up the Azure Migrate appliance
-
-1. In **Discover machines** > **Are your machines virtualized?**, select the server type.
-1. In **Generate Azure Migrate project key**, provide a name for the Azure Migrate appliance.
-1. Select **Generate key** to create the required Azure resources.
-
- > [!Important]
- > Don't close the **Discover machines** page during the creation of resources.
- - At this step, Azure Migrate creates a key vault, a storage account, a Recovery Services vault (only for agentless VMware migrations), and a few internal resources. Azure Migrate attaches a private endpoint to each resource. The private endpoints are created in the virtual network selected during the project creation.
- - After the private endpoints are created, the DNS CNAME resource records for the Azure Migrate resources are updated to an alias in a subdomain with the prefix *privatelink*. By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type and inserts DNS A records for the associated private endpoints. This action enables the Azure Migrate appliance and other software components that reside in the source network to reach the Azure Migrate resource endpoints on private IP addresses.
- - Azure Migrate also enables a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the migrate project and grants permissions to the managed identity to securely access the storage account.
-
-1. After the key is successfully generated, copy the key details to configure and register the appliance.
-
-#### Download the appliance installer file
-
-Azure Migrate: Discovery and assessment use a lightweight Azure Migrate appliance. The appliance performs server discovery and sends server configuration and performance metadata to Azure Migrate.
-
-> [!Note]
-> If you have deployed an appliance using a template (OVA for servers on a VMware environment and VHD for a Hyper-V environment), you can use the same appliance and register it with an Azure Migrate project with private endpoint connectivity.
-
-To set up the appliance:
- 1. Download the zipped file that contains the installer script from the portal.
- 1. Copy the zipped file on the server that will host the appliance.
- 1. After you download the zipped file, verify the file security.
- 1. Run the installer script to deploy the appliance.
-
-#### Verify security
-
-Check that the zipped file is secure, before you deploy it.
-
-1. On the server to which you downloaded the file, open an administrator command window.
-2. Run the following command to generate the hash for the zipped file:
- - ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
-3. Verify the latest appliance version and hash value:
-
- **Download** | **Hash value**
- |
- [Latest version](https://go.microsoft.com/fwlink/?linkid=2160648) | 7745817a5320628022719f24203ec0fbf56a0e0f02b4e7713386cbc003f0053c
-
-> [!NOTE]
-> The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios, such as VMware, Hyper-V, physical or other to deploy an appliance with the desired configuration.
-
-Make sure the server meets the [hardware requirements](./migrate-appliance.md) for the chosen scenario, such as VMware, Hyper-V, physical or other, and can connect to the [required URLs](./migrate-appliance.md#public-cloud-urls-for-private-link-connectivity).
-
-#### Run the Azure Migrate installer script
-
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
-
-2. Launch PowerShell on the above server with administrative (elevated) privilege.
-
-3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
-
-4. Run the script named `AzureMigrateInstaller.ps1` by running the following command:
-
- `PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1`
-
-5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **servers running in your VMware environment** to an Azure Migrate project with **private endpoint connectivity** on **Azure public cloud**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration for private endpoint." lightbox="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-expanded.png":::
-
-After the script has executed successfully, the appliance configuration manager will be launched automatically.
-
-> [!NOTE]
-> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
-
-### Configure the appliance and start continuous discovery
-
-Open a browser on any machine that can connect to the appliance server. Open the URL of the appliance configuration manager, `https://appliance name or IP address: 44368`. Or, you can open the configuration manager from the appliance server desktop by selecting the shortcut for the configuration manager.
-
-#### Set up prerequisites
-
-1. Read the third-party information, and accept the **license terms**.
-
-1. In the configuration manager under **Set up prerequisites**, do the following:
- - **Connectivity**: The appliance checks for access to the required URLs. If the server uses a proxy:
- - Select **Set up proxy** to specify the proxy address `http://ProxyIPAddress` or `http://ProxyFQDN` and listening port.
- - Specify credentials if the proxy needs authentication. Only HTTP proxy is supported.
- - You can add a list of URLs or IP addresses that should bypass the proxy server.
- ![Adding to bypass proxy list](./media/how-to-use-azure-migrate-with-private-endpoints/bypass-proxy-list.png)
- - Select **Save** to register the configuration if you've updated the proxy server details or added URLs or IP addresses to bypass proxy.
-
- > [!Note]
- > If you get an error with the aka.ms/* link during the connectivity check and you don't want the appliance to access this URL over the internet, disable the auto-update service on the appliance. Follow the steps in [Turn off auto-update](./migrate-appliance.md#turn-off-auto-update). After you've disabled auto-update, the aka.ms/* URL connectivity check will be skipped.
-
- - **Time sync**: The time on the appliance should be in sync with internet time for discovery to work properly.
- - **Install updates**: The appliance ensures that the latest updates are installed. After the check completes, select **View appliance services** to see the status and versions of the services running on the appliance server.
- > [!Note]
- > If you disabled auto-update on the appliance, you can update the appliance services manually to get the latest versions of the services. Follow the steps in [Manually update an older version](./migrate-appliance.md#manually-update-an-older-version).
- - **Install VDDK**: _(Needed only for VMware appliance.)_ The appliance checks that the VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zipped contents to the specified location on the appliance, as provided in the installation instructions.
-
-#### Register the appliance and start continuous discovery
-
-After the prerequisites check has completed, follow the steps to register the appliance and start continuous discovery for the respective scenarios:
-- [VMware VMs](./tutorial-discover-vmware.md#register-the-appliance-with-azure-migrate)
-- [Hyper-V VMs](./tutorial-discover-hyper-v.md#register-the-appliance-with-azure-migrate)
-- [Physical servers](./tutorial-discover-physical.md#register-the-appliance-with-azure-migrate)
-- [AWS VMs](./tutorial-discover-aws.md#register-the-appliance-with-azure-migrate)
-- [GCP VMs](./tutorial-discover-gcp.md#register-the-appliance-with-azure-migrate)
-
->[!Note]
-> If you get DNS resolution issues during appliance registration or at the time of starting discovery, ensure that Azure Migrate resources created during the **Generate key** step in the portal are reachable from the on-premises server that hosts the Azure Migrate appliance. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
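To check resolution from the on-premises server yourself, a short sketch like the following works. The FQDN is a placeholder for one of the private link FQDNs created for your Azure Migrate project.

```python
import ipaddress
import socket

# Placeholder: substitute a private link FQDN created for your Azure Migrate project.
fqdn = "example.privatelink.prod.migration.windowsazure.com"

try:
    addresses = {info[4][0] for info in socket.getaddrinfo(fqdn, 443)}
except socket.gaierror as err:
    raise SystemExit(f"DNS resolution failed for {fqdn}: {err}")

for address in sorted(addresses):
    scope = "private" if ipaddress.ip_address(address).is_private else "PUBLIC - check your DNS setup"
    print(f"{fqdn} -> {address} ({scope})")
```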
-
-### Assess your servers for migration to Azure
-After the discovery is complete, assess your servers, such as [VMware VMs](./tutorial-assess-vmware-azure-vm.md), [Hyper-V VMs](./tutorial-assess-hyper-v.md), [physical servers](./tutorial-assess-physical.md), [AWS VMs](./tutorial-assess-aws.md), and [GCP VMs](./tutorial-assess-gcp.md), for migration to Azure VMs or Azure VMware Solution by using the Azure Migrate: Discovery and assessment tool.
-
-You can also [assess your on-premises machines](./tutorial-discover-import.md#prepare-the-csv) with the Azure Migrate: Discovery and assessment tool by using an imported CSV file.
-
-## Migrate servers to Azure by using Private Link
-
-The following sections describe the steps required to use Azure Migrate with [private endpoints](../private-link/private-endpoint-overview.md) for migrations by using ExpressRoute private peering or VPN connections.
-
-This article shows a proof-of-concept deployment path for agent-based replications to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider by using Azure private endpoints. You can use a similar approach for performing [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md) by using Private Link.
-
->[!Note]
->[Agentless VMware migrations](./tutorial-migrate-vmware.md) require internet access or connectivity via ExpressRoute Microsoft peering.
-
-### Set up a replication appliance for migration
-
-The following diagram illustrates the agent-based replication workflow with private endpoints by using the Azure Migrate: Server Migration tool.
-
-![Diagram that shows replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
-
-The tool uses a replication appliance to replicate your servers to Azure. Learn more about how to [prepare and set up a machine for the replication appliance](./tutorial-migrate-physical-virtual-machines.md#prepare-a-machine-for-the-replication-appliance).
-
-After you set up the replication appliance, follow these steps to create the required resources for migration.
-
-1. In **Discover machines** > **Are your machines virtualized?**, select **Not virtualized/Other**.
-1. In **Target region**, select and confirm the Azure region to which you want to migrate the machines.
-1. Select **Create resources** to create the required Azure resources. Don't close the page during the creation of resources.
- - This step creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations.
- - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This step adds five fully qualified domain names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
- - The five domain names are formatted in this pattern: <br/> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com
- - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS is then linked to the private endpoint virtual network.
-
-1. Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication appliance. Additional DNS configuration may be required for the on-premises replication appliance to resolve the private link FQDNs to their private IP addresses. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
-
-1. After you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Learn more about how to [set up the replication appliance](./tutorial-migrate-physical-virtual-machines.md#set-up-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service) on the machines you want to migrate.
-
-### Replicate servers to Azure by using Private Link
-
-Follow [these steps](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) to select servers for replication.
-
-In **Replicate** > **Target settings** > **Cache/Replication storage account**, use the dropdown list to select a storage account to replicate over a private link.
-
-If your Azure Migrate project has private endpoint connectivity, you must [grant permissions to the Recovery Services vault managed identity](#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate.
-
-To enable replications over a private link, [create a private endpoint for the storage account](#create-a-private-endpoint-for-the-storage-account-optional).
-
-#### Grant access permissions to the Recovery Services vault
-
-You must grant the permissions to the Recovery Services vault for authenticated access to the cache/replication storage account.
-
-To identify the Recovery Services vault created by Azure Migrate and grant the required permissions, follow these steps.
-
-**Identify the Recovery Services vault and the managed identity object ID**
-
-You can find the details of the Recovery Services vault on the Azure Migrate: Server Migration **Properties** page.
-
-1. Go to the **Azure Migrate** hub, and on the **Azure Migrate: Server Migration** tile, select **Overview**.
-
- ![Screenshot that shows the Overview page on the Azure Migrate hub.](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
-
-1. In the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have **Private endpoint** as the **Connectivity type** and **Other** as the **Replication type**. You'll need this information when you provide access to the vault.
-
- ![Screenshot that shows the Azure Migrate: Server Migration Properties page.](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
-
-**Permissions to access the storage account**
-
- Grant the vault's managed identity the following role permissions on the storage account required for replication. You must create the storage account in advance.
-
->[!Note]
-> When you migrate Hyper-V VMs to Azure by using Private Link, you must grant access to both the replication storage account and the cache storage account.
-
-The required Azure role permissions vary depending on the type of storage account.
-
-|**Storage account type** | **Role permissions**|
-| | |
-|Standard type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|
-|Premium type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner)
-
-1. Go to the replication/cache storage account selected for replication. In the left pane, select **Access control (IAM)**.
-
-1. Select **+ Add**, and select **Add role assignment**.
-
- ![Screenshot that shows Add role assignment.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png)
-
-1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously, and select **Save**.
-
- ![Screenshot that shows the Add role assignment page.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png)
-
-1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.
-
- ![Screenshot that shows the Allow trusted Microsoft services to access this storage account option.](./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png)
-
-### Create a private endpoint for the storage account (optional)
-
-To replicate by using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: _blob_).
-
->[!Note]
-> You can create private endpoints only on a general-purpose v2 storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
-
-Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
-
-Select **Yes**, and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network. It also adds the DNS records for the resolution of new IPs and FQDNs that are created. Learn more about [private DNS zones](../dns/private-dns-overview.md).
-
-If the user who created the private endpoint is also the storage account owner, the private endpoint creation will be auto-approved. Otherwise, the owner of the storage account must approve the private endpoint for use. To approve or reject a requested private endpoint connection, on the storage account page under **Networking**, go to **Private endpoint connections**.
-
-Review the status of the private endpoint connection state before you continue.
-
-![Screenshot that shows the Private endpoint approval status.](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
-
-After you've created the private endpoint, use the dropdown list in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link.
-
-Ensure that the on-premises replication appliance has network connectivity to the storage account on its private endpoint. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
-
->[!Note]
-> For Hyper-V VM migrations to Azure, if the replication storage account is of _Premium_ type, you must select another storage account of _Standard_ type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account.
-
-Next, follow the instructions to [review and start replication](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) and [perform migrations](./tutorial-migrate-physical-virtual-machines.md#run-a-test-migration).
-## Next steps
+This three-part article series illustrates how to:
-- Complete the [migration process](./tutorial-migrate-physical-virtual-machines.md#complete-the-migration).
-- Review the [post-migration best practices](./tutorial-migrate-physical-virtual-machines.md#post-migration-best-practices).
+- [Discover and assess servers for migration using Private Link](discover-and-assess-using-private-endpoints.md)
+- [Migrate servers to Azure using Private Link](migrate-servers-to-azure-using-private-link.md)
+- [Troubleshoot common issues with private endpoint connectivity](troubleshoot-network-connectivity.md)
migrate Migrate Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md
management.azure.com | Used for resource deployments and management operations
*.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring.
aka.ms/* (optional) | Allow access to these links; used to download and install the latest updates for appliance services.
download.microsoft.com/download | Allow downloads from Microsoft download center.
-*.servicebus.windows.net | **Used for VMware agentless migration**<br><br> Communication between the appliance and the Azure Migrate service.
-*.hypervrecoverymanager.windowsazure.com | **Used for VMware agentless migration**<br><br> Connect to Azure Migrate service URLs.
-*.blob.core.windows.net | **Used for VMware agentless migration**<br><br>Upload data to storage for migration. <br>This is optional and is not required if the storage accounts (both cache storage account and gateway storage account) have a private endpoint attached.
+*.blob.core.windows.net (optional) | Not required if the storage account has a private endpoint attached.
+
+### Government cloud URLs for private link connectivity
+
+**URL** | **Details**
+ | |
+*.portal.azure.us | Navigate to the Azure portal.
+graph.windows.net | Sign in to your Azure subscription.
+login.microsoftonline.us | Used for access control and identity management by Azure Active Directory.
+management.usgovcloudapi.net | Used for resource deployments and management operations.
+*.services.visualstudio.com (optional)| Upload appliance logs used for internal monitoring.
+aka.ms/* (optional)| Allow access to these links; used to download and install the latest updates for appliance services.
+download.microsoft.com/download | Allow downloads from Microsoft download center.
+*.blob.core.usgovcloudapi.net (optional)| Not required if the storage account has a private endpoint attached.
+*.applicationinsights.us (optional)| Upload appliance logs used for internal monitoring.
+ ### Azure China 21Vianet (Azure China) URLs
migrate Migrate Servers To Azure Using Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md
+
+ Title: Migrate servers to Azure by using Private Link
+description: Use Azure Migrate with private endpoints for migrations by using ExpressRoute private peering or VPN connections.
++
+zone_pivot_groups: migrate-agentlessvmware-hyperv-agentbased
+ Last updated : 12/29/2021++
+# Migrate servers to Azure using Private Link
+
+This article describes how to use Azure Migrate to migrate servers over a private network by using [Azure Private Link](../private-link/private-endpoint-overview.md). You can use the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool to connect privately and securely to Azure Migrate over an Azure ExpressRoute private peering or a site-to-site (S2S) VPN connection by using Private Link.
+++
+This article shows how to migrate on-premises VMware VMs to Azure, using the [Azure Migrate: Server Migration tool](migrate-services-overview.md#azure-migrate-server-migration-tool), with agentless migration.
+
+## Set up the Azure Migrate appliance
+
+Azure Migrate: Server Migration runs a lightweight VMware VM appliance to enable the discovery, assessment, and agentless migration of VMware VMs. If you followed the [Discovery and assessment tutorial](discover-and-assess-using-private-endpoints.md), you've already set up the appliance. If you didn't, [set up and configure the appliance](./discover-and-assess-using-private-endpoints.md#set-up-the-azure-migrate-appliance) before you proceed.
+
+## Replicate VMs
+
+After setting up the appliance and completing discovery, you can begin replicating VMware VMs to Azure.
+
+The following diagram illustrates the agentless replication workflow with private endpoints by using the Azure Migrate: Server Migration tool.
+
+![Diagram that shows agentless replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/agentless-replication-architecture.png)
+
+Enable replication as follows:
+1. In the Azure Migrate project > **Servers** > **Migration tools** > Azure Migrate: Server Migration, click **Replicate**.
+
+ ![Diagram that shows how to replicate servers.](./media/how-to-use-azure-migrate-with-private-endpoints/replicate-servers.png)
+
+1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Yes, with VMware vSphere**.
+1. In **On-premises appliance**, select the name of the Azure Migrate appliance. Select **OK**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/source-settings-vmware.png" alt-text="Diagram that shows how to complete source settings.":::
+
+1. In **Virtual machines**, select the machines you want to replicate. To apply VM sizing and disk type from an assessment, in **Import migration settings from an Azure Migrate assessment?**,
+ - Select **Yes**, and select the VM group and assessment name.
+ - Select **No** if you aren't using assessment settings.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/migrate-machines-vmware.png" alt-text="Diagram that shows how to select the VMs.":::
+
+1. In **Virtual machines**, select VMs you want to migrate. Then click **Next**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/select-vm-vmware.png" alt-text="Screenshot of selected VMs to be replicated.":::
+
+1. In **Target settings**, select the **target region** in which the Azure VMs will reside after migration.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/target-settings.png" alt-text="Screenshot of the Target settings screen.":::
+
+1. In **Replication storage account**, use the dropdown list to select a storage account to replicate over a private link.
+ >[!NOTE]
+ > Only the storage accounts in the selected target region and Azure Migrate project subscription are listed.
+
+1. Next, [**create a private endpoint for the storage account**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account) to enable replications over a private link. Ensure that the Azure Migrate appliance has network connectivity to the storage account on its private endpoint. Learn how to [verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
+ >[!NOTE]
+ > - The storage account cannot be changed after you enable replication.
+ > - To orchestrate replications, Azure Migrate will grant the trusted Microsoft services and the Recovery Services vault managed identity access to the selected storage account.
+
+ >[!Tip]
+ > You can manually update the DNS records by editing the DNS hosts file on the Azure Migrate appliance with the private link FQDNs and private IP address of the storage account. (A sketch of this edit follows these steps.)
+
+1. Select the **Subscription** and **Resource group** in which the Azure VMs will reside after migration.
+1. In **Virtual network**, select the Azure VNet/subnet for the migrated Azure VMs.
+1. In **Availability options**, select:
+
+   - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines on the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
+
+ - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
+
+ - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
+1. In **Disk encryption type**, select:
+
+ - Encryption-at-rest with platform-managed key
+
+ - Encryption-at-rest with customer-managed key
+
+ - Double encryption with platform-managed and customer-managed keys
+
+ >[!Note]
+ > To replicate VMs with CMK, you'll need to [create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set) under the target Resource Group. A disk encryption set object maps Managed Disks to a Key Vault that contains the CMK to use for SSE.
+1. In **Azure Hybrid Benefit**:
+
+   - Select **No** if you don't want to apply Azure Hybrid Benefit. Then click **Next**.
+
+   - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/azure-hybrid-benefit.png" alt-text="Screenshot shows the options in Azure Hybrid Benefit.":::
+
+1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-vmware-migration.md#azure-vm-requirements).
+
+ - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
+
+ - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
+
+ - **Availability Zone**: Specify the Availability Zone to use.
+
+ - **Availability Set**: Specify the Availability Set to use.
+ >[!Note]
+   > To use a different availability option for another set of virtual machines, start replication for the first set, and then repeat these steps from step 1 with the different availability option.
+1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/disks-agentless-vmware.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
+
+1. In **Tags**, add tags to your migrated virtual machines, disks, and NICs.
+
+1. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
+ Next, follow the instructions to [perform migrations](tutorial-migrate-vmware.md#run-a-test-migration).
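The tip earlier in these steps mentions editing the appliance's hosts file when private DNS resolution isn't in place. Below is a minimal sketch, assuming a Windows appliance and administrator rights; the storage account FQDN and IP address are placeholders for your storage account's private link FQDN and its private endpoint IP.

```python
from pathlib import Path

# Placeholders: use your storage account's private link FQDN and the
# private IP address of its private endpoint.
entries = {
    "mystorageaccount.privatelink.blob.core.windows.net": "10.1.0.5",
}

hosts_path = Path(r"C:\Windows\System32\drivers\etc\hosts")
existing = hosts_path.read_text()

with hosts_path.open("a") as hosts_file:
    for fqdn, ip_address in entries.items():
        if fqdn not in existing:  # avoid duplicate entries on re-runs
            hosts_file.write(f"\n{ip_address} {fqdn}")
```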
+
+#### Provisioning for the first time
+
+Azure Migrate doesn't create any additional resources for replications over Azure Private Link (no Service Bus, Key Vault, or storage accounts are created). Instead, it uses the selected storage account for replication data, state data, and orchestration messages.
+
+## Create a private endpoint for the storage account
+
+To replicate by using ExpressRoute with private peering, [**create a private endpoint**](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage account (target subresource: *blob*).
+
+>[!Note]
+> You can create private endpoints only on a general-purpose v2 storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+
+Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
+
+Select **Yes** and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network. It also adds the DNS records for the resolution of new IPs and FQDNs that are created. Learn more about [private DNS zones](../dns/private-dns-overview.md).
+
+If the user who created the private endpoint is also the storage account owner, the private endpoint creation will be auto approved. Otherwise, the owner of the storage account must approve the private endpoint for use. To approve or reject a requested private endpoint connection, on the storage account page under **Networking**, go to **Private endpoint connections**.
+
+Review the status of the private endpoint connection state before you continue.
++
+Ensure that the on-premises appliance has network connectivity to the storage account via its private endpoint. To validate the private link connection, perform a DNS resolution of the storage account endpoint (private link resource FQDN) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address. Learn how to verify [network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
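One way to script this validation is with the dnspython package; the sketch below assumes the storage account name is replaced with your own. With a working private endpoint, the canonical name should be the privatelink alias and the answer should be a private IP.

```python
import ipaddress

import dns.resolver  # pip install dnspython

endpoint = "mystorageaccount.blob.core.windows.net"  # placeholder account name

answer = dns.resolver.resolve(endpoint, "A")
# Expect something like mystorageaccount.privatelink.blob.core.windows.net here.
print("Canonical name:", answer.canonical_name)

for record in answer:
    ip = ipaddress.ip_address(record.address)
    state = "private" if ip.is_private else "PUBLIC - private link not in effect"
    print(record.address, f"({state})")
```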
+
+## Next steps
+
+ - [Migrate VMs](tutorial-migrate-vmware.md#migrate-vms)
+ - Complete the [migration process](tutorial-migrate-vmware.md#complete-the-migration).
+ - Review the [post-migration best practices](tutorial-migrate-vmware.md#post-migration-best-practices).
++++++
+This article shows you how to [migrate on-premises Hyper-V VMs to Azure](tutorial-migrate-hyper-v.md), using the [Azure Migrate: Server Migration](migrate-services-overview.md#azure-migrate-server-migration-tool) tool, with agentless migration. You can also migrate using agent-based migration.
+
+## Set up the replication provider for migration
+
+The following diagram illustrates the agentless migration workflow with private endpoints by using the Azure Migrate: Server Migration tool.
+
+ ![Diagram that shows replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
+
+For migrating Hyper-V VMs, Azure Migrate: Server Migration installs software providers (Microsoft Azure Site Recovery provider and Microsoft Azure Recovery Service agent) on Hyper-V hosts or cluster nodes.
+1. In the Azure Migrate project > **Servers**, in **Azure Migrate: Server Migration**, click **Discover**.
+1. In **Discover machines** > **Are your machines virtualized?**, select **Yes, with Hyper-V**.
+1. In **Target region**, select the Azure region to which you want to migrate the machines.
+1. Select **Confirm that the target region for migration is region-name**.
+1. Select **Create resources**. Don't close the page during the creation of resources. If you've already set up migration with Azure Migrate: Server Migration, this option won't appear because resources were set up previously.
+ - This step creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations.
+ - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This step adds five fully qualified domain names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
+ - The five domain names are formatted in this pattern: <br> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com
+ - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS is then linked to the private endpoint virtual network.
+1. In **Prepare Hyper-V host servers**, download the Hyper-V replication provider and the registration key file.
+
+   - The registration key is needed to register the Hyper-V host with Azure Migrate: Server Migration.
+
+ - The key is valid for five days after you generate it.
+
+ ![Screenshot of discover machines screen.](./media/how-to-use-azure-migrate-with-private-endpoints/discover-machines-hyperv.png)
+1. Copy the provider setup file and registration key file to each Hyper-V host (or cluster node) running VMs you want to replicate.
+> [!Note]
+> Before you register the replication provider, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication provider. Additional DNS configuration may be required for the on-premises host to resolve the private link FQDNs to their private IP addresses. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
+
+Next, follow these instructions to [install and register the replication provider](tutorial-migrate-hyper-v.md#install-and-register-the-provider).
+
+## Replicate Hyper-V VMs
+
+With discovery completed, you can begin replication of Hyper-V VMs to Azure.
+
+> [!Note]
+> You can replicate up to 10 machines together. If you need to replicate more, then replicate them simultaneously in batches of 10.
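If you're scripting the selection, a small helper can split a larger inventory into batches of 10; the VM names below are placeholders.

```python
def batches(items, size=10):
    """Yield successive batches from a list, each at most `size` long."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

vm_names = [f"hyperv-vm-{n:02d}" for n in range(1, 26)]  # placeholder inventory
for group in batches(vm_names):
    print("Replicate together:", ", ".join(group))
```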
+
+1. In the Azure Migrate project > **Servers** > **Migration tools** > Azure Migrate: Server Migration, click **Replicate**.
+1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Yes, with Hyper-V**. Then click **Next: Virtual machines**.
+1. In **Virtual machines**, select the machines you want to replicate.
+ - If you've run an assessment for the VMs, you can apply VM sizing and disk type (premium/standard) recommendations from the assessment results. To do this, in **Import migration settings from an Azure Migrate assessment?**, select the **Yes** option.
+ - If you didn't run an assessment, or you don't want to use the assessment settings, select the **No** option.
+   - If you chose to use the assessment, select the VM group and assessment name.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/migrate-machines-vmware.png" alt-text="Screenshot of migrate machines screen.":::
+
+1. In **Virtual machines**, search for VMs as needed, and select each VM you want to migrate. Then click **Next: Target settings**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/select-vm.png" alt-text="Screenshot of selected VMs.":::
+
+1. In **Target settings**, select the target region to which you'll migrate, the subscription, and the resource group in which the Azure VMs will reside after migration.
+
+ :::image type="content" source="./media/tutorial-migrate-hyper-v/target-settings.png" alt-text="Screenshot of target settings.":::
+
+1. In **Replication storage account**, select the Azure storage account in which replicated data will be stored in Azure.
+
+1. Next, [**create a private endpoint for the storage account**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1) and [**grant permissions to the Recovery Services vault managed identity**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate. This is mandatory before you proceed.
+
+ - For Hyper-V VM migrations to Azure, if the replication storage account is of *Premium* type, you must select another storage account of *Standard* type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account.
+
+ - Ensure that the server hosting the replication provider has network connectivity to the storage accounts via the private endpoints before you proceed. Learn how to [verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
+ >[!Tip]
+ > You can manually update the DNS records by editing the DNS hosts file on the Azure Migrate appliance with the private link FQDNs and private IP addresses of the storage account.
+
+1. In **Virtual network**, select the Azure VNet/subnet for the migrated Azure VMs.
+
+1. In **Availability options**, select:
+
+   - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines on the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
+
+ - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
+
+ - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
+
+1. In **Azure Hybrid Benefit**:
+
+ - Select **No** if you don't want to apply Azure Hybrid Benefit. Then, click **Next**.
+
+ - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/azure-hybrid-benefit.png" alt-text="Screenshot of Azure Hybrid benefit selection.":::
+
+1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-hyper-v-migration.md#azure-vm-requirements).
+
+ - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
+
+ - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
+
+ - **Availability Set**: If the VM should be in an Azure availability set after migration, specify the set. The set must be in the target resource group you specify for the migration.
+
+1. In **Disks**, specify the VM disks that need to be replicated to Azure. Then click **Next**.
+ - You can exclude disks from replication.
+ - If you exclude disks, they won't be present on the Azure VM after migration.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/disks.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
+
+1. In **Tags**, add tags to your migrated virtual machines, disks, and NICs.
+
+1. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
+
+ > [!Note]
+   > You can update replication settings any time before replication starts, in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
+
+ Next, follow the instructions to [perform migrations](tutorial-migrate-hyper-v.md#migrate-vms).
+### Grant access permissions to the Recovery Services vault
+
+You must grant the permissions to the Recovery Services vault for authenticated access to the cache/replication storage account.
+
+To identify the Recovery Services vault created by Azure Migrate and grant the required permissions, follow these steps.
+
+**Identify the Recovery Services vault and the managed identity object ID**
+
+You can find the details of the Recovery Services vault on the Azure Migrate: Server Migration **Properties** page.
+
+1. Go to the **Azure Migrate** hub, and on the **Azure Migrate: Server Migration** tile, select **Overview**.
+
+ ![Screenshot that shows the Overview page on the Azure Migrate hub.](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
+
+1. In the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have **Private endpoint** as the **Connectivity type** and **Other** as the **Replication type**. You'll need this information when you provide access to the vault.
+
+ ![Screenshot that shows the Azure Migrate: Server Migration Properties page.](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
+
+**Permissions to access the storage account**
+
+ Grant the vault's managed identity the following role permissions on the storage account required for replication. You must create the storage account in advance.
+
+The required Azure role permissions vary depending on the type of storage account.
+
+|**Storage account type** | **Role permissions**|
+| | |
+|Standard type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|
+|Premium type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner)
+
+1. Go to the replication/cache storage account selected for replication. In the left pane, select **Access control (IAM)**.
+
+1. Select **+ Add**, and select **Add role assignment**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png" alt-text="Screenshot that shows Add role assignment.":::
+
+1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously and select **Save**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png" alt-text="Screenshot that shows the Add role assignment page.":::
+
+1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png" alt-text="Screenshot that shows the Allow trusted Microsoft services to access this storage account option.":::
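If you prefer to script the role assignments, the following sketch uses the azure-identity and azure-mgmt-authorization packages. It assumes an SDK version in which `RoleAssignmentCreateParameters` accepts `role_definition_id` and `principal_id` directly; the subscription, resource names, and principal ID are placeholders, and the GUID shown is the built-in Contributor role definition.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
storage_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
)
vault_principal_id = "<managed-identity-id-from-the-vault-properties-page>"

# Built-in Contributor role; repeat the call with the Storage Blob Data
# Contributor/Owner role ID, per the table above.
contributor_role_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
client.role_assignments.create(
    scope=storage_scope,
    role_assignment_name=str(uuid.uuid4()),  # assignment names are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=contributor_role_id,
        principal_id=vault_principal_id,
    ),
)
```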
+
+## Create a private endpoint for the storage account
+
+To replicate by using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: _blob_).
+
+>[!Note]
+> You can create private endpoints only on a general-purpose v2 storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+
+Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
+
+Select **Yes** and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network. It also adds the DNS records for the resolution of new IPs and FQDNs that are created. Learn more about [private DNS zones](../dns/private-dns-overview.md).
+
+If the user who created the private endpoint is also the storage account owner, the private endpoint creation will be auto approved. Otherwise, the owner of the storage account must approve the private endpoint for use. To approve or reject a requested private endpoint connection, on the storage account page under **Networking**, go to **Private endpoint connections**.
+
+Review the status of the private endpoint connection state before you continue.
+
+![Screenshot that shows the Private endpoint approval status.](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
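To review the connection state programmatically, a sketch along these lines works, assuming the azure-mgmt-storage package; the resource names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# List the private endpoint connections on the cache/replication storage account.
for conn in client.private_endpoint_connections.list("<resource-group>", "<storage-account-name>"):
    state = conn.private_link_service_connection_state
    print(conn.name, "-", state.status, "-", state.description or "")
```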
+
+After you've created the private endpoint, use the dropdown list in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link.
+
+Ensure that the server hosting the replication provider has network connectivity to the storage account via its private endpoint. To validate the private link connection, perform a DNS resolution of the storage account endpoint (private link resource FQDN) from the on-premises server hosting the replication provider and ensure that it resolves to a private IP address. Learn how to verify [network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
++
+>[!Note]
+> For Hyper-V VM migrations to Azure, if the replication storage account is of _Premium_ type, you must select another storage account of _Standard_ type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account.
++
+## Next steps
+ - [Migrate VMs](tutorial-migrate-hyper-v.md#migrate-vms)
+ - Complete the [migration process](tutorial-migrate-hyper-v.md#complete-the-migration).
+ - Review the [post-migration best practices](tutorial-migrate-hyper-v.md#post-migration-best-practices).
++++++
+This article shows a proof-of-concept deployment path for agent-based replications to migrate your [VMware VMs](tutorial-migrate-vmware-agent.md), [Hyper-V VMs](tutorial-migrate-physical-virtual-machines.md), [physical servers](tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider by using Azure private endpoints.
+
+## Set up a replication appliance for migration
+
+The following diagram illustrates the agent-based replication workflow with private endpoints by using the Azure Migrate: Server Migration tool.
+
+![Diagram that shows replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
+
+The tool uses a replication appliance to replicate your servers to Azure. Follow these steps to create the required resources for migration.
+
+1. In **Discover machines** > **Are your machines virtualized?**, select **Not virtualized/Other**.
+1. In **Target region**, select and confirm the Azure region to which you want to migrate the machines.
+1. Select **Create resources** to create the required Azure resources. Don't close the page during the creation of resources.
+ - This step creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations.
+ - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This step adds five fully qualified domain names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault.
+    - The five domain names are formatted in this pattern: <br> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com (see the sketch after these steps)
+ - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS is then linked to the private endpoint virtual network.
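To illustrate the pattern, the sketch below expands it for a set of microservice type tokens. The vault ID, geo code, and the five type tokens are hypothetical; the authoritative list is the set of DNS A records that Azure Migrate adds to the private DNS zone for your vault.

```python
vault_id = "0000abcd"    # placeholder vault ID
geo_code = "eus"         # placeholder target geo code (for example, East US)

# Hypothetical microservice type tokens; check the A records created for
# your vault's private endpoint for the actual values.
microservice_types = ["id1", "prot2", "rcm1", "srs1", "tel1"]

for svc in microservice_types:
    print(f"{vault_id}-asr-pod01-{svc}-.{geo_code}"
          ".privatelink.siterecovery.windowsazure.com")
```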
+
+>[!Note]
+> Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication appliance. Additional DNS configuration may be required for the on-premises replication appliance to resolve the private link FQDNs to their private IP addresses. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
+
+After you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Learn more about how to [set up the replication appliance](./tutorial-migrate-physical-virtual-machines.md#set-up-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service) on the machines you want to migrate.
+
+## Replicate servers
+
+Now, select machines for replication and migration.
+
+>[!Note]
+> You can replicate up to 10 machines together. If you need to replicate more, then replicate them simultaneously in batches of 10.
+
+1. In the Azure Migrate project > **Servers** > **Migration tools** > Azure Migrate: Server Migration, click **Replicate**.
+
+ ![Diagram that shows how to replicate servers.](./media/how-to-use-azure-migrate-with-private-endpoints/replicate-servers.png)
+
+1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Not virtualized/Other**.
+1. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up.
+1. In **Process Server**, select the name of the replication appliance.
+1. In **Guest credentials**, select the dummy account created previously during the [replication installer setup](tutorial-migrate-physical-virtual-machines.md#download-the-replication-appliance-installer). The account is used to install the Mobility service manually, because push install isn't supported. Then click **Next: Virtual machines**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/source-settings-vmware.png" alt-text="Diagram that shows how to complete source settings.":::
+
+1. In **Virtual machines**, in **Import migration settings from an assessment?**, leave the default setting **No, I'll specify the migration settings manually**.
+1. Select each VM you want to migrate. Then click **Next: Target settings**.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/select-vm.png" alt-text="Screenshot of selected VMs to be replicated.":::
+
+1. In **Target settings**, select the subscription, the target region to which you'll migrate, and the resource group in which the Azure VMs will reside after migration.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/target-settings-agent-inline.png" alt-text="Screenshot displays the options in Overview." lightbox="./media/how-to-use-azure-migrate-with-private-endpoints/target-settings-agent-expanded.png":::
+
+1. In **Virtual network**, select the Azure VNet/subnet for the migrated Azure VMs.
+1. In **Cache storage account**, use the dropdown list to select a storage account to replicate over a private link.
+
+1. Next, [**create a private endpoint for the storage account**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1) and [**grant permissions to the Recovery Services vault managed identity**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate. This is mandatory before you proceed.
+
+ - Ensure that the server hosting the replication appliance has network connectivity to the storage accounts via the private endpoints before you proceed. Learn how to [verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
+
+ >[!Tip]
+ > You can manually update the DNS records by editing the DNS hosts file on the Azure Migrate appliance with the private link FQDNs and private IP addresses of the storage account.
+
+1. In **Availability options**, select:
+
+   - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines on the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
+
+ - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
+
+ - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
+1. In **Disk encryption type**, select:
+
+ - Encryption-at-rest with platform-managed key
+ - Encryption-at-rest with customer-managed key
+ - Double encryption with platform-managed and customer-managed keys
+ > [!Note]
+ > To replicate VMs with CMK, you'll need to [create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set) under the target Resource Group. A disk encryption set object maps Managed Disks to a Key Vault that contains the CMK to use for SSE.
+1. In **Azure Hybrid Benefit**:
+ - Select **No** if you don't want to apply Azure Hybrid Benefit. Then, click **Next**.
+ - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**.
+1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-physical-migration.md#azure-vm-requirements).
+ - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
+
+ - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
+
+ - **Availability Zone**: Specify the Availability Zone to use.
+
+ - **Availability Set**: Specify the Availability Set to use.
+
+1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**.
+ - You can exclude disks from replication.
+ - If you exclude disks, they won't be present on the Azure VM after migration.
+
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/disks.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
++
+1. In **Tags**, add tags to your migrated virtual machines, disks, and NICs.
+
+1. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
+
+ > [!Note]
+   > You can update replication settings any time before replication starts, in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
+
+ Next, follow the instructions to [perform migrations](tutorial-migrate-physical-virtual-machines.md#run-a-test-migration).
+
+### Grant access permissions to the Recovery Services vault
+
+You must grant the permissions to the Recovery Services vault for authenticated access to the cache/replication storage account.
+
+To identify the Recovery Services vault created by Azure Migrate and grant the required permissions, follow these steps.
+
+**Identify the Recovery Services vault and the managed identity object ID**
+
+You can find the details of the Recovery Services vault on the **Azure Migrate: Server Migration Properties** page.
+
+1. Go to the **Azure Migrate** hub, and on the **Azure Migrate: Server Migration** tile, select **Overview**.
+
+ ![Screenshot that shows the Overview page on the Azure Migrate hub.](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
+
+1. In the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have **Private endpoint** as the **Connectivity type** and **Other** as the **Replication type**. You'll need this information when you provide access to the vault.
+
+ ![Screenshot that shows the Azure Migrate: Server Migration Properties page.](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
+
+**Permissions to access the storage account**
+
+ Grant the vault's managed identity the following role permissions on the storage account required for replication. You must create the storage account in advance.
+
+The required Azure role permissions vary depending on the type of storage account.
+
+|**Storage account type** | **Role permissions**|
+| | |
+|Standard type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Contributor](../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|
+|Premium type | [Contributor](../role-based-access-control/built-in-roles.md#contributor)<br>[Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner)
+
+1. Go to the replication/cache storage account selected for replication. In the left pane, select **Access control (IAM)**.
+1. Select **+ Add**, and select **Add role assignment**.
+
+ ![Screenshot that shows Add role assignment.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png)
+
+1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously and select **Save**.
+
+ ![Screenshot that shows the Add role assignment page.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png)
+
+1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.
+
+ ![Screenshot that shows the Allow trusted Microsoft services to access this storage account option.](./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png)
+
+## Create a private endpoint for the storage account
+
+To replicate by using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: _blob_).
+
+>[!Note]
+> You can create private endpoints only on a general-purpose v2 storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+
+Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
+
+Select **Yes**, and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network. It also adds the DNS records for the resolution of new IPs and FQDNs that are created. Learn more about [private DNS zones](../dns/private-dns-overview.md).
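+
+If you script this step instead, a minimal Azure CLI sketch follows. The names (`MigrateRG`, `MigrateVNet`, `PrivateEndpointSubnet`, `migratecache`) are placeholders; the commands create the endpoint for the _blob_ subresource and set up the private DNS zone that the portal's **Yes** option configures for you.
+
+```azurecli
+# Placeholder names for illustration only.
+RG=MigrateRG
+STORAGE_ID=$(az storage account show --resource-group $RG --name migratecache --query id --output tsv)
+
+# Create the private endpoint for the blob subresource of the cache storage account.
+az network private-endpoint create --resource-group $RG --name migratecache-pe \
+  --vnet-name MigrateVNet --subnet PrivateEndpointSubnet \
+  --private-connection-resource-id $STORAGE_ID --group-id blob \
+  --connection-name migratecache-pe-connection
+
+# Create the private DNS zone, link it to the virtual network, and add the endpoint's records.
+az network private-dns zone create --resource-group $RG --name "privatelink.blob.core.windows.net"
+az network private-dns link vnet create --resource-group $RG \
+  --zone-name "privatelink.blob.core.windows.net" --name migrate-vnet-link \
+  --virtual-network MigrateVNet --registration-enabled false
+az network private-endpoint dns-zone-group create --resource-group $RG \
+  --endpoint-name migratecache-pe --name default \
+  --private-dns-zone "privatelink.blob.core.windows.net" --zone-name blob
+```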
+
+If the user who created the private endpoint is also the storage account owner, the private endpoint creation will be auto-approved. Otherwise, the owner of the storage account must approve the private endpoint for use. To approve or reject a requested private endpoint connection, on the storage account page under **Networking**, go to **Private endpoint connections**.
+
+Review the private endpoint connection state before you continue.
+
+![Screenshot that shows the Private endpoint approval status.](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
+
+After you've created the private endpoint, use the dropdown list in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link.
+
+Ensure that the on-premises replication appliance has network connectivity to the storage account on its private endpoint. To validate the private link connection, perform a DNS resolution of the storage account endpoint (private link resource FQDN) from the on-premises server hosting the replication appliance and ensure that it resolves to a private IP address. Learn how to [verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution).
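+
+For example, assuming a cache storage account named `migratecache` (a placeholder), run the following from the server hosting the replication appliance:
+
+```bash
+# Run on the on-premises server that hosts the replication appliance.
+nslookup migratecache.blob.core.windows.net
+# The name should resolve through privatelink.blob.core.windows.net to a private
+# IP address from the private endpoint's subnet, not to a public IP address.
+```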
+
+## Next steps
+- [Migrate VMs](tutorial-migrate-physical-virtual-machines.md#migrate-vms)
+- Complete the [migration process](tutorial-migrate-physical-virtual-machines.md#complete-the-migration).
+- Review the [post-migration best practices](tutorial-migrate-physical-virtual-machines.md#post-migration-best-practices).
+++
migrate Server Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/server-migrate-overview.md
Use these selected comparisons to help you decide which method to use. You can a
**Disk limits** | OS disk: 2 TB<br/><br/> Data disk: 32 TB<br/><br/> Maximum disks: 60 | OS disk: 2 TB<br/><br/> Data disk: 32 TB<br/><br/> Maximum disks: 63 **Passthrough disks** | Not supported | Supported **UEFI boot** | Supported. | Supported.
-**Connectivity** | Public internet <br/> ExpressRoute with Microsoft peering <br/> <br/> [Learn how](./replicate-using-expressroute.md) to use private endpoints for replication over an ExpressRoute private peering or a S2S VPN connection. |Public internet <br/> ExpressRoute with Private peering <br/> ExpressRoute with Microsoft peering <br/> Site-to-site VPN
+**Connectivity** | Public internet <br/> ExpressRoute with Private peering <br/> ExpressRoute with Microsoft peering <br/> Site-to-site VPN |Public internet <br/> ExpressRoute with Private peering <br/> ExpressRoute with Microsoft peering <br/> Site-to-site VPN
## Compare deployment steps
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md
Make sure the private endpoint is in an approved state.
2. The properties page contains the list of private endpoints and private link FQDNs that were automatically created by Azure Migrate. 3. Select the private endpoint you want to diagnose.
- a. Validate that the connection state is Approved.
- b. If the connection is in a Pending state, you need to get it approved.
- c. You may also navigate to the private endpoint resource and review if the virtual network matches the Migrate project private endpoint virtual network.
+ a. Validate that the connection state is Approved.
+ b. If the connection is in a Pending state, you need to get it approved.
+ c. You may also navigate to the private endpoint resource and review if the virtual network matches the Migrate project private endpoint virtual network.
- ![View Private Endpoint connection](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection.png)
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection.png" alt-text="Screenshot of View Private Endpoint connection.":::
## Validate the data flow through the private endpoints
Review the data flow metrics to verify the traffic flow through private endpoint
The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You may require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [See this article](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues.
-To validate the private link connection, perform a DNS resolution of the Azure Migrate resource endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address.
-The private endpoint details and private link resource FQDNs' information is available in the Discovery and Assessment and Server Migration properties pages. Select **Download DNS settings** to view the list.
+To validate the private link connection, perform a DNS resolution of the Azure Migrate resource endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address.
- ![Azure Migrate: Discovery and Assessment Properties](./media/how-to-use-azure-migrate-with-private-endpoints/server-assessment-properties.png)
+**To obtain the private endpoint details to verify DNS resolution:**
- [![Azure Migrate: Server Migration Properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-expanded.png#lightbox)
+1. The private endpoint details and private link resource FQDN information are available on the Discovery and Assessment and Server Migration properties pages. Select **Download DNS settings** to view the list. Note that only the private endpoints automatically created by Azure Migrate are listed.
+
+ ![Azure Migrate: Discovery and Assessment Properties](./media/how-to-use-azure-migrate-with-private-endpoints/server-assessment-properties.png)
+
+ [![Azure Migrate: Server Migration Properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-inline.png)](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-migration-properties-expanded.png#lightbox)
+
+2. If you have created a private endpoint for the storage account(s) for replicating over a private network, you can obtain the private link FQDN and IP address as illustrated below.
+
+ - Go to the **Storage account** > **Networking** > **Private endpoint connections** and select the private endpoint created.
+
+ :::image type="content" source="./media/troubleshoot-network-connectivity/private-endpoint.png" alt-text="Screenshot of the Private Endpoint connections.":::
+
+ - Go to **Settings** > **DNS configuration** to obtain the storage account FQDN and private IP address.
+
+ :::image type="content" source="./media/troubleshoot-network-connectivity/private-link-info.png" alt-text="Screenshot showing the Private Link FQDN information.":::
An illustrative example for DNS resolution of the storage account private link FQDN. -- Enter _nslookup ```<storage-account-name>_.blob.core.windows.net.``` Replace ```<storage-account-name>``` with the name of the storage account used for Azure Migrate.
+- Enter ```nslookup <storage-account-name>.blob.core.windows.net```. Replace ```<storage-account-name>``` with the name of the storage account used for Azure Migrate.
You'll receive a message like this:
- ![DNS resolution example](./media/how-to-use-azure-migrate-with-private-endpoints/dns-resolution-example.png)
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/dns-resolution-example.png" alt-text="Screenshot showing a DNS resolution example.":::
- A private IP address of 10.1.0.5 is returned for the storage account. This address belongs to the private endpoint virtual network subnet.
You can verify the DNS resolution for other Azure Migrate artifacts using a simi
If the DNS resolution is incorrect, follow these steps:
-**Recommended** for testing: You can manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses.
+**Recommended**: Manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses.
- If you use a custom DNS, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see [private endpoint overview: DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration). - If you use Azure-provided DNS servers, refer to the below section for further troubleshooting. > [!Tip]
-> You can manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses. This option is recommended only for testing. <br/>
+> For testing, you can manually update your source environment DNS records by editing the DNS hosts file on your on-premises appliance with the private link resource FQDNs and their associated private IP addresses. <br/>
## Validate the Private DNS Zone
If the DNS resolution is incorrect, follow these steps:
1. **Proxy server considerations**: If the appliance uses a proxy server for outbound connectivity, you may need to validate your network settings and configurations to ensure the private link URLs are reachable and can be routed as expected.
- - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./how-to-use-azure-migrate-with-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
+ - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./discover-and-assess-using-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
- Alternatively, if the proxy server is for all outbound traffic, make sure the proxy server can resolve the private link FQDNs to their respective private IP addresses. For a quick workaround, you can manually update the DNS records on the proxy server with the DNS mappings and the associated private IP addresses, as shown above. This option is recommended for testing. 1. If the issue still persists, [refer to this section](#validate-the-private-dns-zone) for further troubleshooting.
In addition to the URLs above, the appliance needs access to the following URLs
|*.windows.net <br/> *.msftauth.net <br/> *.msauth.net <br/> *.microsoft.com <br/> *.live.com <br/> *.office.com <br/> *.microsoftonline.com <br/> *.microsoftonline-p.com <br/> | Used for access control and identity management by Azure Active Directory |management.azure.com | For triggering Azure Resource Manager deployments |*.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring.
-|aka.ms/* (optional) | Allow access to aka links; used to download and install the latest updates for appliance services
+|aka.ms/* (optional) | Allow access to *also known as* links; used to download and install the latest updates for appliance services
|download.microsoft.com/download | Allow downloads from Microsoft download center - Open the command line and run the following nslookup command to verify privatelink connectivity to the URLs listed in the DNS settings file. Repeat this step for all URLs in the DNS settings file.
If the DNS resolution is incorrect, follow these steps:
1. **Proxy server considerations**: If the appliance uses a proxy server for outbound connectivity, you may need to validate your network settings and configurations to ensure the private link URLs are reachable and can be routed as expected.
- - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./how-to-use-azure-migrate-with-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
+ - If the proxy server is for internet connectivity, you may need to add traffic forwarders or rules to bypass the proxy server for the private link FQDNs. [Learn more](./discover-and-assess-using-private-endpoints.md#set-up-prerequisites) on how to add proxy bypass rules.
- Alternatively, if the proxy server is for all outbound traffic, make sure the proxy server can resolve the private link FQDNs to their respective private IP addresses. For a quick workaround, you can manually update the DNS records on the proxy server with the DNS mappings and the associated private IP addresses, as shown above. This option is recommended for testing. 1. If the issue still persists, [refer to this section](#validate-the-private-dns-zone) for further troubleshooting.
migrate Tutorial Discover Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md
Set up a new Azure Migrate project if you don't have one.
![Boxes for project name and region](./media/tutorial-discover-import/new-project.png) > [!Note]
- > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](how-to-use-azure-migrate-with-private-endpoints.md#create-a-project-with-private-endpoint-connectivity)
+ > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](discover-and-assess-using-private-endpoints.md#create-a-project-with-private-endpoint-connectivity)
7. Select **Create**. 8. Wait a few minutes for the Azure Migrate project to deploy.
migrate Tutorial Migrate Aws Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md
A Mobility service agent must be installed on the source AWS VMs to be migrated.
10. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the drop down if you'd like to specify a different storage account to use as the cache storage account for replication. <br/> > [!NOTE] >
- > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#grant-access-permissions-to-the-recovery-services-vault)
- > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#create-a-private-endpoint-for-the-storage-account-optional)
+ > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault)
+ > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1)
11. In **Availability options**, select: - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machine in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
migrate Tutorial Migrate Gcp Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-gcp-virtual-machines.md
Set up an assessment as follows:
- Azure Migrate uses password authentication when discovering GCP VM instances. GCP instances don't support password authentication by default. Before you can discover, you need to enable password authentication. - For Windows machines, allow WinRM port 5985 (HTTP). This allows remote WMI calls. - For Linux machines:
- 1. Sign into each Linux machine.
- 2. Open the sshd_config file : vi /etc/ssh/sshd_config
+ 1. Sign in to each Linux machine.
+ 2. Open the sshd_config file: vi /etc/ssh/sshd_config
3. In the file, locate the **PasswordAuthentication** line, and change the value to **yes**. 4. Save the file and close it. Restart the ssh service. - If you are using a root user to discover your Linux VMs, ensure root login is allowed on the VMs. 1. Sign in to each Linux machine
- 2. Open the sshd_config file : vi /etc/ssh/sshd_config
+ 2. Open the sshd_config file: vi /etc/ssh/sshd_config
3. In the file, locate the **PermitRootLogin** line, and change the value to **yes**. 4. Save the file and close it. Restart the ssh service.
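+
+For reference, a sketch of the resulting settings and the restart step (the restart command varies by distribution):
+
+```bash
+# After the edits above, /etc/ssh/sshd_config should contain:
+#   PasswordAuthentication yes
+#   PermitRootLogin yes
+# Restart the SSH service to apply the change, for example:
+sudo systemctl restart sshd
+```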
A Mobility service agent must be installed on the source GCP VMs to be migrated.
8. In **Target settings**, select the subscription, and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration. 9. In **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration.
-10. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the drop down if you'd like to specify a different storage account to use as the cache storage account for replication. <br/>
+10. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the dropdown if you'd like to specify a different storage account to use as the cache storage account for replication. <br/>
> [!NOTE] >
- > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#grant-access-permissions-to-the-recovery-services-vault)
- > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#create-a-private-endpoint-for-the-storage-account-optional)
+ > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault)
+ > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1)
11. In **Availability options**, select: - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machine in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
After you've verified that the test migration works as expected, you can migrate
## Troubleshooting / Tips
-**Question:** I cannot see my GCP VM in the discovered list of servers for migration
+**Question:** I cannot see my GCP VM in the discovered list of servers for migration.
**Answer:** Check if your replication appliance meets the requirements. Make sure Mobility Agent is installed on the source VM to be migrated and is registered with the Configuration Server. Check the firewall rules to enable a network path between the replication appliance and source GCP VMs.
-**Question:** How do I know if my VM was successfully migrated
+**Question:** How do I know if my VM was successfully migrated?
**Answer:** Post-migration, you can view and manage the VM from the Virtual Machines page. Connect to the migrated VM to validate.
-**Question:** I am unable to import VMs for migration from my previously created Server Assessment results
+**Question:** I am unable to import VMs for migration from my previously created Server Assessment results.
**Answer:** Currently, we do not support the import of assessment for this workflow. As a workaround, you can export the assessment and then manually select the VM recommendation during the Enable Replication step.
-**Question:** I am getting the error ΓÇ£Failed to fetch BIOS GUIDΓÇ¥ while trying to discover my GCP VMs
+**Question:** I am getting the error "Failed to fetch BIOS GUID" while trying to discover my GCP VMs.
**Answer:** Use root login for authentication and not any pseudo user. If you are not able to use a root user, ensure the required capabilities are set on the user, as per the instructions provided in the [support matrix](migrate-support-matrix-physical.md#physical-server-requirements). Also review supported operating systems for GCP VMs.
-**Question:** My replication status is not progressing
+**Question:** My replication status is not progressing.
**Answer:** Check if your replication appliance meets the requirements. Make sure you've enabled the required ports on your replication appliance: TCP port 9443 and HTTPS port 443 for data transport. Ensure that there are no stale duplicate versions of the replication appliance connected to the same project.
-**Question:** I am unable to Discover GCP Instances using Azure Migrate due to HTTP status code of 504 from the remote Windows management service
+**Question:** I am unable to Discover GCP Instances using Azure Migrate due to HTTP status code of 504 from the remote Windows management service.
**Answer:** Make sure to review the Azure Migrate appliance requirements and URL access needs. Make sure no proxy settings are blocking the appliance registration.
-**Question:** Do I have to make any changes before I migrate my GCP VMs to Azure
+**Question:** Do I have to make any changes before I migrate my GCP VMs to Azure?
**Answer:** You may have to make these changes before migrating your GCP VMs to Azure: - If you are using cloud-init for your VM provisioning, you may want to disable cloud-init on the VM before replicating it to Azure. The provisioning steps performed by cloud-init on the VM may be GCP-specific and won't be valid after the migration to Azure.
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
Now, select machines for migration.
8. In **Target settings**, select the subscription, and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration. 9. In **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration. 10. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the drop down if you'd like to specify a different storage account to use as the cache storage account for replication. <br/>
- > [!NOTE]
- >
- > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#grant-access-permissions-to-the-recovery-services-vault)
- > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#create-a-private-endpoint-for-the-storage-account-optional)
+ >[!NOTE]
+ > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault)
+ > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1)
11. In **Availability options**, select: - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machine in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones
migrate Tutorial Migrate Vmware Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-agent.md
Select VMs for migration.
12. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the drop down if you'd like to specify a different storage account to use as the cache storage account for replication. <br/> > [!NOTE] >
- > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#grant-access-permissions-to-the-recovery-services-vault)
- > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](how-to-use-azure-migrate-with-private-endpoints.md#create-a-private-endpoint-for-the-storage-account-optional)
+ > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault)
+ > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1)
13. In **Availability options**, select: - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machine in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (March 2022)
+- Perform agentless VMware VM discovery, assessments, and migrations over a private network using Azure Private Link. [Learn more](how-to-use-azure-migrate-with-private-endpoints.md).
++ ## Update (February 2022) - General Availability: Migrate Windows and Linux Hyper-V virtual machines with large data disks (up to 32 TB in size). - Azure Migrate is now supported in Azure China. [Learn more](/azure/china/overview-operations#azure-operations-in-china).
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
Here are some concepts to be familiar with when using virtual networks with MySQ
> [!IMPORTANT] > Private DNS zone names must end with `mysql.database.azure.com`.
- >If you are connecting to the Azure Database for MySQL - Flexible sever with SSL and are using an option to perform full verification (sslmode=VERTIFY_IDENTITY) with certificate subject name, use <servername>.mysql.database.azure.com in your connection string.
+ >If you are connecting to the Azure Database for MySQL - Flexible Server with SSL and are using an option to perform full verification (sslmode=VERIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=REQUI
> [!Note] > Confirm that the value passed to `--ssl-ca` matches the file path for the certificate you saved.
->If you are connecting to the Azure Database for MySQL- Flexible with SSL and are using an option to perform full verification (sslmode=VERTIFY_IDENTITY) with certificate subject name, use <servername>.mysql.database.azure.com in your connection string.
+>If you are connecting to the Azure Database for MySQL - Flexible Server with SSL and are using an option to perform full verification (sslmode=VERIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
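+
+For example, a connection that performs full verification might look like the following sketch (`mydemoserver`, `mydemouser`, and the CA file path are placeholders):
+
+```bash
+# Connect with full certificate verification; point --ssl-ca at the CA
+# certificate file you downloaded for the server.
+mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p \
+  --ssl-mode=VERIFY_IDENTITY --ssl-ca=/path/to/DigiCertGlobalRootCA.crt.pem
+```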
orbital Geospatial Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/geospatial-reference-architecture.md
+
+ Title: 'End-to-end geospatial reference architecture'
+description: 'Concepts: shows how to architect end-to-end geospatial on Azure '
++++ Last updated : 03/15/2022+
+# Customer intent: As a geospatial architect, I'd like to understand how to architect a solution on Azure.
+
+# End-to-end geospatial storage, analysis, and visualization
+
+Geospatial data comes in various forms and requires a wide range of capabilities to process, analyze, and visualize it. While Geographic Information System (GIS) software is common, it is largely not cloud-native. Most GIS runs on the desktop, which limits its scale and performance. While there have been advances in moving data to the back end, these systems remain IaaS-bound, which makes them difficult to scale.
+
+This article provides a high-level approach to using cloud-native capabilities along with open-source software (OSS) and commercial options. Three personas are considered, each an architect looking for a high-level flow without the specifics of an implementation:
+
+- General geospatial architect. This architect is looking for a means to implement geospatial but may not have a background in GIS or Remote Sensing.
+- OSS geospatial architect. This architect is dedicated to an open-source software (OSS) solution but takes advantage of the cloud for compute and storage.
+- COTS geospatial architect. This architect is dedicated to COTS but also takes advantage of cloud compute and storage.
+
+## Potential use cases
+
+The solutions provided in these architectures apply to many use cases:
+
+- Processing, storing, and providing access to large amounts of raster data, such as layers or climate data.
+- Combining entity location data from ERP systems with GIS reference data, or including vector data, arrays, point clouds, and so on.
+- Storing Internet of Things (IoT) telemetry from moving devices, and analyzing it in real time or in batch.
+- Running analytical geospatial queries.
+- Embedding curated and contextualized geospatial data in web apps.
+- Processing data from drones, aerial photography, satellite imagery, LiDAR, gridded model results, and so on.
+
+## General geospatial architecture
+
+Azure has many native geospatial capabilities. In this diagram and the ones that follow, you'll find the high-level stages that geospatial data undergoes. First, you have the data source, an ingestion step, and then stages where the data is stored, transformed, served, published, and finally consumed. Note the globe icon beside the services with native geospatial capabilities. Also, these diagrams shouldn't be considered linear processes. One may start in the Transform column, publish and consume the data, and then create additional derived datasets, which requires going back to a previous column.
+
+ :::image type="content" source="media/geospatial-overview.png" alt-text="Geospatial On Azure" lightbox="media/geospatial-overview.png":::
+
+This architecture flow assumes that the data may be coming from databases, files, or streaming sources and isn't stored in a native GIS format. Once the data is ingested with Azure Data Factory, or via Azure IoT, Event Hubs, and Stream Analytics, it can be stored permanently in warm storage with Azure SQL, Azure SQL Managed Instance, Azure Database for PostgreSQL, or Azure Data Lake. From there, the data can be transformed and processed in batch with Azure Batch or a Synapse Spark pool, both of which can be automated through an Azure Data Factory or Synapse pipeline. Real-time data can be further transformed or processed with Stream Analytics or Azure Maps, or brought into context with Azure Digital Twins. Once the data is transformed, it can again be served for additional uses in Azure SQL DB or Azure Database for PostgreSQL, Synapse SQL pool (for abstracted non-geospatial data), Cosmos DB, or Azure Data Explorer. Once ready, the data can be queried directly through the database API, but frequently a publish layer is used. The Azure Maps Data API suffices for small datasets; otherwise, a non-native service based on OSS or COTS can be introduced for accessing the data through web services or desktop applications. Finally, the Azure Maps Web SDK hosted in Azure App Service allows for geovisualization. Another option is to use Azure Maps in Power BI. Lastly, HoloLens and Azure Spatial Anchors can be used to view the data and place it in the real world for virtual reality (VR) and augmented reality (AR) experiences.
+
+Many of these services are optional and can be supplemented with OSS to reduce cost while maintaining scalability, or with third-party tools to take advantage of their specific capabilities. The next section addresses this need.
+
+## Third-party and open-source software geospatial architecture
+
+This pattern uses Azure native geospatial capabilities while also taking advantage of third-party tools and open-source software tools.
+
+The most significant difference between this approach and the previous flow diagram is the use of FME from Safe Software, Inc., which can be acquired from the Azure Marketplace. FME allows geospatial architects to integrate various types of geospatial data, including CAD (for Azure Maps Creator), GIS, BIM, 3D, point clouds, LIDAR, and more. FME offers 450+ integration options and can speed up the creation of many data transformations. Implementation, however, is based on a virtual machine and therefore has limited scaling capabilities. FME transformations can be automated using FME API calls from Azure Data Factory and/or Azure Functions. Once the data is loaded in Azure SQL, for example, it can be served with GeoServer, published as a Web Feature Service (vector) or Web Map Tile Service (raster), and visualized in the Azure Maps Web SDK or analyzed with QGIS on the desktop along with the other [Azure Maps base maps](../azure-maps/supported-map-styles.md).
+
+ :::image type="content" source="media/geospatial-3rd-open-source-software.png" alt-text="Diagram of Azure and 3rd Party tools and open-source software." lightbox="media/geospatial-3rd-open-source-software.png":::
+
+## COTS geospatial architecture: Esri with static and streaming sources
+
+The next approach we'll look at uses commercial GIS as the basis for the solution. Esri's technology, available from the Azure Marketplace, is the foundation for this architecture, although other commercial software can fit the same patterns. As before, the sources, ingestion, (raw) store, and load/serve stages largely remain the same. The data can also be transformed with ArcGIS Pro on a standalone computer (VM) or as part of a larger solution with [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/). The data can be published via [ArcGIS Enterprise](https://enterprise.arcgis.com/en/) or with ArcGIS Enterprise on Kubernetes (Azure Kubernetes Service). Imagery can be processed on IaaS with ArcGIS Image as part of the ArcGIS Enterprise deployment. The data can be consumed in web apps hosted in Azure App Service with the ArcGIS JavaScript SDK, by an ArcGIS Pro end user, with the ArcGIS Runtime mobile SDK, or with ArcGIS for Power BI. Likewise, users can consume the data with ArcGIS Online.
+
+ :::image type="content" source="media/geospatial-esri-static.png" alt-text="Diagram of Esri with static and streaming sources." lightbox="media/geospatial-esri-static.png":::
+
+## COTS geospatial imagery architecture: Esri's ArcGIS Image and Azure Orbital
+
+The next architecture involves Azure Orbital and Esri's ArcGIS Image. With this end-to-end flow, Azure Orbital allows you to schedule contacts with satellites and downlink the data into a VM or stream it to Azure Event Hubs. Besides directly streamed satellite data, drone or other imagery data can be brought onto the platform and processed. The raw data can be stored in Azure NetApp Files, an Azure Storage account (blob), or a database such as Azure Database for PostgreSQL. Depending on the satellite and sensor platform, the data is transformed from a Level 0 to a Level 2 dataset (see [NASA Data Processing Levels](https://earthdata.nasa.gov/collaborate/open-data-services-and-software/data-information-policy/data-levels)); the level required depends on the satellite and sensor. Next, ArcGIS Pro can transform the data into a Mosaic Dataset. The Mosaic Dataset is then turned into an image service with ArcGIS Enterprise (on VMs or Kubernetes). ArcGIS Image Server can serve the data directly as an image service, or a user can consume the image service via [ArcGIS Image for ArcGIS Online](https://www.esri.com/en-us/cp/arcgis-image-for-arcgis-online/overview).
+
+ :::image type="content" source="media/geospatial-esri-image.png" alt-text="Diagram of Esri's ArcGIS Image and Azure Orbital." lightbox="media/geospatial-esri-image.png":::
+
+## COTS/Open-source software geospatial imagery architecture: Azure Space to Analysis Ready Dataset
+
+When Analysis Ready Datasets are made available through APIs that enable search and query capabilities, as with Microsoft's Planetary Computer, there is no need to first download the data from a satellite. However, if low lead times are required for imagery, acquiring the data directly from Azure Space is ideal because a satellite operator or mission-driven organization can schedule a contact with a satellite via Azure Orbital. The process for going from Level 0 to a Level 2 Analysis Ready Dataset varies by the satellite and the imagery products; multiple tools and intermediate steps are often required. Azure Batch or another compute resource can process the data in a cluster and store the resulting data. The data may go through multiple steps before it's ready to be used in ArcGIS, QGIS, or some other geovisualization tool. For example, once the data is in a [Cloud Optimized GeoTIFF](https://www.cogeo.org/) (COG) format, it's served up via a storage account or Azure Data Lake and made accessible and queryable via the [STAC API](https://stacspec.org/), which can be deployed on Azure as a service, with AKS among others. Alternatively, the data is published as a Web Map Tile Service with GeoServer. Consumers can then access the data in ArcGIS Pro or QGIS, or via a web app with Azure Maps or Esri's mobile and web SDKs. A sketch of a STAC query follows the diagram below.
+
+ :::image type="content" source="media/geospatial-space-ard.png" alt-text="Diagram of Azure Space to Analysis Ready Dataset." lightbox="media/geospatial-space-ard.png":::
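+
+To make the STAC API concrete, the following sketch searches the Planetary Computer STAC endpoint with curl. The collection name, bounding box, and date range are assumptions chosen for illustration:
+
+```bash
+# Search a STAC catalog for Sentinel-2 L2A items over an assumed area and date range.
+curl -s "https://planetarycomputer.microsoft.com/api/stac/v1/search" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "collections": ["sentinel-2-l2a"],
+        "bbox": [-122.5, 47.4, -122.2, 47.7],
+        "datetime": "2022-01-01/2022-03-01",
+        "limit": 5
+      }'
+```
+
+The response is a GeoJSON FeatureCollection of matching items, each with links to the underlying assets.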
+
+## Components
+
+- [Azure Event Hubs](../event-hubs/event-hubs-about.md) is a fully managed streaming platform for big data. This platform as a service (PaaS) offers a partitioned consumer model. Multiple applications can use this model to process the data stream at the same time.
+- [Azure Orbital](../orbital/overview.md) is a fully managed, cloud-based ground station as a service that allows you to streamline your operations by ingesting space data directly into Azure.
+- [Azure Data Factory](../data-factory/introduction.md) is an integration service that works with data from disparate data stores. You can use this fully managed, serverless platform to create, schedule, and orchestrate data transformation workflows.
+- [Azure Cosmos DB](../cosmos-db/introduction.md) is a fully managed NoSQL database service for modern app development.
+- [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) is an enterprise analytics service that accelerates time to insight across data warehouses and big data systems.
+- [Azure Digital Twins](../digital-twins/overview.md) is a platform as a service offering that enables the creation of twin graphs based on digital models of entire environments, which could be buildings, factories, farms, energy networks, railways, stadiums, or entire cities.
+- [Azure Virtual Desktop](../virtual-desktop/overview.md) is a desktop and app virtualization service that runs on the cloud.
+- [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks) is a data analytics platform. Its fully managed Spark clusters process large streams of data from multiple sources. Azure Databricks can transform geospatial data at large scale for use in analytics and data visualization.
+- [Azure Batch](https://azure.microsoft.com/services/batch/) allows you to run large-scale parallel and high-performance computing jobs.
+- [Azure Data Lake Storage](../data-lake-store/data-lake-store-overview.md) is a scalable and secure data lake for high-performance analytics workloads. This service can manage multiple petabytes of information while sustaining hundreds of gigabits of throughput. The data typically comes from multiple, heterogeneous sources and can be structured, semi-structured, or unstructured.
+- [Azure SQL Database](https://azure.microsoft.com/products/azure-sql/database/) is a PaaS version of SQL Server and is an intelligent, scalable, relational database service.
+- [Azure Database for PostgreSQL](../postgresql/overview.md) is a fully managed relational database service that's based on the community edition of the open-source [PostgreSQL](https://www.postgresql.org/) database engine.
+- [PostGIS](https://www.postgis.net/) is an extension for the PostgreSQL database that integrates with GIS servers. PostGIS can run SQL location queries that involve geographic objects.
+- [Power BI](/power-bi/fundamentals/power-bi-overview) is a collection of software services and apps. You can use Power BI to connect unrelated sources of data and create visuals of them.
+- The [Azure Maps visual for Power BI](../azure-maps/power-bi-visual-get-started.md) provides a way to enhance maps with spatial data. You can use this visual to show how location data affects business metrics.
+- [App Service](../app-service/overview.md) and its [Web Apps](../app-service/overview.md) feature provide a framework for building, deploying, and scaling web apps. The App Service platform offers built-in infrastructure maintenance, security patching, and scaling.
+- [GIS data APIs in Azure Maps](../azure-maps/about-azure-maps.md) store and retrieve map data in formats like GeoJSON and vector tiles.
+- [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) is a fast, fully managed data analytics service that can work with [large volumes of data](/azure/data-explorer/engine-v3). This service originally focused on time series and log analytics. It now also handles diverse data streams from applications, websites, IoT devices, and other sources. [Geospatial functionality](https://azure.microsoft.com/updates/adx-geo-updates/) in Azure Data Explorer provides options for rendering map data.
+- [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) is an enterprise-class, high-performance, metered file Network Attached Storage (NAS) service.
+- [Quantum GIS](https://www.qgis.org/en/site/) is a free and open-source desktop GIS that supports editing, analysis, and geovisualization of geospatial data.
+- [ArcGIS Enterprise](https://enterprise.arcgis.com/en/get-started/latest/windows/what-is-arcgis-enterprise-.htm) is a platform for mapping and geovisualization, analytics and data management, which hosts data, applications, and custom low-code/no-code applications. It works along with the desktop GIS called ArcGIS Pro or ArcGIS Desktop (not included here because it has been supplanted by ArcGIS Pro).
+- [ArcGIS Pro](https://www.esri.com/arcgis/products/arcgis-pro/overview) is Esri's professional desktop GIS application. It allows power users to explore, geovisualize, and analyze data. It includes 2D and 3D capabilities and runs best on Azure High Performance Compute VMs such as the NV series. The use of ArcGIS Pro can be scaled with Azure Virtual Desktop.
+- [ArcGIS Image for ArcGIS Online](https://www.esri.com/en-us/cp/arcgis-image-for-arcgis-online/overview) is an extension to ArcGIS Online (SaaS) which allows for geovisualization, hosting, publishing, and analysis.
+- [STAC](https://stacspec.org/) API specification allows you to query and retrieve raster data via a catalog.
+
+Although not shown in the diagrams above, Azure Monitor, Log Analytics and Key Vault would also be part of a broader solution.
+
+- [Azure Monitor](../azure-monitor/overview.md) collects data on environments and Azure resources. This diagnostic information is helpful for maintaining availability and performance. Two data platforms make up Monitor:
+ - [Azure Monitor Logs](../azure-monitor/logs/log-analytics-overview.md) records and stores log and performance data.
+ - [Azure Monitor Metrics](../azure-monitor/essentials/metrics-getting-started.md) collects numerical values at regular intervals.
+- [Azure Log Analytics](../azure-monitor/logs/log-analytics-overview.md) is an Azure portal tool that runs queries on Monitor log data. Log Analytics also provides features for charting and statistically analyzing query results.
+- [Key Vault](../key-vault/general/basic-concepts.md) stores and controls access to secrets such as tokens, passwords, and API keys. Key Vault also creates and controls encryption keys and manages security certificates.
+
+## Alternatives
+
+Various Spark libraries are available for working with geospatial data on Azure Databricks and Synapse Spark Pools. See these libraries:
+
+- [Apache Sedona (GeoSpark)](http://sedona.apache.org/)
+- [GeoPandas](https://geopandas.org/)
+- [GeoTrellis](https://geotrellis.io/)
+
+But [other solutions also exist for processing and scaling geospatial workloads with Azure Databricks](https://databricks.com/blog/2019/12/05/processing-geospatial-data-at-scale-with-databricks.html).
+
+- Other Python libraries to consider include [PySAL](http://pysal.org/), [Rasterio](https://rasterio.readthedocs.io/en/latest/intro.html), [WhiteboxTools](https://www.whiteboxgeo.com/manual/wbt_book/intro.html), [Turf.js](https://turfjs.org/), [Pointpats](https://pointpats.readthedocs.io/en/latest/), [Raster Vision](https://docs.rastervision.io/en/0.13/), [EarthPy](https://earthpy.readthedocs.io/en/latest/index.html), [Planetary Computer](https://planetarycomputer.microsoft.com/), [PDAL](https://pdal.io/), etc.
+
+- [Vector tiles](https://github.com/mapbox/vector-tile-spec) provide an efficient way to display GIS data on maps. A solution could use PostGIS to dynamically query vector tiles. This approach works well for simple queries and result sets that contain well under 1 million records. But in the following cases, a different approach may be better:
+ - Your queries are computationally expensive.
+ - Your data doesn't change frequently.
+ - You're displaying large data sets.
+
+In these situations, consider using [Tippecanoe](https://github.com/mapbox/tippecanoe) to generate vector tiles. You can run Tippecanoe as part of your data processing flow, either as a container or with [Azure Functions](../azure-functions/functions-overview.md). You can make the resulting tiles available through APIs.
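+
+A minimal Tippecanoe invocation might look like this sketch (file names are placeholders):
+
+```bash
+# Generate a vector tileset from GeoJSON; -zg picks a maximum zoom automatically,
+# and --drop-densest-as-needed thins dense features so tiles stay within size limits.
+tippecanoe -o parcels.mbtiles -zg --drop-densest-as-needed parcels.geojson
+```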
+
+- Like Event Hubs, [Azure IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) can ingest large amounts of data. But IoT Hub also offers bi-directional communication capabilities with devices. If you receive data directly from devices but also send commands and policies back to devices, consider IoT Hub instead of Event Hubs.
+
+### Next steps
+
+- [Connect a WFS to Azure Maps](../azure-maps/spatial-io-connect-wfs-service.md)
+- [Geospatial clustering](/azure/data-explorer/kusto/query/geospatial-grid-systems)
+- [Explore ways to display data with Azure Maps.](https://samples.azuremaps.com/)
+
+### Related resources
+
+#### Related architectures
+
+- [Big data analytics with Azure Data Explorer](/azure/architecture/solution-ideas/articles/big-data-azure-data-explorer)
+- [Health data consortium on Azure](/azure/architecture/example-scenario/data/azure-health-data-consortium)
+- [DataOps for the modern data warehouse](/azure/architecture/example-scenario/data-warehouse/dataops-mdw)
+- [Azure Data Explorer interactive analytics](/azure/architecture/solution-ideas/articles/interactive-azure-data-explorer)
+
+#### Related guides
+
+- [Geospatial data processing and analytics](/azure/architecture/example-scenario/data/geospatial-data-processing-analytics-azure)
+- [Compare the machine learning products and technologies from Microsoft - Azure Databricks](/azure/architecture/data-guide/technology-choices/data-science-and-machine-learning#azure-databricks)
+- [Machine learning operations (MLOps) framework to scale up machine learning lifecycle with Azure Machine Learning](/azure/architecture/example-scenario/mlops/mlops-technical-paper)
+- [Azure Machine Learning decision guide for optimal tool selection](/azure/architecture/example-scenario/mlops/aml-decision-tree)
+- [Monitor Azure Databricks](/azure/architecture/databricks-monitoring/)
postgresql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-cli.md
Refer to the Azure CLI reference documentation <!--FIXME --> for the complete li
az postgres flexible-server create --subnet /subscriptions/{SubID}/resourceGroups/{ResourceGroup}/providers/Microsoft.Network/virtualNetworks/{VNetName}/subnets/{SubnetName} ``` > [!Note]
- > The virtual network and subnet should be in the same region and subscription as your flexible server.
+ > - The virtual network and subnet should be in the same region and subscription as your flexible server.
+ > - The virtual network should not have any resource lock set at the VNET or subnet level. Make sure to remove any lock (**Delete** or **Read only**) from your VNET and all subnets before creating the server in a virtual network, and you can set it back after server creation.
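+
+For example, you can check for and remove a lock with the Azure CLI before creating the server (resource names are placeholders), and re-create it after the server is provisioned:
+
+```azurecli
+# List locks on the virtual network.
+az lock list --resource-group MyResourceGroup --resource-name MyVNet \
+  --resource-type Microsoft.Network/virtualNetworks --output table
+
+# Remove a Delete or ReadOnly lock by name before creating the flexible server.
+az lock delete --name MyVNetLock --resource-group MyResourceGroup \
+  --resource-name MyVNet --resource-type Microsoft.Network/virtualNetworks
+```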
> [!IMPORTANT] > The names including `AzureFirewallSubnet`, `AzureFirewallManagementSubnet`, `AzureBastionSubnet` and `GatewaySubnet` are reserved names within Azure. Please do not use these as your subnet name.
postgresql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-portal.md
You can deploy your flexible server into a virtual network and subnet during ser
To create a flexible server in a virtual network, you need: - A [Virtual Network](../../virtual-network/quick-create-portal.md#create-a-virtual-network) > [!Note]
- > The virtual network and subnet should be in the same region and subscription as your flexible server.
+ > - The virtual network and subnet should be in the same region and subscription as your flexible server.
+ > - The virtual network should not have any resource lock set at the VNET or subnet level. Make sure to remove any lock (**Delete** or **Read only**) from your VNET and all subnets before creating the server in a virtual network, and you can set it back after server creation.
- To [delegate a subnet](../../virtual-network/manage-subnet-delegation.md#delegate-a-subnet-to-an-azure-service) to **Microsoft.DBforPostgreSQL/flexibleServers**. This delegation means that only Azure Database for PostgreSQL Flexible Servers can use that subnet. No other Azure resource types can be in the delegated subnet. - Add `Microsoft.Storage` to the service end point for the subnet delegated to Flexible servers. This is done by performing following steps:
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Previously updated : 03/14/2022 Last updated : 03/21/2022
One advantage of running your workload in Azure is global reach. The flexible se
| Australia Southeast | :heavy_check_mark: | :x: | :x: | | Brazil South | :heavy_check_mark: (v3 only) | :x: | :x: | | Canada Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Central India | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
+| Central India | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: |
| Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East Asia | :heavy_check_mark: | :heavy_check_mark: ** | :x: | | East US | :heavy_check_mark: | :heavy_check_mark: | :x: |
One advantage of running your workload in Azure is global reach. The flexible se
| Norway East | :heavy_check_mark: | :x: | :x: | | South Africa North | :heavy_check_mark: | :x: | :x: | | South Central US | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| South India | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark: | :x: $ | :x: | | Sweden Central | :heavy_check_mark: | :x: | :x: | | Switzerland North | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
purview Concept Data Owner Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-data-owner-policies.md
+
+ Title: Azure Purview data owner policies concepts
+description: Understand Azure Purview data owner policies
+++++ Last updated : 03/20/2022++
+# Concepts for Azure Purview data owner policies
++
+This article discusses concepts related to managing access to data sources in your data estate from within Azure Purview Studio.
+
+> [!Note]
+> This capability is different from access control for Azure Purview itself, which is described in [Access control in Azure Purview](catalog-permissions.md).
+
+## Overview
+
+Access policies in Azure Purview enable you to manage access to different data systems across your entire data estate. For example:
+
+A user needs read access to an Azure Storage account that has been registered in Azure Purview. You can grant this access directly in Azure Purview by creating a data access policy through the **Policy management** app in Azure Purview Studio.
+
+Data access policies can be enforced through Purview on data systems that have been registered for policy.
+
+## Azure Purview policy concepts
+
+### Azure Purview policy
+
+A **policy** is a named collection of policy statements. When a policy is published to one or more data systems under Purview's governance, it's then enforced by them. A policy definition includes a policy name, description, and a list of one or more policy statements.
+
+### Policy statement
+
+A **policy statement** is a human-readable instruction that dictates how the data source should handle a specific data operation. The policy statement comprises **Effect**, **Action**, **Data Resource**, and **Subject**.
+
+#### Action
+
+An **action** is the operation being permitted or denied as part of this policy. For example: Read or Modify. These high-level logical actions map to one (or more) data actions in the data system where they are enforced.
+
+#### Effect
+
+The **effect** indicates what the resultant effect of this policy should be. Currently, the only supported value is **Allow**.
+
+#### Data resource
+
+The **data resource** is the fully qualified data asset path to which a policy statement is applicable. It conforms to the following format:
+
+*/subscription/\<subscription-id>/resourcegroups/\<resource-group-name>/providers/\<provider-name>/\<data-asset-path>*
+
+Azure Storage data-asset-path format:
+
+*Microsoft.Storage/storageAccounts/\<account-name>/blobservice/default/containers/\<container-name>*
+
+Azure SQL DB data-asset-path format:
+
+*Microsoft.Sql/servers/\<server-name>*
+
+#### Subject
+
+The **subject** is the end-user identity from Azure Active Directory for whom this policy statement is applicable. This identity can be a service principal, an individual user, a group, or a managed service identity (MSI).
+
+### Example
+
+Deny Read on Data Asset:
+*/subscription/finance/resourcegroups/prod/providers/Microsoft.Storage/storageAccounts/finDataLake/blobservice/default/containers/FinData to group Finance-analyst*
+
+In the above policy statement, the effect is *Deny*, the action is *Read*, the data resource is the Azure Storage container *FinData*, and the subject is the Azure Active Directory group *Finance-analyst*. If a user who belongs to this group attempts to read data from the storage container *FinData*, the request will be denied.
+
+### Hierarchical enforcement of policies
+
+The data resource specified in a policy statement is hierarchical by default. This means that the policy statement applies to the data object itself and to **all** the child objects it contains. For example, a policy statement on an Azure Storage container applies to all the blobs within it.
+
+### Policy combining algorithm
+
+Azure Purview can have different policy statements that refer to the same data asset. When evaluating a decision for data access, Azure Purview combines all the applicable policies and provides a consolidated decision. The combining strategy picks the most restrictive policy.
+For example, let's assume two different policies on an Azure Storage container *FinData*, as follows:
+
+Policy 1 - *Allow Read on Data Asset /subscription/…./containers/FinData
+To group Finance-analyst*
+
+Policy 2 - *Deny Read on Data Asset /subscription/…./containers/FinData
+To group Finance-contractors*
+
+Then let's assume that user 'user1', who is part of two groups, *Finance-analyst* and *Finance-contractors*, executes a call to the blob read API. Since both policies are applicable, Azure Purview will choose the most restrictive one, which is *Deny* of *Read*. Thus, the access request will be denied.
+
+> [!Note]
+> Currently, the only supported effect is **Allow**.
+
+## Policy publishing
+
+A newly created policy exists in a draft state and is visible only in Azure Purview. The act of publishing initiates enforcement of a policy in the specified data systems. Publishing is an asynchronous action that can take up to 2 minutes to take effect on the underlying data sources.
+
+A policy published to a data source could contain references to an asset belonging to a different data source. Such references will be ignored since the asset in question does not exist in the data source where the policy is applied.
+
+## Next steps
+Check the tutorials on how to create policies in Azure Purview that work on specific data systems such as Azure Storage:
+
+* [Access provisioning by data owner to Azure Storage datasets](tutorial-data-owner-policies-storage.md)
+* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
purview Concept Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-self-service-data-access-policy.md
This article helps you understand the Azure Purview self-service data access policy.
## Important limitations
-The self-service data access policy is only supported when the prerequisites mentioned in [data use governance](./how-to-enable-data-use-governance.md) are satisfied.
+The self-service data access policy is only supported when the prerequisites mentioned in [data use governance](./tutorial-data-owner-policies-storage.md) are satisfied.
## Overview
With self-service data access workflow, data consumers can not only find data as
A default self-service data access workflow template is provided with every Azure Purview account. The default template can be amended to add more approvers and/or set the approver's email address. For more information, see [Create and enable self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md).
-Whenever a data consumer requests access to a dataset, the notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from Azure purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. Self-service data access policy gets auto-generated only if the data source is registered for **data use governance**. The pre-requisites mentioned within the [data use governance](./how-to-enable-data-use-governance.md) have to be satisfied.
+Whenever a data consumer requests access to a dataset, a notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from the Azure Purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. A self-service data access policy is auto-generated only if the data source is registered for **data use governance**. The prerequisites mentioned in [data use governance](./tutorial-data-owner-policies-storage.md) have to be satisfied.
## Next steps If you would like to preview these features in your environment, follow the link below.-- [Enable data use governance](./how-to-enable-data-use-governance.md)
+- [Enable data use governance](./tutorial-data-owner-policies-storage.md)
+- [Create self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md) - [Working with policies at file level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166) - [Working with policies at folder level](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583)
purview How To Delete Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-delete-self-service-data-access-policy.md
This guide describes how to delete self-service data access policies that have b
Self-service policies must exist before they can be deleted. Refer to the articles below to create self-service policies -- [Enable Data Use Governance](./how-to-enable-data-use-governance.md)
+- [Enable Data Use Governance](./tutorial-data-owner-policies-storage.md)
- [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md) - [Approve self-service data access request](how-to-workflow-manage-requests-approvals.md)
purview How To View Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-view-self-service-data-access-policy.md
This guide describes how to view self-service data access policies that have bee
Self-service policies must exist before they can be viewed. Refer to the articles below to create self-service policies -- [Enable Data Use Governance](./how-to-enable-data-use-governance.md)
+- [Enable Data Use Governance](./tutorial-data-owner-policies-storage.md)
- [Create a self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md) - [Approve self-service data access request](how-to-workflow-manage-requests-approvals.md)
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-synapse-analytics.md
Last updated 11/10/2021-+ # Connect to and manage dedicated SQL pools in Azure Purview
-This article outlines how to register dedicated SQL pools(formerly SQL DW), and how to authenticate and interact with dedicated SQL pools in Azure Purview. For more information about Azure Purview, read the [introductory article](overview.md)
+This article outlines how to register dedicated SQL pools (formerly SQL DW), and how to authenticate and interact with dedicated SQL pools in Azure Purview. For more information about Azure Purview, read the [introductory article](overview.md)
> [!NOTE] > If you are looking to register and scan a dedicated SQL database within a Synapse workspace, you must follow instructions [here](register-scan-synapse-workspace.md).
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-resource-group.md
The limit for Azure Purview policies that can be enforced by Storage accounts is
## Next steps Check the blog, demo, and related tutorials:
+* [Concepts for Azure Purview data owner policies](./concept-data-owner-policies.md)
+* [Data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
* [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314) * [Demo of data owner access policies for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
-* [Fine-grain data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
This section contains a reference of how actions in Azure Purview data policies
## Next steps Check the blog, demo, and related tutorials:
-* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
* [Demo of access policy for Azure Storage](/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
+* [Concepts for Azure Purview data owner policies](./concept-data-owner-policies.md)
+* [Enable Azure Purview data owner policies on all data sources in a subscription or a resource group](./tutorial-data-owner-policies-resource-group.md)
* [Blog: What's New in Azure Purview at Microsoft Ignite 2021](https://techcommunity.microsoft.com/t5/azure-purview/what-s-new-in-azure-purview-at-microsoft-ignite-2021/ba-p/2915954) * [Blog: Accessing data when folder level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-folder-level-permission/ba-p/3109583) * [Blog: Accessing data when file level permission is granted](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-accessing-data-when-file-level-permission/ba-p/3102166)
search Search Query Partial Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-partial-matching.md
Previously updated : 12/03/2020 Last updated : 03/22/2022 # Partial term search and patterns with special characters (hyphens, wildcard, regex, patterns)
-A *partial term search* refers to queries consisting of term fragments, where instead of a whole term, you might have just the start, middle, or end of term (sometimes referred to as prefix, infix, or suffix queries). A partial term search might include a combination of fragments, often with special characters such as hyphens, dashes, or slashes that are part of the query string. Common use-cases include parts of a phone number, URL, codes, or hyphenated compound words.
+A *partial term search* refers to queries consisting of term fragments, where instead of a whole term, you might have just the beginning, middle, or end of a term (sometimes referred to as prefix, infix, or suffix queries). A partial term search might include a combination of fragments, often with special characters such as hyphens, dashes, or slashes that are part of the query string. Common use cases include parts of a phone number, URL, codes, or hyphenated compound words.
-Partial term search and query strings that include special characters can be problematic if the index doesn't have tokens in the expected format. During the [lexical analysis phase](search-lucene-query-architecture.md#stage-2-lexical-analysis) of indexing (assuming the default standard analyzer), special characters are discarded, compound words are split up, and whitespace is deleted; all of which can cause queries to fail when no match is found. For example, a phone number like `+1 (425) 703-6214` (tokenized as `"1"`, `"425"`, `"703"`, `"6214"`) won't show up in a `"3-62"` query because that content doesn't actually exist in the index.
+Partial terms and special characters can be problematic if the index doesn't have tokens in the expected format. During the [lexical analysis phase](search-lucene-query-architecture.md#stage-2-lexical-analysis) of indexing (assuming the default standard analyzer), special characters are discarded, compound words are split up, and whitespace is deleted; all of which can cause queries to fail when no match is found. For example, a phone number like `+1 (425) 703-6214` (tokenized as `"1"`, `"425"`, `"703"`, `"6214"`) won't show up in a `"3-62"` query because that content doesn't actually exist in the index.
-The solution is to invoke an analyzer during indexing that preserves a complete string, including spaces and special characters if necessary, so that you can include the spaces and characters in your query string. Likewise, having a complete string that is not tokenized into smaller parts enables pattern matching for "starts with" or "ends with" queries, where the pattern you provide can be evaluated against a term that is not transformed by lexical analysis. Creating an additional field for an intact string, plus using a content-preserving analyzer that emits whole-term tokens, is the solution for both pattern matching and for matching on query strings that include special characters.
+The solution is to invoke an analyzer during indexing that preserves a complete string, including spaces and special characters if necessary, so that you can include the spaces and characters in your query string. Having a whole, un-tokenized string enables pattern matching for "starts with" or "ends with" queries, where the pattern you provide can be evaluated against a term that is not transformed by lexical analysis.
+
+To support both classic full text search and partial terms with special characters, you can create two fields. One field undergoes lexical analysis. The second field stores an intact string, using a content-preserving analyzer that emits whole-string tokens for pattern matching.
> [!TIP] > If you are familiar with Postman and REST APIs, [download the query examples collection](https://github.com/Azure-Samples/azure-search-postman-samples/) to query partial terms and special characters described in this article.
Azure Cognitive Search scans for whole tokenized terms in the index and won't fi
+ [Wildcard with infix and suffix matching](query-lucene-syntax.md#bkmk_wildcard) places the `*` and `?` operators inside or at the beginning of a term, and requires regular expression syntax (where the expression is enclosed with forward slashes). For example, the query string (`search=/.*numeric*./`) returns results on "alphanumeric" and "alphanumerical" as suffix and infix matches.
-For partial term or pattern search, and a few other query forms like fuzzy search, analyzers are not used at query time. For these query forms, which the parser detects by the presence of operators and delimiters, the query string is passed to the engine without lexical analysis. For these query forms, the analyzer specified on the field is ignored.
+For regular expression, wildcard, and fuzzy search, analyzers are not used at query time. For these query forms, which the parser detects by the presence of operators and delimiters, the query string is passed to the engine without lexical analysis. For these query forms, the analyzer specified on the field is ignored.
> [!NOTE] > When a partial query string includes characters, such as slashes in a URL fragment, you might need to add escape characters. In JSON, a forward slash `/` is escaped with a backward slash `\`. As such, `search=/.*microsoft.com\/azure\/.*/` is the syntax for the URL fragment "microsoft.com/azure/".
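For example, here's a sketch of a request body for the Search Documents REST API, using the full Lucene query parser so that the regular expression (with escaped slashes) is honored; the index and field setup are assumed:

```json
{
  "queryType": "full",
  "search": "/.*microsoft.com\/azure\/.*/"
}
```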
For partial term or pattern search, and a few other query forms like fuzzy searc
When you need to search on fragments or patterns or special characters, you can override the default analyzer with a custom analyzer that operates under simpler tokenization rules, retaining the entire string in the index. Taking a step back, the approach looks like this (a sketch follows the list):
-1. Define a field to store an intact version of the string (assuming you want analyzed and non-analyzed text at query time)
+1. Define a second field to store an intact version of the string (assuming you want analyzed and non-analyzed text at query time)
1. Evaluate and choose among the various analyzers that emit tokens at the right level of granularity 1. Assign the analyzer to the field 1. Build and test the index
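As a sketch of the first three steps, an index definition might pair an analyzed field with a second whole-string field. The field and analyzer names below are hypothetical; the custom analyzer combines the built-in `keyword_v2` tokenizer, which emits the entire string as a single token, with a `lowercase` token filter:

```json
{
  "fields": [
    { "name": "phoneNumber", "type": "Edm.String", "searchable": true },
    { "name": "phoneNumberWhole", "type": "Edm.String", "searchable": true, "analyzer": "keyword_lowercase" }
  ],
  "analyzers": [
    {
      "name": "keyword_lowercase",
      "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
      "tokenizer": "keyword_v2",
      "tokenFilters": [ "lowercase" ]
    }
  ]
}
```

With a setup along these lines, a full Lucene query such as `search=/.*703-6214.*/&queryType=full` can be evaluated against the intact token in `phoneNumberWhole`.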
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
na Previously updated : 03/14/2022 Last updated : 03/21/2022 # Data encryption models
The Azure services that support each encryption model:
| Azure Data Factory | Yes | Yes, including Managed HSM | - | | Azure Data Lake Store | Yes | Yes, RSA 2048-bit | - | | **Containers** | | | |
-| Azure Kubernetes Service | Yes | Yes | - |
+| Azure Kubernetes Service | Yes | Yes, including Managed HSM | - |
| Container Instances | Yes | Yes | - | | Container Registry | Yes | Yes | - | | **Compute** | | | |
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
| - [Azure DDoS Protection](../../sentinel/data-connectors-reference.md#azure-ddos-protection) | GA | GA | | - [Azure Purview](../../sentinel/data-connectors-reference.md#azure-purview) | Public Preview | Not Available | | - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA |
-| - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | Public Preview | Not Available |
-| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available |
+| - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | GA | GA |
+| - [Microsoft Insider Risk Management](/azure/sentinel/sentinel-solutions-catalog#domain-solutions) | Public Preview | Not Available |
| - [Azure Firewall ](../../sentinel/data-connectors-reference.md#azure-firewall) | GA | GA | | - [Azure Information Protection](../../sentinel/data-connectors-reference.md#azure-information-protection-preview) | Public Preview | Not Available | | - [Azure Key Vault ](../../sentinel/data-connectors-reference.md#azure-key-vault) | Public Preview | Not Available |
The following tables display the current Microsoft Sentinel feature availability
| - [Azure SQL Databases](../../sentinel/data-connectors-reference.md#azure-sql-databases) | GA | GA | | - [Azure WAF](../../sentinel/data-connectors-reference.md#azure-web-application-firewall-waf) | GA | GA | | - [Microsoft Defender for Cloud](../../sentinel/connect-azure-security-center.md) | GA | GA |
-| - [Microsoft Defender for IoT](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) | Public Preview | Not Available |
| - [Microsoft Insider Risk Management](../../sentinel/sentinel-solutions-catalog.md#domain-solutions) | Public Preview | Not Available | | **Windows connectors** | | | | - [Windows Firewall](../../sentinel/data-connectors-reference.md#windows-firewall) | GA | GA |
The following table displays the current Microsoft Defender for IoT feature avai
| [Email](../../defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md#email-address-action) | GA | GA | | [FortiGate](../../defender-for-iot/organizations/tutorial-fortinet.md) | GA | GA | | [FortiSIEM](../../defender-for-iot/organizations/tutorial-fortinet.md) | GA | GA |
-| [Microsoft Sentinel](../../defender-for-iot/organizations/how-to-configure-with-sentinel.md) | Public Preview | Public Preview |
+| [Microsoft Sentinel](../../defender-for-iot/organizations/how-to-configure-with-sentinel.md) | GA | GA |
| [NetWitness](../../defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md#netwitness-action) | GA | GA | | [Palo Alto NGFW](../../defender-for-iot/organizations/tutorial-palo-alto.md) | GA | GA | | [Palo Alto Panorama](../../defender-for-iot/organizations/tutorial-palo-alto.md) | GA | GA |
The following table displays the current Microsoft Defender for IoT feature avai
| Feature | Azure | Azure Government | |--|--|--| | [Micro agent for Azure RTOS](../../defender-for-iot/iot-security-azure-rtos.md) | GA | GA |
-| [Configure Sentinel with Microsoft Defender for IoT](../../defender-for-iot/how-to-configure-with-sentinel.md) | Public Preview | Public Preview |
+| [Configure Sentinel with Microsoft Defender for IoT](../../defender-for-iot/how-to-configure-with-sentinel.md) | GA | GA |
| **Standalone micro agent for Linux** | | | | [Standalone agent binary installation](../../defender-for-iot/quickstart-standalone-agent-binary-installation.md) | Public Preview | Public Preview |
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-solution.md
# Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT
-> [!IMPORTANT]
->
-> The *Microsoft Sentinel Data connector for Microsoft Defender for IoT* and the *IoT OT Threat Monitoring with Defender for IoT* solution are in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
- [Microsoft Defender for IoT](../defender-for-iot/index.yml) enables you to secure your entire OT environment, whether you need to protect existing OT devices or build security into new OT innovations. Microsoft Sentinel and Microsoft Defender for IoT help to bridge the gap between IT and OT security challenges, and to empower SOC teams with out-of-the-box capabilities to efficiently and effectively detect and respond to OT threats. The integration between Microsoft Defender for IoT and Microsoft Sentinel helps organizations to quickly detect multistage attacks, which often cross IT and OT boundaries.
service-fabric Service Fabric Application Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-scenarios.md
Here's an example application that uses stateful
## Next steps
+* Listen to [customer case studies](/shows/building-microservices-applications-on-azure-service-fabric/service-fabric-history-and-customer-stories)
* Get started building stateless and stateful services with the Service Fabric [Reliable Services](service-fabric-reliable-services-quick-start.md) and [Reliable Actors](service-fabric-reliable-actors-get-started.md) programming models. * Visit the Azure Architecture Center for guidance on [building microservices on Azure](/azure/architecture/microservices/).
service-fabric Service Fabric Application Upgrade Tutorial Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-tutorial-powershell.md
A monitored application upgrade can be performed using the managed or native API
With Service Fabric monitored rolling upgrades, the application administrator can configure the health evaluation policy that Service Fabric uses to determine if the application is healthy. In addition, the administrator can configure the action to be taken when the health evaluation fails (for example, doing an automatic rollback.) This section walks through a monitored upgrade for one of the SDK samples that uses PowerShell.
+For a training video that also walks you through an application upgrade, see [Upgrading an application](/shows/building-microservices-applications-on-azure-service-fabric/upgrading-an-application.md).
++ > [!NOTE] > [ApplicationParameter](/dotnet/api/system.fabric.description.applicationdescription.applicationparameters#System_Fabric_Description_ApplicationDescription_ApplicationParameters)s are not preserved across an application upgrade. In order to preserve current application parameters, the user should get the parameters first and pass them into the upgrade API call like below: ```powershell
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
During a push installation of the Mobility service, the following steps are perf
- Run this command to install the agent. ```cmd
- UnifiedAgent.exe /Role "Agent" /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery" /Platform "VmWare" /Silent
+ UnifiedAgent.exe /Role "MS" /InstallLocation "C:\Program Files (x86)\Microsoft Azure Site Recovery" /Platform "VmWare" /Silent
``` - Run these commands to register the agent with the configuration server.
spring-cloud Access App Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/access-app-virtual-network.md
Update your app to assign an endpoint to it. Customize the value of your app nam
```azurecli SPRING_CLOUD_APP='your spring cloud app' az spring-cloud app update \
- --name $SPRING_CLOUD_APP \
--resource-group $RESOURCE_GROUP \
+ --name $SPRING_CLOUD_APP \
--service $SPRING_CLOUD_NAME \ --assign-endpoint true ```
After the assignment, you can access the application's private FQDN in the priva
![Access private endpoint in vnet](media/spring-cloud-access-app-vnet/access-private-endpoint.png)
+## Clean up resources
+
+If you plan to continue working with subsequent articles, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following command:
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+ ## Next steps -- [Expose applications to Internet - using Application Gateway](./expose-apps-gateway.md)
+- [Expose applications with end-to-end TLS in a virtual network](./expose-apps-gateway-end-to-end-tls.md)
- [Troubleshooting Azure Spring Cloud in VNET](./troubleshooting-vnet.md) - [Customer Responsibilities for Running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md)
spring-cloud Expose Apps Gateway End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/expose-apps-gateway-end-to-end-tls.md
+
+ Title: Expose applications with end-to-end TLS in a virtual network using Application Gateway
+
+description: How to expose applications to the internet using Application Gateway
++++ Last updated : 02/28/2022+
+ms.devlang: java, azurecli
++
+# Expose applications with end-to-end TLS in a virtual network
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article explains how to expose applications to the internet using Application Gateway. When an Azure Spring Cloud service instance is deployed in your virtual network, applications on the service instance are only accessible in the private network. To make the applications accessible on the Internet, you need to integrate with Azure Application Gateway.
+
+## Prerequisites
+
+- [Azure CLI version 2.0.4 or later](/cli/azure/install-azure-cli).
+- An Azure Spring Cloud service instance deployed in a virtual network with an application accessible over the private network using the default `.private.azuremicroservices.io` domain suffix. For more information, see [Deploy Azure Spring Cloud in a virtual network](./how-to-deploy-in-azure-virtual-network.md)
+- A custom domain to be used to access the application.
+- A certificate, stored in Key Vault, which matches the custom domain to be used to establish the HTTPS listener. For more information, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md).
+
+## Configure Application Gateway for Azure Spring Cloud
+
+We recommend that the domain name, as seen by the browser, be the same as the host name that Application Gateway uses to direct traffic to the Azure Spring Cloud back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Cloud and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Cloud, cookies and generated redirect URLs (for example) can be broken.
+
+To configure Application Gateway in front of Azure Spring Cloud, use the following steps.
+
+1. Follow the instructions in [Deploy Azure Spring Cloud in a virtual network](./how-to-deploy-in-azure-virtual-network.md).
+1. Follow the instructions in [Access your application in a private network](./access-app-virtual-network.md).
+1. Acquire a certificate for your domain of choice and store that in Key Vault. For more information, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md).
+1. Configure a custom domain and corresponding certificate from Key Vault on an app deployed onto Azure Spring Cloud. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Cloud](./tutorial-custom-domain.md).
+1. Deploy Application Gateway in a virtual network configured according to the following list:
+ - Use Azure Spring Cloud in the backend pool, referenced by the domain suffixed with `private.azuremicroservices.io`.
+ - Include an HTTPS listener using the same certificate from Key Vault.
+ - Configure the virtual network with HTTP settings that use the custom domain name configured on Azure Spring Cloud instead of the domain suffixed with `private.azuremicroservices.io`.
+1. Configure your public DNS to point to Application Gateway.
+
+## Define variables
+
+Next, use the following commands to define variables for the resource group and virtual network you created as directed in [Deploy Azure Spring Cloud in a virtual network](./how-to-deploy-in-azure-virtual-network.md). Customize the values based on your real environment. When you define `SPRING_APP_PRIVATE_FQDN`, remove `https://` from the URI.
+
+```bash
+SUBSCRIPTION='subscription-id'
+RESOURCE_GROUP='my-resource-group'
+LOCATION='eastus'
+SPRING_CLOUD_NAME='name-of-spring-cloud-instance'
+APPNAME='name-of-app-in-azure-spring-cloud'
+SPRING_APP_PRIVATE_FQDN="$APPNAME.private.azuremicroservices.io"
+VIRTUAL_NETWORK_NAME='azure-spring-cloud-vnet'
+APPLICATION_GATEWAY_SUBNET_NAME='app-gw-subnet'
+APPLICATION_GATEWAY_SUBNET_CIDR='10.1.2.0/24'
+```
+
+## Sign in to Azure
+
+Use the following command to sign in to the Azure CLI and choose your active subscription.
+
+```azurecli
+az login
+az account set --subscription $SUBSCRIPTION
+```
+
+## Acquire a certificate
+
+### [Use a publicly signed certificate](#tab/public-cert)
+
+For production deployments, you'll most likely use a publicly signed certificate. In this case, import the certificate in Azure Key Vault. For more information, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md). Make sure the certificate includes the entire certificate chain.
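+As a minimal sketch, assuming the certificate and its chain are available locally as a PFX file and reusing the `KV_NAME` and `CERT_NAME_IN_KV` variable names defined later in this article, the import looks like this:
+
+```azurecli
+# The file path is a placeholder; point it at your own PFX file.
+az keyvault certificate import \
+    --vault-name $KV_NAME \
+    --name $CERT_NAME_IN_KV \
+    --file ./mycert-with-chain.pfx
+```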
+
+### [Use a self-signed certificate](#tab/self-signed-cert)
+
+When you need a self-signed certificate for testing or development, you'll need to create it yourself. Make sure the list of "Subject Alternative Names" in the certificate contains the domain name on which you'll expose the application. When creating a self-signed certificate through Azure Key Vault, you can do so through the Azure portal. Alternatively, when using the Azure CLI, you'll need a policy JSON file.
+
+To request the default policy, use the following command:
+
+```azurecli
+az keyvault certificate get-default-policy
+```
+
+Next, adapt the policy JSON as shown in the following example, indicating the `subject` and `subjectAlternativeNames`:
+
+```json
+{
+ // ...
+ "subject": "C=US, ST=WA, L=Redmond, O=Contoso, OU=Contoso HR, CN=myapp.mydomain.com",
+ "subjectAlternativeNames": {
+ "dnsNames": [
+ "myapp.mydomain.com",
+ "*.myapp.mydomain.com"
+ ],
+ "emails": [
+ "hello@contoso.com"
+ ],
+ "upns": []
+ }
+ // ...
+}
+```
+
+After you've finished updating the policy JSON (see [Update Certificate Policy](/rest/api/keyvault/certificates/update-certificate-policy/update-certificate-policy)), load the policy into a variable and create a self-signed certificate in Key Vault by using the following commands. The *./policy.json* path is an assumption; use the path where you saved the adapted policy.
+
+```azurecli
+KV_NAME='name-of-key-vault'
+CERT_NAME_IN_KV='name-of-certificate-in-key-vault'
+
+# Load the adapted policy JSON (assumes it was saved as ./policy.json).
+KV_CERT_POLICY=$(cat ./policy.json)
+
+az keyvault certificate create \
+    --vault-name $KV_NAME \
+    --name $CERT_NAME_IN_KV \
+    --policy "$KV_CERT_POLICY"
+```
+++
+## Configure the public domain name on Azure Spring Cloud
+
+Traffic will enter the application deployed on Azure Spring Cloud using the public domain name. To configure your application to listen to this host name over HTTPS, use the following commands to add a custom domain to your app:
+
+```azurecli
+KV_NAME='name-of-key-vault'
+KV_RG='resource-group-name-of-key-vault'
+CERT_NAME_IN_ASC='name-of-certificate-in-Azure-Spring-Cloud'
+CERT_NAME_IN_KV='name-of-certificate-with-intermediaries-in-key-vault'
+DOMAIN_NAME=myapp.mydomain.com
+
+# provide permissions to ASC to read the certificate from Key Vault:
+VAULTURI=$(az keyvault show -n $KV_NAME -g $KV_RG --query properties.vaultUri -o tsv)
+
+# get the object id for the Azure Spring Cloud Domain-Management Service Principal:
+ASCDM_OID=$(az ad sp show --id 03b39d0f-4213-4864-a245-b1476ec03169 --query objectId --output tsv)
+
+# allow this Service Principal to read and list certificates and secrets from Key Vault:
+az keyvault set-policy -g $KV_RG -n $KV_NAME --object-id $ASCDM_OID --certificate-permissions get list --secret-permissions get list
+
+# add custom domain name and configure TLS using the certificate:
+az spring-cloud certificate add \
+ --resource-group $RESOURCE_GROUP \
+ --service $SPRING_CLOUD_NAME \
+ --name $CERT_NAME_IN_ASC \
+ --vault-certificate-name $CERT_NAME_IN_KV \
+ --vault-uri $VAULTURI
+az spring-cloud app custom-domain bind \
+ --resource-group $RESOURCE_GROUP \
+ --service $SPRING_CLOUD_NAME \
+ --domain-name $DOMAIN_NAME \
+ --certificate $CERT_NAME_IN_ASC \
+ --app $APPNAME
+```
+## Create network resources
+
+The Azure Application Gateway to be created will join the same virtual network as the Azure Spring Cloud service instance, or a virtual network peered to it. First create a new subnet for the Application Gateway in the virtual network using `az network vnet subnet create`, and also create a public IP address as the frontend of the Application Gateway using `az network public-ip create`.
+
+```azurecli
+APPLICATION_GATEWAY_PUBLIC_IP_NAME='app-gw-public-ip'
+az network vnet subnet create \
+ --name $APPLICATION_GATEWAY_SUBNET_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VIRTUAL_NETWORK_NAME \
+ --address-prefix $APPLICATION_GATEWAY_SUBNET_CIDR
+az network public-ip create \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --name $APPLICATION_GATEWAY_PUBLIC_IP_NAME \
+ --allocation-method Static \
+ --sku Standard
+```
+
+## Create a Managed Identity for Application Gateway
+
+Application Gateway will need to be able to access Key Vault to read the certificate. To do so, it will use a user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md). Create the managed identity by using the following command:
+
+```azurecli
+APPGW_IDENTITY_NAME='name-for-appgw-managed-identity'
+az identity create \
+ --resource-group $RESOURCE_GROUP \
+ --name $APPGW_IDENTITY_NAME
+```
+
+Then fetch the `objectId` of the managed identity; it will be used later to grant Application Gateway access to the certificate in Key Vault:
+
+```azurecli
+APPGW_IDENTITY_CLIENTID=$(az identity show --resource-group $RESOURCE_GROUP --name $APPGW_IDENTITY_NAME --query clientId --output tsv)
+APPGW_IDENTITY_OID=$(az ad sp show --id $APPGW_IDENTITY_CLIENTID --query objectId --output tsv)
+```
+
+## Set policy on Key Vault
+
+Configure Key Vault using the following command so that the Managed Identity for Application Gateway is allowed to access the certificate stored in Key Vault:
+
+```azurecli
+az keyvault set-policy \
+ --name $KV_NAME \
+ --resource-group $KV_RG \
+ --object-id $APPGW_IDENTITY_OID \
+ --secret-permissions get list \
+ --certificate-permissions get list
+```
+
+## Create Application Gateway
+
+Create an application gateway using `az network application-gateway create` and specify your application's private fully qualified domain name (FQDN) as servers in the backend pool. Make sure to use the user-assigned Managed Identity and to point to the certificate in Key Vault using the certificate's Secret ID. Then update the HTTP setting using `az network application-gateway http-settings update` to use the public host name.
+
+```azurecli
+APPGW_NAME='name-for-application-gateway'
+
+KEYVAULT_SECRET_ID_FOR_CERT=$(az keyvault certificate show --name $CERT_NAME_IN_KV --vault-name $KV_NAME --query sid --output tsv)
+
+az network application-gateway create \
+ --name $APPGW_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --capacity 2 \
+ --sku Standard_v2 \
+ --frontend-port 443 \
+ --http-settings-cookie-based-affinity Disabled \
+ --http-settings-port 443 \
+ --http-settings-protocol Https \
+ --public-ip-address $APPLICATION_GATEWAY_PUBLIC_IP_NAME \
+ --vnet-name $VIRTUAL_NETWORK_NAME \
+ --subnet $APPLICATION_GATEWAY_SUBNET_NAME \
+ --servers $SPRING_APP_PRIVATE_FQDN \
+ --key-vault-secret-id $KEYVAULT_SECRET_ID_FOR_CERT \
+ --identity $APPGW_IDENTITY_NAME
+```
+
+It can take up to 30 minutes for Azure to create the application gateway.
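+If you'd rather poll than wait, one way to check progress (a sketch) is to query the gateway's provisioning state:
+
+```azurecli
+az network application-gateway show \
+    --name $APPGW_NAME \
+    --resource-group $RESOURCE_GROUP \
+    --query provisioningState \
+    --output tsv
+```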
+
+### Update HTTP Settings to use the domain name towards the backend
+
+#### [Use a publicly signed certificate](#tab/public-cert-2)
+
+Update the HTTP settings so that traffic is sent to Azure Spring Cloud using the public domain name as the host name, instead of the domain suffixed with `.private.azuremicroservices.io`.
+
+```azurecli
+az network application-gateway http-settings update \
+ --resource-group $RESOURCE_GROUP \
+ --gateway-name $APPGW_NAME \
+ --host-name-from-backend-pool false \
+ --host-name $DOMAIN_NAME \
+ --name appGatewayBackendHttpSettings
+```
+
+#### [Use a self-signed certificate](#tab/self-signed-cert-2)
+
+Update the HTTP settings so that traffic is sent to Azure Spring Cloud using the public domain name as the host name, instead of the domain suffixed with `.private.azuremicroservices.io`. Because a self-signed certificate is used, it must be allow-listed on the HTTP settings of Application Gateway.
+
+To allowlist the certificate, first fetch the public portion of it from Key Vault by using the following command:
+
+```azurecli
+az keyvault certificate download \
+ --vault-name $KV_NAME \
+ --name $CERT_NAME_IN_KV \
+ --file ./selfsignedcert.crt \
+ --encoding DER
+```
+
+Then upload it to Application Gateway using this command:
+
+```azurecli
+az network application-gateway root-cert create \
+    --resource-group $RESOURCE_GROUP \
+ --cert-file ./selfsignedcert.crt \
+ --gateway-name $APPGW_NAME \
+ --name MySelfSignedTrustedRootCert
+```
+
+Now you can update the HTTP Settings to trust the new (self-signed) root certificate by using this command:
+
+```azurecli
+az network application-gateway http-settings update \
+    --resource-group $RESOURCE_GROUP \
+ --gateway-name $APPGW_NAME \
+ --host-name-from-backend-pool false \
+ --host-name $DOMAIN_NAME \
+ --name appGatewayBackendHttpSettings \
+ --root-certs MySelfSignedTrustedRootCert
+```
+++
+### Check the deployment of Application Gateway
+
+After it's created, check the backend health by using the following command. The output of this command enables you to determine whether the application gateway reaches your application through its private FQDN.
+
+```azurecli
+az network application-gateway show-backend-health \
+ --name $APPGW_NAME \
+ --resource-group $RESOURCE_GROUP
+```
+
+The output indicates the healthy status of the backend pool, as shown in the following example:
+
+```output
+{
+ "backendAddressPools": [
+ {
+ "backendHttpSettingsCollection": [
+ {
+ "servers": [
+ {
+ "address": "my-azure-spring-cloud-hello-vnet.private.azuremicroservices.io",
+ "health": "Healthy",
+ "healthProbeLog": "Success. Received 200 status code",
+ "ipConfiguration": null
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Configure DNS and access the application
+
+Now configure the public DNS to point to Application Gateway using a CNAME or A-record. You can find the public address for Application Gateway by using the following command:
+
+```azurecli
+az network public-ip show \
+ --resource-group $RESOURCE_GROUP \
+ --name $APPLICATION_GATEWAY_PUBLIC_IP_NAME \
+ --query [ipAddress] \
+ --output tsv
+```
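+If the domain's zone is hosted in Azure DNS, adding the A record might look like the following sketch; the zone name, record name, and IP address are placeholders:
+
+```azurecli
+# Placeholder zone and record names; replace with your own values.
+az network dns record-set a add-record \
+    --resource-group $RESOURCE_GROUP \
+    --zone-name mydomain.com \
+    --record-set-name myapp \
+    --ipv4-address <application-gateway-public-ip>
+```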
+
+You can now access the application using the public domain name.
+
+## Next steps
+
+- [Troubleshooting Azure Spring Cloud in VNET](./troubleshooting-vnet.md)
+- [Customer Responsibilities for Running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md)
spring-cloud Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/expose-apps-gateway-tls-termination.md
+
+ Title: "Expose applications to the internet using Application Gateway with TLS termination"
+
+description: How to expose applications to internet using Application Gateway with TLS termination
++++ Last updated : 11/09/2021+++
+# Expose applications to the internet with TLS Termination at Application Gateway
+
+This article explains how to expose applications to the internet using Application Gateway.
+
+When an Azure Spring Cloud service instance is deployed in your virtual network (VNET), applications on the service instance are only accessible in the private network. To make the applications accessible on the Internet, you need to integrate with Azure Application Gateway. The incoming encrypted traffic can be decrypted at the application gateway or it can be passed to Azure Spring Cloud encrypted to achieve end-to-end TLS/SSL. For dev and test purposes, you can start with SSL termination at the application gateway, which is covered in this guide. For production, we recommend end-to-end TLS/SSL with private certificate, as described in [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
+
+## Prerequisites
+
+- [Azure CLI version 2.0.4 or later](/cli/azure/install-azure-cli).
+- An Azure Spring Cloud service instance deployed in a virtual network with an application accessible over the private network using the default `.private.azuremicroservices.io` domain suffix. For more information, see [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md)
+- A custom domain to be used to access the application.
+- A certificate, stored in Key Vault, which matches the custom domain to be used to establish the HTTPS listener. For more information, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md).
+
+## Configure Application Gateway for Azure Spring Cloud
+
+We recommend that the domain name, as seen by the browser, be the same as the host name that Application Gateway uses to direct traffic to the Azure Spring Cloud back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Cloud and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Cloud, cookies and generated redirect URLs (for example) can be broken.
+
+To configure Application Gateway in front of Azure Spring Cloud in a private VNET, use the following steps.
+
+1. Follow the instructions in [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md).
+1. Follow the instructions in [Access your application in a private network](access-app-virtual-network.md).
+1. Acquire a certificate for your domain of choice and store that in Key Vault. For more information, see [Tutorial: Import a certificate in Azure Key Vault](../key-vault/certificates/tutorial-import-certificate.md).
+1. Configure a custom domain and corresponding certificate from Key Vault on an app deployed onto Azure Spring Cloud. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Cloud](tutorial-custom-domain.md).
+1. Deploy Application Gateway in a virtual network configured according to the following list:
+ - Use Azure Spring Cloud in the backend pool, referenced by the domain suffixed with `private.azuremicroservices.io`.
+ - Include an HTTPS listener using the same certificate from Key Vault.
+ - Configure the virtual network with HTTP settings that use the custom domain name configured on Azure Spring Cloud instead of the domain suffixed with `private.azuremicroservices.io`.
+1. Configure your public DNS to point to the application gateway.
+
+## Define variables
+
+Next, use the following commands to define variables for the resource group and virtual network you created as directed in [Deploy Azure Spring Cloud in a virtual network](how-to-deploy-in-azure-virtual-network.md). Replace the *\<...>* placeholders with real values based on your actual environment. When you define `SPRING_APP_PRIVATE_FQDN`, remove `https://` from the URI.
+
+```bash
+SUBSCRIPTION='<subscription-id>'
+RESOURCE_GROUP='<resource-group-name>'
+LOCATION='eastus'
+SPRING_CLOUD_NAME='<name-of-azure-spring-cloud-instance>'
+APPNAME='<name-of-app-in-azure-spring-cloud>'
+SPRING_APP_PRIVATE_FQDN="$APPNAME.private.azuremicroservices.io"
+VIRTUAL_NETWORK_NAME='azure-spring-cloud-vnet'
+APPLICATION_GATEWAY_SUBNET_NAME='app-gw-subnet'
+APPLICATION_GATEWAY_SUBNET_CIDR='10.1.2.0/24'
+```
+
+## Sign in to Azure
+
+Use the following command to sign in to the Azure CLI and choose your active subscription.
+
+```azurecli
+az login
+az account set --subscription $SUBSCRIPTION
+```
+
+## Configure the public domain name on Azure Spring Cloud
+
+Traffic will enter the application deployed on Azure Spring Cloud using the public domain name. To configure your application to listen to this host name over HTTP, use the following commands to add a custom domain to your app, replacing the *\<...>* placeholders with real values:
+
+```azurecli
+KV_NAME='<name-of-key-vault>'
+KV_RG='<resource-group-name-of-key-vault>'
+CERT_NAME_IN_KV='<name-of-certificate-with-intermediaries-in-key-vault>'
+DOMAIN_NAME=myapp.mydomain.com
+
+az spring-cloud app custom-domain bind \
+ --resource-group $RESOURCE_GROUP \
+ --service $SPRING_CLOUD_NAME \
+ --domain-name $DOMAIN_NAME \
+ --app $APPNAME
+```
+
+## Create network resources
+
+The application gateway to be created will join the same virtual network as the Azure Spring Cloud service instance. First, create a new subnet for the application gateway in the virtual network, then create a public IP address as the frontend of the application gateway, as shown in the following example.
+
+```azurecli
+APPLICATION_GATEWAY_PUBLIC_IP_NAME='app-gw-public-ip'
+az network vnet subnet create \
+ --name $APPLICATION_GATEWAY_SUBNET_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VIRTUAL_NETWORK_NAME \
+ --address-prefix $APPLICATION_GATEWAY_SUBNET_CIDR
+az network public-ip create \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --name $APPLICATION_GATEWAY_PUBLIC_IP_NAME \
+ --allocation-method Static \
+ --sku Standard
+```
+
+### Create a managed identity for the application gateway
+
+Your application gateway will need to be able to access Key Vault to read the certificate. To do this, the application gateway will use a user-assigned managed identity. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md). Create the managed identity by using the following command, replacing the *\<...>* placeholder:
+
+```azurecli
+APPGW_IDENTITY_NAME='<name-for-appgw-managed-identity>'
+az identity create \
+ --resource-group $RESOURCE_GROUP \
+ --name $APPGW_IDENTITY_NAME
+```
+
+Then, use the following command to fetch the `objectId` for the managed identity. This value will be used later to grant access to the certificate in Key Vault.
+
+```azurecli
+APPGW_IDENTITY_CLIENTID=$(az identity show --resource-group $RESOURCE_GROUP --name $APPGW_IDENTITY_NAME --query clientId --output tsv)
+APPGW_IDENTITY_OID=$(az ad sp show --id $APPGW_IDENTITY_CLIENTID --query objectId --output tsv)
+```
+
+### Set policy on Key Vault
+
+Configure Key Vault using the following command so that the managed identity for the application gateway is allowed to access the certificate stored in Key Vault:
+
+```azurecli
+az keyvault set-policy \
+ --resource-group $KV_RG \
+ --name $KV_NAME \
+ --object-id $APPGW_IDENTITY_OID \
+ --secret-permissions get list \
+ --certificate-permissions get list
+```
+
+## Create an application gateway
+
+### [CLI](#tab/azure-cli)
+
+Create an application gateway using `az network application-gateway create` and specify your application's private fully qualified domain name (FQDN) as servers in the backend pool. Be sure to use the user-assigned managed identity and point to the certificate in Key Vault using the certificate's secret ID.
+
+```azurecli
+APPGW_NAME='<name-for-application-gateway>'
+CERT_NAME_IN_KV='<name-of-certificate-in-key-vault>'
+KEYVAULT_SECRET_ID_FOR_CERT=$(az keyvault certificate show --name $CERT_NAME_IN_KV --vault-name $KV_NAME --query sid --output tsv)
+
+az network application-gateway create \
+ --name $APPGW_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --capacity 2 \
+ --sku Standard_v2 \
+ --frontend-port 443 \
+ --http-settings-cookie-based-affinity Disabled \
+ --http-settings-port 80 \
+ --http-settings-protocol Http \
+ --public-ip-address $APPLICATION_GATEWAY_PUBLIC_IP_NAME \
+ --vnet-name $VIRTUAL_NETWORK_NAME \
+ --subnet $APPLICATION_GATEWAY_SUBNET_NAME \
+ --servers $SPRING_APP_PRIVATE_FQDN \
+ --key-vault-secret-id $KEYVAULT_SECRET_ID_FOR_CERT \
+ --identity $APPGW_IDENTITY_NAME
+```
+
+It can take up to 30 minutes for Azure to create the application gateway.
+
+### [Azure portal](#tab/azure-portal)
+
+Create an application gateway using the following steps to enable SSL termination at the application gateway.
+
+1. Sign in to the Azure portal and create a new Application Gateway resource.
+1. Fill in the required fields for creating the application gateway. Leave the default values as they are.
+1. After you provide a value for the **Virtual network** field, the **Subnet** field appears. Create a separate subnet for the application gateway in the VNET, as shown in the following screenshot.
+
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-application-gateway-basics.png" alt-text="Azure portal screenshot of 'Create application gateway' page.":::
+
+1. Create a public IP address and assign it to the frontend of the application gateway, as shown in the following screenshot.
+
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-frontend-ip.png" alt-text="Azure portal screenshot showing Frontends tab of 'Create application gateway' page.":::
+
+1. Create a backend pool for the application gateway. Set **Target** to the FQDN of the application deployed in Azure Spring Cloud.
+
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-backend-pool.png" alt-text="Azure portal screenshot of 'Add a backend pool' page.":::
+
+1. Create a routing rule with an HTTPS listener.
+ 1. Select the public IP that you created earlier.
+ 1. Select **HTTPS** as protocol and **443** as port.
+ 1. Choose a certificate from Key Vault.
+ 1. Select the managed identity you created earlier.
+    1. Select the appropriate key vault and the certificate that you added to it earlier.
+
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-routingrule-with-http-listener.png" alt-text="Azure portal screenshot of 'Add a routing rule' page.":::
+
+ 1. Select the **Backend targets** tab.
+
+ :::image type="content" source="media/expose-apps-gateway-tls-termination/create-backend-http-settings.png" alt-text="Azure portal screenshot of 'Add a HTTP setting' page.":::
+
+1. Select **Review and Create** to create the application gateway.
+
+It can take up to 30 minutes for Azure to create the application gateway.
+++
+### Update HTTP settings to use the domain name towards the backend
+
+Update the HTTP settings so that traffic is sent to Azure Spring Cloud using the public domain name as the host name, instead of the domain suffixed with `.private.azuremicroservices.io`.
+
+```azurecli
+az network application-gateway http-settings update \
+ --resource-group $RESOURCE_GROUP \
+ --gateway-name $APPGW_NAME \
+ --host-name-from-backend-pool false \
+ --host-name $DOMAIN_NAME \
+ --name appGatewayBackendHttpSettings
+```
+
+### Check the deployment of the application gateway
+
+After it's created, check the backend health by using the following command. The output of this command enables you to determine whether the application gateway reaches your application through its private fully qualified domain name (FQDN).
+
+```azurecli
+az network application-gateway show-backend-health \
+ --name $APPGW_NAME \
+ --resource-group $RESOURCE_GROUP
+```
+
+The output indicates the healthy status of the backend pool, as shown in the following example:
+
+```output
+{
+ "backendAddressPools": [
+ {
+ "backendHttpSettingsCollection": [
+ {
+ "servers": [
+ {
+ "address": "my-azure-spring-cloud-hello-vnet.private.azuremicroservices.io",
+ "health": "Healthy",
+ "healthProbeLog": "Success. Received 200 status code",
+ "ipConfiguration": null
+ }
+ ]
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Configure DNS and access the application
+
+Configure the public DNS to point to the application gateway using a CNAME or A-record. You can find the public address for the application gateway by using the following command:
+
+```azurecli
+az network public-ip show \
+ --resource-group $RESOURCE_GROUP \
+ --name $APPLICATION_GATEWAY_PUBLIC_IP_NAME \
+ --query [ipAddress] \
+ --output tsv
+```
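+
+If the public domain is hosted in an Azure DNS zone, you can also create the A record with the Azure CLI. The following sketch assumes hypothetical `$DNS_ZONE_RG` and `$DNS_ZONE_NAME` variables for the zone's resource group and name; other DNS providers have equivalent steps.
+
+```azurecli
+# Assumed variables for this sketch: the resource group and name of your Azure DNS zone.
+DNS_ZONE_RG='<dns-zone-resource-group>'
+DNS_ZONE_NAME='<dns-zone-name>'
+
+# Capture the application gateway's public IP address.
+GATEWAY_IP=$(az network public-ip show \
+    --resource-group $RESOURCE_GROUP \
+    --name $APPLICATION_GATEWAY_PUBLIC_IP_NAME \
+    --query [ipAddress] \
+    --output tsv)
+
+# Create an A record at the zone apex that points to the gateway.
+az network dns record-set a add-record \
+    --resource-group $DNS_ZONE_RG \
+    --zone-name $DNS_ZONE_NAME \
+    --record-set-name '@' \
+    --ipv4-address $GATEWAY_IP
+```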
+
+You can now access the application using the public domain name.
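+
+For example, a quick check with `curl` might look like the following; the exact response depends on your application.
+
+```bash
+curl --head https://$DOMAIN_NAME
+```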
+
+## Clean up resources
+
+If you plan to continue working with subsequent articles, you might want to leave these resources in place. When no longer needed, delete the resource group, which deletes the resources in the resource group. To delete the resource group by using Azure CLI, use the following command:
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+## Next steps
+
+- [Exposing applications with end-to-end TLS in a virtual network](./expose-apps-gateway-end-to-end-tls.md)
+- [Troubleshooting Azure Spring Cloud in VNET](./troubleshooting-vnet.md)
+- [Customer Responsibilities for Running Azure Spring Cloud in VNET](./vnet-customer-responsibilities.md)
spring-cloud How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-permissions.md
We'll implement the following custom roles.
The Developer role includes permissions to restart apps and see their log streams. This role can't make changes to apps or configurations.
-#### [Portal](#tab/Azure-portal)
+### [Portal](#tab/Azure-portal)
1. In the Azure portal, open the subscription where you want to assign the custom role.
2. Open **Access control (IAM)**.
The Developer role includes permissions to restart apps and see their log stream
8. Select the permissions for the Developer role.

Under **Microsoft.AppPlatform/Spring**, select:
+
* **Write : Create or Update Azure Spring Cloud service instance**
* **Read : Get Azure Spring Cloud service instance**
* **Other : List Azure Spring Cloud service instance test keys**
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Build Services**
+ * **Other : Get an Upload URL in Azure Spring Cloud**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Builds**
+ * **Write : Write Microsoft Azure Spring Cloud Builds**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds/results**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Build Results**
+ * **Other : Get an Log File URL in Azure Spring Cloud**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Builders**
+ * **Write : Write Microsoft Azure Spring Cloud Builders**
+ * **Delete : Delete Microsoft Azure Spring Cloud Builders**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Builder BuildpackBinding**
+ * **Write : Write Microsoft Azure Spring Cloud Builder BuildpackBinding**
+ * **Delete : Delete Microsoft Azure Spring Cloud Builder BuildpackBinding**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Supported Buildpacks**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedStacks**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Supported Stacks**
+
Under **Microsoft.AppPlatform/Spring/apps**, select:
+
* **Read : Read Microsoft Azure Spring Cloud application**
* **Other : Get Microsoft Azure Spring Cloud application resource upload URL**

Under **Microsoft.AppPlatform/Spring/apps/bindings**, select:
+
* **Read : Read Microsoft Azure Spring Cloud application binding**

Under **Microsoft.AppPlatform/Spring/apps/deployments**, select:
+
* **Write : Write Microsoft Azure Spring Cloud application deployment**
* **Read : Read Microsoft Azure Spring Cloud application deployment**
* **Other : Start Microsoft Azure Spring Cloud application deployment**
The Developer role includes permissions to restart apps and see their log stream
* **Other : Get Microsoft Azure Spring Cloud application deployment log file URL**

Under **Microsoft.AppPlatform/Spring/apps/domains**, select:
+
* **Read : Read Microsoft Azure Spring Cloud application custom domain**

Under **Microsoft.AppPlatform/Spring/certificates**, select:
+
* **Read : Read Microsoft Azure Spring Cloud certificate**

Under **Microsoft.AppPlatform/locations/operationResults/Spring**, select:
+
* **Read : Read operation result**

Under **Microsoft.AppPlatform/locations/operationStatus/operationId**, select:
+
* **Read : Read operation status**
- [ ![Screenshot that shows the selections for Developler permissions.](media/spring-cloud-permissions/developer-permissions-box.png) ](media/spring-cloud-permissions/developer-permissions-box.png#lightbox)
+ [![Azure portal screenshot that shows the selections for Developer permissions.](media/spring-cloud-permissions/developer-permissions-box.png)](media/spring-cloud-permissions/developer-permissions-box.png#lightbox)
9. Select **Add**.
The Developer role includes permissions to restart apps and see their log stream
11. Select **Review and create**.
-#### [JSON](#tab/JSON)
+### [JSON](#tab/JSON)
1. In the Azure portal, open the subscription where you want to assign the custom role. 2. Open **Access control (IAM)**.
The Developer role includes permissions to restart apps and see their log stream
8. Paste in the following JSON to define the Developer role:
- ```json
- {
- "properties": {
- "roleName": "Developer",
- "description": "",
- "assignableScopes": [
- "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.AppPlatform/Spring/write",
- "Microsoft.AppPlatform/Spring/read",
- "Microsoft.AppPlatform/Spring/listTestKeys/action",
- "Microsoft.AppPlatform/Spring/apps/read",
- "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
- "Microsoft.AppPlatform/Spring/apps/bindings/read",
- "Microsoft.AppPlatform/Spring/apps/domains/read",
- "Microsoft.AppPlatform/Spring/apps/deployments/write",
- "Microsoft.AppPlatform/Spring/apps/deployments/read",
- "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
- "Microsoft.AppPlatform/Spring/certificates/read",
- "Microsoft.AppPlatform/locations/operationResults/Spring/read",
- "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
- }
- ```
+ * Basic/Standard tier
+
+ ```json
+ {
+ "properties": {
+ "roleName": "Developer",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/write",
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/bindings/read",
+ "Microsoft.AppPlatform/Spring/apps/domains/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/write",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/Spring/certificates/read",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
+
+ * Enterprise tier
+
+ ```json
+ {
+ "properties": {
+ "roleName": "Developer",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/write",
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/buildServices/read",
+ "Microsoft.AppPlatform/Spring/buildServices/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/results/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/results/getLogFileUrl/action",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/delete",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/delete",
+ "Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks/read",
+ "Microsoft.AppPlatform/Spring/buildServices/supportedStacks/read",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/bindings/read",
+ "Microsoft.AppPlatform/Spring/apps/domains/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/write",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/Spring/certificates/read",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
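+
+    As an alternative to pasting the JSON into the portal, you can create the same custom role with the Azure CLI. The following sketch assumes you've saved the appropriate tier's JSON definition to a local file named *developer-role.json*:
+
+    ```azurecli
+    az role definition create --role-definition developer-role.json
+    ```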
![Screenshot that shows the JSON for the Developer role.](media/spring-cloud-permissions/create-custom-role-json.png)
The Developer role includes permissions to restart apps and see their log stream
This procedure defines a role that has permissions to deploy, test, and restart Azure Spring Cloud apps.
-#### [Portal](#tab/Azure-portal)
+### [Portal](#tab/Azure-portal)
1. Repeat steps 1 through 4 in the procedure for adding the Developer role.
2. Select the permissions for the DevOps Engineer role:

Under **Microsoft.AppPlatform/Spring**, select:
+
* **Write : Create or Update Azure Spring Cloud service instance**
* **Delete : Delete Azure Spring Cloud service instance**
* **Read : Get Azure Spring Cloud service instance**
This procedure defines a role that has permissions to deploy, test, and restart
* **Other : List Azure Spring Cloud service instance test keys**
* **Other : Regenerate Azure Spring Cloud service instance test key**
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Build Services**
+ * **Other : Get an Upload URL in Azure Spring Cloud**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/agentPools**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Agent Pools**
+ * **Write : Write Microsoft Azure Spring Cloud Agent Pools**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Builds**
+ * **Write : Write Microsoft Azure Spring Cloud Builds**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds/results**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Build Results**
+ * **Other : Get an Log File URL in Azure Spring Cloud**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Builders**
+ * **Write : Write Microsoft Azure Spring Cloud Builders**
+ * **Delete : Delete Microsoft Azure Spring Cloud Builders**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Builder BuildpackBinding**
+ * **Write : Write Microsoft Azure Spring Cloud Builder BuildpackBinding**
+ * **Delete : Delete Microsoft Azure Spring Cloud Builder BuildpackBinding**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Supported Buildpacks**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedStacks**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Supported Stacks**
+
Under **Microsoft.AppPlatform/Spring/apps**, select:
+
* **Write : Write Microsoft Azure Spring Cloud application**
* **Delete : Delete Microsoft Azure Spring Cloud application**
* **Read : Read Microsoft Azure Spring Cloud application**
This procedure defines a role that has permissions to deploy, test, and restart
* **Other : Validate Microsoft Azure Spring Cloud application custom domain**

Under **Microsoft.AppPlatform/Spring/apps/bindings**, select:
+
* **Write : Write Microsoft Azure Spring Cloud application binding**
* **Delete : Delete Microsoft Azure Spring Cloud application binding**
* **Read : Read Microsoft Azure Spring Cloud application binding**

Under **Microsoft.AppPlatform/Spring/apps/deployments**, select:
+
* **Write : Write Microsoft Azure Spring Cloud application deployment**
* **Delete : Delete Azure Spring Cloud application deployment**
* **Read : Read Microsoft Azure Spring Cloud application deployment**
This procedure defines a role that has permissions to deploy, test, and restart
* **Other : Get Microsoft Azure Spring Cloud application deployment log file URL**

Under **Microsoft.AppPlatform/Spring/apps/deployments/skus**, select:
+
* **Read : List application deployment available skus**

Under **Microsoft.AppPlatform/locations**, select:
+
* **Other : Check name availability**

Under **Microsoft.AppPlatform/locations/operationResults/Spring**, select:
+
* **Read : Read operation result**

Under **Microsoft.AppPlatform/locations/operationStatus/operationId**, select:
+
* **Read : Read operation status**

Under **Microsoft.AppPlatform/skus**, select:
+
* **Read : List available skus**
- [ ![Screenshot that shows the selections for DevOps permissions.](media/spring-cloud-permissions/dev-ops-permissions.png) ](media/spring-cloud-permissions/dev-ops-permissions.png#lightbox)
+ [![Azure portal screenshot that shows the selections for DevOps permissions.](media/spring-cloud-permissions/dev-ops-permissions.png)](media/spring-cloud-permissions/dev-ops-permissions.png#lightbox)
3. Select **Add**.
This procedure defines a role that has permissions to deploy, test, and restart
5. Select **Review and create**.
-#### [JSON](#tab/JSON)
+### [JSON](#tab/JSON)
1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
2. Select **Next**.
This procedure defines a role that has permissions to deploy, test, and restart
5. Paste in the following JSON to define the DevOps Engineer role:
- ```json
- {
- "properties": {
- "roleName": "DevOps engineer",
- "description": "",
- "assignableScopes": [
- "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.AppPlatform/Spring/write",
- "Microsoft.AppPlatform/Spring/delete",
- "Microsoft.AppPlatform/Spring/read",
- "Microsoft.AppPlatform/Spring/enableTestEndpoint/action",
- "Microsoft.AppPlatform/Spring/disableTestEndpoint/action",
- "Microsoft.AppPlatform/Spring/listTestKeys/action",
- "Microsoft.AppPlatform/Spring/regenerateTestKey/action",
- "Microsoft.AppPlatform/Spring/apps/write",
- "Microsoft.AppPlatform/Spring/apps/delete",
- "Microsoft.AppPlatform/Spring/apps/read",
- "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
- "Microsoft.AppPlatform/Spring/apps/validateDomain/action",
- "Microsoft.AppPlatform/Spring/apps/bindings/write",
- "Microsoft.AppPlatform/Spring/apps/bindings/delete",
- "Microsoft.AppPlatform/Spring/apps/bindings/read",
- "Microsoft.AppPlatform/Spring/apps/deployments/write",
- "Microsoft.AppPlatform/Spring/apps/deployments/delete",
- "Microsoft.AppPlatform/Spring/apps/deployments/read",
- "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/skus/read",
- "Microsoft.AppPlatform/locations/checkNameAvailability/action",
- "Microsoft.AppPlatform/locations/operationResults/Spring/read",
- "Microsoft.AppPlatform/locations/operationStatus/operationId/read",
- "Microsoft.AppPlatform/skus/read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
- }
- ```
+ * Basic/Standard tier
+
+ ```json
+ {
+ "properties": {
+ "roleName": "DevOps engineer",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/write",
+ "Microsoft.AppPlatform/Spring/delete",
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/enableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/disableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/regenerateTestKey/action",
+ "Microsoft.AppPlatform/Spring/apps/write",
+ "Microsoft.AppPlatform/Spring/apps/delete",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/validateDomain/action",
+ "Microsoft.AppPlatform/Spring/apps/bindings/write",
+ "Microsoft.AppPlatform/Spring/apps/bindings/delete",
+ "Microsoft.AppPlatform/Spring/apps/bindings/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/write",
+ "Microsoft.AppPlatform/Spring/apps/deployments/delete",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/skus/read",
+ "Microsoft.AppPlatform/locations/checkNameAvailability/action",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read",
+ "Microsoft.AppPlatform/skus/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
+
+ * Enterprise tier
+
+ ```json
+ {
+ "properties": {
+ "roleName": "DevOps engineer",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/write",
+ "Microsoft.AppPlatform/Spring/delete",
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/enableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/disableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/regenerateTestKey/action",
+ "Microsoft.AppPlatform/Spring/buildServices/read",
+ "Microsoft.AppPlatform/Spring/buildServices/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/buildServices/agentPools/read",
+ "Microsoft.AppPlatform/Spring/buildServices/agentPools/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/results/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/results/getLogFileUrl/action",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/delete",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/delete",
+ "Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks/read",
+ "Microsoft.AppPlatform/Spring/buildServices/supportedStacks/read",
+ "Microsoft.AppPlatform/Spring/apps/write",
+ "Microsoft.AppPlatform/Spring/apps/delete",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/validateDomain/action",
+ "Microsoft.AppPlatform/Spring/apps/bindings/write",
+ "Microsoft.AppPlatform/Spring/apps/bindings/delete",
+ "Microsoft.AppPlatform/Spring/apps/bindings/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/write",
+ "Microsoft.AppPlatform/Spring/apps/deployments/delete",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/skus/read",
+ "Microsoft.AppPlatform/locations/checkNameAvailability/action",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read",
+ "Microsoft.AppPlatform/skus/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
6. Review the permissions.
This procedure defines a role that has permissions to deploy, test, and restart
This procedure defines a role that has permissions to deploy, test, and restart Azure Spring Cloud apps.
-#### [Portal](#tab/Azure-portal)
+### [Portal](#tab/Azure-portal)
1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
2. Select the permissions for the Ops - Site Reliability Engineering role:

Under **Microsoft.AppPlatform/Spring**, select:
+
* **Read : Get Azure Spring Cloud service instance**
* **Other : List Azure Spring Cloud service instance test keys**

Under **Microsoft.AppPlatform/Spring/apps**, select:
+
* **Read : Read Microsoft Azure Spring Cloud application**

Under **Microsoft.AppPlatform/Spring/apps/deployments**, select:
+
* **Read : Read Microsoft Azure Spring Cloud application deployment**
* **Other : Start Microsoft Azure Spring Cloud application deployment**
* **Other : Stop Microsoft Azure Spring Cloud application deployment**
* **Other : Restart Microsoft Azure Spring Cloud application deployment**

Under **Microsoft.AppPlatform/locations/operationResults/Spring**, select:
+
* **Read : Read operation result**

Under **Microsoft.AppPlatform/locations/operationStatus/operationId**, select:
+
* **Read : Read operation status**
- [ ![Screenshot that shows the selections for Ops - Site Reliability Engineering permissions.](media/spring-cloud-permissions/ops-sre-permissions.png)](media/spring-cloud-permissions/ops-sre-permissions.png#lightbox)
+ [![Azure portal screenshot that shows the selections for Ops - Site Reliability Engineering permissions.](media/spring-cloud-permissions/ops-sre-permissions.png)](media/spring-cloud-permissions/ops-sre-permissions.png#lightbox)
3. Select **Add**.
This procedure defines a role that has permissions to deploy, test, and restart
5. Select **Review and create**.
-#### [JSON](#tab/JSON)
+### [JSON](#tab/JSON)
1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
2. Select **Next**.
This procedure defines a role that has permissions to deploy, test, and restart
5. Paste in the following JSON to define the Ops - Site Reliability Engineering role:
- ```json
- {
- "properties": {
- "roleName": "Ops - Site Reliability Engineering",
- "description": "",
- "assignableScopes": [
- "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.AppPlatform/Spring/read",
- "Microsoft.AppPlatform/Spring/listTestKeys/action",
- "Microsoft.AppPlatform/Spring/apps/read",
- "Microsoft.AppPlatform/Spring/apps/deployments/read",
- "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
- "Microsoft.AppPlatform/locations/operationResults/Spring/read",
- "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
- }
- ```
+ * Enterprise/Basic/Standard tier
+
+ ```json
+ {
+ "properties": {
+ "roleName": "Ops - Site Reliability Engineering",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
6. Review the permissions.
This procedure defines a role that has permissions to deploy, test, and restart
This role can create and configure everything in Azure Spring Cloud and apps with a service instance. This role is for releasing or deploying code.
-#### [Portal](#tab/Azure-portal)
+### [Portal](#tab/Azure-portal)
1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
2. Open the **Permissions** options.
This role can create and configure everything in Azure Spring Cloud and apps wit
3. Select the permissions for the Azure Pipelines / Jenkins / GitHub Actions role:

Under **Microsoft.AppPlatform/Spring**, select:
+
* **Write : Create or Update Azure Spring Cloud service instance**
* **Delete : Delete Azure Spring Cloud service instance**
* **Read : Get Azure Spring Cloud service instance**
This role can create and configure everything in Azure Spring Cloud and apps wit
* **Other : Disable Azure Spring Cloud service instance test endpoint**
* **Other : List Azure Spring Cloud service instance test keys**
* **Other : Regenerate Azure Spring Cloud service instance test key**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Build Services**
+ * **Other : Get an Upload URL in Azure Spring Cloud**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Builds**
+ * **Write : Write Microsoft Azure Spring Cloud Builds**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builds/results**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Build Results**
+ * **Other : Get an Log File URL in Azure Spring Cloud**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Builders**
+ * **Write : Write Microsoft Azure Spring Cloud Builders**
+ * **Delete : Delete Microsoft Azure Spring Cloud Builders**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Builder BuildpackBinding**
+ * **Write : Write Microsoft Azure Spring Cloud Builder BuildpackBinding**
+ * **Delete : Delete Microsoft Azure Spring Cloud Builder BuildpackBinding**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Supported Buildpacks**
+
+ (For Enterprise tier only) Under **Microsoft.AppPlatform/Spring/buildServices/supportedStacks**, select:
+
+ * **Read : Read Microsoft Azure Spring Cloud Supported Stacks**
+
Under **Microsoft.AppPlatform/Spring/apps**, select:
+
* **Write : Write Microsoft Azure Spring Cloud application**
* **Delete : Delete Microsoft Azure Spring Cloud application**
* **Read : Read Microsoft Azure Spring Cloud application**
This role can create and configure everything in Azure Spring Cloud and apps wit
* **Other : Validate Microsoft Azure Spring Cloud application custom domain**

Under **Microsoft.AppPlatform/Spring/apps/bindings**, select:
+
* **Write : Write Microsoft Azure Spring Cloud application binding**
* **Delete : Delete Microsoft Azure Spring Cloud application binding**
* **Read : Read Microsoft Azure Spring Cloud application binding**

Under **Microsoft.AppPlatform/Spring/apps/deployments**, select:
+
* **Write : Write Microsoft Azure Spring Cloud application deployment**
* **Delete : Delete Azure Spring Cloud application deployment**
* **Read : Read Microsoft Azure Spring Cloud application deployment**
This role can create and configure everything in Azure Spring Cloud and apps wit
* **Other : Get Microsoft Azure Spring Cloud application deployment log file URL**

Under **Microsoft.AppPlatform/Spring/apps/deployments/skus**, select:
+
* **Read : List application deployment available skus**

Under **Microsoft.AppPlatform/locations**, select:
+
* **Other : Check name availability**

Under **Microsoft.AppPlatform/locations/operationResults/Spring**, select:
+
* **Read : Read operation result**

Under **Microsoft.AppPlatform/locations/operationStatus/operationId**, select:
+
* **Read : Read operation status**

Under **Microsoft.AppPlatform/skus**, select:
+
* **Read : List available skus**
- [ ![Screenshot that shows the selections for Azure Pipelines / Jenkins / GitHub Actions permissions.](media/spring-cloud-permissions/pipelines-permissions-box.png) ](media/spring-cloud-permissions/pipelines-permissions-box.png#lightbox)
+ [![Azure portal screenshot that shows the selections for Azure Pipelines / Jenkins / GitHub Actions permissions.](media/spring-cloud-permissions/pipelines-permissions-box.png)](media/spring-cloud-permissions/pipelines-permissions-box.png#lightbox)
4. Select **Add**.
This role can create and configure everything in Azure Spring Cloud and apps wit
6. Select **Review and create**.
-#### [JSON](#tab/JSON)
+### [JSON](#tab/JSON)
1. Repeat steps 1 through 4 from the procedure for adding the Developer role.
+
2. Select **Next**.
3. Select the **JSON** tab.
This role can create and configure everything in Azure Spring Cloud and apps wit
5. Paste in the following JSON to define the Azure Pipelines / Jenkins / GitHub Actions role:
- ```json
- {
- "properties": {
- "roleName": "Azure Pipelines/Provisioning",
- "description": "",
- "assignableScopes": [
- "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- ],
- "permissions": [
- {
- "actions": [
- "Microsoft.AppPlatform/Spring/write",
- "Microsoft.AppPlatform/Spring/delete",
- "Microsoft.AppPlatform/Spring/read",
- "Microsoft.AppPlatform/Spring/enableTestEndpoint/action",
- "Microsoft.AppPlatform/Spring/disableTestEndpoint/action",
- "Microsoft.AppPlatform/Spring/listTestKeys/action",
- "Microsoft.AppPlatform/Spring/regenerateTestKey/action",
- "Microsoft.AppPlatform/Spring/apps/write",
- "Microsoft.AppPlatform/Spring/apps/delete",
- "Microsoft.AppPlatform/Spring/apps/read",
- "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
- "Microsoft.AppPlatform/Spring/apps/validateDomain/action",
- "Microsoft.AppPlatform/Spring/apps/bindings/write",
- "Microsoft.AppPlatform/Spring/apps/bindings/delete",
- "Microsoft.AppPlatform/Spring/apps/bindings/read",
- "Microsoft.AppPlatform/Spring/apps/deployments/write",
- "Microsoft.AppPlatform/Spring/apps/deployments/delete",
- "Microsoft.AppPlatform/Spring/apps/deployments/read",
- "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
- "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
- "Microsoft.AppPlatform/skus/read",
- "Microsoft.AppPlatform/locations/checkNameAvailability/action",
- "Microsoft.AppPlatform/locations/operationResults/Spring/read",
- "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
- ],
- "notActions": [],
- "dataActions": [],
- "notDataActions": []
- }
- ]
- }
- }
- ```
+ * Basic/Standard tier
+
+ ```json
+ {
+ "properties": {
+ "roleName": "Azure Pipelines/Provisioning",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/write",
+ "Microsoft.AppPlatform/Spring/delete",
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/enableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/disableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/regenerateTestKey/action",
+ "Microsoft.AppPlatform/Spring/apps/write",
+ "Microsoft.AppPlatform/Spring/apps/delete",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/validateDomain/action",
+ "Microsoft.AppPlatform/Spring/apps/bindings/write",
+ "Microsoft.AppPlatform/Spring/apps/bindings/delete",
+ "Microsoft.AppPlatform/Spring/apps/bindings/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/write",
+ "Microsoft.AppPlatform/Spring/apps/deployments/delete",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/skus/read",
+ "Microsoft.AppPlatform/locations/checkNameAvailability/action",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
+
+ * Enterprise tier
+
+ ```json
+ {
+ "properties": {
+ "roleName": "Azure Pipelines/Provisioning",
+ "description": "",
+ "assignableScopes": [
+ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AppPlatform/Spring/write",
+ "Microsoft.AppPlatform/Spring/delete",
+ "Microsoft.AppPlatform/Spring/read",
+ "Microsoft.AppPlatform/Spring/enableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/disableTestEndpoint/action",
+ "Microsoft.AppPlatform/Spring/listTestKeys/action",
+ "Microsoft.AppPlatform/Spring/regenerateTestKey/action",
+ "Microsoft.AppPlatform/Spring/buildServices/read",
+ "Microsoft.AppPlatform/Spring/buildServices/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/results/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builds/results/getLogFileUrl/action",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/delete",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/read",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/write",
+ "Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/delete",
+ "Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks/read",
+ "Microsoft.AppPlatform/Spring/buildServices/supportedStacks/read",
+ "Microsoft.AppPlatform/Spring/apps/write",
+ "Microsoft.AppPlatform/Spring/apps/delete",
+ "Microsoft.AppPlatform/Spring/apps/read",
+ "Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action",
+ "Microsoft.AppPlatform/Spring/apps/validateDomain/action",
+ "Microsoft.AppPlatform/Spring/apps/bindings/write",
+ "Microsoft.AppPlatform/Spring/apps/bindings/delete",
+ "Microsoft.AppPlatform/Spring/apps/bindings/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/write",
+ "Microsoft.AppPlatform/Spring/apps/deployments/delete",
+ "Microsoft.AppPlatform/Spring/apps/deployments/read",
+ "Microsoft.AppPlatform/Spring/apps/deployments/start/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/stop/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/restart/action",
+ "Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action",
+ "Microsoft.AppPlatform/skus/read",
+ "Microsoft.AppPlatform/locations/checkNameAvailability/action",
+ "Microsoft.AppPlatform/locations/operationResults/Spring/read",
+ "Microsoft.AppPlatform/locations/operationStatus/operationId/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+ }
+ ```
6. Select **Add**.
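+
+After you create a custom role, you can assign it just like a built-in role. For example, the following hedged sketch grants the pipeline role to a service principal; `$SP_APP_ID` and `$SUBSCRIPTION_ID` are placeholder variables for the service principal's application ID and your subscription ID.
+
+```azurecli
+az role assignment create \
+    --assignee $SP_APP_ID \
+    --role "Azure Pipelines/Provisioning" \
+    --scope "/subscriptions/$SUBSCRIPTION_ID"
+```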
spring-cloud Vnet Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/vnet-customer-responsibilities.md
The following list shows the resource requirements for Azure Spring Cloud servic
| \*:123 *or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | |
| \*.azure.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
| \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
-| \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hub. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
+| \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). |
## Azure Spring Cloud FQDN requirements/application rules
Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the
## Next steps

- [Access your application in a private network](access-app-virtual-network.md)
-- [Expose applications to the internet using Application Gateway](expose-apps-gateway.md)
+- [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md)
storage Blob Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-cli.md
done < /mnt/c/temp/bloblist.xml
You can delete either a single blob or series of blobs with the `az storage blob delete` and `az storage blob delete-batch` commands. When deleting multiple blobs, you can use conditional operations, loops, or other automation as shown in the examples below.
-[!WARNING] Running the following examples may permanently delete blobs. Microsoft recommends enabling container soft delete to protect containers and blobs from accidental deletion. For more info, see Soft delete for containers.
+> [!WARNING]
+> Running the following examples may permanently delete blobs. Microsoft recommends enabling container soft delete to protect containers and blobs from accidental deletion. For more info, see [Soft delete for containers](soft-delete-blob-overview.md).
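+
+If container soft delete isn't already enabled on the account, you can turn it on with the Azure CLI before you run the examples. The seven-day retention period below is only an example value:
+
+```azurecli
+az storage account blob-service-properties update \
+    --account-name <storage-account> \
+    --resource-group <resource-group> \
+    --enable-container-delete-retention true \
+    --container-delete-retention-days 7
+```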
-The following sample code provides an example of both single and multiple download approaches. The first example deletes a single, named blob. The second example illustrates the use of logical operations in Bash to delete multiple blobs. The third example uses the `delete-batch` command to delete all blobs with the format *bennett-x*, except *bennett-2*.
+The following sample code provides an example of both individual and batch delete operations. The first example deletes a single, named blob. The second example illustrates the use of logical operations in Bash to delete multiple blobs. The third example uses the `delete-batch` command to delete all blobs with the format *bennett-x*, except *bennett-2*.
For more information, see the [az storage blob delete](/cli/azure/storage/blob#az-storage-blob-delete) and [az storage blob delete-batch](/cli/azure/storage/blob#az-storage-blob-delete-batch) reference.
az storage blob delete-batch \
--auth-mode login
```
-If your storage account's soft delete data protection option is enabled, you can use a listing operation to return blobs deleted within the associated retention period. To learn more about soft delete, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
+In some cases, it's possible to retrieve blobs that have been deleted. If your storage account's soft delete data protection option is enabled, the `--include d` parameter and value will return blobs deleted within the account's retention period. To learn more about soft delete, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article.
-Use the following example to retrieve a list of blobs deleted within container's associated retention period. The result displays a list of recently deleted blobs.
+Use the following examples to retrieve a list of blobs deleted within the container's associated retention period. The first example displays a list of all recently deleted blobs and the dates on which they were deleted. The second example lists all deleted blobs matching a specific prefix.
```azurecli-interactive
#!/bin/bash
az storage blob list \
--auth-mode login \
--query "[?deleted].{name:name,deleted:properties.deletedTime}"
-#Retrieve a list of all blobs matching specific prefix
+#Retrieve a list of all deleted blobs matching a specific prefix
az storage blob list \ --container-name $containerName \ --prefix $blobPrefix \
az storage blob list \
--query "[].{name:name,deleted:deleted}" ```
-## Restore a soft-deleted blob
-As mentioned in the [List blobs](#list-blobs) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore containers deleted within the associated retention period.
+## Restore a deleted blob
+As mentioned in the [List blobs](#list-blobs) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore blobs deleted within the associated retention period. You can also use versioning to maintain previous versions of your blobs for recovery and restoration.
-The following examples restore soft-deleted blobs with the `az storage blob undelete` method. The first example uses the `--name` parameter to restore a single named blob. The second example uses a loop to restore the remainder of the deleted blobs. Before you can follow this example, you'll need to enable soft delete on at least one of your storage accounts.
+If blob versioning and blob soft delete are both enabled, then modifying, overwriting, deleting, or restoring a blob automatically creates a new version. The method you use to restore a deleted blob depends on whether versioning is enabled on your storage account.
-To learn more about the soft delete data protection option, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article or the [az storage blob undelete](/cli/azure/storage/blob#az-storage-blob-undelete) reference.
+The following code sample restores all soft-deleted blobs or, if versioning is enabled, restores the latest version of a blob. It first determines whether versioning is enabled with the `az storage account blob-service-properties show` command.
+
+If versioning is enabled, the `az storage blob list` command retrieves a list of all uniquely named blob versions. Next, the versions of each blob are retrieved and ordered by date. If the newest version isn't the current version of the blob (that is, the blob was deleted), the `az storage blob copy start` command is used to make an active copy of the latest version.
+
+If versioning is disabled, the `az storage blob undelete` command is used to restore each soft-deleted blob in the container.
+
+Before you can follow this example, you'll need to enable soft delete on at least one of your storage accounts. To learn more about the soft delete data protection option, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article or the [az storage blob undelete](/cli/azure/storage/blob#az-storage-blob-undelete) reference.
```azurecli-interactive
#!/bin/bash
storageAccount="<storage-account>"
+groupName="myResourceGroup"
containerName="demo-container"
-blobName="demo-file.txt"
-
-#Restore a single, named blob
-az storage blob undelete \
- --container-name $containerName \
- --name $blobName \
- --account-name $storageAccount \
- --auth-mode login
-
-#Retrieve all deleted blobs
-blobList=$( \
- az storage blob list \
- --container-name $containerName \
- --include d \
- --output tsv \
+blobSvcProps=$(
+ az storage account blob-service-properties show \
--account-name $storageAccount \
- --auth-mode login \
- --query "[?deleted].[name]" \
-)
+ --resource-group $groupName)
-#Iterate list of deleted blobs and restore
-for row in $blobList
-do
- tmpName=$(echo $row | sed -e 's/\r//g')
- echo "Restoring $tmpName"
- az storage blob undelete \
- --container-name $containerName \
- --name $tmpName \
- --account-name $storageAccount \
- --auth-mode login
-done
+softDelete=$(echo "${blobSvcProps}" | jq -r '.deleteRetentionPolicy.enabled')
+versioning=$(echo "${blobSvcProps}" | jq -r '.isVersioningEnabled')
+
+# If soft delete is enabled
+if $softDelete
+then
+
+ # If versioning is enabled
+ if $versioning
+ then
+
+    # Get the names of all blobs and versions, de-duplicated with uniq so each blob is processed only once
+ blobList=$(
+ az storage blob list \
+ --account-name $storageAccount \
+ --container-name $containerName \
+        --include dv \
+        --query "[?versionId != null].{name:name}" \
+ --auth-mode login -o tsv | uniq)
+
+ # Iterate the collection
+ for blob in $blobList
+ do
+ # Get all versions of the blob, newest to oldest
+ blobVers=$(
+ az storage blob list \
+ --account-name $storageAccount \
+ --container-name $containerName \
+ --include dv \
+ --prefix $blob \
+ --auth-mode login -o json | jq 'sort_by(.versionId) | reverse | .[]')
+ # Select the first (newest) object
+ delBlob=$(echo "$blobVers" | jq -sr '.[0]')
+
+ # Verify that the newest version is NOT the latest (that the version is "deleted")
+ if [[ $(echo "$delBlob" | jq '.isCurrentVersion') != true ]];
+ then
+ # Get the blob's versionId property, build the URI to the blob
+ versionID=$(echo "$delBlob" | jq -r '.versionId')
+ uri="https://$storageAccount.blob.core.windows.net/$containerName/$blob?versionId=$versionID"
+
+ # Copy the latest version
+ az storage blob copy start \
+ --account-name $storageAccount \
+ --destination-blob $blob \
+ --destination-container $containerName \
+ --source-uri $uri \
+ --auth-mode login
+
+ delBlob=""
+ fi
+ done
+
+ else
+
+ #Retrieve all deleted blobs
+ blobList=$( \
+ az storage blob list \
+ --container-name $containerName \
+ --include d \
+ --output tsv \
+ --account-name $storageAccount \
+ --auth-mode login \
+ --query "[?deleted].[name]" \
+ )
+
+ #Iterate list of deleted blobs and restore
+ for row in $blobList
+ do
+ tmpName=$(echo $row | sed -e 's/\r//g')
+ echo "Restoring $tmpName"
+ az storage blob undelete \
+ --container-name $containerName \
+ --name $tmpName \
+ --account-name $storageAccount \
+ --auth-mode login
+ done
+
+ fi
+
+else
+
+ #Soft delete is not enabled
+ echo "Sorry, the delete retention policy is not enabled."
+
+fi
```

## Next steps
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-powershell.md
$data.Venue.Files.ChildNodes | ForEach-Object {
You can delete either a single blob or a series of blobs with the `Remove-AzStorageBlob` cmdlet. When deleting multiple blobs, you can use conditional operations, loops, or the PowerShell pipeline as shown in the examples below.
+> [!WARNING]
+> Running the following examples may permanently delete blobs. Microsoft recommends enabling container soft delete to protect containers and blobs from accidental deletion. For more info, see [Soft delete for containers](soft-delete-blob-overview.md).
```azurepowershell
#Create variables
$containerName = "myContainer"
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
SFTP clients commonly found to not support algorithms listed above include Apach
To get started, enable SFTP support, create a local user, and assign permissions for that local user. Then, you can use any SFTP client to securely connect and then transfer files. For step-by-step guidance, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md).
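+
+For example, after SFTP support is enabled and a local user exists, a connection from a standard OpenSSH client might look like the following sketch. The account name *myaccount*, resource group *myresourcegroup*, and local user *myuser* are placeholders, and depending on your Azure CLI version the `--enable-sftp` flag may require a preview extension.
+
+```azurecli
+# Enable SFTP support on the storage account (placeholder names).
+az storage account update \
+    --name myaccount \
+    --resource-group myresourcegroup \
+    --enable-sftp true
+```
+
+```bash
+# The SFTP username takes the form <storage-account>.<local-user>.
+sftp myaccount.myuser@myaccount.blob.core.windows.net
+```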
-## Known issues and limitations
+## Limitations and known issues
-See the [Known issues](secure-file-transfer-protocol-known-issues.md) article for a complete list of issues and limitations with the current release of SFTP support.
+See the [limitations and known issues article](secure-file-transfer-protocol-known-issues.md) for a complete list of limitations and issues with SFTP support for Azure Blob Storage.
## Pricing and billing
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
description: Learn about change feed logs in Azure Blob Storage and how to use t
Previously updated : 03/07/2022 Last updated : 03/10/2022
The following event types may be captured in the change feed records with schema
- BlobSnapshotCreated
- BlobTierChanged
- BlobAsyncOperationInitiated
+- RestorePointMarkerCreated
The following example shows a change event record in JSON format that uses event schema version 4:
The following example shows a change event record in JSON format that uses event
} ```
+#### Schema version 5
+
+The following event types may be captured in the change feed records with schema version 5:
+
+- BlobCreated
+- BlobDeleted
+- BlobPropertiesUpdated
+- BlobSnapshotCreated
+- BlobTierChanged
+- BlobAsyncOperationInitiated
+
+The following example shows a change event record in JSON format that uses event schema version 5:
+
+```json
+{
+ "schemaVersion": 5,
+ "topic": "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
+ "subject": "/blobServices/default/containers/<container>/blobs/<blob>",
+ "eventType": "BlobCreated",
+ "eventTime": "2022-02-17T13:12:11.5746587Z",
+ "id": "62616073-8020-0000-00ff-233467060cc0",
+ "data": {
+ "api": "PutBlob",
+ "clientRequestId": "b3f9b39a-ae5a-45ac-afad-95ac9e9f2791",
+ "requestId": "62616073-8020-0000-00ff-233467000000",
+ "etag": "0x8D9F2171BE32588",
+ "contentType": "application/octet-stream",
+ "contentLength": 128,
+ "blobType": "BlockBlob",
+ "blobVersion": "2022-02-17T16:11:52.5901564Z",
+ "containerVersion": "0000000000000001",
+ "blobTier": "Archive",
+ "url": "https://www.myurl.com",
+ "sequencer": "00000000000000010000000000000002000000000000001d",
+ "previousInfo": {
+ "SoftDeleteSnapshot": "2022-02-17T13:12:11.5726507Z",
+ "WasBlobSoftDeleted": true,
+ "BlobVersion": "2024-02-17T16:11:52.0781797Z",
+ "LastVersion" : "2022-02-17T16:11:52.0781797Z",
+ "PreviousTier": "Hot"
+ },
+ "snapshot" : "2022-02-17T16:09:16.7261278Z",
+ "blobPropertiesUpdated" : {
+ "ContentLanguage" : {
+ "current" : "pl-Pl",
+ "previous" : "nl-NL"
+ },
+ "CacheControl" : {
+ "current" : "max-age=100",
+ "previous" : "max-age=99"
+ },
+ "ContentEncoding" : {
+ "current" : "gzip, identity",
+ "previous" : "gzip"
+ },
+ "ContentMD5" : {
+ "current" : "Q2h1Y2sgSW51ZwDIAXR5IQ==",
+ "previous" : "Q2h1Y2sgSW="
+ },
+ "ContentDisposition" : {
+ "current" : "attachment",
+ "previous" : ""
+ },
+ "ContentType" : {
+ "current" : "application/json",
+ "previous" : "application/octet-stream"
+ }
+ },
+ "asyncOperationInfo": {
+ "DestinationTier": "Hot",
+ "WasAsyncOperation": true,
+ "CopyId": "copyId"
+ },
+ "blobTagsUpdated": {
+ "previous": {
+ "Tag1": "Value1_3",
+ "Tag2": "Value2_3"
+ },
+ "current": {
+ "Tag1": "Value1_4",
+ "Tag2": "Value2_4"
+ }
+ },
+ "restorePointMarker": {
+ "rpi": "cbd73e3d-f650-4700-b90c-2f067bce639c",
+ "rpp": "cbd73e3d-f650-4700-b90c-2f067bce639c",
+ "rpl": "test-restore-label",
+ "rpt": "2022-02-17T13:56:09.3559772Z"
+ },
+ "storageDiagnostics": {
+ "bid": "9d726db1-8006-0000-00ff-233467000000",
+ "seq": "(2,18446744073709551615,29,29)",
+ "sid": "4cc94e71-f6be-75bf-e7b2-f9ac41458e5a"
+ }
+ }
+}
+```
+ <a id="specifications"></a> ## Specifications
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
There are only two supported ways to use a wildcard character in a URL.
- You can use one just after the final forward slash (/) of a URL. This use of the wildcard character copies all of the files in a directory directly to the destination without placing them into a subdirectory.
-- You can also a wildcard character in the name of a container as long as the URL refers only to a container and not to a blob. You can use this approach to obtain files from a subset of containers.
+- You can also use a wildcard character in the name of a container as long as the URL refers only to a container and not to a blob. You can use this approach to obtain files from a subset of containers.
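+
+For instance, the following hedged sketches show both wildcard placements; the account, container, path, and SAS token values are placeholders.
+
+```bash
+# Copy every file directly under the 'photos' virtual directory (wildcard after the final slash).
+azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/photos/*?<SAS-token>' '/local/photos'
+
+# Copy from every container whose name starts with 'logs-' (wildcard in the container name).
+azcopy copy 'https://mystorageaccount.blob.core.windows.net/logs-*?<SAS-token>' '/local/logs' --recursive
+```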
Download the contents of a directory without copying the containing directory itself.
storage Storage Files How To Create Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-create-nfs-shares.md
- Title: Create an NFS share - Azure Files
-description: Learn how to create an Azure file share that can be mounted using the Network File System protocol.
--- Previously updated : 11/16/2021-----
-# How to create an NFS share
-Azure file shares are fully managed file shares that live in the cloud. This article covers creating a file share that uses the NFS protocol.
-
-## Applies to
-| File share type | SMB | NFS |
-|-|:-:|:-:|
-| Standard file shares (GPv2), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Standard file shares (GPv2), GRS/GZRS | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-## Limitations
--
-### Regional availability
-
-## Prerequisites
-- NFS shares only accept numeric UID/GID. To avoid your clients sending alphanumeric UID/GID, disable ID mapping.-- NFS shares can only be accessed from trusted networks. Connections to your NFS share must originate from one of the following sources:
- - Either [create a private endpoint](storage-files-networking-endpoints.md#create-a-private-endpoint) (recommended) or [restrict access to your public endpoint](storage-files-networking-endpoints.md#restrict-public-endpoint-access).
- - [Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files](storage-files-configure-p2s-vpn-linux.md).
- - [Configure a Site-to-Site VPN for use with Azure Files](storage-files-configure-s2s-vpn.md).
- - Configure [ExpressRoute](../../expressroute/expressroute-introduction.md).
-- If you intend to use the Azure CLI, [install the latest version](/cli/azure/install-azure-cli).
-## Create a FileStorage storage account
-Currently, NFS 4.1 shares are only available as premium file shares. To deploy a premium file share with NFS 4.1 protocol support, you must first create a FileStorage storage account. A storage account is a top-level object in Azure that represents a shared pool of storage which can be used to deploy multiple Azure file shares.
-
-# [Portal](#tab/azure-portal)
-To create a FileStorage storage account, navigate to the Azure portal.
-
-1. In the [Azure portal](https://portal.azure.com/), select **Storage Accounts** on the left menu.
-
- ![Azure portal main page select storage account.](media/storage-how-to-create-premium-fileshare/azure-portal-storage-accounts.png)
-
-1. On the **Storage Accounts** window that appears, choose **Add**.
-1. Select the subscription in which to create the storage account.
-1. Select the resource group in which to create the storage account.
-1. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length, and can include numbers and lowercase letters only.
-1. Select a location for your storage account, or use the default location.
-1. For **Performance** select **Premium**.
-
- You must select **Premium** for **Fileshares** to be an available option in the **Account kind** dropdown.
-
-1. For **Premium account type** choose **Fileshares**.
-
- :::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-performance-premium.png" alt-text="Screenshot of premium performance selected.":::
-
-1. Leave **Replication** set to its default value of **Locally-redundant storage (LRS)**.
-1. Select **Review + Create** to review your storage account settings and create the account.
-1. Select **Create**.
-
-Once your storage account resource has been created, navigate to it.
-
-# [PowerShell](#tab/azure-powershell)
-To create a FileStorage storage account, open up a PowerShell prompt and execute the following commands, remembering to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment.
-
-```powershell
-$resourceGroupName = "<resource-group>"
-$storageAccountName = "<storage-account>"
-$location = "westus2"
-
-$storageAccount = New-AzStorageAccount `
- -ResourceGroupName $resourceGroupName `
- -Name $storageAccountName `
- -SkuName Premium_LRS `
- -Location $location `
- -Kind FileStorage
-```
-
-# [Azure CLI](#tab/azure-cli)
-To create a FileStorage storage account, open up your terminal and execute the following commands, remembering to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment.
-
-```azurecli-interactive
-resourceGroup="<resource-group>"
-storageAccount="<storage-account>"
-location="westus2"
-
-az storage account create \
- --resource-group $resourceGroup \
- --name $storageAccount \
- --location $location \
- --sku Premium_LRS \
- --kind FileStorage
-```
--
-## Disable secure transfer
-
-You can't mount an NFS file share unless you disable secure transfer.
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to the storage account you created.
-1. Select **Configuration**.
-1. Select **Disabled** for **Secure transfer required**.
-1. Select **Save**.
-
- :::image type="content" source="media/storage-files-how-to-mount-nfs-shares/disable-secure-transfer.png" alt-text="Screenshot of storage account configuration screen with secure transfer disabled." lightbox="media/storage-files-how-to-mount-nfs-shares/disable-secure-transfer.png":::
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Set-AzStorageAccount -Name "{StorageAccountName}" -ResourceGroupName "{ResourceGroupName}" -EnableHttpsTrafficOnly $False
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az storage account update -g {ResourceGroupName} -n {StorageAccountName} --https-only false
-```
--
-## Create an NFS share
-
-# [Portal](#tab/azure-portal)
-
-Now that you have created a FileStorage account and configured the networking, you can create an NFS file share. The process is similar to creating an SMB share; you select **NFS** instead of **SMB** when creating the share.
-
-1. Navigate to your storage account and select **File shares**.
-1. Select **+ File share** to create a new file share.
-1. Name your file share and select a provisioned capacity.
-1. For **Protocol** select **NFS**.
-1. For **Root Squash** make a selection.
-
- - Root squash - Access for the remote superuser (root) is mapped to UID (65534) and GID (65534).
- - No root squash (default) - Remote superuser (root) receives access as root.
- - All squash - All user access is mapped to UID (65534) and GID (65534).
-
-1. Select **Create**.
-
- :::image type="content" source="media/storage-files-how-to-create-mount-nfs-shares/files-nfs-create-share.png" alt-text="Screenshot of file share creation blade.":::
-
-# [PowerShell](#tab/azure-powershell)
-
-1. To create a premium file share with the Azure PowerShell module, use the [New-AzRmStorageShare](/powershell/module/az.storage/new-azrmstorageshare) cmdlet.
-
- > [!NOTE]
- > Premium file shares are billed using a provisioned model. The provisioned size of the share is specified by `QuotaGiB` below. For more information, see [Understanding the provisioned model](understanding-billing.md#provisioned-model) and the [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/).
-
- ```powershell
- New-AzRmStorageShare `
- -StorageAccount $storageAccount `
- -Name myshare `
- -EnabledProtocol NFS `
- -RootSquash RootSquash `
- -QuotaGiB 1024
- ```
-
-# [Azure CLI](#tab/azure-cli)
-To create a premium file share with the Azure CLI, use the [az storage share create](/cli/azure/storage/share-rm) command.
-
-> [!NOTE]
-> Premium file shares are billed using a provisioned model. The provisioned size of the share is specified by `quota` below. For more information, see [Understanding the provisioned model](understanding-billing.md#provisioned-model) and the [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/).
-
-```azurecli-interactive
-az storage share-rm create \
- --resource-group $resourceGroup \
- --storage-account $storageAccount \
- --name "myshare" \
- --enabled-protocol NFS \
- --root-squash RootSquash \
- --quota 1024
-```
--
-## Next steps
-Now that you've created an NFS share, to use it you have to mount it on your Linux client. For details, see [How to mount an NFS share](storage-files-how-to-mount-nfs-shares.md).
-
-If you experience any issues, see [Troubleshoot Azure NFS file shares](storage-troubleshooting-files-nfs.md).
storage Storage Files Identity Ad Ds Assign Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-assign-permissions.md
The following table lists the share-level permissions and how they align with th
|[Storage File Data SMB Share Contributor](../../role-based-access-control/built-in-roles.md#storage-file-data-smb-share-contributor) |Allows for read, write, and delete access on files and directories in Azure file shares. [Learn more](storage-files-identity-auth-active-directory-enable.md). |
|[Storage File Data SMB Share Elevated Contributor](../../role-based-access-control/built-in-roles.md#storage-file-data-smb-share-elevated-contributor) |Allows for read, write, delete, and modify ACLs on files and directories in Azure file shares. This role is analogous to a file share ACL of change on Windows file servers. [Learn more](storage-files-identity-auth-active-directory-enable.md). |

## Share-level permissions for specific Azure AD users or groups
-If you intend to use a specific Azure AD user or group to access Azure file share resources, that identity must be a **hybrid identity that exists in both on-premises AD DS and Azure AD**. For example, say you have a user in your AD that is user1@onprem.contoso.com and you have synced to Azure AD as user1@contoso.com using Azure AD Connect sync. For this user to access Azure Files, you must assign the share-level permissions to user1@contoso.com. The same concept applies to groups or service principals. Because of this, you must sync the users and groups from your AD to Azure AD using Azure AD Connect sync.
+If you intend to use a specific Azure AD user or group to access Azure file share resources, that identity must be a **hybrid identity that exists in both on-premises AD DS and Azure AD**. For example, say you have a user in your AD that is user1@onprem.contoso.com and you have synced to Azure AD as user1@contoso.com using Azure AD Connect sync. For this user to access Azure Files, you must assign the share-level permissions to user1@contoso.com. The same concept applies to groups or service principals.
+
+In order for share-level permissions to work, you must:
+
+- Sync the users **and** the groups from your on-premises AD DS to Azure AD using Azure AD Connect sync
+- Add the AD-synced groups to an RBAC role so they can access your storage account
Share-level permissions must be assigned to the Azure AD identity representing the same user or group in your AD DS to support AD DS authentication to your Azure file share. Authentication and authorization against identities that only exist in Azure AD, such as Azure Managed Identities (MSIs), are not supported with AD DS authentication. You can use the Azure portal, Azure PowerShell module, or Azure CLI to assign the built-in roles to the Azure AD identity of a user for granting share-level permissions.
> [!IMPORTANT]
-> The share level permissions will take upto 3 hours to take effect once completed. Please wait for the permissions to sync before connecting to your file share using your credentials
+> The share-level permissions will take up to three hours to take effect once completed. Please wait for the permissions to sync before connecting to your file share using your credentials.
# [Portal](#tab/azure-portal)
When you set a default share-level permission, all authenticated users and group
# [Portal](#tab/azure-portal)
-You cannot currently assign permissions to the storage account with the Azure portal. Use either the Azure PowerShell module or the Azure CLI, instead.
+You can't currently assign permissions to the storage account with the Azure portal. Use either the Azure PowerShell module or the Azure CLI, instead.
# [Azure PowerShell](#tab/azure-powershell)
You could also assign permissions to all authenticated Azure AD users and specif
Now that you've assigned share-level permissions, you must configure directory and file-level permissions. Continue to the next article.
-[Part three: configure directory and file level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md)
+[Part three: configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md)
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
description: This tutorial covers how to use the Azure portal to deploy a Linux
Previously updated : 03/21/2022 Last updated : 03/22/2022
#Customer intent: As an IT admin new to Azure Files, I want to try out Azure file share using NFS and Linux so I can determine whether I want to subscribe to the service.
If you don't have an Azure subscription, create a [free account](https://azure.m
Sign in to the [Azure portal](https://portal.azure.com).
-### Create a storage account
+### Create a FileStorage storage account
-Before you can work with an NFS Azure file share, you have to create an Azure storage account with the Premium performance tier. Premium is the only tier that supports NFS Azure file shares.
+Before you can work with an NFS 4.1 Azure file share, you have to create an Azure storage account with the premium performance tier. Currently, NFS 4.1 shares are only available as premium file shares.
1. On the Azure portal menu, select **All services**. In the list of resources, type **Storage Accounts**. As you begin typing, the list filters based on your input. Select **Storage Accounts**.
1. On the **Storage Accounts** window that appears, choose **+ Create**.
Next, create an Azure VM running Linux to represent the on-premises server. When
1. In the **Basics** tab, under **Project details**, make sure the correct subscription and resource group are selected. Under **Instance details**, type *myVM* for the **Virtual machine name**, and select the same region as your storage account. Choose the default Ubuntu Server version for your **Image**. Leave the other defaults. The default size and pricing is only shown as an example. Size availability and pricing is dependent on your region and subscription.
- :::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" alt-text="Screenshot showing how to enter the project and instance details to create a new VM." lightbox="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" border="true":::
+ :::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" alt-text="Screenshot showing how to enter the project and instance details to create a new V M." lightbox="media/storage-files-quick-create-use-linux/create-vm-project-instance-details.png" border="true":::
1. Under **Administrator account**, select **SSH public key**. Leave the rest of the defaults.
- :::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-admin-account.png" alt-text="Screenshot showing how to configure the administrator account and create an SSH key pair for a new VM." lightbox="media/storage-files-quick-create-use-linux/create-vm-admin-account.png" border="true":::
+ :::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-admin-account.png" alt-text="Screenshot showing how to configure the administrator account and create an S S H key pair for a new V M." lightbox="media/storage-files-quick-create-use-linux/create-vm-admin-account.png" border="true":::
1. Under **Inbound port rules > Public inbound ports**, choose **Allow selected ports** and then select **SSH (22) and HTTP (80)** from the drop-down.
- :::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" alt-text="Screenshot showing how to configure the inbound port rules for a new VM." lightbox="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" border="true":::
+ :::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" alt-text="Screenshot showing how to configure the inbound port rules for a new V M." lightbox="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" border="true":::
> [!IMPORTANT]
> Setting SSH port(s) open to the internet is only recommended for testing. If you want to change this setting later, go back to the **Basics** tab.
Now you're ready to create an NFS file share and provide network-level security
1. Name the new file share *qsfileshare* and enter "100" for the minimum **Provisioned capacity**, or provision more capacity (up to 102,400 GiB) to get more performance. Select **NFS** protocol, leave **No Root Squash** selected, and select **Create**.
- :::image type="content" source="media/storage-files-quick-create-use-linux/create-nfs-share.png" alt-text="Screenshot showing how to name the file share and provision capacity to create a new NFS file share." lightbox="media/storage-files-quick-create-use-linux/create-nfs-share.png" border="true":::
+ :::image type="content" source="media/storage-files-quick-create-use-linux/create-nfs-share.png" alt-text="Screenshot showing how to name the file share and provision capacity to create a new N F S file share." lightbox="media/storage-files-quick-create-use-linux/create-nfs-share.png" border="true":::
### Set up a private endpoint
Next, you'll need to set up a private endpoint for your storage account. This gi
1. Select the file share *qsfileshare*. You should see a dialog that says *Connect to this NFS share from Linux*. Under **Network configuration**, select **Review options**
- :::image type="content" source="media/storage-files-quick-create-use-linux/connect-from-linux.png" alt-text="Screenshot showing how to configure network and secure transfer settings to connect the NFS share from Linux." lightbox="media/storage-files-quick-create-use-linux/connect-from-linux.png" border="true":::
+ :::image type="content" source="media/storage-files-quick-create-use-linux/connect-from-linux.png" alt-text="Screenshot showing how to configure network and secure transfer settings to connect the N F S share from Linux." lightbox="media/storage-files-quick-create-use-linux/connect-from-linux.png" border="true":::
1. Next, select **Setup a private endpoint**.
Next, you'll need to set up a private endpoint for your storage account. This gi
:::image type="content" source="media/storage-files-quick-create-use-linux/create-private-endpoint.png" alt-text="Screenshot showing how to select + private endpoint to create a new private endpoint.":::
-1. Leave **Subscription** and **Resource group** the same. Under **Instance**, provide a name and select a region for the new private endpoint. Your private endpoint must be in the same region as your virtual network, so use the same region as you specified when creating the VM. When all the fields are complete, select **Next: Resource**.
+1. Leave **Subscription** and **Resource group** the same. Under **Instance**, provide a name and select a region for the new private endpoint. Your private endpoint must be in the same region as your virtual network, so use the same region as you specified when creating the VM. When all the fields are complete, select **Next: Resource**.
:::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-basics.png" alt-text="Screenshot showing how to provide the project and instance details for a new private endpoint." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-basics.png" border="true":::
Next, you'll need to set up a private endpoint for your storage account. This gi
1. Under **Networking**, select the virtual network associated with your VM and leave the default subnet. Select **Yes** for **Integrate with private DNS zone**. Select the correct subscription and resource group, and then select **Next: Tags**.
- :::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-networking.png" alt-text="Screenshot showing how to add virtual networking and DNS integration to a new private endpoint." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-networking.png" border="true":::
+ :::image type="content" source="media/storage-files-quick-create-use-linux/private-endpoint-networking.png" alt-text="Screenshot showing how to add virtual networking and D N S integration to a new private endpoint." lightbox="media/storage-files-quick-create-use-linux/private-endpoint-networking.png" border="true":::
1. You can optionally apply tags to categorize your resources, such as applying the name **Environment** and the value **Test** to all testing resources. Enter name/value pairs if desired, and then select **Next: Review + create**.
Create an SSH connection with the VM.
1. Select the Linux VM you created for this tutorial and ensure that its status is **Running**. Take note of the VM's public IP address and copy it to your clipboard.
- :::image type="content" source="media/storage-files-quick-create-use-linux/connect-to-vm.png" alt-text="Screenshot showing how to confirm that the VM is running and find its public IP address." lightbox="media/storage-files-quick-create-use-linux/connect-to-vm.png" border="true":::
+ :::image type="content" source="media/storage-files-quick-create-use-linux/connect-to-vm.png" alt-text="Screenshot showing how to confirm that the V M is running and find its public I P address." lightbox="media/storage-files-quick-create-use-linux/connect-to-vm.png" border="true":::
1. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a Windows machine, open a PowerShell prompt.
Now that you've created an NFS share, to use it you have to mount it on your Lin
1. You should see **Connect to this NFS share from Linux** along with sample commands to use NFS on your Linux distribution and a provided mounting script.
- :::image type="content" source="media/storage-files-quick-create-use-linux/mount-nfs-share.png" alt-text="Screenshot showing how to connect to an NFS file share from Linux using a provided mounting script." lightbox="media/storage-files-quick-create-use-linux/mount-nfs-share.png" border="true":::
+ :::image type="content" source="media/storage-files-quick-create-use-linux/mount-nfs-share.png" alt-text="Screenshot showing how to connect to an N F S file share from Linux using a provided mounting script." lightbox="media/storage-files-quick-create-use-linux/mount-nfs-share.png" border="true":::
1. Select your Linux distribution (Ubuntu).
storage Isv File Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/isv-file-services.md
description: Basic guidance for different ISV options on running file services i
Previously updated : 05/24/2021 Last updated : 03/22/2022
This article compares several ISV solutions that provide files services in Azure
| **Nasuni** | **UniFS** is an enterprise file service with a simpler, low-cost, cloud alternative built on Microsoft Azure | - Primary file storage <br> - Departmental file shares <br> - Centralized file management <br> - multi-site collaboration with global file locking <br> - Windows Virtual Desktop <br> - Remote work/VDI file shares |
| **NetApp** | **Cloud Volumes ONTAP** optimizes your cloud storage costs and performance while enhancing data protection, security, and compliance. Includes enterprise-grade data management, availability, and durability | - Business applications <br> - Relational and NoSQL databases <br> - Big Data & Analytics <br> - Persistent data for containers <br> - CI/CD pipelines <br> - Disaster recovery for on-premises NetApp solutions |
| **Panzura**| **CloudFS** is an enterprise global file system with added resiliency and high-performance. Offers ransomware protection. | - Simplified legacy storage replacement <br> - Backup and disaster recovery, with granular recovery ability <br> - Cloud native access to unstructured data for Analytics, AI/ML. <br> - Multi-site file collaboration, with automatic file locking and real time global file consistency <br> - Global remote work with cloud VDI <br> - Accelerated cloud migration for legacy workloads |
+| **Qumulo** | **Qumulo** on Azure offers multiple petabytes (PiB) of storage capacity and up to 20 GB/s of performance per file system. Windows (SMB) and Linux (NFS) are both natively supported, and Qumulo provides onboard real-time workload analytics. | - Primary file storage for High Performance Compute, Media & Entertainment, Genomics, Electronic design, and Financial modeling. |
| **Tiger Technology** | **Tiger Bridge** is a data management software solution. Provides tiering between an NTFS file system and Azure Blob Storage or Azure managed disks. Creates a single namespace with local file locking. | - Cloud archive<br> - Continuous data protection (CDP) <br> - Disaster Recovery for Windows servers <br> - Multi-site sync and collaboration <br> - Remote workflows (VDI)<br> - Native access to cloud data for Analytics, AI, ML |
| **XenData** | **Cloud File Gateway** creates a highly scalable global file system using Windows file servers | - Global sharing of engineering and scientific files <br> - Collaborative video editing |
This article compares several ISV solutions that provide files services in Azure
### Supported protocols
-| | Nasuni | NetApp CVO | Panzura | Tiger Technology | XenData |
-|--|-|--||--|--|
-| **SMB 2.1** | Yes | Yes | Yes | Yes | Yes |
-| **SMB 3.0** | Yes | Yes | Yes | Yes | Yes |
-| **SMB 3.1** | Yes | Yes | Yes | Yes | Yes |
-| **NFS v3** | Yes | Yes | Yes | Yes | Yes |
-| **NFS v4.1** | Yes | Yes | Yes | Yes | Yes |
-| **iSCSI** | No | Yes | No | Yes | No |
+| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData |
+|--|--|--|--|--|--|--|
+| **SMB 2.1** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **SMB 3.0** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **SMB 3.1** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **NFS v3** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **NFS v4.1** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **iSCSI** | No | Yes | No | No | Yes | No |
### Supported services for persistent storage
-| | Nasuni | NetApp CVO | Panzura | Tiger Technology | XenData |
-|--|-|--||--|--|
-| **Managed disks** | No | Yes | Yes | Yes | No |
-| **Unmanaged disks** | No | No | No | Yes | No |
-| **Azure Storage Block blobs** | Yes | Yes (tiering) | Yes | Yes | Yes |
-| **Azure Storage Page blobs** | No | Yes (for HA) | Yes | No | No |
-| **Azure Archive tier support** | No | No | Yes | Yes | Yes |
-| **Files accessible in non-opaque format** | No | No | No | Yes | Yes |
+| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData |
+|--|--|--|--|--|--|--|
+| **Managed disks** | No | Yes | Yes | No | Yes | No |
+| **Unmanaged disks** | No | No | No | No | Yes | No |
+| **Azure Storage Block blobs** | Yes | Yes (tiering) | Yes | No | Yes | Yes |
+| **Azure Storage Page blobs** | No | Yes (for HA) | Yes | Yes | No | No |
+| **Azure Archive tier support** | No | No | Yes | No | Yes | Yes |
+| **Files accessible in non-opaque format** | No | No | No | No | Yes | Yes |
### Extended features
-| | Nasuni | NetApp CVO | Panzura | Tiger Technology | XenData |
-|--|-|--||--|--|
-| **Operating Environment** | UniFS | ONTAP | PFOS | Windows Server | Windows Server |
-| **High-Availability** | Yes | Yes | Yes | Yes (requires setup) | Yes |
-| **Automatic failover between nodes in the cluster** | Yes | Yes | Yes | Yes (windows cluster) | yes (windows cluster) |
-| **Automatic failover across availability zones** | Yes | No | Yes | Yes (windows cluster) | yes (windows cluster) |
-| **Automatic failover across regions** | Yes (with Nasuni support)| No | No | Yes (windows cluster) | yes (windows cluster) |
-| **Snapshot support** | Yes | Yes | Yes | Yes | No |
-| **Consistent snapshot support** | Yes | Yes | Yes | Yes | No |
-| **Integrated backup** | Yes | Yes (Add-on) | No | Yes | Yes |
-| **Versioning** | Yes | Yes | No | Yes | Yes |
-| **File level restore** | Yes | Yes | Yes | Yes | Yes |
-| **Volume level restore** | Yes | Yes | Yes | Yes | Yes |
-| **WORM** | Yes | Yes | No | Yes | No |
-| **Automatic tiering** | Yes | Yes | No | Yes | Yes |
-| **Global file locking** | Yes | Yes (NetApp Global File Cache) | Yes | Yes | Yes |
-| **Namespace aggregation over backend sources** | Yes | Yes | No | Yes | Yes |
-| **Caching of active data** | Yes | Yes | Yes | yes | Yes |
-| **Supported caching modes** | LRU, manually pinned | LRU | LRU, manually pinned | LRU | LRU |
-| **Encryption at rest** | Yes | Yes | Yes | Yes | No |
-| **De-duplication** | Yes | Yes | Yes | No | No |
-| **Compression** | Yes | Yes | Yes | No | No |
+| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData |
+|--|--|--|--|--|--|--|
+| **Operating Environment** | UniFS | ONTAP | PFOS | Qumulo Core | Windows Server | Windows Server |
+| **High-Availability** | Yes | Yes | Yes | Yes | Yes (requires setup) | Yes |
+| **Automatic failover between nodes in the cluster** | Yes | Yes | Yes | Yes | Yes (Windows cluster) | Yes (Windows cluster) |
+| **Automatic failover across availability zones** | Yes | No | Yes | No | Yes (Windows cluster) | Yes (Windows cluster) |
+| **Automatic failover across regions** | Yes (with Nasuni support) | No | No | No | Yes (Windows cluster) | Yes (Windows cluster) |
+| **Snapshot support** | Yes | Yes | Yes | Yes | Yes | No |
+| **Consistent snapshot support** | Yes | Yes | Yes | Yes | Yes | No |
+| **Integrated backup** | Yes | Yes (Add-on) | No | Yes | Yes | Yes |
+| **Versioning** | Yes | Yes | No | Yes | Yes | Yes |
+| **File level restore** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **Volume level restore** | Yes | Yes | Yes | No | Yes | Yes |
+| **WORM** | Yes | Yes | No | No | Yes | No |
+| **Automatic tiering** | Yes | Yes | No | Yes | Yes | Yes |
+| **Global file locking** | Yes | Yes (NetApp Global File Cache) | Yes | Yes | Yes | Yes |
+| **Namespace aggregation over backend sources** | Yes | Yes | No | Yes | Yes | Yes |
+| **Caching of active data** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **Supported caching modes** | LRU, manually pinned | LRU | LRU, manually pinned | Predictive | LRU | LRU |
+| **Encryption at rest** | Yes | Yes | Yes | Yes | Yes | No |
+| **De-duplication** | Yes | Yes | Yes | No | No | No |
+| **Compression** | Yes | Yes | Yes | No | No | No |
### Authentication sources
-| | Nasuni | NetApp CVO | Panzura | Tiger Technology | XenData |
-|--|-|--||--|--|
-| **Azure AD support** | Yes (via ADDS) | Yes (via ADDS) | Yes (via ADDS) | Yes (via ADDS) | Yes (via ADDS) |
-| **Active directory support** | Yes | Yes | Yes | Yes | Yes |
-| **LDAP support** | Yes | Yes | No | Yes | Yes |
+| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData |
+|--|--|--|--|--|--|--|
+| **Azure AD support** | Yes (via AD DS) | Yes (via AD DS) | Yes (via AD DS) | Yes | Yes (via AD DS) | Yes (via AD DS) |
+| **Active directory support** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **LDAP support** | Yes | Yes | No | Yes | Yes | Yes |
### Management
-| | Nasuni | NetApp CVO | Panzura | Tiger Technology | XenData |
-|--|-|--||--|--|
-| **REST API** | Yes | Yes | Yes | Yes | No |
-| **Web GUI** | Yes | Yes | Yes | No | No |
+| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData |
+|--|--|--|--|--|--|--|
+| **REST API** | Yes | Yes | Yes | Yes | Yes | No |
+| **Web GUI** | Yes | Yes | Yes | Yes | No | No |
### Scalability
-| | Nasuni | NetApp CVO | Panzura | Tiger Technology | XenData |
-|--|-|--||--|--|
-| **Maximum number of nodes in a single cluster** | 100 | 2 (HA) | Tested up to 60 nodes | N / A | N / A |
-| **Maximum number of volumes** | 800 | 1024 | Unlimited | N / A | 1 |
-| **Maximum number of snapshots** | Unlimited | Unlimited | Unlimited | N / A | N / A |
-| **Maximum size of a single namespace** | Unlimited | Depends on infrastructure | Unlimited | N / A | N / A |
+| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData |
+|--|--|--|--|--|--|--|
+| **Maximum number of nodes in a single cluster** | 100 | 2 (HA) | Tested up to 60 nodes | 100 | N / A | N / A |
+| **Maximum number of volumes** | 800 | 1024 | Unlimited | N / A | N / A | 1 |
+| **Maximum number of snapshots** | Unlimited | Unlimited | Unlimited | Unlimited | N / A | N / A |
+| **Maximum size of a single namespace** | Unlimited | Depends on infrastructure | Unlimited | Unlimited | N / A | N / A |
### Licensing
-| | Nasuni | NetApp CVO | Panzura | Tiger Technology | XenData |
-|--|-|--||--|--|
-| **BYOL** | Yes | Yes | Yes | Yes | yes |
-| **Azure Benefit Eligible** | No | Yes | Yes | No | No |
-| **Deployment model (IaaS, SaaS)** | IaaS | IaaS | IaaS | IaaS | IaaS |
+| | Nasuni | NetApp CVO | Panzura | Qumulo | Tiger Technology | XenData |
+|--|--|--|--|--|--|--|
+| **BYOL** | Yes | Yes | Yes | Yes | Yes | Yes |
+| **Azure Benefit Eligible** | No | Yes | Yes | Yes | No | No |
+| **Deployment model (IaaS, SaaS)** | IaaS | IaaS | IaaS | SaaS | IaaS | IaaS |
### Other features
This article compares several ISV solutions that provide files services in Azure
- [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/panzura-file-system.panzura-freedom-filer)
- Byte Level Locking (multiple simultaneous R/W opens)
+**Qumulo**
+- [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview)
+- Support for REST and FTP
**Tiger Technology**
- Invisible to applications
- Partial Restore
stream-analytics Stream Analytics Monitor And Manage Jobs Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-monitor-and-manage-jobs-use-powershell.md
Last updated 03/28/2017
# Monitor and manage Stream Analytics jobs with Azure PowerShell cmdlets
-Learn how to monitor and manage Stream Analytics resources with Azure PowerShell cmdlets and powershell scripting that execute basic Stream Analytics tasks.
+Learn how to monitor and manage Stream Analytics resources with Azure PowerShell cmdlets and PowerShell scripting that execute basic Stream Analytics tasks.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
Title: Import and Export data between serverless Apache Spark pools and SQL pools
-description: This article introduces the Synapse Dedicated SQL Pool Connector API for moving data between dedicated SQL pools and serverless Apache Spark pools.
-
+ Title: Azure Synapse Dedicated SQL Pool Connector for Apache Spark
+description: Presents Azure Synapse Dedicated SQL Pool Connector for Apache Spark for moving data between Apache Spark Runtime (Serverless Spark Pool) and the Synapse Dedicated SQL Pool.
+ Previously updated : 01/27/2022 Last updated : 03/18/2022
-# Azure Synapse Dedicated SQL Pool connector for Apache Spark
+# Azure Synapse Dedicated SQL Pool Connector for Apache Spark
-The Synapse Dedicated SQL Pool Connector is an API that efficiently moves data between [Apache Spark runtime](../../synapse-analytics/spark/apache-spark-overview.md) and [Dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) in Azure Synapse Analytics. This connector is available in `Scala`.
+## Introduction
-It uses Azure Storage and [PolyBase](/sql/relational-databases/polybase/polybase-guide) to transfer data in parallel and at scale.
+The Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse Analytics efficiently transfers large volume data sets between the [Apache Spark runtime](../../synapse-analytics/spark/apache-spark-overview.md) and the [Dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md). The connector is implemented in the `Scala` language and is shipped as a default library within the Azure Synapse environment - both the workspace Notebook and the Serverless Spark Pool runtime. Using the Spark magic command `%%spark`, the Scala connector code can be placed in any Synapse Notebook cell regardless of the notebook's language preference.
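+
+For instance, a minimal sketch (adapted from the notebook usage shown in the earlier version of this article) of running the connector from a cell of a notebook whose default language is PySpark:
+
+```Scala
+%%spark
+//The %%spark magic switches this cell to Scala, so the Scala connector
+//can be used even though the notebook's default language is PySpark.
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+```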
-## Authentication
+At a high-level, the connector provides the following capabilities:
-Authentication works automatically with the signed in Azure Active Directory user after the following prerequisites.
+* Write to Azure Synapse Dedicated SQL Pool:
+ * Ingest large volume data to Internal and External table types.
+ * Supports following DataFrame save mode preferences:
+ * `Append`
+ * `ErrorIfExists`
+ * `Ignore`
+ * `Overwrite`
+ * Write to External Table type supports Parquet and Delimited Text file format (example - CSV).
+ * Write path implementation leverages the [COPY statement](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md) instead of the CETAS/CTAS approach.
+ * Enhancements to optimize end-to-end write throughput performance.
+ * Introduces an optional call-back handle (a Scala function argument) that clients can use to receive post-write metrics.
+ * For example - time taken to stage data, time taken to write data to target tables, number of records staged, number of records committed to target table, and the failure cause (if the request submitted has failed).
+* Read from Azure Synapse Dedicated SQL Pool:
+ * Read large data sets from Synapse Dedicated SQL Pool Tables (Internal and External) and Views.
+ * Comprehensive predicate push down support, where filters on DataFrame get mapped to corresponding SQL predicate push down.
+ * Support for column pruning (a minimal read sketch follows this list).
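+
+A minimal read sketch, assuming placeholder names for the server, database, schema, table, and columns; the `select` and `filter` on the DataFrame are the candidates for column pruning and predicate push down respectively:
+
+```Scala
+import org.apache.spark.sql.DataFrame
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+
+//Initialize a DataFrame over a Dedicated SQL Pool table.
+val readDF:DataFrame = spark.read.
+    option(Constants.SERVER, "<dedicated-pool-sql-server-name>.sql.azuresynapse.net").
+    synapsesql("<database_name>.<schema_name>.<table_name>")
+
+//The projection (column pruning) and the filter (predicate push down)
+//are applied before the read is triggered by the show action.
+readDF.
+    select("<column_a>", "<column_b>").
+    filter(readDF("<column_a>").isNotNull).
+    show(10)
+```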
-* Add the user to [db_exporter role](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse) using system-stored procedure [sp_addrolemember](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql).
-* Add the user to [Storage Blob Data Contributor role](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) on the storage account.
+## Orchestration Approach
-The connector also supports password-based [SQL authentication](../../azure-sql/database/logins-create-manage.md#authentication-and-authorization) after the following prerequisites.
- * Add the user to [db_exporter role](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse) using system-stored procedure [sp_addrolemember](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql).
- * Create an [external data source](/sql/t-sql/statements/create-external-data-source-transact-sql), whose [database scoped credential](/sql/t-sql/statements/create-database-scoped-credential-transact-sql) secret is the access key to an Azure Storage Account. The API requires the name of this external data source.
+### Write
-## API reference
+![Write-Orchestration](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-write-orchestration.png)
-See the [Scala API reference](https://synapsesql.blob.core.windows.net/docs/1.0.0/scaladocs/com/microsoft/spark/sqlanalytics/https://docsupdatetracker.net/index.html).
+### Read
-## Example usage
+![Read-Orchestration](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-read-orchestration.png)
-* Create and show a `DataFrame` representing a database table in the dedicated SQL pool.
+## Prerequisites
- ```scala
- import com.microsoft.spark.sqlanalytics.utils.Constants
- import org.apache.spark.sql.SqlAnalyticsConnector._
+This section details the necessary prerequisite steps, including Azure resource setup and configuration, along with the authentication and authorization requirements for using the Azure Synapse Dedicated SQL Pool Connector for Apache Spark.
- val df = spark.read.
- option(Constants.SERVER, "servername.database.windows.net").
- synapsesql("databaseName.schemaName.tablename")
+### Azure Resources
- df.show
- ```
+Review and set up the following dependent Azure Resources:
-* Save the content of a `DataFrame` to a database table in the dedicated SQL pool. The table type can be internal (i.e. managed) or external.
+* [Azure Data Lake Storage](../../storage/blobs/data-lake-storage-introduction.md) - used as the primary storage account for the Azure Synapse Workspace.
+* [Azure Synapse Workspace](../../synapse-analytics/get-started-create-workspace.md) - create notebooks, build and deploy DataFrame based ingress-egress workflows.
+* [Dedicated SQL Pool (formerly SQL DW)](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) - used to host and manage various data assets.
+* [Azure Synapse Serverless Spark Pool](../../synapse-analytics/get-started-analyze-spark.md) - Spark runtime where the jobs are executed as Spark Applications.
- ```scala
- import com.microsoft.spark.sqlanalytics.utils.Constants
- import org.apache.spark.sql.SqlAnalyticsConnector._
+#### Database Setup
- val df = spark.sql("select * from tmpview")
+Connect to the Synapse Dedicated SQL Pool database and run the following setup statements:
- df.write.
- option(Constants.SERVER, "servername.database.windows.net").
- synapsesql("databaseName.schemaName.tablename", Constants.INTERNAL)
- ```
+* Create a database user for the Azure Active Directory User Identity. This must be the same identity that is used to log in to the Azure Synapse Workspace. You can skip this step if you'll only use the Connector to write data to destination tables in the Azure Synapse Dedicated SQL Pool; the database user is required only when your scenario also reads from the Synapse Dedicated SQL Pool, because the user must exist in order to be assigned the [`db_exporter`](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse) role.
+
+ ```sql
+ CREATE USER [username@domain.com] FROM EXTERNAL PROVIDER;
+ ```
-* Use the connector API with SQL authentication with option keys `Constants.USER` and `Constants.PASSWORD`. It also requires option key `Constants.DATA_SOURCE`, specifying an external data source.
+* Create the schema in which tables will be defined, so that the Connector can successfully write to and read from the respective tables.
- ```scala
- import com.microsoft.spark.sqlanalytics.utils.Constants
- import org.apache.spark.sql.SqlAnalyticsConnector._
+ ```sql
+ CREATE SCHEMA [<schema_name>];
+ ```
- val df = spark.read.
- option(Constants.SERVER, "servername.database.windows.net").
- option(Constants.USER, "username").
- option(Constants.PASSWORD, "password").
- option(Constants.DATA_SOURCE, "datasource").
- synapsesql("databaseName.schemaName.tablename")
+### Authentication
- df.show
- ```
+#### Azure Active Directory based Authentication
-* We can use the `Scala` connector API to interact with content from a `DataFrame` in `PySpark` by using [DataFrame.createOrReplaceTempView](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.createOrReplaceTempView.html#pyspark.sql.DataFrame.createOrReplaceTempView) or [DataFrame.createOrReplaceGlobalTempView](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.createOrReplaceGlobalTempView.html#pyspark.sql.DataFrame.createOrReplaceGlobalTempView).
+Azure Active Directory based authentication is an integrated authentication approach. The user is required to successfully log in to the Azure Synapse Analytics Workspace, and the runtime leverages the user's tokens when interacting with respective resources such as storage and the Synapse Dedicated SQL Pool. Verify that the user can connect to and access these resources to perform write and read actions. The User Identity must be set up in the Azure Active Directory associated with the Azure Subscription where the resources are set up and configured to connect using Azure Active Directory based authentication.
- ```py
- %%pyspark
- df.createOrReplaceTempView("tempview")
- ```
+#### SQL Basic Authentication
- ```scala
- %%spark
- import com.microsoft.spark.sqlanalytics.utils.Constants
- import org.apache.spark.sql.SqlAnalyticsConnector._
+An alternative to Azure Active Directory based authentication is to use SQL basic authentication. This approach requires additional parameters as described below:
- val df = spark.sqlContext.sql("select * from tempview")
+* Write Data to Azure Synapse Dedicated SQL Pool
+ * When reading data from the data source by initializing a DataFrame object:
+ * Consider an example scenario where the data is read from a Storage Account for which the workspace user doesn't have access permissions.
+ * In such a scenario, the initialization attempt should pass relevant access credentials, as shown in the following sample code snippet:
- df.write.
- option(Constants.SERVER, "servername.database.windows.net").
- synapsesql("databaseName.schemaName.tablename")
- ```
+ ```Scala
+ //Specify options that Spark runtime must support when interfacing and consuming source data
+ val storageAccountName="<storageAccountName>"
+ val storageContainerName="<storageContainerName>"
+ val tenantId="<AzureADTenantID>" //Azure AD tenant (directory) ID used to build the OAuth token endpoint
+ val spnClientId="<ServicePrincipalClientID>"
+ val spnSecretKeyUsedAsAuthCred="<spn_secret_key_value>"
+ val dfReadOptions:Map[String, String]=Map("header"->"true",
+ "delimiter"->",",
+ "fs.defaultFS" -> s"abfss://$storageContainerName@$storageAccountName.dfs.core.windows.net",
+ s"fs.azure.account.auth.type.$storageAccountName.dfs.core.windows.net" -> "OAuth",
+ s"fs.azure.account.oauth.provider.type.$storageAccountName.dfs.core.windows.net" ->
+ "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
+ "fs.azure.account.oauth2.client.id" -> s"$spnClientId",
+ "fs.azure.account.oauth2.client.secret" -> s"$spnSecretKeyUsedAsAuthCred",
+ "fs.azure.account.oauth2.client.endpoint" -> s"https://login.microsoftonline.com/$subscriptionId/oauth2/token",
+ "fs.AbstractFileSystem.abfss.impl" -> "org.apache.hadoop.fs.azurebfs.Abfs",
+ "fs.abfss.impl" -> "org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem")
+ //Initialize the Storage Path string, where source data is maintained/kept.
+ val pathToInputSource=s"abfss://$storageContainerName@$storageAccountName.dfs.core.windows.net/<base_path_for_source_data>/<specific_file (or) collection_of_files>"
+ //Define data frame to interface with the data source
+ val df:DataFrame = spark.
+ read.
+ options(dfReadOptions).
+ csv(pathToInputSource).
+ limit(100)
+ ```
-## Next steps
+ * Similar to the snippet above, a DataFrame initialized over any other source you want to fetch data from must carry the credentials required to read from that source.
+
+ * When staging the source data to the temporary folders:
+ * The Connector expects that the workspace user is granted permission to connect and successfully write to the staging folders (i.e., the temporary folders).
+
+ * Writing to Azure Synapse Dedicated SQL Pool table:
+ * To successfully connect to the Azure Synapse Dedicated SQL Pool table, the Connector expects the `user` and `password` option parameters.
+ * Committing data to SQL occurs in one of two forms, depending on the type of the target table that the user's request requires:
+ * Internal Table Type - the Connector requires the option `staging_storage_account_key` set on the DataFrameWriter[Row] before invoking the method `synapsesql`.
+ * External Table Type - the Connector expects the workspace user to have read/write access to the target storage location where the external table's data is staged.
-- [Create a dedicated SQL pool using the Azure portal](../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md)
-- [Create a new Apache Spark pool using the Azure portal](../../synapse-analytics/quickstart-create-apache-spark-pool-portal.md)
-- [Create, develop, and maintain Synapse notebooks in Azure Synapse Analytics](../../synapse-analytics/spark/apache-spark-development-using-notebooks.md)
+* Reading from Azure Synapse Dedicated SQL Pool table:
+ * With the SQL basic authentication approach, in order to read data from the source tables, the Connector must also be able to write to the staging location.
+ * This requirement is met by providing the `data_source` configuration option on the DataFrameReader reference, prior to invoking the `synapsesql` method (see the sketch below).
+
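+A minimal sketch of a SQL basic authentication read, reusing the option keys (`Constants.USER`, `Constants.PASSWORD`, `Constants.DATA_SOURCE`) documented in the earlier version of this article; the server, credential, data source, and table names are placeholders:
+
+```Scala
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+
+//SQL basic authentication requires user/password options, plus a
+//pre-created external data source whose database scoped credential
+//can reach the staging storage location.
+val df = spark.read.
+    option(Constants.SERVER, "<dedicated-pool-sql-server-name>.sql.azuresynapse.net").
+    option(Constants.USER, "<sql_user_name>").
+    option(Constants.PASSWORD, "<sql_user_password>").
+    option(Constants.DATA_SOURCE, "<external_data_source_name>").
+    synapsesql("<database_name>.<schema_name>.<table_name>")
+
+df.show(10)
+```
+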
+### Authorization
+
+This section focuses on required authorization grants that must be set for the User on respective Azure Resource Types - Azure Storage and Azure Synapse Dedicated SQL Pool.
+
+#### [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)
+
+There are two ways to grant access permissions to Azure Data Lake Storage Gen2 - Storage Account:
+
+* Role based Access Control role - [Storage Blob Data Contributor role](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)
+ * Assigning the `Storage Blob Data Contributor Role` grants the User permissions to read, write and delete from the Azure Storage Blob Containers.
+ * RBAC offers coarse-grained control at the container level.
+* [Access Control Lists (ACL)](../../storage/blobs/data-lake-storage-access-control.md)
+ * ACL approach allows for fine-grained controls over specific paths and/or files under a given folder.
+ * ACL checks aren't enforced if the User is already granted permissions using RBAC approach.
+ * There are two broad types of ACL permissions:
+ * Access Permissions (applied at a specific level or object).
+ * Default Permissions (automatically applied for all child objects at the time of their creation).
+ * Type of permissions include:
+ * `Execute` enables ability to traverse or navigate the folder hierarchies.
+ * `Read` enables ability to read.
+ * `Write` enables ability to write.
+ * It's important to configure ACLs such that the Connector can successfully write and read from the storage locations.
+
+#### [Azure Synapse Dedicated SQL Pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md)
+
+This section details the authorization settings necessary to interact with Azure Synapse Dedicated SQL Pool. This step can be skipped if the User Identity used to log in to the Azure Synapse Analytics Workspace is also configured as an `Active Directory Admin` for the database in the target Synapse Dedicated SQL Pool.
+
+* Write Scenario
+ * Connector uses the COPY command to write data from staging to the internal table's managed location.
+ * Set up the required permissions as described [here](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md#set-up-the-required-permissions).
+ * The following snippet summarizes them:
+
+ ```sql
+ --Make sure your user has the permissions to CREATE tables in the [dbo] schema
+ GRANT CREATE TABLE TO [<your_domain_user>@<your_domain_name>.com];
+ GRANT ALTER ON SCHEMA::<target_database_schema_name> TO [<your_domain_user>@<your_domain_name>.com];
+
+ --Make sure your user has ADMINISTER DATABASE BULK OPERATIONS permissions
+ GRANT ADMINISTER DATABASE BULK OPERATIONS TO [<your_domain_user>@<your_domain_name>.com];
+
+ --Make sure your user has INSERT permissions on the target table
+ GRANT INSERT ON <your_table> TO [<your_domain_user>@<your_domain_name>.com]
+ ```
+
+* Read Scenario
+ * The data set that matches the User's read requirements (i.e., table, columns, and predicates) is first fetched to an external staging location using external tables.
+ * In order to successfully create temporary external tables over data in the staging folders, grant the user the `db_exporter` role using the system stored procedure [sp_addrolemember](/sql/relational-databases/system-stored-procedures/sp-addrolemember-transact-sql).
+ * Following is a reference sample:
+
+ ```sql
+ EXEC sp_addrolemember 'db_exporter', [<your_domain_user>@<your_domain_name>.com];
+ ```
+
+## Processing the Response
+
+Invoking `synapsesql` has two possible end states - successful completion of the request (write or read) or a failure state. This section reviews how to handle each state for the write and read use cases.
+
+### Read Request Response
+
+Upon completion, whether the request succeeds or fails, the read result is rendered below the respective cell. Detailed information can be obtained from the application logs.
+
+### Write Request Response
+
+The new write path API introduces a graceful approach where the results can be programmatically interpreted and processed, in addition to printing the result snippet below the cell from which the request is submitted. The method `synapsesql` now supports an additional argument to pass an optional lambda (i.e., a Scala function). The expected arguments for this function are a `scala.collection.immutable.Map[String, Any]` and an optional `Throwable`.
+
+Benefits of this approach over printing the end state result to console (partial snippet) and to the application logs include:
+
+* Allow end-users (i.e., developers) to model dependent workflow activities that rely on the outcome of a prior write, without having to change the cell.
+* Provide a programmatic approach to handle the outcome - `if <success> <do_something_next> else <capture_error_and_handle_necessary_mitigation>`.
+ * See the sample error-handling snippet presented in the section [Write Request Callback Handle](../../synapse-analytics/spark/synapse-spark-sql-pool-import-export.md#write-request-callback-handle).
+* We recommend reviewing and using the [Write Scenario - Code Template](../../synapse-analytics/spark/synapse-spark-sql-pool-import-export.md#write-code-template), which makes it easy to adopt the signature changes and encourages better write workflows that leverage the callback function (i.e., the lambda).
+
+## Connector API Documentation
+
+Azure Synapse Dedicated SQL Pool Connector for Apache Spark - [API Documentation](https://synapsesql.blob.core.windows.net/docs/2.0.0/scaladocs/com/microsoft/spark/sqlanalytics/utils/index.html).
+
+## Code Templates
+
+This section presents reference code templates to describe how to use and invoke the Azure Synapse Dedicated SQL Pool Connector for Apache Spark.
+
+### Write Scenario
+
+#### `synapsesql` Write Method Signature
+
+The method signature for the Connector version built for Spark 2.4.8 has one argument fewer than the one applied to the Spark 3.1.2 version. Following are the two method signatures:
+
+* Spark Pool Version 2.4.8
+
+```Scala
+synapsesql(tableName:String,
+ tableType:String = Constants.INTERNAL,
+ location:Option[String] = None)
+```
+
+* Spark Pool Version 3.1.2
+
+```Scala
+synapsesql(tableName:String,
+ tableType:String = Constants.INTERNAL,
+ location:Option[String] = None,
+ callBackHandle:Option[(Map[String, Any], Option[Throwable])=>Unit] = None)
+```
+
+#### Write Code Template
+
+```Scala
+//Add required imports
+import org.apache.spark.sql.DataFrame
+import org.apache.spark.sql.SaveMode
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+
+//Define read options for example, if reading from CSV source, configure header and delimiter options.
+val pathToInputSource="abfss://<storage_container_name>@<storage_account_name>.dfs.core.windows.net/<some_folder>/<some_dataset>.csv"
+
+//Define read configuration for the input CSV
+val dfReadOptions:Map[String, String] = Map("header" -> "true", "delimiter" -> ",")
+
+//Initialize DataFrame that reads CSV data from a given source
+val readDF:DataFrame=spark.
+ read.
+ options(dfReadOptions).
+ csv(pathToInputSource).
+ limit(1000) //Reads first 1000 rows from the source CSV input.
+
+//Set up and trigger the read DataFrame for write to Synapse Dedicated SQL Pool.
+//Fully qualified SQL Server DNS name can be obtained using one of the following methods:
+// 1. Synapse Workspace - Manage Pane - SQL Pools - <Properties view of the corresponding Dedicated SQL Pool>
+// 2. From Azure Portal, follow the bread-crumbs for <Portal_Home> -> <Resource_Group> -> <Dedicated SQL Pool> and then go to Connection Strings/JDBC tab.
+val writeOptions:Map[String, String] = Map(Constants.SERVER -> "<dedicated-pool-sql-server-name>.sql.azuresynapse.net",
+ Constants.TEMP_FOLDER -> "abfss://<storage_container_name>@<storage_account_name>.dfs.core.windows.net/<some_temp_folder>")
+
+//Set up optional callback/feedback function that can receive post write metrics of the job performed.
+var errorDuringWrite:Option[Throwable] = None
+val callBackFunctionToReceivePostWriteMetrics: (Map[String, Any], Option[Throwable]) => Unit =
+ (feedback: Map[String, Any], errorState: Option[Throwable]) => {
+ println(s"Feedback map - ${feedback.map{case(key, value) => s"$key -> $value"}.mkString("{",",\n","}")}")
+ errorDuringWrite = errorState
+}
+
+//Configure and trigger write to Synapse Dedicated SQL Pool (note - default SaveMode is set to ErrorIfExists)
+readDF.
+ write.
+ options(writeOptions).
+ mode(SaveMode.Overwrite).
+ synapsesql(tableName = "<database_name>.<schema_name>.<table_name>",
+ tableType = Constants.INTERNAL, //For external table type value is Constants.EXTERNAL
+ location = None, //Not required for writing to an internal table
+ callBackHandle = Some(callBackFunctionToReceivePostWriteMetrics))
+
+//If write request has failed, raise an error and fail the Cell's execution.
+if(errorDuringWrite.isDefined) throw errorDuringWrite.get
+```
+
+#### SaveModes
+
+Following is a brief description of how the SaveMode setting by the user translates into actions taken by the Connector (a short `Append` sketch follows this list):
+
+* ErrorIfExists (Connector's default save mode)
+ * If destination table exists, then the write is aborted with an exception returned to the callee. Else, a new table is created with data from the staging folders.
+* Ignore
+ * If the destination table exists, then the write request is ignored and no error is returned. Else, a new table is created with data from the staging folders.
+* Overwrite
+ * If the destination table exists, then existing data in the destination is replaced with data from the staging folders. Else, a new table is created with data from the staging folders.
+* Append
+ * If the destination table exists, then the new data is appended to it. Else, a new table is created with data from the staging folders.
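+
+As a minimal sketch, appending to an existing table (created if absent) only requires changing the mode in the write template above:
+
+```Scala
+//Append rows from readDF to an existing destination table instead of replacing it.
+readDF.write.
+  options(writeOptions).
+  mode(SaveMode.Append).
+  synapsesql(tableName = "<database_name>.<schema_name>.<table_name>")
+```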
+
+#### Write Request Callback Handle
+
+The new write path API changes introduced an experimental feature that provides the client with a key->value map of post-write metrics. These metrics provide information such as the number of records staged, the number of records written to the SQL table, and the time spent staging the data and executing the SQL statements that write it to the Synapse Dedicated SQL Pool. The string value for each metric key is defined on, and accessible from, the new object reference `Constants.FeedbackConstants`. By default, these metrics are written to the Spark Driver logs. You can also fetch them by passing a callback handle (a Scala function). Following is the signature of this function:
+
+```Scala
+//The function signature has two arguments - a `scala.collection.immutable.Map[String, Any]` and an Option[Throwable].
+//If a reference to this handle is passed to `synapsesql`, it's invoked after the write completes.
+//Both arguments hold valid objects in the success and failure cases; on failure, the second argument is a Some(Throwable), i.e., a reference to the error.
+(Map[String, Any], Option[Throwable]) => Unit
+```
+
+Following is a list of some notable metric constants and their string values:
+
+* WRITE_FAILURE_CAUSE -> "WriteFailureCause"
+* TIME_INMILLIS_TO_COMPLETE_DATA_STAGING -> "DataStagingSparkJobDurationInMilliseconds"
+* NUMBER_OF_RECORDS_STAGED_FOR_SQL_COMMIT -> "NumberOfRecordsStagedForSQLCommit"
+* TIME_INMILLIS_TO_EXECUTE_COMMIT_SQLS -> "SQLStatementExecutionDurationInMilliseconds"
+* COPY_INTO_COMMAND_PROCESSED_ROW_COUNT -> "rows_processed"
+* ROW_COUNT_POST_WRITE_ACTION (applies when the table type is external)
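+
+As a hedged illustration of consuming these metrics, a callback can pull individual keys out of the feedback map; the key string below is taken from the list above and may vary across Connector versions:
+
+```Scala
+//Sketch of a callback that surfaces the staged-record count and any failure cause.
+val logStagedRecordCount: (Map[String, Any], Option[Throwable]) => Unit =
+  (feedback: Map[String, Any], errorState: Option[Throwable]) => {
+    //Key name as listed above; treat it as an assumption for your Connector version.
+    feedback.get("NumberOfRecordsStagedForSQLCommit").
+      foreach(count => println(s"Records staged for SQL commit: $count"))
+    errorState.foreach(error => println(s"Write failed with: ${error.getMessage}"))
+  }
+```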
+
+Following is a sample printout of the post-write metrics map:
+
+ ```doc
+ {
+ SparkApplicationId -> <spark_yarn_application_id>,
+ SQLStatementExecutionDurationInMilliseconds -> 10113,
+ WriteRequestReceivedAtEPOCH -> 1647523790633,
+ WriteRequestProcessedAtEPOCH -> 1647523808379,
+ StagingDataFileSystemCheckDurationInMilliseconds -> 60,
+ command -> "COPY INTO [schema_name].[table_name] ...",
+ NumberOfRecordsStagedForSQLCommit -> 100,
+ DataStagingSparkJobEndedAtEPOCH -> 1647523797245,
+ SchemaInferenceAssertionCompletedAtEPOCH -> 1647523790920,
+ DataStagingSparkJobDurationInMilliseconds -> 5252,
+ rows_processed -> 100,
+ SaveModeApplied -> TRUNCATE_COPY,
+ DurationInMillisecondsToValidateFileFormat -> 75,
+ status -> Completed,
+ SparkApplicationName -> <spark_application_name>,
+ ThreePartFullyQualifiedTargetTableName -> <database_name>.<schema_name>.<table_name>,
+ request_id -> <query_id_as_retrieved_from_synapse_dedicated_sql_db_query_reference>,
+ StagingFolderConfigurationCheckDurationInMilliseconds -> 2,
+ JDBCConfigurationsSetupAtEPOCH -> 193,
+ StagingFolderConfigurationCheckCompletedAtEPOCH -> 1647523791012,
+ FileFormatValidationsCompletedAtEPOCHTime -> 1647523790995,
+ SchemaInferenceCheckDurationInMilliseconds -> 91,
+ SaveModeRequested -> Overwrite,
+ DataStagingSparkJobStartedAtEPOCH -> 1647523791993,
+ DurationInMillisecondsTakenToGenerateWriteSQLStatements -> 4
+ }
+ ```
+
+### Read Scenario
+
+#### `synapsesql` Read Method Signature
+
+Following is the `synapsesql` read method signature (it applies to both the Spark 2.4.8 and Spark 3.1.2 Connector versions):
+
+```Scala
+synapsesql(tableName:String) => org.apache.spark.sql.DataFrame
+```
+
+#### Read Code Template
+
+```Scala
+//Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB
+//Azure Active Directory based authentication approach is preferred here.
+import org.apache.spark.sql.DataFrame
+import org.apache.spark.sql.functions.col
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+
+//Read from existing internal table
+val dfToReadFromTable:DataFrame = spark.read.
+ option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net").
+ option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>").
+ synapsesql("<database_name>.<schema_name>.<table_name>").
+  select("<some_column_1>", "<some_column_5>", "<some_column_n>"). //Column-pruning, i.e., select only the required columns
+  filter(col("Title").startsWith("E")). //Filter criteria that gets translated into SQL push-down predicates
+ limit(10) //Fetch a sample of 10 records
+
+//Show contents of the dataframe
+dfToReadFromTable.show()
+```
+
+### Additional Code Samples
+
+#### Using Connector with PySpark
+
+```Python
+%%spark
+
+import org.apache.spark.sql.DataFrame
+import com.microsoft.spark.sqlanalytics.utils.Constants
+import org.apache.spark.sql.SqlAnalyticsConnector._
+
+//Code for either writing to or reading from an Azure Synapse Dedicated SQL Pool (similar to the code templates above)
+
+```
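+
+Because the Connector API is Scala-only, one pattern (a sketch, assuming the Scala and PySpark cells share the same Spark session, and with an illustrative view name) is to land the read result in a temporary view from the `%%spark` cell and then consume it from a PySpark cell via `spark.sql`:
+
+```Scala
+%%spark
+//Read via the Scala Connector and expose the result to other cell languages as a temp view.
+val scalaReadDF = spark.read.
+  option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net").
+  option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_temp_folder>").
+  synapsesql("<database_name>.<schema_name>.<table_name>")
+scalaReadDF.createOrReplaceTempView("connector_read_view")
+//A PySpark cell can then run: spark.sql("select * from connector_read_view")
+```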
+
+### Things to Note
+
+The Connector leverages the capabilities of dependent resources (Azure Storage and the Synapse Dedicated SQL Pool) to achieve efficient data transfers. Following are a few important aspects that must be taken into consideration when tuning for optimized performance (note that optimized doesn't necessarily mean fast; it also relates to predictable outcomes):
+
+* The `Performance Level` setting in Synapse Dedicated SQL Pool will drive write throughput, in terms of maximum achievable concurrency, data distribution and threshold cap for max rows per transaction.
+ * Review the [transaction size](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-transactions.md#transaction-size) limitation when selecting the `Performance Level` of the Synapse Dedicated SQL Pool.
+ * `Performance Level` can be adjusted using the [Scale](../../synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md) feature.
+* Initial parallelism for a write scenario depends heavily on the number of partitions the job identifies. The partition count can be adjusted using the Spark configuration setting `spark.sql.files.maxPartitionBytes` to better re-group the source data during file scans, or by calling the DataFrame's `repartition` method (see the sketch after this list).
+* Besides factoring in the data characteristics, also derive the optimal Executor node count and size (for example, small vs. medium sizes, which drive CPU and memory resource allocations).
+* When tuning for write or read performance, factor in the dominating pattern (I/O intensive or CPU intensive) and adjust your choice of Spark Pool capacity accordingly. Leverage auto-scale.
+* Review the data orchestration illustrations to see where your job's performance can suffer. For example:
+  * In a read scenario, determine whether adding filters or selecting only the required columns (that is, column-pruning) can help avoid unwarranted data movement.
+  * In a write scenario, review the source DataFrame plan and identify whether concurrency can be tuned when reading the data for staging. This initial parallelism helps downstream data movement as well. Use the feedback handle to spot patterns.
+* Besides Spark and the Synapse Dedicated SQL Pool, also watch for write and read latencies associated with the ADLS Gen2 resources used to stage data or to hold data-at-rest.
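+
+As a sketch of the partition-tuning levers mentioned above (the byte size and partition count below are illustrative placeholders, not recommendations):
+
+```Scala
+//Reduce maxPartitionBytes so file scans produce more (smaller) input partitions...
+spark.conf.set("spark.sql.files.maxPartitionBytes", "134217728") //128 MB, illustrative
+//...or explicitly repartition the source DataFrame before triggering the write.
+val repartitionedDF = readDF.repartition(24) //Partition count is illustrative.
+```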
+
+## Additional Reading
+
+* [Runtime library versions](../../synapse-analytics/spark/apache-spark-3-runtime.md)
+* [Azure Storage](../../storage/blobs/data-lake-storage-introduction.md)
+* [Dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md)
synapse-analytics Develop Storage Files Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md
This article describes the types of credentials you can use and how credential l
## Storage permissions A serverless SQL pool in Synapse Analytics workspace can read the content of files stored in Azure Data Lake storage. You need to configure permissions on storage to enable a user who executes a SQL query to read the files. There are three methods for enabling the access to the files:-- **[Role based access control (RBAC)](../../role-based-access-control/overview.md)** enables you to assign a role to some Azure AD user in the tenant where your storage is placed. A reader must have `Storage Blob Data Reader`, `Storage Blob Data Contributor`, or `Storage Blob Data Owner` RBAC role on storage account. A user who writes data in the Azure storage must have `Storage Blob Data Writer` or `Storage Blob Data Owner` role. Note that `Storage Owner` role does not imply that a user is also `Storage Data Owner`.
+- **[Role based access control (RBAC)](../../role-based-access-control/overview.md)** enables you to assign a role to some Azure AD user in the tenant where your storage is placed. A reader must have `Storage Blob Data Reader`, `Storage Blob Data Contributor`, or `Storage Blob Data Owner` RBAC role on storage account. A user who writes data in the Azure storage must have `Storage Blob Data Contributor` or `Storage Blob Data Owner` role. Note that `Storage Owner` role does not imply that a user is also `Storage Data Owner`.
- **Access Control Lists (ACL)** enable you to define fine-grained [Read(R), Write(W), and Execute(X) permissions](../../storage/blobs/data-lake-storage-access-control.md#levels-of-permission) on the files and directories in Azure storage. ACL can be assigned to Azure AD users. If readers want to read a file on a path in Azure Storage, they must have Execute(X) ACL on every folder in the file path, and Read(R) ACL on the file. [Learn more about how to set ACL permissions in the storage layer](../../storage/blobs/data-lake-storage-access-control.md#how-to-set-acls). - **Shared access signature (SAS)** enables a reader to access the files on the Azure Data Lake storage using the time-limited token. The reader doesn't even need to be authenticated as an Azure AD user. A SAS token contains the permissions granted to the reader as well as the period when the token is valid. A SAS token is a good choice for time-constrained access to any user that doesn't even need to be in the same Azure AD tenant. A SAS token can be defined on the storage account or on specific directories. Learn more about [granting limited access to Azure Storage resources using shared access signatures](../../storage/common/storage-sas-overview.md).
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-external-tables.md
WITH (
{ FIELD_TERMINATOR = field_terminator | STRING_DELIMITER = string_delimiter
- | First_Row = integer
+ | FIRST_ROW = integer
| USE_TYPE_DEFAULT = { TRUE | FALSE }
- | Encoding = {'UTF8' | 'UTF16'}
+ | ENCODING = {'UTF8' | 'UTF16'}
| PARSER_VERSION = {'parser_version'} } ```
virtual-desktop Azure Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor-costs.md
Title: Monitor Azure Virtual Desktop cost pricing estimates - Azure
+ Title: Estimate Azure Virtual Desktop monitoring costs - Azure
description: How to estimate costs and pricing for using Azure Monitor for Azure Virtual Desktop.
-# Estimate Azure Monitor costs
+# Estimate Azure Virtual Desktop monitoring costs
-Azure Monitor Logs is a service that collects, indexes, and stores data generated by your environment. Because of this, the Azure Monitor pricing model is based on the amount of data that's brought into and processed (or "ingested") by your Log Analytics workspace in gigabytes per day. The cost of a Log Analytics workspace isn't only based on the volume of data collected, but also which Azure payment plan you've selected and how long you choose to store the data your environment generates.
+Azure Virtual Desktop uses the Azure Monitor Logs service to collect, index, and store data generated by your environment. Because of this, the Azure Monitor pricing model is based on the amount of data that's brought into and processed (or "ingested") by your Log Analytics workspace in gigabytes per day. The cost of a Log Analytics workspace isn't only based on the volume of data collected, but also which Azure payment plan you've selected and how long you choose to store the data your environment generates.
This article will explain the following things to help you understand how pricing in Azure Monitor works:
Learn more about Azure Monitor for Azure Virtual Desktop at these articles:
- [Use Azure Monitor for Azure Virtual Desktop to monitor your deployment](azure-monitor.md). - Use the [glossary](azure-monitor-glossary.md) to learn more about terms and concepts. - If you encounter a problem, check out our [troubleshooting guide](troubleshoot-azure-monitor.md) for help.-- Check out [Monitoring usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) to learn more about managing your monitoring costs.
+- Check out [Monitoring usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) to learn more about managing your monitoring costs.
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
No additional cost to existing VM pricing.
**The following features are not supported**: - Azure Site Recovery - Azure Compute Gallery (formerly known as Shared Image Gallery)-- Ephemeral OS disk
+- [Ephemeral OS disk (Preview)](ephemeral-os-disks.md#trusted-launch-for-ephemeral-os-disks-preview)
- Shared disk - Ultra disk - Managed image
web-application-firewall Waf Front Door Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-custom-rules.md
Previously updated : 03/21/2022 Last updated : 03/22/2022
You can control access with a custom WAF rule that defines a priority number, a
- **Action:** defines how to route a request if a WAF rule is matched. You can choose one of the below actions to apply when a request matches a custom rule.
- - *Allow* - WAF forwards the request to the backend, logs an entry in WAF logs, and exits.
- - *Block* - Request is blocked. WAF sends response to client without forwarding the request to the backend. WAF logs an entry in WAF logs and exits.
- - *Log* - WAF forwards the request to the backend, logs an entry in WAF logs, and continues to evaluate the next rule in the priority order.
+ - *Allow* - WAF allows the request to process, logs an entry in WAF logs, and exits.
+ - *Block* - Request is blocked. WAF sends response to client without forwarding the request further. WAF logs an entry in WAF logs and exits.
+ - *Log* - WAF logs an entry in WAF logs, and continues to evaluate the next rule in the priority order.
- *Redirect* - WAF redirects the request to a specified URI, logs an entry in WAF logs, and exits. - **Match condition:** defines a match variable, an operator, and match value. Each rule may contain multiple match conditions. A match condition may be based on geo location, client IP addresses (CIDR), size, or string match. String match can be against a list of match variables.
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
Last updated 02/04/2022
# Web Application Firewall DRS rule groups and rules
-Azure Front Door web application firewall (WAF) protects web applications from common vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since such rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures.
+Azure Front Door web application firewall (WAF) protects web applications from common vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since such rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures. The Default Rule Set also includes the Microsoft Threat Intelligence Collection rules, which are written in partnership with the Microsoft Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
## Default rule sets
web-application-firewall Waf Front Door Policy Configure Bot Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md
# Configure bot protection for Web Application Firewall (Preview)
-This article shows you how to configure bot protection rule in Azure Web Application Firewall (WAF) for Front Door by using Azure portal. Bot protection rule can also be configured using CLI, Azure PowerShell, or Azure Resource Manager template.
+
+Azure WAF for Front Door provides bot rules to identify good bots and protect against bad bots. This article shows you how to configure the bot protection rule in Azure Web Application Firewall (WAF) for Front Door by using the Azure portal. The bot protection rule can also be configured using the Azure CLI, Azure PowerShell, or an Azure Resource Manager template.
> [!IMPORTANT] > Bot protection rule set is currently in public preview and is provided with a preview service level agreement. Certain features may not be supported or may have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for details.
web-application-firewall Waf Front Door Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-tuning.md
# Tuning Web Application Firewall (WAF) for Azure Front Door
-The Azure-managed Default Rule Set is based on the [OWASP Core Rule Set (CRS)](https://github.com/SpiderLabs/owasp-modsecurity-crs/tree/v3.1/dev) and is designed to be strict out of the box. It is often expected that WAF rules need to be tuned to suit the specific needs of the application or organization using the WAF. This is commonly achieved by defining rule exclusions, creating custom rules, and even disabling rules that may be causing issues or false positives. There are a few things you can do if requests that should pass through your Web Application Firewall (WAF) are blocked.
+The Microsoft-managed Default Rule Set is based on the [OWASP Core Rule Set (CRS)](https://github.com/SpiderLabs/owasp-modsecurity-crs/tree/v3.1/dev) and includes Microsoft Threat Intelligence Collection rules. It is often expected that WAF rules need to be tuned to suit the specific needs of the application or organization using the WAF. This is commonly achieved by defining rule exclusions, creating custom rules, and even disabling rules that may be causing issues or false positives. There are a few things you can do if requests that should pass through your Web Application Firewall (WAF) are blocked.
First, ensure you've read the [Front Door WAF overview](afds-overview.md) and the [WAF Policy for Front Door](waf-front-door-create-portal.md) documents. Also, make sure you've enabled [WAF monitoring and logging](waf-front-door-monitor.md). These articles explain how the WAF functions, how the WAF rule sets work, and how to access WAF logs.