Updates from: 02/19/2022 02:08:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Certificate Based Authentication Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-limitations.md
Previously updated : 02/09/2022 Last updated : 02/18/2022
The following scenarios aren't supported:
- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
- [How to configure Azure AD CBA](how-to-certificate-based-authentication.md)
- [FAQ](certificate-based-authentication-faq.yml)
-- [Troubleshoot AZure AD CBA](troubleshoot-certificate-based-authentication.md)
+- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory Concept Certificate Based Authentication Technical Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-technical-deep-dive.md
Let's cover each step:
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-technical-deep-dive/sign-in-alt.png" alt-text="Screenshot of the sign-in page when FIDO2 is also enabled.":::
-1. After the user clicks the link, the client is redirected to the certauth endpoint, which is [http://certauth.login.microsoftonline.com](http://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](/azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [http://certauth.login.microsoftonline.us](http://certauth.login.microsoftonline.us). For the correct endpoint for other environments, see the specific Microsoft cloud docs.
+1. After the user clicks the link, the client is redirected to the certauth endpoint, which is [https://certauth.login.microsoftonline.com](https://certauth.login.microsoftonline.com) for Azure Global. For [Azure Government](/azure-government/compare-azure-government-global-azure.md#guidance-for-developers), the certauth endpoint is [https://certauth.login.microsoftonline.us](https://certauth.login.microsoftonline.us). For the correct endpoint for other environments, see the specific Microsoft cloud docs.
The endpoint performs mutual authentication and requests the client certificate as part of the TLS handshake. You will see an entry for this request in the Sign-in logs. There is a [known issue](#known-issues) where User ID is displayed instead of Username.
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
Previously updated : 02/09/2022 Last updated : 02/18/2022
Let's walk through a scenario where we will validate strong authentication by cr
1. Because the policy OID rule takes precedence over the issuer rule, the certificate will satisfy multifactor authentication. 
1. The conditional access policy for the user requires MFA and the certificate satisfies multifactor, so the user will be authenticated into the application.
-### Enable Azure AD CBA using Microsoft Graph API
+## Enable Azure AD CBA using Microsoft Graph API
To enable the certificate-based authentication and configure username bindings using Graph API, complete the following steps.
To enable the certificate-based authentication and configure username bindings u
1. Go to [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 
1. Click **Sign into Graph Explorer** and sign in to your tenant.
-1. Follow the steps to [consent to the _Policy.ReadWrite.AuthenticationMethod_ delegated permission](/graph/graph-explorer/graph-explorer-features.md#consent-to-permissions).
+1. Follow the steps to [consent to the _Policy.ReadWrite.AuthenticationMethod_ delegated permission](/graph/graph-explorer/graph-explorer-features#consent-to-permissions).
1. GET all authentication methods: ```http
To enable the certificate-based authentication and configure username bindings u
1. GET the configuration for the x509Certificate authentication method: ```http
- GET https://graph.microsoft.com/beta/policies/authenticationmethodspolicy/authenticationMetHodConfigurations/X509Certificate
+ GET https://graph.microsoft.com/beta/policies/authenticationmethodspolicy/authenticationMethodConfigurations/X509Certificate
``` 1. By default, the x509Certificate authentication method is disabled. To allow users to sign in with a certificate, you must enable the authentication method and configure the authentication and username binding policies through an update operation. To update policy, run a PATCH request.
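A hedged sketch of such a PATCH request follows; it enables the method and configures one username binding. The specific binding shown (PrincipalName mapped to onPremisesUserPrincipalName) and the all_users target are illustrative choices against the beta endpoint, not required values:

```http
PATCH https://graph.microsoft.com/beta/policies/authenticationmethodspolicy/authenticationMethodConfigurations/X509Certificate
Content-Type: application/json

{
    "@odata.type": "#microsoft.graph.x509CertificateAuthenticationMethodConfiguration",
    "id": "X509Certificate",
    "state": "enabled",
    "certificateUserBindings": [
        {
            "x509CertificateField": "PrincipalName",
            "userProperty": "onPremisesUserPrincipalName",
            "priority": 1
        }
    ],
    "includeTargets": [
        {
            "targetType": "group",
            "id": "all_users",
            "isRegistrationRequired": false
        }
    ]
}
```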
To enable the certificate-based authentication and configure username bindings u
- [Technical deep dive for Azure AD CBA](concept-certificate-based-authentication-technical-deep-dive.md)
- [Limitations with Azure AD CBA](concept-certificate-based-authentication-limitations.md)
- [FAQ](certificate-based-authentication-faq.yml)
-- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
+- [Troubleshoot Azure AD CBA](troubleshoot-certificate-based-authentication.md)
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
# Enable passwordless security key sign-in to on-premises resources by using Azure AD
-This document discusses how to enable passwordless authentication to on-premises resources for environments with both *Azure Active Directory (Azure AD)-joined* and *hybrid Azure AD-joined* Windows 10 devices. This passwordless authentication functionality provides seamless single sign-on (SSO) to on-premises resources when you use Microsoft-compatible security keys.
+This document discusses how to enable passwordless authentication to on-premises resources for environments with both *Azure Active Directory (Azure AD)-joined* and *hybrid Azure AD-joined* Windows 10 devices. This passwordless authentication functionality provides seamless single sign-on (SSO) to on-premises resources when you use Microsoft-compatible security keys, or with [Windows Hello for Business Cloud trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cloud-trust.md).
## Use SSO to sign in to on-premises resources by using FIDO2 keys
You must also meet the following system requirements:
- Devices must be running Windows 10 version 2004 or later. 
-- You must be running [Azure AD Connect version 1.4.32.0 or later](../hybrid/how-to-connect-install-roadmap.md#install-azure-ad-connect).
- - For more information about the available Azure AD hybrid authentication options, see the following articles:
- - [Choose the right authentication method for your Azure AD hybrid identity solution](../hybrid/choose-ad-authn.md)
- - [Select which installation type to use for Azure AD Connect](../hybrid/how-to-connect-install-select-installation.md)
- - Your Windows Server domain controllers must have patches installed for the following servers: - [Windows Server 2016](https://support.microsoft.com/help/4534307/windows-10-update-kb4534307) - [Windows Server 2019](https://support.microsoft.com/help/4534321/windows-10-update-kb4534321)
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
This document focuses on enabling security key based passwordless authentication
- [Azure AD Multi-Factor Authentication](howto-mfa-getstarted.md) 
- Enable [Combined security information registration](concept-registration-mfa-sspr-combined.md) 
- Compatible [FIDO2 security keys](concept-authentication-passwordless.md#fido2-security-keys)
-- WebAuthN requires Windows 10 version 1903 or higher**
+- WebAuthN requires Windows 10 version 1903 or higher
To use security keys for logging in to web apps and services, you must have a browser that supports the WebAuthN protocol. These include Microsoft Edge, Chrome, Firefox, and Safari. 

## Prepare devices
-For Azure AD joined devices the best experience is on Windows 10 version 1903 or higher.
+For Azure AD joined devices, the best experience is on Windows 10 version 1903 or higher.
Hybrid Azure AD joined devices must run Windows 10 version 2004 or higher.
active-directory Active Directory Acs Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-acs-migration.md
Last updated 10/03/2018 -+
active-directory Azure Ad Endpoint Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/azure-ad-endpoint-comparison.md
Last updated 07/17/2020 -+
active-directory Azure Ad Federation Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/azure-ad-federation-metadata.md
Last updated 01/07/2017 -+
active-directory V1 Authentication Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-authentication-scenarios.md
Last updated 10/14/2019 -+ #Customer intent: As an application developer, I want to learn about the basic authentication concepts in Azure AD for developers (v1.0), including the app model, API, provisioning, and supported scenarios, so I understand what I need to do when I create apps that integrate Microsoft sign-in.
active-directory V1 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-oauth2-on-behalf-of-flow.md
Last updated 08/5/2020 -+
active-directory V1 Protocols Oauth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-protocols-oauth-code.md
Last updated 12/12/2019 -+
active-directory V1 Protocols Openid Connect Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-protocols-openid-connect-code.md
Last updated 09/05/2019 -+
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Title: Microsoft identity platform access tokens | Azure
description: Learn about access tokens emitted by the Azure AD v1.0 and Microsoft identity platform (v2.0) endpoints. -+
Last updated 12/28/2021-+
active-directory Active Directory Authentication Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-authentication-protocols.md
Last updated 09/27/2021 -+
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md
Title: Microsoft identity platform certificate credentials
description: This article discusses the registration and use of certificate credentials for application authentication. -+
Last updated 02/09/2022--++
To compute the assertion, you can use one of the many JWT libraries in the langu
Claim type | Value | Description
- | - | -
-`aud` | `https://login.microsoftonline.com/{tenantId}/v2.0` | The "aud" (audience) claim identifies the recipients that the JWT is intended for (here Azure AD) See [RFC 7519, Section 4.1.3](https://tools.ietf.org/html/rfc7519#section-4.1.3). In this case, that recipient is the login server (login.microsoftonline.com).
+`aud` | `https://login.microsoftonline.com/{tenantId}/V2.0/token` | The "aud" (audience) claim identifies the recipients that the JWT is intended for (here Azure AD). See [RFC 7519, Section 4.1.3](https://tools.ietf.org/html/rfc7519#section-4.1.3). In this case, that recipient is the login server (login.microsoftonline.com).
`exp` | 1601519414 | The "exp" (expiration time) claim identifies the expiration time on or after which the JWT MUST NOT be accepted for processing. See [RFC 7519, Section 4.1.4](https://tools.ietf.org/html/rfc7519#section-4.1.4). This allows the assertion to be used until then, so keep it short - 5-10 minutes after `nbf` at most. Azure AD does not place restrictions on the `exp` time currently. 
`iss` | {ClientID} | The "iss" (issuer) claim identifies the principal that issued the JWT, in this case your client application. Use the GUID application ID. 
`jti` | (a Guid) | The "jti" (JWT ID) claim provides a unique identifier for the JWT. The identifier value MUST be assigned in a manner that ensures that there is a negligible probability that the same value will be accidentally assigned to a different data object; if the application uses multiple issuers, collisions MUST be prevented among values produced by different issuers as well. The "jti" value is a case-sensitive string. [RFC 7519, Section 4.1.7](https://tools.ietf.org/html/rfc7519#section-4.1.7)
The signature is computed by applying the certificate as described in the [JSON
} . {
- "aud": "https: //login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/token",
+ "aud": "https: //login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/V2.0/token",
"exp": 1484593341, "iss": "97e0a5b7-d745-40b6-94fe-5f77d35c6e05", "jti": "22b3bb26-e046-42df-9c96-65dbd72c1c81",
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
Last updated 06/16/2021 -+ # Customize claims emitted in tokens for a specific app in a tenant
active-directory Active Directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
Last updated 11/22/2021 -+ # Configurable token lifetimes in the Microsoft identity platform (preview)
You can set token lifetime policies for access tokens, SAML tokens, and ID token
### Access tokens
-Clients use access tokens to access a protected resource. An access token can be used only for a specific combination of user, client, and resource. Access tokens cannot be revoked and are valid until their expiry. A malicious actor that has obtained an access token can use it for extent of its lifetime. Adjusting the lifetime of an access token is a trade-off between improving system performance and increasing the amount of time that the client retains access after the userΓÇÖs account is disabled. Improved system performance is achieved by reducing the number of times a client needs to acquire a fresh access token.
+Clients use access tokens to access a protected resource. An access token can be used only for a specific combination of user, client, and resource. Access tokens cannot be revoked and are valid until their expiry. A malicious actor that has obtained an access token can use it for the extent of its lifetime. Adjusting the lifetime of an access token is a trade-off between improving system performance and increasing the amount of time that the client retains access after the user's account is disabled. Improved system performance is achieved by reducing the number of times a client needs to acquire a fresh access token.
The default lifetime of an access token is variable. When issued, an access token's default lifetime is assigned a random value ranging between 60-90 minutes (75 minutes on average). The default lifetime also varies depending on the client application requesting the token or if conditional access is enabled in the tenant. For more information, see [Access token lifetime](access-tokens.md#access-token-lifetime).
The subject confirmation NotOnOrAfter specified in the `<SubjectConfirmationData
### ID tokens
-ID tokens are passed to websites and native clients. ID tokens contain profile information about a user. An ID token is bound to a specific combination of user and client. ID tokens are considered valid until their expiry. Usually, a web application matches a userΓÇÖs session lifetime in the application to the lifetime of the ID token issued for the user. You can adjust the lifetime of an ID token to control how often the web application expires the application session, and how often it requires the user to be re-authenticated with the Microsoft identity platform (either silently or interactively).
+ID tokens are passed to websites and native clients. ID tokens contain profile information about a user. An ID token is bound to a specific combination of user and client. ID tokens are considered valid until their expiry. Usually, a web application matches a user's session lifetime in the application to the lifetime of the ID token issued for the user. You can adjust the lifetime of an ID token to control how often the web application expires the application session, and how often it requires the user to be re-authenticated with the Microsoft identity platform (either silently or interactively).
## Token lifetime policies for refresh tokens and session tokens
You can not set token lifetime policies for refresh tokens and session tokens. F
> [!IMPORTANT] 
> As of January 30, 2021 you can not configure refresh and session token lifetimes. Azure Active Directory no longer honors refresh and session token configuration in existing policies. New tokens issued after existing tokens have expired are now set to the [default configuration](#configurable-token-lifetime-properties). You can still configure access, SAML, and ID token lifetimes after the refresh and session token configuration retirement. 
>
-> Existing tokenΓÇÖs lifetime will not be changed. After they expire, a new token will be issued based on the default value.
+> Existing token's lifetime will not be changed. After they expire, a new token will be issued based on the default value.
> 
> If you need to continue to define the time period before a user is asked to sign in again, configure sign-in frequency in Conditional Access. To learn more about Conditional Access, read [Configure authentication session management with Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
Access, ID, and SAML2 token configuration are affected by the following properti
### Refresh and session token lifetime policy properties
-Refresh and session token configuration are affected by the following properties and their respectively set values. After the retirement of refresh and session token configuration on January 30, 2021, Azure AD will only honor the default values described below. If you decide not to use [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to manage sign-in frequency, your refresh and session tokens will be set to the default configuration on that date and youΓÇÖll no longer be able to change their lifetimes.
+Refresh and session token configuration are affected by the following properties and their respectively set values. After the retirement of refresh and session token configuration on January 30, 2021, Azure AD will only honor the default values described below. If you decide not to use [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to manage sign-in frequency, your refresh and session tokens will be set to the default configuration on that date and you'll no longer be able to change their lifetimes.
|Property |Policy property string |Affects |Default |
|-|--|||
You can create and then assign a token lifetime policy to a specific application
For more information about the relationship between application objects and service principal objects, see [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md).
-A tokenΓÇÖs validity is evaluated at the time the token is used. The policy with the highest priority on the application that is being accessed takes effect.
+A token's validity is evaluated at the time the token is used. The policy with the highest priority on the application that is being accessed takes effect.
All timespans used here are formatted according to the C# [TimeSpan](/dotnet/api/system.timespan) object - D.HH:MM:SS. So 80 days and 30 minutes would be `80.00:30:00`. The leading D can be dropped if zero, so 90 minutes would be `00:90:00`.
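As a worked example, a policy that caps access tokens at two hours (`02:00:00` in the format above) could be created through Microsoft Graph. This is a sketch: the display name is arbitrary, and the policy still has to be assigned to an application or service principal before it takes effect:

```http
POST https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies
Content-Type: application/json

{
    "definition": [
        "{\"TokenLifetimePolicy\":{\"Version\":1,\"AccessTokenLifetime\":\"02:00:00\"}}"
    ],
    "displayName": "TwoHourAccessTokenPolicy",
    "isOrganizationDefault": false
}
```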
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
Last updated 12/3/2021 -+
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-claims-customization.md
You can use the following functions to transform claims.
| Function | Description |
|-|-|
| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
-| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. |
+| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the Join() function has specific behavior when the transformation input has a domain part. It will remove the domain part from the input before joining it with the separator and the selected parameter. For example, if the input of the transformation is 'joe_smith@contoso.com', the separator is '@', and the parameter is 'fabrikam.com', the result is joe_smith@fabrikam.com. |
| **ToLowercase()** | Converts the characters of the selected attribute into lowercase characters. | 
| **ToUppercase()** | Converts the characters of the selected attribute into uppercase characters. | 
| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain "@contoso.com", otherwise you want to output the user principal name. To do this, you would configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
Last updated 07/29/2020 -+ # Using directory schema extension attributes in claims
active-directory Active Directory Signing Key Rollover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-signing-key-rollover.md
Last updated 09/03/2021 -+
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
Title: OAuth 2.0 and OpenID Connect protocols on the Microsoft identity platform
description: A guide to OAuth 2.0 and OpenID Connect protocols that are supported by the Microsoft identity platform. -+
Last updated 07/21/2020--++
active-directory App Sign In Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-sign-in-flow.md
Last updated 05/18/2020 -+ #Customer intent: As an application developer, I want to understand the sign-in flow of web, desktop, and mobile apps in Microsoft identity platform
active-directory Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md
Last updated 09/27/2021 -+ #Customer intent: As an application developer, I want to understand how to register an application so it can integrate with the Microsoft identity platform.
active-directory Authentication Vs Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-vs-authorization.md
Last updated 05/22/2020 -+ #Customer intent: As an application developer, I want to understand the basic concepts of authentication and authorization in the Microsoft identity platform.
active-directory Configure Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-token-lifetimes.md
Last updated 04/08/2021 -+ # Configure token lifetime policies (preview) You can specify the lifetime of an access, SAML, or ID token issued by Microsoft identity platform. You can set token lifetimes for all apps in your organization, for a multi-tenant (multi-organization) application, or for a specific service principal in your organization. For more info, read [configurable token lifetimes](active-directory-configurable-token-lifetimes.md).
active-directory Howto Create Self Signed Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md
Last updated 08/10/2021 -+ #Customer intent: As an application developer, I want to understand the basic concepts of authentication and authorization in the Microsoft identity platform.
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/id-tokens.md
Title: Microsoft identity platform ID tokens | Azure
description: Learn how to use id_tokens emitted by the Azure AD v1.0 and Microsoft identity platform (v2.0) endpoints. -+ Last updated 01/25/2022--++
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
Last updated 10/11/2021 -+
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-breaking-changes.md
Last updated 11/24/2021 -+
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Last updated 01/04/2022 -+ # Claims mapping policy type
active-directory Reference Third Party Cookies Spas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-third-party-cookies-spas.md
Title: How to handle Intelligent Tracking Protection (ITP) in Safari | Azure
description: Single-page app (SPA) authentication when third-party cookies are no longer allowed. -+
Last updated 10/06/2021-+
active-directory Refresh Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/refresh-tokens.md
Last updated 05/25/2021 -+
active-directory Security Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-tokens.md
Last updated 09/27/2021 -+ #Customer intent: As an application developer, I want to understand the basic concepts of security tokens in the Microsoft identity platform.
active-directory Userinfo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/userinfo.md
Title: Microsoft identity platform UserInfo endpoint | Azure
description: Learn about the UserInfo endpoint on the Microsoft identity platform. -+
Last updated 09/21/2020--++
active-directory V2 Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-admin-consent.md
Last updated 12/18/2020 -+
active-directory V2 Oauth Ropc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth-ropc.md
Title: Sign in with resource owner password credentials grant | Azure
description: Support browser-less authentication flows using the resource owner password credential (ROPC) grant. -+
Last updated 07/16/2021--++
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Title: Microsoft identity platform and OAuth 2.0 authorization code flow | Azure
description: Build web applications using the Microsoft identity platform implementation of the OAuth 2.0 authentication protocol. -+
Last updated 02/02/2022--++
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
Title: OAuth 2.0 client credentials flow on the Microsoft identity platform | Azure description: Build web applications by using the Microsoft identity platform implementation of the OAuth 2.0 authentication protocol. -+
Last updated 02/09/2022-+
active-directory V2 Oauth2 Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-device-code.md
Title: OAuth 2.0 device code flow | Azure
description: Sign in users without a browser. Build embedded and browser-less authentication flows using the device authorization grant. -+
Last updated 06/25/2021-+
active-directory V2 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
Title: OAuth 2.0 implicit grant flow - The Microsoft identity platform | Azure description: Secure single-page apps using Microsoft identity platform implicit flow. -+
Last updated 07/19/2021--++
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Title: Microsoft identity platform and OAuth2.0 On-Behalf-Of flow | Azure
description: This article describes how to use HTTP messages to implement service to service authentication using the OAuth2.0 On-Behalf-Of flow. -+
Last updated 08/30/2021--++
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-permissions-and-consent.md
Last updated 01/14/2022 -+
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
Title: Microsoft identity platform and OpenID Connect protocol | Azure
description: Build web applications by using the Microsoft identity platform implementation of the OpenID Connect authentication protocol. -+
Last updated 07/19/2021--++
active-directory V2 Saml Bearer Assertion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-saml-bearer-assertion.md
Last updated 01/11/2022 -+ # Exchange a SAML token issued by AD FS for a Microsoft Graph access token
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
You must be assigned one of the following roles to view or manage device setting
- **Users may join devices to Azure AD**: This setting enables you to select the users who can register their devices as Azure AD joined devices. The default is **All**. 

> [!NOTE]
- > The **Users may join devices to Azure AD** setting is applicable only to Azure AD join on Windows 10 or newer. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enabling-azure-ad-login-in-for-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying) because these methods work in a userless context.
+ > The **Users may join devices to Azure AD** setting is applicable only to Azure AD join on Windows 10 or newer. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enabling-azure-ad-login-for-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying) because these methods work in a userless context.
- **Additional local administrators on Azure AD joined devices**: This setting allows you to select the users who are granted local administrator rights on a device. These users are added to the Device Administrators role in Azure AD. Global Administrators in Azure AD and device owners are granted local administrator rights by default. This option is a premium edition capability available through products like Azure AD Premium and Enterprise Mobility + Security.
This option is a premium edition capability available through products like Azur
- **Require Multi-Factor Authentication to register or join devices with Azure AD**: This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). 

> [!NOTE]
- > The **Require Multi-Factor Authentication to register or join devices with Azure AD** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enabling-azure-ad-login-in-for-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
+ > The **Require Multi-Factor Authentication to register or join devices with Azure AD** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enabling-azure-ad-login-for-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
> [!IMPORTANT] 
> - We recommend that you use the [Register or join devices user](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) action in Conditional Access to enforce multifactor authentication for joining or registering a device.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
For Azure China
- `https://login.chinacloudapi.cn` - For authentication flows. 
- `https://pas.chinacloudapi.cn` - For Azure RBAC flows.
-## Enabling Azure AD login in for Windows VM in Azure
+## Enabling Azure AD login for Windows VM in Azure
-To use Azure AD login in for Windows VM in Azure, you need to first enable Azure AD login option for your Windows VM and then you need to configure Azure role assignments for users who are authorized to login in to the VM.
+To use Azure AD login for Windows VM in Azure, you need to first enable the Azure AD login option for your Windows VM, and then you need to configure Azure role assignments for users who are authorized to log in to the VM.
There are multiple ways you can enable Azure AD login for your Windows VM: 

- Using the Azure portal experience when creating a Windows VM
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Previously updated : 12/06/2021 Last updated : 02/18/2022
This article describes how to set up federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol. When you set up federation with a partner's IdP, new guest users from that domain can use their own IdP-managed organizational account to sign in to your Azure AD tenant and start collaborating with you. There's no need for the guest user to create a separate Azure AD account. > [!IMPORTANT]
+> - In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint. For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Refer to the [SAML 2.0](#required-saml-20-attributes-and-claims) and [WS-Fed](#required-ws-fed-attributes-and-claims) required attributes and claims sections below. Any existing federations configured with the global endpoint will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request.
> - We've removed the limitation that required the authentication URL domain to match the target domain or be from an allowed IdP. For details, see [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records).
->- We now recommend that the partner set the audience of the SAML or WS-Fed based IdP to a tenanted audience. Refer to the [SAML 2.0](#required-saml-20-attributes-and-claims) and [WS-Fed](#required-ws-fed-attributes-and-claims) required attributes and claims sections below.
## When is a guest user authenticated with SAML/WS-Fed IdP federation?
Azure AD B2B can be configured to federate with IdPs that use the SAML protocol
#### Required SAML 2.0 attributes and claims 

The following tables show requirements for specific attributes and claims that must be configured at the third-party IdP. To set up federation, the following attributes must be received in the SAML 2.0 response from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually.
+> [!NOTE]
+> Ensure the value below matches the cloud for which you're setting up external federation.
+ 
Required attributes for the SAML 2.0 response from the IdP: 

|Attribute |Value |
|||
|AssertionConsumerService |`https://login.microsoftonline.com/login.srf` |
-|Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended tenanted audience.) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're setting up federation with.<br><br>`urn:federation:MicrosoftOnline` (This value will be deprecated.) |
+|Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're setting up federation with.<br></br> In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint (for example, `https://login.microsoftonline.com/<tenant ID>/`). For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Any existing federations configured with the global endpoint (for example, `urn:federation:MicrosoftOnline`) will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request sent by Azure AD.|
|Issuer |The issuer URI of the partner IdP, for example `http://www.example.com/exk10l6w90DHM0yi...` |
Azure AD B2B can be configured to federate with IdPs that use the WS-Fed protoco
The following tables show requirements for specific attributes and claims that must be configured at the third-party WS-Fed IdP. To set up federation, the following attributes must be received in the WS-Fed message from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually.
+> [!NOTE]
+> Ensure the value below matches the cloud for which you're setting up external federation.
+ 
Required attributes in the WS-Fed message from the IdP: 

|Attribute |Value |
|||
|PassiveRequestorEndpoint |`https://login.microsoftonline.com/login.srf` |
-|Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended tenanted audience.) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're federating with.<br><br>`urn:federation:MicrosoftOnline` (This value will be deprecated.) |
+|Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're setting up federation with.<br></br> In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint (for example, `https://login.microsoftonline.com/<tenant ID>/`). For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Any existing federations configured with the global endpoint (for example, `urn:federation:MicrosoftOnline`) will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request sent by Azure AD. |
|Issuer |The issuer URI of the partner IdP, for example `http://www.example.com/exk10l6w90DHM0yi...` |

Required claims for the WS-Fed token issued by the IdP:
active-directory Keep Me Signed In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/keep-me-signed-in.md
Last updated 06/05/2020 -+
active-directory Multi Tenant User Management Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-user-management-introduction.md
There are several mechanisms available for creating and managing the lifecycle o
[Multi-tenant common considerations](multi-tenant-common-considerations.md) [Multi-tenant common solutions](multi-tenant-common-solutions.md)
+
+[Multi-tenant synchronization from Active Directory](https://docs.microsoft.com/azure/active-directory/hybrid/plan-connect-topologies#multiple-azure-ad-tenants)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
IT Admins can start using the new "Hybrid Admin" role as the least privileged ro
In May 2020, we have added the following 36 new applications in our App gallery with Federation support:
-[Moula](https://moula.com.au/pay/merchants), [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/virtual-assistant-digital-workplace/), [TackleBox](https://tacklebox.in/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), [Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md)
+[Moula](https://moula.com.au/pay/merchants), [Surveypal](https://www.surveypal.com/app), [Kbot365](https://www.konverso.ai/), [TackleBox](https://tacklebox.in/), [Powell Teams](https://powell-software.com/en/powell-teams-en/), [Talentsoft Assistant](https://msteams.talent-soft.com/), [ASC Recording Insights](https://teams.asc-recording.app/product), [GO1](https://www.go1.com/), [B-Engaged](https://b-engaged.se/), [Competella Contact Center Workgroup](http://www.competella.com/), [Asite](http://www.asite.com/), [ImageSoft Identity](https://identity.imagesoftinc.com/), [My IBISWorld](https://identity.imagesoftinc.com/), [insuite](../saas-apps/insuite-tutorial.md), [Change Process Management](../saas-apps/change-process-management-tutorial.md), [Cyara CX Assurance Platform](../saas-apps/cyara-cx-assurance-platform-tutorial.md), [Smart Global Governance](../saas-apps/smart-global-governance-tutorial.md), [Prezi](../saas-apps/prezi-tutorial.md), [Mapbox](../saas-apps/mapbox-tutorial.md), [Datava Enterprise Service Platform](../saas-apps/datava-enterprise-service-platform-tutorial.md), [Whimsical](../saas-apps/whimsical-tutorial.md), [Trelica](../saas-apps/trelica-tutorial.md), [EasySSO for Confluence](../saas-apps/easysso-for-confluence-tutorial.md), [EasySSO for BitBucket](../saas-apps/easysso-for-bitbucket-tutorial.md), [EasySSO for Bamboo](../saas-apps/easysso-for-bamboo-tutorial.md), [Torii](../saas-apps/torii-tutorial.md), [Axiad Cloud](../saas-apps/axiad-cloud-tutorial.md), [Humanage](../saas-apps/humanage-tutorial.md), [ColorTokens ZTNA](../saas-apps/colortokens-ztna-tutorial.md), [CCH Tagetik](../saas-apps/cch-tagetik-tutorial.md), [ShareVault](../saas-apps/sharevault-tutorial.md), [Vyond](../saas-apps/vyond-tutorial.md), [TextExpander](../saas-apps/textexpander-tutorial.md), [Anyone Home CRM](../saas-apps/anyone-home-crm-tutorial.md), [askSpoke](../saas-apps/askspoke-tutorial.md), [ice Contact Center](../saas-apps/ice-contact-center-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial.
active-directory Entitlement Management Logs And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md
Azure AD stores audit events for up to 30 days in the audit log. However, you ca
## Configure Azure AD to use Azure Monitor
-Before using the Azure Monitor workbooks, you must configure Azure AD to send a copy of its audit logs to Azure Monitor.
+Before you use the Azure Monitor workbooks, you must configure Azure AD to send a copy of its audit logs to Azure Monitor.
Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure subscription. You can read more about the prerequisites and estimated costs of using Azure Monitor in [Azure AD activity logs in Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md).
-**Prerequisite role**: Global Admin
+**Prerequisite role**: Global Administrator
1. Sign in to the Azure portal as a user who is a Global Administrator. Make sure you have access to the resource group containing the Azure Monitor workspace. 
1. Select **Azure Active Directory** then click **Diagnostic settings** under Monitoring in the left navigation menu. Check if there's already a setting to send the audit logs to that workspace.
-1. If there isn't already a setting, click **Add diagnostic setting**. Use the instructions in the article [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md#send-logs-to-azure-monitor)
-to send the Azure AD audit log to the Azure Monitor workspace.
+1. If there isn't already a setting, click **Add diagnostic setting**. Use the instructions in [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md#send-logs-to-azure-monitor) to send the Azure AD audit log to the Azure Monitor workspace.
![Diagnostics settings pane](./media/entitlement-management-logs-and-reporting/audit-log-diagnostics-settings.png) 
- 
1. After the log is sent to Azure Monitor, select **Log Analytics workspaces**, and select the workspace that contains the Azure AD audit logs. 
1. Select **Usage and estimated costs** and click **Data Retention**. Change the slider to the number of days you want to keep the data to meet your auditing requirements.
to send the Azure AD audit log to the Azure Monitor workspace.
1. Expand the section **Azure Active Directory Troubleshooting**, and click on **Archived Log Date Range**. 
- 
## View events for an access package 

To view events for an access package, you must have access to the underlying Azure Monitor workspace (see [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md#manage-access-using-azure-permissions) for information) and be in one of the following roles:
$bResponse = Invoke-AzOperationalInsightsQuery -WorkspaceId $wks[0].CustomerId -
$bResponse.Results |ft ```
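The workspace can also be queried over REST rather than PowerShell. A hedged sketch against the Log Analytics Query API (the workspace ID and bearer token are placeholders, and the KQL shown is an illustrative filter, not the workbook's exact query):

```http
POST https://api.loganalytics.io/v1/workspaces/{workspace ID}/query
Authorization: Bearer {access token}
Content-Type: application/json

{
    "query": "AuditLogs | where Category == 'EntitlementManagement' | take 10"
}
```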
-## Next steps:
+## Next steps
- [Create interactive reports with Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md)
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Title: Configure sign-in auto-acceleration using Home Realm Discovery
description: Learn how to force federated IdP acceleration for an application using Home Realm Discovery policy. -+ Last updated 02/09/2022-+ zone_pivot_groups: home-realm-discovery
active-directory Home Realm Discovery Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md
Title: Home Realm Discovery policy
description: Learn how to manage Home Realm Discovery policy for Azure Active Directory authentication for federated users, including auto-acceleration and domain hints. -+ Last updated 02/09/2021-+ # Home Realm Discovery for an application
active-directory Prevent Domain Hints With Home Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
Title: Prevent sign-in auto-acceleration using Home Realm Discovery policy
description: Learn how to prevent domain_hint auto-acceleration to federated IDPs. -+ Last updated 02/09/2022-+ zone_pivot_groups: home-realm-discovery #customer intent: As an admin, I want to disable auto-acceleration to federated IDP during sign in using Home Realm Discovery policy
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tenant-restrictions.md
Last updated 12/6/2021 -+
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
Previously updated : 01/20/2022 Last updated : 02/18/2022 zone_pivot_groups: identity-mi-methods
zone_pivot_groups: identity-mi-methods
-Managed identities for Azure resources eliminate the need to manage credentials in code. You can use them to get an Azure Active Directory (Azure AD) token your applications can use when you access resources that support Azure AD authentication. Azure manages the identity so you don't have to.
+Managed identities for Azure resources eliminate the need to manage credentials in code. You can use them to get an Azure Active Directory (Azure AD) token for your applications. The applications can use the token when accessing resources that support Azure AD authentication. Azure manages the identity so you don't have to.
-There are two types of managed identities: system-assigned and user-assigned. The main difference between them is that system-assigned managed identities have their lifecycle linked to the resource where they're used. User-assigned managed identities can be used on multiple resources. To learn more about managed identities, see [What are managed identities for Azure resources?](overview.md).
+There are two types of managed identities: system-assigned and user-assigned. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. To learn more about managed identities, see [What are managed identities for Azure resources?](overview.md).
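For example, on an Azure VM, an application can request such a token from the Azure Instance Metadata Service without handling any credentials. A minimal sketch (the resource parameter varies with the target service, and a user-assigned identity would be selected by adding a client_id parameter):

```http
GET http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F
Metadata: true
```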
::: zone pivot="identity-mi-methods-azp"
+ 
In this article, you learn how to create, list, delete, or assign a role to a user-assigned managed identity by using the Azure portal. 

## Prerequisites
In this article, you learn how to create, list, delete, or assign a role to a us
To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
-1. Sign in to the [Azure portal](https://portal.azure.com) by using an account associated with the Azure subscription to create the user-assigned managed identity.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search box, enter **Managed Identities**. Under **Services**, select **Managed Identities**. 
1. Select **Add**, and enter values in the following boxes in the **Create User Assigned Managed Identity** pane: 
   - **Subscription**: Choose the subscription to create the user-assigned managed identity under.
To create a user-assigned managed identity, your account needs the [Managed Iden
## List user-assigned managed identities
-To list or read a user-assigned managed identity, your account needs the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) or [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+To list or read a user-assigned managed identity, your account needs to have either [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) or [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignments.
-1. Sign in to the [Azure portal](https://portal.azure.com) by using an account associated with the Azure subscription to list the user-assigned managed identities.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search box, enter **Managed Identities**. Under **Services**, select **Managed Identities**. 
1. A list of the user-assigned managed identities for your subscription is returned. To see the details of a user-assigned managed identity, select its name. 
1. You can now view the details about the managed identity as shown in the image below.
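Outside the portal, the same inventory can be read from the Azure Resource Manager REST API. A hedged sketch (the subscription ID and bearer token are placeholders):

```http
GET https://management.azure.com/subscriptions/{subscription ID}/providers/Microsoft.ManagedIdentity/userAssignedIdentities?api-version=2018-11-30
Authorization: Bearer {ARM access token}
```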
To delete a user-assigned managed identity, your account needs the [Managed Iden
Deleting a user-assigned identity doesn't remove it from the VM or resource it was assigned to. To remove the user-assigned identity from a VM, see [Remove a user-assigned managed identity from a VM](qs-configure-portal-windows-vm.md#remove-a-user-assigned-managed-identity-from-a-vm).
-1. Sign in to the [Azure portal](https://portal.azure.com) by using an account associated with the Azure subscription to delete a user-assigned managed identity.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the user-assigned managed identity, and select **Delete**. 
1. Under the confirmation box, select **Yes**.
Deleting a user-assigned identity doesn't remove it from the VM or resource it w
To assign a role to a user-assigned managed identity, your account needs the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role assignment.
-1. Sign in to the [Azure portal](https://portal.azure.com) by using an account associated with the Azure subscription to list the user-assigned managed identities.
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the search box, enter **Managed Identities**. Under **Services**, select **Managed Identities**. 
1. A list of the user-assigned managed identities for your subscription is returned. Select the user-assigned managed identity to which you want to assign a role. 
1. Select **Access control (IAM)**, and then select **Add role assignment**.
You can't list and delete a user-assigned managed identity by using a Resource M
## Template creation and editing
-As with the Azure portal and scripting, Resource Manager templates provide the ability to deploy new or modified resources defined by an Azure resource group. Several options are available for template editing and deployment, both local and portal-based. You can:
+Resource Manager templates help you deploy new or modified resources defined by an Azure resource group. Several options are available for template editing and deployment, both local and portal-based. You can:
- Use a [custom template from Azure Marketplace](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template) to create a template from scratch or base it on an existing common or [quickstart template](https://azure.microsoft.com/resources/templates/).
-- Derive from an existing resource group by exporting a template from either [the original deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates) or from the [current state of the deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates).
+- Derive from an existing resource group by exporting a template. You can export it from either [the original deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates) or from the [current state of the deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates).
- Use a local [JSON editor (such as VS Code)](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md), and then upload and deploy by using PowerShell or the Azure CLI (see the sketch after this list).
- Use the Visual Studio [Azure Resource Group project](../../azure-resource-manager/templates/create-visual-studio-deployment-project.md) to create and deploy a template.
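As an illustration of the PowerShell option, deploying a local template file might look like the following sketch; the resource group and template file names are placeholders, not values from this article:

```powershell
# Deploy a local template file to an existing resource group.
# "myResourceGroup" and "azuredeploy.json" are placeholder names.
New-AzResourceGroupDeployment `
    -ResourceGroupName "myResourceGroup" `
    -TemplateFile ".\azuredeploy.json"
```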
To create a user-assigned managed identity, use the following template. Replace
```

## Next steps
-For information on how to assign a user-assigned managed identity to an Azure VM by using a Resource Manager template, see [Configure managed identities for Azure resources on an Azure VM using a template](qs-configure-template-windows-vm.md).
--
+To assign a user-assigned managed identity to an Azure VM using a Resource Manager template, see [Configure managed identities for Azure resources on an Azure VM using a template](qs-configure-template-windows-vm.md).
::: zone-end
active-directory How Managed Identities Work Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md
ms.devlang: Previously updated : 01/11/2022 Last updated : 02/17/2022
The following diagram shows how managed service identities work with Azure virtu
||-|--|
| Creation | Created as part of an Azure resource (for example, an Azure virtual machine or Azure App Service). | Created as a stand-alone Azure resource. |
| Life cycle | Shared life cycle with the Azure resource that the managed identity is created with. <br/> When the parent resource is deleted, the managed identity is deleted as well. | Independent life cycle. <br/> Must be explicitly deleted. |
-| Sharing across Azure resources | Cannot be shared. <br/> It can only be associated with a single Azure resource. | Can be shared. <br/> The same user-assigned managed identity can be associated with more than one Azure resource. |
+| Sharing across Azure resources | Can't be shared. <br/> It can only be associated with a single Azure resource. | Can be shared. <br/> The same user-assigned managed identity can be associated with more than one Azure resource. |
| Common use cases | Workloads that are contained within a single Azure resource. <br/> Workloads for which you need independent identities. <br/> For example, an application that runs on a single virtual machine | Workloads that run on multiple resources and which can share a single identity. <br/> Workloads that need pre-authorization to a secure resource as part of a provisioning flow. <br/> Workloads where resources are recycled frequently, but permissions should stay consistent. <br/> For example, a workload where multiple virtual machines need to access the same resource |

## System-assigned managed identity
The following diagram shows how managed service identities work with Azure virtu
2. Azure Resource Manager creates a service principal in Azure AD for the identity of the VM. The service principal is created in the Azure AD tenant that's trusted by the subscription.
-3. Azure Resource Manager configures the identity on the VM by updating the Azure Instance Metadata Service identity endpoint with the service principal client ID and certificate.
+3. Azure Resource Manager updates the VM identity using the Azure Instance Metadata Service identity endpoint, providing the endpoint with the service principal client ID and certificate.
4. After the VM has an identity, use the service principal information to grant the VM access to Azure resources. To call Azure Resource Manager, use Azure role-based access control (Azure RBAC) to assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the specific secret or key in Key Vault.
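To make step 4 concrete, a role assignment for the VM's service principal might look like the following Azure PowerShell sketch; the VM, resource group, and role names here are illustrative placeholders:

```powershell
# Look up the VM and its system-assigned identity ("myVM" is a placeholder).
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"

# Grant the VM's service principal Reader access at resource group scope.
New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId `
    -RoleDefinitionName "Reader" `
    -ResourceGroupName "myResourceGroup"
```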
active-directory How To Use Vm Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-token.md
na Previously updated : 01/11/2022 Last updated : 02/18/2022
Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
-This article provides various code and script examples for token acquisition, as well as guidance on important topics such as handling token expiration and HTTP errors.
+This article provides various code and script examples for token acquisition. It also contains guidance about handling token expiration and HTTP errors.
## Prerequisites
If you plan to use the Azure PowerShell examples in this article, be sure to ins
## Overview
-A client application can request managed identities for Azure resources [app-only access token](../develop/developer-glossary.md#access-token) for accessing a given resource. The token is [based on the managed identities for Azure resources service principal](overview.md#managed-identity-types). As such, there is no need for the client to register itself to obtain an access token under its own service principal. The token is suitable for use as a bearer token in
+A client application can request a managed identity [app-only access token](../develop/developer-glossary.md#access-token) to access a given resource. The token is [based on the managed identities for Azure resources service principal](overview.md#managed-identity-types). As such, there's no need for the client to obtain an access token under its own service principal. The token is suitable for use as a bearer token in
[service-to-service calls requiring client credentials](../develop/v2-oauth2-client-creds-grant-flow.md).

| Link | Description |
A client application can request managed identities for Azure resources [app-onl
## Get a token using HTTP
-The fundamental interface for acquiring an access token is based on REST, making it accessible to any client application running on the VM that can make HTTP REST calls. This is similar to the Azure AD programming model, except the client uses an endpoint on the virtual machine (vs an Azure AD endpoint).
+The fundamental interface for acquiring an access token is based on REST, making it accessible to any client application running on the VM that can make HTTP REST calls. This approach is similar to the Azure AD programming model, except the client uses an endpoint on the virtual machine (vs an Azure AD endpoint).
Sample request using the Azure Instance Metadata Service (IMDS) endpoint *(recommended)*:
GET 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-0
| `http://169.254.169.254/metadata/identity/oauth2/token` | The managed identities for Azure resources endpoint for the Instance Metadata Service. |
| `api-version` | A query string parameter, indicating the API version for the IMDS endpoint. Use API version `2018-02-01` or greater. |
| `resource` | A query string parameter, indicating the App ID URI of the target resource. It also appears in the `aud` (audience) claim of the issued token. This example requests a token to access Azure Resource Manager, which has an App ID URI of `https://management.azure.com/`. |
-| `Metadata` | An HTTP request header field, required by managed identities for Azure resources as a mitigation against Server Side Request Forgery (SSRF) attack. This value must be set to "true", in all lower case. |
+| `Metadata` | An HTTP request header field required by managed identities. This information is used as a mitigation against server side request forgery (SSRF) attacks. This value must be set to "true", in all lower case. |
| `object_id` | (Optional) A query string parameter, indicating the object_id of the managed identity you would like the token for. Required, if your VM has multiple user-assigned managed identities.|
| `client_id` | (Optional) A query string parameter, indicating the client_id of the managed identity you would like the token for. Required, if your VM has multiple user-assigned managed identities.|
| `mi_res_id` | (Optional) A query string parameter, indicating the mi_res_id (Azure Resource ID) of the managed identity you would like the token for. Required, if your VM has multiple user-assigned managed identities. |
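For example, on a VM with more than one user-assigned managed identity, a request that selects one identity by `client_id` might look like the following PowerShell sketch; the client ID is a placeholder:

```powershell
# Request an Azure Resource Manager token for a specific user-assigned
# managed identity. The client ID below is a placeholder.
$clientId = "00000000-0000-0000-0000-000000000000"
$response = Invoke-RestMethod -Headers @{Metadata = "true"} -Uri `
    "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F&client_id=$clientId"
$response.access_token
```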
Content-Type: application/json
| Element | Description |
| - | -- |
-| `access_token` | The requested access token. When calling a secured REST API, the token is embedded in the `Authorization` request header field as a "bearer" token, allowing the API to authenticate the caller. |
+| `access_token` | The requested access token. When you call a secured REST API, the token is embedded in the `Authorization` request header field as a "bearer" token, allowing the API to authenticate the caller. |
| `refresh_token` | Not used by managed identities for Azure resources. |
| `expires_in` | The number of seconds the access token continues to be valid, before expiring, from time of issuance. Time of issuance can be found in the token's `iat` claim. |
| `expires_on` | The timespan when the access token expires. The date is represented as the number of seconds from "1970-01-01T0:0:0Z UTC" (corresponds to the token's `exp` claim). |
Content-Type: application/json
| `resource` | The resource the access token was requested for, which matches the `resource` query string parameter of the request. |
| `token_type` | The type of token: a "Bearer" access token, which means the resource can give access to the bearer of this token. |
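Putting those fields together, a successful token response has roughly the following shape; every value below is an illustrative placeholder:

```json
{
  "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs...",
  "refresh_token": "",
  "expires_in": "3599",
  "expires_on": "1506484173",
  "resource": "https://management.azure.com/",
  "token_type": "Bearer"
}
```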
+## Get a token using the Azure identity client library
+
+Using the Azure identity client library is the recommended way to use managed identities. All Azure SDKs are integrated with the `Azure.Identity` library, which provides support for `DefaultAzureCredential`. This class makes it easy to use managed identities with Azure SDKs. [Learn more](/dotnet/api/overview/azure/identity-readme)
+
+1. Install the [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) package and other required [Azure SDK library packages](https://aka.ms/azsdk), such as [Azure.Security.KeyVault.Secrets](https://www.nuget.org/packages/Azure.Security.KeyVault.Secrets/).
+2. Use the sample code below. You don't need to acquire tokens manually; you can use the Azure SDK clients directly. The code demonstrates how to get the token, if you need to.
+
+ ```csharp
+    using System;
+    using Azure.Core;
+    using Azure.Identity;
+    using Azure.Security.KeyVault.Secrets;
+
+ string userAssignedClientId = "<your managed identity client Id>";
+ var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions { ManagedIdentityClientId = userAssignedClientId });
+ var accessToken = credential.GetToken(new TokenRequestContext(new[] { "https://vault.azure.net" }));
+    // The token value is available as a string on the returned AccessToken.
+    string accessTokenString = accessToken.Token;
+
+    // You can use the credential object directly with the Key Vault client.
+ var client = new SecretClient(new Uri("https://myvault.vault.azure.net/"), credential);
+ ```
+ ## Get a token using the Microsoft.Azure.Services.AppAuthentication library for .NET
-For .NET applications and functions, the simplest way to work with managed identities for Azure resources is through the Microsoft.Azure.Services.AppAuthentication package. This library will also allow you to test your code locally on your development machine, using your user account from Visual Studio, the [Azure CLI](/cli/azure), or Active Directory Integrated Authentication. For more on local development options with this library, see the [Microsoft.Azure.Services.AppAuthentication reference](/dotnet/api/overview/azure/service-to-service-authentication). This section shows you how to get started with the library in your code.
+For .NET applications and functions, the simplest way to work with managed identities for Azure resources is through the Microsoft.Azure.Services.AppAuthentication package. This library will also allow you to test your code locally on your development machine. You can test your code using your user account from Visual Studio, the [Azure CLI](/cli/azure), or Active Directory Integrated Authentication. For more on local development options with this library, see the [Microsoft.Azure.Services.AppAuthentication reference](/dotnet/api/overview/azure/service-to-service-authentication). This section shows you how to get started with the library in your code.
1. Add references to the [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication) and [Microsoft.Azure.KeyVault](https://www.nuget.org/packages/Microsoft.Azure.KeyVault) NuGet packages to your application.
The following example demonstrates how to use the managed identities for Azure r
Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -Headers @{Metadata="true"}
```
-Example on how to parse the access token from the response:
+Example of how to parse the access token from the response:
```azurepowershell
# Get an access token for managed identities for Azure resources
$response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' `
curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-
```
-Example on how to parse the access token from the response:
+Example of how to parse the access token from the response:
```bash
response=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true -s)
echo The managed identities for Azure resources access token is $access_token
## Token caching
-While the managed identities for Azure resources subsystem does cache tokens, we also recommend to implement token caching in your code. As a result, you should prepare for scenarios where the resource indicates that the token is expired.
+The managed identities subsystem caches tokens, but we still recommend that you implement token caching in your code.
+You should prepare for scenarios where the resource indicates that the token is expired.
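A minimal caching sketch in PowerShell, reusing a token until shortly before its `expires_on` time; the five-minute refresh margin is an arbitrary illustrative choice:

```powershell
$script:cachedToken = $null

function Get-ManagedIdentityToken {
    # Refresh only when no token is cached or it expires within 5 minutes.
    $now = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
    if (-not $script:cachedToken -or (([long]$script:cachedToken.expires_on) - $now -lt 300)) {
        $script:cachedToken = Invoke-RestMethod -Headers @{Metadata = "true"} -Uri `
            'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F'
    }
    return $script:cachedToken.access_token
}
```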
On-the-wire calls to Azure AD result only when:
On-the-wire calls to Azure AD result only when:
## Error handling
-The managed identities for Azure resources endpoint signals errors via the status code field of the HTTP response message header, as either 4xx or 5xx errors:
+The managed identities endpoint signals errors via the status code field of the HTTP response message header, as either 4xx or 5xx errors:
| Status Code | Error Reason | How To Handle |
| -- | -- | -- |
| 404 Not found. | IMDS endpoint is updating. | Retry with Exponential Backoff. See guidance below. |
| 429 Too many requests. | IMDS Throttle limit reached. | Retry with Exponential Backoff. See guidance below. |
-| 4xx Error in request. | One or more of the request parameters was incorrect. | Do not retry. Examine the error details for more information. 4xx errors are design-time errors.|
-| 5xx Transient error from service. | The managed identities for Azure resources subsystem or Azure Active Directory returned a transient error. | It is safe to retry after waiting for at least 1 second. If you retry too quickly or too often, IMDS and/or Azure AD may return a rate limit error (429).|
+| 4xx Error in request. | One or more of the request parameters was incorrect. | Don't retry. Examine the error details for more information. 4xx errors are design-time errors.|
+| 5xx Transient error from service. | The managed identities for Azure resources subsystem or Azure Active Directory returned a transient error. | It's safe to retry after waiting for at least 1 second. If you retry too quickly or too often, IMDS and/or Azure AD may return a rate limit error (429).|
| timeout | IMDS endpoint is updating. | Retry with Exponential Backoff. See guidance below. |

If an error occurs, the corresponding HTTP response body contains JSON with the error details:
This section documents the possible error responses. A "200 OK" status is a succ
| Status code | Error | Error Description | Solution |
| -- | -- | -- | -- |
-| 400 Bad Request | invalid_resource | AADSTS50001: The application named *\<URI\>* was not found in the tenant named *\<TENANT-ID\>*. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You might have sent your authentication request to the wrong tenant.\ | (Linux only) |
+| 400 Bad Request | invalid_resource | AADSTS50001: The application named *\<URI\>* wasn't found in the tenant named *\<TENANT-ID\>*. This message shows if the tenant administrator hasn't installed the application or no tenant user consented to it. You might have sent your authentication request to the wrong tenant.\ | (Linux only) |
| 400 Bad Request | bad_request_102 | Required metadata header not specified | Either the `Metadata` request header field is missing from your request, or is formatted incorrectly. The value must be specified as `true`, in all lower case. See the "Sample request" in the preceding REST section for an example.|
| 401 Unauthorized | unknown_source | Unknown Source *\<URI\>* | Verify that your HTTP GET request URI is formatted correctly. The `scheme:host/resource-path` portion must be specified as `http://localhost:50342/oauth2/token`. See the "Sample request" in the preceding REST section for an example.|
| | invalid_request | The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed. | |
-| | unauthorized_client | The client is not authorized to request an access token using this method. | Caused by a request on a VM that doesn't have managed identities for Azure resources configured correctly. See [Configure managed identities for Azure resources on a VM using the Azure portal](qs-configure-portal-windows-vm.md) if you need assistance with VM configuration. |
+| | unauthorized_client | The client isn't authorized to request an access token using this method. | Caused by a request on a VM that doesn't have managed identities for Azure resources configured correctly. See [Configure managed identities for Azure resources on a VM using the Azure portal](qs-configure-portal-windows-vm.md) if you need assistance with VM configuration. |
| | access_denied | The resource owner or authorization server denied the request. | |
-| | unsupported_response_type | The authorization server does not support obtaining an access token using this method. | |
+| | unsupported_response_type | The authorization server doesn't support obtaining an access token using this method. | |
| | invalid_scope | The requested scope is invalid, unknown, or malformed. | |
-| 500 Internal server error | unknown | Failed to retrieve token from the Active directory. For details see logs in *\<file path\>* | Verify that managed identities for Azure resources is enabled on the VM. See [Configure managed identities for Azure resources on a VM using the Azure portal](qs-configure-portal-windows-vm.md) if you need assistance with VM configuration.<br><br>Also verify that your HTTP GET request URI is formatted correctly, particularly the resource URI specified in the query string. See the "Sample request" in the preceding REST section for an example, or [Azure services that support Azure AD authentication](./services-support-managed-identities.md) for a list of services and their respective resource IDs.
+| 500 Internal server error | unknown | Failed to retrieve token from the Active directory. For details see logs in *\<file path\>* | Verify that the VM has managed identities for Azure resources enabled. See [Configure managed identities for Azure resources on a VM using the Azure portal](qs-configure-portal-windows-vm.md) if you need assistance with VM configuration.<br><br>Also verify that your HTTP GET request URI is formatted correctly, particularly the resource URI specified in the query string. See the "Sample request" in the preceding REST section for an example, or [Azure services that support Azure AD authentication](./services-support-managed-identities.md) for a list of services and their respective resource IDs.
> [!IMPORTANT]
> - IMDS is not intended to be used behind a proxy and doing so is unsupported. For examples of how to bypass proxies, refer to the [Azure Instance Metadata Samples](https://github.com/microsoft/azureimds).

## Retry guidance
-It is recommended to retry if you receive a 404, 429, or 5xx error code (see [Error handling](#error-handling) above).
+It's recommended to retry if you receive a 404, 429, or 5xx error code (see [Error handling](#error-handling) above).
Throttling limits apply to the number of calls made to the IMDS endpoint. When the throttling threshold is exceeded, the IMDS endpoint limits any further requests while the throttle is in effect. During this period, the IMDS endpoint will return the HTTP status code 429 ("Too many requests"), and the requests fail.
For retry, we recommend the following strategy:
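One way to express an exponential backoff strategy is the following PowerShell sketch; the retry count and base delay are illustrative choices, and a fuller implementation would retry only on 404, 429, and 5xx responses:

```powershell
$maxRetries = 5
for ($attempt = 0; $attempt -lt $maxRetries; $attempt++) {
    try {
        $response = Invoke-RestMethod -Headers @{Metadata = "true"} -Uri `
            'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F'
        break   # Success: stop retrying.
    }
    catch {
        # Back off exponentially: 1, 2, 4, 8, ... seconds.
        Start-Sleep -Seconds ([math]::Pow(2, $attempt))
    }
}
```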
## Resource IDs for Azure services
-See [Azure services that support Azure AD authentication](./services-support-managed-identities.md) for a list of resources that support Azure AD and have been tested with managed identities for Azure resources, and their respective resource IDs.
+See [Azure Services with managed identities support](managed-identities-status.md) for a list of resources that support managed identities for Azure resources.
## Next steps
active-directory Tutorial Linux Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-sas.md
Title: 'Tutorial: Access Azure Storage using a SAS credential - Linux - Azure AD'
-description: A tutorial that shows you how to use a Linux VM system-assigned managed identity to access Azure Storage, using a SAS credential instead of a storage account access key.
+description: Tutorial showing how to use a Linux VM system-assigned managed identity to access Azure Storage using a SAS credential instead of a storage account access key.
documentationcenter: ''
na Previously updated : 01/11/2022 Last updated : 02/17/2022
This tutorial shows you how to use a system-assigned managed identity for a Linu
> [!NOTE]
> The SAS key generated in this tutorial will not be restricted/bound to the VM.
-A Service SAS provides the ability to grant limited access to objects in a storage account, for a limited time and a specific service (in our case, the blob service), without exposing an account access key. You can use a SAS credential as usual when doing storage operations, for example when using the Storage SDK. For this tutorial, we demonstrate uploading and downloading a blob using Azure Storage CLI. You will learn how to:
+A Service SAS grants limited access to objects in a storage account without exposing an account access key. Access can be granted for a limited time and for a specific service (in our case, the blob service). You can use a SAS credential as usual when doing storage operations, for example when using the Storage SDK. For this tutorial, we demonstrate uploading and downloading a blob using Azure Storage CLI. You'll learn how to:
> [!div class="checklist"]
A Service SAS provides the ability to grant limited access to objects in a stora
## Create a storage account
-If you don't already have one, you will now create a storage account. You can also skip this step and grant your VM system-assigned managed identity access to the keys of an existing storage account.
+If you don't already have one, you'll now create a storage account. You can also skip this step and grant your VM system-assigned managed identity access to the keys of an existing storage account.
-1. Click the **+/Create new service** button found on the upper left-hand corner of the Azure portal.
-2. Click **Storage**, then **Storage Account**, and a new "Create storage account" panel will display.
-3. Enter a **Name** for the storage account, which you will use later.
+1. Select the **+/Create new service** button found on the upper left-hand corner of the Azure portal.
+2. Select **Storage**, then **Storage Account**, and a new "Create storage account" panel will display.
+3. Enter a **Name** for the storage account, which you'll use later.
4. **Deployment model** and **Account kind** should be set to "Resource Manager" and "General purpose", respectively.
5. Ensure the **Subscription** and **Resource Group** match the ones you specified when you created your VM in the previous step.
-6. Click **Create**.
+6. Select **Create**.
![Create new storage account](./media/msi-tutorial-linux-vm-access-storage/msi-storage-create.png)

## Create a blob container in the storage account
-Later we will upload and download a file to the new storage account. Because files require blob storage, we need to create a blob container in which to store the file.
+Later we'll upload and download a file to the new storage account. Because files require blob storage, we need to create a blob container in which to store the file.
1. Navigate back to your newly created storage account.
-2. Click the **Containers** link in the left panel, under "Blob service."
-3. Click **+ Container** on the top of the page, and a "New container" panel slides out.
-4. Give the container a name, select an access level, then click **OK**. The name you specified will be used later in the tutorial.
+2. Select the **Containers** link in the left panel, under "Blob service."
+3. Select **+ Container** on the top of the page, and a "New container" panel slides out.
+4. Give the container a name, select an access level, then select **OK**. The name you specified will be used later in the tutorial.
![Create storage container](./media/msi-tutorial-linux-vm-access-storage/create-blob-container.png)

## Grant your VM's system-assigned managed identity access to use a storage SAS
-Azure Storage natively supports Azure AD authentication, so you can use your VM's system-assigned managed identity to retrieve a storage SAS from Resource Manager, then use the SAS to access storage. In this step, you grant your VM's system-assigned managed identity access to your storage account SAS. Grant access by assigning the [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role to the managed-identity at the scope of the resource group that contains your storage account.
+Azure Storage natively supports Azure AD authentication, so you can use your VM's system-assigned managed identity to retrieve a storage SAS from Resource Manager, then use the SAS to access storage. In this step, you grant your VM's system-assigned managed identity access to your storage account SAS. Assign the [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role to the managed-identity at the scope of the resource group that contains your storage account.
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-b
## Get an access token using the VM's identity and use it to call Azure Resource Manager
-For the remainder of the tutorial, we will work from the VM we created earlier.
+For the remainder of the tutorial, we'll work from the VM we created earlier.
-To complete these steps, you will need an SSH client. If you are using Windows, you can use the SSH client in the [Windows Subsystem for Linux](/windows/wsl/install-win10). If you need assistance configuring your SSH client's keys, see [How to Use SSH keys with Windows on Azure](../../virtual-machines/linux/ssh-from-windows.md), or [How to create and use an SSH public and private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md).
+You need an SSH client to complete these steps. If you're using Windows, you can use the SSH client in the [Windows Subsystem for Linux](/windows/wsl/install-win10). If you need assistance configuring your SSH client's keys, see:
-1. In the Azure portal, navigate to **Virtual Machines**, go to your Linux virtual machine, then from the **Overview** page click **Connect** at the top. Copy the string to connect to your VM.
+ - [How to Use SSH keys with Windows on Azure](../../virtual-machines/linux/ssh-from-windows.md)
+ - [How to create and use an SSH public and private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md)
+
+Now that you have your SSH client, continue with the steps below:
+
+1. In the Azure portal, navigate to **Virtual Machines**, go to your Linux virtual machine, then from the **Overview** page select **Connect** at the top. Copy the string to connect to your VM.
2. Connect to your VM using your SSH client.
-3. Next, you will be prompted to enter in your **Password** you added when creating the **Linux VM**. You should then be successfully signed in.
+3. Next, you'll be prompted to enter the **Password** you added when creating the **Linux VM**. You should then be successfully signed in.
4. Use CURL to get an access token for Azure Resource Manager. The CURL request and response for the access token are below:
To complete these steps, you will need an SSH client. If you are using Windows,
Now use CURL to call Resource Manager using the access token we retrieved in the previous section, to create a storage SAS credential. Once we have the SAS credential, we can call storage upload/download operations.
-For this request we'll use the follow HTTP request parameters to create the SAS credential:
+For this request, we'll use the following HTTP request parameters to create the SAS credential:
```JSON
{
The CURL response returns the SAS credential:
{"serviceSasToken":"sv=2015-04-05&sr=c&spr=https&st=2017-09-22T00%3A10%3A00Z&se=2017-09-22T02%3A00%3A00Z&sp=rcw&sig=QcVwljccgWcNMbe9roAJbD8J5oEkYoq%2F0cUPlgriBn0%3D"} ```
-Create a sample blob file to upload to your blob storage container. On a Linux VM, you can do this with the following command.
+On a Linux VM, create a sample blob file to upload to your blob storage container using the following command:
```bash
echo "This is a test file." > test.txt
```
-Next, authenticate with the CLI `az storage` command using the SAS credential, and upload the file to the blob container. For this step, you will need to [install the latest Azure CLI](/cli/azure/install-azure-cli) on your VM, if you haven't already.
+Next, authenticate with the CLI `az storage` command using the SAS credential, and upload the file to the blob container. For this step, you'll need to [install the latest Azure CLI](/cli/azure/install-azure-cli) on your VM, if you haven't already.
```azurecli
az storage blob upload --container-name
active-directory Tutorial Vm Windows Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md
na Previously updated : 01/11/2022 Last updated : 02/18/2022
This tutorial shows you how to use a system-assigned managed identity for a Wind
In this section, you create a storage account.
-1. Click the **+ Create a resource** button found on the upper left-hand corner of the Azure portal.
-2. Click **Storage**, then **Storage account - blob, file, table, queue**.
+1. Select the **+ Create a resource** button found on the upper left-hand corner of the Azure portal.
+2. Select **Storage**, then **Storage account - blob, file, table, queue**.
3. Under **Name**, enter a name for the storage account.
4. **Deployment model** and **Account kind** should be set to **Resource manager** and **Storage (general purpose v1)**.
5. Ensure the **Subscription** and **Resource Group** match the ones you specified when you created your VM in the previous step.
-6. Click **Create**.
+6. Select **Create**.
![Create new storage account](./media/msi-tutorial-linux-vm-access-storage/msi-storage-create.png)
In this section, you create a storage account.
Files require blob storage so you need to create a blob container in which to store the file. You then upload a file to the blob container in the new storage account.

1. Navigate back to your newly created storage account.
-2. Under **Blob Service**, click **Containers**.
-3. Click **+ Container** on the top of the page.
-4. Under **New container**, enter a name for the container and under **Public access level** keep the default value .
+2. Under **Blob Service**, select **Containers**.
+3. Select **+ Container** on the top of the page.
+4. Under **New container**, enter a name for the container and under **Public access level** keep the default value.
![Create storage container](./media/msi-tutorial-linux-vm-access-storage/create-blob-container.png)

5. Using an editor of your choice, create a file titled *hello_world.txt* on your local machine. Open the file and add the text (without the quotes) "Hello world! :)" and then save it.
6. Upload the file to the newly created container by selecting the container name, then **Upload**.
-7. In the **Upload blob** pane, under **Files**, click the folder icon and browse to the file **hello_world.txt** on your local machine, select the file, then click **Upload**.
+7. In the **Upload blob** pane, under **Files**, select the folder icon and browse to the file **hello_world.txt** on your local machine, select the file, then select **Upload**.
![Upload text file](./media/msi-tutorial-linux-vm-access-storage/upload-text-file.png)

### Grant access
Files require blob storage so you need to create a blob container in which to st
This section shows how to grant your VM access to an Azure Storage container. You can use the VM's system-assigned managed identity to retrieve the data in the Azure storage blob.

1. Navigate back to your newly created storage account.
-1. Click **Access control (IAM)**.
-1. Click **Add** > **Add role assignment** to open the Add role assignment page.
+1. Select **Access control (IAM)**.
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).

   | Setting | Value |
This section shows how to grant your VM access to an Azure Storage container. Yo
## Access data
-Azure Storage natively supports Azure AD authentication, so it can directly accept access tokens obtained using a managed identity. This is part of Azure Storage's integration with Azure AD, and is different from supplying credentials on the connection string.
+Azure Storage natively supports Azure AD authentication, so it can directly accept access tokens obtained using a managed identity. This approach uses Azure Storage's integration with Azure AD, and is different from supplying credentials on the connection string.
-Here's a .NET code example of opening a connection to Azure Storage using an access token and then reading the contents of the file you created earlier. This code must run on the VM to be able to access the VM's managed identity endpoint. .NET Framework 4.6 or higher is required to use the access token method. Replace the value of `<URI to blob file>` accordingly. You can obtain this value by navigating to file you created and uploaded to blob storage and copying the **URL** under **Properties** the **Overview** page.
+Here's a .NET code example of opening a connection to Azure Storage. The example uses an access token and then reads the contents of the file you created earlier. This code must run on the VM to be able to access the VM's managed identity endpoint. .NET Framework 4.6 or higher is required to use the access token method. Replace the value of `<URI to blob file>` accordingly. You can obtain this value by navigating to the file you created and uploaded to blob storage and copying the **URL** under **Properties** on the **Overview** page.
```csharp
using System;
The response contains the contents of the file:
## Next steps
-In this tutorial, you learned how enable a Windows VM's system-assigned identity to access Azure Storage. To learn more about Azure Storage see:
+In this tutorial, you learned how to enable a Windows VM's system-assigned identity to access Azure Storage. To learn more about Azure Storage, see:
> [!div class="nextstepaction"]
> [Azure Storage](../../storage/common/storage-introduction.md)
active-directory Tutorial Windows Vm Access Datalake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-datalake.md
na Previously updated : 01/11/2022 Last updated : 02/18/2022
[!INCLUDE [preview-notice](../../../includes/active-directory-msi-preview-notice.md)]
-This tutorial shows you how to use a system-assigned managed identity for a Windows virtual machine (VM) to access an Azure Data Lake Store. Managed Service Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without needing to insert credentials into your code. You learn how to:
+This tutorial shows you how to use a system-assigned managed identity for a Windows virtual machine (VM) to access an Azure Data Lake Store. Managed identities are automatically managed by Azure. They enable your application to authenticate to services that support Azure AD authentication, without needing to insert credentials into your code.
+
+In this article, you learn how to:
> [!div class="checklist"]
> * Grant your VM access to an Azure Data Lake Store
This tutorial shows you how to use a system-assigned managed identity for a Wind
Now you can grant your VM access to files and folders in an Azure Data Lake Store. For this step, you can use an existing Data Lake Store or create a new one. To create a new Data Lake Store using the Azure portal, follow this [Azure Data Lake Store quickstart](../../data-lake-store/data-lake-store-get-started-portal.md). There are also quickstarts that use the Azure CLI and Azure PowerShell in the [Azure Data Lake Store documentation](../../data-lake-store/data-lake-store-overview.md).
-In your Data Lake Store, create a new folder and grant your VM's system-assigned identity permission to read, write, and execute files in that folder:
+In your Data Lake Store, create a new folder and grant your VM's system-assigned identity permission. The identity needs rights to read, write, and execute files in that folder:
-1. In the Azure portal, click **Data Lake Store** in the left-hand navigation.
-2. Click the Data Lake Store you want to use for this tutorial.
-3. Click **Data Explorer** in the command bar.
-4. The root folder of the Data Lake Store is selected. Click **Access** in the command bar.
-5. Click **Add**. In the **Select** field, enter the name of your VM, for example **DevTestVM**. Click to select your VM from the search results, then click **Select**.
-6. Click **Select Permissions**. Select **Read** and **Execute**, add to **This folder**, and add as **An access permission only**. Click **Ok**. The permission should be added successfully.
+1. In the Azure portal, select **Data Lake Store** in the left-hand navigation.
+2. Select the Data Lake Store you want to use for this tutorial.
+3. Select **Data Explorer** in the command bar.
+4. The root folder of the Data Lake Store is selected. Select **Access** in the command bar.
+5. Select **Add**. In the **Select** field, enter the name of your VM, for example **DevTestVM**. Select your VM from the search results, then choose **Select**.
+6. Select **Select Permissions**. Select **Read** and **Execute**, add to **This folder**, and add as **An access permission only**. Select **Ok**. The permission should be added successfully.
7. Close the **Access** blade.
-8. For this tutorial, create a new folder. Click **New Folder** in the command bar, and give the new folder a name, for example **TestFolder**. Click **Ok**.
-9. Click on the folder you created, then click **Access** in the command bar.
-10. Similar to step 5, click **Add**, in the **Select** field enter the name of your VM, select it and click **Select**.
-11. Similar to step 6, click **Select Permissions**, select **Read**, **Write**, and **Execute**, add to **This folder**, and add as **An access permission entry and a default permission entry**. Click **Ok**. The permission should be added successfully.
+8. For this tutorial, create a new folder. Select **New Folder** in the command bar, and give the new folder a name, for example **TestFolder**. Select **Ok**.
+9. Select the folder you created, then select **Access** in the command bar.
+10. Similar to step 5, select **Add**. In the **Select** field, enter the name of your VM, select it, and then choose **Select**.
+11. Similar to step 6, select **Select Permissions**. Select **Read**, **Write**, and **Execute**; add to **This folder**; and add as **An access permission entry and a default permission entry**. Select **Ok**. The permission should be added successfully.
Your VM's system-assigned managed identity can now perform all operations on files in the folder you created. For more information on managing access to Data Lake Store, read this article on [Access Control in Data Lake Store](../../data-lake-store/data-lake-store-access-control.md).

## Access data
-Azure Data Lake Store natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. To authenticate to the Data Lake Store filesystem you send an access token issued by Azure AD to your Data Lake Store filesystem endpoint, in an Authorization header in the format "Bearer <ACCESS_TOKEN_VALUE>". To learn more about Data Lake Store support for Azure AD authentication, read [Authentication with Data Lake Store using Azure Active Directory](../../data-lake-store/data-lakes-store-authentication-using-azure-active-directory.md)
+Azure Data Lake Store natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. To authenticate to the Data Lake Store filesystem, you send an access token issued by Azure AD to your Data Lake Store filesystem endpoint in an Authorization header. The header has the format "Bearer <ACCESS_TOKEN_VALUE>". To learn more about Data Lake Store support for Azure AD authentication, read [Authentication with Data Lake Store using Azure Active Directory](../../data-lake-store/data-lakes-store-authentication-using-azure-active-directory.md).
> [!NOTE]
> The Data Lake Store filesystem client SDKs do not yet support managed identities for Azure resources. This tutorial will be updated when support is added to the SDK.

In this tutorial, you authenticate to the Data Lake Store filesystem REST API using PowerShell to make REST requests. To use the VM's system-assigned managed identity for authentication, you need to make the requests from the VM.
-1. In the portal, navigate to **Virtual Machines**, go to your Windows VM, and in the **Overview** click **Connect**.
+1. In the portal, navigate to **Virtual Machines**, go to your Windows VM, and in the **Overview** select **Connect**.
2. Enter the **Username** and **Password** you added when you created the Windows VM.
-3. Now that you have created a **Remote Desktop Connection** with the virtual machine, open **PowerShell** in the remote session.
+3. Now that you've created a **Remote Desktop Connection** with the virtual machine, open **PowerShell** in the remote session.
4. Using PowerShell's `Invoke-WebRequest`, make a request to the local managed identities for Azure resources endpoint to get an access token for Azure Data Lake Store. The resource identifier for Data Lake Store is `https://datalake.azure.net/`. Data Lake does an exact match on the resource identifier and the trailing slash is important.

```powershell
In this tutorial, you authenticate to the Data Lake Store filesystem REST API us
$AccessToken = $content.access_token
```
-5. Using PowerShell's `Invoke-WebRequest', make a request to your Data Lake Store's REST endpoint to list the folders in the root folder. This is a simple way to check everything is configured correctly. It is important the string "Bearer" in the Authorization header has a capital "B". You can find the name of your Data Lake Store in the **Overview** section of the Data Lake Store blade in the Azure portal.
+5. Check that everything is configured correctly. Using PowerShell's `Invoke-WebRequest`, make a request to your Data Lake Store's REST endpoint to list the folders in the root folder. It's important the string "Bearer" in the Authorization header has a capital "B". You can find the name of your Data Lake Store in the **Overview** section of your Data Lake Store.
```powershell
Invoke-WebRequest -Uri https://<YOUR_ADLS_NAME>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS -Headers @{Authorization="Bearer $AccessToken"}
In this tutorial, you authenticate to the Data Lake Store filesystem REST API us
$HdfsRedirectResponse = Invoke-WebRequest -Uri https://<YOUR_ADLS_NAME>.azuredatalakestore.net/webhdfs/v1/TestFolder/Test1.txt?op=CREATE -Method PUT -Headers @{Authorization="Bearer $AccessToken"} -Infile Test1.txt -MaximumRedirection 0
```
- If you inspect the value of `$HdfsRedirectResponse` it should look like the following response:
+ If you inspect the value of `$HdfsRedirectResponse`, it should look like the following response:
```powershell
PS C:\> $HdfsRedirectResponse
Using other Data Lake Store filesystem APIs you can append to files, download fi
## Next steps
-In this tutorial, you learned how to use a system-assigned managed identity for a Windows virtual machine to access an Azure Data Lake Store. To learn more about Azure Data Lake Store see:
+In this tutorial, you learned how to use a system-assigned managed identity for a Windows virtual machine to access an Azure Data Lake Store. To learn more about Azure Data Lake Store, see:
> [!div class="nextstepaction"]
> [Azure Data Lake Store](../../data-lake-store/data-lake-store-overview.md)
active-directory Tutorial Windows Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md
na Previously updated : 01/11/2022 Last updated : 02/18/2022 #Customer intent: As a developer or administrator I want to configure a Windows virtual machine to retrieve a secret from key vault using a managed identity and have a simple way to validate my configuration before using it for development
[!INCLUDE [preview-notice](../../../includes/active-directory-msi-preview-notice.md)]
-This tutorial shows you how a Windows virtual machine (VM) can use a system-assigned managed identity to access [Azure Key Vault](../../key-vault/general/overview.md). Serving as a bootstrap, Key Vault makes it possible for your client application to then use a secret to access resources not secured by Azure Active Directory (AD). Managed Service Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without including authentication information in your code.
+This tutorial shows you how a Windows virtual machine (VM) can use a system-assigned managed identity to access [Azure Key Vault](../../key-vault/general/overview.md). Key Vault makes it possible for your client application to use a secret to access resources not secured by Azure Active Directory (Azure AD). Managed identities are automatically managed by Azure. They enable you to authenticate to services that support Azure AD authentication, without including authentication information in your code.
You learn how to:
You learn how to:
## Create a Key Vault
-This section shows how to grant your VM access to a secret stored in a Key Vault. Using managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication.  However, not all Azure services support Azure AD authentication. To use managed identities for Azure resources with those services, store the service credentials in Azure Key Vault, and use the VM's managed identity to access Key Vault to retrieve the credentials.
+This section shows how to grant your VM access to a secret stored in a Key Vault. When you use managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication. However, not all Azure services support Azure AD authentication. To use managed identities for Azure resources with those services, store the service credentials in Azure Key Vault, and use the VM's managed identity to access Key Vault to retrieve the credentials.
First, we need to create a Key Vault and grant our VM's system-assigned managed identity access to the Key Vault.
First, we need to create a Key Vault and grant our VM's system-assigned manage
![Create a Key vault screen](./media/msi-tutorial-windows-vm-access-nonaad/create-key-vault.png)
-1. Fill out all required information making sure that you choose the subscription and resource group where you created the virtual machine that you are using for this tutorial.
+1. Fill out all required information. Make sure that you choose the subscription and resource group that you're using for this tutorial.
1. Select **Review + create**.
1. Select **Create**.

### Create a secret
-Next, add a secret to the Key Vault, so you can retrieve it later using code running in your VM. In this tutorial, we are using PowerShell but the same concepts apply to any code executing in this virtual machine.
+Next, add a secret to the Key Vault, so you can retrieve it later using code running in your VM. In this tutorial, we're using PowerShell but the same concepts apply to any code executing in this virtual machine.
1. Navigate to your newly created Key Vault.
-1. Select **Secrets**, and click **Add**.
+1. Select **Secrets**, and select **Add**.
1. Select **Generate/Import**
-1. In the **Create a secret** screen from **Upload options** leave **Manual** selected.
+1. In the **Create a secret** screen, from **Upload options** leave **Manual** selected.
1. Enter a name and value for the secret. The value can be anything you want.
1. Leave the activation date and expiration date clear, and leave **Enabled** as **Yes**.
-1. Click **Create** to create the secret.
+1. Select **Create** to create the secret.
![Create a secret](./media/msi-tutorial-windows-vm-access-nonaad/create-secret.png)

## Grant access
-The managed identity used by the virtual machine needs to be granted access to read the secret that we will store in the Key Vault.
+The managed identity used by the virtual machine needs to be granted access to read the secret that we'll store in the Key Vault.
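If you prefer scripting to the portal steps that follow, an equivalent access policy can be set with Azure PowerShell; this sketch uses placeholder resource names:

```powershell
# Grant the VM's system-assigned identity permission to read secrets.
# "myResourceGroup", "myVM", and "myKeyVault" are placeholder names.
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -ObjectId $vm.Identity.PrincipalId `
    -PermissionsToSecrets get,list
```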
1. Navigate to your newly created Key Vault.
1. Select **Access Policy** from the menu on the left side.
The managed identity used by the virtual machine needs to be granted access to r
![key vault create access policy screen](./media/msi-tutorial-windows-vm-access-nonaad/key-vault-access-policy.png)
-1. In the **Add access policy** section under **Configure from template (optional)** choose **Secret Management** from the pull-down menu.
+1. In the **Add access policy** section, under **Configure from template (optional)**, choose **Secret Management** from the pull-down menu.
1. Choose **Select Principal**, and in the search field enter the name of the VM you created earlier. Select the VM in the result list and choose **Select**.
1. Select **Add**.
1. Select **Save**.
This section shows how to get an access token using the VM identity and use it t
First, we use the VM's system-assigned managed identity to get an access token to authenticate to Key Vault:
-1. In the portal, navigate to **Virtual Machines** and go to your Windows virtual machine and in the **Overview**, click **Connect**.
+1. In the portal, navigate to **Virtual Machines** and go to your Windows virtual machine and in the **Overview**, select **Connect**.
2. Enter the **Username** and **Password** you added when you created the **Windows VM**.
-3. Now that you have created a **Remote Desktop Connection** with the virtual machine, open PowerShell in the remote session.  
+3. Now that you've created a **Remote Desktop Connection** with the virtual machine, open PowerShell in the remote session.
4. In PowerShell, invoke a web request against the VM's local managed identity endpoint to get an access token for Key Vault. The PowerShell request:
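A minimal sketch of that request against the IMDS endpoint follows; the article's own snippet may target a different local endpoint or port:

```powershell
# Request an access token for Azure Key Vault from the local IMDS endpoint.
$response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -Headers @{Metadata = "true"} -UseBasicParsing
$KeyVaultToken = ($response.Content | ConvertFrom-Json).access_token
```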
Once you've retrieved the secret from the Key Vault, you can use it to authent
When you want to clean up the resources, visit the [Azure portal](https://portal.azure.com), select **Resource groups**, locate, and select the resource group that was created in the process of this tutorial (such as `mi-test`), and then use the **Delete resource group** command.
-Alternatively you may also do this via [PowerShell or the CLI](../../azure-resource-manager/management/delete-resource-group.md)
+Alternatively, you can also clean up resources via [PowerShell or the CLI](../../azure-resource-manager/management/delete-resource-group.md).
## Next steps
-In this tutorial, you learned how to use a Windows VM system-assigned managed identity to access Azure Key Vault. To learn more about Azure Key Vault see:
+In this tutorial, you learned how to use a Windows VM system-assigned managed identity to access Azure Key Vault. To learn more about Azure Key Vault, see:
> [!div class="nextstepaction"]
> [Azure Key Vault](../../key-vault/general/overview.md)
active-directory Security Emergency Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-emergency-access.md
Previously updated : 11/05/2020 Last updated : 02/18/2022 -+
An organization might need to use an emergency access account in the following s
Create two or more emergency access accounts. These accounts should be cloud-only accounts that use the \*.onmicrosoft.com domain and that are not federated or synchronized from an on-premises environment.
+### How to create an emergency access account
+
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) as an existing Global Administrator.
+
+1. Select **Azure Active Directory** > **Users**.
+
+1. Select **New user**.
+
+1. Select **Create user**.
+
+1. Give the account a **User name**.
+
+1. Give the account a **Name**.
+
+1. Create a long and complex password for the account.
+
+1. Under **Roles**, assign the **Global Administrator** role.
+
+1. Under **Usage location**, select the appropriate location.
+
+ :::image type="content" source="./media/security-emergency-access/create-emergency-access-account-azure-ad.png" alt-text="Creating an emergency access account in Azure AD." lightbox="./media/security-emergency-access/create-emergency-access-account-azure-ad.png":::
+
+1. Select **Create**.
+
+1. [Store account credentials safely](#store-account-credentials-safely).
+
+1. [Monitor sign-in and audit logs](#monitor-sign-in-and-audit-logs).
+
+1. [Validate accounts regularly](#validate-accounts-regularly).
When configuring these accounts, the following requirements must be met:

- The emergency access accounts should not be associated with any individual user in the organization. Make sure that your accounts are not connected with any employee-supplied mobile phones, hardware tokens that travel with individual employees, or other employee-specific credentials. This precaution covers instances where an individual employee is unreachable when the credential is needed. It is important to ensure that any registered devices are kept in a known, secure location that has multiple means of communicating with Azure AD.
active-directory Tracker Software Technologies Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tracker-software-technologies-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Tracker Software Technologies'
+description: Learn how to configure single sign-on between Azure Active Directory and Tracker Software Technologies.
Last updated : 01/27/2022
+# Tutorial: Azure AD SSO integration with Tracker Software Technologies
+
+In this tutorial, you'll learn how to integrate Tracker Software Technologies with Azure Active Directory (Azure AD). When you integrate Tracker Software Technologies with Azure AD, you can:
+
+* Control in Azure AD who has access to Tracker Software Technologies.
+* Enable your users to be automatically signed-in to Tracker Software Technologies with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Tracker Software Technologies single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Tracker Software Technologies supports **IDP** initiated SSO.
+* Tracker Software Technologies supports **Just In Time** user provisioning.
+
+## Add Tracker Software Technologies from the gallery
+
+To configure the integration of Tracker Software Technologies into Azure AD, you need to add Tracker Software Technologies from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Tracker Software Technologies** in the search box.
+1. Select **Tracker Software Technologies** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Tracker Software Technologies
+
+Configure and test Azure AD SSO with Tracker Software Technologies using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Tracker Software Technologies.
+
+To configure and test Azure AD SSO with Tracker Software Technologies, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Tracker Software Technologies SSO](#configure-tracker-software-technologies-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Tracker Software Technologies test user](#create-tracker-software-technologies-test-user)** - to have a counterpart of B.Simon in Tracker Software Technologies that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Tracker Software Technologies** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<Environment>.at-sw.com`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<Environment>.at-sw.com/users/auth/<CustomerName>/callback`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Tracker Software Technologies Client support team](mailto:admin@gtglobaltracker.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Tracker Software Technologies.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Tracker Software Technologies**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Tracker Software Technologies SSO
+
+To configure single sign-on on the **Tracker Software Technologies** side, you need to send the **App Federation Metadata Url** to the [Tracker Software Technologies support team](mailto:admin@gtglobaltracker.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Tracker Software Technologies test user
+
+In this section, a user called Britta Simon is created in Tracker Software Technologies. Tracker Software Technologies supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Tracker Software Technologies, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Tracker Software Technologies for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Tracker Software Technologies tile in My Apps, you should be automatically signed in to the Tracker Software Technologies for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Tracker Software Technologies, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Zendesk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-provisioning-tutorial.md
For information on how to read the Azure AD provisioning logs, see [Reporting on
* When a custom role is assigned to a user or group, the Azure AD automatic user provisioning service also assigns the default role **Agent**. Only Agents can be assigned a custom role. For more information, see the [Zendesk API documentation](https://developer.zendesk.com/rest_api/docs/support/users#json-format-for-agent-or-admin-requests).
-* Import of all roles will fail if any of the custom roles is either "agent" or "end-user". To avoid this, ensure that none of the roles being imported has the above display names.
+* Import of all roles will fail if any of the custom roles has a display name similar to the built-in roles of "agent" or "end-user". To avoid this, ensure that none of the custom roles being imported has these display names.
## Additional resources
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
Last updated 02/11/2021
-# Add Azure Dedicated Host to an Azure Kubernetes Service (AKS) cluster
+# Add Azure Dedicated Host to an Azure Kubernetes Service (AKS) cluster (Preview)
Azure Dedicated Host is a service that provides physical servers - able to host one or more virtual machines - dedicated to one Azure subscription. Dedicated hosts are the same physical servers used in our data centers, provided as a resource. You can provision dedicated hosts within a region, availability zone, and fault domain. Then, you can place VMs directly into your provisioned hosts, in whatever configuration best meets your needs. Using Azure Dedicated Hosts for nodes with your AKS cluster has the following benefits:

* Hardware isolation at the physical server level. No other VMs will be placed on your hosts. Dedicated hosts are deployed in the same data centers and share the same network and underlying storage infrastructure as other, non-isolated hosts.
-* Control over maintenance events initiated by the Azure platform. While the majority of maintenance events have little to no impact on your virtual machines, there are some sensitive workloads where each second of pause can have an impact. With dedicated hosts, you can opt in to a maintenance window to reduce the impact to your service.
+* Control over maintenance events initiated by the Azure platform. While most maintenance events have little to no impact on your virtual machines, there are some sensitive workloads where each second of pause can have an impact. With dedicated hosts, you can opt in to a maintenance window to reduce the impact to your service.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
az provider register --namespace Microsoft.ContainerService
## Limitations

The following limitations apply when you integrate Azure Dedicated Host with Azure Kubernetes Service:
-* An existing agentpool cannot be converted from non-ADH to ADH or ADH to non-ADH.
-* It is not supported to update agentpool from host group A to host group B.
+
+* An existing agent pool can't be converted from non-ADH to ADH or ADH to non-ADH.
+* It is not supported to update agent pool from host group A to host group B.
## Add a Dedicated Host Group to an AKS cluster

A host group is a resource that represents a collection of dedicated hosts. You create a host group in a region and an availability zone, and add hosts to it. When planning for high availability, there are additional options. You can use one or both of the following options with your dedicated hosts:
-Span across multiple availability zones. In this case, you are required to have a host group in each of the zones you wish to use.
-Span across multiple fault domains which are mapped to physical racks.
+* Span across multiple availability zones. In this case, you are required to have a host group in each of the zones you wish to use.
+* Span across multiple fault domains, which are mapped to physical racks.
In either case, you need to provide the fault domain count for your host group. If you do not want to span fault domains in your group, use a fault domain count of 1. You can also decide to use both availability zones and fault domains.

Not all host SKUs are available in all regions and availability zones. You can list host availability, and any offer restrictions, before you start provisioning dedicated hosts:

```azurecli-interactive
az vm list-skus -l eastus2 -r hostGroups/hosts -o table
```
For more information about the host SKUs and pricing, see [Azure Dedicated Host
Use `az vm host create` to create a host. If you set a fault domain count for your host group, you will be asked to specify the fault domain for your host.
-In this example, we will use [az vm host group create](/cli/azure/vm/host/group#az_vm_host_group_create?view=azure-cli-latest&preserve-view=true) to create a host group using both availability zones and fault domains.
+In this example, we will use [az vm host group create][az-vm-host-group-create] to create a host group using both availability zones and fault domains.
```azurecli-interactive
az vm host group create \
az vm host group create \
```

## Create an AKS cluster using the Host Group

Create an AKS cluster, and add the Host Group you just configured.

```azurecli-interactive
az aks create -g MyResourceGroup -n MyManagedCluster --location westus2 --kubernetes-version 1.20.13 --nodepool-name agentpool1 --node-count 1 --host-group-id <id> --node-vm-size Standard_D2s_v3 --enable-managed-identity --assign-identity <id>
```
-## Add a Dedicated Host Nodepool to an existing AKS cluster
+## Add a Dedicated Host Node Pool to an existing AKS cluster
Add a Host Group to an already existing AKS cluster.

```azurecli-interactive
az aks nodepool add --cluster-name MyManagedCluster --name agentpool3 --resource-group MyResourceGroup --node-count 1 --host-group-id <id> --node-vm-size Standard_D2s_v3
```
-## Remove a Dedicated Host Nodepool from an AKS cluster
+## Remove a Dedicated Host Node Pool from an AKS cluster
```azurecli-interactive az aks nodepool delete --cluster-name MyManagedCluster --name agentpool3 --resource-group MyResourceGroup
In this article, you learned how to create an AKS cluster with a Dedicated host,
[aks-faq]: faq.md
[azure-cli-install]: /cli/azure/install-azure-cli
[dedicated-hosts]: /azure/virtual-machines/dedicated-hosts.md
+[az-vm-host-group-create]: /cli/azure/vm/host/group#az_vm_host_group_create
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
API developers face challenges when working with Resource Manager templates:
* API developers often work with the [OpenAPI Specification](https://github.com/OAI/OpenAPI-Specification) and might not be familiar with Resource Manager schemas. Authoring templates manually might be error-prone.
  A tool called [Creator](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#Creator) in the resource kit can help automate the creation of API templates based on an OpenAPI Specification file. Additionally, developers can supply API Management policies for an API in XML format.
+ A tool called [Creator](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#Creator) in the resource kit can help automate the creation of API templates based on an Open API Specification file. Additionally, developers can supply API Management policies for an API in XML format.
-* For customers who are already using API Management, another challenge is to extract existing configurations into Resource Manager templates. For those customers, a tool called [Extractor](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/master/src/APIM_ARMTemplate/README.md#extractor) in the resource kit can help generate templates by extracting configurations from their API Management instances.
+* For customers who are already using API Management, another challenge is to extract existing configurations into Resource Manager templates. For those customers, a tool called [Extractor](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#extractor) in the resource kit can help generate templates by extracting configurations from their API Management instances.
## Workflow
api-management How To Deploy Self Hosted Gateway Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md
This article provides the steps for deploying self-hosted gateway component of A
## Next steps
-* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md).
+* Learn more about the self-hosted gateway in the [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md).
+* Review guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md).
* Learn [how to deploy API Management self-hosted gateway to Azure Arc-enabled Kubernetes clusters](how-to-deploy-self-hosted-gateway-azure-arc.md). * Learn [how to configure and persist logs in the cloud](how-to-configure-cloud-metrics-logs.md). * Learn [how to configure and persist logs locally](how-to-configure-local-metrics-logs.md).
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
When connectivity is restored, each self-hosted gateway affected by the outage w
## Next steps

-- Learn more about [API Management in a Hybrid and MultiCloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
+- Learn more about [API Management in a Hybrid and Multi-Cloud World](https://aka.ms/hybrid-and-multi-cloud-api-management)
+- Review guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md)
- [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md) - [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md) - [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Replace the *server-name* placeholder with a unique SQL Database name. This name
az sql server create --location eastus --resource-group msdocs-core-sql
- --server <server-name>
+ --name <server-name>
--admin-user <db-username> --admin-password <db-password> ```
application-gateway Application Gateway Configure Listener Specific Ssl Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-listener-specific-ssl-policy.md
Previously updated : 03/30/2021 Last updated : 02/18/2022
-# Configure listener-specific SSL policies on Application Gateway through portal (Preview)
+# Configure listener-specific SSL policies on Application Gateway through portal
This article describes how to use the Azure portal to configure listener-specific SSL policies on your Application Gateway. Listener-specific SSL policies allow you to configure specific listeners to use different SSL policies from each other. You'll still be able to set a default SSL policy that all listeners will use unless overwritten by the listener-specific SSL policy.
First create a new Application Gateway as you would usually through the portal -
## Set up a listener-specific SSL policy
-To set up a listener-specific SSL policy, you'll need to first go to the **SSL settings (Preview)** tab in the Portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **SSL Policy** tab is to configure a listener-specific SSL policy. The **Client Authentication** tab is where to upload a client certificate(s) for mutual authentication - for more information, check out [Configuring a mutual authentication](./mutual-authentication-portal.md).
+To set up a listener-specific SSL policy, you'll need to first go to the **SSL settings** tab in the Portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **SSL Policy** tab is where you configure a listener-specific SSL policy. The **Client Authentication** tab is where you upload your client certificate(s) for mutual authentication - for more information, check out [Configuring mutual authentication](./mutual-authentication-portal.md).
> [!NOTE]
> We recommend using TLS 1.2, as TLS 1.2 will be mandated in the future.

1. Search for **Application Gateway** in the portal, select **Application gateways**, and click on your existing Application Gateway.
-2. Select **SSL settings (Preview)** from the left-side menu.
+2. Select **SSL settings** from the left-side menu.
3. Click on the plus sign next to **SSL Profiles** at the top to create a new SSL profile.
Now that we've created an SSL profile with a listener-specific SSL policy, we ne
![Associate SSL profile to new listener](./media/mutual-authentication-portal/mutual-authentication-listener-portal.png)
+### Limitations
+Currently, Application Gateway has a limitation where different listeners using the same port can't have different custom SSL policies configured. To ensure that the custom protocols configured as part of the custom SSL policy are applied to a listener, make sure that listeners run on different ports, or configure the same custom SSL policy with the same custom protocols across all listeners running on the same port.
## Next steps

> [!div class="nextstepaction"]
-> [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
+> [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md
-# Overview of mutual authentication with Application Gateway (Preview)
+# Overview of mutual authentication with Application Gateway
Mutual authentication, or client authentication, allows the Application Gateway to authenticate the client sending requests. Usually only the client is authenticating the Application Gateway; mutual authentication allows both the client and the Application Gateway to authenticate each other.
For more information on how to set up mutual authentication, see [configure mutu
> [!IMPORTANT]
> Make sure you upload the entire trusted client CA certificate chain to the Application Gateway when using mutual authentication.
+Each SSL profile can support up to 5 trusted client CA certificate chains.
## Additional client authentication validation

### Verify client certificate DN
For more information on how to extract trusted client CA certificate chains, see
## Server variables
-With mutual authentication, there are additional server variables that you can use to pass information about the client certificate to the backend servers behind the Application Gateway. For more information about which server variables are available and how to use them, check out [server variables](./rewrite-http-headers-url.md#mutual-authentication-server-variables-preview).
+With mutual authentication, there are additional server variables that you can use to pass information about the client certificate to the backend servers behind the Application Gateway. For more information about which server variables are available and how to use them, check out [server variables](./rewrite-http-headers-url.md#mutual-authentication-server-variables).
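For illustration, here's a sketch of a rewrite rule that passes the client certificate subject to the backend via one of these server variables (the gateway and rule-set names are placeholders, `X-Client-Cert-Subject` is an arbitrary header name, and `client_certificate_subject` is assumed to be among the available variables):

```azurecli-interactive
# Forward the authenticated client certificate's subject to the backend as a request header
az network application-gateway rewrite-rule create \
  --gateway-name MyAppGateway \
  --resource-group MyResourceGroup \
  --rule-set-name MyRewriteRuleSet \
  --name ForwardClientCertSubject \
  --sequence 100 \
  --request-headers X-Client-Cert-Subject={var_client_certificate_subject}
```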
+
+## Certificate Revocation
+
+Client certificate revocation with OCSP (Online Certificate Status Protocol) will be supported shortly.
## Next steps
application-gateway Mutual Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-portal.md
Previously updated : 04/02/2021 Last updated : 02/18/2022
-# Configure mutual authentication with Application Gateway through portal (Preview)
+# Configure mutual authentication with Application Gateway through portal
This article describes how to use the Azure portal to configure mutual authentication on your Application Gateway. Mutual authentication means Application Gateway authenticates the client sending the request using the client certificate you upload onto the Application Gateway.
First create a new Application Gateway as you would usually through the portal -
## Configure mutual authentication
-To configure an existing Application Gateway with mutual authentication, you'll need to first go to the **SSL settings (Preview)** tab in the Portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **Client Authentication** tab is where you'll upload your client certificate(s). The **SSL Policy** tab is to configure a listener specific SSL policy - for more information, check out [Configuring a listener specific SSL policy](./application-gateway-configure-listener-specific-ssl-policy.md).
+To configure an existing Application Gateway with mutual authentication, you'll need to first go to the **SSL settings** tab in the Portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **Client Authentication** tab is where you'll upload your client certificate(s). The **SSL Policy** tab is where you configure a listener-specific SSL policy - for more information, check out [Configuring a listener-specific SSL policy](./application-gateway-configure-listener-specific-ssl-policy.md).
> [!IMPORTANT]
> Please ensure that you upload the entire client CA certificate chain in one file, and only one chain per file.

1. Search for **Application Gateway** in the portal, select **Application gateways**, and click on your existing Application Gateway.
-2. Select **SSL settings (Preview)** from the left-side menu.
+2. Select **SSL settings** from the left-side menu.
3. Click on the plus sign next to **SSL Profiles** at the top to create a new SSL profile.
Now that we've created an SSL profile with mutual authentication configured, we
If your client CA certificate has expired, you can update the certificate on your gateway through the following steps:
-1. Navigate to your Application Gateway and go to the **SSL settings (Preview)** tab in the left-hand menu.
+1. Navigate to your Application Gateway and go to the **SSL settings** tab in the left-hand menu.
1. Select the existing SSL profile(s) with the expired client certificate.
In the case that your client CA certificate has expired, you can update the cert
## Next steps -- [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
+- [Manage web traffic with an application gateway using the Azure CLI](./tutorial-manage-web-traffic-cli.md)
application-gateway Mutual Authentication Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-powershell.md
Previously updated : 04/02/2021 Last updated : 02/18/2022
-# Configure mutual authentication with Application Gateway through PowerShell (Preview)
+# Configure mutual authentication with Application Gateway through PowerShell
This article describes how to use PowerShell to configure mutual authentication on your Application Gateway. Mutual authentication means Application Gateway authenticates the client sending the request using the client certificate you upload onto the Application Gateway. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
application-gateway Mutual Authentication Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-troubleshooting.md
Previously updated : 04/02/2021 Last updated : 02/18/2022
-# Troubleshooting mutual authentication errors in Application Gateway (Preview)
+# Troubleshooting mutual authentication errors in Application Gateway
Learn how to troubleshoot problems with mutual authentication when using Application Gateway.
After configuring mutual authentication on an Application Gateway, there can be
* Uploaded a certificate chain that only contained a leaf certificate without a CA certificate
* Validation errors due to issuer DN mismatch
-We'll go through different scenarios that you might run into and how to troubleshoot those scenarios. We'll then address error codes and explain likely causes for certain error codes you might be seeing with mutual authentication.
+We'll go through different scenarios that you might run into and how to troubleshoot those scenarios. We'll then address error codes and explain likely causes for certain error codes you might be seeing with mutual authentication. All client certificate authentication failures should result in an HTTP 400 error code.
## Scenario troubleshooting - configuration problems

There are a few scenarios that you might be facing when trying to configure mutual authentication. We'll walk through how to troubleshoot some of the most common pitfalls.
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The new v2 SKU includes the following enhancements:
- **Static VIP**: Application Gateway v2 SKU supports the static VIP type exclusively. This ensures that the VIP associated with the application gateway doesn't change for the lifecycle of the deployment, even after a restart. There isn't a static VIP in v1, so you must use the application gateway URL instead of the IP address for domain name routing to App Services via the application gateway.
- **Header Rewrite**: Application Gateway allows you to add, remove, or update HTTP request and response headers with v2 SKU. For more information, see [Rewrite HTTP headers with Application Gateway](./rewrite-http-headers-url.md)
- **Key Vault Integration**: Application Gateway v2 supports integration with Key Vault for server certificates that are attached to HTTPS enabled listeners. For more information, see [TLS termination with Key Vault certificates](key-vault-certs.md).
+- **Mutual Authentication (mTLS)**: Application Gateway v2 supports authentication of client requests. For more information, see [Overview of mutual authentication with Application Gateway](mutual-authentication-overview.md).
- **Azure Kubernetes Service Ingress Controller**: The Application Gateway v2 Ingress Controller allows the Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service (AKS) known as AKS Cluster. For more information, see [What is Application Gateway Ingress Controller?](ingress-controller-overview.md).
- **Performance enhancements**: The v2 SKU offers up to 5X better TLS offload performance as compared to the Standard/WAF SKU.
- **Faster deployment and update time**: The v2 SKU provides faster deployment and update time as compared to Standard/WAF SKU. This also includes WAF configuration changes.
The following table compares the features available with each SKU.
| Rewrite HTTP(S) headers | | &#x2713; |
| URL-based routing | &#x2713; | &#x2713; |
| Multiple-site hosting | &#x2713; | &#x2713; |
+| Mutual Authentication (mTLS) | | &#x2713; |
| Traffic redirection | &#x2713; | &#x2713; |
| Web Application Firewall (WAF) | &#x2713; | &#x2713; |
| WAF custom rules | | &#x2713; |
An Azure PowerShell script is available in the PowerShell gallery to help you mi
Depending on your requirements and environment, you can create a test Application Gateway using either the Azure portal, Azure PowerShell, or Azure CLI.

-- [Tutorial: Create an application gateway that improves web application access](tutorial-autoscale-ps.md)
+- [Tutorial: Create an application gateway that improves web application access](tutorial-autoscale-ps.md)
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
Application gateway supports the following server variables:
| ssl_enabled | "On" if the connection operates in TLS mode. Otherwise, an empty string. |
| uri_path | Identifies the specific resource in the host that the web client wants to access. This is the part of the request URI without the arguments. Example: In the request `http://contoso.com:8080/article.aspx?id=123&title=fabrikam`, uri_path value will be `/article.aspx` |
-### Mutual authentication server variables (Preview)
+### Mutual authentication server variables
Application Gateway supports the following server variables for mutual authentication scenarios. Use these server variables the same way as above with the other server variables.
applied-ai-services Build Custom Model V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-custom-model-v3.md
Last updated 02/16/2022
-# Build your training data set for a custom model
+# Build your training dataset for a custom model
Form Recognizer models require as few as five training documents. If you have at least five documents, you can get started training a custom model. You can train either a [custom template model (custom form)](../concept-custom-template.md) or a [custom neural model (custom document)](../concept-custom-neural.md). The training process is identical for both models, and this document walks you through training either model.
Congratulations, you've trained a custom model in the Form Recognizer Studio! You
> [Learn about custom model types](../concept-custom.md) > [!div class="nextstepaction"]
-> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
+> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
automation Automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md
Azure Automation provides native integration of the Hybrid Runbook Worker role t
| Platform | Description |
|||
-|Agent-based (V1) |Installed after the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/logs/design-logs-deployment.md) is completed.|
-|Extension-based (V2) |Installed using the [Hybrid Runbook Worker VM extension](./extension-based-hybrid-runbook-worker-install.md), without any dependency on the Log Analytics agent reporting to an Azure Monitor Log Analytics workspace. This is the recommended platform.|
+|**Extension-based (V2)** |Installed using the [Hybrid Runbook Worker VM extension](./extension-based-hybrid-runbook-worker-install.md), without any dependency on the Log Analytics agent reporting to an Azure Monitor Log Analytics workspace. **This is the recommended platform**.|
+|**Agent-based (V1)** |Installed after the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) reporting to an Azure Monitor [Log Analytics workspace](../azure-monitor/logs/design-logs-deployment.md) is completed.|
+ :::image type="content" source="./media/automation-hybrid-runbook-worker/hybrid-worker-group-platform.png" alt-text="Hybrid worker group showing platform field":::
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
Optionally, you can specify certificates for logs and metrics UI dashboards. See
After the extension and custom location are created, proceed to deploy the Azure Arc data controller as follows.

```azurecli
-az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --profile-name <profile name> --auto-upload-logs true --auto-upload-metrics true --custom-location <name of custom location>
+az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --profile-name <profile name> --auto-upload-logs true --auto-upload-metrics true --custom-location <name of custom location> --storage-class <storageclass>
# Example
-az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-logs true --auto-upload-metrics true --custom-location mycustomlocation
+az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-logs true --auto-upload-metrics true --custom-location mycustomlocation --storage-class mystorageclass
```

If you want to create the Azure Arc data controller using a custom configuration template, follow the steps described in [Create custom configuration profile](create-custom-configuration-template.md) and provide the path to the file as follows:
az arcdata dc create --name arc-dc1 --resource-group my-resource-group --locatio
The deployment status of the Arc data controller on the cluster can be monitored as follows:

```console
-kubectl get datacontrollers --name arc
+kubectl get datacontrollers --namespace arc
```

## Next steps

[Create an Azure Arc-enabled PostgreSQL Hyperscale server group](create-postgresql-hyperscale-server-group.md)
-[Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md)
+[Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md)
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md
az sql mi-arc create -n sqldemo --k8s-namespace my-namespace --use-k8s
### [Directly connected mode](#tab/directly)

```azurecli
-az sql mi-arc create --name <name> --resource-group <group> --location <Azure location> –subscription <subscription> --custom-location <custom-location>
+az sql mi-arc create --name <name> --resource-group <group> --location <Azure location> --subscription <subscription> --custom-location <custom-location>
```

Example:

```azurecli
-az sql mi-arc create --name sqldemo --resource-group rg --location uswest2 –subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location
+az sql mi-arc create --name sqldemo --resource-group rg --location westus2 --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location
```
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/platform/conceptual-custom-locations.md
Title: "Overview of Custom Locations with Azure Arc"
+ Title: "Overview of custom locations with Azure Arc"
Previously updated : 10/13/2021 Last updated : 02/17/2022
-description: "This article provides a conceptual overview of Custom Locations capability of Azure Arc"
+description: "This article provides a conceptual overview of the custom locations capability of Azure Arc."
-# What is a Custom location?
+# Custom locations
-As an extension of the Azure location construct, *Custom Locations* provides a reference as deployment target which administrators can setup, and user can point to, when creating an Azure resource. It abstracts the backend infrastructure details from application developers, database admin users, or other users in the organization. Since Custom Locations is an Azure Resource Manager resource that supports [Role based Access Control](../../role-based-access-control/overview.md) (RBAC), an administrator or operator can determine which users have access to create resource instances on:
+As an extension of the Azure location construct, a *custom location* provides a reference as a deployment target that administrators can set up, and users can point to, when creating an Azure resource. It abstracts the backend infrastructure details from application developers, database admin users, or other users in the organization.
+
+Since the custom location is an Azure Resource Manager resource that supports [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md), an administrator or operator can determine which users have access to create resource instances on:
* A namespace within a Kubernetes cluster to target deployment of Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale instances. * The compute, storage, networking, and other vCenter or Azure Stack HCI resources to deploy and manage VMs.
-They are represented by a custom location by assigning RBAC permissions to users within your organization on the custom location.
-
-For example, a cluster operator can create a custom location **Contoso-Michigan-Healthcare-App** representing a namespace on a Kubernetes cluster in your organization's Michigan Data Center and assign permissions to application developers on this custom location to deploy healthcare related web applications without the developer having to know details of the namespace and Kubernetes cluster where the application would be deployed on.
+For example, a cluster operator could create a custom location **Contoso-Michigan-Healthcare-App** representing a namespace on a Kubernetes cluster in your organization's Michigan Data Center. The operator can then assign Azure RBAC permissions to application developers on this custom location so that they can deploy healthcare-related web applications. The developers can then deploy these applications without having to know details of the namespace and Kubernetes cluster.
-On Arc-enabled Kubernetes clusters, Custom Locations represents an abstraction of a namespace within the Azure Arc-enabled Kubernetes cluster. Custom Locations creates the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster. These other Azure services require cluster access to manage resources you want to deploy on your clusters.
+On Arc-enabled Kubernetes clusters, a custom location represents an abstraction of a namespace within the Azure Arc-enabled Kubernetes cluster. Custom locations create the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster.
> [!IMPORTANT]
> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.

## Architecture for Arc-enabled Kubernetes
-When an administrator enables the Custom Locations feature on the cluster, a ClusterRoleBinding is created on the cluster, authorizing the Azure AD application used by the Custom Locations Resource Provider (RP). Once authorized, Custom Locations RP can create ClusterRoleBindings or RoleBindings needed by other Azure RPs to create custom resources on this cluster. The cluster extensions installed on the cluster determines the list of RPs to authorize.
+When an administrator enables the custom locations feature on a cluster, a ClusterRoleBinding is created, authorizing the Azure AD application used by the Custom Locations Resource Provider (RP). Once authorized, the Custom Locations RP can create ClusterRoleBindings or RoleBindings needed by other Azure RPs to create custom resources on this cluster. The cluster extensions installed on the cluster determine the list of RPs to authorize.
[ ![Use custom locations](../kubernetes/media/conceptual-custom-locations-usage.png) ](../kubernetes/media/conceptual-custom-locations-usage.png#lightbox)

[!INCLUDE [preview features note](../kubernetes/includes/preview/preview-callout.md)]

When the user creates a data service instance on the cluster:

1. The **PUT** request is sent to Azure Resource Manager.
1. The **PUT** request is forwarded to the Azure Arc-enabled Data Services RP.
-1. The RP fetches the `kubeconfig` file associated with the Azure Arc-enabled Kubernetes cluster, on which the Custom Location exists.
- * Custom Location is referenced as `extendedLocation` in the original PUT request.
-1. Azure Arc-enabled Data Services RP uses the `kubeconfig` to communicate with the cluster to create a custom resource of the Azure Arc-enabled Data Services type on the namespace mapped to the Custom Location.
- * The Azure Arc-enabled Data Services operator was deployed via cluster extension creation before the Custom Location existed.
+1. The RP fetches the `kubeconfig` file associated with the Azure Arc-enabled Kubernetes cluster, on which the custom location exists.
+ * The custom location is referenced as `extendedLocation` in the original PUT request.
+1. The Azure Arc-enabled Data Services RP uses the `kubeconfig` to communicate with the cluster to create a custom resource of the Azure Arc-enabled Data Services type on the namespace mapped to the custom location.
+ * The Azure Arc-enabled Data Services operator was deployed via cluster extension creation before the custom location existed.
1. The Azure Arc-enabled Data Services operator reads the new custom resource created on the cluster and creates the data controller, translating into realization of the desired state on the cluster.

The sequence of steps to create the SQL managed instance and the PostgreSQL instance is identical to the sequence described above.

## Next steps
-* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](../kubernetes/quickstart-connect-cluster.md). Then [Create a custom location](../kubernetes/custom-locations.md) on your Azure Arc-enabled Kubernetes cluster.
+* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](../kubernetes/quickstart-connect-cluster.md). Then [create a custom location](../kubernetes/custom-locations.md) on your Azure Arc-enabled Kubernetes cluster.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 10/28/2021 Last updated : 02/17/2022 # Managing and maintaining the Connected Machine agent
-After initial deployment of the Azure Connected Machine agent for Windows or Linux, you may need to reconfigure the agent, upgrade it, or remove it from the computer. You can easily manage these routine maintenance tasks manually or through automation, which reduces both operational error and expenses.
+After initial deployment of the Azure Connected Machine agent, you may need to reconfigure the agent, upgrade it, or remove it from the computer. You can easily manage these routine maintenance tasks manually or through automation, which reduces both operational error and expenses.
-## Before uninstalling agent
+## About the azcmagent tool
-Before removing the Connected Machine agent from your Azure Arc-enabled server, consider the following to avoid unexpected issues or costs added to your Azure bill:
+The azcmagent tool is used to configure the Azure Connected Machine agent during installation, or modify the initial configuration of the agent after installation. azcmagent.exe provides command-line parameters to customize the agent and view its status:
-* If you have deployed Azure VM extensions to an enabled server, and you remove the Connected Machine agent or you delete the resource representing the Azure Arc-enabled server in the resource group, those extensions continue to run and perform their normal operation.
+* **connect** - To connect the machine to Azure Arc
-* If you delete the resource representing the Azure Arc-enabled server in your resource group, but you don't uninstall the VM extensions, when you re-register the machine, you won't be able to manage the installed VM extensions.
+* **disconnect** - To disconnect the machine from Azure Arc
-For servers or machines you no longer want to manage with Azure Arc-enabled servers, it is necessary to follow these steps to successfully stop managing it:
+* **show** - View agent status and its configuration properties (Resource Group name, Subscription ID, version, etc.), which can help when troubleshooting an issue with the agent. Include the `-j` parameter to output the results in JSON format.
-1. Remove the VM extensions from the machine or server. Steps are provided below.
+* **config** - View and change settings to enable features and control agent behavior
-2. Disconnect the machine from Azure Arc using one of the following methods:
+* **logs** - Creates a .zip file in the current directory containing logs to assist you while troubleshooting.
- * Running `azcmagent disconnect` command on the machine or server.
+* **version** - Shows the Connected Machine agent version.
- * From the selected registered Azure Arc-enabled server in the Azure portal by selecting **Delete** from the top bar.
+* **-useStderr** - Directs error and verbose output to stderr. Include the `-json` parameter to output the results in JSON format.
- * Using the [Azure CLI](../../azure-resource-manager/management/delete-resource-group.md?tabs=azure-cli#delete-resource) or [Azure PowerShell](../../azure-resource-manager/management/delete-resource-group.md?tabs=azure-powershell#delete-resource). For the`ResourceType` parameter use `Microsoft.HybridCompute/machines`.
+* **-h or --help** - Shows available command-line parameters
-3. [Uninstall the agent](#remove-the-agent) from the machine or server following the steps below.
+ For example, to see detailed help for the **Connect** parameter, type `azcmagent connect -h`.
-## Renaming a machine
+* **-v or --verbose** - Enable verbose logging
-When you change the name of the Linux or Windows machine connected to Azure Arc-enabled servers, the new name is not recognized automatically because the resource name in Azure is immutable. As with other Azure resources, you have to delete the resource and re-create it in order to use the new name.
+You can perform a **Connect** and **Disconnect** manually while logged on interactively, or automate using the same service principal you used to onboard multiple agents or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you didn't use a service principal to register the machine with Azure Arc-enabled servers, see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.
+
+>[!NOTE]
+>You must have *Administrator* permissions on Windows or *root* access permissions on Linux machines to run **azcmagent**.
+
+### Connect
+
+This parameter specifies that a resource representing the machine is created in Azure Resource Manager. The resource is in the subscription and resource group specified, and data about the machine is stored in the Azure region specified by the `--location` setting. The default resource name is the hostname of the machine if not specified.
+
+A certificate corresponding to the system-assigned identity of the machine is then downloaded and stored locally. Once this step is completed, the Azure Connected Machine Metadata Service and guest configuration agent service begins synchronizing with Azure Arc-enabled servers.
-For Azure Arc-enabled servers, before you rename the machine, it is necessary to remove the VM extensions before proceeding.
+To connect using a service principal, run the following command:
+
+`azcmagent connect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>`
+
+To connect using an access token, run the following command:
+
+`azcmagent connect --access-token <> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>`
+
+To connect with your elevated logged-on credentials (interactive), run the following command:
+
+`azcmagent connect --tenant-id <TenantID> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>`
+
+### Disconnect
+
+This parameter specifies that the resource in Azure Resource Manager representing the machine is deleted in Azure. It doesn't remove the agent from the machine; you uninstall the agent separately. After the machine is disconnected, if you want to re-register it with Azure Arc-enabled servers, use `azcmagent connect` so a new resource is created for it in Azure.
> [!NOTE]
-> While installed extensions continue to run and perform their normal operation after this procedure is complete, you won't be able to manage them. If you attempt to redeploy the extensions on the machine, you may experience unpredictable behavior.
+> If you have deployed one or more of the Azure VM extensions to your Azure Arc-enabled server and you delete its registration in Azure, the extensions are still installed. It's important to understand that, depending on the extension installed, it's actively performing its function. Machines that are intended to be retired or no longer managed by Azure Arc-enabled servers should first have the extensions removed before you remove the machine's registration from Azure.
-> [!WARNING]
-> We recommend you avoid renaming the machine's computer name and only perform this procedure if absolutely necessary.
+To disconnect using a service principal, run the following command:
-1. Audit the VM extensions installed on the machine and note their configuration, using the [Azure CLI](manage-vm-extensions-cli.md#list-extensions-installed) or using [Azure PowerShell](manage-vm-extensions-powershell.md#list-extensions-installed).
+`azcmagent disconnect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID>`
-2. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#remove-extensions), using the [Azure CLI](manage-vm-extensions-cli.md#remove-extensions), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-extensions).
+To disconnect using an access token, run the following command:
-3. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. Disconnecting the machine from Azure Arc-enabled servers does not remove the Connected Machine agent, and you do not need to remove the agent as part of this process. You can run azcmagent manually while logged on interactively, or automate using the same service principal you used to onboard multiple agents, or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you did not use a service principal to register the machine with Azure Arc-enabled servers, see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.
+`azcmagent disconnect --access-token <accessToken>`
+
+To disconnect with your elevated logged-on credentials (interactive), run the following command:
+
+`azcmagent disconnect`
+
+### Config
+
+This parameter allows you to view and configure settings that control agent behavior.
+
+To view a list of all the configuration properties and their values, run the following command:
+
+`azcmagent config list`
+
+To get the value for a particular configuration property, run the following command:
+
+`azcmagent config get <propertyName>`
+
+To change a configuration property, run the following command:
-4. Rename the machines computer name.
+`azcmagent config set <propertyName> <propertyValue>`
-5. Re-register the Connected Machine agent with Azure Arc-enabled servers. Run the `azcmagent` tool with the [Connect](manage-agent.md#connect) parameter complete this step.
+To clear a configuration property's value, run the following command:
-6. Redeploy the VM extensions that were originally deployed to the machine from Azure Arc-enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy definition, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
+`azcmagent config clear <propertyName>`
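For example, the agent-specific proxy settings described later in this article are managed through these commands. A minimal sketch (the `proxy.url` property name comes from the proxy section below; the URL value is a placeholder):

```powershell
# Set, inspect, and then clear the agent's proxy configuration property.
azcmagent config set proxy.url "http://ProxyServerFQDN:port"
azcmagent config get proxy.url
azcmagent config clear proxy.url
```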
## Upgrading agent
-The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that are not using the latest version of machine agent and recommends that you upgrade to the latest version. It will notify you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal.
+The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that are not using the latest version of the machine agent and recommends that you upgrade to the latest version. It notifies you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page, and when you access Advisor through the Azure portal.
The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements.
The following table describes the methods supported to perform the agent upgrade
| Operating system | Upgrade method |
|---|---|
-| Windows | Manually<br> Windows Update |
-| Ubuntu | [Apt](https://help.ubuntu.com/lts/serverguide/apt.html) |
+| Windows | Manually<br> Microsoft Update |
+| Ubuntu | [apt](https://help.ubuntu.com/lts/serverguide/apt.html) |
| SUSE Linux Enterprise Server | [zypper](https://en.opensuse.org/SDB:Zypper_usage_11.3) |
| RedHat Enterprise, Amazon, CentOS Linux | [yum](https://wiki.centos.org/PackageManagement/Yum) |

### Windows agent
-Update package for the Connected Machine agent for Windows is available from:
+The latest version of the Azure Connected Machine agent for Windows-based machines can be obtained from:
* Microsoft Update
* [Microsoft Update Catalog](https://www.catalog.update.microsoft.com/Home.aspx)
-* [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center.
+* [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent)
-The agent can be upgraded following various methods to support your software update management process. Outside of obtaining from Microsoft Update, you can download and run manually from the Command Prompt, from a script or other automation solution, or from the UI wizard by executing `AzureConnectedMachine.msi`.
+#### Microsoft Update configuration
-> [!NOTE]
-> * To upgrade the agent, you must have *Administrator* permissions.
-> * To upgrade manually, you must first download and copy the Installer package to a folder on the target server, or from a shared network folder.
+The recommended way of keeping the Windows agent up to date is to automatically obtain the latest version through Microsoft Update. This allows you to utilize your existing update infrastructure (such as Microsoft Endpoint Configuration Manager or Windows Server Update Services) and include Azure Connected Machine agent updates with your regular OS update schedule.
+
+Windows Server doesn't check for updates in Microsoft Update by default. You need to configure the Windows Update client on the machine to also check for other Microsoft products in order to receive automatic updates for the Azure Connected Machine Agent.
+
+For Windows Servers that belong to a workgroup and connect to the Internet to check for updates, you can enable Microsoft Update by running the following commands in PowerShell as an administrator:
+
+```powershell
+$ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
+$ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d"
+$ServiceManager.AddService2($ServiceId,7,"")
+```
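If you want to confirm that Microsoft Update was registered, one quick check is to list the registered update services through the same `Microsoft.Update.ServiceManager` COM interface used above (a sketch, not part of the original procedure):

```powershell
# List registered update services; after registration, Microsoft Update
# appears with service ID 7971f918-a847-4430-9279-4a52d1efe18d.
(New-Object -ComObject "Microsoft.Update.ServiceManager").Services |
    Select-Object -Property Name, ServiceID
```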
+
+For Windows Servers that belong to a domain and connect to the Internet to check for updates, you can configure this setting at-scale using Group Policy:
-If you are unfamiliar with the command-line options for Windows Installer packages, review [Msiexec standard command-line options](/windows/win32/msi/standard-installer-command-line-options) and [Msiexec command-line options](/windows/win32/msi/command-line-options).
+1. Sign in to a computer used for server administration with an account that can manage Group Policy Objects (GPO) for your organization.
+1. Open the **Group Policy Management Console**.
+1. Expand the forest, domain, and organizational unit(s) to select the appropriate scope for your new GPO. If you already have a GPO you wish to modify, skip to step 6.
+1. Right-click the container and select **Create a GPO in this domain, and Link it here...**
+1. Provide a name for your policy, such as "Enable Microsoft Update".
+1. Right-click the policy and select **Edit**.
+1. Navigate to **Computer Configuration > Administrative Templates > Windows Components > Windows Update**.
+1. Double-click the **Configure Automatic Updates** setting to edit it.
+1. Select the **Enabled** radio button to allow the policy to take effect.
+1. In the Options section, check the box for **Install updates for other Microsoft products** at the bottom.
+1. Select **OK**.
-#### To upgrade using the Setup Wizard
+The next time computers in your selected scope refresh their policy, they will start to check for updates in both Windows Update and Microsoft Update.
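For a single test machine, you can presumably achieve the same result by setting the policy registry value directly. The path and the `AllowMUUpdateService` value name in this sketch are assumptions based on standard Windows Update policy settings, not taken from this article:

```powershell
# Sketch: enable "Install updates for other Microsoft products" locally
# instead of through a GPO. Assumes the standard Windows Update policy key.
$au = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"
New-Item -Path $au -Force | Out-Null
New-ItemProperty -Path $au -Name "AllowMUUpdateService" -Value 1 -PropertyType DWord -Force | Out-Null
```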
+
+For organizations that use Microsoft Endpoint Configuration Manager (MECM) or Windows Server Update Services (WSUS) to deliver updates to their servers, you need to configure WSUS to synchronize the Azure Connected Machine Agent packages and approve them for installation on your servers. Follow the guidance for [Windows Server Update Services](/windows-server/administration/windows-server-update-services/manage/setting-up-update-synchronizations#to-specify-update-products-and-classifications-for-synchronization) or [MECM](/mem/configmgr/sum/get-started/configure-classifications-and-products#to-configure-classifications-and-products-to-synchronize) to add the following products and classifications to your configuration:
+
+* **Product Name**: Azure Connected Machine Agent (select all 3 sub-options)
+* **Classifications**: Critical Updates, Updates
+
+Once the updates are being synchronized, you can optionally add the Azure Connected Machine Agent product to your auto-approval rules so your servers automatically stay up to date with the latest agent software.
+
+#### To manually upgrade using the Setup Wizard
1. Sign on to the computer with an account that has administrative rights.
-2. Execute **AzureConnectedMachineAgent.msi** to start the Setup Wizard.
+2. Download the latest agent installer from https://aka.ms/AzureConnectedMachineAgent
+
+3. Execute **AzureConnectedMachineAgent.msi** to start the Setup Wizard.
The Setup Wizard discovers if a previous version exists, and then it automatically performs an upgrade of the agent. When the upgrade completes, the Setup Wizard automatically closes.

#### To upgrade from the command line
+If you're unfamiliar with the command-line options for Windows Installer packages, review [Msiexec standard command-line options](/windows/win32/msi/standard-installer-command-line-options) and [Msiexec command-line options](/windows/win32/msi/command-line-options).
+ 1. Sign on to the computer with an account that has administrative rights.
-2. To upgrade the agent silently and create a setup log file in the `C:\Support\Logs` folder, run the following command.
+2. Download the latest agent installer from https://aka.ms/AzureConnectedMachineAgent
+
+3. To upgrade the agent silently and create a setup log file in the `C:\Support\Logs` folder, run the following command.
```dos
- msiexec.exe /i AzureConnectedMachineAgent.msi /qn /l*v "C:\Support\Logs\Azcmagentupgradesetup.log"
+ msiexec.exe /i AzureConnectedMachineAgent.msi /qn /l*v "C:\Support\Logs\azcmagentupgradesetup.log"
```

### Linux agent
You can download the latest agent package from Microsoft's [package repository](
> [!NOTE]
> To upgrade the agent, you must have *root* access permissions or use an account that has elevated rights using sudo.
-#### Upgrade Ubuntu
+#### Upgrade the agent on Ubuntu
1. To update the local package index with the latest changes made in the repositories, run the following command:
You can download the latest agent package from Microsoft's [package repository](
Actions of the [apt](https://help.ubuntu.com/lts/serverguide/apt.html) command, such as installation and removal of packages, are logged in the `/var/log/dpkg.log` log file.
-#### Upgrade Red Hat/CentOS/Amazon Linux
+#### Upgrade the agent on Red Hat/CentOS/Oracle Linux/Amazon Linux
1. To update the local package index with the latest changes made in the repositories, run the following command:
Actions of the [apt](https://help.ubuntu.com/lts/serverguide/apt.html) command,
sudo yum update azcmagent
```
-Actions of the [yum](https://access.redhat.com/articles/yum-cheat-sheet) command, such as installation and removal of packages, are logged in the `/var/log/yum.log` log file.
+Actions of the [yum](https://access.redhat.com/articles/yum-cheat-sheet) command, such as installation and removal of packages, are logged in the `/var/log/yum.log` log file.
-#### Upgrade SUSE Linux Enterprise
+#### Upgrade the agent on SUSE Linux Enterprise
1. To update the local package index with the latest changes made in the repositories, run the following command:
Actions of the [yum](https://access.redhat.com/articles/yum-cheat-sheet) command
Actions of the [zypper](https://en.opensuse.org/Portal:Zypper) command, such as installation and removal of packages, are logged in the `/var/log/zypper.log` log file.
-## About the Azcmagent tool
-
-The Azcmagent tool (Azcmagent.exe) is used to configure the Azure Connected Machine agent during installation, or modify the initial configuration of the agent after installation. Azcmagent.exe provides command-line parameters to customize the agent and view its status:
-
-* **connect** - To connect the machine to Azure Arc
-
-* **disconnect** - To disconnect the machine from Azure Arc
-
-* **show** - View agent status and its configuration properties (Resource Group name, Subscription ID, version, etc.), which can help when troubleshooting an issue with the agent. Include the `-j` parameter to output the results in JSON format.
-
-* **config** - View and change settings to enable features and control agent behavior
-
-* **logs** - Creates a .zip file in the current directory containing logs to assist you while troubleshooting.
-
-* **version** - Shows the Connected Machine agent version.
-
-* **-useStderr** - Directs error and verbose output to stderr. Include the `-json` parameter to output the results in JSON format.
-
-* **-h or --help** - Shows available command-line parameters
-
- For example, to see detailed help for the **Connect** parameter, type `azcmagent connect -h`.
-
-* **-v or --verbose** - Enable verbose logging
-
-You can perform a **Connect** and **Disconnect** manually while logged on interactively, or automate using the same service principal you used to onboard multiple agents or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you did not use a service principal to register the machine with Azure Arc-enabled servers, see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.
-
->[!NOTE]
->You must have *Administrator* permissions on Windows or *root* access permissions on Linux machines to run **azcmagent**.
-
-### Connect
-
-This parameter specifies a resource in Azure Resource Manager representing the machine is created in Azure. The resource is in the subscription and resource group specified, and data about the machine is stored in the Azure region specified by the `--location` setting. The default resource name is the hostname of the machine if not specified.
-
-A certificate corresponding to the system-assigned identity of the machine is then downloaded and stored locally. Once this step is completed, the Azure Connected Machine Metadata Service and guest configuration agent service begins synchronizing with Azure Arc-enabled servers.
-
-To connect using a service principal, run the following command:
-
-`azcmagent connect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>`
-
-To connect using an access token, run the following command:
-
-`azcmagent connect --access-token <> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>`
-
-To connect with your elevated logged-on credentials (interactive), run the following command:
+## Renaming an Azure Arc-enabled server resource
-`azcmagent connect --tenant-id <TenantID> --subscription-id <subscriptionID> --resource-group <ResourceGroupName> --location <resourceLocation>`
-
-### Disconnect
-
-This parameter specifies a resource in Azure Resource Manager representing the machine is deleted in Azure. It does not remove the agent from the machine, you uninstall the agent separately. After the machine is disconnected, if you want to re-register it with Azure Arc-enabled servers, use `azcmagent connect` so a new resource is created for it in Azure.
-
-> [!NOTE]
-> If you have deployed one or more of the Azure VM extensions to your Azure Arc-enabled server and you delete its registration in Azure, the extensions are still installed. It is important to understand that depending on the extension installed, it is actively performing its function. Machines that are intended to be retired or no longer managed by Azure Arc-enabled servers should first have the extensions removed before removing its registration from Azure.
-
-To disconnect using a service principal, run the following command:
-
-`azcmagent disconnect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword> --tenant-id <tenantID>`
-
-To disconnect using an access token, run the following command:
-
-`azcmagent disconnect --access-token <accessToken>`
-
-To disconnect with your elevated logged-on credentials (interactive), run the following command:
+When you change the name of the Linux or Windows machine connected to Azure Arc-enabled servers, the new name is not recognized automatically because the resource name in Azure is immutable. As with other Azure resources, you have to delete the resource and re-create it in order to use the new name.
-`azcmagent disconnect`
+For Azure Arc-enabled servers, you must remove the VM extensions before renaming the machine.
-### Config
+1. Audit the VM extensions installed on the machine and note their configuration, using the [Azure CLI](manage-vm-extensions-cli.md#list-extensions-installed) or using [Azure PowerShell](manage-vm-extensions-powershell.md#list-extensions-installed).
-This parameter allows you to view and configure settings that control agent behavior.
+2. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#remove-extensions), using the [Azure CLI](manage-vm-extensions-cli.md#remove-extensions), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-extensions).
-To view a list of all the configuration properties and their values, run the following command:
+3. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. Disconnecting the machine from Azure Arc-enabled servers doesn't remove the Connected Machine agent, and you do not need to remove the agent as part of this process. You can run azcmagent manually while logged on interactively, or automate using the same service principal you used to onboard multiple agents, or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you didn't use a service principal to register the machine with Azure Arc-enabled servers, see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.
-`azcmagent config list`
+4. Re-register the Connected Machine agent with Azure Arc-enabled servers. Run the `azcmagent` tool with the [Connect](manage-agent.md#connect) parameter to complete this step. The agent defaults to using the computer's current hostname, but you can choose your own resource name by passing the `--resource-name` parameter to the connect command (see the sketch after these steps).
-To get the value for a particular configuration property, run the following command:
+5. Redeploy the VM extensions that were originally deployed to the machine from Azure Arc-enabled servers. If you deployed the Azure Monitor for VMs (insights) agent or the Log Analytics agent using an Azure Policy definition, the agents are redeployed after the next [evaluation cycle](../../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
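As referenced in step 4, a sketch of re-registering under a new resource name, reusing the service principal flags shown earlier in this article (all values are placeholders):

```powershell
# Re-register the machine and choose a new Azure resource name.
azcmagent connect --service-principal-id <serviceprincipalAppID> `
    --service-principal-secret <serviceprincipalPassword> `
    --tenant-id <tenantID> --subscription-id <subscriptionID> `
    --resource-group <ResourceGroupName> --location <resourceLocation> `
    --resource-name <newResourceName>
```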
-`azcmagent config get <propertyName>`
+## Uninstall the agent
-To change a configuration property, run the following command:
+For servers you no longer want to manage with Azure Arc-enabled servers, follow the steps below to remove any VM extensions from the server, disconnect the agent, and uninstall the software from your server. It's important to complete all three steps to fully remove all related software components from your system.
-`azcmagent config set <propertyName> <propertyValue>`
+### Step 1: Remove VM extensions
-To clear a configuration property's value, run the following command:
+If you have deployed Azure VM extensions to an Azure Arc-enabled server, you must uninstall the extensions before disconnecting the agent or uninstalling the software. Uninstalling the Azure Connected Machine agent doesn't automatically remove extensions, and they won't be recognized if you later connect the server to Azure Arc again.
-`azcmagent config clear <propertyName>`
+For guidance on how to identify and remove any extensions on your Azure Arc-enabled server, see the following resources:
-## Remove the agent
+* [Manage VM extensions with the Azure portal](manage-vm-extensions-portal.md#remove-extensions)
+* [Manage VM extensions with Azure PowerShell](manage-vm-extensions-powershell.md#remove-extensions)
+* [Manage VM extensions with Azure CLI](manage-vm-extensions-cli.md#remove-extensions)
-Perform one of the following methods to uninstall the Windows or Linux Connected Machine agent from the machine. Removing the agent does not unregister the machine with Azure Arc-enabled servers or remove the Azure VM extensions installed. For servers or machines you no longer want to manage with Azure Arc-enabled servers, it is necessary to follow these steps to successfully stop managing it:
+### Step 2: Disconnect the server from Azure Arc
-1. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#remove-extensions), using the [Azure CLI](manage-vm-extensions-cli.md#remove-extensions), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-extensions) that you don't want to remain on the machine.
-1. Unregister the machine by running `azcmagent disconnect` to delete the Azure Arc-enabled servers resource in Azure. If that fails, you can delete the resource manually in Azure. Otherwise, if the resource was deleted in Azure, you'll need to run `azcmagent disconnect --force-local-only` on the server to remove the local configuration.
+Disconnecting the agent deletes the corresponding Azure resource for the server and clears the local state of the agent. The recommended way to disconnect the agent is to run the `azcmagent disconnect` command as an administrator on the server. You'll be prompted to log in with an Azure account that has permission to delete the resource in your subscription. If the resource has already been deleted in Azure, you'll need to pass an additional flag to only clean up the local state: `azcmagent disconnect --force-local-only`.
-### Windows agent
+### Step 3a: Uninstall the Windows agent
Both of the following methods remove the agent, but they do not remove the *C:\Program Files\AzureConnectedMachineAgent* folder on the machine.
To uninstall the agent manually from the Command Prompt or to use an automated m
ForEach-Object {MsiExec.exe /x "$($_.PsChildName)" /qn}
```
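The fragment above is the tail of a pipeline; a complete hedged version might look like the following sketch, assuming the agent registers under the display name `Azure Connected Machine Agent` in the standard MSI uninstall registry key:

```powershell
# Sketch: locate the agent's MSI product code in the uninstall registry
# key and remove it silently.
Get-ChildItem -Path HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall |
    Get-ItemProperty |
    Where-Object { $_.DisplayName -eq "Azure Connected Machine Agent" } |
    ForEach-Object { MsiExec.exe /x "$($_.PsChildName)" /qn }
```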
-### Linux agent
+### Step 3b: Uninstall the Linux agent
> [!NOTE]
-> To uninstall the agent, you must have *root* access permissions or with an account that has elevated rights using Sudo.
+> To uninstall the agent, you must have *root* access permissions or use an account that has elevated rights using sudo.
To uninstall the Linux agent, the command to use depends on the Linux operating system.

-- For Ubuntu, run the following command:
+* For Ubuntu, run the following command:
```bash
sudo apt purge azcmagent
```

-- For RHEL, CentOS, and Amazon Linux, run the following command:
+* For RHEL, CentOS, Oracle Linux, and Amazon Linux, run the following command:
```bash
sudo yum remove azcmagent
```

-- For SLES, run the following command:
+* For SLES, run the following command:
```bash
sudo zypper remove azcmagent
```
-## Unregister machine
-
-If you are planning to stop managing the machine with supporting services in Azure, perform the following steps to unregister the machine with Azure Arc-enabled servers. You can perform these steps either before or after you have removed the Connected Machine agent from the machine.
-
-1. Open Azure Arc-enabled servers by going to the [Azure portal](https://aka.ms/hybridmachineportal).
-
-2. Select the machine in the list, select the ellipsis (**...**), and then select **Delete**.
## Update or remove proxy settings

To configure the agent to communicate with the service through a proxy server, or to remove this configuration after deployment, use one of the following methods. The agent communicates outbound using the HTTP protocol under this scenario.
To configure the agent to communicate to the service through a proxy server or r
As of agent version 1.13, proxy settings can be configured using the `azcmagent config` command or system environment variables. If a proxy server is specified in both the agent configuration and system environment variables, the agent configuration will take precedence and become the effective setting. `azcmagent show` returns the effective proxy configuration for the agent.

> [!NOTE]
-> Azure Arc-enabled servers does not support using proxy servers that require authentication, TLS (HTTPS) connections, or a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) as a proxy for the Connected Machine agent.
+> Azure Arc-enabled servers doesn't support using proxy servers that require authentication, TLS (HTTPS) connections, or a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) as a proxy for the Connected Machine agent.
-### Universal proxy configuration
+### Agent-specific proxy configuration
-Universal proxy configuration is available starting with version 1.13 of the Azure Connected Machine agent and is the preferred way of configuring proxy server settings.
+Agent-specific proxy configuration is available starting with version 1.13 of the Azure Connected Machine agent and is the preferred way of configuring proxy server settings. This approach prevents the proxy settings for the Azure Connected Machine agent from interfering with other applications on your system.
+
+> [!NOTE]
+> Extensions deployed by Azure Arc will not inherit the agent-specific proxy configuration.
+> Refer to the documentation for the extensions you deploy for guidance on how to configure proxy settings for each extension.
To configure the agent to communicate through a proxy server, run the following command:
You do not need to restart any services when reconfiguring the proxy settings wi
On Windows, the Azure Connected Machine agent will first check the `proxy.url` agent configuration property (starting with agent version 1.13), then the system-wide `HTTPS_PROXY` environment variable to determine which proxy server to use. If both are empty, no proxy server is used, even if the default Windows system-wide proxy setting is configured.
-Microsoft recommends using the agent configuration property instead of the system environment variable.
+Microsoft recommends using the agent-specific proxy configuration instead of the system environment variable.
To set the proxy server environment variable, run the following commands:
Restart-Service -Name himds, ExtensionService, GCArcService
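Putting those pieces together, one plausible sequence is the following sketch; the `HTTPS_PROXY` variable name comes from the paragraph above, the service names from the restart command, and the URL value is a placeholder:

```powershell
# Set the machine-wide HTTPS_PROXY variable, then restart the agent
# services so they pick up the new environment.
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://ProxyServerFQDN:port", "Machine")
Restart-Service -Name himds, ExtensionService, GCArcService
```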
### Linux environment variables
-On Linux, the Azure Connected Machine agent first checks the `proxy.url` agent configuration property (starting with agent version 1.13), and then the `HTTPS_PROXY` environment variable set for the himds, GC_Ext, and GCArcService daemons. There is an included script that will configure systemd's default proxy settings for the Azure Connected Machine agent and all other services on the machine to use a specified proxy server.
+On Linux, the Azure Connected Machine agent first checks the `proxy.url` agent configuration property (starting with agent version 1.13), and then the `HTTPS_PROXY` environment variable set for the himds, GC_Ext, and GCArcService daemons. There's an included script that will configure systemd's default proxy settings for the Azure Connected Machine agent and all other services on the machine to use a specified proxy server.
To configure the agent to communicate through a proxy server, run the following command:
To remove the environment variable, run the following command:
sudo /opt/azcmagent/bin/azcmagent_proxy remove
```
-### Migrating from environment variables to universal proxy configuration
+### Migrating from environment variables to agent-specific proxy configuration
-If you are already using environment variables to configure the proxy server for the Azure Connected Machine agent and want to migrate to the universal proxy configuration based on local agent settings, follow these steps:
+If you're already using environment variables to configure the proxy server for the Azure Connected Machine agent and want to migrate to the agent-specific proxy configuration based on local agent settings, follow these steps:
1. [Upgrade the Azure Connected Machine agent](#upgrading-agent) to the latest version (starting with version 1.13) to use the new proxy configuration settings
1. Configure the agent with your proxy server information by running `azcmagent config set proxy.url "http://ProxyServerFQDN:port"`
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
After you install and configure the agent to register with Azure Arc-enabled ser
* Learn about our [supported Azure VM extensions](manage-vm-extensions.md) available to simplify deployment with other Azure services like Automation, KeyVault, and others for your Windows or Linux machine.
-* When you have finished testing, see [Remove Azure Arc-enabled servers agent](manage-agent.md#remove-the-agent).
+* When you have finished testing, [uninstall the Azure Arc-enabled servers agent](manage-agent.md#uninstall-the-agent).
azure-arc Scenario Migrate To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/scenario-migrate-to-azure.md
With the Azure CLI, use the [az connectedmachine extension list](/cli/azure/ext/
After identifying which VM extensions are deployed, you can remove them using the [Azure portal](manage-vm-extensions-portal.md), using the [Azure PowerShell](manage-vm-extensions-powershell.md), or using the [Azure CLI](manage-vm-extensions-cli.md). If the Log Analytics VM extension or Dependency agent VM extension was deployed using Azure Policy and the [VM insights initiative](../../azure-monitor/vm/vminsights-enable-policy.md), it is necessary to [create an exclusion](../../governance/policy/tutorials/create-and-manage.md#remove-a-non-compliant-or-denied-resource-from-the-scope-with-an-exclusion) to prevent re-evaluation and deployment of the extensions on the Azure Arc-enabled server before the migration is complete.
-## Step 2: Review access rights
+## Step 2: Review access rights
-List role assignments for the Azure Arc-enabled servers resource, using [Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md#list-role-assignments-for-a-resource) and with other PowerShell code, you can export the results to CSV or another format.
+List role assignments for the Azure Arc-enabled servers resource, using [Azure PowerShell](../../role-based-access-control/role-assignments-list-powershell.md#list-role-assignments-for-a-resource) and with other PowerShell code, you can export the results to CSV or another format.
-If you're using a managed identity for an application or process running on an Azure Arc-enabled server, you need to make sure the Azure VM has a managed identity assigned. To view the role assignment for a managed identity, you can use the Azure PowerShell `Get-AzADServicePrincipal` cmdlet. For more information, see [List role assignments for a managed identity](../../role-based-access-control/role-assignments-list-powershell.md#list-role-assignments-for-a-managed-identity).
+If you're using a managed identity for an application or process running on an Azure Arc-enabled server, you need to make sure the Azure VM has a managed identity assigned. To view the role assignment for a managed identity, you can use the Azure PowerShell `Get-AzADServicePrincipal` cmdlet. For more information, see [List role assignments for a managed identity](../../role-based-access-control/role-assignments-list-powershell.md#list-role-assignments-for-a-managed-identity).
A system-managed identity is also used when Azure Policy is used to audit or configure settings inside a machine or server. With Azure Arc-enabled servers, the guest configuration agent service is included, and performs validation of audit settings. After you migrate, see [Deploy requirements for Azure virtual machines](../../governance/policy/concepts/guest-configuration.md#deploy-requirements-for-azure-virtual-machines) for information on how to configure your Azure VM manually or with policy with the guest configuration extension. Update role assignment with any resources accessed by the managed identity to allow the new Azure VM identity to authenticate to those services. See the following to learn [how managed identities for Azure resources work for an Azure Virtual Machine (VM)](../../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
-## Step 3: Disconnect from Azure Arc and uninstall agent
+## Step 3: Uninstall the Azure Connected Machine agent
-Delete the resource ID of the Azure Arc-enabled server in Azure using one of the following methods:
-
- * Running `azcmagent disconnect` command on the machine or server.
-
- * From the selected registered Azure Arc-enabled server in the Azure portal by selecting **Delete** from the top bar.
-
- * Using the [Azure CLI](../../azure-resource-manager/management/delete-resource-group.md?tabs=azure-cli#delete-resource) or [Azure PowerShell](../../azure-resource-manager/management/delete-resource-group.md?tabs=azure-powershell#delete-resource). For the`ResourceType` parameter use `Microsoft.HybridCompute/machines`.
-
-Then, remove the Azure Arc-enabled servers Windows or Linux agent following the [Remove agent](manage-agent.md#remove-the-agent) steps.
+Follow the guidance to [uninstall the agent](manage-agent.md#uninstall-the-agent) from the server. Double-check that all extensions are removed before disconnecting the agent.
## Step 4: Install the Azure Guest Agent
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md
If you receive an error when configuring the Azure Arc-enabled servers agent, th
| AZCM0064 | The agent service is not responding | Check the status of the `himds` service to ensure it is running. Start the service if it is not running. If it is running, wait a minute then try again. |
| AZCM0065 | An internal agent communication error occurred | Contact Microsoft Support for assistance |
| AZCM0066 | The agent web service is not responding or unavailable | Contact Microsoft Support for assistance |
-| AZCM0067 | The agent is already connected to Azure | Follow the steps in [disconnect the agent](manage-agent.md#unregister-machine) first, then try again. |
+| AZCM0067 | The agent is already connected to Azure | Run `azcmagent disconnect` to remove the current connection, then try again. |
| AZCM0068 | An internal error occurred while disconnecting the server from Azure | Contact Microsoft Support for assistance |
| AZCM0081 | An error occurred while downloading the Azure Active Directory managed identity certificate | If this message is encountered while attempting to connect the server to Azure, the agent won't be able to communicate with the Azure Arc service. Delete the resource in Azure and try connecting again. |
| AZCM0101 | The command was not parsed successfully | Run `azcmagent <command> --help` to review the correct command syntax |
The following table lists some of the known errors and suggestions on how to tro
|Failed to AzcmagentConnect ARM resource |`The subscription is not registered to use namespace 'Microsoft.HybridCompute'` |Azure resource providers are not registered. |Register the [resource providers](./agent-overview.md#register-azure-resource-providers). |
|Failed to AzcmagentConnect ARM resource |`Get https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01?api-version=2019-03-18-preview: Forbidden` |Proxy server or firewall is blocking access to `management.azure.com` endpoint. |Verify connectivity to the endpoint and that it is not blocked by a firewall or proxy server. |
-<a name="footnote1"></a><sup>1</sup>If this GPO is enabled and applies to machines with the Connected Machine agent, it deletes the user profile associated with the built-in account specified for the *himds* service. As a result, it also deletes the authentication certificate used to communicate with the service that is cached in the local certificate store for 30 days. Before the 30-day limit, an attempt is made to renew the certificate. To resolve this issue, follow the steps to [unregister the machine](manage-agent.md#unregister-machine) and then re-register it with the service running `azcmagent connect`.
+<a name="footnote1"></a><sup>1</sup>If this GPO is enabled and applies to machines with the Connected Machine agent, it deletes the user profile associated with the built-in account specified for the *himds* service. As a result, it also deletes the authentication certificate used to communicate with the service that is cached in the local certificate store for 30 days. Before the 30-day limit, an attempt is made to renew the certificate. To resolve this issue, follow the steps to [disconnect the agent](manage-agent.md#disconnect) and then re-register it with the service running `azcmagent connect`.
## Next steps
azure-functions Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/consumption-plan.md
You can also create function apps in a Consumption plan when you publish a Funct
## Multiple apps in the same plan
-Function apps in the same region can be assigned to the same Consumption plan. There's no downside or impact to having multiple apps running in the same Consumption plan. Assigning multiple apps to the same Consumption plan has no impact on resilience, scalability, or reliability of each app.
+The general recommendation is for each function app to have its own Consumption plan. However, if needed, function apps in the same region can be assigned to the same Consumption plan. Keep in mind that there is a [limit to the number of function apps that can run in a Consumption plan](functions-scale.md#service-limits). Function apps in a given plan are all scaled together, so any issues with scaling can affect all apps in the plan.
## Next steps
azure-functions Performance Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/performance-reliability.md
Each function that you create has a memory footprint. While this footprint is us
If you run multiple function apps in a single Premium plan or dedicated (App Service) plan, these apps are all sharing the same resources allocated to the plan. If you have one function app that has a much higher memory requirement than the others, it uses a disproportionate amount of memory resources on each instance to which the app is deployed. Because this could leave less memory available for the other apps on each instance, you might want to run a high-memory-using function app like this in its own separate hosting plan.

> [!NOTE]
-> When using the [Consumption plan](./functions-scale.md), we recommend you always put each app in its own plan, since apps are scaled independently anyway.
+> When using the [Consumption plan](./functions-scale.md), we recommend you always put each app in its own plan, since apps are scaled independently anyway. For more information, see [Multiple apps in the same plan](consumption-plan.md#multiple-apps-in-the-same-plan).
Consider whether you want to group functions with different load profiles. For example, if you have a function that processes many thousands of queue messages, and another that is only called occasionally but has high memory requirements, you might want to deploy them in separate function apps so they get their own sets of resources and they scale independently of each other.
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
The `unit` feature class defines a physical and non-overlapping area that can be
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `unit` feature class defines a physical and non-overlapping area that can be
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `structure` feature class defines a physical and non-overlapping area that c
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `zone` feature class defines a virtual area, like a WiFi zone or emergency a
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
## level
The `level` class feature defines an area of a building at a set elevation. For
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
## facility
The `facility` feature class defines the area of the site, building footprint, a
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
|`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.|
|`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.|
The `verticalPenetration` class feature defines an area that, when used in a set
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `verticalPenetration` class feature defines an area that, when used in a set
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.|
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `opening` class feature defines a traversable boundary between two units, or
| `accessRightToLeft`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from right to left. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.|
| `accessLeftToRight`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from left to right. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.|
| `isEmergency` | boolean | false | If `true`, the opening is navigable only during emergencies. Default value is `false` |
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) y that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `opening` class feature defines a traversable boundary between two units, or
|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.|
|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) y that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `lineElement` is a class feature that defines a line feature in a unit, such
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. |
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
-|`obstructionArea` | [Polygon](/rest/api/maps/wfs/get-feature#featuregeojson)| false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`obstructionArea` | [Polygon](/rest/api/maps/v2/wfs/get-features#featuregeojson)| false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
## areaElement
The `areaElement` is a class feature that defines a polygon feature in a unit, s
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. |
|`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000.|
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
-|`anchorPoint` | [Point](/rest/api/maps/wfs/get-feature#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/wfs/get-feature#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point](/rest/api/maps/v2/wfs/get-features#featuregeojson) | false | [GeoJSON Point geometry](/rest/api/maps/v2/wfs/get-features#featuregeojson) that represents the feature as a point. Can be used to position the label of the feature.|
## category
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Applications can use the Render V2-Get Map Tile API to request tilesets. The til
### Web Feature Service API
-You can use the [Web Feature Service (WFS) API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](http://docs.opengeospatial.org/DRAFTS/17-069r1.html). You can use the WFS API to query features within the dataset itself. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level.
+You can use the [Web Feature Service (WFS) API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](http://docs.opengeospatial.org/DRAFTS/17-069r4.html). You can use the WFS API to query features within the dataset itself. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level.
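For example, a hedged sketch of listing the features in a dataset's `unit` collection; the endpoint shape, `api-version` value, and key-based authentication here are assumptions based on the WFS API reference, and `<datasetId>` and `<subscription-key>` are placeholders:

```powershell
# Sketch: query the WFS API for unit features in a Creator dataset,
# then print each feature's name property.
$uri = "https://us.atlas.microsoft.com/wfs/datasets/<datasetId>/collections/unit/items?api-version=2.0&subscription-key=<subscription-key>"
(Invoke-RestMethod -Uri $uri).features | ForEach-Object { $_.properties.name }
```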
### Alias API
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
After you complete the prerequisites, you should have a simple web application c
To implement dynamic styling, a feature - such as a meeting or conference room - must be referenced by its feature `id`. You use the feature `id` to update the dynamic property or *state* of that feature. To view the features defined in a dataset, you can use one of the following methods:
-* WFS API (Web Feature service). You can use the [WFS API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](http://docs.opengeospatial.org/DRAFTS/17-069r1.html). The WFS API is helpful for querying features within a dataset. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level.
+* WFS API (Web Feature service). You can use the [WFS API](/rest/api/maps/v2/wfs) to query datasets. WFS follows the [Open Geospatial Consortium API Features](http://docs.opengeospatial.org/DRAFTS/17-069r4.html). The WFS API is helpful for querying features within a dataset. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level.
* Implement customized code that a user can use to select features on a map using your web application. We use this option in this article.
azure-maps Tutorial Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-wfs.md
To query the unit collection in your dataset:
## Additional information
-See [WFS](/rest/api/maps/wfs) for information on the Creator Web Feature Service REST API.
+See [WFS](/rest/api/maps/v2/wfs) for information on the Creator Web Feature Service REST API.
## Next steps
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Your migration plan to the Azure Monitor agent should include the following consi
## Gap analysis between agents
-The following tables show gap analyses for the log types that are currently collected by each agent. This will be updated as support for AMA grows towards parity with the Log Analytics agent. For a general comparison of Azure Monitor agents, see [Overview of Azure Monitor agents](../agents/azure-monitor-agent-overview.md).
+The following tables show gap analyses for the **log types** that are currently collected by each agent. This will be updated as support for AMA grows towards parity with the Log Analytics agent. For a general comparison of Azure Monitor agents, see [Overview of Azure Monitor agents](../agents/azure-monitor-agent-overview.md).
> [!IMPORTANT]
The following tables show gap analyses for the log types that are currently coll
| **Custom logs** | No | Yes |
| **IIS logs** | No | Yes |
| **Application and service logs** | Yes | Yes |
-| **DNS logs** | No | Yes |
| **Multi-homing** | Yes | Yes |

### Linux logs
The following tables show gap analyses for the log types that are currently coll
|Log type / Support |Azure Monitor agent support |Log Analytics agent support |
|---|---|---|
| **Syslog** | Yes | Yes |
+| **Performance counters** | Yes | Yes |
| **Custom logs** | No | Yes |
| **Multi-homing** | Yes | No |
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
The following diagram shows how Application Insights instrumentation in an app s
![Diagram that shows Application Insights instrumentation in an app sending telemetry to an Application Insights resource.](./media/app-insights-overview/diagram.png)
+## How to use Application Insights
+
+There are several ways to get started with Application Insights. Begin with whatever works best for you, and you can add others later.
+
+### Prerequisites
+
+- You need an Azure account. Application Insights is hosted in Azure, and sends its telemetry to Azure for analysis and presentation. If you don't have an Azure subscription, you can [sign up for free](https://azure.microsoft.com/free). If your organization already has an Azure subscription, an administrator can [add you to it](../../active-directory/fundamentals/add-users-azure-active-directory.md).
+
+- The basic [Application Insights pricing plan](https://azure.microsoft.com/pricing/details/application-insights/) has no charge until your app has substantial usage.
+
+### Get started
+
+To use Application Insights at run time, you can instrument your web app on the server. This approach is ideal for apps that are already deployed, because it avoids any updates to the app code.
+
+See the following articles for details and instructions:
+
+- [Application monitoring for Azure App Service overview](./azure-web-apps.md)
+- [Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets](./azure-vm-vmss-apps.md)
+- [Deploy Azure Monitor Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md)
+- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md)
+
+You can also add Application Insights to your app code at development time. This approach lets you customize and add to telemetry collection.
+
+See the following articles for details and instructions:
+
+- [Configure Application Insights for your ASP.NET website](./asp-net.md)
+- [Application Insights for ASP.NET Core applications](./asp-net-core.md)
+- [Application Insights for .NET console applications](./console.md)
+- [Application Insights for web pages](./javascript.md)
+- [Monitor your Node.js services and apps with Application Insights](./nodejs.md)
+- [Set up Azure Monitor for your Python application](./opencensus-python.md)
+
+For all supported languages, platforms, and frameworks, see [Supported languages](./platforms.md).
+
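For instance, a minimal sketch of development-time instrumentation in Python, assuming the `opencensus-ext-azure` package from the OpenCensus article above (the connection string is a placeholder):

```python
# Minimal sketch: send a log record to Application Insights via OpenCensus.
# pip install opencensus-ext-azure; the connection string is a placeholder.
import logging
from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"))
logger.warning("Hello from Application Insights instrumentation")
```

Once records like this arrive, they show up as trace telemetry that you can query and alert on.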
+### Monitor
+
+After you set up Application Insights, monitor your app.
+
+- Set up [availability web tests](./monitor-web-app-availability.md).
+- Use the default [application dashboard](./overview-dashboard.md) for your team room, to track load, responsiveness, and performance. Monitor your dependencies, page loads, and AJAX calls.
+- Discover which requests are the slowest and fail most often.
+- Watch [Live Stream](./live-stream.md) when you deploy a new release, to know immediately about any degradation.
+
+### Detect and diagnose
+
+When you receive an alert or discover a problem:
+
+- Assess how many users are affected.
+- Correlate failures with exceptions, dependency calls, and traces.
+- Examine profiler, snapshots, stack dumps, and trace logs.
+
+### Measure, learn, and build
+
+- Plan to measure how customers use new user experience or business features.
+- Write custom telemetry into your code (see the sketch after this list).
+- [Measure the effectiveness](./usage-overview.md) of each new feature that you deploy.
+- Base the next development cycle on evidence from your telemetry.
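For the custom-telemetry bullet above, one hedged sketch using the OpenCensus Azure trace exporter; the span name and connection string are illustrative placeholders:

```python
# Hedged sketch: record a custom operation as a trace span.
# pip install opencensus-ext-azure; all values below are placeholders.
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

tracer = Tracer(
    exporter=AzureExporter(
        connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"),
    sampler=ProbabilitySampler(1.0),
)
with tracer.span(name="new-feature-usage"):
    pass  # the code path whose usage you want to measure
```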
+
## What Application Insights monitors

Application Insights helps development teams understand app performance and usage. Application Insights monitors:
There are many ways to explore Application Insights telemetry. For more informat
Use continuous export to bulk export raw data to storage as soon as it arrives.
-## How to use Application Insights
-
-There are several ways to get started with Application Insights. Begin with whatever works best for you, and you can add others later.
-
-### Prerequisites
-
-- You need an Azure account. Application Insights is hosted in Azure, and sends its telemetry to Azure for analysis and presentation. If you don't have an Azure subscription, you can [sign up for free](https://azure.microsoft.com/free). If your organization already has an Azure subscription, an administrator can [add you to it](../../active-directory/fundamentals/add-users-azure-active-directory.md).
-
-- The basic [Application Insights pricing plan](https://azure.microsoft.com/pricing/details/application-insights/) has no charge until your app has substantial usage.
-
-### Get started
-
-To use Application Insights at run time, you can instrument your web app on the server. This approach is ideal for apps that are already deployed, because it avoids any updates to the app code.
-
-See the following articles for details and instructions:
-
-- [Application monitoring for Azure App Service overview](./azure-web-apps.md)
-- [Deploy the Azure Monitor Application Insights Agent on Azure virtual machines and Azure virtual machine scale sets](./azure-vm-vmss-apps.md)
-- [Deploy Azure Monitor Application Insights Agent for on-premises servers](./status-monitor-v2-overview.md)
-- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md)
-
-You can also add Application Insights to your app code at development time. This approach lets you customize and add to telemetry collection.
-
-See the following articles for details and instructions:
-
-- [Configure Application Insights for your ASP.NET website](./asp-net.md)
-- [Application Insights for ASP.NET Core applications](./asp-net-core.md)
-- [Application Insights for .NET console applications](./console.md)
-- [Application Insights for web pages](./javascript.md)
-- [Monitor your Node.js services and apps with Application Insights](./nodejs.md)
-- [Set up Azure Monitor for your Python application](./opencensus-python.md)
+## Next steps
-For all supported languages, platforms, and frameworks, see [Supported languages](./platforms.md).
+- [Manage usage and costs for Application Insights](pricing.md#manage-usage-and-costs-for-application-insights)
+- [Instrument your web pages](./javascript.md) for page view, AJAX, and other client-side telemetry.
+- [Analyze mobile app usage](../app/mobile-center-quickstart.md) by integrating with Visual Studio App Center.
+- [Monitor availability with URL ping tests](./monitor-web-app-availability.md) to your website from Application Insights servers.
-### Monitor
+## Troubleshooting
-After you set up Application Insights, monitor your app.
+### FAQ
-- Set up [availability web tests](./monitor-web-app-availability.md).
-- Use the default [application dashboard](./overview-dashboard.md) for your team room, to track load, responsiveness, and performance. Monitor your dependencies, page loads, and AJAX calls.
-- Discover which requests are the slowest and fail most often.
-- Watch [Live Stream](./live-stream.md) when you deploy a new release, to know immediately about any degradation.
+Review [frequently asked questions](../faq.yml).
+### Microsoft Q&A questions forum
-### Detect and diagnose
+Post questions to the Microsoft Q&A [answers forum](https://docs.microsoft.com/answers/topics/24223/azure-monitor.html).
-When you receive an alert or discover a problem:
+### Stack Overflow
-- Assess how many users are affected.
-- Correlate failures with exceptions, dependency calls, and traces.
-- Examine profiler, snapshots, stack dumps, and trace logs.
-
-### Measure, learn, and build
-
-- Plan to measure how customers use new user experience or business features.
-- Write custom telemetry into your code.
-- [Measure the effectiveness](./usage-overview.md) of each new feature that you deploy.
-- Base the next development cycle on evidence from your telemetry.
+Post coding questions to [Stack Overflow](https://stackoverflow.com) using an Application Insights tag.
-## Next steps
+### User Voice
-
-- [Instrument your web pages](./javascript.md) for page view, AJAX, and other client-side telemetry.
-- [Analyze mobile app usage](../app/mobile-center-quickstart.md) by integrating with Visual Studio App Center.
-- [Monitor availability with URL ping tests](./monitor-web-app-availability.md) to your website from Application Insights servers.
+Leave product feedback for the engineering team on [UserVoice](https://feedback.azure.com/d365community/forum/8849e04d-1325-ec11-b6e6-000d3a4f09d0).
<!-- ## Support and feedback * Questions and Issues:
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
Last updated 01/27/2020
> [!TIP]
> You can use Azure [network service tags](../../virtual-network/service-tags-overview.md) to manage access if you are using Azure Network Security Groups. If you are managing access for hybrid/on-premises resources, you can download the equivalent IP address lists as [JSON files](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files), which are updated each week. To cover all the exceptions in this article, you would need to use the service tags: `ActionGroup`, `ApplicationInsightsAvailability`, and `AzureMonitor`.
-Alternatively, you can subscribe to this page as a RSS feed by adding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-addresses.md.atom to your favorite RSS/ATOM reader to get notified of the latest changes.
+Alternatively, you can subscribe to this page as an RSS feed by adding https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-monitor/app/ip-addresses.md to your favorite RSS/ATOM reader to get notified of the latest changes.
## Outgoing ports
Download [China Cloud IP addresses](https://www.microsoft.com/download/details.a
```
Australia East
20.40.124.176/28
-20.40.124.240/28
-20.40.125.80/28
+
Brazil South
191.233.26.176/28
-191.233.26.128/28
-191.233.26.64/28
+
France Central (Formerly France South)
20.40.129.96/28
-20.40.129.112/28
-20.40.129.128/28
-20.40.129.144/28
+
France Central
20.40.129.32/28
-20.40.129.48/28
-20.40.129.64/28
-20.40.129.80/28
+
East Asia
52.229.216.48/28
-52.229.216.64/28
-52.229.216.80/28
+
North Europe
52.158.28.64/28
-52.158.28.80/28
-52.158.28.96/28
-52.158.28.112/28
+
Japan East
52.140.232.160/28
-52.140.232.176/28
-52.140.232.192/28
+
West Europe
51.144.56.96/28
-51.144.56.112/28
-51.144.56.128/28
-51.144.56.144/28
-51.144.56.160/28
-51.144.56.176/28
+
UK South
51.105.9.128/28
-51.105.9.144/28
-51.105.9.160/28
+
UK West
20.40.104.96/28
-20.40.104.112/28
-20.40.104.128/28
-20.40.104.144/28
+
Southeast Asia
52.139.250.96/28
-52.139.250.112/28
-52.139.250.128/28
-52.139.250.144/28
+
West US
40.91.82.48/28
-40.91.82.64/28
-40.91.82.80/28
-40.91.82.96/28
-40.91.82.112/28
-40.91.82.128/28
+
Central US
13.86.97.224/28
-13.86.97.240/28
-13.86.98.48/28
-13.86.98.0/28
-13.86.98.16/28
-13.86.98.64/28
+
North Central US
23.100.224.16/28
-23.100.224.32/28
-23.100.224.48/28
-23.100.224.64/28
-23.100.224.80/28
-23.100.224.96/28
-23.100.224.112/28
-23.100.225.0/28
+
South Central US
20.45.5.160/28
-20.45.5.176/28
-20.45.5.192/28
-20.45.5.208/28
-20.45.5.224/28
-20.45.5.240/28
East US
20.42.35.32/28
-20.42.35.64/28
-20.42.35.80/28
-20.42.35.96/28
-20.42.35.112/28
-20.42.35.128/28
+
```
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
To configure this option, under `exclude`, specify the `matchType` one or more `
| `GC Total Time` | custom metrics | Sum of time across all GC MXBeans (diff since last reported). See [GarbageCollectorMXBean.getCollectionTime()](https://docs.oracle.com/javase/7/docs/api/java/lang/management/GarbageCollectorMXBean.html).| yes |
| `Heap Memory Used (MB)` | custom metrics | See [MemoryMXBean.getHeapMemoryUsage().getUsed()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getHeapMemoryUsage--). | yes |
| `% Of Max Heap Memory Used` | custom metrics | java.lang:type=Memory / maximum amount of memory in bytes. See [MemoryUsage](https://docs.oracle.com/javase/7/docs/api/java/lang/management/MemoryUsage.html)| yes |
-| `\Processor(_Total)\% Processor Time` | default metrics | Difference in [system wide CPU load tick counters](https://oshi.github.io/oshi/apidocs/oshi/hardware/CentralProcessor.html#getProcessorCpuLoadTicks())(Only User and System) divided by the number of [logical processors count](https://oshi.github.io/oshi/apidocs/oshi/hardware/CentralProcessor.html#getLogicalProcessors()) in a given interval of time | no |
+| `\Processor(_Total)\% Processor Time` | default metrics | Difference in [system wide CPU load tick counters](https://oshi.github.io/oshi/oshi-core/apidocs/oshi/hardware/CentralProcessor.html#getProcessorCpuLoadTicks()) (Only User and System) divided by the number of [logical processors](https://oshi.github.io/oshi/oshi-core/apidocs/oshi/hardware/CentralProcessor.html#getLogicalProcessors()) in a given interval of time | no |
| `\Process(??APP_WIN32_PROC??)\% Processor Time` | default metrics | See [OperatingSystemMXBean.getProcessCpuTime()](https://docs.oracle.com/javase/8/docs/jre/api/management/extension/com/sun/management/OperatingSystemMXBean.html#getProcessCpuTime--) (diff since last reported, normalized by time and number of CPUs). | no |
| `\Process(??APP_WIN32_PROC??)\Private Bytes` | default metrics | Sum of [MemoryMXBean.getHeapMemoryUsage()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getHeapMemoryUsage--) and [MemoryMXBean.getNonHeapMemoryUsage()](https://docs.oracle.com/javase/8/docs/api/java/lang/management/MemoryMXBean.html#getNonHeapMemoryUsage--). | no |
| `\Process(??APP_WIN32_PROC??)\IO Data Bytes/sec` | default metrics | `/proc/[pid]/io` Sum of bytes read and written by the process (diff since last reported). See [proc(5)](https://man7.org/linux/man-pages/man5/proc.5.html). | no |
| `\Memory\Available Bytes` | default metrics | See [OperatingSystemMXBean.getFreePhysicalMemorySize()](https://docs.oracle.com/javase/7/docs/jre/api/management/extension/com/sun/management/OperatingSystemMXBean.html#getFreePhysicalMemorySize()). | no |
-
azure-monitor Snapshot Debugger Appservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-appservice.md
For an Azure App Service, you can set app settings within the Azure Resource Man
},
```
+## Unsupported scenarios
+The following scenarios aren't supported by Snapshot Collector:
+
+|Scenario | Side effects | Recommendation |
+|---|---|---|
+|Using the Snapshot Collector SDK in your application directly (.csproj) with the advanced "Interop" option enabled. | The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, so no snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor`.<br /><br />For more information about the Application Insights "Interop" feature, see the [documentation](https://docs.microsoft.com/azure/azure-monitor/app/azure-web-apps-net-core?#troubleshooting). | If you're using the advanced "Interop" option, use the codeless Snapshot Collector injection (enabled through the Azure portal UX). |
+
## Next steps

- Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-troubleshoot.md
If you enabled Application Insights Snapshot Debugger for your application, but
There can be many different reasons why snapshots aren't generated. You can start by running the snapshot health check to identify some of the possible common causes.
+## Unsupported scenarios
+The following scenarios aren't supported by Snapshot Collector:
+
+|Scenario | Side effects | Recommendation |
+|---|---|---|
+|Using the Snapshot Collector SDK in your application directly (.csproj) with the advanced "Interop" option enabled. | The local Application Insights SDK (including Snapshot Collector telemetry) will be lost, so no snapshots will be available.<br /><br />Your application could crash at startup with `System.ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor`.<br /><br />For more information about the Application Insights "Interop" feature, see the [documentation](https://docs.microsoft.com/azure/azure-monitor/app/azure-web-apps-net-core?#troubleshooting). | If you're using the advanced "Interop" option, use the codeless Snapshot Collector injection (enabled through the Azure portal UX). |
+
## Make sure you're using the appropriate Snapshot Debugger Endpoint

Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
If you still don't see an exception with that snapshot ID, then the exception re
If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Snapshot Debugger service.
-The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
+The IPs used by Application Insights Snapshot Debugger are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
# Azure Activity log
-The Activity log is a [platform log](./platform-logs-overview.md) in Azure that provides insight into subscription-level events. Activity log includes such information as when a resource is modified or when a virtual machine is started. You can view the Activity sign in the Azure portal or retrieve entries with PowerShell and CLI. This article provides details on viewing the Activity log and sending it to different destinations.
+The Activity log is a [platform log](./platform-logs-overview.md) in Azure that provides insight into subscription-level events. Activity log includes such information as when a resource is modified or when a virtual machine is started. You can view the Activity log in the Azure portal or retrieve entries with PowerShell and CLI. This article provides details on viewing the Activity log and sending it to different destinations.
For more functionality, you should create a diagnostic setting to send the Activity log to one or more of these locations for the following reasons:

- to [Azure Monitor Logs](../logs/data-platform-logs.md) for more complex querying and alerting, and longer retention (up to 2 years)
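For illustration, entries can also be retrieved programmatically. Here's a hedged sketch with the Python SDK (`azure-mgmt-monitor`), alongside the PowerShell and CLI options mentioned above; the subscription ID and timestamps are placeholders:

```python
# Hedged sketch: list Activity log events for a one-day window.
# pip install azure-mgmt-monitor azure-identity; values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")
odata_filter = (
    "eventTimestamp ge '2022-02-17T00:00:00Z' and "
    "eventTimestamp le '2022-02-18T00:00:00Z'"
)
for event in client.activity_logs.list(filter=odata_filter):
    print(event.event_timestamp, event.operation_name.localized_value)
```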
Select the **Azure Activity Logs** tile to open the **Azure Activity Logs** view
## Next steps

* [Read an overview of platform logs](./platform-logs-overview.md)
* [Review Activity log event schema](activity-log-schema.md)
-* [Create diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
+* [Create diagnostic setting to send Activity logs to other destinations](./diagnostic-settings.md)
azure-monitor Log Standard Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-standard-columns.md
description: Describes columns that are common to multiple data types in Azure M
Previously updated : 08/16/2021 Last updated : 02/18/2022
union withsource = tt *
```

## \_BilledSize
-The **\_BilledSize** column specifies the size in bytes of data that will be billed to your Azure account if **\_IsBillable** is true.
+The **\_BilledSize** column specifies the size in bytes of data that will be billed to your Azure account if **\_IsBillable** is true. [Learn more](manage-cost-storage.md#data-size) about the details of how the billed size is calculated.
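As an illustration, here's a hedged sketch that sums `_BilledSize` per table using the `azure-monitor-query` Python package; the workspace ID is a placeholder, and the query mirrors the `union withsource` pattern shown above:

```python
# Hedged sketch: billed ingestion volume per table over the last day.
# pip install azure-monitor-query azure-identity; workspace ID is a placeholder.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-guid>",
    query=(
        "union withsource = tt * "
        "| where _IsBillable == true "
        "| summarize BilledGB = sum(_BilledSize) / 1e9 by tt"
    ),
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```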
### Examples
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Title: Log Analytics workspace data export in Azure Monitor (preview)
+ Title: Log Analytics workspace data export in Azure Monitor
description: Log Analytics workspace data export in Azure Monitor lets you continuously export data per selected tables in your workspace, to an Azure Storage Account or Azure Event Hubs as it's collected.
Last updated 02/09/2022
-# Log Analytics workspace data export in Azure Monitor (preview)
+# Log Analytics workspace data export in Azure Monitor
Data export in Log Analytics workspace lets you continuously export data per selected tables in your workspace, to an Azure Storage Account or Azure Event Hubs as it's collected. This article provides details on this feature and steps to configure data export in your workspaces.

## Overview
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 02/17/2022 Last updated : 02/18/2022
Billing for the commitment tiers is done on a daily basis. [Learn more](https://
> [!NOTE]
> Starting June 2, 2021, **Capacity Reservations** are now called **Commitment Tiers**. Data collected above your commitment tier level (overage) is now billed at the same price-per-GB as the current commitment tier level, lowering costs compared to the old method of billing at the Pay-As-You-Go rate, and reducing the need for users with large data volumes to fine-tune their commitment level. Three new commitment tiers were also added: 1000, 2000, and 5000 GB/day.
+### Data size calculation
+ <a name="data-size"></a> <a name="free-data-types"></a>
-In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. Some solutions have more solution-specific policies about free data ingestion, for instance [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below]Fdata-volume-for-specific-events). Usage is reported in GB (10^9 bytes).
+In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. Some solutions have more solution-specific policies about free data ingestion, for instance [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (10^9 bytes).
Also, some solutions, such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-troubleshoot.md
Check the [capacity reservations and the pricing for data ingestion](https://azu
Open Log Analytics from **Logs** in the Azure Monitor menu in the Azure portal. Run the following query for your computer:
-```kuso
+```kusto
Heartbeat
| where Computer == "my-computer"
| sort by TimeGenerated desc
azure-percept How To Update Over The Air https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-update-over-the-air.md
Follow this guide to learn how to update the OS and firmware of the carrier boar
- [Azure subscription](https://azure.microsoft.com/free/)
- [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your dev kit to a Wi-Fi network, created an IoT Hub, and connected your dev kit to the IoT Hub
- [Device Update for IoT Hub has been successfully configured](./how-to-set-up-over-the-air-updates.md)
+- Make sure you are using Device Update for IoT Hub with its **old version** (public preview) UX. When you navigate to "device management - updates" in your IoT Hub, click the **"switch to the older version"** link in the banner.
+
+ :::image type="content" source="media/how-to-update-over-the-air/switch-banner.png" alt-text="Screenshot of banner." lightbox="media/how-to-update-over-the-air/switch-banner.png":::
+ > [!CAUTION]
+ > Because Device Update for IoT Hub has launched the public preview refresh, the new UX is only compatible with edge devices that run the newer client agent. The devkit currently uses an older version of the client agent, so you need to use the old Device Update UX. **Otherwise you will encounter issues when importing updates or grouping devices for deploying updates.**
+
## Import your update file and manifest file
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 02/04/2022 Last updated : 02/18/2022 # Tag support for Azure resources
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 02/04/2022 Last updated : 02/18/2022 # Deletion of Azure resources for complete mode deployments
Jump to a resource provider namespace:
> | hsmPools | Yes |
> | managedHSMs | Yes |
> | vaults | Yes |
-> | vaults / accessPolicies | No |
+> | vaults / accessPolicies | Yes |
> | vaults / eventGridFilters | No |
> | vaults / keys | No |
> | vaults / keys / versions | No |
azure-signalr Howto Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints.md
If you're using [serverless mode](concept-service-mode.md#serverless-mode) in Azure SignalR Service, you might have outbound traffic to upstream. Upstream such as Azure Web App and Azure Functions, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. You can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) to reach these endpoints.
- :::image type="content" alt-text="Shared private endpoint overview." source="media\howto-shared-private-endpoints\shared-private-endpoint-overview.png" :::
+ :::image type="content" alt-text="Diagram showing architecture of shared private endpoint." source="media\howto-shared-private-endpoints\shared-private-endpoint-overview.png" :::
This outbound method is subject to the following requirements:
This outbound method is subject to the following requirements:
+ The Azure Web App or Azure Function must be on certain SKUs. See [Use Private Endpoints for Azure Web App](../app-service/networking/private-endpoint.md).
-## Shared Private Link Resources Management APIs
+## Shared Private Link Resources Management
-Private endpoints of secured resources that are created through Azure SignalR Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Function, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside Azure SignalR Service execution environment and are not directly visible to you.
-
-At this moment, you can use Management REST API to create or delete *shared private link resources*. In the remainder of this article, we will use [Azure CLI](/cli/azure/) to demonstrate the REST API calls.
+Private endpoints of secured resources that are created through Azure SignalR Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Function, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside Azure SignalR Service execution environment and aren't directly visible to you.
> [!NOTE]
> The examples in this article are based on the following assumptions:
> * The resource ID of this Azure SignalR Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
> * The resource ID of upstream Azure Function is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func_.
-The rest of the examples show how the _contoso-signalr_ service can be configured so that its upstream calls to function go through a private endpoint rather than public network.
+The rest of the examples show how the *contoso-signalr* service can be configured so that its upstream calls to function go through a private endpoint rather than public network.
### Step 1: Create a shared private link resource to the function
+#### [Azure portal](#tab/azure-portal)
+
+1. In the Azure portal, go to your Azure SignalR Service resource.
1. In the menu pane, select **Networking**. Switch to the **Private access** tab.
+1. Click **Add shared private endpoint**.
+
+ :::image type="content" alt-text="Screenshot of shared private endpoints management." source="media\howto-shared-private-endpoints\portal-shared-private-endpoints-management.png" lightbox="media\howto-shared-private-endpoints\portal-shared-private-endpoints-management.png" :::
+
+1. Fill in a name for the shared private endpoint.
1. Select the target linked resource either by selecting from your owned resources or by filling in a resource ID.
+1. Click **Add**.
+
+ :::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-shared-private-endpoints\portal-shared-private-endpoints-add.png" :::
+
1. The shared private endpoint resource will be in the **Succeeded** provisioning state. The connection state is **Pending** approval at the target resource side.
+
+ :::image type="content" alt-text="Screenshot of an added shared private endpoint." source="media\howto-shared-private-endpoints\portal-shared-private-endpoints-added.png" lightbox="media\howto-shared-private-endpoints\portal-shared-private-endpoints-added.png" :::
+
+#### [Azure CLI](#tab/azure-cli)
You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:

```dotnetcli
The process of creating an outbound private endpoint is a long-running (asynchro
You can poll this URI periodically to obtain the status of the operation.
-If you are using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
+If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
-```donetcli
+```dotnetcli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview ``` Wait until the status changes to "Succeeded" before proceeding to the next steps.
-### Step 2a: Approve the private endpoint connection for the function
+--
-> [!NOTE]
-> In this section, you use the Azure portal to walk through the approval flow for a private endpoint to Azure Function. Alternately, you could use the [REST API](/rest/api/appservice/web-apps/approve-or-reject-private-endpoint-connection) that's available via the App Service provider.
+### Step 2a: Approve the private endpoint connection for the function
> [!IMPORTANT]
> After you approve the private endpoint connection, the Function is no longer accessible from the public network. You may need to create other private endpoints in your own virtual network to access the Function endpoint.
+#### [Azure portal](#tab/azure-portal)
1. In the Azure portal, select the **Networking** tab of your Function App and navigate to **Private endpoint connections**. Click **Configure your private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.

   :::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-shared-private-endpoints\portal-function-approve-private-endpoint.png" :::
Wait until the status changes to "Succeeded" before proceeding to the next steps
:::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-shared-private-endpoints\portal-function-approved-private-endpoint.png" :::
+#### [Azure CLI](#tab/azure-cli)
+
+1. List private endpoint connections.
+
+ ```dotnetcli
+ az network private-endpoint-connection list -n <function-resource-name> -g <function-resource-group-name> --type 'Microsoft.Web/sites'
+ ```
+
+ There should be a pending private endpoint connection. Note down its ID.
+
+ ```json
+ [
+ {
+ "id": "<id>",
+ "location": "",
+ "name": "",
+ "properties": {
+ "privateLinkServiceConnectionState": {
+ "actionRequired": "None",
+ "description": "Please approve",
+ "status": "Pending"
+ }
+ }
+ }
+ ]
+ ```
+
+1. Approve the private endpoint connection.
+
+ ```dotnetcli
+ az network private-endpoint-connection approve --id <private-endpoint-connection-id>
+ ```
+
+--
+ ### Step 2b: Query the status of the shared private link resource
-It takes minutes for the approval to be propagated to Azure SignalR Service. To confirm that the shared private link resource has been updated after approval, you can also obtain the "Connection state" by using the GET API.
+It takes minutes for the approval to be propagated to Azure SignalR Service. You can check the state using either Azure portal or Azure CLI.
+
+#### [Azure portal](#tab/azure-portal)
+
+ :::image type="content" alt-text="Screenshot of an approved shared private endpoint." source="media\howto-shared-private-endpoints\portal-shared-private-endpoints-approved.png" lightbox="media\howto-shared-private-endpoints\portal-shared-private-endpoints-approved.png" :::
+
+#### [Azure CLI](#tab/azure-cli)
```dotnetcli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview
```
This would return a JSON, where the connection state would show up as "status" u
If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure SignalR Service can communicate over the private endpoint.
+--
+
+At this point, the private endpoint between Azure SignalR Service and Azure Function is established.
+
### Step 3: Verify upstream calls are from a private IP

Once the private endpoint is set up, you can verify incoming calls are from a private IP by checking the `X-Forwarded-For` header at the upstream side.
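As a sketch of that check, here's a hypothetical HTTP-triggered Azure Function in Python; only the header handling is the point being illustrated:

```python
# Hedged sketch: log the caller address seen by the upstream Function.
# With the shared private endpoint approved, this should be a private IP.
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    caller = req.headers.get("x-forwarded-for", "unknown")
    logging.info("Upstream call received from %s", caller)
    return func.HttpResponse(f"Caller: {caller}")
```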
Once the private endpoint is set up, you can verify incoming calls are from a pr
Learn more about private endpoints:
-+ [What are private endpoints?](../private-link/private-endpoint-overview.md)
++ [What are private endpoints?](../private-link/private-endpoint-overview.md)
azure-sql Accelerated Database Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/accelerated-database-recovery.md
ms.devlang:
- Previously updated : 05/19/2020+ Last updated : 02/18/2022

# Accelerated Database Recovery in Azure SQL
[!INCLUDE[appliesto-sqldb-sqlmi](includes/appliesto-sqldb-sqlmi.md)]

**Accelerated Database Recovery (ADR)** is a SQL Server database engine feature that greatly improves database availability, especially in the presence of long running transactions, by redesigning the SQL Server database engine recovery process.
-ADR is currently available for Azure SQL Database, Azure SQL Managed Instance, databases in Azure Synapse Analytics, and SQL Server on Azure VMs starting with SQL Server 2019.
+ADR is currently available for Azure SQL Database, Azure SQL Managed Instance, databases in Azure Synapse Analytics, and SQL Server on Azure VMs starting with SQL Server 2019. For information on ADR in SQL Server, see [Manage accelerated database recovery](/sql/relational-databases/accelerated-database-recovery-management).
> [!NOTE]
-> ADR is enabled by default in Azure SQL Database and Azure SQL Managed Instance and disabling ADR for either product is not supported.
+> ADR is enabled by default in Azure SQL Database and Azure SQL Managed Instance. Disabling ADR in Azure SQL Database and Azure SQL Managed Instance is not supported.
## Overview
The ADR recovery process has the same three phases as the current recovery proce
- **Analysis phase**
- The process remains the same as before with the addition of reconstructing sLog and copying log records for non-versioned operations.
+ The process remains the same as before with the addition of reconstructing SLOG and copying log records for non-versioned operations.
- **Redo** phase

  Broken into two phases (P)

  - Phase 1
- Redo from sLog (oldest uncommitted transaction up to last checkpoint). Redo is a fast operation as it only needs to process a few records from the sLog.
+ Redo from SLOG (oldest uncommitted transaction up to last checkpoint). Redo is a fast operation as it only needs to process a few records from the SLOG.
- Phase 2
The ADR recovery process has the same three phases as the current recovery proce
- **Undo phase**
- The Undo phase with ADR completes almost instantaneously by using sLog to undo non-versioned operations and Persisted Version Store (PVS) with Logical Revert to perform row level version-based Undo.
+ The Undo phase with ADR completes almost instantaneously by using SLOG to undo non-versioned operations and Persisted Version Store (PVS) with Logical Revert to perform row level version-based Undo.
## ADR recovery components
The four key components of ADR are:
- Performing rollback by using PVS for all user transactions, rather than physically scanning the transaction log and undoing changes one at a time.
- Releasing all locks immediately after transaction abort. Since abort involves simply marking changes in memory, the process is very efficient and therefore locks do not have to be held for a long time.

-- **sLog**
+- **SLOG**
- sLog is a secondary in-memory log stream that stores log records for non-versioned operations (such as metadata cache invalidation, lock acquisitions, and so on). The sLog is:
+ SLOG is a secondary in-memory log stream that stores log records for non-versioned operations (such as metadata cache invalidation, lock acquisitions, and so on). The SLOG is:
- Low volume and in-memory
- Persisted on disk by being serialized during the checkpoint process
The four key components of ADR are:
The cleaner is the asynchronous process that wakes up periodically and cleans page versions that are not needed.
-## Accelerated Database Recovery Patterns
+## Accelerated Database Recovery (ADR) patterns
The following types of workloads benefit most from ADR:

-- Workloads with long-running transactions.
-- Workloads that have seen cases where active transactions are causing the transaction log to grow significantly.
-- Workloads that have experienced long periods of database unavailability due to long running recovery (such as unexpected service restart or manual transaction rollback).
+- ADR is recommended for workloads with long running transactions.
+- ADR is recommended for workloads that have seen cases where active transactions are causing the transaction log to grow significantly.
+- ADR is recommended for workloads that have experienced long periods of database unavailability due to long running recovery (such as unexpected service restart or manual transaction rollback).
+
+## Best practices for Accelerated Database Recovery
+
+- Avoid long-running transactions in the database. Although one objective of ADR is to speed up database recovery when long active transactions must be redone, long-running transactions can delay version cleanup and increase the size of the PVS.
+
+- Avoid large transactions with data definition changes or DDL operations. ADR uses a SLOG (system log stream) mechanism to track DDL operations used in recovery. The SLOG is only used while the transaction is active. The SLOG is checkpointed, so avoiding large transactions that use the SLOG can help overall performance. These scenarios can cause the SLOG to take up more space:
+
+ - Many DDL statements are executed in one transaction. For example, rapidly creating and dropping temp tables within a single transaction.
+
+ - A table has a very large number of partitions or indexes that are modified. For example, a DROP TABLE operation on such a table would require a large reservation of SLOG memory, which would delay truncation of the transaction log and delay undo/redo operations. A workaround is to drop the indexes individually and gradually, then drop the table. For more information on the SLOG, see [ADR recovery components](/sql/relational-databases/accelerated-database-recovery-concepts#adr-recovery-components).
+
+- Prevent or reduce unnecessary aborted transactions. A high abort rate puts pressure on the PVS cleaner and lowers ADR performance. The aborts may come from a high rate of deadlocks, duplicate keys, or other constraint violations.
+
+ - The `sys.dm_tran_aborted_transactions` DMV shows all aborted transactions on the SQL Server instance. The `nested_abort` column indicates that the transaction committed, but there are portions that aborted (savepoints or nested transactions) that can block the PVS cleanup process (see the sketch after this list). For more information, see [sys.dm_tran_aborted_transactions (Transact-SQL)](/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-aborted-transactions).
+
+ - To activate the PVS cleanup process manually between workloads or during maintenance windows, use `sys.sp_persistent_version_cleanup`. For more information, see [sys.sp_persistent_version_cleanup](/sql/relational-databases/system-stored-procedures/sys-sp-persistent-version-cleanup-transact-sql).
+
+- If you observe issues with storage usage, a high rate of aborted transactions, or other factors, see [Troubleshooting Accelerated Database Recovery (ADR) on SQL Server](/sql/relational-databases/accelerated-database-recovery-troubleshooting).
+
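A hedged sketch of the inspection and manual cleanup described in the list above, using `pyodbc`; the server, database, and credentials are placeholders, and the cleanup call should only run in a maintenance window:

```python
# Hedged sketch: inspect aborted transactions, then trigger PVS cleanup.
# pip install pyodbc; connection values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<server>.database.windows.net;Database=<database>;"
    "UID=<user>;PWD=<password>;Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute(
    "SELECT transaction_id, nested_abort FROM sys.dm_tran_aborted_transactions"
)
for transaction_id, nested_abort in cursor.fetchall():
    print(transaction_id, nested_abort)

# Activate the PVS cleanup process manually (maintenance windows only).
cursor.execute("EXEC sys.sp_persistent_version_cleanup")
```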
+## Next steps
+
+- [Accelerated database recovery](/sql/relational-databases/accelerated-database-recovery-concepts)
+- [Troubleshooting Accelerated Database Recovery (ADR) on SQL Server](/sql/relational-databases/accelerated-database-recovery-troubleshooting)
azure-sql Elastic Pool Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/elastic-pool-overview.md
Title: Manage multiple databases with elastic pools
-description: Manage and scale multiple databases in Azure SQL Database - hundreds and thousands - using elastic pools. One price for resources you can distribute where needed.
+description: Manage and scale multiple databases in Azure SQL Database, as many as hundreds or thousands, by using elastic pools. For one price, you can distribute resources where they're needed.
Last updated 06/23/2021
# Elastic pools help you manage and scale multiple databases in Azure SQL Database
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-Azure SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single server and share a set number of resources at a set price. Elastic pools in Azure SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
+Azure SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single server and share a set number of resources at a set price. Elastic pools in SQL Database enable software as a service (SaaS) developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
-## What are SQL elastic pools
+## What are SQL elastic pools?
-SaaS developers build applications on top of large scale data-tiers consisting of multiple databases. A common application pattern is to provision a single database for each customer. But different customers often have varying and unpredictable usage patterns, and it's difficult to predict the resource requirements of each individual database user. Traditionally, you had two options:
+SaaS developers build applications on top of large-scale data tiers that consist of multiple databases. A common application pattern is to provision a single database for each customer. But different customers often have varying and unpredictable usage patterns, and it's difficult to predict the resource requirements of each individual database user. Traditionally, you had two options:
-- Over-provision resources based on peak usage and over pay, or-- Under-provision to save cost, at the expense of performance and customer satisfaction during peaks.
+- Overprovision resources based on peak usage and overpay.
+- Underprovision to save cost, at the expense of performance and customer satisfaction during peaks.
-Elastic pools solve this problem by ensuring that databases get the performance resources they need when they need it. They provide a simple resource allocation mechanism within a predictable budget. To learn more about design patterns for SaaS applications using elastic pools, see [Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database](saas-tenancy-app-design-patterns.md).
+Elastic pools solve this problem by ensuring that databases get the performance resources they need when they need it. They provide a simple resource allocation mechanism within a predictable budget. To learn more about design patterns for SaaS applications by using elastic pools, see [Design patterns for multitenant SaaS applications with SQL Database](saas-tenancy-app-design-patterns.md).
>
> [!IMPORTANT]
-> There is no per-database charge for elastic pools. You are billed for each hour a pool exists at the highest eDTU or vCores, regardless of usage or whether the pool was active for less than an hour.
+> There's no per-database charge for elastic pools. You're billed for each hour a pool exists at the highest eDTU or vCores, regardless of usage or whether the pool was active for less than an hour.
-Elastic pools enable the developer to purchase resources for a pool shared by multiple databases to accommodate unpredictable periods of usage by individual databases. You can configure resources for the pool based either on the [DTU-based purchasing model](service-tiers-dtu.md) or the [vCore-based purchasing model](service-tiers-vcore.md). The resource requirement for a pool is determined by the aggregate utilization of its databases. The amount of resources available to the pool is controlled by the developer budget. The developer simply adds databases to the pool, optionally sets the minimum and maximum resources for the databases (either minimum and maximum DTUs or minimum or maximum vCores depending on your choice of resourcing model), and then sets the resources of the pool based on their budget. A developer can use pools to seamlessly grow their service from a lean startup to a mature business at ever-increasing scale.
+Elastic pools enable you to purchase resources for a pool shared by multiple databases to accommodate unpredictable periods of usage by individual databases. You can configure resources for the pool based either on the [DTU-based purchasing model](service-tiers-dtu.md) or the [vCore-based purchasing model](service-tiers-vcore.md). The resource requirement for a pool is determined by the aggregate utilization of its databases.
-Within the pool, individual databases are given the flexibility to auto-scale within set parameters. Under heavy load, a database can consume more resources to meet demand. Databases under light loads consume less, and databases under no load consume no resources. Provisioning resources for the entire pool rather than for single databases simplifies your management tasks. Plus, you have a predictable budget for the pool. Additional resources can be added to an existing pool with minimum downtime. Similarly, if extra resources are no longer needed they can be removed from an existing pool at any point in time. And you can add or remove databases from the pool. If a database is predictably under-utilizing resources, move it out.
+The amount of resources available to the pool is controlled by your budget. All you have to do is:
+
+- Add databases to the pool.
+- Optionally set the minimum and maximum resources for the databases. These resources are either minimum and maximum DTUs or minimum and maximum vCores, depending on your choice of resourcing model.
+- Set the resources of the pool based on your budget.
+
+You can use pools to seamlessly grow your service from a lean startup to a mature business at ever-increasing scale.
+
+Within the pool, individual databases are given the flexibility to use resources within set parameters. Under heavy load, a database can consume more resources to meet demand. Databases under light loads consume less, and databases under no load consume no resources. Provisioning resources for the entire pool rather than for single databases simplifies your management tasks. Plus, you have a predictable budget for the pool.
+
+ More resources can be added to an existing pool with minimum downtime. If extra resources are no longer needed, they can be removed from an existing pool at any time. You can also add or remove databases from the pool. If a database is predictably underutilizing resources, you can move it out.
> [!NOTE]
-> When moving databases into or out of an elastic pool, there is no downtime except for a brief period of time (on the order of seconds) at the end of the operation when database connections are dropped.
+> When you move databases into or out of an elastic pool, there's no downtime except for a brief period (on the order of seconds) at the end of the operation when database connections are dropped.
-## When should you consider a SQL Database elastic pool
+## When should you consider a SQL Database elastic pool?
-Pools are well suited for a large number of databases with specific utilization patterns. For a given database, this pattern is characterized by low average utilization with relatively infrequent utilization spikes. Conversely, multiple databases with persistent medium-high utilization should not be placed in the same elastic pool.
+Pools are well suited for a large number of databases with specific utilization patterns. For a given database, this pattern is characterized by low average utilization with infrequent utilization spikes. Conversely, multiple databases with persistent medium-high utilization shouldn't be placed in the same elastic pool.
-The more databases you can add to a pool the greater your savings become. Depending on your application utilization pattern, it's possible to see savings with as few as two S3 databases.
+The more databases you can add to a pool, the greater your savings become. Depending on your application utilization pattern, it's possible to see savings with as few as two S3 databases.
-The following sections help you understand how to assess if your specific collection of databases can benefit from being in a pool. The examples use Standard pools but the same principles also apply to Basic and Premium pools.
+The following sections help you understand how to assess if your specific collection of databases can benefit from being in a pool. The examples use Standard pools, but the same principles also apply to Basic and Premium pools.
-### Assessing database utilization patterns
+### Assess database utilization patterns
-The following figure shows an example of a database that spends much time idle, but also periodically spikes with activity. This is a utilization pattern that is suited for a pool:
+The following figure shows an example of a database that spends much of its time idle but also periodically spikes with activity. This utilization pattern is suited for a pool.
- ![a single database suitable for a pool](./media/elastic-pool-overview/one-database.png)
+ ![Chart that shows a single database suitable for a pool.](./media/elastic-pool-overview/one-database.png)
-The chart illustrates DTU usage over a 1 hour time period from 12:00 to 1:00 where each data point has 1 minute granularity. At 12:10 DB1 peaks up to 90 DTUs, but its overall average usage is less than five DTUs. An S3 compute size is required to run this workload in a single database, but this leaves most of the resources unused during periods of low activity.
+The chart illustrates DTU usage over one hour from 12:00 to 1:00 where each data point has one-minute granularity. At 12:10, DB1 peaks up to 90 DTUs, but its overall average usage is less than five DTUs. An S3 compute size is required to run this workload in a single database, but this size leaves most of the resources unused during periods of low activity.
-A pool allows these unused DTUs to be shared across multiple databases, and so reduces the DTUs needed and overall cost.
+A pool allows these unused DTUs to be shared across multiple databases, which reduces the DTUs needed and the overall cost.
-Building on the previous example, suppose there are additional databases with similar utilization patterns as DB1. In the next two figures below, the utilization of four databases and 20 databases are layered onto the same graph to illustrate the non-overlapping nature of their utilization over time using the DTU-based purchasing model:
+Building on the previous example, suppose there are other databases with similar utilization patterns as DB1. In the next two figures, the utilization of four databases and 20 databases are layered onto the same graph to illustrate the nonoverlapping nature of their utilization over time by using the DTU-based purchasing model:
- ![four databases with a utilization pattern suitable for a pool](./media/elastic-pool-overview/four-databases.png)
+ ![Chart that shows four databases with a utilization pattern suitable for a pool.](./media/elastic-pool-overview/four-databases.png)
- ![twenty databases with a utilization pattern suitable for a pool](./media/elastic-pool-overview/twenty-databases.png)
+ ![Chart that shows 20 databases with a utilization pattern suitable for a pool.](./media/elastic-pool-overview/twenty-databases.png)
-The aggregate DTU utilization across all 20 databases is illustrated by the black line in the preceding figure. This shows that the aggregate DTU utilization never exceeds 100 DTUs, and indicates that the 20 databases can share 100 eDTUs over this time period. This results in a 20x reduction in DTUs and a 13x price reduction compared to placing each of the databases in S3 compute sizes for single databases.
+The aggregate DTU utilization across all 20 databases is illustrated by the black line in the preceding chart. This line shows that the aggregate DTU utilization never exceeds 100 DTUs and indicates that the 20 databases can share 100 eDTUs over this time period. The result is a 20-fold reduction in DTUs and a 13-fold price reduction compared to placing each of the databases in S3 compute sizes for single databases.
-This example is ideal for the following reasons:
+This example is ideal because:
- There are large differences between peak utilization and average utilization per database.
- The peak utilization for each database occurs at different points in time.
- eDTUs are shared between many databases.
-In the DTU purchasing model, the price of a pool is a function of the pool eDTUs. While the eDTU unit price for a pool is 1.5x greater than the DTU unit price for a single database, **pool eDTUs can be shared by many databases and fewer total eDTUs are needed**. These distinctions in pricing and eDTU sharing are the basis of the price savings potential that pools can provide.
+In the DTU purchasing model, the price of a pool is a function of the pool eDTUs. While the eDTU unit price for a pool is 1.5 times greater than the DTU unit price for a single database, *pool eDTUs can be shared by many databases and fewer total eDTUs are needed*. These distinctions in pricing and eDTU sharing are the basis of the price savings potential that pools can provide.
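As a rough illustration of that math, assuming an S3 database is provisioned at 100 DTUs: 20 standalone S3 databases amount to 20 × 100 = 2,000 DTUs of provisioned compute, while a 100 eDTU pool costs the equivalent of about 100 × 1.5 = 150 DTUs, which works out to roughly the 13-fold price reduction described earlier.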
In the vCore purchasing model, the vCore unit price for elastic pools is the same as the vCore unit price for single databases.
-## How do I choose the correct pool size
+## How do I choose the correct pool size?
-The best size for a pool depends on the aggregate resources needed for all databases in the pool. This involves determining the following:
+The best size for a pool depends on the aggregate resources needed for all databases in the pool. You need to determine:
-- Maximum compute resources utilized by all databases in the pool. Compute resources are indexed by either eDTUs or vCores depending on your choice of purchasing model.
+- Maximum compute resources utilized by all databases in the pool. Compute resources are indexed by either eDTUs or vCores depending on your choice of purchasing model.
- Maximum storage bytes utilized by all databases in the pool.

For service tiers and resource limits in each purchasing model, see the [DTU-based purchasing model](service-tiers-dtu.md) or the [vCore-based purchasing model](service-tiers-vcore.md).

The following steps can help you estimate whether a pool is more cost-effective than single databases:
-1. Estimate the eDTUs or vCores needed for the pool as follows:
+1. Estimate the eDTUs or vCores needed for the pool (a worked example follows these steps):
   - For the DTU-based purchasing model:

     MAX(<*Total number of DBs* &times; *Average DTU utilization per DB*>, <*Number of concurrently peaking DBs* &times; *Peak DTU utilization per DB*>)

   - For the vCore-based purchasing model:

     MAX(<*Total number of DBs* &times; *Average vCore utilization per DB*>, <*Number of concurrently peaking DBs* &times; *Peak vCore utilization per DB*>)
-2. Estimate the total storage space needed for the pool by adding the data size needed for all the databases in the pool. For the DTU purchasing model, then determine the eDTU pool size that provides this amount of storage.
-3. For the DTU-based purchasing model, take the larger of the eDTU estimates from Step 1 and Step 2. For the vCore-based purchasing model, take the vCore estimate from Step 1.
-4. See the [SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/) and find the smallest pool size that is greater than the estimate from Step 3.
-5. Compare the pool price from Step 4 to the price of using the appropriate compute sizes for single databases.
+1. Estimate the total storage space needed for the pool by adding the data size needed for all the databases in the pool. For the DTU purchasing model, determine the eDTU pool size that provides this amount of storage.
+1. For the DTU-based purchasing model, take the larger of the eDTU estimates from step 1 and step 2. For the vCore-based purchasing model, take the vCore estimate from step 1.
+1. See the [SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/) and find the smallest pool size that's greater than the estimate from step 3.
+1. Compare the pool price from step 4 to the price of using the appropriate compute sizes for single databases.
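As a worked example of step 1, applied to the earlier 20-database scenario (average utilization under 5 DTUs per database, with roughly one database peaking at 90 DTUs at a time): MAX(20 × 5, 1 × 90) = MAX(100, 90) = 100 eDTUs, which matches the 100 eDTU pool suggested by the aggregate utilization chart.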
> [!IMPORTANT]
-> If the number of databases in a pool approaches the maximum supported, make sure to consider [Resource management in dense elastic pools](elastic-pool-resource-management.md).
+> If the number of databases in a pool approaches the maximum supported, make sure to consider [resource management in dense elastic pools](elastic-pool-resource-management.md).
-### Per database properties
+### Per-database properties
-You can optionally set "per database" properties to modify resource consumption patterns in elastic pools. For more information, see resource limits documentation for [DTU](resource-limits-dtu-elastic-pools.md#database-properties-for-pooled-databases) and [vCore](resource-limits-vcore-elastic-pools.md#database-properties-for-pooled-databases) elastic pools.
+You can optionally set per-database properties to modify resource consumption patterns in elastic pools. For more information, see resource limits documentation for [DTU](resource-limits-dtu-elastic-pools.md#database-properties-for-pooled-databases) and [vCore](resource-limits-vcore-elastic-pools.md#database-properties-for-pooled-databases) elastic pools.
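As a sketch, per-database limits can also be set from the Azure CLI. The parameter names below are an assumption and vary between CLI versions and purchasing models; check `az sql elastic-pool update --help` before using them.

```dotnetcli
# Cap each database in the pool at 50 units and guarantee none (hypothetical names and assumed parameters).
az sql elastic-pool update --resource-group myResourceGroup --server myserver --name mypool --db-max-capacity 50 --db-min-capacity 0
```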
-## Using other SQL Database features with elastic pools
+## Use other SQL Database features with elastic pools
+
+You can use other SQL Database features with elastic pools.
### Elastic jobs and elastic pools
-With a pool, management tasks are simplified by running scripts in **[elastic jobs](elastic-jobs-overview.md)**. An elastic job eliminates most of tedium associated with large numbers of databases.
+With a pool, management tasks are simplified by running scripts in [elastic jobs](elastic-jobs-overview.md). An elastic job eliminates most of the tedium associated with large numbers of databases.
-For more information about other database tools for working with multiple databases, see [Scaling out with Azure SQL Database](elastic-scale-introduction.md).
+For more information about other database tools for working with multiple databases, see [Scaling out with SQL Database](elastic-scale-introduction.md).
### Business continuity options for databases in an elastic pool
-Pooled databases generally support the same [business continuity features](business-continuity-high-availability-disaster-recover-hadr-overview.md) that are available to single databases.
--- **Point-in-time restore**-
- Point-in-time restore uses automatic database backups to recover a database in a pool to a specific point in time. See [Point-In-Time Restore](recovery-using-backups.md#point-in-time-restore)
--- **Geo-restore**
+Pooled databases generally support the same [business-continuity features](business-continuity-high-availability-disaster-recover-hadr-overview.md) that are available to single databases:
- Geo-restore provides the default recovery option when a database is unavailable because of an incident in the region where the database is hosted. See [Restore an Azure SQL Database or failover to a secondary](disaster-recovery-guidance.md)
+- **Point-in-time restore**: Point-in-time restore uses automatic database backups to recover a database in a pool to a specific point in time. See [Point-in-time restore](recovery-using-backups.md#point-in-time-restore). A scripted example follows this list.
+- **Geo-restore**: Geo-restore provides the default recovery option when a database is unavailable because of an incident in the region where the database is hosted. See [Restore a SQL database or fail over to a secondary](disaster-recovery-guidance.md).
+- **Active geo-replication**: For applications that have more aggressive recovery requirements than geo-restore can offer, configure [active geo-replication](active-geo-replication-overview.md) or an [auto-failover group](auto-failover-group-overview.md).
-- **Active geo-replication**-
- For applications that have more aggressive recovery requirements than geo-restore can offer, configure [Active geo-replication](active-geo-replication-overview.md) or an [auto-failover group](auto-failover-group-overview.md).
-
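As the sketch promised in the point-in-time restore item, a restore of a pooled database can be scripted with the Azure CLI. The names and timestamp are placeholders.

```dotnetcli
# Restore mydb to an earlier point in time as a new database (hypothetical values).
az sql db restore --resource-group myResourceGroup --server myserver --name mydb --dest-name mydb-restored --time "2022-02-18T12:00:00Z"
```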
-## Creating a new SQL Database elastic pool using the Azure portal
+## Create a new SQL Database elastic pool by using the Azure portal
You can create an elastic pool in the Azure portal in two ways:
You can create an elastic pool in the Azure portal in two ways:
To create an elastic pool and select an existing or new server:

1. Go to the [Azure portal](https://portal.azure.com) to create an elastic pool. Search for and select **Azure SQL**.
-2. Select **Create** to open the **Select SQL deployment option** pane. To view more information about elastic pools, on the **Databases** tile, select **Show details**.
-3. On the **Databases** tile, in the **Resource type** dropdown, select **Elastic pool**, and then select **Create**.
+1. Select **Create** to open the **Select SQL deployment option** pane. To view more information about elastic pools, on the **Databases** tile, select **Show details**.
+1. On the **Databases** tile, in the **Resource type** dropdown, select **Elastic pool**. Then select **Create**.
- ![Create an elastic pool](./media/elastic-pool-overview/create-elastic-pool.png)
+ ![Screenshot that shows creating an elastic pool.](./media/elastic-pool-overview/create-elastic-pool.png)
To create an elastic pool from an existing server:
To create an elastic pool from an existing server:
> [!NOTE]
> You can create multiple pools on a server, but you can't add databases from different servers into the same pool.
-The pool's service tier determines the features available to the elastics in the pool, and the maximum amount of resources available to each database. For details, see Resource limits for elastic pools in the [DTU model](resource-limits-dtu-elastic-pools.md#elastic-pool-storage-sizes-and-compute-sizes). For vCore-based resource limits for elastic pools, see [vCore-based resource limits - elastic pools](resource-limits-vcore-elastic-pools.md).
+The pool's service tier determines the features available to the databases in the pool, and the maximum amount of resources available to each database. For more information, see resource limits for elastic pools in the [DTU model](resource-limits-dtu-elastic-pools.md#elastic-pool-storage-sizes-and-compute-sizes). For vCore-based resource limits for elastic pools, see [vCore-based resource limits - elastic pools](resource-limits-vcore-elastic-pools.md).
-To configure the resources and pricing of the pool, click **Configure pool**. Then select a service tier, add databases to the pool, and configure the resource limits for the pool and its databases.
+To configure the resources and pricing of the pool, select **Configure pool**. Then select a service tier, add databases to the pool, and configure the resource limits for the pool and its databases.
-When you have completed configuring the pool, you can click 'Apply', name the pool, and click 'OK' to create the pool.
+After you've configured the pool, select **Apply**, name the pool, and select **OK** to create the pool.
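If you prefer scripting, the following Azure CLI sketch performs roughly the same operation as the portal steps above. The resource names are placeholders; check `az sql elastic-pool create --help` for the full parameter set.

```dotnetcli
# Create a Standard pool with 100 eDTUs on an existing server (hypothetical names).
az sql elastic-pool create --resource-group myResourceGroup --server myserver --name mypool --edition Standard --capacity 100

# Create a new database directly inside the pool.
az sql db create --resource-group myResourceGroup --server myserver --name mydb --elastic-pool mypool
```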
## Monitor an elastic pool and its databases

In the Azure portal, you can monitor the utilization of an elastic pool and the databases within that pool. You can also make a set of changes to your elastic pool and submit all changes at the same time. These changes include adding or removing databases, changing your elastic pool settings, or changing your database settings.
-You can use the built-in [performance monitoring](./performance-guidance.md) and [alerting tools](./alerts-insights-configure-portal.md), combined with performance ratings. Additionally, SQL Database can [emit metrics and resource logs](./metrics-diagnostic-telemetry-logging-streaming-export-configure.md?tabs=azure-portal) for easier monitoring.
+You can use the built-in [performance monitoring](./performance-guidance.md) and [alerting tools](./alerts-insights-configure-portal.md) combined with performance ratings. SQL Database can also [emit metrics and resource logs](./metrics-diagnostic-telemetry-logging-streaming-export-configure.md?tabs=azure-portal) for easier monitoring.
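For programmatic monitoring, a minimal sketch with the Azure CLI is shown below. The resource ID is a placeholder, and the metric name assumes a DTU-based pool; verify the metrics available for your resource before relying on it.

```dotnetcli
# List recent eDTU utilization for an elastic pool (hypothetical resource ID; assumed metric name).
az monitor metrics list --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/myserver/elasticPools/mypool" --metric dtu_consumption_percent --interval PT5M
```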
## Customer case studies -- [SnelStart](https://azure.microsoft.com/resources/videos/azure-sql-database-case-study-snelstart/)-
- SnelStart used elastic pools with Azure SQL Database to rapidly expand its business services at a rate of 1,000 new Azure SQL databases per month.
--- [Umbraco](https://azure.microsoft.com/resources/videos/azure-sql-database-case-study-umbraco/)-
- Umbraco uses elastic pools with Azure SQL Database to quickly provision and scale services for thousands of tenants in the cloud.
--- [Daxko/CSI](https://customers.microsoft.com/story/726277-csi-daxko-partner-professional-service-azure)-
- Daxko/CSI uses elastic pools with Azure SQL Database to accelerate its development cycle and to enhance its customer services and performance.
+- [SnelStart](https://azure.microsoft.com/resources/videos/azure-sql-database-case-study-snelstart/): SnelStart used elastic pools with SQL Database to rapidly expand its business services at a rate of 1,000 new SQL databases per month.
+- [Umbraco](https://azure.microsoft.com/resources/videos/azure-sql-database-case-study-umbraco/): Umbraco uses elastic pools with SQL Database to quickly provision and scale services for thousands of tenants in the cloud.
+- [Daxko/CSI](https://customers.microsoft.com/story/726277-csi-daxko-partner-professional-service-azure): Daxko/CSI uses elastic pools with SQL Database to accelerate its development cycle and to enhance its customer services and performance.
## Next steps - For pricing information, see [Elastic pool pricing](https://azure.microsoft.com/pricing/details/sql-database/elastic).-- To scale elastic pools, see [Scaling elastic pools](elastic-pool-scale.md) and [Scale an elastic pool - sample code](scripts/monitor-and-scale-pool-powershell.md)-- To learn more about design patterns for SaaS applications using elastic pools, see [Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database](saas-tenancy-app-design-patterns.md).-- For a SaaS tutorial using elastic pools, see [Introduction to the Wingtip SaaS application](saas-dbpertenant-wingtip-app-overview.md).
+- To scale elastic pools, see [Scale elastic pools](elastic-pool-scale.md) and [Scale an elastic pool - sample code](scripts/monitor-and-scale-pool-powershell.md).
+- To learn more about design patterns for SaaS applications by using elastic pools, see [Design patterns for multitenant SaaS applications with SQL Database](saas-tenancy-app-design-patterns.md).
+- For a SaaS tutorial by using elastic pools, see [Introduction to the Wingtip SaaS application](saas-dbpertenant-wingtip-app-overview.md).
- To learn about resource management in elastic pools with many databases, see [Resource management in dense elastic pools](elastic-pool-resource-management.md).
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/maintenance-window.md
Previously updated : 02/02/2022 Last updated : 02/18/2022 # Maintenance window (Preview)
Choosing a maintenance window other than the default is currently available in t
| West US 3 | Yes | | | | | | | |
-## Gateway maintenance for Azure SQL Database
+## Gateway maintenance
To get the maximum benefit from maintenance windows, make sure your client applications are using the redirect connection policy. Redirect is the recommended connection policy, where clients establish connections directly to the node hosting the database, leading to reduced latency and improved throughput.
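To check or change the policy, here's a minimal sketch using the `az sql server conn-policy` commands, with placeholder resource names.

```dotnetcli
# Show the current connection policy for a logical server (hypothetical names).
az sql server conn-policy show --resource-group myResourceGroup --server myserver

# Switch the connection policy to Redirect.
az sql server conn-policy update --resource-group myResourceGroup --server myserver --connection-type Redirect
```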
azure-sql Service Tier Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-business-critical.md
Previously updated : 02/02/2022 Last updated : 02/18/2022 # Business Critical tier - Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
This article describes and compares the Business Critical service tier used by A
## Overview
-The Business Critical service tier model is based on a cluster of database engine processes. This architectural model relies on a fact that there is always a quorum of available database engine nodes and has minimal performance impact on your workload even during maintenance activities.
+The Business Critical service tier model is based on a cluster of database engine processes. This architectural model relies on the fact that there's always a quorum of available database engine nodes and has minimal performance impact on your workload even during maintenance activities.
Azure transparently upgrades and patches the underlying operating system, drivers, and SQL Server database engine with minimal downtime for end users.
-Premium availability is enabled in the Business Critical service tier and is designed for intensive workloads that cannot tolerate reduced availability due to the ongoing maintenance operations.
+Premium availability is enabled in the Business Critical service tier and is designed for intensive workloads that can't tolerate reduced availability due to ongoing maintenance operations.
Compute and storage are integrated on a single node in the premium model. High availability in this architectural model is achieved by replication of compute (SQL Server database engine process) and storage (locally attached SSD) deployed to a four-node cluster, using technology similar to SQL Server [Always On availability groups](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server).
Compute and storage is integrated on the single node in the premium model. High
Both the SQL Server database engine process and underlying .mdf/.ldf files are placed on the same node with locally attached SSD storage providing low latency to your workload. High availability is implemented using technology similar to SQL Server [Always On availability groups](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server). Every database is a cluster of database nodes with one primary database that is accessible for customer workloads, and three secondary replicas containing copies of data. The primary node constantly pushes changes to the secondary nodes in order to ensure that the data is available on secondary replicas if the primary node fails for any reason. Failover is handled by the SQL Server database engine: one secondary replica becomes the primary node and a new secondary replica is created to ensure there are enough nodes in the cluster. The workload is automatically redirected to the new primary node.
-In addition, the Business Critical cluster has built-in [Read Scale-Out](read-scale-out.md) capability that provides free-of charge built-in read-only replica that can be used to run read-only queries (for example reports) that should not affect performance of your primary workload.
+In addition, the Business Critical cluster has a built-in [Read Scale-Out](read-scale-out.md) capability that provides a free-of-charge built-in read-only replica that can be used to run read-only queries (for example, reports) that shouldn't affect the performance of your primary workload.
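A common way to route work to this replica, assuming read scale-out is left enabled, is to add `ApplicationIntent=ReadOnly` to the client connection string; connections opened with that intent are directed to the read-only replica instead of the primary.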
## When to choose this service tier
The key reasons why you should choose Business Critical service tier instead of
- **Low I/O latency requirements** - workloads that need a fast response from the storage layer (1-2 milliseconds on average) should use the Business Critical tier.
- **Workload with reporting and analytic queries** that can be redirected to the free-of-charge secondary read-only replica.
- **Higher resiliency and faster recovery from failures**. In case of a system failure, the database on the primary replica is disabled and one of the secondary replicas immediately becomes the new read-write primary database that is ready to process queries. The database engine doesn't need to analyze and redo transactions from the log file and load all data in the memory buffer.
-- **Advanced data corruption protection**. The Business Critical tier leverages database replicas behind-the-scenes for business continuity purposes, and so the service also then leverages automatic page repair, which is the same technology used for SQL Server database [mirroring and availability groups](/sql/sql-server/failover-clusters/automatic-page-repair-availability-groups-database-mirroring). In the event that a replica cannot read a page due to a data integrity issue, a fresh copy of the page will be retrieved from another replica, replacing the unreadable page without data loss or customer downtime. This functionality is applicable in General Purpose tier if the database has geo-secondary replica.
+- **Advanced data corruption protection**. The Business Critical tier leverages database replicas behind the scenes for business continuity purposes, and so the service also leverages automatic page repair, which is the same technology used for SQL Server database [mirroring and availability groups](/sql/sql-server/failover-clusters/automatic-page-repair-availability-groups-database-mirroring). In the event that a replica can't read a page due to a data integrity issue, a fresh copy of the page is retrieved from another replica, replacing the unreadable page without data loss or customer downtime. This functionality is also applicable in the General Purpose tier if the database has a geo-secondary replica.
- **Higher availability** - The Business Critical tier in Multi-AZ configuration provides resiliency to zonal failures and a higher availability SLA.
- **Fast geo-recovery** - The Business Critical tier configured with geo-replication has a guaranteed Recovery Point Objective (RPO) of 5 seconds and Recovery Time Objective (RTO) of 30 seconds for 100% of deployed hours.
The following table shows resource limits for both Azure SQL Database and Azure
| **Log write throughput** | Single databases: [12 MB/s per vCore (max 96 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [15 MB/s per vCore (max 120 MB/s)](resource-limits-vcore-elastic-pools.md) | [4 MB/s per vCore (max 48 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) | | **Availability** | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/)| | **Backups** | RA-GRS, 1-35 days (7 days by default) | RA-GRS, 1-35 days (7 days by default)|
-| **Read-only replicas** |1 built-in, included in price <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) |1 built-in, included in price <br> 0 - 1 using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
-| **Pricing/Billing** |[vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. |[vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged. |
+| [**Read-only replicas**](read-scale-out.md) |1 built-in high availability replica is readable <br> 0 - 4 [geo-replicas](active-geo-replication-overview.md) |1 built-in high availability replica is readable <br> 0 - 1 geo-replicas using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
+| **Pricing/Billing** |[vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/> High availability replicas aren't charged. <br/>IOPS isn't charged. |[vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/> High availability replicas aren't charged. <br/>IOPS isn't charged. |
| **Discount models** |[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | | | |
azure-sql Service Tier General Purpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-general-purpose.md
The following table shows resource limits for both Azure SQL Database and Azure
| **Log write throughput** | Single databases: [4.5 MB/s per vCore (max 50 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [6 MB/s per vCore (max 62.5 MB/s)](resource-limits-vcore-elastic-pools.md) | [3 MB/s per vCore (max 22 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics)| | **Availability** | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/)| | **Backups** | 1-35 days (7 days by default) | 1-35 days (7 days by default)|
-| **Read-only replicas** | 0 built-in </br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) | 0 built-in </br> 0 - 1 using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
-| **Pricing/Billing** | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged.| [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged. |
+| [**Read-only replicas**](read-scale-out.md) | 0 built-in </br> 0 - 4 [geo-replicas](active-geo-replication-overview.md) | 0 built-in </br> 0 - 1 geo-replicas using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
+| **Pricing/Billing** | [vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged.| [vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/>IOPS is not charged. |
| **Discount models** |[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions| | | |
azure-sql Transparent Data Encryption Byok Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/transparent-data-encryption-byok-identity.md
In addition to the system-assigned managed identity that is already supported fo
- If the key vault is behind a VNet, a user-assigned managed identity cannot be used with customer-managed TDE. A system-assigned managed identity must be used in this case. A user-assigned managed identity can only be used when the key vault is not behind a VNet.
- When multiple user-assigned managed identities are assigned to the server or managed instance, if a single identity is removed from the server using the *Identity* blade of the Azure Portal, the operation succeeds but the identity does not get removed from the server. Removing all user-assigned managed identities together from the Azure portal works successfully.
- When the server or managed instance is configured with customer-managed TDE and both system-assigned and user-assigned managed identities are enabled on the server, removing the user-assigned managed identities from the server without first giving the system-assigned managed identity access to the key vault results in an *Unexpected error occurred* message. Ensure the system-assigned managed identity has been provided key vault access prior to removing the primary user-assigned managed identity (and any other user-assigned managed identities) from the server.
+- A user-assigned managed identity for SQL Managed Instance is currently not supported when the Azure Key Vault firewall is enabled.
## Next steps
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/resource-limits.md
Support for the premium-series hardware generations (public preview) is currentl
| North Europe | Yes | |
| South Central US | Yes | Yes |
| Southeast Asia | Yes | |
-| UK South | Yes | |
| West Europe | | Yes |
| West US | Yes | Yes |
| West US 2 | Yes | Yes |
azure-sql Sql Server On Azure Vm Iaas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md
After selecting **+ Add**, view additional information about the different optio
For details, see: - [Create a single database](../../database/single-database-create-quickstart.md)-- [Create an elastic pool](../../database/elastic-pool-overview.md#creating-a-new-sql-database-elastic-pool-using-the-azure-portal)
+- [Create an elastic pool](../../database/elastic-pool-overview.md#create-a-new-sql-database-elastic-pool-by-using-the-azure-portal)
- [Create a managed instance](../../managed-instance/instance-create-quickstart.md) - [Create a SQL Server virtual machine](sql-vm-create-portal-quickstart.md)
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Last updated 12/22/2021
Azure VMware Solution will apply important updates starting in March 2021. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
+## February 18, 2022
+
+Per VMware security advisory [VMSA-2022-0004](https://www.vmware.com/security/advisories/VMSA-2022-0004.html), multiple vulnerabilities in VMware ESXi have been reported to VMware.
+
+To address the vulnerabilities (CVE-2021-22040 and CVE-2021-22041) reported in this VMware security advisory, ESXi hosts have been patched in all Azure VMware Solution private clouds to ESXi 6.7, Patch Release ESXi670-202111001. All new Azure VMware Solution private clouds are deployed with the same version.
+
+For more information on this ESXi version, see [VMware ESXi 6.7, Patch Release ESXi670-202111001](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202111001.html).
+
+No further action is required.
+ ## December 22, 2021 Azure VMware Solution (AVS) has completed maintenance activities to address critical vulnerabilities in Apache Log4j.
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
You'll run the `Get-ExternalIdentitySources` cmdlet to list all external identit
| **Timeout** | The period after which a cmdlet exits if taking too long to finish. | 1. Check **Notifications** or the **Run Execution Status** pane to see the progress.-
+
+ :::image type="content" source="media/run-command/run-packages-execution-command-status.png" alt-text="Screenshot showing how to check the run commands notification or status." lightbox="media/run-command/run-packages-execution-command-status.png":::
## Add Active Directory over LDAP with SSL
You'll run the `New-AvsLDAPSIdentitySource` cmdlet to add an AD over LDAP with S
1. Check **Notifications** or the **Run Execution Status** pane to see the progress. - ## Add Active Directory over LDAP >[!NOTE]
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
Title: Set up Azure Backup Server for Azure VMware Solution description: Set up your Azure VMware Solution environment to back up virtual machines using Azure Backup Server. Previously updated : 02/04/2021 Last updated : 02/16/2022 # Set up Azure Backup Server for Azure VMware Solution
This article helps you prepare your Azure VMware Solution environment to back up
## Supported VMware features -- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter or ESXi server to back up the VM. Instead, provide the IP address or fully qualified domain name (FQDN) and the sign-in credentials used to authenticate the VMware server with Azure Backup Server.
+- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter or ESXi server to back up the VM. Instead, provide the IP address or fully qualified domain name (FQDN) and the sign-in credentials used to authenticate the VMware server with Azure Backup Server.
- **Cloud-integrated backup:** Azure Backup Server protects workloads to disk and the cloud. The backup and recovery workflow of Azure Backup Server helps you manage long-term retention and offsite backup. - **Detect and protect VMs managed by vCenter:** Azure Backup Server detects and protects VMs deployed on a vCenter or ESXi server. Azure Backup Server also detects VMs managed by vCenter so that you can protect large deployments. - **Folder-level auto protection:** vCenter lets you organize your VMs in VM folders. Azure Backup Server detects these folders. You can use it to protect VMs at the folder level, including all subfolders. When protecting folders, Azure Backup Server protects the VMs in that folder and protects VMs added later. Azure Backup Server detects new VMs daily, protecting them automatically. As you organize your VMs in recursive folders, Azure Backup Server automatically detects and protects the new VMs deployed in the recursive folders.
Ensure that you [configure networking for your VMware private cloud in Azure](tu
### Determine the size of the VM
-Follow the instructions in the [Create your first Windows VM in the Azure portal](../virtual-machines/windows/quick-create-portal.md) tutorial. You'll create the VM in the virtual network, which you created in the previous step. Start with a gallery image of Windows Server 2019 Datacenter to run the Azure Backup Server.
+Use the [MABS Capacity Planner](https://www.microsoft.com/download/details.aspx) to determine the correct VM size. Based on your inputs, the capacity planner gives you the required memory size and CPU core count. Use this information to choose the appropriate Azure VM size. The capacity planner also provides the total disk size required for the VM along with the required disk IOPS. We recommend using a standard SSD disk for the VM. By pooling more than one SSD, you can achieve the required IOPS.
-The table summarizes the maximum number of protected workloads for each Azure Backup Server VM size. The information is based on internal performance and scale tests with canonical values for the workload size and churn. The actual workload size can be larger but should be accommodated by the disks attached to the Azure Backup Server VM.
-
-| Maximum protected workloads | Average workload size | Average workload churn (daily) | Minimum storage IOPS | Recommended disk type/size | Recommended VM size |
-|-|--|--||--||
-| 20 | 100 GB | Net 5% churn | 2,000 | Standard HDD (8 TB or above size per disk) | A4V2 |
-| 40 | 150 GB | Net 10% churn | 4,500 | Premium SSD* (1 TB or above size per disk) | DS3_V2 |
-| 60 | 200 GB | Net 10% churn | 10,500 | Premium SSD* (8 TB or above size per disk) | DS3_V2 |
-
-*To get the required IOPs, use minimum recommended- or higher-size disks. Smaller-size disks offer lower IOPs.
+Follow the instructions in the [Create your first Windows VM in the Azure portal](../virtual-machines/windows/quick-create-portal.md) tutorial. You'll create the VM in the virtual network that you created in the previous step. Start with a gallery image of Windows Server 2019 Datacenter to run the Azure Backup Server.
> [!NOTE]
> Azure Backup Server is designed to run on a dedicated, single-purpose server. You can't install Azure Backup Server on a computer that:
If you downloaded the software package to a different server, copy the files to
**Manual configuration**
- When you use your own SQL Server instance, make sure you add builtin\Administrators to the sysadmin role to the master database's sysadmin role.
 When you use your own SQL Server instance, make sure you add builtin\Administrators to the sysadmin role on the master database.
**Configure reporting services with SQL Server 2017**
Azure Backup Server v3 only accepts storage volumes. When you add a volume, Azur
## Next steps
-Now that you've covered how to set up Azure Backup Server for Azure VMware Solution, you may want to learn about:
+Now that you've covered how to set up Azure Backup Server for Azure VMware Solution, you can use the following resources to learn more.
- [Configuring backups for your Azure VMware Solution VMs](backup-azure-vmware-solution-virtual-machines.md). - [Protecting your Azure VMware Solution VMs with Microsoft Defender for Cloud integration](azure-security-integration.md).
azure-web-pubsub Howto Secure Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-secure-shared-private-endpoints.md
If you're using [event handler](concept-service-internals.md#event_handler) in Azure Web PubSub Service, you might have outbound traffic to upstream. Upstream such as Azure Web App and Azure Functions, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. You can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) to reach these endpoints.
- :::image type="content" alt-text="Shared private endpoint overview." source="media\howto-secure-shared-private-endpoints\shared-private-endpoint-overview.png" border="false" :::
+ :::image type="content" alt-text="Diagram showing architecture of shared private endpoint." source="media\howto-secure-shared-private-endpoints\shared-private-endpoint-overview.png" border="false" :::
This outbound method is subject to the following requirements:
This outbound method is subject to the following requirements:
+ The Azure Web App or Azure Function must be on certain SKUs. See [Use Private Endpoints for Azure Web App](../app-service/networking/private-endpoint.md).
-## Shared Private Link Resources Management APIs
+## Shared Private Link Resources Management
-Private endpoints of secured resources that are created through Azure Web PubSub Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Function, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside Azure Web PubSub Service execution environment and are not directly visible to you.
-
-At this moment, you can use Management REST API to create or delete *shared private link resources*. In the remainder of this article, we will use [Azure CLI](/cli/azure/) to demonstrate the REST API calls.
+Private endpoints of secured resources that are created through Azure Web PubSub Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Function, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside the Azure Web PubSub Service execution environment and aren't directly visible to you.
> [!NOTE]
> The examples in this article are based on the following assumptions:
The rest of the examples show how the _contoso-webpubsub_ service can be configu
### Step 1: Create a shared private link resource to the function
+#### [Azure portal](#tab/azure-portal)
+
+1. In the Azure portal, go to your Azure Web PubSub Service resource.
+1. In the menu pane, select **Networking**. Switch to the **Private access** tab.
+1. Click **Add shared private endpoint**.
+
+ :::image type="content" alt-text="Screenshot of shared private endpoints management." source="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-management.png" lightbox="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-management.png" :::
+
+1. Fill in a name for the shared private endpoint.
+1. Select the target linked resource either by selecting from your owned resources or by filling in a resource ID.
+1. Click **Add**.
+
+ :::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-add.png" :::
+
+1. The shared private endpoint resource will be in the **Succeeded** provisioning state. The connection state is **Pending**, waiting for approval on the target resource side.
+
+ :::image type="content" alt-text="Screenshot of an added shared private endpoint." source="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-added.png" lightbox="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-added.png" :::
+
+#### [Azure CLI](#tab/azure-cli)
+ You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource: ```dotnetcli
The process of creating an outbound private endpoint is a long-running (asynchro
You can poll this URI periodically to obtain the status of the operation.
-If you are using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
+If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
-```donetcli
+```dotnetcli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview
```

Wait until the status changes to "Succeeded" before proceeding to the next steps.
-### Step 2a: Approve the private endpoint connection for the function
+--
-> [!NOTE]
-> In this section, you use the Azure portal to walk through the approval flow for a private endpoint to Azure Function. Alternately, you could use the [REST API](/rest/api/appservice/web-apps/approve-or-reject-private-endpoint-connection) that's available via the App Service provider.
+### Step 2a: Approve the private endpoint connection for the function
> [!IMPORTANT]
> After you approve the private endpoint connection, the Function is no longer accessible from the public network. You may need to create other private endpoints in your own virtual network to access the Function endpoint.
+#### [Azure portal](#tab/azure-portal)
+ 1. In the Azure portal, select the **Networking** tab of your Function App and navigate to **Private endpoint connections**. Click **Configure your private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call. :::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints\portal-function-approve-private-endpoint.png" lightbox="media\howto-secure-shared-private-endpoints\portal-function-approve-private-endpoint.png" :::
Wait until the status changes to "Succeeded" before proceeding to the next steps
:::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-secure-shared-private-endpoints\portal-function-approved-private-endpoint.png" lightbox="media\howto-secure-shared-private-endpoints\portal-function-approved-private-endpoint.png" :::
+#### [Azure CLI](#tab/azure-cli)
+
+1. List private endpoint connections.
+
+ ```dotnetcli
+ az network private-endpoint-connection list -n <function-resource-name> -g <function-resource-group-name> --type 'Microsoft.Web/sites'
+ ```
+
+ There should be a pending private endpoint connection. Note down its ID.
+
+ ```json
+ [
+ {
+ "id": "<id>",
+ "location": "",
+ "name": "",
+ "properties": {
+ "privateLinkServiceConnectionState": {
+ "actionRequired": "None",
+ "description": "Please approve",
+ "status": "Pending"
+ }
+ }
+ }
+ ]
+ ```
+
+1. Approve the private endpoint connection.
+
+ ```dotnetcli
+ az network private-endpoint-connection approve --id <private-endpoint-connection-id>
+ ```
+
+--
+ ### Step 2b: Query the status of the shared private link resource
-It takes minutes for the approval to be propagated to Azure Web PubSub Service. To confirm that the shared private link resource has been updated after approval, you can also obtain the "Connection state" by using the GET API.
+It takes minutes for the approval to be propagated to Azure Web PubSub Service. You can check the state by using either the Azure portal or the Azure CLI.
+
+#### [Azure portal](#tab/azure-portal)
+
+ :::image type="content" alt-text="Screenshot of an approved shared private endpoint." source="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-approved.png" lightbox="media\howto-secure-shared-private-endpoints\portal-shared-private-endpoints-approved.png" :::
+
+#### [Azure CLI](#tab/azure-cli)
```dotnetcli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/webPubSub/contoso-webpubsub/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview
```
This would return a JSON, where the connection state would show up as "status" u
If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure Web PubSub Service can communicate over the private endpoint.
+--
+
+At this point, the private endpoint between Azure Web PubSub Service and the Azure Function is established.
+ ### Step 3: Verify upstream calls are from a private IP Once the private endpoint is set up, you can verify incoming calls are from a private IP by checking the `X-Forwarded-For` header at upstream side.
Once the private endpoint is set up, you can verify incoming calls are from a pr
Learn more about private endpoints:
-+ [What are private endpoints?](../private-link/private-endpoint-overview.md)
++ [What are private endpoints?](../private-link/private-endpoint-overview.md)
blockchain Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/architecture.md
Title: Azure Blockchain Workbench architecture description: Overview of Azure Blockchain Workbench Preview architecture and its components. Previously updated : 09/05/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to understand the architecture and components of Azure Blockchain Workbench. # Azure Blockchain Workbench architecture + Azure Blockchain Workbench Preview simplifies blockchain application development by providing a solution using several Azure components. Blockchain Workbench can be deployed using a solution template in the Azure Marketplace. The template allows you to pick modules and components to deploy including blockchain stack, type of client application, and support for IoT integration. Once deployed, Blockchain Workbench provides access to a web app, iOS app, and Android app. ![Blockchain Workbench architecture](./media/architecture/architecture.png)
blockchain Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/configuration.md
Title: Azure Blockchain Workbench configuration metadata reference description: Azure Blockchain Workbench Preview application configuration metadata overview. Previously updated : 12/09/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to understand application configuration metadata details used by Azure Blockchain Workbench. # Azure Blockchain Workbench configuration reference + Azure Blockchain Workbench applications are multi-party workflows defined by configuration metadata and smart contract code. Configuration metadata defines the high-level workflows and interaction model of the blockchain application. Smart contracts define the business logic of the blockchain application. Workbench uses configuration and smart contract code to generate blockchain application user experiences. Configuration metadata specifies the following information for each blockchain application:
blockchain Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/create-app.md
Title: Create a blockchain application - Azure Blockchain Workbench description: Tutorial on how to create a blockchain application for Azure Blockchain Workbench Preview. Previously updated : 08/24/2020 Last updated : 02/18/2022 #Customer intent: As a developer, I want to use Azure Blockchain Workbench to create a blockchain app. # Tutorial: Create a blockchain application for Azure Blockchain Workbench + You can use Azure Blockchain Workbench to create blockchain applications that represent multi-party workflows defined by configuration and smart contract code. You'll learn how to:
blockchain Data Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/data-excel.md
Title: Use Azure Blockchain Workbench data in Microsoft Excel description: Learn how to load and view Azure Blockchain Workbench Preview SQL DB data in Microsoft Excel. Previously updated : 09/05/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to view Azure Blockchain Workbench data in Microsoft Excel for analysis.
# View Azure Blockchain Workbench data with Microsoft Excel + You can use Microsoft Excel to view data in Azure Blockchain Workbench's SQL DB. This article provides the steps you need to: * Connect to the Blockchain Workbench database from Microsoft Excel
blockchain Data Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/data-powerbi.md
Title: Use Azure Blockchain Workbench data in Microsoft Power BI description: Learn how to load and view Azure Blockchain Workbench SQL DB data in Microsoft Power BI. Previously updated : 04/22/2020 Last updated : 02/18/2022 #Customer intent: As a developer, I want to load and view Azure Blockchain Workbench data in Power BI for analysis. # Using Azure Blockchain Workbench data with Microsoft Power BI + Microsoft Power BI provides the ability to easily generate powerful reports from SQL DB databases using Power BI Desktop and then publish them to [https://www.powerbi.com](https://www.powerbi.com). This article contains a step by step walkthrough of how to connect to Azure Blockchain Workbench's SQL Database from within PowerBI desktop, create a report, and deploy the report to powerbi.com.
blockchain Data Sql Management Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/data-sql-management-studio.md
Title: Query Azure Blockchain Workbench data using SQL Server Management Studio description: Learn how to connect to Azure Blockchain Workbench's SQL Database from within SQL Server Management Studio. Previously updated : 11/20/2019 Last updated : 02/18/2022
# Using Azure Blockchain Workbench data with SQL Server Management Studio + Microsoft SQL Server Management Studio provides the ability to rapidly write and test queries against Azure Blockchain Workbench's SQL DB. This section contains a step-by-step walkthrough of how to connect to Azure Blockchain Workbench's SQL Database from within SQL Server Management Studio.
blockchain Database Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/database-firewall.md
Title: Configure Azure Blockchain Workbench database firewall description: Learn how to configure the Azure Blockchain Workbench Preview database firewall to allow external clients and applications to connect. Previously updated : 09/09/2019 Last updated : 02/18/2022 #Customer intent: As an administrator, I want to configure Azure Blockchain Workbench's SQL Server firewall to allow external clients to connect.
# Configure the Azure Blockchain Workbench database firewall + This article shows how to configure a firewall rule using the Azure portal. Firewall rules let external clients or applications connect to your Azure Blockchain Workbench database. ## Connect to the Blockchain Workbench database
blockchain Database Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/database-views.md
Title: Azure Blockchain Workbench database views description: Overview of available Azure Blockchain Workbench Preview SQL DB database views. Previously updated : 09/05/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to understand the available Azure Blockchain Workbench SQL Server database views for querying off-chain blockchain data. # Azure Blockchain Workbench database views + Azure Blockchain Workbench Preview delivers data from distributed ledgers to an *off-chain* SQL DB database. The off-chain database makes it possible to use SQL and existing tools, such as [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms), to interact with blockchain data. Azure Blockchain Workbench provides a set of database views that provide access to data that will be helpful when performing your queries. These views are heavily denormalized to make it easy to quickly get started building reports, analytics, and otherwise consume blockchain data with existing tools and without having to retrain database staff.
blockchain Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/deploy.md
Title: Deploy Azure Blockchain Workbench Preview description: How to deploy Azure Blockchain Workbench Preview Previously updated : 09/15/2021 Last updated : 02/18/2022
# Deploy Azure Blockchain Workbench Preview + Azure Blockchain Workbench Preview is deployed using a solution template in the Azure Marketplace. The template simplifies the deployment of components needed to create blockchain applications. Once deployed, Blockchain Workbench provides access to client apps to create and manage users and blockchain applications. For more information about the components of Blockchain Workbench, see [Azure Blockchain Workbench architecture](architecture.md).
blockchain Getdb Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/getdb-details.md
Title: Get Azure Blockchain Workbench database details description: Learn how to get Azure Blockchain Workbench Preview database and database server information. Previously updated : 09/05/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to get Azure Blockchain database details to connect and view off-chain blockchain data.
# Get information about your Azure Blockchain Workbench database This article shows how to get detailed information about your Azure Blockchain Workbench Preview database. ## Overview
blockchain Integration Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/integration-patterns.md
Title: Smart contract integration patterns - Azure Blockchain Workbench description: Overview of smart contract integration patterns in Azure Blockchain Workbench Preview. Previously updated : 11/20/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to understand recommended integration pattern using Azure Blockchain Workbench so that I can integrate with external systems. # Smart contract integration patterns + Smart contracts often represent a business workflow that needs to integrate with external systems and devices. The requirements of these workflows include a need to initiate transactions on a distributed ledger that include data from an external system, service, or device. They also need to have external systems react to events originating from smart contracts on a distributed ledger.
blockchain Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/manage-users.md
Title: Manage users in Azure Blockchain Workbench description: How to manage users in Azure Blockchain Workbench. Previously updated : 07/15/2020 Last updated : 02/18/2022 #Customer intent: As an administrator of Blockchain Workbench, I want to manage users for blockchain apps in Azure Blockchain Workbench. # Manage Users in Azure Blockchain Workbench + Azure Blockchain Workbench includes user management for people and organizations that are part of your consortium. ## Prerequisites
blockchain Messages Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/messages-overview.md
Title: Use messages to integrate with Azure Blockchain Workbench description: Overview of using messages to integrate Azure Blockchain Workbench Preview with other systems. Previously updated : 09/05/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to use messages to integrate external systems with Azure Blockchain Workbench.
# Azure Blockchain Workbench messaging integration + In addition to providing a REST API, Azure Blockchain Workbench also provides messaging-based integration. Workbench publishes ledger-centric events via Azure Event Grid, enabling downstream consumers to ingest data or take action based on these events. For those clients that require reliable messaging, Azure Blockchain Workbench delivers messages to an Azure Service Bus endpoint as well. ## Input APIs
The request requires the following fields:
| **Name** | **Description** |
|--|--|
| requestId | Client supplied GUID |
-| userChainIdentifier | Address of the user that was created on the blockchain network. In Ethereum, this address is the userΓÇÖs **on chain** address. |
+| userChainIdentifier | Address of the user that was created on the blockchain network. In Ethereum, this address is the user's **on chain** address. |
| applicationName | Name of the application |
| version | Version of the application. Required if you have multiple versions of the application enabled. Otherwise, version is optional. For more information on application versioning, see [Azure Blockchain Workbench application versioning](version-app.md). |
| workflowName | Name of the workflow |
The request requires the following fields:
| **Name** | **Description** |
|--|--|
| requestId | Client supplied GUID |
-| userChainIdentifier | Address of the user that was created on the blockchain network. In Ethereum, this address is the userΓÇÖs **on chain** address. |
+| userChainIdentifier | Address of the user that was created on the blockchain network. In Ethereum, this address is the user's **on chain** address. |
| contractLedgerIdentifier | Address of the contract on the ledger |
| version | Version of the application. Required if you have multiple versions of the application enabled. Otherwise, version is optional. For more information on application versioning, see [Azure Blockchain Workbench application versioning](version-app.md). |
| workflowFunctionName | Name of the workflow function |
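To make the field tables above concrete, here is a minimal, hypothetical sketch of sending a contract-action request over Service Bus with the Python `azure-servicebus` package. The connection string, queue name, and any envelope fields beyond those documented above are assumptions; check your deployment's message schema for the exact format:

```python
# Minimal sketch: sending a contract-action request message to the
# Workbench Service Bus input queue. Connection string and queue name
# are placeholders for your own deployment.
import json
import uuid
from azure.servicebus import ServiceBusClient, ServiceBusMessage

payload = {
    "requestId": str(uuid.uuid4()),                 # client-supplied GUID
    "userChainIdentifier": "0x<on-chain-address>",  # user's on-chain address
    "contractLedgerIdentifier": "0x<contract-address>",
    "workflowFunctionName": "<function-name>",
    "parameters": [],  # function arguments, if any
}

with ServiceBusClient.from_connection_string("<connection-string>") as client:
    with client.get_queue_sender(queue_name="<ingress-queue>") as sender:
        sender.send_messages(ServiceBusMessage(json.dumps(payload)))
```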
If a user wants to use Event Grid to be notified about events that happen in Blo
2. Create a new function. 3. Locate the template for Event Grid. Basic template code for reading the message is shown. Modify the code as needed. 4. Save the Function.
-5. Select the Event Grid from Blockchain WorkbenchΓÇÖs resource group.
+5. Select the Event Grid from Blockchain Workbench's resource group.
### Consuming Event Grid events with Logic Apps
If a user wants to use Event Grid to be notified about events that happen in Blo
Service Bus Topics can be used to notify users about events that happen in Blockchain Workbench.
-1. Browse to the Service Bus within the WorkbenchΓÇÖs resource group.
+1. Browse to the Service Bus within the Workbench's resource group.
2. Select **Topics**. 3. Select **egress-topic**. 4. Create a new subscription to this topic. Obtain a key for it.
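As a hedged illustration of consuming those messages (not part of the original steps), the following Python sketch uses the `azure-servicebus` package to read from the subscription created above; the connection string and subscription name are placeholders:

```python
# Minimal sketch: receiving Workbench events from the egress-topic
# subscription created in the steps above.
from azure.servicebus import ServiceBusClient

with ServiceBusClient.from_connection_string("<connection-string>") as client:
    receiver = client.get_subscription_receiver(
        topic_name="egress-topic",
        subscription_name="<your-subscription>",
    )
    with receiver:
        for message in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(str(message))                 # inspect the event payload
            receiver.complete_message(message)  # remove it from the subscription
```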
blockchain Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/overview.md
Title: Azure Blockchain Workbench Preview overview description: Overview of Azure Blockchain Workbench Preview and its capabilities. Previously updated : 05/22/2020 Last updated : 02/18/2022 #Customer intent: As a developer or administrator, I want to understand what Azure Blockchain Workbench is and its capabilities. # What is Azure Blockchain Workbench? + Azure Blockchain Workbench Preview is a collection of Azure services and capabilities designed to help you create and deploy blockchain applications to share business processes and data with other organizations. Azure Blockchain Workbench provides the infrastructure scaffolding for building blockchain applications, enabling developers to focus on creating business logic and smart contracts. It also makes it easier to create blockchain applications by integrating several Azure services and capabilities to help automate common development tasks. [!INCLUDE [Preview note](./includes/preview.md)]
blockchain Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/troubleshooting.md
Title: Azure Blockchain Workbench troubleshooting description: How to troubleshoot an Azure Blockchain Workbench Preview application. Previously updated : 10/14/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to know how I can troubleshoot a blockchain application in Azure Blockchain Workbench.
# Azure Blockchain Workbench Preview troubleshooting + A PowerShell script is available to assist with developer debugging or support. The script generates a summary and collects detailed logs for troubleshooting. Collected logs include: * Blockchain network, such as Ethereum
blockchain Use Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/use-api.md
Title: Using Azure Blockchain Workbench REST APIs description: Scenarios for how to use the Azure Blockchain Workbench Preview REST API Previously updated : 03/05/2020 Last updated : 02/18/2022 #Customer intent: As a developer, I want to understand the Azure Blockchain Workbench REST API so that I can integrate apps with Blockchain Workbench. # Using the Azure Blockchain Workbench Preview REST API + Azure Blockchain Workbench Preview REST API provides developers and information workers a way to build rich integrations to blockchain applications. This article highlights several scenarios of how to use the Workbench REST API. For example, suppose you want to create a custom blockchain client that allows signed-in users to view and interact with their assigned blockchain applications. The client can use the Blockchain Workbench API to view contract instances and take actions on smart contracts. ## Blockchain Workbench API endpoint
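As a sketch of such a client (the `/api/v2/contracts` route, the response shape, and the token acquisition are assumptions here; consult the Workbench API reference for the exact routes and Azure AD auth flow), a Python caller might look like:

```python
# Minimal sketch: listing contract instances through the Workbench REST API.
# The API base URL comes from your deployment; the route and response
# shape below are assumptions.
import requests

api_base = "https://<your-workbench-api>.azurewebsites.net"
token = "<azure-ad-access-token>"  # acquired via your Azure AD app registration

resp = requests.get(
    f"{api_base}/api/v2/contracts",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for contract in resp.json().get("contracts", []):
    print(contract.get("id"), contract.get("contractCodeId"))
```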
blockchain Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/use.md
Title: Using applications in Azure Blockchain Workbench description: Tutorial on how to use application contracts in Azure Blockchain Workbench Preview. Previously updated : 10/14/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to use a blockchain application I created in Azure Blockchain Workbench.
# Tutorial: Using applications in Azure Blockchain Workbench + You can use Blockchain Workbench to create and take actions on contracts. You can also view contract details such as status and transaction history. You'll learn how to:
blockchain Version App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/blockchain/workbench/version-app.md
Title: Blockchain app versioning - Azure Blockchain Workbench description: How to use application versions in Azure Blockchain Workbench Preview. Previously updated : 11/20/2019 Last updated : 02/18/2022 #Customer intent: As a developer, I want to create and use multiple versions of an Azure Blockchain Workbench app. # Azure Blockchain Workbench Preview application versioning + You can create and use multiple versions of an Azure Blockchain Workbench Preview app. If multiple versions of the same application are uploaded, a version history is available and users can choose which version they want to use. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
cdn Cdn Restrict Access By Country Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-restrict-access-by-country-region.md
In the country/region filtering rules table, select the delete icon next to a ru
* Only one rule can be applied to the same relative path. That is, you can't create multiple country/region filters that point to the same relative path. However, because country/region filters are recursive, a folder can have multiple country/region filters. In other words, a subfolder of a previously configured folder can be assigned a different country/region filter.
-* The geo-filtering feature uses country/region codes to define the countries/regions from which a request is allowed or blocked for a secured directory. **Azure CDN from Verizon** and **Azure CDN from Akamai** profiles use ISO 3166-1 alpha-2 country codes to define the countries from which a request will be allowed or blocked for a secured directory.
+* The geo-filtering feature uses [country/region codes](microsoft-pop-abbreviations.md) to define the countries/regions from which a request is allowed or blocked for a secured directory. **Azure CDN from Verizon** and **Azure CDN from Akamai** profiles use ISO 3166-1 alpha-2 country codes to define the countries from which a request will be allowed or blocked for a secured directory.
cloud-services Cloud Services Custom Domain Name Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-custom-domain-name-portal.md
To create a CNAME record, you must add a new entry in the DNS table for your cus
![quick glance section showing the site URL][csurl] **OR**
- * Install and configure [Azure Powershell](/powershell/azure/), and then use the following command:
+ * Install and configure [Azure PowerShell](/powershell/azure/), and then use the following command:
```powershell
Get-AzureDeployment -ServiceName yourservicename | Select Url
```
To create an A record, you must first find the virtual IP address of your cloud
![quick glance section showing the VIP][vip] **OR**
- * Install and configure [Azure Powershell](/powershell/azure/), and then use the following command:
+ * Install and configure [Azure PowerShell](/powershell/azure/), and then use the following command:
```powershell
get-azurevm -servicename yourservicename | get-azureendpoint -VM {$_.VM} | select Vip
```
cloud-services Cloud Services Role Enable Remote Desktop Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-role-enable-remote-desktop-visual-studio.md
To use the RDP extension from Azure DevOps Services, include the following detai
1. After your build steps, add the **Azure Cloud Service Deployment** step and set its properties.
-1. After the deployment step, add an **Azure Powershell** step, set its **Display name** property to "Azure Deployment: Enable RDP Extension" (or another suitable name), and select your appropriate Azure subscription.
+1. After the deployment step, add an **Azure PowerShell** step, set its **Display name** property to "Azure Deployment: Enable RDP Extension" (or another suitable name), and select your appropriate Azure subscription.
1. Set **Script Type** to "Inline" and paste the code below into the **Inline Script** field. (You can also create a `.ps1` file in your project with this script, set **Script Type** to "Script File Path", and set **Script Path** to point to the file.)
cognitive-services Patterns Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/concepts/patterns-features.md
The machine-learning entity in this example is more complex with nested subentit
This example uses features at the subentity level and child of subentity level. Which level gets what kind of phrase list or model as a feature is an important part of your entity design.
-While subentities can have many phrase lists as features that help detect the entity, each subentity has only one model as a feature. In this [pizza app](/Azure/pizza_luis_bot/blob/master/CognitiveModels/MicrosoftPizza.json), those models are primarily lists.
+While subentities can have many phrase lists as features that help detect the entity, each subentity has only one model as a feature. In this [pizza app](https://github.com/Azure/pizza_luis_bot/blob/master/CognitiveModels/MicrosoftPizza.json), those models are primarily lists.
:::image type="content" source="../media/luis-concept-patterns/pizza-example-example-phrase-lists.png" alt-text="A screenshot showing a machine learning entity many phrase lists as features." lightbox="../media/luis-concept-patterns/pizza-example-example-phrase-lists.png":::
cognitive-services Luis Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-traffic-manager.md
This article explains how to manage the traffic across keys with Azure [Traffic
## Connect to PowerShell in the Azure portal In the [Azure][azure-portal] portal, open the PowerShell window. The icon for the PowerShell window is the **>_** in the top navigation bar. By using PowerShell from the portal, you get the latest PowerShell version and you are authenticated. PowerShell in the portal requires an [Azure Storage](https://azure.microsoft.com/services/storage/) account.
-![Screenshot of Azure portal with Powershell window open](./media/traffic-manager/azure-portal-powershell.png)
+![Screenshot of Azure portal with PowerShell window open](./media/traffic-manager/azure-portal-powershell.png)
The following sections use [Traffic Manager PowerShell cmdlets](/powershell/module/az.trafficmanager/#traffic_manager).
cognitive-services Luis Tutorial Bing Spellcheck https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-tutorial-bing-spellcheck.md
Two solutions are:
* Create a phrase list with all variations of the word. With this solution, you do not need to label the word variations in the example utterances. ## Next steps
-[Learn more about example utterances](/how-to/entities.md)
+[Learn more about example utterances](./how-to/entities.md)
cognitive-services Custom Keyword Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-keyword-basics.md
Last updated 11/12/2021
ms.devlang: csharp, objective-c, python
-zone_pivot_groups: keyword-quickstart
+zone_pivot_groups: programming-languages-speech-services
# Get started with Custom Keyword
zone_pivot_groups: keyword-quickstart
[!INCLUDE [C# include](includes/quickstarts/keyword-recognition/csharp.md)] ::: zone-end +++ ::: zone-end ::: zone pivot="programming-language-objectivec" ::: zone-end ::: zone pivot="programming-language-swift" [!INCLUDE [Swift include](includes/quickstarts/keyword-recognition/swift.md)] ::: zone-end +++ ## Next steps > [!div class="nextstepaction"]
cognitive-services Get Started Speaker Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speaker-recognition.md
Last updated 01/08/2022
ms.devlang: cpp, csharp, javascript
-zone_pivot_groups: programming-languages-set-twenty-five
+zone_pivot_groups: programming-languages-speech-services
keywords: speaker recognition, voice biometry
keywords: speaker recognition, voice biometry
[!INCLUDE [C++ include](includes/quickstarts/speaker-recognition-basics/cpp.md)] ::: zone-end ++ ::: zone pivot="programming-language-javascript" [!INCLUDE [JavaScript include](includes/quickstarts/speaker-recognition-basics/javascript.md)] ::: zone-end +++ ::: zone pivot="programming-language-rest" [!INCLUDE [REST include](includes/quickstarts/speaker-recognition-basics/rest.md)] ::: zone-end + ## Next steps > [!div class="nextstepaction"]
cognitive-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md
Title: "Speech-to-text quickstart - Speech service"
-description: Learn how to use the Speech SDK to convert speech to text, including object construction, supported audio input formats, and configuration options for speech recognition.
+description: Use the Speech SDK to convert speech to text with recognition from a microphone.
Previously updated : 01/08/2022 Last updated : 02/11/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
keywords: speech to text, speech to text software
## Next steps > [!div class="nextstepaction"]
-> [See the quickstart samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart)
+> [Learn more about speech recognition](how-to-recognize-speech.md)
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
The application settings that you use as REST API [request parameters](#request-
* The **Endpoint key** shows the subscription key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header. * The **Endpoint URL** shows your service region. Use the value that precedes `voice.speech.microsoft.com` as your service region request parameter. For example, use `eastus` if the endpoint URL is `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1`. * The **Endpoint URL** shows your endpoint ID. Use the value appended to the `?deploymentId=` query parameter as the value of your endpoint ID request parameter.
-* The Azure region the endpoint is associated with.
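Putting those request parameters together, a minimal sketch of calling the endpoint with Python `requests` might look like the following. The `Content-Type` and `X-Microsoft-OutputFormat` headers and the SSML body follow the standard text-to-speech REST API and are assumptions here, while the region, endpoint ID, and key come from the endpoint details described above:

```python
# Minimal sketch: calling a custom voice endpoint with the request
# parameters described above. Region, endpoint ID, key, and voice name
# are placeholders from your endpoint's details page.
import requests

region = "eastus"                    # value that precedes voice.speech.microsoft.com
endpoint_id = "<your-endpoint-id>"   # value appended to ?deploymentId=
url = f"https://{region}.voice.speech.microsoft.com/cognitiveservices/v1"

ssml = (
    "<speak version='1.0' xml:lang='en-US'>"
    "<voice name='<your-custom-voice-name>'>Hello from my custom voice.</voice>"
    "</speak>"
)
resp = requests.post(
    url,
    params={"deploymentId": endpoint_id},
    headers={
        "Ocp-Apim-Subscription-Key": "<endpoint-key>",
        "Content-Type": "application/ssml+xml",                    # assumption
        "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",   # assumption
    },
    data=ssml.encode("utf-8"),
)
resp.raise_for_status()
with open("output.wav", "wb") as f:
    f.write(resp.content)
```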
#### Get endpoint
cognitive-services How To Recognize Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-recognize-speech.md
+
+ Title: "How to recognize speech - Speech service"
+
+description: Learn how to use the Speech SDK to convert speech to text, including object construction, supported audio input formats, and configuration options for speech recognition.
++++++ Last updated : 02/17/2022+
+ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
+zone_pivot_groups: programming-languages-speech-services
+keywords: speech to text, speech to text software
++
+# How to recognize speech
+++++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [See the quickstart samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart)
cognitive-services How To Select Audio Input Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-select-audio-input-devices.md
# Select an audio input device with the Speech SDK
-Version 1.3.0 of the Speech SDK introduces an API to select the audio input. This article describes how to obtain the IDs of the audio devices connected to a system. These IDs can then be used in the Speech SDK. You configure the audio device through the `AudioConfig` object:
+This article describes how to obtain the IDs of the audio devices connected to a system. These IDs can then be used in the Speech SDK to select the audio input. You configure the audio device through the `AudioConfig` object:
```C++
audioConfig = AudioConfig::FromMicrophoneInput("<device id>");
```
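For comparison, a minimal sketch of the same device selection in the Python Speech SDK (key, region, and device ID are placeholders; device ID formats are platform-specific):

```python
# Minimal sketch: selecting an audio input device in the Python Speech SDK.
import azure.cognitiveservices.speech as speechsdk

audio_config = speechsdk.audio.AudioConfig(device_name="<device id>")
speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, audio_config=audio_config
)
```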
cognitive-services How To Use Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-conversation-transcription.md
You can transcribe meetings and other conversations with the ability to add, rem
This article assumes that you have an Azure account and Speech service subscription. If you don't have an account and subscription, [try the Speech service for free](overview.md#try-the-speech-service-for-free).
+> [!NOTE]
+> The Speech SDK for C++, Java, Objective-C, and Swift supports Conversation Transcription, but we haven't yet included a guide here.
+ ::: zone pivot="programming-language-javascript" [!INCLUDE [JavaScript Basics include](includes/how-to/conversation-transcription/real-time-javascript.md)] ::: zone-end
cognitive-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-logging.md
Logging to file is an optional feature for the Speech SDK. During development, logging provides additional information and diagnostics from the Speech SDK's core components. It can be enabled by setting the property `Speech_LogFilename` on a speech configuration object to the location and name of the log file. Logging is handled by a static class in the Speech SDK's native library. You can turn on logging for any Speech SDK recognizer or synthesizer instance. All instances in the same process write log entries to the same log file. > [!NOTE]
-> Logging is available since Speech SDK version 1.4.0 in all supported Speech SDK programming languages, with the exception of JavaScript.
+> Logging is available in all supported Speech SDK programming languages, with the exception of JavaScript.
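For example, a minimal Python sketch of enabling file logging (subscription key and region are placeholders):

```python
# Minimal sketch: enabling Speech SDK file logging by setting the
# Speech_LogFilename property on the configuration object.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
speech_config.set_property(
    speechsdk.PropertyId.Speech_LogFilename, "speech_sdk.log"
)
# Any recognizer or synthesizer created from this config now writes
# diagnostics to speech_sdk.log.
```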
## Sample
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
The Speech service is the unification of speech-to-text, text-to-speech, and speech translation into a single Azure subscription. It's easy to speech enable your applications, tools, and devices with the [Speech CLI](spx-overview.md), [Speech SDK](./speech-sdk.md), [Speech Studio](speech-studio-overview.md), or [REST APIs](#reference-docs).
-> [!IMPORTANT]
-> The Speech service has replaced the Bing Speech API and Translator Speech. For migration instructions, see the _Migration_ section.
- The following features are part of the Speech service. Use the links in this table to learn more about common use-cases for each feature. You can also browse the API reference.

| Service | Feature | Description | SDK | REST |
|--|--|--|--|--|
The following features are part of the Speech service. Use the links in this tab
| [Text-to-speech](text-to-speech.md) | Prebuilt neural voices | Text-to-speech converts input text into humanlike synthesized speech by using the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). Use neural voices, which are humanlike voices powered by deep neural networks. See [Language support](language-support.md). | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
| | [Custom neural voices](#customize-your-speech-experience) | Create custom neural voice fonts unique to your brand or product. | No | [Yes](#reference-docs) |
| [Speech translation](speech-translation.md) | Speech translation | Speech translation enables real-time, multilanguage translation of speech to your applications, tools, and devices. Use this feature for speech-to-speech and speech-to-text translation. | [Yes](./speech-sdk.md) | No |
+| [Language identification](language-identification.md) | Language identification | Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md). Use language identification by itself, with speech-to-text recognition, or with speech translation. | [Yes](./speech-sdk.md) | No |
| [Voice assistants](voice-assistants.md) | Voice assistants | Voice assistants using the Speech service empower developers to create natural, humanlike conversational interfaces for their applications and experiences. The voice assistant feature provides fast, reliable interaction between a device and an assistant implementation that uses the Bot Framework's Direct Line Speech channel or the integrated custom commands service for task completion. | [Yes](voice-assistants.md) | No |
| [Speaker recognition](speaker-recognition-overview.md) | Speaker verification and identification | Speaker recognition provides algorithms that verify and identify speakers by their unique voice characteristics. Speaker recognition is used to answer the question, "Who is speaking?". | Yes | [Yes](/rest/api/speakerrecognition/) |
The following features are part of the Speech service. Use the links in this tab
For the following steps, you need a Microsoft account and an Azure account. If you don't have a Microsoft account, you can sign up for one free of charge at the [Microsoft account portal](https://account.microsoft.com/account). Select **Sign in with Microsoft**. When you're asked to sign in, select **Create a Microsoft account**. Follow the steps to create and verify your new Microsoft account.
-After you have a Microsoft account, go to the [Azure sign-up page](https://azure.microsoft.com/free/ai/) and select **Start free**. Create a new Azure account by using a Microsoft account. Here's a video of [how to sign up for an Azure free account](https://www.youtube.com/watch?v=GWT2R1C_uUU).
-
-> [!NOTE]
-> When you sign up for a free Azure account, it comes with $200 in service credit that you can apply toward a paid Speech service subscription, valid for up to 30 days. Your Azure services are disabled when your credit runs out or expires at the end of the 30 days. To continue using Azure services, you must upgrade your account. For more information, see [Upgrade your Azure free account](../../cost-management-billing/manage/upgrade-azure-subscription.md).
->
-> The Speech service has two service tiers, free (f0) and subscription (s0), which have different limitations and benefits. If you use the free, low-volume Speech service tier, you can keep this free subscription even after your free trial or service credit expires. For more information, see [Cognitive Services pricing - Speech service](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+After you have a Microsoft account, go to the [Azure sign-up page](https://azure.microsoft.com/free/ai/) and select **Start free**. Create a new Azure account by using a Microsoft account. Here's a video of [how to sign up for an Azure free account](https://www.youtube.com/watch?v=GWT2R1C_uUU).
### Create the Azure resource
Other products offer speech models tuned for specific purposes, like healthcare
## Next steps
-> [!div class="nextstepaction"]
-> * [Get started with speech-to-text](./get-started-speech-to-text.md)
-> * [Get started with text-to-speech](get-started-text-to-speech.md)
+* [Get started with speech-to-text](./get-started-speech-to-text.md)
+* [Get started with text-to-speech](get-started-text-to-speech.md)
cognitive-services Multi Device Conversation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/multi-device-conversation.md
[!INCLUDE [Header](../includes/quickstarts/multi-device-conversation/header.md)]
+> [!NOTE]
+> The Speech SDK for Java, JavaScript, Objective-C, and Swift supports Multi-device Conversation, but we haven't yet included a guide here.
+ ::: zone pivot="programming-language-csharp" [!INCLUDE [Header](../includes/quickstarts/multi-device-conversation/csharp/header.md)] [!INCLUDE [csharp](../includes/quickstarts/multi-device-conversation/csharp/csharp.md)]
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstarts/voice-assistants.md
zone_pivot_groups: programming-languages-voice-assistants
[!INCLUDE [Header](../includes/quickstarts/voice-assistants/header.md)]
+> [!NOTE]
+> The Speech SDK for C++, JavaScript, Objective-C, Python, and Swift supports custom voice assistants, but we haven't yet included a guide here.
+ ::: zone pivot="programming-language-csharp" [!INCLUDE [Header](../includes/quickstarts/voice-assistants/csharp/header.md)] [!INCLUDE [csharp](../includes/quickstarts/voice-assistants/csharp/csharp.md)]
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md
This article assumes that you have working knowledge of the Command Prompt windo
> [!NOTE] > In PowerShell, the [stop-parsing token](/powershell/module/microsoft.powershell.core/about/about_special_characters#stop-parsing-token) (`--%`) should follow `spx`. For example, run `spx --% config @region` to view the current region config value.
+
+## Download and install
[!INCLUDE [](includes/spx-setup.md)] - ## Create a subscription configuration # [Terminal](#tab/terminal)
cognitive-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-virtual-networks.md
To apply a virtual network rule to a Cognitive Services resource, the user must
Cognitive Services resource and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant. > [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through Powershell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
### Managing virtual network rules
You can manage virtual network rules for Cognitive Services resources through th
> [!NOTE] > If a service endpoint for Azure Cognitive Services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation. >
- > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use Powershell, CLI or REST APIs.
+ > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, CLI or REST APIs.
1. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**.
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/join-teams-meeting.md
It's currently not possible for a Teams user to join a call that was initiated u
## Enabling anonymous meeting join in your Teams tenant
-When a BYOI user joins a Teams meeting, they're treated as an anonymous external user, similar to users that join a Teams meeting anonymously using the Teams web application. The ability for BYOI users to join Teams meetings as anonymous users is controlled by the existing "allow anonymous meeting join" configuration. This same configuration also controls the existing Teams anonymous meeting join. This setting can be updated in the [Teams admin center](https://admin.teams.microsoft.com/meetings/settings) or with the Teams PowerShell cmdlet [Set-CsTeamsMeetingConfiguration](/powershell/module/skype/set-csteamsmeetingconfiguration).
+When a BYOI user joins a Teams meeting, they're treated as an anonymous external user, similar to users that join a Teams meeting anonymously using the Teams web application. The ability for BYOI users to join Teams meetings as anonymous users is controlled by the same Teams settings that control anonymous meeting join using the Teams web application, and is enabled by default. The article [Manage meeting settings in Microsoft Teams](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) describes these settings.
Custom applications built with Azure Communication Services to connect and communicate with Teams users can be used by end users or by bots, and there is no differentiation in how they appear to Teams users unless the developer of the application explicitly indicates this as part of the communication. Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
Microsoft will indicate to you via the Azure Communication Services API that rec
- [How-to: Join a Teams meeting](../how-tos/calling-sdk/teams-interoperability.md) - [Quickstart: Join a BYOI calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)-- [Quickstart: Join a BYOI chat app to a Teams meeting](../quickstarts/chat/meeting-interop.md)
+- [Quickstart: Join a BYOI chat app to a Teams meeting](../quickstarts/chat/meeting-interop.md)
communication-services Subscribe Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/subscribe-events.md
If the storage account, queue or system topic do not exist, they will be created
### Parameters -- **Azure Communication Services Resource Name**: The name of your Azure Communication Services resource. For example, if the endpoint to your resource is https://contoso.communication.azure.net, then set to `contoso`.
+- **Azure Communication Services Resource Name**: The name of your Azure Communication Services resource. For example, if the endpoint to your resource is `https://contoso.communication.azure.net`, then set to `contoso`.
- **Storage Name**: The name of your Azure Storage Account. If it does not exist, it will be created. - **Event Sub Name**: The name of the event subscription to create. - **System Topic Name**: If you have existing event subscriptions on your ACS resource, find the `System Topic` name in the `Events` tab of your ACS resource. Otherwise, specify a unique name such as the ACS resource name itself.
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
The following example shows how to create a memory scaling rule.
- In this example, the container app scales when memory usage exceeds 50%. - At a minimum, a single replica remains in memory for apps that scale based on memory utilization.
-## Azure Pipelines
-
-Azure Pipelines scaling allows your container app to scale in or out depending on the number of jobs in the Azure DevOps agent pool. With Azure Pipelines, your app can scale to zero, but you need [at least one agent registered in the pool schedule additional agents](https://keda.sh/blog/2021-05-27-azure-pipelines-scaler/). For more information regarding this scaler, see [KEDA Azure Pipelines scaler](https://keda.sh/docs/2.4/scalers/azure-pipelines/).
-
-The following example shows how to create a memory scaling rule.
-
-```json
-{
- ...
- "resources": {
- ...
- "properties": {
- ...
- "template": {
- ...
- "scale": {
- "minReplicas": "0",
- "maxReplicas": "10",
- "rules": [{
- "name": "azdo-agent-scaler",
- "custom": {
- "type": "azure-pipelines",
- "metadata": {
- "poolID": "<pool id>",
- "targetPipelinesQueueLength": "1"
- },
- "auth": [
- {
- "secretRef": "<secret reference pat>",
- "triggerParameter": "personalAccessToken"
- },
- {
- "secretRef": "<secret reference Azure DevOps url>",
- "triggerParameter": "organizationURL"
- }
- ]
- }
- }]
- }
- }
- }
- }
-}
-```
-
-In this example, the container app scales when at least one job is waiting in the pool queue.
- ## Considerations - Vertical scaling is not supported.
cosmos-db Graph Visualization Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-visualization-partners.md
The interactive interface of Linkurious Enterprise offers an easy way to investi
* [Product details](https://linkurio.us/product/) * [Documentation](https://doc.linkurio.us/)
-* [Demo](https://resources.linkurio.us/demo)
+* [Demo](https://linkurious.com/demo/)
* [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/linkurious.linkurious001?tab=overview) ## Cambridge Intelligence
Typical use-cases and data models include:
### Next Steps
-* [Cosmos DB - Gremlin API Pricing](../how-pricing-works.md)
+* [Cosmos DB - Gremlin API Pricing](../how-pricing-works.md)
cosmos-db How To Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-firewall.md
Title: Configure an IP firewall for your Azure Cosmos DB account
description: Learn how to configure IP access control policies for firewall support on Azure Cosmos accounts. Previously updated : 03/03/2021 Last updated : 02/18/2022
You can secure the data stored in your Azure Cosmos DB account by using IP firew
To set the IP access control policy in the Azure portal, go to the Azure Cosmos DB account page and select **Firewall and virtual networks** on the navigation menu. Change the **Allow access from** value to **Selected networks**, and then select **Save**.
-![Screenshot showing how to open the Firewall page in the Azure portal](./media/how-to-configure-firewall/azure-portal-firewall.png)
When IP access control is turned on, the Azure portal provides the ability to specify IP addresses, IP address ranges, and switches. Switches enable access to other Azure services and the Azure portal. The following sections give details about these switches.
When you enable an IP access control policy programmatically, you need to add th
You can enable requests to access the Azure portal by selecting the **Allow access from Azure portal** option, as shown in the following screenshot:
-![Screenshot showing how to enable Azure portal access](./media/how-to-configure-firewall/enable-azure-portal.png)
### Allow requests from global Azure datacenters or other sources within Azure If you access your Azure Cosmos DB account from services that don't provide a static IP (for example, Azure Stream Analytics and Azure Functions), you can still use the IP firewall to limit access. You can enable access from other sources within Azure by selecting the **Accept connections from within Azure datacenters** option, as shown in the following screenshot:
-![Screenshot showing how to accept connections from Azure datacenters](./media/how-to-configure-firewall/enable-azure-services.png)
When you enable this option, the IP address `0.0.0.0` is added to the list of allowed IP addresses. The `0.0.0.0` IP address restricts requests to your Azure Cosmos DB account from Azure datacenter IP range. This setting does not allow access for any other IP ranges to your Azure Cosmos DB account.
The portal automatically detects the client IP address. It might be the client I
To add your current IP to the list of IPs, select **Add my current IP**. Then select **Save**. ### Requests from cloud services
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cmk.md
description: Learn how to configure customer-managed keys for your Azure Cosmos
Previously updated : 02/03/2022 Last updated : 02/18/2022 ms.devlang: azurecli
If you're using an existing Azure Key Vault instance, you can verify that these
1. Under **Select principal**, select **None selected**.
-1. Search for **Azure Cosmos DB** principal and select it (to make it easier to find, you can also search by principal ID: `a232010e-820c-4083-83bb-3ace5fc29d0b` for any Azure region except Azure Government regions where the principal ID is `57506a73-e302-42a9-b869-6f12d9ec29e9`). If the **Azure Cosmos DB** principal isn't in the list, you might need to re-register the **Microsoft.DocumentDB** resource provider as described in the [Register the resource provider](#register-resource-provider) section of this article.
+1. Search for **Azure Cosmos DB** principal and select it (to make it easier to find, you can also search by application ID: `a232010e-820c-4083-83bb-3ace5fc29d0b` for any Azure region except Azure Government regions where the application ID is `57506a73-e302-42a9-b869-6f12d9ec29e9`). If the **Azure Cosmos DB** principal isn't in the list, you might need to re-register the **Microsoft.DocumentDB** resource provider as described in the [Register the resource provider](#register-resource-provider) section of this article.
> [!NOTE] > This registers the Azure Cosmos DB first-party-identity in your Azure Key Vault access policy. To replace this first-party identity by your Azure Cosmos DB account managed identity, see [Using a managed identity in the Azure Key Vault access policy](#using-managed-identity).
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
Previously updated : 02/15/2022 Last updated : 02/18/2022 ms.devlang: csharp
Some settings in `ConnectionPolicy` have been renamed or replaced:
|`EnableEndpointRediscovery`|`LimitToEndpoint` - The value is now inverted, if `EnableEndpointRediscovery` was being set to `true`, `LimitToEndpoint` should be set to `false`. Before using this setting, you need to understand [how it affects the client](troubleshoot-sdk-availability.md).|
|`ConnectionProtocol`|Removed. Protocol is tied to the Mode, either it's Gateway (HTTPS) or Direct (TCP). Direct mode with HTTPS protocol is no longer supported on V3 SDK and the recommendation is to use TCP protocol. |
|`MediaRequestTimeout`|Removed. Attachments are no longer supported.|
+|`SetCurrentLocation`|`CosmosClientOptions.ApplicationRegion` can be used to achieve the same effect.|
+|`PreferredLocations`|`CosmosClientOptions.ApplicationPreferredRegions` can be used to achieve the same effect.|
### Indexing policy
cosmos-db Performance Tips Dotnet Sdk V3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips-dotnet-sdk-v3-sql.md
Previously updated : 01/25/2022 Last updated : 02/18/2022 ms.devlang: csharp
If you're testing at high throughput levels, or at rates that are greater than 5
> [!NOTE] > High CPU usage can cause increased latency and request timeout exceptions.
+## <a id="metadata-operations"></a> Metadata operations
+
+Do not verify that a database or container exists by calling `Create...IfNotExistsAsync` or `Read...Async` in the hot path or before performing an item operation. Validate existence only at application startup, and only when necessary (that is, if you expect the resources to be deleted; otherwise the check isn't needed). These metadata operations generate extra end-to-end latency, have no SLA, and have their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429) that do not scale like data operations.
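As a minimal sketch of the recommended pattern, shown here with the Python SDK rather than the .NET SDK this article covers (endpoint, key, and names are placeholders):

```python
# Minimal sketch: resolve database and container once at startup, then
# reuse the handles in the hot path with no existence checks.
from azure.cosmos import CosmosClient, PartitionKey

# Startup: create-if-not-exists is acceptable here, and only here.
client = CosmosClient("<account-endpoint>", credential="<account-key>")
database = client.create_database_if_not_exists("appdb")
container = database.create_container_if_not_exists(
    id="items", partition_key=PartitionKey(path="/pk")
)

# Hot path: data operations only, on the cached container handle.
container.upsert_item({"id": "1", "pk": "p1", "value": 42})
```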
+ ## <a id="logging-and-tracing"></a> Logging and tracing Some environments have the [.NET DefaultTraceListener](/dotnet/api/system.diagnostics.defaulttracelistener) enabled. The DefaultTraceListener poses performance issues on production environments causing high CPU and I/O bottlenecks. Check and make sure that the DefaultTraceListener is disabled for your application by removing it from the [TraceListeners](/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners) on production environments.
cosmos-db Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/performance-tips.md
Previously updated : 01/24/2022 Last updated : 02/18/2022 ms.devlang: csharp
If you're testing at high throughput levels (more than 50,000 RU/s), the client
> [!NOTE] > High CPU usage can cause increased latency and request timeout exceptions.
+## <a id="metadata-operations"></a> Metadata operations
+
+Do not verify that a database or collection exists by calling `Create...IfNotExistsAsync` or `Read...Async` in the hot path or before performing an item operation. Validate existence only at application startup, and only when necessary (that is, if you expect the resources to be deleted; otherwise the check isn't needed). These metadata operations generate extra end-to-end latency, have no SLA, and have their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429) that do not scale like data operations.
+ ## <a id="logging-and-tracing"></a> Logging and tracing Some environments have the [.NET DefaultTraceListener](/dotnet/api/system.diagnostics.defaulttracelistener) enabled. The DefaultTraceListener poses performance issues on production environments causing high CPU and I/O bottlenecks. Check and make sure that the DefaultTraceListener is disabled for your application by removing it from the [TraceListeners](/dotnet/framework/debug-trace-profile/how-to-create-and-initialize-trace-listeners) on production environments.
cosmos-db Sql Query Geospatial Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-geospatial-intro.md
Previously updated : 02/25/2021 Last updated : 02/17/2022
Azure Cosmos DB supports the following spatial data types:
- Polygon - MultiPolygon
+> [!TIP]
+> Currently spatial data in Azure Cosmos DB is not supported by Entity Framework. Please use one of the Azure Cosmos DB SDKs instead.
+ ### Points A **Point** denotes a single position in space. In geospatial data, a Point represents the exact location, which could be a street address of a grocery store, a kiosk, an automobile, or a city. A point is represented in GeoJSON (and Azure Cosmos DB) using its coordinate pair or longitude and latitude.
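As a hedged illustration with the Python SDK (the database, container, property names, and the 3000-meter radius are illustrative assumptions), storing and querying a Point might look like:

```python
# Minimal sketch: storing a GeoJSON Point and querying by distance with
# the Cosmos DB ST_DISTANCE SQL function.
from azure.cosmos import CosmosClient

client = CosmosClient("<account-endpoint>", credential="<account-key>")
container = client.get_database_client("appdb").get_container_client("places")

# GeoJSON stores coordinates as [longitude, latitude].
container.upsert_item({
    "id": "store-1",
    "pk": "stores",
    "location": {"type": "Point", "coordinates": [-122.12, 47.66]},
})

query = (
    "SELECT c.id FROM c WHERE ST_DISTANCE(c.location, "
    "{'type': 'Point', 'coordinates': [-122.12, 47.66]}) < 3000"
)
for item in container.query_items(query, enable_cross_partition_query=True):
    print(item["id"])
```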
cost-management-billing Cost Analysis Built In Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-analysis-built-in-views.md
+
+ Title: Use built-in views in Cost analysis
+
+description: This article helps you understand when to use which view, how each one provides unique insights about your costs, and the recommended next steps to investigate further.
++ Last updated : 02/17/2022++++++
+# Use built-in views in Cost analysis
+
+Cost Management includes several tools to help you view and monitor your cloud costs. As you get started, cost analysis is the first one you should familiarize yourself with. And within cost analysis, you'll start with built-in views. This article helps you understand when to use which view, how each one provides unique insights about your costs, and the recommended next steps to investigate further.
+
+## Access built-in views
+
+When you're in classic Cost analysis, you can access the preview views at the top of the page with the **Cost by resource** list.
++
+## Analyze resource costs
+
+Cost Management offers two views to analyze your resource costs:
+
+- **Cost by resource**
+- **Resources (preview)**
+
+Both views are only available when you have a subscription or resource group scope selected.
+
+The classic **Cost by resource** view shows a list of all resources. Information is shown in tabular format.
++
+The preview **Resources** view shows a list of all resources, including deleted resources. The view is like the Cost by resource view in classic cost analysis. Compared to the classic Cost by resource view, the new view:
+
+- Has optimized performance and loads resources faster. It better groups together related costs.
+- Provides improved troubleshooting details.
+- Shows grouped Azure and Marketplace costs together per resource.
+- Shows resource types with icons.
+- Includes a simpler custom date range selection with support for relative date ranges.
+- Allows you to customize the download to exclude nested details. For example, resources without meters in the Resources view.
+- Provides smart insights to help you better understand your data, like subscription cost anomalies.
+
+Use either view to:
+
+- Identify top cost contributors by resource.
+- Understand how you're charged for a resource.
+- Find the biggest opportunities to save money.
+- Stop or delete resources that shouldn't be running.
+- Identify significant month-over-month changes.
+- Identify and tag untagged resources.
++
+## Analyze resource group costs
+
+The **Resource groups** view separates each resource group in your subscription, management group, or billing account, showing nested resources.
+
+Use this view to:
+
+- Identify top cost contributors by resource group.
+- Find the biggest opportunities to save money.
+- Help perform chargeback by resource group.
+- Identify significant month-over-month changes.
+- Identify and tag untagged resources using resource group tags.
++
+## Analyze your subscription costs
+
+The **Subscriptions** view is only available when you have a billing account or management group scope selected. The view separates costs by subscription and resource group.
+
+Use this view to:
+
+- Identify top cost contributors by subscription.
+- Find the biggest opportunities to save money.
+- Help perform chargeback by resource group.
+- Identify significant month-over-month changes.
+- Identify and tag untagged resources using subscription tags.
++
+## Review reservation resource utilization
+
+The **Reservations** view provides a breakdown of amortized reservation costs, allowing you to see which resources are consuming each reservation.
+
+The view shows amortized cost for the last 30 days with a breakdown of the resources that utilized each reservation during that time. Any unused portion of the reservation is also available when viewing cost for billing accounts and billing profiles.
+
+Use this view to:
+
+- Identify under-utilized reservations.
+- Identify significant month-over-month changes.
+- Help perform chargeback for reservations.
+
+### Understand amortized costs
+
+Amortized cost breaks down reservation purchases into daily chunks and spreads them over the duration of the reservation term. For example, instead of seeing a $365 purchase on January 1, you'll see a $1.00 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated by using the specific resources that used the reservation. For example, if that $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of UnusedReservation. Unused reservation costs can be seen only when viewing amortized cost.
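A minimal sketch of that arithmetic, using the example numbers above:

```python
# Minimal sketch of the amortization arithmetic described above,
# using the article's example numbers.
purchase_price = 365.00          # one-year reservation bought January 1
term_days = 365
daily_charge = purchase_price / term_days   # $1.00 per day

# Suppose two VMs each used half of the reservation on a given day:
usage = {"vm-a": 0.5, "vm-b": 0.5}
allocated = {vm: share * daily_charge for vm, share in usage.items()}
unused = daily_charge - sum(allocated.values())  # surfaces as UnusedReservation
print(allocated, round(unused, 2))  # {'vm-a': 0.5, 'vm-b': 0.5} 0.0
```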
+
+Because of the change in how costs are represented, it's important to note that actual cost and amortized cost views will show different total numbers. In general, the total cost of months with a reservation purchase will decrease when viewing amortized costs, and months following a reservation purchase will increase. Amortization is available only for reservation purchases and doesn't apply to any other purchases.
++
+## Break down product and service costs
+
+The **Services view** shows a list of your services and products. This view is like the Invoice details view in classic cost analysis. The main difference is that rows are grouped by service, making it simpler to see your total cost at a service level. It also separates individual products you're using in each service.
+
+Use this view to:
+
+- Identify top cost contributors by service.
+- Find the biggest opportunities to save money.
++
+## Review current cost trends
+
+Use the **Accumulated costs** view to:
+
+- Determine whether your current month's costs are on track with your expectations. For example, forecast, budget, and credit.
++
+## Compare monthly service run rate costs
+
+Use the **Cost by service** view to:
+
+- Review month-over-month changes in cost.
++
+## Reconcile invoiced usage charges
+
+Use the **Invoice details** view to:
+
+- Review and reconcile billed charges.
++
+## Next steps
+
+- Now that you're familiar with using built-in views, read about [Saving and sharing customized views](save-share-views.md).
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
install-module -name Az
The following example commands create a budget. ```azurepowershell-interactive
-#Sign into Azure Powershell with your account
+#Sign into Azure PowerShell with your account
Connect-AzAccount
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
You can opt out of getting your invoice by email by following the steps above an
Azure Government users use the same agreement types as other Azure users.
-Azure Government billing billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
+Azure Government billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
To download your invoice, follow the steps above at [Download invoices for an individual subscription](#download-invoices-for-an-individual-subscription).
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
You may want to share your invoice every month with your accounting team or send
Azure Government users use the same agreement types as other Azure users.
-Azure Government billing billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
+Azure Government billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
To download your invoice, follow the steps above at [Download your MOSP Azure subscription invoice](#download-your-mosp-azure-subscription-invoice).
data-catalog Data Catalog How To Annotate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-annotate.md
Title: How to annotate data sources in Azure Data Catalog description: How-to article highlighting how to annotate data assets in Azure Data Catalog, including friendly names, tags, descriptions, and experts.--++ Previously updated : 08/01/2019 Last updated : 02/18/2022 # How to annotate data sources in Azure Data Catalog
Last updated 08/01/2019
**Microsoft Azure Data Catalog** is a fully managed cloud service that serves as a system of registration and system of discovery for enterprise data sources. In other words, Data Catalog is all about helping people discover, understand, and use data sources, and helping organizations to get more value from their existing data. When a data source is registered with Data Catalog, its metadata is copied and indexed by the service, but the story doesn't end there. Data Catalog allows users to provide their own descriptive metadata, such as descriptions and tags, to supplement the metadata extracted from the data source, and to make the data source more understandable to more people. ## Annotation and crowdsourcing+ Everyone has an opinion. And this is a good thing. Data Catalog recognizes that different users have different perspectives on enterprise data sources, and that each of these perspectives can be valuable. Consider the following scenario:
Data Catalog recognizes that different users have different perspectives on ente
* The data steward knows how the assets and attributes in the data source map to the enterprise data model. * The analyst knows how the data is used in the context of the business processes they support.
-Each of these perspectives is valuable, and Data Catalog uses a crowdsourcing approach to metadata that allows each one to be captured and used to provide a complete picture of registered data sources. Using the Data Catalog portal, each user can add and edit their own annotations, while being able to view annotations provided by other users.
+Each of these perspectives is valuable, and Data Catalog uses a crowdsourcing approach to metadata that allows each one to be captured and used to provide a complete picture of registered data sources. Each user can add and edit their own annotations in the Data Catalog portal, while being able to view annotations provided by other users.
## Different types of annotations+ Data Catalog supports the following types of annotations: | Annotation | Notes |
Data Catalog supports the following types of annotations:
| Friendly name |Friendly names can be supplied at the data asset level, to make the data assets more easily understood. Friendly names are most useful when the underlying object name is cryptic, abbreviated or otherwise not meaningful to users. | | Description |Descriptions can be supplied at the data asset and attribute / column levels. Descriptions are free-form short text annotations that describe the user's perspective on the data asset or its use. | | Tags (user tags) |Tags can be supplied at the data asset and attribute / column levels. User tags are user-defined labels that can be used to categorize data assets or attributes. |
-| Tags (glossary tags) |Tags can be supplied at the data asset and attribute / column levels. Glossary tags are centrally-defined glossary terms that can be used to categorize data assets or attributes using a common business taxonomy. For more information see [How to set up the Business Glossary for Governed Tagging](data-catalog-how-to-business-glossary.md) |
-| Experts |Experts can be supplied at the data asset level. Experts identify users or groups with expert perspectives on the data and can serve as points of contact for users who discover the registered data sources and have questions that are not answered by the existing annotations. |
-| Request access |Request access information can be supplied at the data asset level. This information is for users who discover a data source that they do not yet have permissions to access. Users can enter the email address of the user or group who grants access, the URL of the process or tool that users need to gain access, or can enter the process itself as text. |
+| Tags (glossary tags) |Tags can be supplied at the data asset and attribute / column levels. Glossary tags are centrally defined glossary terms that can be used to categorize data assets or attributes using a common business taxonomy. For more information, see [How to set up the Business Glossary for Governed Tagging](data-catalog-how-to-business-glossary.md) |
+| Experts |Experts can be supplied at the data asset level. Experts identify users or groups with expert perspectives on the data and can serve as points of contact for users who discover the registered data sources and have questions that aren't answered by the existing annotations. |
+| Request access |Request access information can be supplied at the data asset level. This information is for users who discover a data source that they don't yet have permissions to access. Users can enter the email address of the user or group who grants access, the URL of the process or tool that users need to gain access, or can enter the process itself as text. |
| Documentation |Documentation can be supplied at the data asset level. Asset documentation is rich text information that can include links and images, and which can provide any information not conveyed through descriptions and tags. | ## Annotating multiple assets
-When selecting multiple data assets in the Data Catalog portal, users can annotate all selected assets in a single operation. Annotations will apply to all selected assets, making it easy to select and provide a consistent description and sets of tags and experts for related data assets.
+
+Users can select multiple data assets in the Data Catalog portal, and annotate all selected assets in a single operation. Annotations will apply to all selected assets, making it easy to select and provide a consistent description and sets of tags and experts for related data assets.
> [!NOTE] > Tags and experts can also be provided when registering data assets using the Data Catalog data source registration tool.
->
->
-When selecting multiple tables and views, only columns that all selected data assets have in common will be displayed in the Data Catalog portal. This allows users to provide tags and descriptions for all columns with the same name for all selected assets.
+When multiple tables and views are selected, only columns that all selected data assets have in common will be displayed in the Data Catalog portal. This allows users to provide tags and descriptions for all columns with the same name for all selected assets.
## Annotations and discovery+ Just as the metadata extracted from the data source during registration is added to the Data Catalog search index, user-supplied metadata is also indexed. This means that not only do annotations make it easier for users to understand the data they discover, annotations also make it easier for users to discover the annotated data assets by searching using the terms that make sense to them. ## Summary+ Registering a data source with Data Catalog makes that data discoverable by copying structural and descriptive metadata from the data source into the Catalog service. Once a data source has been registered, users can provide annotations to make it easier to discover and understand from within the Data Catalog portal. ## See also+ * [Get Started with Azure Data Catalog](data-catalog-get-started.md) tutorial for step-by-step details about how to annotate data sources.
data-catalog Data Catalog How To Data Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-data-profile.md
Title: How to use data profiling data sources in Azure Data Catalog description: How-to article highlighting how to include table- and column-level data profiles when registering data sources in Azure Data Catalog, and how to use data profiles to understand data sources.--++ Previously updated : 08/01/2019 Last updated : 02/18/2022 # How to data profile data sources in Azure Data Catalog
The **Data Profiling** feature of **Azure Data Catalog** examines the data from
Data profiling examines the data in the data source being registered, and collects statistics and information about that data. During data source discovery, these statistics can help you determine whether the data is suitable for solving your business problem.
-<!-- In [How to discover data sources](data-catalog-how-to-discover.md), you learn about **Azure Data Catalog's** extensive search capabilities including searching for data assets that have a profile. See [How to include a data profile when registering a data source](#howto). -->
- The following data sources support data profiling: * SQL Server (including Azure SQL DB and Azure Synapse Analytics) tables and views
Including data profiles when registering data assets helps users answer question
> [!NOTE] > You can also add documentation to an asset to describe how data could be integrated into an application. See [How to document data sources](data-catalog-how-to-documentation.md).
->
-
-<a name="howto"></a>
## How to include a data profile when registering a data source It's easy to include a profile of your data source. When you register a data source, in the **Objects to be registered** panel of the data source registration tool, choose **Include Data Profile**.
-![Include Data Profile checkbox](media/data-catalog-data-profile/data-catalog-register-profile.png)
To learn more about how to register data sources, see [How to register data sources](data-catalog-how-to-register.md) and [Get started with Azure Data Catalog](data-catalog-get-started.md).
To discover data assets that include a data profile, you can include `has:tableD
> [!NOTE] > Selecting **Include Data Profile** in the data source registration tool includes both table and column-level profile information. However, the Data Catalog API allows data assets to be registered with only one set of profile information included.
->
## Viewing data profile information Once you find a suitable data source with a profile, you can view the data profile details. To view the data profile, select a data asset and choose **Data Profile** in the Data Catalog portal window.
-![Data Profile tab](media/data-catalog-data-profile/data-catalog-view.png)
A data profile in **Azure Data Catalog** shows table and column profile information including:
data-factory Connector Teamdesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teamdesk.md
+
+ Title: Transform data in TeamDesk (Preview)
+
+description: Learn how to transform data in TeamDesk (Preview) by using Data Factory or Azure Synapse Analytics.
++++++ Last updated : 02/17/2022++
+# Transform data in TeamDesk (Preview) using Azure Data Factory or Synapse Analytics
++
+This article outlines how to use Data Flow to transform data in TeamDesk (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+## Supported capabilities
+
+This TeamDesk connector is supported for the following activities:
+
+- [Mapping data flow](concepts-data-flow-overview.md)
+
+## Create a TeamDesk linked service using UI
+
+Use the following steps to create a TeamDesk linked service in the Azure portal UI.
+
+1. Browse to the **Manage** tab in your Azure Data Factory or Synapse workspace and select **Linked Services**, then select **New**:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for TeamDesk (Preview) and select the TeamDesk (Preview) connector.
+
+ :::image type="content" source="mediesk connector.":::
+
+3. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="mediesk linked service.":::
+
+## Connector configuration details
+
+The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to TeamDesk.
+
+## Linked service properties
+
+The following properties are supported for the TeamDesk linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **TeamDesk**. |Yes |
+| url | The URL of your TeamDesk database. An example is `https://www.teamdesk.net/secure/db/xxxxx`. | Yes |
+| authenticationType | Type of authentication used to connect to the TeamDesk service. Allowed values are **Basic** and **Token**. Refer to the corresponding sections below for more properties and examples. |Yes |
+
+### Basic authentication
+
+Set the **authenticationType** property to **basic**. In addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| userName | The user name used to log in to TeamDesk. |Yes |
+| password | Specify a password for the user account you specified for the user name. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "TeamDeskLinkedService",
+ "properties": {
+ "type": "TeamDesk",
+ "typeProperties": {
+ "url": "https://www.teamdesk.net/secure/db/xxxxx",
+ "authenticationType": "basic",
+ "userName": "<user name>",
+ "password": {
+ "type": "SecureString",
+ "value": "<password>"
+ }
+ }
+ }
+}
+```
+
+### Token authentication
+
+Set the **authenticationType** property to **token**. In addition to the generic properties that are described in the preceding section, specify the following properties:
+
+| Property | Description | Required |
+|: |: |: |
+| apiToken | Specify an API token for TeamDesk. Mark this field as **SecureString** to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). |Yes |
+
+**Example:**
+
+```json
+{
+ "name": "TeamDeskLinkedService",
+ "properties": {
+ "type": "TeamDesk",
+ "typeProperties": {
+ "url": "https://www.teamdesk.net/secure/db/xxxxx",
+ "authenticationType": "token",
+ "apiToken": {
+ "type": "SecureString",
+ "value": "<API token>"
+ }
+ }
+ }
+}
+```
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read tables from TeamDesk. For more information, see the [source transformation](data-flow-source.md) in mapping data flows. You can only use an [inline dataset](data-flow-source.md#inline-datasets) as the source type.
+
+### Source transformation
+
+The following table lists the properties supported by the TeamDesk source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Table | Data flow will fetch all the data from the table specified in the source options. | Yes, when inline mode is used | - | table |
+| View | Data flow will fetch the specified view of the table specified in the source options.| No | - | view |
+
+#### TeamDesk source script examples
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ store: 'teamdesk',
+ format: 'rest',
+ table: 'Table',
+ view: 'View') ~> TeamDesksource
+```
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Enable Aad Authentication Azure Ssis Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
For more info about the managed identity for your ADF, see [Managed identity for
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-> [!NOTE]
-> For Azure-SSIS IR in Azure Synapse, user-assigned managed identity is not supported.
- ## Enable Azure AD authentication on Azure SQL Database Azure SQL Database supports creating a database with an Azure AD user. First, you need to create an Azure AD group with the specified system/user-assigned managed identity for your ADF as a member. Next, you need to set an Azure AD user as the Active Directory admin for your Azure SQL Database server and then connect to it on SQL Server Management Studio (SSMS) using that user. Finally, you need to create a contained user representing the Azure AD group, so the specified system/user-assigned managed identity for your ADF can be used by Azure-SSIS IR to create SSISDB on your behalf.
data-factory Encrypt Credentials Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/encrypt-credentials-self-hosted-integration-runtime.md
Replace `<servername>`, `<databasename>`, `<username>`, and `<password>` with va
``` ### Encrypt credentials
-To encrypt the sensitive data from the JSON payload on an on-premises self-hosted integration runtime, run **New-AzDataFactoryV2LinkedServiceEncryptedCredential**, and pass on the JSON payload. This cmdlet ensures the credentials are encrypted using DPAPI and stored on the self-hosted integration runtime node locally. It can be run from any machine provided the **Remote access** option is enabled on the targeted self-hosted integration runtime, and Powershell 7.0 or higher is used to execute it. The output payload containing the encrypted reference to the credential can be redirected to another JSON file (in this case 'encryptedLinkedService.json').
+To encrypt the sensitive data from the JSON payload on an on-premises self-hosted integration runtime, run **New-AzDataFactoryV2LinkedServiceEncryptedCredential**, and pass on the JSON payload. This cmdlet ensures the credentials are encrypted using DPAPI and stored on the self-hosted integration runtime node locally. It can be run from any machine provided the **Remote access** option is enabled on the targeted self-hosted integration runtime, and PowerShell 7.0 or higher is used to execute it. The output payload containing the encrypted reference to the credential can be redirected to another JSON file (in this case 'encryptedLinkedService.json').
```powershell New-AzDataFactoryV2LinkedServiceEncryptedCredential -DataFactoryName $dataFactoryName -ResourceGroupName $ResourceGroupName -Name "SqlServerLinkedService" -DefinitionFile ".\SQLServerLinkedService.json" > encryptedSQLServerLinkedService.json
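
# A possible follow-up step (a sketch, not taken from this article): deploy
# the linked service by using the encrypted definition file produced above.
# Set-AzDataFactoryV2LinkedService is part of the Az.DataFactory module, and
# the linked service name below is illustrative.
Set-AzDataFactoryV2LinkedService -DataFactoryName $dataFactoryName -ResourceGroupName $ResourceGroupName -Name "EncryptedSqlServerLinkedService" -DefinitionFile ".\encryptedSQLServerLinkedService.json"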
data-factory How To Develop Azure Ssis Ir Licensed Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-develop-azure-ssis-ir-licensed-components.md
Previously updated : 10/22/2021 Last updated : 02/17/2022 # Install paid or licensed custom components for the Azure-SSIS integration runtime This article describes how an ISV can develop and install paid or licensed custom components for SQL Server Integration Services (SSIS) packages that run in Azure in the Azure-SSIS integration runtime.
data-factory Manage Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/manage-azure-ssis-integration-runtime.md
description: Learn how to reconfigure an Azure-SSIS integration runtime in Azure
Previously updated : 10/22/2021 Last updated : 02/17/2022 # Reconfigure the Azure-SSIS integration runtime
-This article describes how to reconfigure an existing Azure-SSIS integration runtime. To create an Azure-SSIS integration runtime (IR) in Azure Data Factory, see [Create an Azure-SSIS integration runtime](create-azure-ssis-integration-runtime.md).
+This article describes how to reconfigure an existing Azure-SSIS integration runtime. To create an Azure-SSIS integration runtime (IR), see [Create an Azure-SSIS integration runtime](create-azure-ssis-integration-runtime.md).
-## Data Factory UI
+## Azure portal
+
+# [Azure Data Factory](#tab/data-factory)
You can use Data Factory UI to stop, edit/reconfigure, or delete an Azure-SSIS IR. 1. Open Data Factory UI by selecting the **Author & Monitor** tile on the home page of your data factory. 2. Select the **Manage** hub below **Home**, **Edit**, and **Monitor** hubs to show the **Connections** pane. ### To reconfigure an Azure-SSIS IR
-On the **Connections** pane of **Manage** hub, switch to the **Integration runtimes** page and select **Refresh**.
+On the **Manage** hub, switch to the **Integration runtimes** page and select **Refresh**.
:::image type="content" source="./media/tutorial-create-azure-ssis-runtime-portal/connections-pane.png" alt-text="Connections pane"::: You can edit/reconfigure your Azure-SSIS IR by selecting its name. You can also select the relevant buttons to monitor/start/stop/delete your Azure-SSIS IR, auto-generate an ADF pipeline with Execute SSIS Package activity to run on your Azure-SSIS IR, and view the JSON code/payload of your Azure-SSIS IR. Editing/deleting your Azure-SSIS IR can only be done when it's stopped.
+# [Synapse Analytics](#tab/synapse-analytics)
+You can use the Synapse workspace to stop, edit/reconfigure, or delete an Azure-SSIS IR.
+
+### To reconfigure an Azure-SSIS IR
+On the **Manage** hub, switch to the **Integration runtimes** page.
+
+ :::image type="content" source="./media/tutorial-create-azure-ssis-runtime-portal/connections-pane-synapse.png" lightbox="./media/tutorial-create-azure-ssis-runtime-portal/connections-pane-synapse.png" alt-text="Screenshot of connections pane in Synapse.":::
+
+ You can edit/reconfigure your Azure-SSIS IR by selecting its name. You can also select the relevant buttons to monitor/start/stop/delete your Azure-SSIS IR, auto-generate an ADF pipeline with Execute SSIS Package activity to run on your Azure-SSIS IR, and view the JSON code/payload of your Azure-SSIS IR. Editing/deleting your Azure-SSIS IR can only be done when it's stopped.
+++ ## Azure PowerShell [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] After you provision and start an instance of Azure-SSIS integration runtime, you can reconfigure it by running a sequence of `Stop` - `Set` - `Start` PowerShell cmdlets consecutively. For example, the following PowerShell script changes the number of nodes allocated for the Azure-SSIS integration runtime instance to five.
+> [!NOTE]
+> For Azure-SSIS IR in Azure Synapse Analytics, replace these cmdlets with the corresponding Azure Synapse Analytics PowerShell cmdlets: [Get-AzSynapseIntegrationRuntime](/powershell/module/az.synapse/get-azsynapseintegrationruntime), [Set-AzSynapseIntegrationRuntime](/powershell/module/az.synapse/set-azsynapseintegrationruntime), [Remove-AzSynapseIntegrationRuntime](/powershell/module/az.synapse/remove-azsynapseintegrationruntime), [Start-AzSynapseIntegrationRuntime](/powershell/module/az.synapse/start-azsynapseintegrationruntime), and [Stop-AzSynapseIntegrationRuntime](/powershell/module/az.synapse/stop-azsynapseintegrationruntime).
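+
+A rough sketch of that `Stop` - `Set` - `Start` sequence follows. It assumes the `$ResourceGroupName`, `$DataFactoryName`, and `$AzureSSISName` variables already hold the names of your existing resources:
+
+```powershell
+# Stop the IR; this releases its nodes and stops billing
+Stop-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $AzureSSISName -Force
+
+# Reconfigure the IR; in this sketch, scale out to five nodes
+Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $AzureSSISName -NodeCount 5
+
+# Start the IR again with the new configuration
+Start-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $AzureSSISName -Force
+```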
+ ### Reconfigure an Azure-SSIS IR 1. First, stop the Azure-SSIS integration runtime by using the [Stop-AzDataFactoryV2IntegrationRuntime](/powershell/module/az.datafactory/stop-Azdatafactoryv2integrationruntime) cmdlet. This command releases all of its nodes and stops billing.
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-dot-net.md
Next, create a C# .NET console application in Visual Studio:
> [!NOTE] > For Sovereign clouds, you must use the appropriate cloud-specific endpoints for ActiveDirectoryAuthority and ResourceManagerUrl (BaseUri). > For example, in US Azure Gov you would use authority of https://login.microsoftonline.us instead of https://login.microsoftonline.com, and use https://management.usgovcloudapi.net instead of https://management.azure.com/, and then create the data factory management client.
-> You can use Powershell to easily get the endpoint Urls for various clouds by executing ΓÇ£Get-AzEnvironment | Format-ListΓÇ¥, which will return a list of endpoints for each cloud environment.
+> You can use PowerShell to easily get the endpoint URLs for various clouds by executing `Get-AzEnvironment | Format-List`, which returns a list of endpoints for each cloud environment.
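+
+For instance, you can run the cmdlet from the note in any Az PowerShell session (a minimal illustration; it takes no required parameters):
+
+```azurepowershell
+# List the known endpoint URLs for each Azure cloud environment
+Get-AzEnvironment | Format-List
+```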
3. Add the following code to the **Main** method that creates an instance of the **DataFactoryManagementClient** class. You use this object to create a data factory, a linked service, datasets, and a pipeline. You also use this object to monitor the pipeline run details.
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
* **Create an application in Azure Active Directory** following [this instruction](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). Make note of the following values that you use in later steps: **application ID**, **clientSecrets**, and **tenant ID**. Assign application to "**Contributor**" role at either subscription or resource group level. >[!NOTE] > For Sovereign clouds, you must use the appropriate cloud-specific endpoints for ActiveDirectoryAuthority and ResourceManagerUrl (BaseUri).
-> You can use Powershell to easily get the endpoint Urls for various clouds by executing ΓÇ£Get-AzEnvironment | Format-ListΓÇ¥, which will return a list of endpoints for each cloud environment.
+> You can use PowerShell to easily get the endpoint URLs for various clouds by executing `Get-AzEnvironment | Format-List`, which returns a list of endpoints for each cloud environment.
> ## Set global variables
data-factory Data Factory Build Your First Pipeline Using Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-editor.md
Last updated 10/22/2021
> This article applies to version 1 of Azure Data Factory, which is generally available. If you use the current version of the Data Factory service, see [Quickstart: Create a data factory by using Data Factory](../quickstart-create-data-factory-dot-net.md). > [!WARNING]
-> The JSON editor in Azure Portal for authoring & deploying ADF v1 pipelines will be turned OFF on 31st July 2019. After 31st July 2019, you can continue to use [ADF v1 Powershell cmdlets](/powershell/module/az.datafactory/), [ADF v1 .Net SDK](/dotnet/api/microsoft.azure.management.datafactories.models), [ADF v1 REST APIs](/rest/api/datafactory/) to author & deploy your ADF v1 pipelines.
+> The JSON editor in the Azure portal for authoring & deploying ADF v1 pipelines will be turned OFF on 31st July 2019. After 31st July 2019, you can continue to use [ADF v1 PowerShell cmdlets](/powershell/module/az.datafactory/), the [ADF v1 .NET SDK](/dotnet/api/microsoft.azure.management.datafactories.models), and [ADF v1 REST APIs](/rest/api/datafactory/) to author and deploy your ADF v1 pipelines.
In this article, you learn how to use the [Azure portal](https://portal.azure.com/) to create your first data factory. To do the tutorial by using other tools/SDKs, select one of the options from the drop-down list.
databox Data Box Heavy Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-deploy-ordered.md
Before you begin, make sure that:
- You should have a host computer connected to the datacenter network. Data Box Heavy will copy the data from this computer. Your host computer must run a supported operating system as described in [Azure Data Box Heavy system requirements](data-box-system-requirements.md). - You need to have a laptop with RJ-45 cable to connect to the local UI and configure the device. Use the laptop to configure each node of the device once. - Your datacenter needs to have high-speed network. We strongly recommend that you have at least one 10-GbE connection.-- You need one 40-Gbps or 10-Gbps cable per device node. Choose cables that are compatible with the [Mellanox MCX314A-BCCT](https://store.mellanox.com/products/mellanox-mcx314a-bcct-connectx-3-pro-en-network-interface-card-40-56gbe-dual-port-qsfp-pcie3-0-x8-8gt-s-rohs-r6.html) network interface:
+- You need one 40-Gbps or 10-Gbps cable per device node. Choose cables that are compatible with the [Mellanox MCX314A-BCCT](https://qnapdirect.com/products/mellanox-mcx314a-bcct-connectx-3-pro-en-network-interface-card?variant=31431916585011) network interface:
- For the 40-Gbps cable, the device end of the cable needs to be QSFP+. - For the 10-Gbps cable, you need an SFP+ cable that plugs into a 10-Gbps switch on one end, with a QSFP+ to SFP+ adapter (or the QSA adapter) for the end that plugs into the device.
databox Data Box Heavy Deploy Set Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-deploy-set-up.md
Before you begin, make sure that:
1. Your datacenter needs to have high-speed network. We strongly recommend that you have at least one 10-GbE connection. 1. You need to have a laptop with RJ-45 cable to connect to the local UI and configure the device. Use the laptop to configure each node of the device once. 1. You need one 40-Gbps cable or 10-Gbps cable per device node.
- - Choose cables that are compatible with the [Mellanox MCX314A-BCCT](https://store.mellanox.com/products/mellanox-mcx314a-bcct-connectx-3-pro-en-network-interface-card-40-56gbe-dual-port-qsfp-pcie3-0-x8-8gt-s-rohs-r6.html) network interface.
+ - Choose cables that are compatible with the [Mellanox MCX314A-BCCT](https://qnapdirect.com/products/mellanox-mcx314a-bcct-connectx-3-pro-en-network-interface-card?variant=31431916585011) network interface.
- For the 40-Gbps cable, the device end of the cable needs to be QSFP+. - For the 10-Gbps cable, you need an SFP+ cable that plugs into a 10-Gbps switch on one end, with a QSFP+ to SFP+ adapter (or the QSA adapter) for the end that plugs into the device.
databox Data Box Heavy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-overview.md
The Data Box Heavy device has the following features in this release.
| Weight | ~ 500 lbs. <br>Device on locking wheels for transport| | Dimensions | Width: 26 inches Height: 28 inches Length: 48 inches | | Rack space | Cannot be rack-mounted|
-| Cables required | 4 grounded 120 V / 10 A power cords (NEMA 5-15) included <br> Device supports up to 240 V power and has C-13 power receptacles <br> Use network cables compatible with [Mellanox MCX314A-BCCT](https://store.mellanox.com/products/mellanox-mcx314a-bcct-connectx-3-pro-en-network-interface-card-40-56gbe-dual-port-qsfp-pcie3-0-x8-8gt-s-rohs-r6.html) |
+| Cables required | 4 grounded 120 V / 10 A power cords (NEMA 5-15) included <br> Device supports up to 240 V power and has C-13 power receptacles <br> Use network cables compatible with [Mellanox MCX314A-BCCT](https://qnapdirect.com/products/mellanox-mcx314a-bcct-connectx-3-pro-en-network-interface-card?variant=31431916585011) |
| Power | 4 built-in power supply units (PSUs) shared across both the device nodes <br> 1,200 watt typical power draw| | Storage capacity | ~ 1-PB raw, 70 disks of 14 TB each <br> 770-TB usable capacity| | Number of nodes | 2 independent nodes per device (500 TB each) |
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-portal.md
Open your web browser, navigate to the [Microsoft Azure portal](https://portal.a
> [!NOTE] > You can create up to 10 instances of DMS per subscription per region. If you require a greater number of instances, please create a support ticket.
-## Register the resource provider
+<!-- Register the resource provider -->
-Register the Microsoft.DataMigration resource provider before you create your first instance of the Database Migration Service.
-
-1. In the Azure portal, search for and select **Subscriptions**.
-
- ![Show portal subscriptions](media/quickstart-create-data-migration-service-portal/portal-select-subscription.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/quickstart-create-data-migration-service-portal/portal-select-resource-provider.png)
-
-3. Search for migration, and then select **Register** for **Microsoft.DataMigration**.
-
- ![Register resource provider](media/quickstart-create-data-migration-service-portal/dms-register-provider.png)
-
-## Create an instance of the service
-
-1. In the Azure portal menu or on the **Home** page, select **Create a resource**. Search for and select **Azure Database Migration Service**.
-
- ![Azure Marketplace](media/quickstart-create-data-migration-service-portal/portal-marketplace.png)
-
-2. On the **Azure Database Migration Service** screen, select **Create**.
-
- ![Create Azure Database Migration Service instance](media/quickstart-create-data-migration-service-portal/dms-create.png)
-
-3. On the **Create Migration Service** basics screen:
-
- - Select the subscription.
- - Create a new resource group or choose an existing one.
- - Specify a name for the instance of the Azure Database Migration Service.
- - Select the location in which you want to create the instance of Azure Database Migration Service.
- - Choose **Azure** as the service mode.
- - Select a pricing tier. For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
-
- ![Configure Azure Database Migration Service instance basics settings](media/quickstart-create-data-migration-service-portal/dms-create-basics.png)
-
- - Select Next: Networking.
-
-4. On the **Create Migration Service** networking screen:
-
- - Select an existing virtual network or create a new one. The virtual network provides Azure Database Migration Service with access to the source database and target environment. For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-
- ![Configure Azure Database Migration Service instance networking settings](media/quickstart-create-data-migration-service-portal/dms-network-settings.png)
-
- - Select **Review + Create** to create the service.
-
- - After a few moments, your instance of Azure Database Migration service is created and ready to use:
-
- ![Migration service created](media/quickstart-create-data-migration-service-portal/dms-service-created.png)
+<!-- Create an instance of the service -->
## Clean up resources
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** |Supported |Supported | Cape Town, Johannesburg | | **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | Supported | Supported | Kuala Lumpur | | **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka, Tokyo2 |
+| **TPG Telecom** | Supported | Supported | Melbourne, Sydney |
| **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported | Dallas, Queretaro(Mexico)| | **[T-Mobile](https://www.t-mobile.com/business/solutions/networking/cloud-networking )** |Supported |Supported |Chicago, Silicon Valley, Washington DC | | **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** |Supported |Supported | Frankfurt |
If you are remote and do not have fiber connectivity or you want to explore othe
| **[POST Telecom Luxembourg](https://www.teralinksolutions.com/cloud-connectivity/cloudbridge-to-azure-expressroute/)**|Equinix | Amsterdam | | **[Proximus](https://www.proximus.be/en/id_b_cl_proximus_external_cloud_connect/companies-and-public-sector/discover/magazines/expert-blog/proximus-external-cloud-connect.html)**|Equinix | Amsterdam, Dublin, London, Paris | | **[QSC AG](https://www2.qbeyond.de/en/)** |Interxion | Frankfurt |
-| **[RETN](https://retn.net/services/cloud-connect/)** | Equinix | Amsterdam |
+| **[RETN](https://retn.net/products/cloud-connect)** | Equinix | Amsterdam |
| **Rogers** | Cologix, Equinix | Montreal, Toronto | | **[Spectrum Enterprise](https://enterprise.spectrum.com/services/cloud/cloud-connect.html)** | Equinix | Chicago, Dallas, Los Angeles, New York, Silicon Valley | | **[Tamares Telecom](http://www.tamarestelecom.com/our-services/#Connectivity)** | Equinix | London |
governance Create Blueprint Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-azurecli.md
Title: "Quickstart: Create a blueprint with Azure CLI"
-description: In this quickstart, you use Azure Blueprints to create, define, and deploy artifacts using the Azure CLI.
+ Title: 'Quickstart: Create a blueprint with the Azure CLI'
+description: In this quickstart, you use Azure Blueprints to create, define, and deploy artifacts by using the Azure CLI.
Last updated 08/17/2021
-# Quickstart: Define and Assign an Azure blueprint with Azure CLI
+# Quickstart: Define and assign an Azure blueprint with the Azure CLI
-Learning how to create and assign blueprints enables the definition of common patterns to develop
-reusable and rapidly deployable configurations based on Azure Resource Manager templates (ARM
-templates), policy, security, and more. In this tutorial, you learn to use Azure Blueprints to do
-some of the common tasks related to creating, publishing, and assigning a blueprint within your
-organization, such as:
+In this tutorial, you learn to use Azure Blueprints to do some of the common tasks related to creating, publishing, and assigning a blueprint within your organization. This skill helps you define common patterns to develop reusable and rapidly deployable configurations, based on Azure Resource Manager (ARM) templates, policy, and security.
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.-- If you've not used Azure Blueprints before, register the resource provider through Azure CLI with
+- If you've not used Azure Blueprints before, register the resource provider through the Azure CLI with
`az provider register --namespace Microsoft.Blueprint`. [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-## Add the Blueprint extension
+## Add the blueprint extension
-To enable Azure CLI to manage blueprint definitions and assignments, the extension must be added.
-This extension works wherever Azure CLI can be used, including
-[bash on Windows 10](/windows/wsl/install-win10), [Cloud Shell](https://shell.azure.com) (both
-standalone and inside the portal), the [Azure CLI Docker
-image](https://hub.docker.com/_/microsoft-azure-cli), or locally installed.
+To enable the Azure CLI to manage blueprint definitions and assignments, you must add the extension. This extension works wherever you can use the Azure CLI, including [bash on Windows 10](/windows/wsl/install-win10), [Cloud Shell](https://shell.azure.com) (both the standalone version and the one inside the portal), the [Azure CLI Docker image](https://hub.docker.com/_/microsoft-azure-cli), or a locally installed Azure CLI.
-1. Check that the latest Azure CLI is installed (at least **2.0.76**). If it isn't yet installed,
- follow
- [these instructions](/cli/azure/install-azure-cli-windows).
+1. Check that the latest Azure CLI is installed (at least **2.0.76**). If it isn't yet installed, follow [these instructions](/cli/azure/install-azure-cli-windows).
-1. In your Azure CLI environment of choice, import it with the following command:
+1. In your Azure CLI environment of choice, import the extension with the following command:
```azurecli-interactive # Add the Blueprint extension to the Azure CLI environment
image](https://hub.docker.com/_/microsoft-azure-cli), or locally installed.
1. Validate that the extension has been installed and is the expected version (at least **0.1.0**): ```azurecli-interactive
- # Check the extension list (note that you may have other extensions installed)
+ # Check the extension list (note that you might have other extensions installed)
az extension list # Run help for extension options
image](https://hub.docker.com/_/microsoft-azure-cli), or locally installed.
## Create a blueprint The first step in defining a standard pattern for compliance is to compose a blueprint from the
-available resources. We'll create a blueprint named 'MyBlueprint' to configure role and policy
-assignments for the subscription. Then we'll add a resource group, an ARM template, and a role
+available resources. Let's create a blueprint named *MyBlueprint* to configure role and policy
+assignments for the subscription. Then you add a resource group, an ARM template, and a role
assignment on the resource group. > [!NOTE]
-> When using Azure CLI, the _blueprint_ object is created first. For each _artifact_ to be added
-> that has parameters, the parameters need to be defined in advance on the initial _blueprint_.
+> When you're using the Azure CLI, the _blueprint_ object is created first. For each _artifact_ to be added that has parameters, you define the parameters in advance on the initial _blueprint_.
-1. Create the initial _blueprint_ object. The **parameters** parameter takes a JSON file that
- includes all of the blueprint level parameters. The parameters are set during assignment and used
- by the artifacts added in later steps.
+1. Create the initial _blueprint_ object. The `parameters` parameter takes a JSON file that
+ includes all of the blueprint level parameters. You set the parameters during assignment, and they're used by the artifacts you add in later steps.
- - JSON file - blueprintparms.json
+ - JSON file - *blueprintparms.json*
```json {
assignment on the resource group.
``` > [!NOTE]
- > Use the filename _blueprint.json_ when importing your blueprint definitions.
- > This file name is used when calling
- > [az blueprint import](/cli/azure/blueprint#az_blueprint_import).
+ > Use the filename _blueprint.json_ when you import your blueprint definitions. This file name is used when you call [az blueprint import](/cli/azure/blueprint#az_blueprint_import).
The blueprint object is created in the default subscription by default. To specify the
- management group, use parameter **managementgroup**. To specify the subscription, use parameter
- **subscription**.
+ management group, use the parameter `managementgroup`. To specify the subscription, use the parameter `subscription`.
1. Add the resource group for the storage artifacts to the definition.
assignment on the resource group.
--description 'Contains the resource template deployment and a role assignment.' ```
-1. Add role assignment at subscription. In the following example, the principal identities granted
- the specified role are configured to a parameter that is set during blueprint assignment. This
- example uses the _Contributor_ built-in role with a GUID of
- `b24988ac-6180-42a0-ab88-20f7382dd24c`.
+1. Add a role assignment at the subscription. In the following example, the principal identities granted the specified role are configured to a parameter that is set during blueprint assignment. This example uses the `Contributor` built-in role, with a GUID of `b24988ac-6180-42a0-ab88-20f7382dd24c`.
```azurecli-interactive az blueprint artifact role create \
assignment on the resource group.
--principal-ids "[parameters('contributors')]" ```
-1. Add policy assignment at subscription. This example uses the _Apply tag and its default value to
- resource groups_ built-in policy with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
+1. Add a policy assignment at the subscription. This example uses the `Apply tag and its default value to resource groups` built-in policy, with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
- - JSON file - artifacts\policyTags.json
+ - JSON file - *artifacts\policyTags.json*
```json {
assignment on the resource group.
``` > [!NOTE]
- > When using `az blueprint` on a Mac, replace `\` with `/` for parameter values that include
- > the path. In this case, the value for **parameters** becomes `artifacts/policyTags.json`.
+ > When you use `az blueprint` on a Mac, replace `\` with `/` for parameter values that include the path. In this case, the value for `parameters` becomes `artifacts/policyTags.json`.
-1. Add another policy assignment for Storage tag (reusing _storageAccountType_ parameter) at
- subscription. This additional policy assignment artifact demonstrates that a parameter defined on
- the blueprint is usable by more than one artifact. In the example, the **storageAccountType** is
- used to set a tag on the resource group. This value provides information about the storage
- account that is created in the next step. This example uses the _Apply tag and its default value
- to resource groups_ built-in policy with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
+1. Add another policy assignment for the storage tag (by reusing the `storageAccountType` parameter) at the subscription. This additional policy assignment artifact demonstrates that a parameter defined on the blueprint is usable by more than one artifact. In the example, you use the `storageAccountType` to set a tag on the resource group. This value provides information about the storage account that you create in the next step. This example uses the `Apply tag and its default value to resource groups` built-in policy, with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
- - JSON file - artifacts\policyStorageTags.json
+ - JSON file - *artifacts\policyStorageTags.json*
```json {
assignment on the resource group.
``` > [!NOTE]
- > When using `az blueprint` on a Mac, replace `\` with `/` for parameter values that include
- > the path. In this case, the value for **parameters** becomes `artifacts/policyStorageTags.json`.
+ > When you use `az blueprint` on a Mac, replace `\` with `/` for parameter values that include the path. In this case, the value for `parameters` becomes `artifacts/policyStorageTags.json`.
-1. Add template under resource group. The **template** parameter for an ARM template includes the
- normal JSON components of the template. The template also reuses the **storageAccountType**,
- **tagName**, and **tagValue** blueprint parameters by passing each to the template. The blueprint
- parameters are available to the template by using parameter **parameters** and inside the
- template JSON that key-value pair is used to inject the value. The blueprint and template
- parameter names could be the same.
+1. Add a template under the resource group. The `template` parameter for an ARM template includes the normal JSON components of the template. The template also reuses the `storageAccountType`, `tagName`, and `tagValue` blueprint parameters by passing each to the template. The blueprint parameters are available to the template through the parameter `parameters`, and inside the template JSON, that key-value pair is used to inject the value. The blueprint and template parameter names might be the same.
- - JSON ARM template file - artifacts\templateStorage.json
+ - JSON ARM template file - *artifacts\templateStorage.json*
```json {
assignment on the resource group.
} ```
- - JSON ARM template parameter file - artifacts\templateStorageParams.json
+ - JSON ARM template parameter file - *artifacts\templateStorageParams.json*
```json {
assignment on the resource group.
``` > [!NOTE]
- > When using `az blueprint` on a Mac, replace `\` with `/` for parameter values that include
- > the path. In this case, the value for **template** becomes `artifacts/templateStorage.json`
- > and **parameters** becomes `artifacts/templateStorageParams.json`.
+ > When you use `az blueprint` on a Mac, replace `\` with `/` for parameter values that include the path. In this case, the value for `template` becomes `artifacts/templateStorage.json`, and `parameters` becomes `artifacts/templateStorageParams.json`.
-1. Add role assignment under resource group. Similar to the previous role assignment entry, the
- example below uses the definition identifier for the **Owner** role and provides it a different
- parameter from the blueprint. This example uses the _Owner_ built-in role with a GUID of
- `8e3af657-a8ff-443c-a75c-2fe8c4bcb635`.
+1. Add a role assignment under the resource group. Similar to the previous role assignment entry, the following example uses the definition identifier for the `Owner` role, and provides it a different parameter from the blueprint. This example uses the `Owner` built-in role, with a GUID of `8e3af657-a8ff-443c-a75c-2fe8c4bcb635`.
```azurecli-interactive az blueprint artifact role create \
assignment on the resource group.
## Publish a blueprint
-Now that the artifacts have been added to the blueprint, it's time to publish it. Publishing makes
-it available to assign to a subscription.
+Now that you've added the artifacts to the blueprint, it's time to publish it. Publishing makes
+the blueprint available to assign to a subscription.
```azurecli-interactive az blueprint publish --blueprint-name 'MyBlueprint' --version '{BlueprintVersion}' ```
-The value for `{BlueprintVersion}` is a string of letters, numbers, and hyphens (no spaces or other
-special characters) with a max length of 20 characters. Use something unique and informational such
-as **v20200605-135541**.
+The value for `{BlueprintVersion}` is a string of letters, numbers, and hyphens (with no spaces or other special characters). The maximum length is 20 characters. Use something unique and informational, such as `v20200605-135541`.
## Assign a blueprint
-Once a blueprint is published using the Azure CLI, it's assignable to a subscription. Assign the
-blueprint you created to one of the subscriptions under your management group hierarchy. If the
-blueprint is saved to a subscription, it can only be assigned to that subscription. The
-**blueprint-name** parameter specifies the blueprint to assign. To provide name, location, identity,
-lock, and blueprint parameters, use the matching Azure CLI parameters on the
-`az blueprint assignment create` command or provide them in the **parameters** JSON file.
+After you've published a blueprint by using the Azure CLI, it's assignable to a subscription. Assign the blueprint that you created to one of the subscriptions under your management group hierarchy. If the blueprint is saved to a subscription, it can only be assigned to that subscription. The `blueprint-name` parameter specifies the blueprint to assign. To provide the `name`, `location`, `identity`, `lock`, and `blueprint` parameters, use the matching Azure CLI parameters on the `az blueprint assignment create` command, or provide them in the *parameters* JSON file.
-1. Run the blueprint deployment by assigning it to a subscription. As the **contributors** and
- **owners** parameters require an array of objectIds of the principals to be granted the role
- assignment, use
- [Azure Active Directory Graph API](/graph/migrate-azure-ad-graph-planning-checklist)
- for gathering the objectIds for use in the **parameters** for your own users, groups, or
- service principals.
+1. Run the blueprint deployment by assigning it to a subscription. Because the `contributors` and `owners` parameters require an array of `objectIds` of the principals to be granted the role assignment, use [Azure Active Directory Graph API](/graph/migrate-azure-ad-graph-planning-checklist) for gathering the `objectIds` for use in the `parameters` for your own users, groups, or service principals.
- - JSON file - blueprintAssignment.json
+ - JSON file - *blueprintAssignment.json*
```json {
lock, and blueprint parameters, use the matching Azure CLI parameters on the
- User-assigned managed identity A blueprint assignment can also use a
- [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
- In this case, the **identity-type** parameter is set to _UserAssigned_ and the
- **user-assigned-identities** parameter specifies the identity. Replace `{userIdentity}` with
+ [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md). In this case, the `identity-type` parameter is set to `UserAssigned`, and the `user-assigned-identities` parameter specifies the identity. Replace `{userIdentity}` with
the name of your user-assigned managed identity. ```azurecli-interactive
lock, and blueprint parameters, use the matching Azure CLI parameters on the
--parameters blueprintAssignment.json ```
- The **user-assigned managed identity** can be in any subscription and resource group the user
- assigning the blueprint has permissions to.
+ The user-assigned managed identity can be in any subscription and resource group to which the user assigning the blueprint has permissions.
> [!IMPORTANT]
- > Azure Blueprints doesn't manage the user-assigned managed identity. Users are responsible for
- > assigning sufficient roles and permissions or the blueprint assignment will fail.
+ > Azure Blueprints doesn't manage the user-assigned managed identity. Users are responsible for assigning sufficient roles and permissions, or the blueprint assignment will fail.
## Clean up resources
-### Unassign a blueprint
-
-You can remove a blueprint from a subscription. Removal is often done when the artifact resources
-are no longer needed. When a blueprint is removed, the artifacts assigned as part of that blueprint
-are left behind. To remove a blueprint assignment, use the `az blueprint assignment delete`
-command:
+You can remove a blueprint from a subscription. Removal is often done when the artifact resources are no longer needed. When a blueprint is removed, the artifacts assigned as part of that blueprint are left behind. To remove a blueprint assignment, use the `az blueprint assignment delete` command:
```azurecli-interactive az blueprint assignment delete --name 'assignMyBlueprint'
az blueprint assignment delete --name 'assignMyBlueprint'
## Next steps
-In this quickstart, you've created, assigned, and removed a blueprint with Azure CLI. To learn more
-about Azure Blueprints, continue to the blueprint lifecycle article.
+In this quickstart, you created, assigned, and removed a blueprint with the Azure CLI. To learn more about Azure Blueprints, continue to the blueprint lifecycle article.
> [!div class="nextstepaction"] > [Learn about the blueprint lifecycle](./concepts/lifecycle.md)
governance Create Blueprint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-portal.md
# Quickstart: Define and assign a blueprint in the portal
-When you learn how to create and assign blueprints, you can define common patterns to develop
-reusable and rapidly deployable configurations based on Azure Resource Manager templates (ARM
-templates), policy, security, and more. In this tutorial, you learn to use Azure Blueprints to do
-some of the common tasks related to creating, publishing, and assigning a blueprint within your
-organization. These tasks include:
+In this tutorial, you learn to use Azure Blueprints to do some of the common tasks related to creating, publishing, and assigning a blueprint within your organization. This skill helps you define common patterns to develop reusable and rapidly deployable configurations, based on Azure Resource Manager (ARM) templates, policy, and security.
## Prerequisites
before you begin.
## Create a blueprint The first step in defining a standard pattern for compliance is to compose a blueprint from the
-available resources. In this example, create a new blueprint named **MyBlueprint** to configure role
-and policy assignments for the subscription. Then add a new resource group, and create a Resource
-Manager template and role assignment on the new resource group.
+available resources. Let's create a blueprint named *MyBlueprint* to configure role and policy
+assignments for the subscription. Then you add a resource group, an ARM template, and a role
+assignment on the resource group.
1. Select **All services** in the left pane. Search for and select **Blueprints**.
-1. Select **Blueprint definitions** from the page on the left and select the **+ Create blueprint**
- button at the top of the page.
+1. Select **Blueprint definitions**, and then select **+ Create blueprint**.
- Or, select **Create** from the **Getting started** page to go straight to creating a blueprint.
+ :::image type="content" source="./media/create-blueprint-portal/create-blueprint-button.png" alt-text="Screenshot that shows the Create blueprint button on the Blueprint definitions page." border="false":::
- :::image type="content" source="./media/create-blueprint-portal/create-blueprint-button.png" alt-text="Screenshot of the 'Create blueprint' button on the Blueprint definitions page." border="false":::
+ Or, select **Getting started** > **Create** to go straight to creating a blueprint.
1. Select **Start with blank blueprint** from the card at the top of the built-in blueprints list.
-1. Provide a **Blueprint name** such as **MyBlueprint**. (Use up to 48 letters and numbers,
- but no spaces or special characters). Leave **Blueprint description** blank
+1. Provide a blueprint name, such as *MyBlueprint*. (You can use up to 48 letters and numbers,
+ but no spaces or special characters.) Leave **Blueprint description** blank
for now.
-1. In the **Definition location** box, select the ellipsis on the right, select the
- [management group](../management-groups/overview.md) or subscription where you want to save the
- blueprint, and choose **Select**.
+1. In the **Definition location** box, select the ellipsis on the right. Then select the
+ [management group](../management-groups/overview.md) or subscription where you want to save the blueprint, and choose **Select**.
-1. Verify that the information is correct. The **Blueprint name** and **Definition location** fields
- can't be changed later. Then select **Next : Artifacts** at the bottom of the page or the
- **Artifacts** tab at the top of the page.
+1. Verify that the information is correct. The **Blueprint name** and **Definition location** fields can't be changed later. Then select **Next : Artifacts** at the bottom of the page, or the **Artifacts** tab at the top of the page.
1. Add a role assignment at the subscription level:
- 1. Select the **+ Add artifact** row under **Subscription**. The **Add artifact** window opens on
+ 1. Under **Subscription**, select **+ Add artifact**. The **Add artifact** window opens on
the right side of the browser.
- 1. Select **Role assignment** for **Artifact type**.
+ 1. For **Artifact type**, select **Role assignment**.
- 1. Under **Role**, select **Contributor**. Leave the **Add user, app or group** box with the
+ 1. For **Role**, select **Contributor**. Leave the **Add user, app or group** box with the
check box that indicates a dynamic parameter.
   1. Select **Add** to add this artifact to the blueprint.
Manager template and role assignment on the new resource group.
> [!NOTE]
> Most artifacts support parameters. A parameter that's assigned a value during blueprint
- > creation is a _static parameter_. If the parameter is assigned during blueprint assignment,
- > it's a _dynamic parameter_. For more information, see
- > [Blueprint parameters](./concepts/parameters.md).
+ > creation is a _static parameter_. If the parameter is assigned during blueprint assignment, it's a _dynamic parameter_. For more information, see [Blueprint parameters](./concepts/parameters.md).
1. Add a policy assignment at the subscription level:
- 1. Select the **+ Add artifact** row under the role assignment artifact.
+ 1. Under the role assignment artifact, select **+ Add artifact**.
- 1. Select **Policy assignment** for **Artifact type**.
+ 1. For **Artifact type**, select **Policy assignment**.
1. Change **Type** to **Built-in**. In **Search**, enter **tag**.
Manager template and role assignment on the new resource group.
1. Select the row of the policy assignment **Append tag and its value to resource groups**.
-1. The window to provide parameters to the artifact as part of the blueprint definition opens and
- allows setting the parameters for all assignments (static parameters) based on this blueprint
- instead of during assignment (dynamic parameters). This example uses dynamic parameters during
- blueprint assignment, so leave the defaults and select **Cancel**.
+1. The window to provide parameters to the artifact as part of the blueprint definition opens. You can set the parameters for all assignments (static parameters) based on this blueprint, instead of during assignment (dynamic parameters). This example uses dynamic parameters during blueprint assignment, so leave the defaults and select **Cancel**.
1. Add a resource group at the subscription level:
- 1. Select the **+ Add artifact** row under **Subscription**.
+ 1. Under **Subscription**, select **+ Add artifact**.
- 1. Select **Resource group** for **Artifact type**.
+ 1. For **Artifact type**, select **Resource group**.
- 1. Leave the **Artifact display name**, **Resource Group Name**, and **Location** boxes blank,
- but make sure that the check box is checked for each parameter property to make them dynamic
- parameters.
+ 1. Leave the **Artifact display name**, **Resource Group Name**, and **Location** boxes blank. Make sure that the check box is checked for each parameter property to make them dynamic parameters.
   1. Select **Add** to add this artifact to the blueprint.

1. Add a template under the resource group:
- 1. Select the **+ Add artifact** row under the **ResourceGroup** entry.
+ 1. Under **ResourceGroup**, select **+ Add artifact**.
- 1. Select **Azure Resource Manager template** for **Artifact type**, set **Artifact display
+ 1. For **Artifact type**, select **Azure Resource Manager template**. Set **Artifact display
name** to **StorageAccount**, and leave **Description** blank.
- 1. On the **Template** tab in the editor box, paste the following ARM template. After you paste
- the template, select the **Parameters** tab and note that the template parameters
- **storageAccountType** and **location** were detected. Each parameter was automatically
- detected and populated, but configured as a dynamic parameter.
+ 1. On the **Template** tab in the editor box, paste the following ARM template. After you paste the template, select the **Parameters** tab, and note that the template parameters `storageAccountType` and `location` were detected. Each parameter was automatically detected and populated, but configured as a dynamic parameter.
> [!IMPORTANT]
> If you're importing the template, ensure that the file is only JSON and doesn't include
- > HTML. When you're pointing to a URL on GitHub, ensure that you have selected **RAW** to get
- > the pure JSON file and not the one wrapped with HTML for display on GitHub. An error occurs
- > if the imported template is not purely JSON.
+ > HTML. When you're pointing to a URL on GitHub, ensure that you have selected **RAW** to get the pure JSON file, and not the one wrapped with HTML for display on GitHub. An error occurs if the imported template is not purely JSON. For a quick way to check a template file locally, see the sketch at the end of this section.
```json
{
Manager template and role assignment on the new resource group.
}
```
- 1. Clear the **storageAccountType** check box and note that the dropdown list contains only
- values included in the ARM template under **allowedValues**. Select the box to set it back to
- a dynamic parameter.
+ 1. Clear the **storageAccountType** check box, and note that the dropdown list contains only
+ values included in the ARM template under `allowedValues`. Select the box to set it back to a dynamic parameter.
   1. Select **Add** to add this artifact to the blueprint.

   :::image type="content" source="./media/create-blueprint-portal/add-resource-manager-template.png" alt-text="Screenshot of the Resource Manager template artifact options for adding to a blueprint definition." border="false":::
-1. Your completed blueprint should look similar to the following. Notice that each artifact has
- **_x_ out of _y_ parameters populated** in the **Parameters** column. The dynamic parameters are
- set during each assignment of the blueprint.
+1. Your completed blueprint should look similar to the following. In the **Parameters** column, notice that each artifact has **_x_ out of _y_ parameters populated**. The dynamic parameters are set during each assignment of the blueprint.
:::image type="content" source="./media/create-blueprint-portal/completed-blueprint.png" alt-text="Screenshot of a completed blueprint definition with each artifact type." border="false":::
-1. Now that all planned artifacts have been added, select **Save Draft** at the bottom of the page.
+1. Now that you've added all planned artifacts, select **Save Draft** at the bottom of the page.
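
The earlier important note says an imported template must be pure JSON. As a hedged aside, the following is a minimal sketch for checking a template file locally before you paste it. The file name *storageaccount.json* is a placeholder, and `Test-Json` requires PowerShell 6 or later.

```azurepowershell-interactive
# A sketch: confirm the file parses as pure JSON (no HTML wrapper) before
# pasting it into the blueprint editor. The file name is a placeholder.
Get-Content -Path .\storageaccount.json -Raw | Test-Json
```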
## Edit a blueprint
assignment to the new resource group. You can fix both by following these steps:
1. Select **Blueprint definitions** from the page on the left.
-1. In the list of blueprints, select and hold (or right-click) the one that you previously created
- and select **Edit blueprint**.
+1. In the list of blueprints, select and hold (or right-click) the one that you previously created. Then select **Edit blueprint**.
-1. In **Blueprint description**, provide some information about the blueprint and the artifacts that
- compose it. In this case, enter something like: **This blueprint sets tag policy and role
- assignment on the subscription, creates a ResourceGroup, and deploys a resource template and role
- assignment to that ResourceGroup.**
+1. In **Blueprint description**, provide some information about the blueprint and the artifacts that compose it. In this case, enter something like: *This blueprint sets tag policy and role assignment on the subscription, creates a ResourceGroup, and deploys a resource template and role assignment to that ResourceGroup.*
-1. Select **Next : Artifacts** at the bottom of the page or the **Artifacts** tab at the top of the
- page.
+1. Select **Next : Artifacts** at the bottom of the page, or the **Artifacts** tab at the top of the page.
1. Add a role assignment under the resource group:
- 1. Select the **+ Add artifact** row directly under the **ResourceGroup** entry.
+ 1. Under **ResourceGroup**, select **+ Add artifact**.
- 1. Select **Role assignment** for **Artifact type**.
+ 1. For **Artifact type**, select **Role assignment**.
- 1. Under **Role**, select **Owner**, and clear the check box under the **Add user, app or group**
- box.
+ 1. Under **Role**, select **Owner**, and clear the check box under the **Add user, app or group** box.
- 1. Search for and select a user, app, or group to add. This artifact uses a static parameter set
- the same in every assignment of this blueprint.
+ 1. Search for and select a user, app, or group to add. This artifact uses a static parameter, which is set the same in every assignment of this blueprint.
   1. Select **Add** to add this artifact to the blueprint.

   :::image type="content" source="./media/create-blueprint-portal/add-role-assignment-2.png" alt-text="Screenshot of the second role assignment artifact options for adding to a blueprint definition." border="false":::
-1. Your completed blueprint should look similar to the following. Notice that the newly added role
- assignment shows **1 out of 1 parameters populated**. That means it's a static parameter.
+1. Your completed blueprint should look similar to the following. Notice that the newly added role assignment shows **1 out of 1 parameters populated**. That means it's a static parameter.
:::image type="content" source="./media/create-blueprint-portal/completed-blueprint-2.png" alt-text="Screenshot of the second completed blueprint definition with the additional role assignment artifact." border="false":::
assignment to the new resource group. You can fix both by following these steps:
## Publish a blueprint
-Now that all the planned artifacts have been added to the blueprint, it's time to publish it.
-Publishing makes the blueprint available to be assigned to a subscription.
+Now that you've added all the planned artifacts to the blueprint, it's time to publish it. Publishing makes the blueprint available to be assigned to a subscription.
1. Select **Blueprint definitions** from the page on the left.
-1. In the list of blueprints, select and hold (or right-click) the one you previously created and
- select **Publish blueprint**.
+1. In the list of blueprints, select and hold (or right-click) the one you previously created. Then select **Publish blueprint**.
1. In the pane that opens, provide a **Version** (letters, numbers, and hyphens with a maximum
- length of 20 characters), such as **v1**. Optionally, enter text in **Change notes**, such as
- **First publish**.
+ length of 20 characters), such as **v1**. Optionally, enter text in **Change notes**, such as *First publish*.
1. Select **Publish** at the bottom of the page.

## Assign a blueprint
-After a blueprint has been published, it can be assigned to a subscription. Assign the blueprint
-that you created to one of the subscriptions under your management group hierarchy. If the blueprint
-is saved to a subscription, it can only be assigned to that subscription.
+After you publish a blueprint, you can assign it to a subscription. Assign the blueprint
+that you created to one of the subscriptions under your management group hierarchy. If the blueprint is saved to a subscription, it can only be assigned to that subscription.
1. Select **Blueprint definitions** from the page on the left.
-1. In the list of blueprints, select and hold (or right-click) the one that you previously created
- (or select the ellipsis) and select **Assign blueprint**.
+1. In the list of blueprints, select and hold (or right-click) the one that you previously created (or select the ellipsis). Then select **Assign blueprint**.
1. On the **Assign blueprint** page, in the **Subscription** dropdown list, select the
- subscriptions that you want to deploy this blueprint to.
+ subscriptions to which you want to deploy this blueprint.
If there are supported Enterprise offerings available from [Azure Billing](../../cost-management-billing/index.yml), a **Create new** link is activated
is saved to a subscription, it can only be assigned to that subscription.
1. Select the **Create new** link to create a new subscription instead of selecting existing ones.
- 1. Provide a **Display name** for the new subscription.
+ 1. For **Display name**, enter a name for the new subscription.
- 1. Select the available **Offer** from the dropdown list.
+ 1. For **Offer**, select the available offer from the dropdown list.
- 1. Use the ellipsis to select the [management group](../management-groups/overview.md) that the
- subscription will be a child of.
+ 1. For **Management group**, select the ellipsis to choose the [management group](../management-groups/overview.md) that the subscription will be a child of.
1. Select **Create** at the bottom of the page.
is saved to a subscription, it can only be assigned to that subscription.
> The new subscription is created immediately after you select **Create**.

> [!NOTE]
- > An assignment is created for each subscription that you select. You can make changes to a
- > single subscription assignment at a later time without forcing changes on the remainder of the
- > selected subscriptions.
+ > An assignment is created for each subscription that you select. You can make changes to a single subscription assignment at a later time, without forcing changes on the remainder of the selected subscriptions.
1. For **Assignment name**, provide a unique name for this assignment.
-1. In **Location**, select a region for the managed identity and subscription deployment object to
- be created in. Azure Blueprints uses this managed identity to deploy all artifacts in the assigned
- blueprint. To learn more, see
- [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+1. In **Location**, select a region for the managed identity and subscription deployment object to be created in. Azure Blueprints uses this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
-1. Leave the **Blueprint definition version** dropdown list selection of **Published** versions on
- the **v1** entry. (The default is the most recently published version.)
+1. In the **Blueprint definition version** dropdown list of **Published** versions, leave **v1** selected. (The default is the most recently published version.)
1. For **Lock Assignment**, leave the default of **Don't Lock**. For more information, see [Blueprints resource locking](./concepts/resource-locking.md).
- :::image type="content" source="./media/create-blueprint-portal/assignment-locking-mi.png" alt-text="Screenshot of the Locking assignment and managed identity options for the blueprint assignment." border="false":::
+ :::image type="content" source="./media/create-blueprint-portal/assignment-locking-mi.png" alt-text="Screenshot of the locking assignment and managed identity options for the blueprint assignment." border="false":::
1. Under **Managed Identity**, leave the default of **System assigned**.
-1. For the subscription level role assignment **[User group or application name] : Contributor**,
- search for and select a user, app, or group.
+1. For the subscription-level role assignment **[User group or application name] : Contributor**, search for and select a user, app, or group.
-1. For the subscription level policy assignment, set **Tag Name** to **CostCenter** and the **Tag
- Value** to **ContosoIT**.
+1. For the subscription-level policy assignment, set **Tag Name** to **CostCenter**, and set **Tag Value** to **ContosoIT**.
-1. For **ResourceGroup**, provide a **Name** of **StorageAccount** and a **Location** of **East US
- 2** from the dropdown list.
+1. For **ResourceGroup**, provide a name of **StorageAccount** and a location of **East US 2** from the dropdown list.
> [!NOTE]
- > For each artifact that you added under the resource group during blueprint definition, that
- > artifact is indented to align with the resource group or object that you'll deploy it with.
- > Artifacts that either don't take parameters or have no parameters to be defined at assignment
- > are listed only for contextual information.
+ > For each artifact that you added under the resource group during blueprint definition, that artifact is indented to align with the resource group or object that you'll deploy it with. Artifacts that either don't take parameters, or have no parameters to be defined at assignment, are listed only for contextual information.
-1. On the ARM template **StorageAccount**, select **Standard_GRS** for the **storageAccountType**
- parameter.
+1. On the ARM template **StorageAccount**, select **Standard_GRS** for the **storageAccountType** parameter.
1. Read the information box at the bottom of the page, and then select **Assign**.

## Track deployment of a blueprint
-When a blueprint has been assigned to one or more subscriptions, two things happen:
+When you assign a blueprint to one or more subscriptions, two things happen:
- The blueprint is added to the **Assigned blueprints** page for each subscription.
- The process of deploying all the artifacts defined by the blueprint begins.
-Now that the blueprint has been assigned to a subscription, verify the progress of the deployment:
+Now that you've assigned the blueprint to a subscription, verify the progress of the deployment:
1. Select **Assigned blueprints** from the page on the left.
-1. In the list of blueprints, select and hold (or right-click) the one that you previously assigned
- and select **View assignment details**.
+1. In the list of blueprints, select and hold (or right-click) the one that you previously assigned. Then select **View assignment details**.
- :::image type="content" source="./media/create-blueprint-portal/view-assignment-details.png" alt-text="Screenshot of the blueprint assignment context menu with the 'View assignment details' option selected." border="false":::
+ :::image type="content" source="./media/create-blueprint-portal/view-assignment-details.png" alt-text="Screenshot of the blueprint assignment context menu with the View assignment details option selected." border="false":::
-1. On the **Blueprint assignment** page, validate that all artifacts were successfully deployed and
- that there were no errors during the deployment. If errors occurred, see
- [Troubleshooting blueprints](./troubleshoot/general.md) for steps to determine what went wrong.
+1. On the **Blueprint assignment** page, validate that all artifacts were successfully deployed, and that there were no errors during the deployment. If errors occurred, see [Troubleshooting blueprints](./troubleshoot/general.md) for steps to determine what went wrong.
## Clean up resources

### Unassign a blueprint
-If you no longer need a blueprint assignment, remove it from a subscription. The blueprint might
-have been replaced by a newer blueprint with updated patterns, policies, and designs. When a
-blueprint is removed, the artifacts assigned as part of that blueprint are left behind. To remove a
-blueprint assignment, follow these steps:
+If you no longer need a blueprint assignment, remove it from a subscription. The blueprint might have been replaced by a newer blueprint with updated patterns, policies, and designs. When a blueprint is removed, the artifacts assigned as part of that blueprint are left behind. To remove a blueprint assignment, follow these steps:
1. Select **Assigned blueprints** from the page on the left.
-1. In the list of blueprints, select the blueprint that you want to unassign. Then select the
- **Unassign blueprint** button at the top of the page.
+1. In the list of blueprints, select the blueprint that you want to unassign. Then select **Unassign blueprint** at the top of the page.
-1. Read the confirmation message and then select **OK**.
+1. Read the confirmation message, and then select **OK**.
### Delete a blueprint

1. Select **Blueprint definitions** from the page on the left.
-1. Right-click the blueprint that you want to delete, and select **Delete blueprint**. Then select
- **Yes** in the confirmation dialog box.
+1. Right-click the blueprint that you want to delete, and select **Delete blueprint**. Then select **Yes** in the confirmation dialog box.
> [!NOTE]
-> Deleting a blueprint in this method also deletes all published versions of the selected blueprint.
-> To delete a single version, open the blueprint, select the **Published versions** tab, select the
-> version that you want to delete, and then select **Delete This Version**. Also, you can't delete a
-> blueprint until you've deleted all blueprint assignment of that blueprint definition.
+> Deleting a blueprint by using this method also deletes all published versions of the selected blueprint. To delete a single version, open the blueprint, and select the **Published versions** tab. Then select the version that you want to delete, and then select **Delete This Version**. Also, you can't delete a blueprint until you've deleted all blueprint assignments of that blueprint definition.
## Next steps
-In this quickstart, you've created, assigned, and removed a blueprint with Azure portal. To learn
+In this quickstart, you created, assigned, and removed a blueprint with the Azure portal. To learn
more about Azure Blueprints, continue to the blueprint lifecycle article.

> [!div class="nextstepaction"]
governance Create Blueprint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-powershell.md
Title: 'Quickstart: Create a blueprint with PowerShell'
-description: In this quickstart, you use Azure Blueprints to create, define, and deploy artifacts using the PowerShell.
+description: In this quickstart, you use Azure Blueprints to create, define, and deploy artifacts by using PowerShell.
Last updated 08/17/2021
-# Quickstart: Define and Assign an Azure blueprint with PowerShell
+# Quickstart: Define and assign an Azure blueprint with PowerShell
-Learning how to create and assign blueprints enables the definition of common patterns to develop
-reusable and rapidly deployable configurations based on Azure Resource Manager templates (ARM
-templates), policy, security, and more. In this tutorial, you learn to use Azure Blueprints to do
-some of the common tasks related to creating, publishing, and assigning a blueprint within your
-organization, such as:
+In this tutorial, you learn to use Azure Blueprints to do some of the common tasks related to creating, publishing, and assigning a blueprint within your organization. This skill helps you define common patterns to develop reusable and rapidly deployable configurations, based on Azure Resource Manager (ARM) templates, policy, and security.
## Prerequisites

- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.
- If it isn't already installed, follow the instructions in
- [Add the Az.Blueprint module](./how-to/manage-assignments-ps.md#add-the-azblueprint-module) to
- install and validate the **Az.Blueprint** module from the PowerShell Gallery.
+ [Add the Az.Blueprint module](./how-to/manage-assignments-ps.md#add-the-azblueprint-module) to install and validate the **Az.Blueprint** module from the PowerShell Gallery.
- If you've not used Azure Blueprints before, register the resource provider through Azure PowerShell with `Register-AzResourceProvider -ProviderNamespace Microsoft.Blueprint`.
organization, such as:
## Create a blueprint

The first step in defining a standard pattern for compliance is to compose a blueprint from the
-available resources. We'll create a blueprint named 'MyBlueprint' to configure role and policy
-assignments for the subscription. Then we'll add a resource group, an ARM template, and a role
+available resources. Let's create a blueprint named *MyBlueprint* to configure role and policy
+assignments for the subscription. Then you add a resource group, an ARM template, and a role
assignment on the resource group.

> [!NOTE]
-> When using PowerShell, the _blueprint_ object is created first. For each _artifact_ to be added
-> that has parameters, the parameters need to be defined in advance on the initial _blueprint_.
+> When you're using PowerShell, the _blueprint_ object is created first. For each _artifact_ to be added that has parameters, you define the parameters in advance on the initial _blueprint_.
-1. Create the initial _blueprint_ object. The **BlueprintFile** parameter takes a JSON file that
- includes properties about the blueprint, any resource groups to create, and all of the blueprint
- level parameters. The parameters are set during assignment and used by the artifacts added in
- later steps.
+1. Create the initial _blueprint_ object. The `BlueprintFile` parameter takes a JSON file that
+ includes properties about the blueprint, any resource groups to create, and all of the blueprint-level parameters. You set the parameters during assignment, and they're used by the artifacts you add in later steps.
- - JSON file - blueprint.json
+ - JSON file - *blueprint.json*
```json
{
assignment on the resource group.
```

> [!NOTE]
- > Use the filename _blueprint.json_ when creating your blueprint definitions programmatically.
- > This file name is used when calling
- > [Import-AzBlueprintWithArtifact](/powershell/module/az.blueprint/import-azblueprintwithartifact).
+ > Use the filename _blueprint.json_ when you create your blueprint definitions programmatically. This file name is used when you call [`Import-AzBlueprintWithArtifact`](/powershell/module/az.blueprint/import-azblueprintwithartifact).
The blueprint object is created in the default subscription by default. To specify the
- management group, use parameter **ManagementGroupId**. To specify the subscription, use
- parameter **SubscriptionId**.
+ management group, use the parameter `ManagementGroupId`. To specify the subscription, use
+ the parameter `SubscriptionId`.
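
A minimal sketch of that call, assuming *blueprint.json* is in the current folder and `{YourMG}` is a placeholder for your management group ID:

```azurepowershell-interactive
# A sketch: create the blueprint object at the management group scope.
# Swap ManagementGroupId for SubscriptionId to save it to a subscription instead.
$blueprint = New-AzBlueprint -Name 'MyBlueprint' -ManagementGroupId '{YourMG}' -BlueprintFile .\blueprint.json
```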
-1. Add role assignment at subscription. The **ArtifactFile** defines the _kind_ of artifact, the
- properties align to the role definition identifier, and the principal identities are passed as an
- array of values. In the following example, the principal identities granted the specified role
- are configured to a parameter that is set during blueprint assignment. This example uses the
- _Contributor_ built-in role with a GUID of `b24988ac-6180-42a0-ab88-20f7382dd24c`.
+1. Add a role assignment at the subscription. The `ArtifactFile` defines the kind of artifact, the properties align to the role definition identifier, and the principal identities are passed as an array of values. In the following example, the principal identities granted the specified role are configured to a parameter that is set during blueprint assignment. This example uses the `Contributor` built-in role, with a GUID of `b24988ac-6180-42a0-ab88-20f7382dd24c`.
- - JSON file - \artifacts\roleContributor.json
+ - JSON file - *\artifacts\roleContributor.json*
```json
{
assignment on the resource group.
New-AzBlueprintArtifact -Blueprint $blueprint -Name 'roleContributor' -ArtifactFile .\artifacts\roleContributor.json
```
-1. Add policy assignment at subscription. The **ArtifactFile** defines the _kind_ of artifact, the
- properties that align to a policy or initiative definition, and configures the policy assignment
- to use the defined blueprint parameters to configure during blueprint assignment. This example
- uses the _Apply tag and its default value to resource groups_ built-in policy with a GUID of
- `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
+1. Add a policy assignment at the subscription. The `ArtifactFile` defines the kind of artifact, the properties align to a policy or initiative definition, and the policy assignment is configured to use the defined blueprint parameters during blueprint assignment. This example uses the `Apply tag and its default value to resource groups` built-in policy, with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
- - JSON file - \artifacts\policyTags.json
+ - JSON file - *\artifacts\policyTags.json*
```json
{
assignment on the resource group.
New-AzBlueprintArtifact -Blueprint $blueprint -Name 'policyTags' -ArtifactFile .\artifacts\policyTags.json
```
-1. Add another policy assignment for Storage tag (reusing _storageAccountType_ parameter) at
- subscription. This additional policy assignment artifact demonstrates that a parameter defined on
- the blueprint is usable by more than one artifact. In the example, the **storageAccountType** is
- used to set a tag on the resource group. This value provides information about the storage
- account that is created in the next step. This example uses the _Apply tag and its default value
- to resource groups_ built-in policy with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
+1. Add another policy assignment for the storage tag (by reusing the `storageAccountType` parameter) at the subscription. This additional policy assignment artifact demonstrates that a parameter defined on the blueprint is usable by more than one artifact. In the example, you use `storageAccountType` to set a tag on the resource group. This value provides information about the storage account that you create in the next step. This example uses the `Apply tag and its default value to resource groups` built-in policy, with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
- - JSON file - \artifacts\policyStorageTags.json
+ - JSON file - *\artifacts\policyStorageTags.json*
```json
{
assignment on the resource group.
New-AzBlueprintArtifact -Blueprint $blueprint -Name 'policyStorageTags' -ArtifactFile .\artifacts\policyStorageTags.json
```
-1. Add template under resource group. The **TemplateFile** for an ARM template includes the normal
- JSON component of the template. The template also reuses the **storageAccountType**, **tagName**,
- and **tagValue** blueprint parameters by passing each to the template. The blueprint parameters
- are available to the template by using parameter **TemplateParameterFile** and inside the
- template JSON that key-value pair is used to inject the value. The blueprint and template
- parameter names could be the same.
+1. Add a template under the resource group. The `TemplateFile` for an ARM template includes the normal JSON component of the template. The template also reuses the `storageAccountType`, `tagName`, and `tagValue` blueprint parameters by passing each to the template. The blueprint parameters are made available to the template through the `TemplateParameterFile` parameter, and inside the template JSON, that key-value pair is used to inject the value. The blueprint and template parameter names might be the same.
- - JSON ARM template file - \artifacts\templateStorage.json
+ - JSON ARM template file - *\artifacts\templateStorage.json*
```json
{
assignment on the resource group.
}
```
- - JSON ARM template parameter file - \artifacts\templateStorageParams.json
+ - JSON ARM template parameter file - *\artifacts\templateStorageParams.json*
```json
{
assignment on the resource group.
New-AzBlueprintArtifact -Blueprint $blueprint -Type TemplateArtifact -Name 'templateStorage' -TemplateFile .\artifacts\templateStorage.json -TemplateParameterFile .\artifacts\templateStorageParams.json -ResourceGroupName storageRG
```
-1. Add role assignment under resource group. Similar to the previous role assignment entry, the
- example below uses the definition identifier for the **Owner** role and provides it a different
- parameter from the blueprint. This example uses the _Owner_ built-in role with a GUID of
- `8e3af657-a8ff-443c-a75c-2fe8c4bcb635`.
+1. Add a role assignment under the resource group. Similar to the previous role assignment entry, the following example uses the definition identifier for the `Owner` role, and provides it a different parameter from the blueprint. This example uses the `Owner` built-in role, with a GUID of `8e3af657-a8ff-443c-a75c-2fe8c4bcb635`.
- - JSON file - \artifacts\roleOwner.json
+ - JSON file - *\artifacts\roleOwner.json*
```json
{
assignment on the resource group.
## Publish a blueprint
-Now that the artifacts have been added to the blueprint, it's time to publish it. Publishing makes
-it available to assign to a subscription.
+Now that you've added the artifacts to the blueprint, it's time to publish it. Publishing makes
+the blueprint available to assign to a subscription.
```azurepowershell-interactive
# Use the reference to the new blueprint object from the previous steps
Publish-AzBlueprint -Blueprint $blueprint -Version '{BlueprintVersion}'
```
-The value for `{BlueprintVersion}` is a string of letters, numbers, and hyphens (no spaces or other
-special characters) with a max length of 20 characters. Use something unique and informational such
-as **v20180622-135541**.
+The value for `{BlueprintVersion}` is a string of letters, numbers, and hyphens (with no spaces or other special characters). The maximum length is 20 characters. Use something unique and informational, such as `v20180622-135541`.
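
For example, a hedged way to generate and use such a version string:

```azurepowershell-interactive
# A sketch: build a unique, timestamp-based version string and publish with it
$version = 'v{0}' -f (Get-Date -Format 'yyyyMMdd-HHmmss')
Publish-AzBlueprint -Blueprint $blueprint -Version $version
```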
## Assign a blueprint
-Once a blueprint is published using PowerShell, it's assignable to a subscription. Assign the
-blueprint you created to one of the subscriptions under your management group hierarchy. If the
-blueprint is saved to a subscription, it can only be assigned to that subscription. The
-**Blueprint** parameter specifies the blueprint to assign. To provide name, location, identity,
-lock, and blueprint parameters, use the matching PowerShell parameters on the
-`New-AzBlueprintAssignment` cmdlet or provide them in the **AssignmentFile** parameter JSON file.
+After you've published a blueprint by using PowerShell, it's assignable to a subscription. Assign the blueprint that you created to one of the subscriptions under your management group hierarchy. If the blueprint is saved to a subscription, it can only be assigned to that subscription. The `Blueprint` parameter specifies the blueprint to assign. To provide the `name`, `location`, `identity`, `lock`, and `blueprint` parameters, use the matching PowerShell parameters on the `New-AzBlueprintAssignment` cmdlet, or provide them in the `AssignmentFile` parameter JSON file.
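
As a hedged sketch (the subscription ID is a placeholder), an assignment that supplies its settings through an assignment file might look like this:

```azurepowershell-interactive
# A sketch: assign the published blueprint by using the blueprintAssignment.json
# file described in the next step.
New-AzBlueprintAssignment -Name 'assignMyBlueprint' -Blueprint $blueprint -SubscriptionId '{subscriptionId}' -AssignmentFile .\blueprintAssignment.json
```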
-1. Run the blueprint deployment by assigning it to a subscription. As the **contributors** and
- **owners** parameters require an array of objectIds of the principals to be granted the role
- assignment, use
- [Azure Active Directory Graph API](/graph/migrate-azure-ad-graph-planning-checklist)
- for gathering the objectIds for use in the **AssignmentFile** for your own users, groups, or
- service principals.
+1. Run the blueprint deployment by assigning it to a subscription. Because the `contributors` and `owners` parameters require an array of `objectIds` of the principals to be granted the role assignment, use [Azure Active Directory Graph API](/graph/migrate-azure-ad-graph-planning-checklist) for gathering the `objectIds` for use in the `AssignmentFile` for your own users, groups, or service principals.
- - JSON file - blueprintAssignment.json
+ - JSON file - *blueprintAssignment.json*
```json
{
lock, and blueprint parameters, use the matching PowerShell parameters on the
- User-assigned managed identity

  A blueprint assignment can also use a
- [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
- In this case, the **identity** portion of the JSON assignment file changes as follows. Replace
- `{tenantId}`, `{subscriptionId}`, `{yourRG}`, and `{userIdentity}` with your tenantId,
- subscriptionId, resource group name, and the name of your user-assigned managed identity,
- respectively.
+ [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md). In this case, the `identity` portion of the JSON assignment file changes as follows. Replace `{tenantId}`, `{subscriptionId}`, `{yourRG}`, and `{userIdentity}` with your tenant ID, subscription ID, resource group name, and the name of your user-assigned managed identity, respectively.
```json
"identity": {
lock, and blueprint parameters, use the matching PowerShell parameters on the
},
```
- The **user-assigned managed identity** can be in any subscription and resource group the user
- assigning the blueprint has permissions to.
+ The user-assigned managed identity can be in any subscription and resource group to which the user assigning the blueprint has permissions.
> [!IMPORTANT]
- > Azure Blueprints doesn't manage the user-assigned managed identity. Users are responsible for
- > assigning sufficient roles and permissions or the blueprint assignment will fail.
+ > Azure Blueprints doesn't manage the user-assigned managed identity. Users are responsible for assigning sufficient roles and permissions, or the blueprint assignment will fail.
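
For example, a hedged sketch of pre-granting a role to the identity. All names are placeholders, and `Get-AzUserAssignedIdentity` comes from the Az.ManagedServiceIdentity module.

```azurepowershell-interactive
# A sketch: grant the user-assigned identity sufficient rights on the target
# subscription before assigning the blueprint.
$identity = Get-AzUserAssignedIdentity -ResourceGroupName '{yourRG}' -Name '{userIdentity}'
New-AzRoleAssignment -ObjectId $identity.PrincipalId -RoleDefinitionName 'Owner' -Scope '/subscriptions/{subscriptionId}'
```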
## Clean up resources
-### Unassign a blueprint
-
-You can remove a blueprint from a subscription. Removal is often done when the artifact resources
-are no longer needed. When a blueprint is removed, the artifacts assigned as part of that blueprint
-are left behind. To remove a blueprint assignment, use the `Remove-AzBlueprintAssignment` cmdlet:
+You can remove a blueprint from a subscription. Removal is often done when the artifact resources are no longer needed. When a blueprint is removed, the artifacts assigned as part of that blueprint are left behind. To remove a blueprint assignment, use the `Remove-AzBlueprintAssignment` cmdlet:
assignMyBlueprint
```azurepowershell-interactive
Remove-AzBlueprintAssignment -Name 'assignMyBlueprint'
```
## Next steps
-In this quickstart, you've created, assigned, and removed a blueprint with PowerShell. To learn more
-about Azure Blueprints, continue to the blueprint lifecycle article.
+In this quickstart, you created, assigned, and removed a blueprint with PowerShell. To learn more about Azure Blueprints, continue to the blueprint lifecycle article.
> [!div class="nextstepaction"]
> [Learn about the blueprint lifecycle](./concepts/lifecycle.md)
governance Create Blueprint Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-rest-api.md
Title: "Quickstart: Create a blueprint with REST API"
-description: In this quickstart, you use Azure Blueprints to create, define, and deploy artifacts using the REST API.
+ Title: 'Quickstart: Create a blueprint with REST API'
+description: In this quickstart, you use Azure Blueprints to create, define, and deploy artifacts by using the REST API.
Last updated 08/17/2021

# Quickstart: Define and assign an Azure blueprint with REST API
-Learning how to create and assign blueprints enables the definition of common patterns to develop
-reusable and rapidly deployable configurations based on Azure Resource Manager templates (ARM
-templates), policy, security, and more. In this tutorial, you learn to use Azure Blueprints to do
-some of the common tasks related to creating, publishing, and assigning a blueprint within your
-organization, such as:
+In this tutorial, you learn to use Azure Blueprints to do some of the common tasks related to creating, publishing, and assigning a blueprint within your organization. This skill helps you define common patterns to develop reusable and rapidly deployable configurations, based on Azure Resource Manager (ARM) templates, policy, and security.
## Prerequisites
organization, such as:
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-## Getting started with REST API
+## Get started with REST API
-If you're unfamiliar with REST API, start by reviewing [Azure REST API Reference](/rest/api/azure/)
-to get a general understanding of REST API, specifically request URI and request body. This article
-uses these concepts to provide directions for working with Azure Blueprints and assumes a working
-knowledge of them. Tools such as [ARMClient](https://github.com/projectkudu/ARMClient) and others
-may handle authorization automatically and are recommended for beginners.
+If you're unfamiliar with REST API, start by reviewing the [Azure REST API Reference](/rest/api/azure/), specifically the sections about request URI and request body. This quickstart uses these concepts to provide directions for working with Azure Blueprints, and assumes a working knowledge of them. Tools such as [ARMClient](https://github.com/projectkudu/ARMClient) can handle authorization automatically, and are recommended for beginners.
For the Azure Blueprints specs, see [Azure Blueprints REST API](/rest/api/blueprints/).

### REST API and PowerShell

If you don't already have a tool for making REST API calls, consider using PowerShell for these
-instructions. Following is a sample header for authenticating with Azure. Generate an authentication
-header, sometimes called a **Bearer token**, and provide the REST API URI to connect to with any
-parameters or a **Request Body**:
+instructions. The following is a sample header for authenticating with Azure. Generate an authentication header, sometimes called a *bearer token*, and provide the REST API URI to connect to, along with any parameters or a `Request Body`:
```azurepowershell-interactive
# Log in first with Connect-AzAccount if not using Cloud Shell
$restUri = 'https://management.azure.com/subscriptions/{subscriptionId}?api-vers
$response = Invoke-RestMethod -Uri $restUri -Method Get -Headers $authHeader
```
-Replace `{subscriptionId}` in the **$restUri** variable above to get information about your
-subscription. The $response variable holds the result of the `Invoke-RestMethod` cmdlet, which can
-be parsed with cmdlets such as
-[ConvertFrom-Json](/powershell/module/microsoft.powershell.utility/convertfrom-json). If the REST
-API service endpoint expects a **Request Body**, provide a JSON formatted variable to the `-Body`
-parameter of `Invoke-RestMethod`.
+Replace `{subscriptionId}` in the preceding `$restUri` variable to get information about your
+subscription. The `$response` variable holds the result of the `Invoke-RestMethod` cmdlet, which you can parse with cmdlets such as [ConvertFrom-Json](/powershell/module/microsoft.powershell.utility/convertfrom-json). If the REST API service endpoint expects a `Request Body`, provide a JSON-formatted variable to the `-Body` parameter of `Invoke-RestMethod`.
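
For example, a hedged sketch of building such a header with the Az.Accounts module and passing a request body. The payload here is a placeholder, not a real Blueprints call.

```azurepowershell-interactive
# A sketch: build a bearer-token header and send a JSON body with the request.
$token = (Get-AzAccessToken).Token
$authHeader = @{
    'Content-Type'  = 'application/json'
    'Authorization' = 'Bearer ' + $token
}
$body = @{ properties = @{ description = 'example' } } | ConvertTo-Json -Depth 5
$response = Invoke-RestMethod -Uri $restUri -Method Put -Headers $authHeader -Body $body
```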
## Create a blueprint

The first step in defining a standard pattern for compliance is to compose a blueprint from the
-available resources. We'll create a blueprint named 'MyBlueprint' to configure role and policy
-assignments for the subscription. Then we'll add a resource group, an ARM template, and a role
+available resources. Let's create a blueprint named *MyBlueprint* to configure role and policy
+assignments for the subscription. Then you add a resource group, an ARM template, and a role
assignment on the resource group. > [!NOTE]
-> When using the REST API, the _blueprint_ object is created first. For each _artifact_ to be added
-> that has parameters, the parameters need to be defined in advance on the initial _blueprint_.
+> When you're using the REST API, the _blueprint_ object is created first. For each _artifact_ to be added that has parameters, you define the parameters in advance on the initial _blueprint_.
-In each REST API URI, there are variables that are used that you need to replace with your own
-values:
+In each REST API URI, replace the following variables with your own values:
-- `{YourMG}` - Replace with the ID of your management group
-- `{subscriptionId}` - Replace with your subscription ID
+- `{YourMG}` - Replace with the ID of your management group.
+- `{subscriptionId}` - Replace with your subscription ID.
> [!NOTE]
-> Blueprints may also be created at the subscription level. To see an example, see
+> You can also create blueprints at the subscription level. For more information, see
> [create blueprint at subscription example](/rest/api/blueprints/blueprints/createorupdate#subscriptionblueprint).
-1. Create the initial _blueprint_ object. The **Request Body** includes properties about the
- blueprint, any resource groups to create, and all of the blueprint level parameters. The
- parameters are set during assignment and used by the artifacts added in later steps.
+1. Create the initial _blueprint_ object. The `Request Body` includes properties about the
+ blueprint, any resource groups to create, and all of the blueprint-level parameters. You set the parameters during assignment, and they're used by the artifacts you add in later steps.
- REST API URI
values:
}
```
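
A hedged PowerShell sketch of this call, reusing the `$authHeader` pattern from earlier. `{YourMG}` is a placeholder, and *blueprint.json* is a hypothetical local file that holds the `Request Body` for this step.

```azurepowershell-interactive
# A sketch: create the blueprint at the management group scope.
$restUri = 'https://management.azure.com/providers/Microsoft.Management/managementGroups/{YourMG}/providers/Microsoft.Blueprint/blueprints/MyBlueprint?api-version=2018-11-01-preview'
$body = Get-Content -Path .\blueprint.json -Raw
$response = Invoke-RestMethod -Uri $restUri -Method Put -Headers $authHeader -Body $body
```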
-1. Add role assignment at subscription. The **Request Body** defines the _kind_ of artifact, the
- properties align to the role definition identifier, and the principal identities are passed as an
- array of values. In the following example, the principal identities granted the specified role
- are configured to a parameter that is set during blueprint assignment. This example uses the
- _Contributor_ built-in role with a GUID of `b24988ac-6180-42a0-ab88-20f7382dd24c`.
+1. Add a role assignment at the subscription. The `Request Body` defines the kind of artifact, the properties align to the role definition identifier, and the principal identities are passed as an array of values. In the following example, the principal identities granted the specified role are configured to a parameter that is set during blueprint assignment. This example uses the `Contributor` built-in role, with a GUID of `b24988ac-6180-42a0-ab88-20f7382dd24c`.
- REST API URI
values:
}
```
-1. Add policy assignment at subscription. The **Request Body** defines the _kind_ of artifact, the
- properties that align to a policy or initiative definition, and configures the policy assignment
- to use the defined blueprint parameters to configure during blueprint assignment. This example
- uses the _Apply tag and its default value to resource groups_ built-in policy with a GUID of
- `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
+1. Add a policy assignment at the subscription. The `Request Body` defines the kind of artifact, the properties align to a policy or initiative definition, and the policy assignment is configured to use the defined blueprint parameters during blueprint assignment. This example
+uses the `Apply tag and its default value to resource groups` built-in policy, with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
- REST API URI
values:
}
```
-1. Add another policy assignment for Storage tag (reusing _storageAccountType_ parameter) at
- subscription. This additional policy assignment artifact demonstrates that a parameter defined on
- the blueprint is usable by more than one artifact. In the example, the **storageAccountType** is
- used to set a tag on the resource group. This value provides information about the storage
- account that is created in the next step. This example uses the _Apply tag and its default value
- to resource groups_ built-in policy with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
+1. Add another policy assignment for the storage tag (by reusing the `storageAccountType` parameter) at the subscription. This additional policy assignment artifact demonstrates that a parameter defined on the blueprint is usable by more than one artifact. In the example, you use `storageAccountType` to set a tag on the resource group. This value provides information about the storage account that you create in the next step. This example uses the `Apply tag and its default value to resource groups` built-in policy, with a GUID of `49c88fc8-6fd1-46fd-a676-f12d1d3a4c71`.
- REST API URI
values:
}
```
-1. Add template under resource group. The **Request Body** for an ARM template includes the normal
- JSON component of the template and defines the target resource group with
- **properties.resourceGroup**. The template also reuses the **storageAccountType**, **tagName**,
- and **tagValue** blueprint parameters by passing each to the template. The blueprint parameters
- are available to the template by defining **properties.parameters** and inside the template JSON
- that key-value pair is used to inject the value. The blueprint and template parameter names could
- be the same, but were made different to illustrate how each passes from the blueprint to the
- template artifact.
+1. Add a template under the resource group. The `Request Body` for an ARM template includes the normal JSON component of the template, and defines the target resource group with
+`properties.resourceGroup`. The template also reuses the `storageAccountType`, `tagName`, and `tagValue` blueprint parameters by passing each to the template. The blueprint parameters are available to the template by defining `properties.parameters`, and inside the template JSON, that key-value pair is used to inject the value. The blueprint and template parameter names can be the same, but are different here to illustrate how each passes from the blueprint to the
+template artifact.
- REST API URI
values:
}
```
-1. Add role assignment under resource group. Similar to the previous role assignment entry, the
- example below uses the definition identifier for the **Owner** role and provides it a different
- parameter from the blueprint. This example uses the _Owner_ built-in role with a GUID of
- `8e3af657-a8ff-443c-a75c-2fe8c4bcb635`.
+1. Add a role assignment under the resource group. Similar to the previous role assignment entry, the following example uses the definition identifier for the `Owner` role, and provides it a different parameter from the blueprint. This example uses the `Owner` built-in role, with a GUID of `8e3af657-a8ff-443c-a75c-2fe8c4bcb635`.
- REST API URI
values:
## Publish a blueprint
-Now that the artifacts have been added to the blueprint, it's time to publish it. Publishing makes
-it available to assign to a subscription.
+Now that you've added the artifacts to the blueprint, it's time to publish it. Publishing makes
+the blueprint available to assign to a subscription.
- REST API URI
it available to assign to a subscription.
PUT https://management.azure.com/providers/Microsoft.Management/managementGroups/{YourMG}/providers/Microsoft.Blueprint/blueprints/MyBlueprint/versions/{BlueprintVersion}?api-version=2018-11-01-preview ```
-The value for `{BlueprintVersion}` is a string of letters, numbers, and hyphens (no spaces or other
-special characters) with a max length of 20 characters. Use something unique and informational such
-as **v20180622-135541**.
+The value for `{BlueprintVersion}` is a string of letters, numbers, and hyphens (with no spaces or other special characters). The maximum length is 20 characters. Use something unique and informational, such as `v20180622-135541`.
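
A hedged sketch of calling the publish operation above from PowerShell, with `$authHeader` from the earlier authentication sketch and placeholder values:

```azurepowershell-interactive
# A sketch: publish the blueprint version. {YourMG} and the version string
# are placeholders.
$publishUri = 'https://management.azure.com/providers/Microsoft.Management/managementGroups/{YourMG}/providers/Microsoft.Blueprint/blueprints/MyBlueprint/versions/v20180622-135541?api-version=2018-11-01-preview'
Invoke-RestMethod -Uri $publishUri -Method Put -Headers $authHeader
```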
## Assign a blueprint
-Once a blueprint is published using REST API, it's assignable to a subscription. Assign the
-blueprint you created to one of the subscriptions under your management group hierarchy. If the
-blueprint is saved to a subscription, it can only be assigned to that subscription. The **Request
-Body** specifies the blueprint to assign, provides name and location to any resource groups in the
-blueprint definition, and provides all parameters defined on the blueprint and used by one or more
-attached artifacts.
+After you've published a blueprint by using REST API, it's assignable to a subscription. Assign the blueprint that you created to one of the subscriptions under your management group hierarchy. If the blueprint is saved to a subscription, it can only be assigned to that subscription. The `Request Body` specifies the blueprint to assign, and provides the name and location to any resource groups in the blueprint definition. `Request Body` also provides all parameters defined on the blueprint and used by one or more attached artifacts.
-In each REST API URI, there are variables that are used that you need to replace with your own
-values:
+In each REST API URI, replace the following variables with your own values:
-- `{tenantId}` - Replace with your tenant ID
-- `{YourMG}` - Replace with the ID of your management group
-- `{subscriptionId}` - Replace with your subscription ID
+- `{tenantId}` - Replace with your tenant ID.
+- `{YourMG}` - Replace with the ID of your management group.
+- `{subscriptionId}` - Replace with your subscription ID.
-1. Provide the Azure Blueprints service principal the **Owner** role on the target subscription. The
- AppId is static (`f71766dc-90d9-4b7d-bd9d-4499c4331c3f`), but the service principal ID varies by
- tenant. Details can be requested for your tenant using the following REST API. It uses
- [Azure Active Directory Graph API](/graph/migrate-azure-ad-graph-planning-checklist),
- which has different authorization.
+1. Provide the Azure Blueprints service principal the `Owner` role on the target subscription. The `AppId` is static (`f71766dc-90d9-4b7d-bd9d-4499c4331c3f`), but the service principal ID varies by tenant. Use the following REST API to request details for your tenant. It uses [Azure Active Directory Graph API](/graph/migrate-azure-ad-graph-planning-checklist), which has different authorization.
- REST API URI
values:
GET https://graph.windows.net/{tenantId}/servicePrincipals?api-version=1.6&$filter=appId eq 'f71766dc-90d9-4b7d-bd9d-4499c4331c3f' ```
-1. Run the blueprint deployment by assigning it to a subscription. As the **contributors** and
- **owners** parameters require an array of objectIds of the principals to be granted the role
- assignment, use
- [Azure Active Directory Graph API](/graph/migrate-azure-ad-graph-planning-checklist)
- for gathering the objectIds for use in the **Request Body** for your own users, groups, or
- service principals.
+1. Run the blueprint deployment by assigning it to a subscription. Because the `contributors` and `owners` parameters require an array of `objectIds` of the principals to be granted the role assignment, use [Azure Active Directory Graph API](/graph/migrate-azure-ad-graph-planning-checklist) for gathering the `objectIds` for use in the `Request Body` for your own users, groups, or service principals.
- REST API URI
values:
A blueprint assignment can also use a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
- In this case, the **identity** portion of the request body changes as follows. Replace
+ In this case, the `identity` portion of the request body changes as follows. Replace
`{yourRG}` and `{userIdentity}` with your resource group name and the name of your user-assigned managed identity, respectively.
values:
}, ```
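A sketch of that `identity` block, using the placeholders above:

```json
"identity": {
  "type": "UserAssigned",
  "userAssignedIdentities": {
    "/subscriptions/{subscriptionId}/resourceGroups/{yourRG}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{userIdentity}": {}
  }
}
```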
- The **user-assigned managed identity** can be in any subscription and resource group the user
- assigning the blueprint has permissions to.
+ The user-assigned managed identity can be in any subscription and resource group to which the user assigning the blueprint has permissions.
> [!IMPORTANT]
- > Azure Blueprints doesn't manage the user-assigned managed identity. Users are responsible for
- > assigning sufficient roles and permissions or the blueprint assignment will fail.
+ > Azure Blueprints doesn't manage the user-assigned managed identity. Users are responsible for assigning sufficient roles and permissions, or the blueprint assignment will fail.
## Clean up resources ### Unassign a blueprint
-You can remove a blueprint from a subscription. Removal is often done when the artifact resources
-are no longer needed. When a blueprint is removed, the artifacts assigned as part of that blueprint
-are left behind. To remove a blueprint assignment, use the following REST API operation:
+You can remove a blueprint from a subscription. Removal is often done when the artifact resources are no longer needed. When a blueprint is removed, the artifacts assigned as part of that blueprint are left behind. To remove a blueprint assignment, use the following REST API operation:
- REST API URI
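A sketch of the unassignment call, assuming the assignment is named `assignMyBlueprint`:

```http
DELETE https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Blueprint/blueprintAssignments/assignMyBlueprint?api-version=2018-11-01-preview
```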
To remove the blueprint itself, use the following REST API operation:
## Next steps
-In this quickstart, you've created, assigned, and removed a blueprint with REST API. To learn more
+In this quickstart, you created, assigned, and removed a blueprint with REST API. To learn more
about Azure Blueprints, continue to the blueprint lifecycle article. > [!div class="nextstepaction"]
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/ism-protected/control-mapping.md
to support audit requirements** built-in policy initiative.
> compliance in Azure Policy is only a partial view of your overall compliance status. The > associations between controls and Azure Policy definitions for this compliance blueprint sample > may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/control-mapping.md).
+> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/ism-protected/control-mapping.md).
## Location Constraints
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/iso27001-ase-sql-workload/control-mapping.md
specific VM Extensions to support audit requirements** built-in policy initiativ
> compliance in Azure Policy is only a partial view of your overall compliance status. The > associations between controls and Azure Policy definitions for this compliance blueprint sample > may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/control-mapping.md).
+> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/iso27001-ase-sql-workload/control-mapping.md).
## A.6.1.2 Segregation of duties
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/iso27001-shared/control-mapping.md
requirements** built-in policy initiative.
> compliance in Azure Policy is only a partial view of your overall compliance status. The > associations between controls and Azure Policy definitions for this compliance blueprint sample > may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-shared/control-mapping.md).
+> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/iso27001-shared/control-mapping.md).
## A.6.1.2 Segregation of duties
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/media/control-mapping.md
directly to a specific control mapping. Many of the mapped controls are implemen
> compliance in Azure Policy is only a partial view of your overall compliance status. The > associations between controls and Azure Policy definitions for this compliance blueprint sample > may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/medi).
+> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/medi).
## Access Control
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md
v3.2.1:2018** built-in policy initiative.
> compliance in Azure Policy is only a partial view of your overall compliance status. The > associations between controls and Azure Policy definitions for this compliance blueprint sample > may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md).
+> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md).
## 1.3.2 and 1.3.4 Boundary Protection
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/swift-2020/control-mapping.md
audit requirements** built-in policy initiative.
> compliance in Azure Policy is only a partial view of your overall compliance status. The > associations between controls and Azure Policy definitions for this compliance blueprint sample > may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/control-mapping.md).
+> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/governance/blueprints/samples/swift-2020/control-mapping.md).
## 1.2 and 5.1 Account Management
governance Guest Configuration Desired State Configuration Extension Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/guest-configuration-desired-state-configuration-extension-migration.md
before you can create a guest configuration package.
#### Update deployment templates If your deployment templates include the DSC extension
-(see [examples](/virtual-machines/extensions/dsc-template.md)),
+(see [examples](/azure/virtual-machines/extensions/dsc-template)),
there are two changes required. First, replace the DSC extension with the
-[extension for the guest configuration feature](/virtual-machines/extensions/guest-configuration.md).
+[extension for the guest configuration feature](/azure/virtual-machines/extensions/guest-configuration).
Then, add a [guest configuration assignment](../concepts/guest-configuration-assignments.md)
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
IoT Central is a ready-made environment for IoT solution development. It's an ap
This article provides an overview of the key elements in an IoT Central solution architecture. Key capabilities in an IoT Central application include:
iot-central Quick Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-export-data.md
Title: Quickstart - Export data from Azure IoT Central
description: Quickstart - Learn how to use the data export feature in IoT Central to integrate with other cloud services. Previously updated : 12/28/2021 Last updated : 02/18/2022
In this quickstart, you:
## Prerequisites - Before you begin, you should complete the first quickstart [Create an Azure IoT Central application](./quick-deploy-iot-central.md). The second quickstart, [Configure rules and actions for your device](quick-configure-rules.md), is optional.
+- You need the IoT Central application *URL prefix* that you chose in the first quickstart [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
[!INCLUDE [azure-cli-prepare-your-environment-no-header](../../../includes/azure-cli-prepare-your-environment-no-header.md)]
In this quickstart, you:
Before you can export data from your IoT Central application, you need an Azure Data Explorer cluster and database. In this quickstart, you use the bash environment in the [Azure Cloud Shell](https://shell.azure.com) to create and configure them.
-Run the following script in the Azure Cloud Shell. Replace the `clustername` value with a unique name for your cluster before you run the script. The cluster name can contain only lowercase letters and numbers:
+Run the following script in the Azure Cloud Shell. Replace the `clustername` value with a unique name for your cluster before you run the script. The cluster name can contain only lowercase letters and numbers. Replace the `centralurlprefix` value with the URL prefix you chose in the first quickstart:
> [!IMPORTANT] > The script can take 20 to 30 minutes to run.
Run the following script in the Azure Cloud Shell. Replace the `clustername` val
# The cluster name can contain only lowercase letters and numbers. # It must contain from 4 to 22 characters. clustername="<A unique name for your cluster>"+
+centralurlprefix="<The URL prefix of your IoT Central application>"
+ databasename="phonedata" location="eastus" resourcegroup="IoTCentralExportData"
az kusto database create --cluster-name $clustername \
--read-write-database location=$location soft-delete-period=P365D hot-cache-period=P31D \ --resource-group $resourcegroup
-# Create a service principal to use when authenticating from IoT Central
-SP_JSON=$(az ad sp create-for-rbac --skip-assignment --name $clustername)
+# Create and assign a managed identity to use
+# when authenticating from IoT Central.
+# This assumes your IoT Central was created in the default
+# IOTC resource group.
+MI_JSON=$(az iot central app identity assign --name $centralurlprefix \
+ --resource-group IOTC --system-assigned)
+## Assign the managed identity permissions to use the database.
az kusto database-principal-assignment create --cluster-name $clustername \ --database-name $databasename \
- --principal-id $(jq -r .appId <<< $SP_JSON) \
- --principal-assignment-name $clustername \
+ --principal-id $(jq -r .principalId <<< $MI_JSON) \
+ --principal-assignment-name $centralurlprefix \
--resource-group $resourcegroup \ --principal-type App \
+ --tenant-id $(jq -r .tenantId <<< $MI_JSON) \
--role Admin
-echo "Azure Data Explorer URL: $(az kusto cluster show --name $clustername --resource-group $resourcegroup --query uri -o tsv)"
-echo "Client ID: $(jq -r .appId <<< $SP_JSON)"
-echo "Tenant ID: $(jq -r .tenant <<< $SP_JSON)"
-echo "Client secret: $(jq -r .password <<< $SP_JSON)"
+echo "Azure Data Explorer URL: $(az kusto cluster show --name $clustername --resource-group $resourcegroup --query uri -o tsv)"
```
-Make a note of the **Azure Data Explorer URL**, **Client ID**, **Tenant ID**, and **Client secret**. You use these values later in the quickstart.
+Make a note of the **Azure Data Explorer URL**. You use this value later in the quickstart.
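If you lose the value, you can retrieve it again with the same query the script uses (the resource group name matches the script's `resourcegroup` value):

```azurecli
az kusto cluster show --name $clustername \
    --resource-group IoTCentralExportData --query uri -o tsv
```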
## Configure the database
To configure the data export destination from IoT Central:
1. In **Cluster URL**, enter the **Azure Data Explorer URL** you made a note of previously. 1. In **Database name**, enter *phonedata*. 1. In **Table name**, enter *acceleration*.
-1. In **Client ID**, enter the **Client ID** you made a note of previously.
-1. In **Tenant ID**, enter the **Tenant ID** you made a note of previously.
-1. In **Client secret**, enter the **Client secret** you made a note of previously.
+1. In **Authorization**, select **System-assigned managed identity**.
1. Select **Save**. To configure the data export:
iot-develop Quickstart Devkit Stm B L4s5i https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i.md
ms.devlang: c
Last updated 06/02/2021
-zone_pivot_groups: iot-develop-stm-toolset
+zone_pivot_groups: iot-develop-stm32-toolset
# Owner: timlt
-# - id: iot-develop-stm-toolset
-# Title: IoT Devices
-# prompt: Choose a build environment
+#- id: iot-develop-stm32-toolset
+# Title: IoT Devices
+# prompt: Choose a build environment
# pivots: # - id: iot-toolset-cmake # Title: CMake # - id: iot-toolset-iar-ewarm # Title: IAR EWARM
+# - id: iot-toolset-stm32cube
+# Title: STM32Cube IDE
zone_pivot_groups: iot-develop-stm-toolset
:::zone pivot="iot-toolset-cmake" [![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/STMicroelectronics/) :::zone-end [![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/samples/) :::zone-end
To connect the STM DevKit to Azure, you'll modify a configuration file for Wi-Fi
1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You'll refer to these items in the next steps. All of them are highlighted in the following picture:
- ::: image type="content" source="media/quickstart-devkit-stm-b-l4s5i/stm-b-l4s5i.png" alt-text="Locate key components on the STM DevKit board":::
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/stm-b-l4s5i.png" alt-text="Locate key components on the STM DevKit board":::
1. Connect the Micro USB cable to the **USB STLINK** port on the STM DevKit, and then connect it to your computer.
Select the **About** tab from the device page.
:::zone-end
+## Prerequisites
+
+* A PC running Windows 10
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Hardware
+
+ * The [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html) (STM DevKit)
+ * Wi-Fi 2.4 GHz
+ * USB 2.0 A male to Micro USB male cable
+
+## Download the STM32Cube IDE
+
+You can download a free version of the STM32Cube IDE, but you'll need to create an account. Follow the instructions on the ST website. The STM32Cube IDE can be downloaded from this website:
+https://www.st.com/en/development-tools/stm32cubeide.html
+
+The sample distribution zip file contains the following sub-folders that you will use later:
+
+|Folder|Contents|
+|-|--|
+|`sample_azure_iot_embedded_sdk` |{*Sample project to connect to Azure IoT Hub using Azure IoT Middleware for Azure RTOS*}|
+|`sample_azure_iot_embedded_sdk_pnp` |{*Sample project to connect to Azure IoT Hub using Azure IoT Middleware for Azure RTOS via IoT Plug and Play*}|
+
+Download the STMicroelectronics B-L4S5I-IOT01A IAR sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+++
+## Prepare the device
+
+To connect the device to Azure, you'll modify a configuration file for Azure IoT settings and STM32Cube IDE settings for Wi-Fi, and then build and flash the image to the device.
+
+### Add configuration
+
+1. Launch STM32CubeIDE, and select ***File > Open Projects from File System***. Open the **stm32cubeide** folder from inside the extracted zip file, and then select ***Finish*** to open the projects.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/import-projects.png" alt-text="Import projects from distribution Zip file":::
+
+1. Select the sample project that you want to build and run. For example, ***sample_azure_iot_embedded_sdk_pnp***.
+
+1. Expand the ***command_hardware_code*** folder and open ***board_setup.c*** to configure your Wi-Fi values.
+
+ |Symbol name|Value|
+ |--|--|
+ |`WIFI_SSID` |{*Use your Wi-Fi SSID*}|
+ |`WIFI_PASSWORD` |{*Use your Wi-Fi password*}|
+
+1. Expand the sample folder to open **sample_config.h** to set the Azure IoT device information constants to the values that you saved after you created Azure resources.
+
+ |Constant name|Value|
+ |-|--|
+ |`ENDPOINT` |{*Use this value: "global.azure-devices-provisioning.net"*}|
+ |`REGISTRATION_ID` |{*Use your Device ID value*}|
+ |`ID_SCOPE` |{*Use your ID scope value*}|
+ |`DEVICE_SYMMETRIC_KEY` |{*Use your Primary key value*}|
+
+ > [!NOTE]
+ > The `ENDPOINT`, `REGISTRATION_ID`, `ID_SCOPE`, and `DEVICE_SYMMETRIC_KEY` values are set in a `#ifndef ENABLE_DPS_SAMPLE` statement. Make sure you set the values in the `#else` branch, which is used when the `ENABLE_DPS_SAMPLE` value is defined. A sketch of this layout follows these steps.
+
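A minimal sketch of how that `#ifndef`/`#else` layout in `sample_config.h` typically looks; the exact macros and their order can differ between sample versions, so treat the placeholder values as assumptions:

```c
/* Sketch of the DPS-related section of sample_config.h (layout may vary). */
#ifndef ENABLE_DPS_SAMPLE
#define HOST_NAME     "<your IoT hub host name>"   /* used without DPS */
#define DEVICE_ID     "<your device ID>"
#else
#define ENDPOINT        "global.azure-devices-provisioning.net"
#define ID_SCOPE        "<your ID scope>"
#define REGISTRATION_ID "<your device ID>"
#endif

#define DEVICE_SYMMETRIC_KEY "<your primary key>"
```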
+### Build the project
+
+In STM32CubeIDE, select ***Project > Build All*** to build the sample project and its dependent libraries. You'll see the sample project compile and link.
+
+### Download and run the project
+
+1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You will refer to these items in the next steps. All of them are highlighted in the following picture:
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/stm-b-l4s5i.png" alt-text="Locate key components on the STM DevKit board":::
+
+1. Connect the Micro USB cable to the **USB STLINK** port on the STM DevKit, and then connect it to your computer.
+
+1. In STM32CubeIDE, select ***Run > Debug (F11)*** or ***Debug*** on the toolbar to download the program and run it, and then select Resume. You might need to upgrade the ST-Link firmware for debugging to work. Select ***Help > ST-Link Upgrade*** and follow the instructions.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/stlink-upgrade.png" alt-text="ST-Link upgrade instructions":::
+
+1. Verify the serial port in your OS's device manager. It should show up as a COM port.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/verify-com-port.png" alt-text="Verify the serial port":::
+
+1. Open your favorite serial terminal program, such as Termite, and connect to the COM port discovered above. Configure the following values for the serial port:
+    - Baud rate: ***115200***
+    - Data bits: ***8***
+    - Stop bits: ***1***
+
+1. As the project runs, the demo displays status information to the terminal output window. The demo also publishes the message to IoT Hub every five seconds. Check the terminal output to verify that messages have been successfully sent to the Azure IoT hub.
+
+ > [!NOTE]
+ > The terminal output content varies depending on which sample you choose to build and run.
+
+### Confirm device connection details
+
+In the terminal window, you should see output like the following, which verifies that the device is initialized and connected to Azure IoT.
+
+```output
+STM32L4XX Lib:
+> CMSIS Device Version: 1.7.0.0.
+> HAL Driver Version: 1.12.0.0.
+> BSP Driver Version: 1.0.0.0.
+ES-WIFI Firmware:
+> Product Name: Inventek eS-WiFi
+> Product ID: ISM43362-M3G-L44-SPI
+> Firmware Version: C3.5.2.5.STM
+> API Version: v3.5.2
+ES-WIFI MAC Address: C4:7F:51:7:D7:73
+wifi connect try 1 times
+ES-WIFI Connected.
+> ES-WIFI IP Address: 10.0.0.204
+> ES-WIFI Gateway Address: 10.0.0.1
+> ES-WIFI DNS1 Address: 75.75.75.75
+> ES-WIFI DNS2 Address: 75.75.76.76
+IP address: 10.0.0.204
+Mask: 255.255.255.0
+Gateway: 10.0.0.1
+DNS Server address: 75.75.75.75
+SNTP Time Sync...0.pool.ntp.org
+SNTP Time Sync...1.pool.ntp.org
+SNTP Time Sync successfully.
+[INFO] Azure IoT Security Module has been enabled, status=0
+Start Provisioning Client...
+Registered Device Successfully.
+IoTHub Host Name: iotc-ad97cfe1-91b4-4476-bee8-dcdb0aa2cc0a.azure-devices.net; Device ID: 51pf4yld0g.
+Connected to IoTHub.
+Sent properties request.
+Telemetry message send: {"temperature":22}.
+[INFO] Azure IoT Security Module message is empty
+Received all properties
+Telemetry message send: {"temperature":22}.
+Telemetry message send: {"temperature":22}.
+Telemetry message send: {"temperature":22}.
+```
+
+Keep the terminal open to monitor device output in the following steps.
+
+## Verify the device status
+
+To view the device status in IoT Central portal:
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Confirm that the **Device status** is updated to **Provisioned**.
+1. Confirm that the **Device template** is updated to **Thermostat**.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/iot-central-device-view-status-iar.png" alt-text="Screenshot of device status in IoT Central":::
+
+## View telemetry
+
+With IoT Central, you can view the flow of telemetry from your device to the cloud.
+
+To view telemetry in IoT Central portal:
+
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Select the device from the device list.
+1. View the telemetry as the device sends messages to the cloud in the **Overview** tab.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/iot-central-device-telemetry-iar.png" alt-text="Screenshot of device telemetry in IoT Central":::
+
+ > [!NOTE]
+ > You can also monitor telemetry from the device by using the Termite app.
++
+## Call a direct method on the device
+
+You can also use IoT Central to call a direct method that you have implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout.
+
+To call a method in IoT Central portal:
+
+1. Select the **Command** tab from the device page.
+1. In the **Since** field, use the date picker and time selectors to set a time, then select **Run**.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/iot-central-invoke-method-iar.png" alt-text="Screenshot of calling a direct method on a device in IoT Central":::
+
+1. You can see the command invocation in the terminal. In this case, because the sample thermostat application displays a simulated temperature value, there won't be minimum or maximum values during the time range.
+
+## View device information
+
+You can view the device information from IoT Central.
+
+Select the **About** tab from the device page.
+++ :::zone pivot="iot-toolset-cmake" ## Verify the device status
For debugging the application, see [Debugging with Visual Studio Code](https://g
:::zone pivot="iot-toolset-iar-ewarm" For help with debugging the application, see the selections under **Help** in **IAR EW for ARM**. :::zone-end
+For help with debugging the application, see the selections under **Help**.
## Clean up resources
iot-hub-device-update Create Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update.md
Once you have your update files, create an import manifest to describe the updat
2. Navigate to `Tools/AduCmdlets` in your local clone from PowerShell.
-3. Run the following commands after replacing the following sample parameter values with your own: **Provider, Name, Version, Properties, Handler, Installed Criteria, Files**. See [Import schema and API information](import-schema.md) for details on what values you can use. In particular, be aware that the same exact set of compatibility properties cannot be used with more than one Provider and Name combination.
+3. Run the following commands after replacing the following sample parameter values with your own: **Provider, Name, Version, Properties, Handler, Installed Criteria, Files**. See [Import schema and API information](import-schema.md) for details on what values you can use. _In particular, be aware that the same exact set of compatibility properties cannot be used with more than one Provider and Name combination._
```powershell Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process
lab-services Class Type Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-sql-server.md
To use a shared resource in Lab Services, you first need to create the virtual n
>[!WARNING] >Shared resources for a lab should be set up before the lab is created. If the vnet is not [peered to the lab account](how-to-connect-peer-virtual-network.md) *before* the lab is created, the lab will not have access to the shared resource.
-Now that the networking side of things is handled, lets create a SQL Server Database. We are going to create a [single database](../azure-sql/database/single-database-create-quickstart.md?tabs=azure-portal) as it is the quickest deployment option for Azure SQL Database. For other deployment options, create an [elastic pool](../azure-sql/database/elastic-pool-overview.md#creating-a-new-sql-database-elastic-pool-using-the-azure-portal), [managed instance](../azure-sql/managed-instance/instance-create-quickstart.md), or [SQL virtual machine](../azure-sql/virtual-machines/windows/sql-vm-create-portal-quickstart.md).
+Now that the networking side of things is handled, let's create a SQL Server database. We'll create a [single database](../azure-sql/database/single-database-create-quickstart.md?tabs=azure-portal) because it's the quickest deployment option for Azure SQL Database. For other deployment options, create an [elastic pool](../azure-sql/database/elastic-pool-overview.md#create-a-new-sql-database-elastic-pool-by-using-the-azure-portal), [managed instance](../azure-sql/managed-instance/instance-create-quickstart.md), or [SQL virtual machine](../azure-sql/virtual-machines/windows/sql-vm-create-portal-quickstart.md).
1. From the Azure portal menu, choose **Create new resource**. 2. Choose **SQL Database** and click the **Create** button.
lighthouse Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/enterprise.md
Title: Azure Lighthouse in enterprise scenarios description: The capabilities of Azure Lighthouse can be used to simplify cross-tenant management within an enterprise which uses multiple Azure AD tenants. Previously updated : 10/21/2021 Last updated : 02/18/2022
For most organizations, management is easier with a single Azure AD tenant. Havi
Some organizations may need to use multiple Azure AD tenants. This might be a temporary situation, as when acquisitions have taken place and a long-term tenant consolidation strategy hasn't been defined yet. Other times, organizations may need to maintain multiple tenants on an ongoing basis due to wholly independent subsidiaries, geographical or legal requirements, or other considerations.
-In cases where a multi-tenant architecture is required, Azure Lighthouse can help centralize and streamline management operations. By using Azure Lighthouse, users in one managing tenant can perform [cross-tenant management functions](cross-tenant-management-experience.md) in a centralized, scalable manner.
+In cases where a [multitenant architecture](/azure/architecture/guide/multitenant/overview) is required, Azure Lighthouse can help centralize and streamline management operations. By using Azure Lighthouse, users in one managing tenant can perform [cross-tenant management functions](cross-tenant-management-experience.md) in a centralized, scalable manner.
## Tenant management architecture
Continuing with that example, Tenant A users with the appropriate permissions ca
## Next steps
+- Explore options for [resource organization in multitenant architectures](/azure/architecture/guide/multitenant/approaches/resource-organization).
- Learn about [cross-tenant management experiences](cross-tenant-management-experience.md). - Learn more about [how Azure Lighthouse works](architecture.md).
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
Previously updated : 12/27/2021 Last updated : 2/17/2022 # Backend pool management+ The backend pool is a critical component of the load balancer. The backend pool defines the group of resources that will serve traffic for a given load-balancing rule. There are two ways of configuring a backend pool:+ * Network Interface Card (NIC)
-* IP address
-When preallocating your backend pool with an IP address range which you plan to later create virtual machines and virtual machine scale sets, configure your backend pool by IP address and VNET ID combination.
+* IP address
+To preallocate a backend pool with an IP address range that later will contain virtual machines and virtual machine scale sets, configure the pool by IP address and virtual network ID.
This article focuses on configuration of backend pools by IP addresses. ## Configure backend pool by IP address and virtual network+ In scenarios with pre-populated backend pools, use IP and virtual network.
-All backend pool management is done directly on the backend pool object as highlighted in the examples below.
+You configure backend pool management on the backend pool object as highlighted in the following examples.
### PowerShell
-Create new backend pool:
+
+Create a new backend pool:
```azurepowershell-interactive
-$resourceGroup = "myResourceGroup"
-$loadBalancerName = "myLoadBalancer"
-$backendPoolName = "myBackendPool"
-$vnetName = "myVnet"
-$location = "eastus"
-$nicName = "myNic"
-
-$backendPool = New-AzLoadBalancerBackendAddressPool -ResourceGroupName $resourceGroup -LoadBalancerName $loadBalancerName -Name $backendPoolName  
+$be = @{
+ ResourceGroupName = 'myResourceGroup'
+ LoadBalancerName = 'myLoadBalancer'
+ Name = 'myBackendPool'
+}
+$backendPool = New-AzLoadBalancerBackendAddressPool @be
+ ``` Update backend pool with a new IP from existing virtual network:   ```azurepowershell-interactive
-$virtualNetwork = 
-Get-AzVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup 
- 
-$ip1 = New-AzLoadBalancerBackendAddressConfig -IpAddress "10.0.0.5" -Name "TestVNetRef" -VirtualNetwork $virtualNetwork  
+$vnet = @{
+ Name = 'myVnet'
+ ResourceGroupName = 'myResourceGroup'
+}
+$virtualNetwork = Get-AzVirtualNetwork @vnet
+
+$add1 = @{
+ IpAddress = '10.0.0.5'
+ Name = 'TestVNetRef'
+ VirtualNetworkId = $virtualNetwork.Id
+}
+$ip1 = New-AzLoadBalancerBackendAddressConfig @add1
  $backendPool.LoadBalancerBackendAddresses.Add($ip1)  Set-AzLoadBalancerBackendAddressPool -InputObject $backendPool+ ``` Retrieve the backend pool information for the load balancer to confirm that the backend addresses are added to the backend pool: ```azurepowershell-interactive
-Get-AzLoadBalancerBackendAddressPool -ResourceGroupName $resourceGroup -LoadBalancerName $loadBalancerName -Name $backendPoolName 
+$pool = @{
+ ResourceGroupName = 'myResourceGroup'
+ LoadBalancerName = 'myLoadBalancer'
+ Name = 'myBackendPool'
+}
+Get-AzLoadBalancerBackendAddressPool @pool
+ ``` Create a network interface and add it to the backend pool. Set the IP address to one of the backend addresses: ```azurepowershell-interactive
-$nic =
-New-AzNetworkInterface -ResourceGroupName $resourceGroup -Location $location -Name $nicName -PrivateIpAddress 10.0.0.4 -Subnet $virtualNetwork.Subnets[0]
+$net = @{
+ Name = 'myNic'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'eastus'
+ PrivateIpAddress = '10.0.0.4'
+ Subnet = $virtualNetwork.Subnets[0]
+}
+$nic = New-AzNetworkInterface @net
+ ``` Create a VM and attach the NIC with an IP address in the backend pool:+ ```azurepowershell-interactive # Create a username and password for the virtual machine $cred = Get-Credential # Create a virtual machine configuration
-$vmname = "myVM1"
-$vmsize = "Standard_DS1_v2"
-$pubname = "MicrosoftWindowsServer"
-$nicname = "myNic"
-$off = "WindowsServer"
-$sku = "2019-Datacenter"
-$resourceGroup = "myResourceGroup"
-$location = "eastus"
-
-$nic =
-Get-AzNetworkInterface -Name $nicname -ResourceGroupName $resourceGroup
+$net = @{
+ Name = 'myNic'
+ ResourceGroupName = 'myResourceGroup'
+}
+$nic = Get-AzNetworkInterface @net
+
+$vmc = @{
+ VMName = 'myVM1'
+ VMSize = 'Standard_DS1_v2'
+}
+
+$vmos = @{
+ ComputerName = 'myVM1'
+ Credential = $cred
+}
+
+$vmi = @{
+ PublisherName = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Skus = '2019-Datacenter'
+ Version = 'latest'
+}
+$vmConfig =
+New-AzVMConfig @vmc | Set-AzVMOperatingSystem -Windows @vmos | Set-AzVMSourceImage @vmi | Add-AzVMNetworkInterface -Id $nic.Id
-$vmConfig =
-New-AzVMConfig -VMName $vmname -VMSize $vmsize | Set-AzVMOperatingSystem -Windows -ComputerName $vmname -Credential $cred | Set-AzVMSourceImage -PublisherName $pubname -Offer $off -Skus $sku -Version latest | Add-AzVMNetworkInterface -Id $nic.Id
# Create a virtual machine using the configuration
-$vm1 = New-AzVM -ResourceGroupName $resourceGroup -Zone 1 -Location $location -VM $vmConfig
+$vm = @{
+ ResourceGroupName = 'myResourceGroup'
+ Zone = '1'
+ Location = 'eastus'
+ VM = $vmConfig
+
+}
+$vm1 = New-AzVM @vm
+ ``` ### CLI+ Using CLI you can either populate the backend pool via command-line parameters or through a JSON configuration file. Create and populate the backend pool via the command-line parameters:
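A sketch of that command, assuming the same resource names as the PowerShell examples above:

```azurecli
az network lb address-pool create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myBackendPool \
    --vnet myVnet \
    --backend-address name=TestVNetRef ip-address=10.0.0.5
```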
az vm create \
* IP based backends can only be used for Standard Load Balancers * Limit of 100 IP addresses in the backend pool for IP based LBs * The backend resources must be in the same virtual network as the load balancer for IP based LBs
- * A Load Balancer with IP-based Backend Pool cannot function as a Private Link service
- * ACI containers are not currently supported by IP based LBs
- * Load Balancers or services such as Application Gateway cannot be placed in the backend pool of the load balancer
- * Inbound NAT Rules cannot be specified by IP address
- * You can configure IP-based and NIC-based backend pools for the same load balancer however, you cannot create a single backend pool that mixes backed addresses targeted by NIC and IP addresses within the same pool.
+ * A load balancer with an IP based backend pool can't function as a Private Link service
+ * ACI containers aren't currently supported by IP based LBs
+ * Load balancers or services such as Application Gateway can't be placed in the backend pool of the load balancer
+ * Inbound NAT rules can't be specified by IP address
+ * You can configure IP based and NIC based backend pools for the same load balancer. You can't create a single backend pool that mixes backend addresses targeted by NIC and IP addresses within the same pool.
>[!Important] > When a backend pool is configured by IP address, it will behave as a Basic Load Balancer with default outbound enabled. For secure by default configuration and applications with demanding outbound needs, configure the backend pool by NIC. ## Next steps+ In this article, you learned about Azure Load Balancer backend pool management and how to configure a backend pool by IP address and virtual network. Learn more about [Azure Load Balancer](load-balancer-overview.md).
-Review the [REST API](/rest/api/load-balancer/loadbalancerbackendaddresspools/createorupdate) for IP based backendpool management.
+Review the [REST API](/rest/api/load-balancer/loadbalancerbackendaddresspools/createorupdate) for IP based backend pool management.
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
documentationcenter: na -+ na Last updated 12/2/2021
When Floating IP is enabled, Azure changes the IP address mapping to the Fronten
Without Floating IP, Azure exposes the VM instances' IP. Enabling Floating IP changes the IP address mapping to the Frontend IP of the load Balancer to allow for additional flexibility. Learn more [here](load-balancer-multivip-overview.md).
-Floating IP can be configured on a Load Balancer rule via the Azure Portal, REST API, CLI, PowerShell, or other client. In addition to the rule configuration, you must also configure your virtual machine's Guest OS in order to leverage Floating IP.
+Floating IP can be configured on a Load Balancer rule via the Azure portal, REST API, CLI, PowerShell, or other client. In addition to the rule configuration, you must also configure your virtual machine's Guest OS to use Floating IP.
## Floating IP Guest OS configuration For each VM in the backend pool, run the following commands at a Windows Command Prompt.
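A sketch of the typical weak-host configuration, assuming an interface named `Ethernet` (your interface name will differ, and the full guidance typically also adds a loopback interface configured with the frontend IP):

```cmd
netsh interface ipv4 set interface "Ethernet" weakhostreceive=enabled
netsh interface ipv4 set interface "Ethernet" weakhostsend=enabled
```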
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
Title: Reference guide for expression functions
-description: Reference guide to expression functions for Azure Logic Apps and Power Automate
+description: Reference guide to workflow expression functions for Azure Logic Apps and Power Automate.
ms.suite: integration-+ Previously updated : 01/27/2022 Last updated : 02/18/2022
-# Reference guide to expression functions for Azure Logic Apps and Power Automate
+# Reference guide to workflow expression functions in Azure Logic Apps and Power Automate
-For workflow definitions in [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Power Automate](/power-automate/getting-started), some [expressions](../logic-apps/logic-apps-workflow-definition-language.md#expressions) get their values from runtime actions that might not yet exist when your workflow starts running. To reference these values or process the values in these expressions, you can use *expression functions* provided by the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md).
+For workflow definitions in [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Power Automate](/power-automate/getting-started), some [expressions](logic-apps-workflow-definition-language.md#expressions) get their values from runtime actions that might not yet exist when your workflow starts running. To reference or process the values in these expressions, you can use *expression functions* provided by the [Workflow Definition Language](logic-apps-workflow-definition-language.md).
> [!NOTE] > This reference page applies to both Azure Logic Apps and Power Automate, but appears in the > Azure Logic Apps documentation. Although this page refers specifically to logic app workflows, > these functions work for both flows and logic app workflows. For more information about functions
-> and expressions in Power Automate, see [Use expressions in conditions](/power-automate/use-expressions-in-conditions).
+> and expressions in Power Automate, review [Use expressions in conditions](/power-automate/use-expressions-in-conditions).
For example, you can calculate values by using math functions, such as the [add()](../logic-apps/workflow-definition-language-functions-reference.md#add) function, when you want the sum from integers or floats. Here are other example tasks that you can perform with functions: | Task | Function syntax | Result | | - | | |
-| Return a string in lowercase format. | toLower('<*text*>') <p>For example: toLower('Hello') | "hello" |
+| Return a string in lowercase format. | toLower('<*text*>') <br><br>For example: toLower('Hello') | "hello" |
| Return a globally unique identifier (GUID). | guid() |"c2ecc88d-88c8-4096-912c-d6f2e2b138ce" | ||||
Here are some other general ways that you can use functions in expressions:
| Task | Function syntax in an expression | | - | -- | | Perform work with an item by passing that item to a function. | "\@<*functionName*>(<*item*>)" |
-| 1. Get the *parameterName*'s value by using the nested `parameters()` function. </br>2. Perform work with the result by passing that value to *functionName*. | "\@<*functionName*>(parameters('<*parameterName*>'))" |
-| 1. Get the result from the nested inner function *functionName*. </br>2. Pass the result to the outer function *functionName2*. | "\@<*functionName2*>(<*functionName*>(<*item*>))" |
-| 1. Get the result from *functionName*. </br>2. Given that the result is an object with property *propertyName*, get that property's value. | "\@<*functionName*>(<*item*>).<*propertyName*>" |
+| 1. Get the *parameterName*'s value by using the nested `parameters()` function. <br>2. Perform work with the result by passing that value to *functionName*. | "\@<*functionName*>(parameters('<*parameterName*>'))" |
+| 1. Get the result from the nested inner function *functionName*. <br>2. Pass the result to the outer function *functionName2*. | "\@<*functionName2*>(<*functionName*>(<*item*>))" |
+| 1. Get the result from *functionName*. <br>2. Given that the result is an object with property *propertyName*, get that property's value. | "\@<*functionName*>(<*item*>).<*propertyName*>" |
||| For example, the `concat()` function can take two or more string values as parameters. This function combines those strings into one string. You can either pass in string literals, for example, "Sophia" and "Owen" so that you get a combined string, "SophiaOwen":
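A sketch of such an expression inside an action input:

```
"customerName": "@concat('Sophia', 'Owen')"
```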
To work with strings, you can use these string functions and also some [collecti
| [indexOf](../logic-apps/workflow-definition-language-functions-reference.md#indexof) | Return the starting position for a substring. | | [lastIndexOf](../logic-apps/workflow-definition-language-functions-reference.md#lastindexof) | Return the starting position for the last occurrence of a substring. | | [length](../logic-apps/workflow-definition-language-functions-reference.md#length) | Return the number of items in a string or array. |
+| [nthIndexOf](../logic-apps/workflow-definition-language-functions-reference.md#nthIndexOf) | Return the starting position or index value where the *n*th occurrence of a substring appears in a string. |
| [replace](../logic-apps/workflow-definition-language-functions-reference.md#replace) | Replace a substring with the specified string, and return the updated string. |
+| [slice](../logic-apps/workflow-definition-language-functions-reference.md#slice) | Return a substring by specifying the starting and ending position or value. |
| [split](../logic-apps/workflow-definition-language-functions-reference.md#split) | Return an array that contains substrings, separated by commas, from a larger string based on a specified delimiter character in the original string. | | [startsWith](../logic-apps/workflow-definition-language-functions-reference.md#startswith) | Check whether a string starts with a specific substring. | | [substring](../logic-apps/workflow-definition-language-functions-reference.md#substring) | Return characters from a string, starting from the specified position. |
To change a value's type or format, you can use these conversion functions. For
## Implicit data type conversions
-Azure Logic Apps automatically or implicitly converts between some data types, so you don't have to manually perform these conversions. For example, if you use non-string values where strings are expected as inputs, Logic Apps automatically converts the non-string values into strings.
+Azure Logic Apps automatically or implicitly converts between some data types, so you don't have to manually perform these conversions. For example, if you use non-string values where strings are expected as inputs, Azure Logic Apps automatically converts the non-string values into strings.
For example, suppose a trigger returns a numerical value as output:
If you use this numerical output where string input is expected, such as a URL,
### Base64 encoding and decoding
-Logic Apps automatically or implicitly performs base64 encoding or decoding, so you don't have to manually perform these conversions by using the corresponding functions:
+Azure Logic Apps automatically or implicitly performs base64 encoding or decoding, so you don't have to manually perform these conversions by using the corresponding functions:
* `base64(<value>)` * `base64ToBinary(<value>)`
Logic Apps automatically or implicitly performs base64 encoding or decoding, so
* `decodeDataUri(<value>)` > [!NOTE]
-> If you manually add any of these functions while using the workflow designer, either directly to a trigger
+> If you manually add any of these functions while using the designer, either directly to a trigger
> or action or by using the expression editor, navigate away from the designer, and then return to the designer, > the function disappears from the designer, leaving behind only the parameter values. This behavior also happens > if you select a trigger or action that uses this function without editing the function's parameter values.
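Although the conversions are implicit, the corresponding functions remain available when you want to be explicit. A quick sketch:

```
base64('hello')
base64ToString('aGVsbG8=')
```

These return `"aGVsbG8="` and `"hello"`, respectively.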
For the full reference about each function, see the
| [formatDateTime](../logic-apps/workflow-definition-language-functions-reference.md#formatDateTime) | Return the date from a timestamp. | | [getFutureTime](../logic-apps/workflow-definition-language-functions-reference.md#getFutureTime) | Return the current timestamp plus the specified time units. See also [addToTime](../logic-apps/workflow-definition-language-functions-reference.md#addToTime). | | [getPastTime](../logic-apps/workflow-definition-language-functions-reference.md#getPastTime) | Return the current timestamp minus the specified time units. See also [subtractFromTime](../logic-apps/workflow-definition-language-functions-reference.md#subtractFromTime). |
+| [parseDateTime](../logic-apps/workflow-definition-language-functions-reference.md#parseDateTime) | Return the timestamp from a string that contains a timestamp. |
| [startOfDay](../logic-apps/workflow-definition-language-functions-reference.md#startOfDay) | Return the start of the day for a timestamp. | | [startOfHour](../logic-apps/workflow-definition-language-functions-reference.md#startOfHour) | Return the start of the hour for a timestamp. | | [startOfMonth](../logic-apps/workflow-definition-language-functions-reference.md#startOfMonth) | Return the start of the month for a timestamp. |
And returns this result:
### actionOutputs
-Return an action's output at runtime. and is shorthand for `actions('<actionName>').outputs`. See [actions()](#actions). The `actionOutputs()` function resolves to `outputs()` in the Logic App Designer, so consider using [outputs()](#outputs), rather than `actionOutputs()`. Although both functions work the same way, `outputs()` is preferred.
+Return an action's output at runtime. This function is shorthand for `actions('<actionName>').outputs`. See [actions()](#actions). The `actionOutputs()` function resolves to `outputs()` in the designer, so consider using [outputs()](#outputs), rather than `actionOutputs()`. Although both functions work the same way, `outputs()` is preferred.
``` actionOutputs('<actionName>')
addDays('<timestamp>', <days>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*days*> | Yes | Integer | The positive or negative number of days to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
addHours('<timestamp>', <hours>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*hours*> | Yes | Integer | The positive or negative number of hours to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
This example adds 10 hours to the specified timestamp:
addHours('2018-03-15T00:00:00Z', 10) ```
-And returns this result: `"2018-03-15T10:00:00.0000000Z"
+And returns this result: `"2018-03-15T10:00:00.0000000Z"`
*Example 2*
addMinutes('<timestamp>', <minutes>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*minutes*> | Yes | Integer | The positive or negative number of minutes to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
addSeconds('<timestamp>', <seconds>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*seconds*> | Yes | Integer | The positive or negative number of seconds to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
addToTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*interval*> | Yes | Integer | The number of specified time units to add | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
These examples show the different supported types of input for `bool()`:
### coalesce
-Return the first non-null value from one or more parameters.
-Empty strings, empty arrays, and empty objects are not null.
+Return the first non-null value from one or more parameters. Empty strings, empty arrays, and empty objects aren't null.
``` coalesce(<object_1>, <object_2>, ...)
coalesce(<object_1>, <object_2>, ...)
| Return value | Type | Description | | | - | -- |
-| <*first-non-null-item*> | Any | The first item or value that is not null. If all parameters are null, this function returns null. |
+| <*first-non-null-item*> | Any | The first item or value that isn't null. If all parameters are null, this function returns null. |
|||| *Example*
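A quick sketch of the behavior:

```
coalesce(null, 'hello', 'world')
coalesce(null, true, false)
```

These return `"hello"` and `true`, respectively, because each is the first non-null parameter.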
concat('<text1>', '<text2>', ...)
| Return value | Type | Description | | | - | -- |
-| <*text1text2...*> | String | The string created from the combined input strings. <p><p>**Note**: The length of the result must not exceed 104,857,600 characters. |
+| <*text1text2...*> | String | The string created from the combined input strings. <br><br>**Note**: The length of the result must not exceed 104,857,600 characters. |
|||| > [!NOTE]
convertFromUtc('<timestamp>', '<destinationTimeZone>', '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, please review: [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones). |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
convertTimeZone('<timestamp>', '<sourceTimeZone>', '<destinationTimeZone>', '<fo
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. | | <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
convertToUtc('<timestamp>', '<sourceTimeZone>', '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Windows Default Time Zones](/windows-hardware/manufacture/desktop/default-time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
div(<dividend>, <divisor>)
| Return value | Type | Description | | | - | -- |
-| <*quotient-result*> | Integer or Float | The result from dividing the first number by the second number. If either the dividend or divisor has Float type, the result has Float type. <p><p>**Note**: To convert the float result to an integer, try [creating and calling a function in Azure](../logic-apps/logic-apps-azure-functions.md) from your logic app. |
+| <*quotient-result*> | Integer or Float | The result from dividing the first number by the second number. If either the dividend or divisor has Float type, the result has Float type. <br><br>**Note**: To convert the float result to an integer, try [creating and calling a function in Azure](../logic-apps/logic-apps-azure-functions.md) from your logic app. |
|||| *Example 1*
And returns these results:
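As a further sketch of `div` with assumed operands, showing the integer-versus-float behavior described above:

```
div(10, 5)  // Returns 2.
div(11, 5)  // Returns 2 because both operands are integers.
div(5.5, 2) // Returns 2.75 because one operand is a float.
```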
Check whether a string ends with a specific substring. Return true when the substring is found, or return false when not found.
-This function is not case-sensitive.
+This function isn't case-sensitive.
``` endsWith('<text>', '<searchText>')
And returns this result: `10.333`
Return a timestamp in the specified format. ```
-formatDateTime('<timestamp>', '<format>'?)
+formatDateTime('<timestamp>', '<format>'?, '<locale>'?)
``` | Parameter | Required | Type | Description |
-| | -- | - | -- |
+|---|---|---|---|
| <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+| <*locale*> | No | String | The locale to use. <br><br>- If unspecified, the default locale `en-us` is used. <br><br>- If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |
||||| | Return value | Type | Description |
-| | - | -- |
-| <*reformatted-timestamp*> | String | The updated timestamp in the specified format |
+|---|---|---|
+| <*reformatted-timestamp*> | String | The updated timestamp in the specified format and locale, if specified. |
||||
-*Example*
-
-This example converts a timestamp to the specified format:
+*Examples*
```
-formatDateTime('03/15/2018 12:00:00', 'yyyy-MM-ddTHH:mm:ss')
-```
+formatDateTime('03/15/2018') // Returns '2018-03-15T00:00:00.0000000'.
+formatDateTime('03/15/2018 12:00:00', 'yyyy-MM-ddTHH:mm:ss') // Returns '2018-03-15T12:00:00'.
-And returns this result: `"2018-03-15T12:00:00"`
+formatDateTime('01/31/2016', 'dddd MMMM d') // Returns 'Sunday January 31'.
+formatDateTime('01/31/2016', 'dddd MMMM d', 'fr-fr') // Returns 'dimanche janvier 31'.
+formatDateTime('01/31/2016', 'dddd MMMM d', 'fr-FR') // Returns 'dimanche janvier 31'.
+formatDateTime('01/31/2016', 'dddd MMMM d', 'es-es') // Returns 'domingo enero 31'.
+```
<a name="formDataMultiValues"></a>
formatNumber(<number>, <format>, <locale>?)
| | -- | - | -- | | <*number*> | Yes | Integer or Double | The value that you want to format. | | <*format*> | Yes | String | A composite format string that specifies the format that you want to use. For the supported numeric format strings, see [Standard numeric format strings](/dotnet/standard/base-types/standard-numeric-format-strings), which are supported by `number.ToString(<format>, <locale>)`. |
-| <*locale*> | No | String | The locale to use as supported by `number.ToString(<format>, <locale>)`. If not specified, the default value is `en-us`. |
+| <*locale*> | No | String | The locale to use as supported by `number.ToString(<format>, <locale>)`. If not specified, the default value is `en-us`. If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |
||||| | Return value | Type | Description |
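For instance, a sketch with assumed values:

```
formatNumber(1234567890, '0,0.00', 'en-us') // Returns '1,234,567,890.00'.
formatNumber(17.35, 'C2') // Returns '$17.35' with the default en-us locale.
```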
getFutureTime(<interval>, <timeUnit>, <format>?)
| | -- | - | -- | | <*interval*> | Yes | Integer | The number of specified time units to add | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
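For instance, a sketch that assumes the current timestamp is 2018-03-01T00:00:00.0000000Z:

```
getFutureTime(5, 'Day')      // Returns '2018-03-06T00:00:00.0000000Z'.
getFutureTime(5, 'Day', 'D') // Returns 'Tuesday, March 6, 2018'.
```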
getPastTime(<interval>, <timeUnit>, <format>?)
| | -- | - | -- | | <*interval*> | Yes | Integer | The number of specified time units to subtract | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
if(equals(1, 1), 'yes', 'no')
### indexOf Return the starting position or index value for a substring.
-This function is not case-sensitive,
+This function isn't case-sensitive,
and indexes start with the number 0. ```
indexOf('<text>', '<searchText>')
| Return value | Type | Description | | | - | -- |
-| <*index-value*>| Integer | The starting position or index value for the specified substring. <p>If the string is not found, return the number -1. |
+| <*index-value*>| Integer | The starting position or index value for the specified substring. <br><br>If the substring isn't found, return the number -1. |
|||| *Example*
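For instance, a minimal sketch with assumed inputs, including one that shows the case-insensitive match:

```
indexOf('hello world', 'world') // Returns 6.
indexOf('hello world', 'World') // Returns 6 because the function isn't case-sensitive.
```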
json(xml('value'))
| Return value | Type | Description | | | - | -- |
-| <*JSON-result*> | JSON native type, object, or array | The JSON native type value, object, or array of objects from the input string or XML. <p><p>- If you pass in XML that has a single child element in the root element, the function returns a single JSON object for that child element. <p> - If you pass in XML that has multiple child elements in the root element, the function returns an array that contains JSON objects for those child elements. <p>- If the string is null, the function returns an empty object. |
+| <*JSON-result*> | JSON native type, object, or array | The JSON native type value, object, or array of objects from the input string or XML. <br><br>- If you pass in XML that has a single child element in the root element, the function returns a single JSON object for that child element. <br><br>- If you pass in XML that has multiple child elements in the root element, the function returns an array that contains JSON objects for those child elements. <br><br>- If the string is null, the function returns an empty object. |
|||| *Example 1*
join([<collection>], '<delimiter>')
| Return value | Type | Description | | | - | -- |
-| <*char1*><*delimiter*><*char2*><*delimiter*>... | String | The resulting string created from all the items in the specified array. <p><p>**Note**: The length of the result must not exceed 104,857,600 characters. |
+| <*char1*><*delimiter*><*char2*><*delimiter*>... | String | The resulting string created from all the items in the specified array. <br><br>**Note**: The length of the result must not exceed 104,857,600 characters. |
|||| *Example*
And returns these results:
### lastIndexOf
-Return the starting position or index value for the last occurrence of a substring. This function is not case-sensitive, and indexes start with the number 0.
+Return the starting position or index value for the last occurrence of a substring. This function isn't case-sensitive, and indexes start with the number 0.
-```json
+```
lastIndexOf('<text>', '<searchText>') ```
If the string or substring value is empty, the following behavior occurs:
This example finds the starting index value for the last occurrence of the substring `world` substring in the string `hello world hello world`. The returned result is `18`:
-```json
+```
lastIndexOf('hello world hello world', 'world') ``` This example is missing the substring parameter, and returns a value of `22` because the length of the input string (`23`) minus 1 is greater than 0.
-```json
+```
lastIndexOf('hello world hello world', '') ```
Check whether an expression is false.
Return true when the expression is false, or return false when true.
-```json
+```
not(<expression>) ```
not(<expression>)
These examples check whether the specified expressions are false:
-```json
+```
not(false) not(true) ```
And return these results:
These examples check whether the specified expressions are false:
-```json
+```
not(equals(1, 2)) not(equals(1, 1)) ```
And return these results:
* First example: The expression is false, so the function returns `true`. * Second example: The expression is true, so the function returns `false`.
+<a name="nthIndexOf"></a>
+
+### nthIndexOf
+
+Return the starting position or index value where the *n*th occurrence of a substring appears in a string.
+
+```
+nthIndexOf('<text>', '<searchText>', <occurrence>)
+```
+
+| Parameter | Required | Type | Description |
+|---|---|---|---|
+| <*text*> | Yes | String | The string that contains the substring to find |
+| <*searchText*> | Yes | String | The substring to find |
+| <*occurrence*> | Yes | Integer | A positive number that specifies the *n*th occurrence of the substring to find.|
+|||||
+
+| Return value | Type | Description |
+|---|---|---|
+| <*index-value*> | Integer | The starting position or index value for the *n*th occurrence of the specified substring. If the substring isn't found or fewer than *n* occurrences of the substring exist, return `-1`. |
+||||
+
+*Examples*
+
+```
+nthIndexOf('123456789123465789', '1', 1) // Returns `0`.
+nthIndexOf('123456789123465789', '1', 2) // Returns `9`.
+nthIndexOf('123456789123465789', '12', 2) // Returns `9`.
+nthIndexOf('123456789123465789', '6', 4) // Returns `-1`.
+```
+ ## O <a name="or"></a> ### or
-Check whether at least one expression is true.
-Return true when at least one expression is true,
-or return false when all are false.
+Check whether at least one expression is true. Return true when at least one expression is true, or return false when all are false.
``` or(<expression1>, <expression2>, ...)
or(<expression1>, <expression2>, ...)
These examples check whether at least one expression is true:
-```json
+```
or(true, false) or(false, false) ```
And return these results:
### outputs
-Return an action's outputs at runtime. Use this function, rather than `actionOutputs()`, which resolves to `outputs()` in the Logic App Designer. Although both functions work the same way, `outputs()` is preferred.
+Return an action's outputs at runtime. Use this function, rather than `actionOutputs()`, which resolves to `outputs()` in the designer. Although both functions work the same way, `outputs()` is preferred.
``` outputs('<actionName>')
parameters('fullName')
And returns this result: `"Sophia Owen"`
+<a name="parseDateTime"></a>
+
+### parseDateTime
+
+Return the timestamp from a string that contains a timestamp.
+
+```
+parseDateTime('<timestamp>', '<locale>', '<format>'?)
+```
+
+| Parameter | Required | Type | Description |
+|---|---|---|---|
+| <*timestamp*> | Yes | String | The string that contains the timestamp |
+| <*locale*> | Yes | String | The locale to use. If *locale* isn't a valid value, an error is generated that the provided locale isn't valid or doesn't have an associated locale. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
+||||
+
+| Return value | Type | Description |
+|---|---|---|
+| <*parsed-timestamp*> | String | The parsed timestamp in ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK) format, which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+||||
+
+*Examples*
+
+```
+parseDateTime('20/10/2014', 'fr-fr') // Returns '2014-10-20T00:00:00.0000000'.
+parseDateTime('20 octobre 2010', 'fr-FR') // Returns '2010-10-20T00:00:00.0000000'.
+parseDateTime('martes 20 octubre 2020', 'es-es') // Returns '2020-10-20T00:00:00.0000000'.
+parseDateTime('21052019', 'fr-fr', 'ddMMyyyy') // Returns '2019-05-21T00:00:00.0000000'.
+parseDateTime('10/20/2014 15h', 'en-US', 'MM/dd/yyyy HH\h') // Returns '2014-10-20T15:00:00.0000000'.
+```
+ ## R <a name="rand"></a>
range(<startIndex>, <count>)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*startIndex*> | Yes | Integer | An integer value that starts the array as the first item |
-| <*count*> | Yes | Integer | The number of integers in the array. The `count` parameter value must be a positive integer that doesn't exceed 100,000. <p><p>**Note**: The sum of the `startIndex` and `count` values must not exceed 2,147,483,647. |
+| <*count*> | Yes | Integer | The number of integers in the array. The `count` parameter value must be a positive integer that doesn't exceed 100,000. <br><br>**Note**: The sum of the `startIndex` and `count` values must not exceed 2,147,483,647. |
||||| | Return value | Type | Description |
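For instance, a minimal sketch with assumed values:

```
range(1, 4) // Returns [1, 2, 3, 4].
range(0, 3) // Returns [0, 1, 2].
```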
replace('<text>', '<oldText>', '<newText>')
| Return value | Type | Description | | | - | -- |
-| <*updated-text*> | String | The updated string after replacing the substring <p>If the substring is not found, return the original string. |
+| <*updated-text*> | String | The updated string after replacing the substring <br><br>If the substring isn't found, return the original string. |
|||| *Example*
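For instance, a minimal sketch with assumed strings:

```
replace('the old string', 'old', 'new') // Returns 'the new string'.
replace('the old string', 'goat', 'new') // Returns 'the old string' because 'goat' isn't found.
```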
skip([<collection>], <count>)
*Example*
-This example removes one item, the number 0,
-from the front of the specified array:
+This example removes one item, the number 0, from the front of the specified array:
``` skip(createArray(0, 1, 2, 3), 1)
skip(createArray(0, 1, 2, 3), 1)
And returns this array with the remaining items: `[1,2,3]`
+<a name="slice"></a>
+
+### slice
+
+Return a substring by specifying the starting and ending position or value.
+
+```
+slice('<text>', <startIndex>, <endIndex>)
+```
+
+| Parameter | Required | Type | Description |
+|---|---|---|---|
+| <*text*> | Yes | String | The string that contains the substring to find |
+| <*startIndex*> | Yes | Integer | The zero-based starting position or value for where to begin searching for the substring <br><br>- If *startIndex* is greater than the string length, return an empty string. <br><br>- If *startIndex* is negative, start searching at the index value that's the sum of the string length and *startIndex*. |
+| <*endIndex*> | No | Integer | The zero-based ending position or value for where to end searching for the substring. The character located at the ending index value isn't included in the search. <br><br>- If *endIndex* isn't specified or is greater than the string length, search up to the end of the string. <br><br>- If *endIndex* is negative, end searching at the index value that's the sum of the string length and *endIndex*. |
+|||||
+
+| Return value | Type | Description |
+|---|---|---|
+| <*slice-result*> | String | A new string that contains the found substring |
+||||
+
+*Examples*
+
+```
+slice('Hello World', 2) // Returns 'llo World'.
+slice('Hello World', 30) // Returns ''.
+slice('Hello World', 10, 2) // Returns ''.
+slice('Hello World', 0) // Returns 'Hello World'.
+slice('Hello World', 2, 5) // Returns 'llo'.
+slice('Hello World', 6, 20) // Returns 'World'.
+slice('Hello World', -2) // Returns 'ld'.
+slice('Hello World', 3, -1) // Returns 'lo Worl'.
+slice('Hello World', 3, 3) // Returns ''.
+```
+ <a name="split"></a> ### split
startOfDay('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
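For instance, a sketch with an assumed timestamp:

```
startOfDay('2018-03-15T13:30:30Z') // Returns '2018-03-15T00:00:00.0000000Z'.
```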
startOfHour('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
startOfMonth('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
And returns this result: `"2018-03-01"`
### startsWith
-Check whether a string starts with a specific substring. Return true when the substring is found, or return false when not found. This function is not case-sensitive.
+Check whether a string starts with a specific substring. Return true when the substring is found, or return false when not found. This function isn't case-sensitive.
``` startsWith('<text>', '<searchText>')
string(<value>)
| Parameter | Required | Type | Description | | | -- | - | -- |
-| <*value*> | Yes | Any | The value to convert. If this value is null or evaluates to null, the value is converted to an empty string (`""`) value. <p><p>For example, if you assign a string variable to a non-existent property, which you can access with the `?` operator, the null value is converted to an empty string. However, comparing a null value isn't the same as comparing an empty string. |
+| <*value*> | Yes | Any | The value to convert. If this value is null or evaluates to null, the value is converted to an empty string (`""`) value. <br><br>For example, if you assign a string variable to a non-existent property, which you can access with the `?` operator, the null value is converted to an empty string. However, comparing a null value isn't the same as comparing an empty string. |
||||| | Return value | Type | Description |
subtractFromTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*interval*> | Yes | Integer | The number of specified time units to subtract | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
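For instance, a sketch with an assumed timestamp:

```
subtractFromTime('2018-01-02T00:00:00Z', 1, 'Day') // Returns '2018-01-01T00:00:00.0000000Z'.
subtractFromTime('2018-01-02T00:00:00Z', 1, 'Day', 'D') // Returns 'Monday, January 1, 2018'.
```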
Optionally, you can specify a different format with the <*format*> parameter.
| Parameter | Required | Type | Description | | | -- | - | -- |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. <br><br>If the format isn't a valid value, an error is generated that the provided format isn't valid and must be a numeric format string. |
||||| | Return value | Type | Description |
workflow().<property>
| Parameter | Required | Type | Description | | | -- | - | -- |
-| <*property*> | No | String | The name for the workflow property whose value you want <p><p>By default, a workflow object has these properties: `name`, `type`, `id`, `location`, `run`, and `tags`. <p><p>- The `run` property value is a JSON object that includes these properties: `name`, `type`, and `id`. <p><p>- The `tags` property is a JSON object that includes [tags that are associated with your logic app in Azure Logic Apps or flow in Power Automate](../azure-resource-manager/management/tag-resources.md) and the values for those tags. For more information about tags in Azure resources, review [Tag resources, resource groups, and subscriptions for logical organization in Azure](../azure-resource-manager/management/tag-resources.md). <p><p>**Note**: By default, a logic app has no tags, but a Power Automate flow has the `flowDisplayName` and `environmentName` tags. |
+| <*property*> | No | String | The name for the workflow property whose value you want <br><br>By default, a workflow object has these properties: `name`, `type`, `id`, `location`, `run`, and `tags`. <br><br>- The `run` property value is a JSON object that includes these properties: `name`, `type`, and `id`. <br><br>- The `tags` property is a JSON object that includes [tags that are associated with your logic app in Azure Logic Apps or flow in Power Automate](../azure-resource-manager/management/tag-resources.md) and the values for those tags. For more information about tags in Azure resources, review [Tag resources, resource groups, and subscriptions for logical organization in Azure](../azure-resource-manager/management/tag-resources.md). <br><br>**Note**: By default, a logic app has no tags, but a Power Automate flow has the `flowDisplayName` and `environmentName` tags. |
||||| *Example 1*
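For instance, a minimal sketch of reading workflow properties (property paths follow the table above; returned values depend on your workflow):

```
workflow().name   // Returns the workflow's name.
workflow().run.id // Returns the ID for the current run.
```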
xml('<value>')
| Parameter | Required | Type | Description | | | -- | - | -- |
-| <*value*> | Yes | String | The string with the JSON object to convert <p>The JSON object must have only one root property, which can't be an array. <br>Use the backslash character (\\) as an escape character for the double quotation mark ("). |
+| <*value*> | Yes | String | The string with the JSON object to convert <br><br>The JSON object must have only one root property, which can't be an array. <br>Use the backslash character (\\) as an escape character for the double quotation mark ("). |
||||| | Return value | Type | Description |
xpath('<xml>', '<xpath>')
| | - | -- | | <*xml-node*> | XML | An XML node when only a single node matches the specified XPath expression | | <*value*> | Any | The value from an XML node when only a single value matches the specified XPath expression |
-| [<*xml-node1*>, <*xml-node2*>, ...] </br>-or- </br>[<*value1*>, <*value2*>, ...] | Array | An array with XML nodes or values that match the specified XPath expression |
+| [<*xml-node1*>, <*xml-node2*>, ...] -or- [<*value1*>, <*value2*>, ...] | Array | An array with XML nodes or values that match the specified XPath expression |
|||| *Example 1*
In this example, suppose you have this XML string, which includes the XML docume
<?xml version="1.0"?><file xmlns="https://contoso.com"><location>Paris</location></file> ```
-These expressions use either XPath expression, `/*[name()="file"]/*[name()="location"]` or `/*[local-name()="file" and namespace-uri()="https://contoso.com"]/*[local-name()="location"]`, to find nodes that match the `<location></location>` node. These examples show the syntax that you use in either the Logic App Designer or in the expression editor:
+These expressions use either XPath expression, `/*[name()="file"]/*[name()="location"]` or `/*[local-name()="file" and namespace-uri()="https://contoso.com"]/*[local-name()="location"]`, to find nodes that match the `<location></location>` node. These examples show the syntax that you use in either the designer or in the expression editor:
* `xpath(xml(body('Http')), '/*[name()="file"]/*[name()="location"]')` * `xpath(xml(body('Http')), '/*[local-name()="file" and namespace-uri()="https://contoso.com"]/*[local-name()="location"]')`
Here's the result node that matches the `<location></location>` node:
> > If you work in code view, escape the double quotation mark (") by using the backslash character (\\). > For example, you need to use escape characters when you serialize an expression as a JSON string.
-> However, if you're work in the Logic App Designer or expression editor, you don't need to escape the
+> However, if you work in the designer or expression editor, you don't need to escape the
> double quotation mark because the backslash character is added automatically to the underlying definition, for example: > > * Code view: `xpath(xml(body('Http')), '/*[name()=\"file\"]/*[name()=\"location\"]')`
Here's the result: `Paris`
## Next steps
-Learn about the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md)
+Learn about the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md)
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Each of the tasks (and some models) have a set of parameters in the `model_setti
| Task | Parameter name | Default | | |- | | |Image classification (multi-class and multi-label) | `valid_resize_size`<br>`valid_crop_size` | 256<br>224 |
-|Object detection, instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
+|Object detection | `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
|Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`nms_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
+|Instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img`<br>`mask_pixel_score_threshold`<br>`max_number_of_polygon_points`<br>`export_as_image`<br>`image_type` | 600<br>1333<br>0.3<br>0.5<br>100<br>0.5<br>100<br>False<br>JPG|
For a detailed description on task specific hyperparameters, please refer to [Hyperparameters for computer vision tasks in automated machine learning](reference-automl-images-hyperparameters.md).
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
This table summarizes hyperparameters specific to the `yolov5` algorithm.
| `box_score_thresh` | During inference, only return proposals with a score greater than `box_score_thresh`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 | | `nms_iou_thresh` | IOU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
+This table summarizes hyperparameters specific to the `maskrcnn_*` models for instance segmentation during inference.
+
+| Parameter name | Description | Default |
+| - |-|-|
+| `mask_pixel_score_threshold` | Score cutoff for considering a pixel as part of the mask of an object. | 0.5 |
+| `max_number_of_polygon_points` | Maximum number of (x, y) coordinate pairs in polygon after converting from a mask. | 100 |
+| `export_as_image` | Export masks as images. | False |
+| `image_type` | Type of image to export mask as (options are jpg, png, bmp). | JPG |
+ ## Model agnostic hyperparameters The following table describes the hyperparameters that are model agnostic.
marketplace Azure App Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-apis.md
For the *tenant_id* value in the `POST URI` and the *client_id* and *client_secr
### Step 3: Use the Microsoft Store submission API
-After you have an Azure AD access token, you can call methods in the Partner Center submission API. To create or update submissions, you typically call multiple methods in the Partner Center submission API in a specific order. For information about each scenario and the syntax of each method, see the Ingestion API swagger.
+After you have an Azure AD access token, you can call methods in the Partner Center submission API. To create or update submissions, you typically call multiple methods in the Partner Center submission API in a specific order. For information about each scenario and the syntax of each method, see the [Ingestion API Swagger](https://ingestionapi-swagger.azureedge.net/#/).
-https://apidocs.microsoft.com/services/partneringestion/
+https://ingestionapi-swagger.azureedge.net/#/
## Next steps
media-services Media Services Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/media-services-compliance.md
+
+ Title: Azure Media Services compliance, privacy and security
+: Azure Media Services
+description: As an important reminder, you must comply with all applicable laws in your use of Azure Media Services, and you may not use Media Services or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
+
+documentationcenter: na
++++ Last updated : 2/17/2022+
+#Customer intent: As a developer or a content provider, I want to encode, stream (on demand or live), analyze my media content so that my customers can: view the content on a wide variety of browsers and devices, gain valuable insights from recorded content.
++
+# Azure Media Services compliance, privacy and security
++
+## Compliance, privacy and security
+
+As an important reminder, you must comply with all applicable laws in your use of Azure Media Services, and you may not use Media Services or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
+
+Before uploading any video/image to Media Services, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Media Services and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Media Services and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
+
+## Learn more about compliance
+
+To learn about compliance, privacy and security in Media Services please visit the Microsoft [Trust Center](https://www.microsoft.com/trust-center/?rtc=1). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Media Services, you agree to be bound by the OST, DPA and the Privacy Statement.
media-services Media Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/media-services-overview.md
Azure Media Services is a cloud-based platform that enables you to build solutio
The Media Services v3 SDKs are based on [Media Services v3 OpenAPI Specification (Swagger)](https://aka.ms/ams-v3-rest-sdk). [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]-
-## Compliance, Privacy and Security
-
-As an important reminder, you must comply with all applicable laws in your use of Azure Media Services, and you may not use Media Services or any Azure service in a manner that violates the rights of others, or that may be harmful to others.
-
-Before uploading any video/image to Media Services, You must have all the proper rights to use the video/image, including, where required by law, all the necessary consents from individuals (if any) in the video/image, for the use, processing, and storage of their data in Media Services and Azure. Some jurisdictions may impose special legal requirements for the collection, online processing and storage of certain categories of data, such as biometric data. Before using Media Services and Azure for the processing and storage of any data subject to special legal requirements, You must ensure compliance with any such legal requirements that may apply to You.
-
-To learn about compliance, privacy and security in Media Services please visit the Microsoft [Trust Center](https://www.microsoft.com/trust-center/?rtc=1). For Microsoft's privacy obligations, data handling and retention practices, including how to delete your data, please review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products?rtc=1) ("OST") and [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). By using Media Services, you agree to be bound by the OST, DPA and the Privacy Statement.
## What can I do with Media Services?
How-to guides contain code samples that demonstrate how to complete a task. In t
Check out the [Azure Media Services community](media-services-community.md) article to see different ways you can ask questions, give feedback, and get updates about Media Services.
-## Next steps
+## Compliance, privacy and security
-[Learn about fundamental concepts](concepts-overview.md)
+> [!IMPORTANT]
+> Read the [Compliance, privacy and security document](media-services-compliance.md) before using Azure Media Services to deliver your media content.
mysql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-query-store.md
Use the [Azure portal](howto-server-parameters.md) or [Azure CLI](howto-confi
## Views and functions
-View and manage Query Store using the following views and functions. Anyone in the [select privilege public role](howto-create-users.md#to-create-more-admin-users-in-azure-database-for-mysql) can use these views to see the data in Query Store. These views are only available in the **mysql** database.
+View and manage Query Store using the following views and functions. Anyone in the [select privilege public role](howto-create-users.md) can use these views to see the data in Query Store. These views are only available in the **mysql** database.
Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash.
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
Here are some considerations to keep in mind when you use high availability:
* Read replicas aren't supported for HA servers. * Data-in Replication isn't supported for HA servers. * GTID mode will be turned on as the HA solution uses GTID. Check whether your workload has [restrictions or limitations on replication with GTIDs](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids-restrictions.html).
-
+>[!NOTE]
+>If you enable same-zone HA after creating the server, make sure the server parameters `enforce_gtid_consistency` and [`gtid_mode`](./concepts-read-replicas.md#global-transaction-identifier-gtid) are set to ON before enabling HA.
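As a quick sanity check before enabling HA, a sketch of how to verify both settings from any MySQL client (these server parameters are read-only in SQL and are changed through server parameters in the portal or CLI):

```sql
-- Both variables should report ON before HA is enabled.
SHOW VARIABLES LIKE 'gtid_mode';
SHOW VARIABLES LIKE 'enforce_gtid_consistency';
```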
+ ## Frequently asked questions (FAQ) - **How am I billed for high available (HA) servers?**
mysql How To Create Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-create-manage-databases.md
+
+ Title: How to create databases for Azure Database for MySQL Flexible Server
+description: This article describes how to create and manage databases on Azure Database for MySQL Flexible server.
++++ Last updated : 02/17/2022++
+# Create and manage databases for Azure Database for MySQL Flexible Server
+
+This article contains information about creating, listing, and deleting MySQL databases on Azure Database for MySQL Flexible Server.
+
+## Prerequisites
+Before completing the tasks, you must have
+- Created an Azure Database for MySQL Flexible server using the [Azure portal](./quickstart-create-server-portal.md) or [Azure CLI](./quickstart-create-server-cli.md).
+- Signed in to the [Azure portal](https://portal.azure.com).
++
+## List your databases
+To list all your databases on MySQL flexible server:
+- Open the **Overview** page of your MySQL flexible server.
+- Select **Databases** from the settings on the left navigation menu.
+
+> :::image type="content" source="media/how-to-create-manage-databases/databases-view-mysql-flexible-server.png" alt-text="Screenshot showing how to list all the databases on Azure Database for MySQL flexible server":::
+
+## Create a database
+To create a database on MySQL flexible server:
+
+- Select **Databases** from the settings on the left navigation menu.
+- Click on **Add** to create a database. Provide the database name, charset and collation settings for this database.
+- Click on **Save** to complete the task.
+
+> :::image type="content" source="media/how-to-create-manage-databases/create-database-azure-mysql-flexible-server.png" alt-text="Screenshot showing how to create a database on Azure Database for MySQL flexible server":::
+
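If you prefer a MySQL client over the portal, a minimal sketch of the equivalent statement; the database name matches the example below, and the charset and collation values are assumptions:

```sql
-- Run while connected as the server admin user.
CREATE DATABASE testdatabase1 CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
```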
+## Delete a database
+To delete a database on MySQL flexible server:
+
+- Select **Databases** from the settings on the left navigation menu.
+- Click on **testdatabase1** to select the database. You can select multiple databases to delete at the same time.
+- Click on **Delete** to complete the task.
+
+> :::image type="content" source="media/how-to-create-manage-databases/delete-database-on-mysql-flexible-server.png" alt-text="Screenshot showing how to delete a database on Azure Database for MySQL flexible server":::
+
+## Next steps
+
+Learn how to [manage users](../howto-create-users.md)
mysql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-create-users.md
Title: Create databases and users - Azure Database for MySQL
+ Title: How to create users for Azure Database for MySQL
description: This article describes how to create new user accounts to interact with an Azure Database for MySQL server. Previously updated : 01/13/2021 Last updated : 02/17/2022
-# Create databases and users in Azure Database for MySQL
+# Create users in Azure Database for MySQL
[!INCLUDE[applies-to-mysql-single-flexible-server](includes/applies-to-mysql-single-flexible-server.md)]
-This article describes how to create users in Azure Database for MySQL.
+This article describes how to create users for Azure Database for MySQL.
> [!NOTE] > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
After you create an Azure Database for MySQL server, you can use the first serve
> > Password plugins like `validate_password` and `caching_sha2_password` aren't supported by the service.
-## To create a database with a non-admin user in Azure Database for MySQL
+## Create a database
1. Get the connection information and admin user name. To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information on the server **Overview** page or on the **Properties** page in the Azure portal. 2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, or HeidiSQL.
- If you're not sure how to connect, see [connect and query data for Single Server](./connect-workbench.md) or [connect and query data for Flexible Server](./flexible-server/connect-workbench.md).
+> [!NOTE]
+> If you're not sure how to connect, see [connect and query data for Single Server](./connect-workbench.md) or [connect and query data for Flexible Server](./flexible-server/connect-workbench.md).
3. Edit and run the following SQL code. Replace the placeholder value `db_user` with your intended new user name. Replace the placeholder value `testdb` with your database name.
After you create an Azure Database for MySQL server, you can use the first serve
```sql CREATE DATABASE testdb;
+ ```
+## Create a non-admin user
+ Now that the database is created, you can create a non-admin user with the ```CREATE USER``` MySQL statement.
+ ``` sql
CREATE USER 'db_user'@'%' IDENTIFIED BY 'StrongPassword!'; GRANT ALL PRIVILEGES ON testdb . * TO 'db_user'@'%';
After you create an Azure Database for MySQL server, you can use the first serve
FLUSH PRIVILEGES; ```
-4. Verify the grants in the database:
+## Verify the user permissions
+Run the ```SHOW GRANTS``` MySQL statement to view the privileges allowed for the user **db_user** on the **testdb** database.
```sql USE testdb;
After you create an Azure Database for MySQL server, you can use the first serve
SHOW GRANTS FOR 'db_user'@'%'; ```
-5. Sign in to the server, specifying the designated database and using the new user name and password. This example shows the mysql command line. When you use this command, you'll be prompted for the user's password. Use your own server name, database name, and user name.
-
-### [Single Server](#tab/single-server)
+## Connect to the database with the new user
+Sign in to the server, specifying the designated database and using the new user name and password. This example shows the mysql command line. When you use this command, you'll be prompted for the user's password. Use your own server name, database name, and user name. The following table shows how to connect for Single Server and Flexible Server.
- ```azurecli-interactive
- mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user@mydemoserver -p
- ```
-
-### [Flexible Server](#tab/flexible-server)
+| Server type | Usage |
+| -- | -- |
+| Single Server | ```mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user@mydemoserver -p``` |
+| Flexible Server | ``` mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user -p```|
- ```azurecli-interactive
- mysql --host mydemoserver.mysql.database.azure.com --database testdb --user db_user -p
- ```
-## To create more admin users in Azure Database for MySQL
-
-1. Get the connection information and admin user name.
- To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information on the server **Overview** page or on the **Properties** page in the Azure portal.
-
-2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as MySQL Workbench, mysql.exe, or HeidiSQL.
-
- If you're not sure how to connect, see [Use MySQL Workbench to connect and query data](./connect-workbench.md).
-
-3. Edit and run the following SQL code. Replace the placeholder value `new_master_user` with your new user name. This syntax grants the listed privileges on all the database schemas (*.*) to the user (`new_master_user` in this example).
+## Limit privileges for a user
+To restrict the type of operations a user can run on the database, explicitly list the operations in the **GRANT** statement, as in the following example:
```sql CREATE USER 'new_master_user'@'%' IDENTIFIED BY 'StrongPassword!'; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER ON *.* TO 'new_master_user'@'%' WITH GRANT OPTION;
- FLUSH PRIVILEGES;
- ```
-
-4. Verify the grants:
-
- ```sql
- USE sys;
-
- SHOW GRANTS FOR 'new_master_user'@'%';
+ FLUSH PRIVILEGES;
```
-## azure_superuser
+## About azure_superuser
All Azure Database for MySQL servers are created with a user called "azure_superuser". This is a system account created by Microsoft to manage the server and conduct monitoring, backups, and other regular maintenance. On-call engineers may also use this account to access the server during an incident with certificate authentication and must request access using just-in-time (JIT) processes. ## Next steps
-Open the firewall for the IP addresses of the new users' machines to enable them to connect:
-
-* [Create and manage firewall rules on Single Server](howto-manage-firewall-using-portal.md)
-* [Create and manage firewall rules on Flexible Server](flexible-server/how-to-connect-tls-ssl.md)
- For more information about user account management, see the MySQL product documentation for [User account management](https://dev.mysql.com/doc/refman/5.7/en/access-control.html), [GRANT syntax](https://dev.mysql.com/doc/refman/5.7/en/grant.html), and [Privileges](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html).
network-watcher Network Watcher Ip Flow Verify Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-ip-flow-verify-overview.md
IP flow verify checks if a packet is allowed or denied to or from a virtual machine. The information consists of direction, protocol, local IP, remote IP, local port, and remote port. If the packet is denied by a security group, the name of the rule that denied the packet is returned. While any source or destination IP can be chosen, IP flow verify helps administrators quickly diagnose connectivity issues from or to the internet and from or to the on-premises environment.
-IP flow verify looks at the rules for all Network Security Groups (NSGs) applied to the network interface, such as a subnet or virtual machine NIC. Traffic flow is then verified based on the configured settings to or from that network interface. IP flow verify is useful in confirming if a rule in a Network Security Group is blocking ingress or egress traffic to or from a virtual machine.Now along with the NSG rules evaluation, the Azure Virtual Network Manager rules will also be evaluated.
+IP flow verify looks at the rules for all Network Security Groups (NSGs) applied to the network interface, such as a subnet or virtual machine NIC. Traffic flow is then verified based on the configured settings to or from that network interface. IP flow verify is useful in confirming if a rule in a Network Security Group is blocking ingress or egress traffic to or from a virtual machine. Now along with the NSG rules evaluation, the Azure Virtual Network Manager rules will also be evaluated.
[Azure Virtual Network Manager (AVNM)](../virtual-network-manager/overview.md) is a management service that enables users to group, configure, deploy, and manage Virtual Networks globally across subscriptions. AVNM security configuration allows users to define a collection of rules that can be applied to one or more network security groups at the global level. These security rules have a higher priority than network security group (NSG) rules. An important difference to note here is that admin rules are a resource delivered by ANM in a central location controlled by governance and security teams, which bubble down to each vnet. NSGs are a resource controlled by the vnet owners, which apply at each subnet or NIC level.
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
For more information about Traffic Manager, see [What is Azure Traffic Manager?]
### <a name="loadbalancer"></a>Load Balancer

The Azure Load Balancer provides high-performance, low-latency Layer 4 load balancing for all UDP and TCP protocols. It manages inbound and outbound connections. You can configure public and internal load-balanced endpoints. You can define rules to map inbound connections to back-end pool destinations by using TCP and HTTP health-probing options to manage service availability. To learn more about Load Balancer, read the [Load Balancer overview](../../load-balancer/load-balancer-overview.md) article.
+Azure Load Balancer is available in Basic, Standard, and Gateway SKUs.
+
The following picture shows an Internet-facing multi-tier application that utilizes both external and internal load balancers:

:::image type="content" source="./media/networking-overview/load-balancer.png" alt-text="Azure Load Balancer example":::
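To make the rule-and-probe model concrete, here's a minimal, hedged Azure CLI sketch that creates a Standard load balancer with a TCP health probe and an HTTP rule; all resource names are placeholders:

```azurecli-interactive
# Create a Standard load balancer with a front-end IP and a back-end pool.
az network lb create \
    --resource-group myResourceGroup \
    --name myLoadBalancer \
    --sku Standard \
    --public-ip-address myPublicIP \
    --frontend-ip-name myFrontEnd \
    --backend-pool-name myBackEndPool

# Add a TCP health probe so the rule only forwards to healthy instances.
az network lb probe create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHealthProbe \
    --protocol tcp \
    --port 80

# Map inbound port 80 on the front end to port 80 on the back-end pool.
az network lb rule create \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --name myHTTPRule \
    --protocol tcp \
    --frontend-port 80 \
    --backend-port 80 \
    --frontend-ip-name myFrontEnd \
    --backend-pool-name myBackEndPool \
    --probe-name myHealthProbe
```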
postgresql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-planned-maintenance-notification.md
Previously updated : 10/21/2020 Last updated : 02/17/2022

# Planned maintenance notification in Azure Database for PostgreSQL - Single Server
You can utilize the planned maintenance notifications feature to receive alerts
### Planned maintenance notification
-> [!IMPORTANT]
-> Planned maintenance notifications are currently available in preview in all regions **except** West Central US
**Planned maintenance notifications** allow you to receive alerts for upcoming planned maintenance events for your Azure Database for PostgreSQL server. These notifications are integrated with [Service Health's](../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. They also help you scale the notification to the right audiences for different resource groups, because you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 calendar hours before the event.
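Because these notifications surface through Service Health as activity log events, one way to wire up an alert is with `az monitor activity-log alert create`. This is a sketch only; the names are placeholders and the exact condition syntax may vary by CLI version:

```azurecli-interactive
# Create an activity log alert that fires on Service Health events,
# which include planned maintenance notifications.
az monitor activity-log alert create \
    --name myServiceHealthAlert \
    --resource-group myResourceGroup \
    --condition category=ServiceHealth \
    --description "Alert on service health events, including planned maintenance"
```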
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-supported-versions.md
Previously updated : 08/01/2021 Last updated : 02/17/2022

# Supported PostgreSQL major versions
The current minor release is 11.11. Refer to the [PostgreSQL documentation](http
## PostgreSQL version 10

The current minor release is 10.16. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/10/static/release-10-16.html) to learn more about improvements and fixes in this minor release.
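To confirm which major and minor version a given server is provisioned with, a quick hedged check with the Azure CLI (server and resource group names are placeholders):

```azurecli-interactive
# Print the configured version of a Single Server instance.
az postgres server show \
    --resource-group myResourceGroup \
    --name mydemoserver \
    --query version \
    --output tsv
```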
-## PostgreSQL version 9.6
+## PostgreSQL version 9.6 (retired)
The current minor release is 9.6.21. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/release-9-6-21.html) to learn more about improvements and fixes in this minor release.

## PostgreSQL version 9.5 (retired)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Previously updated : 12/06/2021 Last updated : 02/17/2022
One advantage of running your workload in Azure is global reach. The flexible se
| Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| East Asia | :heavy_check_mark: | :x: | :x: |
| East US | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| East US 2 | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| East US 2 | :heavy_check_mark: | :x: $ | :heavy_check_mark: |
| France Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
One advantage of running your workload in Azure is global reach. The flexible se
| Norway East | :heavy_check_mark: | :x: | :x: |
| South Africa North | :heavy_check_mark: | :x: | :x: |
| South Central US | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Southeast Asia | :heavy_check_mark: | :x: | :x: |
+| Southeast Asia | :heavy_check_mark: | :x: $ | :x: |
| Sweden Central | :heavy_check_mark: | :x: | :x: |
| Switzerland North | :heavy_check_mark: | :x: | :x: |
| UAE North | :heavy_check_mark: | :x: | :x: |
One advantage of running your workload in Azure is global reach. The flexible se
| West US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: |
| West US 3 | :heavy_check_mark: | :x: | :x: |
+$ New Zone-redundant high availability deployments are temporarily blocked in this region. Already provisioned HA servers are fully supported.
+
<!-- We continue to add more regions for flexible server. -->

> [!NOTE]
> If your application requires zone-redundant HA and it's not available in your preferred Azure region, consider using other regions within the same geography where zone-redundant HA is available, such as East US for East US 2, Central US for North Central US, and so on.
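For reference, here's a hedged sketch of requesting zone-redundant HA at server creation with the Azure CLI. The names are placeholders, and the accepted `--high-availability` values have changed across CLI versions, so check `az postgres flexible-server create --help` on your version:

```azurecli-interactive
# Create a flexible server with zone-redundant HA in a region that supports it.
# On older CLI versions the flag took 'Enabled' instead of 'ZoneRedundant'.
az postgres flexible-server create \
    --resource-group myResourceGroup \
    --name mydemoserver \
    --location centralus \
    --tier GeneralPurpose \
    --sku-name Standard_D2s_v3 \
    --high-availability ZoneRedundant
```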
private-link Create Private Endpoint Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-cli.md
Title: 'Quickstart - Create an Azure Private Endpoint using Azure CLI'
-description: Use this quickstart to learn how to create a Private Endpoint using Azure CLI.
+ Title: 'Quickstart: Create a private endpoint by using the Azure CLI'
+description: In this quickstart, you'll learn how to create a private endpoint by using the Azure CLI.
Last updated 11/07/2020
-#Customer intent: As someone with a basic network background, but is new to Azure, I want to create an Azure private endpoint
+#Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using the Azure CLI.
-# Quickstart: Create a Private Endpoint using Azure CLI
+# Quickstart: Create a private endpoint by using the Azure CLI
-Get started with Azure Private Link by using a Private Endpoint to connect securely to an Azure web app.
+Get started with Azure Private Link by using a private endpoint to connect securely to an Azure web app.
-In this quickstart, you'll create a private endpoint for an Azure web app and deploy a virtual machine to test the private connection.
+In this quickstart, you'll create a private endpoint for an Azure web app and then create and deploy a virtual machine (VM) to test the private connection.
-Private endpoints can be created for different kinds of Azure services, such as Azure SQL and Azure Storage.
+You can create private endpoints for a variety of Azure services, such as Azure SQL and Azure Storage.
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An Azure Web App with a **PremiumV2-tier** or higher app service plan deployed in your Azure subscription.
- * For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
- * For a detailed tutorial on creating a web app and an endpoint, see [Tutorial: Connect to a web app using an Azure Private Endpoint](tutorial-private-endpoint-webapp-portal.md).
-* Sign in to the Azure portal and check that your subscription is active by running `az login`.
-* Check your version of the Azure CLI in a terminal or command window by running `az --version`. For the latest version, see the [latest release notes](/cli/azure/release-notes-azure-cli?tabs=azure-cli).
- * If you don't have the latest version, update your installation by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
+* An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Create a resource group
+ To ensure that your subscription is active, sign in to Azure by running `az login`.
-An Azure resource group is a logical container into which Azure resources are deployed and managed.
+* An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
-Create a resource group with [az group create](/cli/azure/group#az_group_create):
+ For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+
+ For a detailed tutorial on creating a web app and an endpoint, see [Tutorial: Connect to a web app by using a private endpoint](tutorial-private-endpoint-webapp-portal.md).
-* Named **CreatePrivateEndpointQS-rg**.
-* In the **eastus** location.
+* The latest version of the Azure CLI, installed.
-```azurecli-interactive
-az group create \
- --name CreatePrivateEndpointQS-rg \
- --location eastus
-```
+ Check your version of the Azure CLI in a terminal or command window by running `az --version`. For the latest version, see the most recent [release notes](/cli/azure/release-notes-azure-cli?tabs=azure-cli).
+
+ If you don't have the latest version of the Azure CLI, update it by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
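For example, the version check and in-place upgrade look roughly like this (`az upgrade` is available in Azure CLI 2.11.0 and later):

```azurecli-interactive
# Show the installed CLI version, then upgrade in place if needed.
az --version
az upgrade
```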
-## Create a virtual network and bastion host
-
-In this section, you'll create a virtual network, subnet, and bastion host.
-
-The bastion host will be used to connect securely to the virtual machine for testing the private endpoint.
+## Create a resource group
-Create a virtual network with [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create)
+An Azure resource group is a logical container where Azure resources are deployed and managed.
-* Named **myVNet**.
-* Address prefix of **10.0.0.0/16**.
-* Subnet named **myBackendSubnet**.
-* Subnet prefix of **10.0.0.0/24**.
-* In the **CreatePrivateEndpointQS-rg** resource group.
-* Location of **eastus**.
+First, create a resource group by using [az group create](/cli/azure/group#az_group_create):
```azurecli-interactive
-az network vnet create \
- --resource-group CreatePrivateEndpointQS-rg\
- --location eastus \
- --name myVNet \
- --address-prefixes 10.0.0.0/16 \
- --subnet-name myBackendSubnet \
- --subnet-prefixes 10.0.0.0/24
+az group create \
+ --name CreatePrivateEndpointQS-rg \
+ --location eastus
```
-Update the subnet to disable private endpoint network policies for the private endpoint with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_update):
+## Create a virtual network and bastion host
-```azurecli-interactive
-az network vnet subnet update \
- --name myBackendSubnet \
- --resource-group CreatePrivateEndpointQS-rg \
- --vnet-name myVNet \
- --disable-private-endpoint-network-policies true
-```
+Next, create a virtual network, subnet, and bastion host. You'll use the bastion host to connect securely to the VM for testing the private endpoint.
+
+1. Create a virtual network by using [az network vnet create](/cli/azure/network/vnet#az_network_vnet_create):
+
+ * Name: **myVNet**
+ * Address prefix: **10.0.0.0/16**
+ * Subnet name: **myBackendSubnet**
+ * Subnet prefix: **10.0.0.0/24**
+ * Resource group: **CreatePrivateEndpointQS-rg**
+ * Location: **eastus**
+
+ ```azurecli-interactive
+ az network vnet create \
+ --resource-group CreatePrivateEndpointQS-rg\
+ --location eastus \
+ --name myVNet \
+ --address-prefixes 10.0.0.0/16 \
+ --subnet-name myBackendSubnet \
+ --subnet-prefixes 10.0.0.0/24
+ ```
-Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create) to create a public ip address for the bastion host:
+1. Update the subnet to disable private-endpoint network policies for the private endpoint by using [az network vnet subnet update](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_update):
-* Create a standard zone redundant public IP address named **myBastionIP**.
-* In **CreatePrivateEndpointQS-rg**.
+ ```azurecli-interactive
+ az network vnet subnet update \
+ --name myBackendSubnet \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --vnet-name myVNet \
+ --disable-private-endpoint-network-policies true
+ ```
-```azurecli-interactive
-az network public-ip create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name myBastionIP \
- --sku Standard
-```
+1. Create a public IP address for the bastion host by using [az network public-ip create](/cli/azure/network/public-ip#az_network_public_ip_create):
-Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create) to create a bastion subnet:
+ * Standard zone-redundant public IP address name: **myBastionIP**
+ * Resource group: **CreatePrivateEndpointQS-rg**
-* Named **AzureBastionSubnet**.
-* Address prefix of **10.0.1.0/24**.
-* In virtual network **myVNet**.
-* In resource group **CreatePrivateEndpointQS-rg**.
+ ```azurecli-interactive
+ az network public-ip create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name myBastionIP \
+ --sku Standard
+ ```
-```azurecli-interactive
-az network vnet subnet create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name AzureBastionSubnet \
- --vnet-name myVNet \
- --address-prefixes 10.0.1.0/24
-```
+1. Create a bastion subnet by using [az network vnet subnet create](/cli/azure/network/vnet/subnet#az_network_vnet_subnet_create):
-Use [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create) to create a bastion host:
+ * Name: **AzureBastionSubnet**
+ * Address prefix: **10.0.1.0/24**
+ * Virtual network: **myVNet**
+ * Resource group: **CreatePrivateEndpointQS-rg**
-* Named **myBastionHost**.
-* In **CreatePrivateEndpointQS-rg**.
-* Associated with public IP **myBastionIP**.
-* Associated with virtual network **myVNet**.
-* In **eastus** location.
+ ```azurecli-interactive
+ az network vnet subnet create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name AzureBastionSubnet \
+ --vnet-name myVNet \
+ --address-prefixes 10.0.1.0/24
+ ```
-```azurecli-interactive
-az network bastion create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name myBastionHost \
- --public-ip-address myBastionIP \
- --vnet-name myVNet \
- --location eastus
-```
+1. Create a bastion host by using [az network bastion create](/cli/azure/network/bastion#az_network_bastion_create):
+
+ * Name: **myBastionHost**
+ * Resource group: **CreatePrivateEndpointQS-rg**
+ * Public IP address: **myBastionIP**
+ * Virtual network: **myVNet**
+ * Location: **eastus**
+
+ ```azurecli-interactive
+ az network bastion create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name myBastionHost \
+ --public-ip-address myBastionIP \
+ --vnet-name myVNet \
+ --location eastus
+ ```
It can take a few minutes for the Azure Bastion host to deploy.
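If you want to confirm when the deployment has finished, one hedged option is to poll the bastion host's provisioning state, using the same names as the steps above:

```azurecli-interactive
# Returns 'Succeeded' once the bastion host has finished deploying.
az network bastion show \
    --resource-group CreatePrivateEndpointQS-rg \
    --name myBastionHost \
    --query provisioningState \
    --output tsv
```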
-## Create test virtual machine
+## Create a test virtual machine
-In this section, you'll create a virtual machine that will be used to test the private endpoint.
+Next, create a VM that you can use to test the private endpoint.
-Create a VM with [az vm create](/cli/azure/vm#az_vm_create). When prompted, provide a password to be used as the credentials for the VM:
+1. Create the VM by using [az vm create](/cli/azure/vm#az_vm_create).
-* Named **myVM**.
-* In **CreatePrivateEndpointQS-rg**.
-* In network **myVNet**.
-* In subnet **myBackendSubnet**.
-* Server image **Win2019Datacenter**.
+1. At the prompt, provide a password to be used as the credentials for the VM:
-```azurecli-interactive
-az vm create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name myVM \
- --image Win2019Datacenter \
- --public-ip-address "" \
- --vnet-name myVNet \
- --subnet myBackendSubnet \
- --admin-username azureuser
-```
+ * Name: **myVM**
+ * Resource group: **CreatePrivateEndpointQS-rg**
+ * Virtual network: **myVNet**
+ * Subnet: **myBackendSubnet**
+ * Server image: **Win2019Datacenter**
+
+ ```azurecli-interactive
+ az vm create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name myVM \
+ --image Win2019Datacenter \
+ --public-ip-address "" \
+ --vnet-name myVNet \
+ --subnet myBackendSubnet \
+ --admin-username azureuser
+ ```
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
-## Create private endpoint
+## Create a private endpoint
-In this section, you'll create the private endpoint.
+Next, create the private endpoint.
-Use [az webapp list](/cli/azure/webapp#az_webapp_list) to place the resource ID of the Web app you previously created into a shell variable.
+1. Place the resource ID of the web app that you created earlier into a shell variable by using [az webapp list](/cli/azure/webapp#az_webapp_list).
-Use [az network private-endpoint create](/cli/azure/network/private-endpoint#az_network_private_endpoint_create) to create the endpoint and connection:
+1. Create the endpoint and connection by using [az network private-endpoint create](/cli/azure/network/private-endpoint#az_network_private_endpoint_create):
-* Named **myPrivateEndpoint**.
-* In resource group **CreatePrivateEndpointQS-rg**.
-* In virtual network **myVNet**.
-* In subnet **myBackendSubnet**.
-* Connection named **myConnection**.
-* Your webapp **\<webapp-resource-group-name>**.
+ * Name: **myPrivateEndpoint**
+ * Resource group: **CreatePrivateEndpointQS-rg**
+ * Virtual network: **myVNet**
+ * Subnet: **myBackendSubnet**
+ * Connection name: **myConnection**
+ * Web app: **\<webapp-resource-group-name>**
-```azurecli-interactive
-id=$(az webapp list \
- --resource-group <webapp-resource-group-name> \
- --query '[].[id]' \
- --output tsv)
+ ```azurecli-interactive
+ id=$(az webapp list \
+ --resource-group <webapp-resource-group-name> \
+ --query '[].[id]' \
+ --output tsv)
-az network private-endpoint create \
- --name myPrivateEndpoint \
- --resource-group CreatePrivateEndpointQS-rg \
- --vnet-name myVNet --subnet myBackendSubnet \
- --private-connection-resource-id $id \
- --group-id sites \
- --connection-name myConnection
-```
+ az network private-endpoint create \
+ --name myPrivateEndpoint \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --vnet-name myVNet --subnet myBackendSubnet \
+ --private-connection-resource-id $id \
+ --group-id sites \
+ --connection-name myConnection
+ ```
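Before moving on, you can optionally confirm that the endpoint deployed. A minimal sketch, using the same names as above:

```azurecli-interactive
# Returns 'Succeeded' once the private endpoint is provisioned.
az network private-endpoint show \
    --name myPrivateEndpoint \
    --resource-group CreatePrivateEndpointQS-rg \
    --query provisioningState \
    --output tsv
```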
## Configure the private DNS zone
-In this section, you'll create and configure the private DNS zone using [az network private-dns zone create](/cli/azure/network/private-dns/zone#az_network_private_dns_zone_create).
+Next, create and configure the private DNS zone by using [az network private-dns zone create](/cli/azure/network/private-dns/zone#az_network_private_dns_zone_create).
-You'll use [az network private-dns link vnet create](/cli/azure/network/private-dns/link/vnet#az_network_private_dns_link_vnet_create) to create the virtual network link to the dns zone.
+1. Create the virtual network link to the DNS zone by using [az network private-dns link vnet create](/cli/azure/network/private-dns/link/vnet#az_network_private_dns_link_vnet_create).
-You'll create a dns zone group with [az network private-endpoint dns-zone-group create](/cli/azure/network/private-endpoint/dns-zone-group#az_network_private_endpoint_dns_zone_group_create).
+1. Create a DNS zone group by using [az network private-endpoint dns-zone-group create](/cli/azure/network/private-endpoint/dns-zone-group#az_network_private_endpoint_dns_zone_group_create).
-* Zone named **privatelink.azurewebsites.net**
-* In virtual network **myVNet**.
-* In resource group **CreatePrivateEndpointQS-rg**.
-* DNS link named **myDNSLink**.
-* Associated with **myPrivateEndpoint**.
-* Zone group named **MyZoneGroup**.
+ * Zone name: **privatelink.azurewebsites.net**
+ * Virtual network: **myVNet**
+ * Resource group: **CreatePrivateEndpointQS-rg**
+ * DNS link name: **myDNSLink**
+ * Endpoint name: **myPrivateEndpoint**
+ * Zone group name: **MyZoneGroup**
-```azurecli-interactive
-az network private-dns zone create \
- --resource-group CreatePrivateEndpointQS-rg \
- --name "privatelink.azurewebsites.net"
+ ```azurecli-interactive
+ az network private-dns zone create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --name "privatelink.azurewebsites.net"
+
+ az network private-dns link vnet create \
+ --resource-group CreatePrivateEndpointQS-rg \
+ --zone-name "privatelink.azurewebsites.net" \
+ --name MyDNSLink \
+ --virtual-network myVNet \
+ --registration-enabled false
-az network private-dns link vnet create \
+ az network private-endpoint dns-zone-group create \
--resource-group CreatePrivateEndpointQS-rg \
- --zone-name "privatelink.azurewebsites.net" \
- --name MyDNSLink \
- --virtual-network myVNet \
- --registration-enabled false
-
-az network private-endpoint dns-zone-group create \
- --resource-group CreatePrivateEndpointQS-rg \
- --endpoint-name myPrivateEndpoint \
- --name MyZoneGroup \
- --private-dns-zone "privatelink.azurewebsites.net" \
- --zone-name webapp
-```
+ --endpoint-name myPrivateEndpoint \
+ --name MyZoneGroup \
+ --private-dns-zone "privatelink.azurewebsites.net" \
+ --zone-name webapp
+ ```
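To optionally verify that the zone group registered the endpoint's A record, a hedged check of the zone's record sets:

```azurecli-interactive
# List the A records in the private DNS zone; the web app's record
# should appear with the endpoint's private IP address.
az network private-dns record-set a list \
    --resource-group CreatePrivateEndpointQS-rg \
    --zone-name "privatelink.azurewebsites.net" \
    --output table
```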
-## Test connectivity to private endpoint
+## Test connectivity to the private endpoint
-In this section, you'll use the virtual machine you created in the previous step to connect to the SQL server across the private endpoint.
+Finally, use the VM that you created earlier to connect to the web app across the private endpoint.
-1. Sign in to the [Azure portal](https://portal.azure.com)
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Resource groups** in the left-hand navigation pane.
+1. On the left pane, select **Resource groups**.
-3. Select **CreatePrivateEndpointQS-rg**.
+1. Select **CreatePrivateEndpointQS-rg**.
-4. Select **myVM**.
+1. Select **myVM**.
-5. On the overview page for **myVM**, select **Connect** then **Bastion**.
+1. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
-6. Select the blue **Use Bastion** button.
+1. Select the blue **Use Bastion** button.
-7. Enter the username and password that you entered during the virtual machine creation.
+1. Enter the username and password that you used when you created the VM.
-8. Open Windows PowerShell on the server after you connect.
+1. After you've connected, open PowerShell on the server.
-9. Enter `nslookup <your-webapp-name>.azurewebsites.net`. Replace **\<your-webapp-name>** with the name of the web app you created in the previous steps. You'll receive a message similar to what is displayed below:
+1. Enter `nslookup <your-webapp-name>.azurewebsites.net`, replacing *\<your-webapp-name>* with the name of the web app that you created earlier. You'll receive a message that's similar to the following:
```powershell Server: UnKnown
In this section, you'll use the virtual machine you created in the previous step
Aliases: mywebapp8675.azurewebsites.net ```
- A private IP address of **10.0.0.5** is returned for the web app name. This address is in the subnet of the virtual network you created previously.
+ A private IP address of *10.0.0.5* is returned for the web app name. This address is in the subnet of the virtual network that you created earlier.
-10. In the bastion connection to **myVM**, open Internet Explorer.
+1. In the bastion connection to **myVM**, open your web browser.
-11. Enter the url of your web app, **https://\<your-webapp-name>.azurewebsites.net**.
+1. Enter the URL of your web app, *https://\<your-webapp-name>.azurewebsites.net*.
-12. You'll receive the default web app page if your application hasn't been deployed:
+ If your web app hasn't been deployed, you'll get the following default web app page:
- :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Default web app page." border="true":::
+ :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
-13. Close the connection to **myVM**.
+1. Close the connection to **myVM**.
## Clean up resources
-When you're done using the private endpoint and the VM, use [az group delete](/cli/azure/group#az_group_delete) to remove the resource group and all the resources it has:
+
+When you're done using the private endpoint and the VM, use [az group delete](/cli/azure/group#az_group_delete) to remove the resource group and all the resources within it:
```azurecli-interactive
az group delete \
    --name CreatePrivateEndpointQS-rg
```
-## Next steps
+## What you've learned
+
+In this quickstart, you created:
-In this quickstart, you created a:
+* A virtual network and bastion host
+* A virtual machine
+* A private endpoint for an Azure web app
-* Virtual network and bastion host.
-* Virtual machine.
-* Private endpoint for an Azure Web App.
+You used the VM to securely test connectivity to the web app across the private endpoint.
-You used the virtual machine to test connectivity securely to the web app across the private endpoint.
+## Next steps
-For more information on the services that support a private endpoint, see:
+For more information about the services that support private endpoints, see:
> [!div class="nextstepaction"]
-> [Private Link availability](private-link-overview.md#availability)
+> [What is Azure Private Link?](private-link-overview.md#availability)
private-link Create Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-portal.md
Title: 'Quickstart - Create a Private Endpoint using the Azure portal'
-description: Use this quickstart to learn how to create a Private Endpoint using the Azure portal.
+ Title: 'Quickstart: Create a private endpoint by using the Azure portal'
+description: In this quickstart, you'll learn how to create a private endpoint by using the Azure portal.
Last updated 10/20/2020
-#Customer intent: As someone with a basic network background, but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
+#Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
-# Quickstart: Create a Private Endpoint using the Azure portal
+# Quickstart: Create a private endpoint by using the Azure portal
-Get started with Azure Private Link by using a Private Endpoint to connect securely to an Azure web app.
+Get started with Azure Private Link by creating and using a private endpoint to connect securely to an Azure web app.
-In this quickstart, you'll create a private endpoint for an Azure web app and deploy a virtual machine to test the private connection.
+In this quickstart, you'll create a private endpoint for an Azure web app and then create and deploy a virtual machine (VM) to test the private connection.
-Private endpoints can be created for different kinds of Azure services, such as Azure SQL and Azure Storage.
+You can create private endpoints for a variety of Azure services, such as Azure SQL and Azure Storage.
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An Azure Web App with a **PremiumV2-tier** or higher app service plan deployed in your Azure subscription.
- * For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
- * For a detailed tutorial on creating a web app and an endpoint, see [Tutorial: Connect to a web app using an Azure Private Endpoint](tutorial-private-endpoint-webapp-portal.md).
+* An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Sign in to Azure
+* An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
-Sign in to the Azure portal at https://portal.azure.com.
+ For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+
+ For a detailed tutorial on creating a web app and an endpoint, see [Tutorial: Connect to a web app by using a private endpoint](tutorial-private-endpoint-webapp-portal.md).
## Create a virtual network and bastion host
-In this section, you'll create a virtual network, subnet, and bastion host.
+Start by creating a virtual network, subnet, and bastion host.
+
+You use the bastion host to connect securely to the VM for testing the private endpoint.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
-The bastion host will be used to connect securely to the virtual machine for testing the private endpoint.
+1. At the upper left, select **Create a resource**.
-1. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+1. On the left pane, select **Networking**, and then select **Virtual network**.
-2. In **Create virtual network**, enter or select this information in the **Basics** tab:
+1. On the **Create virtual network** pane, select the **Basics** tab, and then enter the following values:
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **CreatePrivateEndpointQS-rg** |
- | **Instance details** | |
- | Name | Enter **myVNet** |
+ | Setting | Value |
+ | | |
+ | **Project&nbsp;details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **CreatePrivateEndpointQS-rg**. |
+ | **Instance&nbsp;details** | |
+ | Name | Enter **myVNet**. |
| Region | Select **West Europe**.|
-3. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+1. Select the **IP Addresses** tab.
-4. In the **IP Addresses** tab, enter this information:
+1. On the **IP Addresses** pane, enter this value:
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
+ | Setting | Value |
+ | | |
+ | IPv4 address space | Enter **10.1.0.0/16**. |
-5. Under **Subnet name**, select the word **default**.
+1. Under **Subnet name**, select the **default** link.
-6. In **Edit subnet**, enter this information:
+1. On the **Edit subnet** right pane, enter these values:
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **mySubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
+ | Setting | Value |
+ | | |
+ | Subnet name | Enter **mySubnet**. |
+ | Subnet address range | Enter **10.1.0.0/24**. |
-7. Select **Save**.
+1. Select **Save**.
-8. Select the **Security** tab.
+1. Select the **Security** tab.
-9. Under **BastionHost**, select **Enable**. Enter this information:
+1. For **BastionHost**, select **Enable**, and then enter these values:
| Setting | Value |
|--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+ | Bastion name | Enter **myBastionHost**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/24**. |
+ | Public IP Address | Select **Create new**. For **Name**, enter **myBastionIP**, and then select **OK**. |
+1. Select the **Review + create** tab.
-8. Select the **Review + create** tab or select the **Review + create** button.
+1. Select **Create**.
-9. Select **Create**.
+## Create a test virtual machine
-## Create a virtual machine
+Next, create a VM that you can use to test the private endpoint.
-In this section, you'll create a virtual machine that will be used to test the private endpoint.
+1. In the Azure portal, select **Create a resource**.
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine** or search for **Virtual machine** in the search box.
+1. On the left pane, select **Compute**, and then select **Virtual machine**.
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **CreatePrivateEndpointQS-rg** |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM** |
+1. On the **Create a virtual machine** pane, select the **Basics** tab, and then enter the following values:
+
+ | Setting | Value |
+ | | |
+ | **Project&nbsp;details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **CreatePrivateEndpointQS-rg**. |
+ | **Instance&nbsp;details** | |
+ | Virtual machine name | Enter **myVM**. |
| Region | Select **West Europe**. |
- | Availability Options | Select **No infrastructure redundancy required** |
- | Image | Select **Windows Server 2019 Datacenter - Gen1** |
- | Azure Spot instance | Select **No** |
- | Size | Choose VM size or take default setting |
- | **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
-
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen1**. |
+ | Azure Spot instance | Clear the checkbox. |
+ | Size | Select the VM size or use the default setting. |
+ | **Administrator&nbsp;account** | |
+ | Authentication type | Select **Password**. |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter the password. |
+
+1. Select the **Networking** tab.
-4. In the Networking tab, select or enter:
+1. On the **Networking** pane, enter the following values:
| Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | **myVNet** |
- | Subnet | **mySubnet** |
+ | | |
+ | **Network&nbsp;interface** | |
+ | Virtual network | Enter **myVNet**. |
+ | Subnet | Enter **mySubnet**. |
| Public IP | Select **None**. |
- | NIC network security group | **Basic**|
+ | NIC network security group | Select **Basic**. |
| Public inbound ports | Select **None**. |
-5. Select **Review + create**.
+1. Select **Review + create**.
-6. Review the settings, and then select **Create**.
+1. Review the settings, and then select **Create**.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
-## Create a Private Endpoint
+## Create a private endpoint
+
+Next, you create a private endpoint for the web app that you created in the "Prerequisites" section.
-In this section, you'll create a Private Endpoint for the web app you created in the prerequisites section.
+1. In the Azure portal, select **Create a resource**.
-1. On the upper-left side of the screen in the portal, select **Create a resource** > **Networking** > **Private Link**, or in the search box enter **Private Link**.
+1. On the left pane, select **Networking**, and then select **Private Link**. You might have to search for **Private Link** and then select it in the search results.
-2. Select **Create**.
+1. On the **Private Link** page, select **Create**.
-3. In **Private Link Center**, select **Private endpoints** in the left-hand menu.
+1. In **Private Link Center**, on the left pane, select **Private endpoints**.
-4. In **Private endpoints**, select **+ Add**.
+1. On the **Private endpoints** pane, select **Create**.
-5. In the **Basics** tab of **Create a private endpoint**, enter, or select this information:
+1. On the **Create a private endpoint** pane, select the **Basics** tab, and then enter the following values:
| Setting | Value |
| - | -- |
- | **Project details** | |
+ | **Project&nbsp;details** | |
| Subscription | Select your subscription. |
- | Resource group | Select **CreatePrivateEndpointQS-rg**. You created this resource group in the previous section.|
- | **Instance details** | |
+ | Resource group | Select **CreatePrivateEndpointQS-rg**. You created this resource group in an earlier section.|
+ | **Instance&nbsp;details** | |
+ | Name | Enter **myPrivateEndpoint**. |
| Region | Select **West Europe**. |
-6. Select the **Resource** tab or the **Next: Resource** button at the bottom of the page.
+1. Select the **Resource** tab.
-7. In **Resource**, enter or select this information:
+1. On the **Resource** pane, enter the following values:
| Setting | Value |
| - | -- |
| Connection method | Select **Connect to an Azure resource in my directory**. |
| Subscription | Select your subscription. |
| Resource type | Select **Microsoft.Web/sites**. |
- | Resource | Select **\<your-web-app-name>**. </br> Select the name of the web app you created in the prerequisites. |
+ | Resource | Select **\<your-web-app-name>**. </br> Select the name of the web app that you created in the "Prerequisites" section. |
| Target sub-resource | Select **sites**. |
-8. Select the **Configuration** tab or the **Next: Configuration** button at the bottom of the screen.
+1. Select the **Configuration** tab.
-9. In **Configuration**, enter or select this information:
+1. On the **Configuration** pane, enter the following values:
| Setting | Value |
| - | -- |
| **Networking** | |
| Virtual network | Select **myVNet**. |
| Subnet | Select **mySubnet**. |
- | **Private DNS integration** | |
- | Integrate with private DNS zone | Leave the default of **Yes**. |
+ | **Private&nbsp;DNS&nbsp;integration** | |
+ | Integrate with private DNS zone | Keep the default of **Yes**. |
| Subscription | Select your subscription. |
- | Private DNS zones | Leave the default of **(New) privatelink.azurewebsites.net**.
+ | Private DNS zones | Keep the default of **(New) privatelink.azurewebsites.net**.
-13. Select **Review + create**.
+1. Select **Review + create**.
-14. Select **Create**.
+1. Select **Create**.
-## Test connectivity to private endpoint
+## Test connectivity to the private endpoint
-In this section, you'll use the virtual machine you created in the previous step to connect to the web app across the private endpoint.
+Use the VM that you created earlier to connect to the web app across the private endpoint.
-1. Select **Resource groups** in the left-hand navigation pane.
+1. In the Azure portal, on the left pane, select **Resource groups**.
-2. Select **CreatePrivateEndpointQS-rg**.
+1. Select **CreatePrivateEndpointQS-rg**.
-3. Select **myVM**.
+1. Select **myVM**.
-4. On the overview page for **myVM**, select **Connect** then **Bastion**.
+1. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
-5. Select the blue **Use Bastion** button.
+1. Select the blue **Use Bastion** button.
-6. Enter the username and password that you entered during the virtual machine creation.
+1. Enter the username and password that you used when you created the VM.
-7. Open Windows PowerShell on the server after you connect.
+1. After you've connected, open PowerShell on the server.
-8. Enter `nslookup <your-webapp-name>.azurewebsites.net`. Replace **\<your-webapp-name>** with the name of the web app you created in the previous steps. You'll receive a message similar to what is displayed below:
+1. Enter `nslookup <your-webapp-name>.azurewebsites.net`, replacing *\<your-webapp-name>* with the name of the web app that you created earlier. You'll receive a message that's similar to the following:
```powershell Server: UnKnown
In this section, you'll use the virtual machine you created in the previous step
Aliases: mywebapp8675.azurewebsites.net ```
- A private IP address of **10.1.0.5** is returned for the web app name. This address is in the subnet of the virtual network you created previously.
+ A private IP address of **10.1.0.5** is returned for the web app name. This address is in the subnet of the virtual network you created earlier.
-11. In the bastion connection to **myVM**, open Internet Explorer.
+1. In the bastion connection to **myVM**, open your web browser.
-12. Enter the url of your web app, **https://\<your-webapp-name>.azurewebsites.net**.
+1. Enter the URL of your web app, **https://\<your-webapp-name>.azurewebsites.net**.
-13. You'll receive the default web app page if your application hasn't been deployed:
+ If your web app hasn't been deployed, you'll get the following default web app page:
- :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Default web app page." border="true":::
+ :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
-18. Close the connection to **myVM**.
+1. Close the connection to **myVM**.
## Clean up resources
-If you're not going to continue to use this application, delete the virtual network, virtual machine, and web app with the following steps:
+If you're not going to continue to use this web app, delete the virtual network, virtual machine, and web app by doing the following:
-1. From the left-hand menu, select **Resource groups**.
+1. On the left pane, select **Resource groups**.
-2. Select **CreatePrivateEndpointQS-rg**.
+1. Select **CreatePrivateEndpointQS-rg**.
-3. Select **Delete resource group**.
+1. Select **Delete resource group**.
-4. Enter **CreatePrivateEndpointQS-rg** in **TYPE THE RESOURCE GROUP NAME**.
+1. Under **Type the resource group name**, enter **CreatePrivateEndpointQS-rg**.
-5. Select **Delete**.
+1. Select **Delete**.
+## What you've learned
-## Next steps
-
-In this quickstart, you created a:
+In this quickstart, you created:
-* Virtual network and bastion host.
-* Virtual machine.
-* Private endpoint for an Azure Web App.
-
-You used the virtual machine to test connectivity securely to the web app across the private endpoint.
+* A virtual network and bastion host
+* A virtual machine
+* A private endpoint for an Azure web app
+You used the VM to test connectivity to the web app across the private endpoint.
+## Next steps
-For more information on the services that support a private endpoint, see:
+For more information about the services that support private endpoints, see:
> [!div class="nextstepaction"]
-> [Private Link availability](private-link-overview.md#availability)
+> [What is Azure Private Link?](private-link-overview.md#availability)
private-link Create Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-powershell.md
Title: 'Quickstart - Create an Azure Private Endpoint using Azure PowerShell'
-description: Use this quickstart to learn how to create a Private Endpoint using Azure PowerShell.
+ Title: 'Quickstart: Create a private endpoint by using Azure PowerShell'
+description: In this quickstart, you'll learn how to create a private endpoint by using Azure PowerShell.
Last updated 11/02/2020
-#Customer intent: As someone with a basic network background, but is new to Azure, I want to create an Azure private endpoint
+#Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using Azure PowerShell.
-# Quickstart: Create an Azure Private Endpoint using Azure PowerShell
+# Quickstart: Create a private endpoint by using Azure PowerShell
-Get started with Azure Private Link by using a Private Endpoint to connect securely to an Azure web app.
+Get started with Azure Private Link by using a private endpoint to connect securely to an Azure web app.
-In this quickstart, you'll create a private endpoint for an Azure web app and deploy a virtual machine to test the private connection.
+In this quickstart, you'll create a private endpoint for an Azure web app and then create and deploy a virtual machine (VM) to test the private connection.
-Private endpoints can be created for different kinds of Azure services, such as Azure SQL and Azure Storage.
+You can create private endpoints for a variety of Azure services, such as Azure SQL and Azure Storage.
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An Azure Web App with a **PremiumV2-tier** or higher app service plan deployed in your Azure subscription.
- * For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
- * For a detailed tutorial on creating a web app and an endpoint, see [Tutorial: Connect to a web app using an Azure Private Endpoint](tutorial-private-endpoint-webapp-portal.md).
+* An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+* An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
+
+ For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+
+ For a detailed tutorial on creating a web app and an endpoint, see [Tutorial: Connect to a web app by using a private endpoint](tutorial-private-endpoint-webapp-portal.md).
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
-An Azure resource group is a logical container into which Azure resources are deployed and managed.
+An Azure resource group is a logical container where Azure resources are deployed and managed.
Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
New-AzResourceGroup -Name 'CreatePrivateEndpointQS-rg' -Location 'eastus'
## Create a virtual network and bastion host
-In this section, you'll create a virtual network, subnet, and bastion host.
+First, you'll create a virtual network, subnet, and bastion host.
-The bastion host will be used to connect securely to the virtual machine for testing the private endpoint.
+You'll use the bastion host to connect securely to the VM for testing the private endpoint.
-Create a virtual network and bastion host with:
+1. Create a virtual network and bastion host with:
-* [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
-* [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)
-* [New-AzBastion](/powershell/module/az.network/new-azbastion)
+ * [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork)
+ * [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)
+ * [New-AzBastion](/powershell/module/az.network/new-azbastion)
-```azurepowershell-interactive
-## Create backend subnet config. ##
-$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name myBackendSubnet -AddressPrefix 10.0.0.0/24
-
-## Create Azure Bastion subnet. ##
-$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.1.0/24
-
-## Create the virtual network. ##
-$parameters1 = @{
- Name = 'MyVNet'
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Location = 'eastus'
- AddressPrefix = '10.0.0.0/16'
- Subnet = $subnetConfig, $bastsubnetConfig
-}
-$vnet = New-AzVirtualNetwork @parameters1
-
-## Create public IP address for bastion host. ##
-$parameters2 = @{
- Name = 'myBastionIP'
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Location = 'eastus'
- Sku = 'Standard'
- AllocationMethod = 'Static'
-}
-$publicip = New-AzPublicIpAddress @parameters2
-
-## Create bastion host ##
-$parameters3 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Name = 'myBastion'
- PublicIpAddress = $publicip
- VirtualNetwork = $vnet
-}
-New-AzBastion @parameters3
-```
+1. Configure the back-end subnet.
+
+ ```azurepowershell-interactive
+ $subnetConfig = New-AzVirtualNetworkSubnetConfig -Name myBackendSubnet -AddressPrefix 10.0.0.0/24
+ ```
+
+1. Create the Azure Bastion subnet:
+
+ ```azurepowershell-interactive
+ $bastsubnetConfig = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.1.0/24
+ ```
+
+1. Create the virtual network:
+
+ ```azurepowershell-interactive
+ $parameters1 = @{
+ Name = 'MyVNet'
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Location = 'eastus'
+ AddressPrefix = '10.0.0.0/16'
+ Subnet = $subnetConfig, $bastsubnetConfig
+ }
+ $vnet = New-AzVirtualNetwork @parameters1
+ ```
+
+1. Create the public IP address for the bastion host:
+
+ ```azurepowershell-interactive
+ $parameters2 = @{
+ Name = 'myBastionIP'
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Location = 'eastus'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+ }
+ $publicip = New-AzPublicIpAddress @parameters2
+ ```
+
+1. Create the bastion host:
+
+ ```azurepowershell-interactive
+ $parameters3 = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Name = 'myBastion'
+ PublicIpAddress = $publicip
+ VirtualNetwork = $vnet
+ }
+ New-AzBastion @parameters3
+ ```
It can take a few minutes for the Azure Bastion host to deploy.
-## Create test virtual machine
+## Create a test virtual machine
-In this section, you'll create a virtual machine that will be used to test the private endpoint.
+Next, create a VM that you can use to test the private endpoint.
-Create the virtual machine with:
+1. Create the VM by using:
- * [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)
- * [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
- * [New-AzVM](/powershell/module/az.compute/new-azvm)
- * [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
- * [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
- * [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
- * [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+ * [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential)
+ * [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface)
+ * [New-AzVM](/powershell/module/az.compute/new-azvm)
+ * [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
+ * [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
+ * [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
+ * [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
-```azurepowershell-interactive
-## Set credentials for server admin and password. ##
-$cred = Get-Credential
-
-## Command to get virtual network configuration. ##
-$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName CreatePrivateEndpointQS-rg
-
-## Command to create network interface for VM ##
-$parameters1 = @{
- Name = 'myNicVM'
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Location = 'eastus'
- Subnet = $vnet.Subnets[0]
-}
-$nicVM = New-AzNetworkInterface @parameters1
-
-## Create a virtual machine configuration.##
-$parameters2 = @{
- VMName = 'myVM'
- VMSize = 'Standard_DS1_v2'
-}
-$parameters3 = @{
- ComputerName = 'myVM'
- Credential = $cred
-}
-$parameters4 = @{
- PublisherName = 'MicrosoftWindowsServer'
- Offer = 'WindowsServer'
- Skus = '2019-Datacenter'
- Version = 'latest'
-}
-$vmConfig =
-New-AzVMConfig @parameters2 | Set-AzVMOperatingSystem -Windows @parameters3 | Set-AzVMSourceImage @parameters4 | Add-AzVMNetworkInterface -Id $nicVM.Id
-
-## Create the virtual machine ##
-New-AzVM -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Location 'eastus' -VM $vmConfig
-```
+1. Get the server admin credentials and password:
+
+ ```azurepowershell-interactive
+ $cred = Get-Credential
+ ```
+
+1. Get the virtual network configuration:
+
+ ```azurepowershell-interactive
+ $vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName CreatePrivateEndpointQS-rg
+ ```
+
+1. Create a network interface for the VM:
+
+ ```azurepowershell-interactive
+ $parameters1 = @{
+ Name = 'myNicVM'
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ }
+ $nicVM = New-AzNetworkInterface @parameters1
+ ```
+
+1. Configure the VM:
+
+ ```azurepowershell-interactive
+ $parameters2 = @{
+ VMName = 'myVM'
+ VMSize = 'Standard_DS1_v2'
+ }
+ $parameters3 = @{
+ ComputerName = 'myVM'
+ Credential = $cred
+ }
+ $parameters4 = @{
+ PublisherName = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Skus = '2019-Datacenter'
+ Version = 'latest'
+ }
+ $vmConfig =
+ New-AzVMConfig @parameters2 | Set-AzVMOperatingSystem -Windows @parameters3 | Set-AzVMSourceImage @parameters4 | Add-AzVMNetworkInterface -Id $nicVM.Id
+ ```
+
+1. Create the VM:
+
+ ```azurepowershell-interactive
+ New-AzVM -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Location 'eastus' -VM $vmConfig
+ ```
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
-## Create private endpoint
+## Create a private endpoint
-In this section, you'll create the private endpoint and connection using:
+1. Create a private endpoint and connection by using:
-* [New-AzPrivateLinkServiceConnection](/powershell/module/az.network/New-AzPrivateLinkServiceConnection)
-* [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint)
+ * [New-AzPrivateLinkServiceConnection](/powershell/module/az.network/New-AzPrivateLinkServiceConnection)
+ * [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint)
-```azurepowershell-interactive
-## Place web app into variable. Replace <webapp-resource-group-name> with the resource group of your webapp. ##
-## Replace <your-webapp-name> with your webapp name ##
-$webapp = Get-AzWebApp -ResourceGroupName <webapp-resource-group-name> -Name <your-webapp-name>
-
-## Create private endpoint connection. ##
-$parameters1 = @{
- Name = 'myConnection'
- PrivateLinkServiceId = $webapp.ID
- GroupID = 'sites'
-}
-$privateEndpointConnection = New-AzPrivateLinkServiceConnection @parameters1
-
-## Place virtual network into variable. ##
-$vnet = Get-AzVirtualNetwork -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Name 'myVNet'
-
-## Disable private endpoint network policy ##
-$vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled"
-$vnet | Set-AzVirtualNetwork
-
-## Create private endpoint
-$parameters2 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Name = 'myPrivateEndpoint'
- Location = 'eastus'
- Subnet = $vnet.Subnets[0]
- PrivateLinkServiceConnection = $privateEndpointConnection
-}
-New-AzPrivateEndpoint @parameters2
-```
+1. Place the web app into a variable. Replace \<webapp-resource-group-name> with the resource group name of your web app, and replace \<your-webapp-name> with your web app name.
+
+ ```azurepowershell-interactive
+ $webapp = Get-AzWebApp -ResourceGroupName <webapp-resource-group-name> -Name <your-webapp-name>
+ ```
+
+1. Create the private endpoint connection:
+
+ ```azurepowershell-interactive
+ $parameters1 = @{
+ Name = 'myConnection'
+ PrivateLinkServiceId = $webapp.ID
+ GroupID = 'sites'
+ }
+ $privateEndpointConnection = New-AzPrivateLinkServiceConnection @parameters1
+ ```
+
+1. Place the virtual network into a variable:
+
+ ```azurepowershell-interactive
+ $vnet = Get-AzVirtualNetwork -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Name 'myVNet'
+ ```
+
+1. Disable the private endpoint network policy:
+
+ ```azurepowershell-interactive
+ $vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled"
+ $vnet | Set-AzVirtualNetwork
+ ```
+
+1. Create the private endpoint:
+
+ ```azurepowershell-interactive
+ $parameters2 = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Name = 'myPrivateEndpoint'
+ Location = 'eastus'
+ Subnet = $vnet.Subnets[0]
+ PrivateLinkServiceConnection = $privateEndpointConnection
+ }
+ New-AzPrivateEndpoint @parameters2
+ ```
## Configure the private DNS zone
-In this section you'll create and configure the private DNS zone using:
+1. Create and configure the private DNS zone by using:
-* [New-AzPrivateDnsZone](/powershell/module/az.privatedns/new-azprivatednszone)
-* [New-AzPrivateDnsVirtualNetworkLink](/powershell/module/az.privatedns/new-azprivatednsvirtualnetworklink)
-* [New-AzPrivateDnsZoneConfig](/powershell/module/az.network/new-azprivatednszoneconfig)
-* [New-AzPrivateDnsZoneGroup](/powershell/module/az.network/new-azprivatednszonegroup)
+ * [New-AzPrivateDnsZone](/powershell/module/az.privatedns/new-azprivatednszone)
+ * [New-AzPrivateDnsVirtualNetworkLink](/powershell/module/az.privatedns/new-azprivatednsvirtualnetworklink)
+ * [New-AzPrivateDnsZoneConfig](/powershell/module/az.network/new-azprivatednszoneconfig)
+ * [New-AzPrivateDnsZoneGroup](/powershell/module/az.network/new-azprivatednszonegroup)
-```azurepowershell-interactive
-## Place virtual network into variable. ##
-$vnet = Get-AzVirtualNetwork -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Name 'myVNet'
-
-## Create private dns zone. ##
-$parameters1 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- Name = 'privatelink.azurewebsites.net'
-}
-$zone = New-AzPrivateDnsZone @parameters1
-
-## Create dns network link. ##
-$parameters2 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- ZoneName = 'privatelink.azurewebsites.net'
- Name = 'myLink'
- VirtualNetworkId = $vnet.Id
-}
-$link = New-AzPrivateDnsVirtualNetworkLink @parameters2
-
-## Create DNS configuration ##
-$parameters3 = @{
- Name = 'privatelink.azurewebsites.net'
- PrivateDnsZoneId = $zone.ResourceId
-}
-$config = New-AzPrivateDnsZoneConfig @parameters3
-
-## Create DNS zone group. ##
-$parameters4 = @{
- ResourceGroupName = 'CreatePrivateEndpointQS-rg'
- PrivateEndpointName = 'myPrivateEndpoint'
- Name = 'myZoneGroup'
- PrivateDnsZoneConfig = $config
-}
-New-AzPrivateDnsZoneGroup @parameters4
-```
+1. Place the virtual network into a variable:
+
+ ```azurepowershell-interactive
+ $vnet = Get-AzVirtualNetwork -ResourceGroupName 'CreatePrivateEndpointQS-rg' -Name 'myVNet'
+ ```
+
+1. Create the private DNS zone:
+
+ ```azurepowershell-interactive
+ $parameters1 = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ Name = 'privatelink.azurewebsites.net'
+ }
+ $zone = New-AzPrivateDnsZone @parameters1
+ ```
+
+1. Create a DNS network link:
+
+ ```azurepowershell-interactive
+ $parameters2 = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ ZoneName = 'privatelink.azurewebsites.net'
+ Name = 'myLink'
+ VirtualNetworkId = $vnet.Id
+ }
+ $link = New-AzPrivateDnsVirtualNetworkLink @parameters2
+ ```
+
+1. Configure the DNS zone:
+ ```azurepowershell-interactive
+ $parameters3 = @{
+ Name = 'privatelink.azurewebsites.net'
+ PrivateDnsZoneId = $zone.ResourceId
+ }
+ $config = New-AzPrivateDnsZoneConfig @parameters3
+ ```
+
+1. Create the DNS zone group:
+
+ ```azurepowershell-interactive
+ $parameters4 = @{
+ ResourceGroupName = 'CreatePrivateEndpointQS-rg'
+ PrivateEndpointName = 'myPrivateEndpoint'
+ Name = 'myZoneGroup'
+ PrivateDnsZoneConfig = $config
+ }
+ New-AzPrivateDnsZoneGroup @parameters4
+ ```
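To double-check the DNS wiring (an optional verification that isn't part of the original steps), you can list the zone group that was just attached to the endpoint. A sketch assuming the same names:

```azurepowershell-interactive
# Confirm the private DNS zone group is associated with the private endpoint.
Get-AzPrivateDnsZoneGroup -ResourceGroupName 'CreatePrivateEndpointQS-rg' -PrivateEndpointName 'myPrivateEndpoint'
```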
-## Test connectivity to private endpoint

-In this section, you'll use the virtual machine you created in the previous step to connect to the SQL server across the private endpoint.
+## Test connectivity with the private endpoint
-1. Sign in to the [Azure portal](https://portal.azure.com)
+Finally, use the VM you created in the previous step to connect to the web app across the private endpoint.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Resource groups** in the left-hand navigation pane.
+1. On the left pane, select **Resource groups**.
-3. Select **CreatePrivateEndpointQS-rg**.
+1. Select **CreatePrivateEndpointQS-rg**.
-4. Select **myVM**.
+1. Select **myVM**.
-5. On the overview page for **myVM**, select **Connect** then **Bastion**.
+1. On the overview page for **myVM**, select **Connect**, and then select **Bastion**.
-6. Select the blue **Use Bastion** button.
+1. Select the blue **Use Bastion** button.
-7. Enter the username and password that you entered during the virtual machine creation.
+1. Enter the username and password that you used when you created the VM.
-8. Open Windows PowerShell on the server after you connect.
+1. After you've connected, open PowerShell on the server.
-9. Enter `nslookup <your-webapp-name>.azurewebsites.net`. Replace **\<your-webapp-name>** with the name of the web app you created in the previous steps. You'll receive a message similar to what is displayed below:
+1. Enter `nslookup <your-webapp-name>.azurewebsites.net`. Replace **\<your-webapp-name>** with the name of the web app that you created earlier. You'll receive a message that's similar to the following:
    ```powershell
    Server:  UnKnown
    Address:  168.63.129.16

    Non-authoritative answer:
    Name:    mywebapp8675.privatelink.azurewebsites.net
    Address:  10.0.0.5
    Aliases:  mywebapp8675.azurewebsites.net
    ```
- A private IP address of **10.0.0.5** is returned for the web app name. This address is in the subnet of the virtual network you created previously.
+ A private IP address of *10.0.0.5* is returned for the web app name. This address is in the subnet of the virtual network that you created earlier.
-10. In the bastion connection to **myVM**, open Internet Explorer.
+1. In the bastion connection to **myVM**, open your web browser.
-11. Enter the url of your web app, **https://\<your-webapp-name>.azurewebsites.net**.
+1. Enter the URL of your web app, **https://\<your-webapp-name>.azurewebsites.net**.
-12. You'll receive the default web app page if your application hasn't been deployed:
+ If your web app hasn't been deployed, you'll get the following default web app page:
- :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Default web app page." border="true":::
+ :::image type="content" source="./media/create-private-endpoint-portal/web-app-default-page.png" alt-text="Screenshot of the default web app page on a browser." border="true":::
-13. Close the connection to **myVM**.
+1. Close the connection to **myVM**.
## Clean up resources
-When you're done using the private endpoint and the VM, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all the resources it has:
+When you're done using the private endpoint and the VM, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all the resources within it:
```azurepowershell-interactive
Remove-AzResourceGroup -Name CreatePrivateEndpointQS-rg -Force
```
-## Next steps
+## What you've learned
-In this quickstart, you created a:
+In this quickstart, you created:
-* Virtual network and bastion host.
-* Virtual machine.
-* Private endpoint for an Azure Web App.
+* A virtual network and bastion host
+* A virtual machine
+* A private endpoint for an Azure web app
-You used the virtual machine to test connectivity securely to the web app across the private endpoint.
+You used the VM to securely test connectivity to the web app across the private endpoint.
+
+## Next steps
-For more information on the services that support a private endpoint, see:
+For more information about the services that support private endpoints, see:
> [!div class="nextstepaction"]
-> [Private Link availability](private-link-overview.md#availability)
+> [What is Azure Private Link?](private-link-overview.md#availability)
private-link Create Private Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-template.md
Title: 'Quickstart - Create a private endpoint by using an ARM template'
-description: In this quickstart, you use an Azure Resource Manager template (ARM template) to create a private endpoint.
+ Title: 'Quickstart: Create a private endpoint by using an ARM template'
+description: In this quickstart, you'll learn how to create a private endpoint by using an Azure Resource Manager template (ARM template).
Last updated 05/26/2020
+#Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using an ARM template.
# Quickstart: Create a private endpoint by using an ARM template
-In this quickstart, you use an Azure Resource Manager template (ARM template) to create a private endpoint.
+In this quickstart, you'll use an Azure Resource Manager template (ARM template) to create a private endpoint.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-You can also complete this quickstart by using the [Azure portal](create-private-endpoint-portal.md), [Azure PowerShell](create-private-endpoint-powershell.md), or the [Azure CLI](create-private-endpoint-cli.md).
+You can also create a private endpoint by using the [Azure portal](create-private-endpoint-portal.md), [Azure PowerShell](create-private-endpoint-powershell.md), or the [Azure CLI](create-private-endpoint-cli.md).
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button here. The ARM template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fprivate-endpoint-sql%2Fazuredeploy.json)
+[![The 'Deploy to Azure' button.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fprivate-endpoint-sql%2Fazuredeploy.json)
## Prerequisites
-You need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+You need an Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Review the template

This template creates a private endpoint for an instance of Azure SQL Database.
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/private-endpoint-sql/).
+The template that this quickstart uses is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/private-endpoint-sql/).
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.sql/private-endpoint-sql/azuredeploy.json":::
-Multiple Azure resources are defined in the template:
+The template defines multiple Azure resources:
- [**Microsoft.Sql/servers**](/azure/templates/microsoft.sql/servers): The instance of SQL Database with the sample database.
- [**Microsoft.Sql/servers/databases**](/azure/templates/microsoft.sql/servers/databases): The sample database.
- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks): The virtual network where the private endpoint is deployed.
-- [**Microsoft.Network/privateEndpoints**](/azure/templates/microsoft.network/privateendpoints): The private endpoint to access the instance of SQL Database.
-- [**Microsoft.Network/privateDnsZones**](/azure/templates/microsoft.network/privatednszones): The zone used to resolve the private endpoint IP address.
+- [**Microsoft.Network/privateEndpoints**](/azure/templates/microsoft.network/privateendpoints): The private endpoint that you use to access the instance of SQL Database.
+- [**Microsoft.Network/privateDnsZones**](/azure/templates/microsoft.network/privatednszones): The zone that you use to resolve the private endpoint IP address.
- [**Microsoft.Network/privateDnsZones/virtualNetworkLinks**](/azure/templates/microsoft.network/privatednszones/virtualnetworklinks)
-- [**Microsoft.Network/privateEndpoints/privateDnsZoneGroups**](/azure/templates/microsoft.network/privateendpoints/privateDnsZoneGroups): The zone group used to associate the private endpoint with a private DNS zone.
-- [**Microsoft.Network/publicIpAddresses**](/azure/templates/microsoft.network/publicIpAddresses): The public IP address used to access the virtual machine.
+- [**Microsoft.Network/privateEndpoints/privateDnsZoneGroups**](/azure/templates/microsoft.network/privateendpoints/privateDnsZoneGroups): The zone group that you use to associate the private endpoint with a private DNS zone.
+- [**Microsoft.Network/publicIpAddresses**](/azure/templates/microsoft.network/publicIpAddresses): The public IP address that you use to access the virtual machine.
- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces): The network interface for the virtual machine.
-- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines): The virtual machine used to test the private connection with private endpoint to the instance of SQL Database.
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines): The virtual machine that you use to test the connection of the private endpoint to the instance of SQL Database.
## Deploy the template
-Here's how to deploy the ARM template to Azure:
+Deploy the ARM template to Azure by doing the following:
-1. To sign in to Azure and open the template, select **Deploy to Azure**. The template creates the private endpoint, the instance of SQL Database, the network infrastructure, and a virtual machine to validate.
+1. Sign in to Azure and open the ARM template by selecting the **Deploy to Azure** button here. The template creates the private endpoint, the instance of SQL Database, the network infrastructure, and a virtual machine to validate the deployment.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fprivate-endpoint-sql%2Fazuredeploy.json)
+ [![The 'Deploy to Azure' button.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fprivate-endpoint-sql%2Fazuredeploy.json)
-2. Select or create your resource group.
-3. Type the SQL Administrator sign-in and password.
-4. Type the virtual machine administrator username and password.
-5. Read the terms and conditions statement. If you agree, select **I agree to the terms and conditions stated above** > **Purchase**. The deployment can take 20 minutes or longer to complete.
+1. Select your resource group or create a new one.
+1. Enter the SQL administrator sign-in name and password.
+1. Enter the virtual machine administrator username and password.
+1. Read the terms and conditions statement. If you agree, select **I agree to the terms and conditions stated above**, and then select **Purchase**. The deployment can take 20 minutes or longer to complete.
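If you prefer scripting over the portal button, the same template can also be deployed with Azure PowerShell. This is a hedged sketch rather than part of the original quickstart; the resource group name is an arbitrary example, and the deployment prompts for the template's required parameters:

```azurepowershell-interactive
# Deploy the quickstart template directly from its public URI.
$templateUri = 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.sql/private-endpoint-sql/azuredeploy.json'
New-AzResourceGroup -Name 'PrivateEndpointTemplateQS-rg' -Location 'eastus'
New-AzResourceGroupDeployment -ResourceGroupName 'PrivateEndpointTemplateQS-rg' -TemplateUri $templateUri
```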
## Validate the deployment
Here's how to deploy the ARM template to Azure:
### Connect to a VM from the internet
-Connect to the VM _myVm{uniqueid}_ from the internet as follows:
+Connect to the VM _myVm{uniqueid}_ from the internet by doing the following:
1. In the portal's search bar, enter _myVm{uniqueid}_.
-2. Select **Connect**. **Connect to virtual machine** opens.
+1. Select **Connect**. **Connect to virtual machine** opens.
-3. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (_.rdp_) file and downloads it to your computer.
+1. Select **Download RDP File**. Azure creates a Remote Desktop Protocol (RDP) file and downloads it to your computer.
-4. Open the downloaded .rdp file.
+1. Open the downloaded RDP file.
- a. If prompted, select **Connect**.
-
- b. Enter the username and password you specified when you created the VM.
+ a. If you're prompted, select **Connect**.
+ b. Enter the username and password that you specified when you created the VM.
> [!NOTE]
- > You might need to select **More choices** > **Use a different account**, to specify the credentials you entered when you created the VM.
+ > You might need to select **More choices** > **Use a different account** to specify the credentials you entered when you created the VM.
-5. Select **OK**.
+1. Select **OK**.
-6. You might receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
+ You might receive a certificate warning during the sign-in process. If you do, select **Yes** or **Continue**.
-7. After the VM desktop appears, minimize it to go back to your local desktop.
+1. After the VM desktop appears, minimize it to go back to your local desktop.
### Access the SQL Database server privately from the VM
-Here's how to connect to the SQL Database server from the VM by using the private endpoint.
+To connect to the SQL Database server from the VM by using the private endpoint, do the following:
+
+1. On the Remote Desktop of _myVM{uniqueid}_, open PowerShell.
+1. Run the following command:
-1. In the Remote Desktop of _myVM{uniqueid}_, open PowerShell.
-2. Enter the following: nslookup sqlserver{uniqueid}.database.windows.net.
- You'll receive a message similar to this:
+   `nslookup sqlserver{uniqueid}.database.windows.net`
+
+ You'll receive a message that's similar to this one:
    ```
    Server:  UnKnown
    Address:  168.63.129.16

    Non-authoritative answer:
    Name:    sqlserver.privatelink.database.windows.net
    Address:  10.0.0.5
    Aliases:  sqlserver.database.windows.net
    ```
-3. Install SQL Server Management Studio.
-4. In **Connect to server**, enter or select this information:
- - **Server type**: Select **Database Engine**.
- - **Server name**: Select **sqlserver{uniqueid}.database.windows.net**.
- - **Username**: Enter a username provided during creation.
- - **Password**: Enter a password provided during creation.
-   - **Remember password**: Select **Yes**.
+1. Install SQL Server Management Studio.
+
+1. On the **Connect to server** pane, do the following:
+ - For **Server type**, select **Database Engine**.
+ - For **Server name**, select **sqlserver{uniqueid}.database.windows.net**.
+ - For **Username**, enter the username that was provided earlier.
+ - For **Password**, enter the password that was provided earlier.
+   - For **Remember password**, select **Yes**.
-5. Select **Connect**.
-6. From the menu on the left, go to **Databases**.
-7. Optionally, you can create or query information from _sample-db_.
-8. Close the Remote Desktop connection to _myVm{uniqueid}_.
+1. Select **Connect**.
+1. On the left pane, select **Databases**. Optionally, you can create or query information from _sample-db_.
+1. Close the Remote Desktop connection to _myVm{uniqueid}_.
## Clean up resources
-When you no longer need the resources that you created with the private endpoint, delete the resource group. This removes the private endpoint and all the related resources.
+When you no longer need the resources that you created with the private endpoint, delete the resource group. Doing so removes the private endpoint and all the related resources.
-To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
+To delete the resource group, run the `Remove-AzResourceGroup` cmdlet:
```azurepowershell-interactive
Remove-AzResourceGroup -Name <your resource group name>
```
## Next steps
-For more information on the services that support a private endpoint, see:
+For more information about the services that support private endpoints, see:
> [!div class="nextstepaction"]
-> [Private Link availability](private-link-overview.md#availability)
+> [What is Azure Private Link?](private-link-overview.md#availability)
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
Title: What is an Azure Private Endpoint?
-description: Learn about Azure Private Endpoint
+ Title: What is a private endpoint?
+description: In this article, you'll learn how to use the Private Endpoint feature of Azure Private Link.
-# Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure private endpoints so that I can securely connect to my Azure PaaS services within the virtual network.
+# Customer intent: As someone who has a basic network background but is new to Azure, I want to understand the capabilities of private endpoints so that I can securely connect to my Azure PaaS services within the virtual network.
Previously updated : 09/09/2021 Last updated : 02/17/2022
-# What is Azure Private Endpoint?
+# What is a private endpoint?
-A private endpoint is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network.
+A private endpoint is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service that's powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network.
The service could be an Azure service such as:

* Azure Storage
* Azure Cosmos DB
* Azure SQL Database
-* Your own service using a [Private Link Service](private-link-service-overview.md).
+* Your own service, using [Private Link service](private-link-service-overview.md).
-## Private Endpoint properties
- A Private Endpoint specifies the following properties:
+## Private endpoint properties
+
+A private endpoint specifies the following properties:
|Property |Description |
|---|---|
|Name | A unique name within the resource group. |
-|Subnet | The subnet to deploy and where the private IP address is assigned. For subnet requirements, see the limitations section in this article. |
-|Private Link Resource | The private link resource to connect using resource ID or alias, from the list of available types. A unique network identifier will be generated for all traffic sent to this resource. |
-|Target subresource | The subresource to connect. Each private link resource type has different options to select based on preference. |
-|Connection approval method | Automatic or manual. Depending on Azure role based access control permissions, your private endpoint can be approved automatically. If you try to connect to a private link resource without Azure role-based access control, use the manual method to allow the owner of the resource to approve the connection. |
-|Request Message | You can specify a message for requested connections to be approved manually. This message can be used to identify a specific request. |
-|Connection status | A read-only property that specifies if the private endpoint is active. Only private endpoints in an approved state can be used to send traffic. More states available: <br>-**Approved**: Connection was automatically or manually approved and is ready to be used.</br><br>-**Pending**: Connection was created manually and is pending approval by the private link resource owner.</br><br>-**Rejected**: Connection was rejected by the private link resource owner.</br><br>-**Disconnected**: Connection was removed by the private link resource owner. The private endpoint becomes informative and should be deleted for cleanup. </br>|
+|Subnet | The subnet to deploy, where the private IP address is assigned. For subnet requirements, see the [Limitations](#limitations) section later in this article. |
+|Private-link resource | The private-link resource to connect by using a resource ID or alias, from the list of available types. A unique network identifier is generated for all traffic that's sent to this resource. |
+|Target subresource | The subresource to connect. Each private-link resource type has various options to select based on preference. |
+|Connection approval method | Automatic or manual. Depending on the Azure role-based access control (RBAC) permissions, your private endpoint can be approved automatically. If you're connecting to a private-link resource without Azure RBAC permissions, use the manual method to allow the owner of the resource to approve the connection. |
+|Request message | You can specify a message for requested connections to be approved manually. This message can be used to identify a specific request. |
+|Connection status | A read-only property that specifies whether the private endpoint is active. Only private endpoints in an approved state can be used to send traffic. Additional available states: <li>*Approved*: The connection was automatically or manually approved and is ready to be used.<li>*Pending*: The connection was created manually and is pending approval by the private-link resource owner.<li>*Rejected*: The connection was rejected by the private-link resource owner.<li>*Disconnected*: The connection was removed by the private-link resource owner. The private endpoint becomes informative and should be deleted for cleanup. </br>|
-Some key details about private endpoints:
+As you're creating private endpoints, consider the following:
-- Private endpoint enables connectivity between the consumers from the same:
+- Private endpoints enable connectivity between the customers from the same:
- - Virtual Network
+ - Virtual network
   - Regionally peered virtual networks
   - Globally peered virtual networks
- - On premises using [VPN](https://azure.microsoft.com/services/vpn-gateway/) or [Express Route](https://azure.microsoft.com/services/expressroute/)
- - Services powered by Private Link
+ - On-premises environments that use [VPN](https://azure.microsoft.com/services/vpn-gateway/) or [Express Route](https://azure.microsoft.com/services/expressroute/)
+ - Services that are powered by Private Link
-- Network connections can only be initiated by clients connecting to the private endpoint. Service providers don't have routing configuration to create connections into service consumers. Connections can only be established in a single direction.
+- Network connections can be initiated only by clients that are connecting to the private endpoint. Service providers don't have a routing configuration to create connections into service customers. Connections can be established in a single direction only.
-- When creating a private endpoint, a read-only network interface is created for the lifecycle of the resource. The interface is assigned a dynamic private IP address from the subnet that maps to the private link resource. The value of the private IP address remains unchanged for the entire lifecycle of the private endpoint.
+- A read-only network interface is created for the lifecycle of the resource. The interface is assigned a dynamic private IP address from the subnet that maps to the private-link resource. The value of the private IP address remains unchanged for the entire lifecycle of the private endpoint.
- The private endpoint must be deployed in the same region and subscription as the virtual network.
-- The private link resource can be deployed in a different region than the virtual network and private endpoint.
+- The private-link resource can be deployed in a different region than the one for the virtual network and private endpoint.
-- Multiple private endpoints can be created using the same private link resource. For a single network using a common DNS server configuration, the recommended practice is to use a single private endpoint for a given private link resource. Use this practice to avoid duplicate entries or conflicts in DNS resolution.
+- Multiple private endpoints can be created with the same private-link resource. For a single network using a common DNS server configuration, the recommended practice is to use a single private endpoint for a specified private-link resource. Use this practice to avoid duplicate entries or conflicts in DNS resolution.
-- Multiple private endpoints can be created on the same or different subnets within the same virtual network. There are limits to the number of private endpoints you can create in a subscription. For details, seeΓÇ»[Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
+- Multiple private endpoints can be created on the same or different subnets within the same virtual network. There are limits to the number of private endpoints you can create in a subscription. For more information, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits).
-- The subscription from the private link resource must also be registered with Microsoft. Network resource provider. For details, seeΓÇ»[Azure Resource Providers](../azure-resource-manager/management/resource-providers-and-types.md).
+- The subscription from the private-link resource must also be registered with the Microsoft network resource provider. For more information, seeΓÇ»[Azure Resource Providers](../azure-resource-manager/management/resource-providers-and-types.md).
-## Private link resource
-A private link resource is the destination target of a given private endpoint.
-
-The table below lists the available resources that support a private endpoint:
+## Private-link resource
+A private-link resource is the destination target of a specified private endpoint. The following table lists the available resources that support a private endpoint:
-| Private link resource name | Resource type | Subresources |
+| Private-link resource&nbsp;name | Resource type | Subresources |
| | - | - |
-| **Azure App Configuration** | Microsoft.Appconfiguration/configurationStores | configurationStores |
-| **Azure Automation** | Microsoft.Automation/automationAccounts | Webhook, DSCAndHybridWorker |
-| **Azure Cosmos DB** | Microsoft.AzureCosmosDB/databaseAccounts | Sql, MongoDB, Cassandra, Gremlin, Table |
-| **Azure Batch** | Microsoft.Batch/batchAccounts | batch account |
-| **Azure Cache for Redis** | Microsoft.Cache/Redis | redisCache |
-| **Azure Cache for Redis Enterprise** | Microsoft.Cache/redisEnterprise | redisEnterprise |
-| **Cognitive Services** | Microsoft.CognitiveServices/accounts | account |
-| **Azure Managed Disks** | Microsoft.Compute/diskAccesses | managed disk |
-| **Azure Container Registry** | Microsoft.ContainerRegistry/registries | registry |
-| **Azure Kubernetes Service - Kubernetes API** | Microsoft.ContainerService/managedClusters | management |
-| **Azure Data Factory** | Microsoft.DataFactory/factories | data factory |
-| **Azure Database for MariaDB** | Microsoft.DBforMariaDB/servers | mariadbServer |
-| **Azure Database for MySQL** | Microsoft.DBforMySQL/servers | mysqlServer |
-| **Azure Database for PostgreSQL - Single server** | Microsoft.DBforPostgreSQL/servers | postgresqlServer |
-| **Azure IoT Hub** | Microsoft.Devices/IotHubs | iotHub |
-| **Microsoft Digital Twins** | Microsoft.DigitalTwins/digitalTwinsInstances | digitaltwinsinstance |
-| **Azure Event Grid** | Microsoft.EventGrid/domains | domain |
-| **Azure Event Grid** | Microsoft.EventGrid/topics | Event grid topic |
-| **Azure Event Hub** | Microsoft.EventHub/namespaces | namespace |
-| **Azure HDInsight** | Microsoft.HDInsight/clusters | cluster |
-| **Azure API for FHIR** | Microsoft.HealthcareApis/services | service |
-| **Azure Keyvault HSM** | Microsoft.Keyvault/managedHSMs | HSM |
-| **Azure Key Vault** | Microsoft.KeyVault/vaults | vault |
-| **Azure Machine Learning** | Microsoft.MachineLearningServices/workspaces | amlworkspace |
-| **Azure Migrate** | Microsoft.Migrate/assessmentProjects | project |
-| **Application Gateway** | Microsoft.Network/applicationgateways | application gateway |
-| **Private Link Service** (Your own service) | Microsoft.Network/privateLinkServices | empty |
-| **Power BI** | Microsoft.PowerBI/privateLinkServicesForPowerBI | Power BI |
-| **Azure Purview** | Microsoft.Purview/accounts | account |
-| **Azure Purview** | Microsoft.Purview/accounts | portal |
-| **Azure Backup** | Microsoft.RecoveryServices/vaults | vault |
-| **Azure Relay** | Microsoft.Relay/namespaces | namespace |
-| **Microsoft Search** | Microsoft.Search/searchServices | search service |
-| **Azure Service Bus** | Microsoft.ServiceBus/namespaces | namespace |
-| **SignalR** | Microsoft.SignalRService/SignalR | signalr |
-| **SignalR** | Microsoft.SignalRService/webPubSub | webpubsub |
-| **Azure SQL Database** | Microsoft.Sql/servers | Sql Server (sqlServer) |
-| **Azure SQL Managed Instance** | Microsoft.Sql/managedInstances | Sql Managed Instance (managedInstance) |
-| **Azure Storage** | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Table (table, table_secondary)<BR> Queue (queue, queue_secondary)<BR> File (file, file_secondary)<BR> Web (web, web_secondary) |
-| **Azure File Sync** | Microsoft.StorageSync/storageSyncServices | File Sync Service |
-| **Azure Synapse** | Microsoft.Synapse/privateLinkHubs | synapse |
-| **Azure Synapse Analytics** | Microsoft.Synapse/workspaces | Sql, SqlOnDemand, Dev |
-| **Azure App Service** | Microsoft.Web/hostingEnvironments | hosting environment |
-| **Azure App Service** | Microsoft.Web/sites | sites |
-| **Azure App Service** | Microsoft.Web/staticSites | staticSite |
+| Azure App Configuration | Microsoft.Appconfiguration/configurationStores | configurationStores |
+| Azure Automation | Microsoft.Automation/automationAccounts | Webhook, DSCAndHybridWorker |
+| Azure Cosmos DB | Microsoft.AzureCosmosDB/databaseAccounts | SQL, MongoDB, Cassandra, Gremlin, Table |
+| Azure Batch | Microsoft.Batch/batchAccounts | batch account |
+| Azure Cache for Redis | Microsoft.Cache/Redis | redisCache |
+| Azure Cache for Redis Enterprise | Microsoft.Cache/redisEnterprise | redisEnterprise |
+| Azure Cognitive Services | Microsoft.CognitiveServices/accounts | account |
+| Azure Managed Disks | Microsoft.Compute/diskAccesses | managed disk |
+| Azure Container Registry | Microsoft.ContainerRegistry/registries | registry |
+| Azure Kubernetes Service - Kubernetes API | Microsoft.ContainerService/managedClusters | management |
+| Azure Data Factory | Microsoft.DataFactory/factories | data factory |
+| Azure Database for MariaDB | Microsoft.DBforMariaDB/servers | mariadbServer |
+| Azure Database for MySQL | Microsoft.DBforMySQL/servers | mysqlServer |
+| Azure Database for PostgreSQL - Single server | Microsoft.DBforPostgreSQL/servers | postgresqlServer |
+| Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub |
+| Azure Digital Twins | Microsoft.DigitalTwins/digitalTwinsInstances | digitaltwinsinstance |
+| Azure Event Grid | Microsoft.EventGrid/domains | domain |
+| Azure Event Grid | Microsoft.EventGrid/topics | Event grid topic |
+| Azure Event Hub | Microsoft.EventHub/namespaces | namespace |
+| Azure HDInsight | Microsoft.HDInsight/clusters | cluster |
+| Azure API for FHIR (Fast Healthcare Interoperability Resources) | Microsoft.HealthcareApis/services | service |
+| Azure Key Vault HSM (hardware security module) | Microsoft.Keyvault/managedHSMs | HSM |
+| Azure Key Vault | Microsoft.KeyVault/vaults | vault |
+| Azure Machine Learning | Microsoft.MachineLearningServices/workspaces | amlworkspace |
+| Azure Migrate | Microsoft.Migrate/assessmentProjects | project |
+| Application Gateway | Microsoft.Network/applicationgateways | application gateway |
+| Private Link service (your own service) | Microsoft.Network/privateLinkServices | empty |
+| Power BI | Microsoft.PowerBI/privateLinkServicesForPowerBI | Power BI |
+| Azure Purview | Microsoft.Purview/accounts | account |
+| Azure Purview | Microsoft.Purview/accounts | portal |
+| Azure Backup | Microsoft.RecoveryServices/vaults | vault |
+| Azure Relay | Microsoft.Relay/namespaces | namespace |
+| Microsoft Search | Microsoft.Search/searchServices | search service |
+| Azure Service Bus | Microsoft.ServiceBus/namespaces | namespace |
+| Azure SignalR Service | Microsoft.SignalRService/SignalR | signalr |
+| Azure SignalR Service | Microsoft.SignalRService/webPubSub | webpubsub |
+| Azure SQL Database | Microsoft.Sql/servers | SQL Server (sqlServer) |
+| Azure Storage | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Table (table, table_secondary)<BR> Queue (queue, queue_secondary)<BR> File (file, file_secondary)<BR> Web (web, web_secondary) |
+| Azure File Sync | Microsoft.StorageSync/storageSyncServices | File Sync Service |
+| Azure Synapse | Microsoft.Synapse/privateLinkHubs | synapse |
+| Azure Synapse Analytics | Microsoft.Synapse/workspaces | SQL, SqlOnDemand, Dev |
+| Azure App Service | Microsoft.Web/hostingEnvironments | hosting environment |
+| Azure App Service | Microsoft.Web/sites | sites |
+| Azure App Service | Microsoft.Web/staticSites | staticSite |
## Network security of private endpoints
-When using private endpoints, traffic is secured to a private link resource. The platform does an access control to validate network connections reaching only the specified private link resource. To access more resources within the same Azure service, extra private endpoints are required.
+When you use private endpoints, traffic is secured to a private-link resource. The platform performs an access control check to validate that network connections reach only the specified private-link resource. To access more resources within the same Azure service, you need additional private endpoints.
-You can completely lock down your workloads from accessing public endpoints to connect to a supported Azure service. This control provides an extra network security layer to your resources. The security provides protection that prevents access to other resources hosted on the same Azure service.
+You can completely lock down your workloads to prevent them from accessing public endpoints to connect to a supported Azure service. This control provides an extra network security layer to your resources, and this security provides protection that helps prevent access to other resources that are hosted on the same Azure service.
-## Access to a private link resource using approval workflow
-You can connect to a private link resource using the following connection approval methods:
-- **Automatically** approved when you own or have permission on the specific private link resource. The permission required is based on the private link resource type in the following format: Microsoft.\<Provider>/<resource_type>/privateEndpointConnectionsApproval/action
-- **Manual** request when you don't have the permission required and would like to request access. An approval workflow will be initiated. The private endpoint and later private endpoint connections will be created in a "Pending" state. The private link resource owner is responsible to approve the connection. After it's approved, the private endpoint is enabled to send traffic normally, as shown in the following approval workflow diagram.
+## Access to a private-link resource using approval workflow
+
+You can connect to a private-link resource by using the following connection approval methods:
+
+- **Automatically approve**: Use this method when you own or have permissions for the specific private-link resource. The required permissions are based on the private-link resource type in the following format:
+
+ `Microsoft.<Provider>/<resource_type>/privateEndpointConnectionsApproval/action`
+
+- **Manually request**: Use this method when you don't have the required permissions and want to request access. An approval workflow will be initiated. The private endpoint and later private-endpoint connections will be created in a *Pending* state. The private-link resource owner is responsible to approve the connection. After it's approved, the private endpoint is enabled to send traffic normally, as shown in the following approval workflow diagram:
-![workflow approval](media/private-endpoint-overview/private-link-paas-workflow.png)
+![Diagram of the workflow approval process.](media/private-endpoint-overview/private-link-paas-workflow.png)
-The private link resource owner can do the following actions over a private endpoint connection:
+Over a private-endpoint connection, a private-link resource owner can:
-- Review all private endpoint connections details.
-- Approve a private endpoint connection. The corresponding private endpoint will be enabled to send traffic to the private link resource.
-- Reject a private endpoint connection. The corresponding private endpoint will be updated to reflect the status.
-- Delete a private endpoint connection in any state. The corresponding private endpoint will be updated with a disconnected state to reflect the action, the private endpoint owner can only delete the resource at this point.
+- Review all private-endpoint connection details.
+- Approve a private-endpoint connection. The corresponding private endpoint will be enabled to send traffic to the private-link resource.
+- Reject a private-endpoint connection. The corresponding private endpoint will be updated to reflect the status.
+- Delete a private-endpoint connection in any state. The corresponding private endpoint will be updated with a disconnected state to reflect the action. At this point, the private-endpoint owner can only delete the resource.
> [!NOTE]
-> Only a private endpoint in an approved state can send traffic to a given private link resource.
+> Only private endpoints in an *Approved* state can send traffic to a specified private-link resource.
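For reference, these owner-side actions map to Az.Network cmdlets. A minimal sketch, assuming `$resource` holds the private-link resource (for example, a web app retrieved with `Get-AzWebApp`); the variable names are illustrative:

```azurepowershell-interactive
# List the private-endpoint connections requested against the resource.
$connections = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $resource.Id

# Approve a pending connection; use Deny-AzPrivateEndpointConnection to reject it instead.
Approve-AzPrivateEndpointConnection -ResourceId $connections[0].Id -Description 'Approved by the resource owner'
```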
-### Connect with alias
+### Connect by using an alias
-An alias is a unique moniker that is generated when the service owner creates the private link service behind a standard load balancer. Service owners can share this alias with their consumers offline.
+An alias is a unique moniker that's generated when a service owner creates a private-link service behind a standard load balancer. Service owners can share this alias offline with the consumers of their service.
-Consumers can request a connection to private link service using either the resource URI or the alias. If you want to connect using the alias, you must create a private endpoint using the manual connection approval method. For using manual connection approval method, set manual request parameter to true during private endpoint create flow. For more information, see [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint) and [az network private-endpoint create](/cli/azure/network/private-endpoint#az_network_private_endpoint_create). Note that this manual request can be auto approved if the consumer subscription is allowlisted on the provider side. To learn more, navigate to [controlling service access](./private-link-service-overview.md#control-service-access).
+
+Consumers can request a connection to a private-link service by using either the resource URI or the alias. To connect by using the alias, create a private endpoint by using the manual connection approval method. To use the manual connection approval method, set the manual request parameter to *True* during the private-endpoint create flow. For more information, see [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint) and [az network private-endpoint create](/cli/azure/network/private-endpoint#az_network_private_endpoint_create).
+
+> [!NOTE]
+> This manual request can be auto approved if the consumer's subscription is allow-listed on the provider side. To learn more, go to [controlling service access](./private-link-service-overview.md#control-service-access).
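As an illustration of the alias flow described above, the following sketch creates a manually approved private endpoint. The alias string and resource names are placeholders, not values from this article:

```azurepowershell-interactive
# The provider shares this alias offline (placeholder value).
$alias = 'myPrivateLinkService.00000000-0000-0000-0000-000000000000.eastus.azure.privatelinkservice'
$connection = New-AzPrivateLinkServiceConnection -Name 'myAliasConnection' -PrivateLinkServiceId $alias

$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'

# -ByManualRequest creates the connection in a Pending state for the owner to approve.
New-AzPrivateEndpoint -Name 'myPrivateEndpoint' -ResourceGroupName 'myResourceGroup' -Location 'eastus' `
    -Subnet $vnet.Subnets[0] -PrivateLinkServiceConnection $connection -ByManualRequest
```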
## DNS configuration
-The DNS settings used for connections to a private link resource are important. Ensure your DNS settings are correct when you use the fully qualified domain name (FQDN) for the connection. The settings must resolve to the private IP address of the private endpoint. Existing Azure services might already have a DNS configuration to use when connecting over a public endpoint. This configuration must be overwritten to connect using your private endpoint.
+The DNS settings that you use to connect to a private-link resource are important. Ensure that your DNS settings are correct when you use the fully qualified domain name (FQDN) for the connection. The settings must resolve to the private IP address of the private endpoint. Existing Azure services might already have a DNS configuration you can use when you're connecting over a public endpoint. This configuration must be overwritten so that you can connect by using your private endpoint.
-The network interface associated with the private endpoint contains the information required to configure your DNS. The information includes the FQDN and private IP address for a private link resource.
+The network interface associated with the private endpoint contains the information that's required to configure your DNS. The information includes the FQDN and private IP address for a private-link resource.
-For complete detailed information about recommendations to configure DNS for private endpoints, see [Private Endpoint DNS configuration](private-endpoint-dns.md).
+For complete, detailed information about recommendations to configure DNS for private endpoints, see [Private endpoint DNS configuration](private-endpoint-dns.md).
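A quick way to check the behavior described here is to resolve the resource's FQDN from a Windows VM inside the virtual network; it should return the endpoint's private IP address rather than a public one. A sketch with a placeholder host name:

```azurepowershell-interactive
# Run from a Windows VM in the virtual network. The host name is a placeholder.
Resolve-DnsName -Name 'mystorageaccount.blob.core.windows.net' -Type A
```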
## Limitations
-The following table includes a list of known limitations when using private endpoints:
+The following table lists the known limitations to the use of private endpoints:
| Limitation | Description |Mitigation |
|---|---|---|
-| Traffic destined to a private endpoint using a user-defined route may be asymmetric. | Return traffic from a private endpoint bypasses a Network Virtual Appliance (NVA) and attempts to return to the source VM. | Source Network Address Translation (SNAT) is used to ensure symmetric routing. For all traffic destined to a private endpoint using a UDR, it's recommended to use SNAT for traffic at the NVA. |
+| Traffic that's destined for a private endpoint through a user-defined route (UDR) might be asymmetric. | Return traffic from a private endpoint bypasses a network virtual appliance (NVA) and attempts to return to the source virtual machine. | Source network address translation (SNAT) is used to ensure symmetric routing. For all traffic to a private endpoint that uses a UDR, we recommend that you use SNAT for traffic at the NVA. |
> [!IMPORTANT]
-> NSG and UDR support for private endpoints are in public preview on select regions. For more information, see [Public preview of Private Link UDR Support](https://azure.microsoft.com/updates/public-preview-of-private-link-udr-support/) and [Public preview of Private Link Network Security Group Support](https://azure.microsoft.com/updates/public-preview-of-private-link-network-security-group-support/).
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> Network security group (NSG) and UDR support for private endpoints is in preview in select regions. For more information, see [Preview of Private Link UDR support](https://azure.microsoft.com/updates/public-preview-of-private-link-udr-support/) and [Preview of Private Link network security group support](https://azure.microsoft.com/updates/public-preview-of-private-link-network-security-group-support/).
+>
+> This preview version is provided without a service-level agreement, and we don't recommend using it for production workloads. Certain features might not be supported or might have constrained capabilities.
+>
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-## Public preview limitations
+## Limitations of the preview version
-### NSG
+### Network security groups
| Limitation | Description | Mitigation |
| - | -- | - |
-| Obtain effective routes and security rules won't be available on a private endpoint network interface. | You aren't able to navigate to the network interface to see relevant information on the effective routes and security rules. | Q4CY21 |
-| NSG flow logs not supported. | NSG flow logs won't work for inbound traffic destined for a private endpoint. | No information at this time. |
-| Intermittent drops with ZRS storage accounts. | Customers using ZRS storage account may see periodic intermittent drops even with allow NSG applied on storage private endpoint subnet. | September |
-| Intermittent drops with Azure Key Vault. | Customers using Azure Key Vault may see periodic intermittent drops even with allow NSG applied on Azure Key Vault private endpoint subnet. | September |
-| Limit on number of address prefixes per NSG. | Having more than 500 address prefixes in NSG in a single rule isn't supported. | September |
-| AllowVirtualNetworkAccess flag | Customers setting VNet peering on their VNet (VNet A) with **AllowVirtualNetworkAccess** flag set to false on the peering link to another VNet (VNet B) can't use the **VirtualNetwork** tag to deny traffic from VNet B accessing private endpoint resources. They'll need to explicitly place a block for VNet B's address prefix to deny traffic to the private endpoint. | September |
-| Dual port NSG rules unsupported. | If multiple port ranges are used with NSG rules, only the first port range is honored for allow rules and deny rules. Rules with multiple port ranges are defaulted to deny all instead of specific ports. </br> **For more information, see rule example below.** | September |
-
-| Priority | Source port | Destination port | Action | Effective action |
+| Obtaining effective routes and security rules isn't available on a private-endpoint network interface. | You can't navigate to the network interface to view relevant information about the effective routes and security rules. | Q4CY2021 |
+| NSG flow logs aren't supported. | NSG flow logs don't work for inbound traffic that's destined for a private endpoint. | No mitigation information is available at this time. |
+| Intermittent drops with zone-redundant storage (ZRS) storage accounts. | Customers that use ZRS storage accounts might see periodic intermittent drops, even with *allow NSG* applied on a storage private-endpoint subnet. | No mitigation information is available at this time. |
+| Intermittent drops with Azure Key Vault. | Customers that use Azure Key Vault might see periodic intermittent drops, even with *allow NSG* applied on a Key Vault private-endpoint subnet. | No mitigation information is available at this time. |
+| The number of address prefixes per NSG is limited. | Having more than 500 address prefixes in an NSG in a single rule isn't supported. | No mitigation information is available at this time. |
+| AllowVirtualNetworkAccess flag | Customers that set virtual network peering on their virtual network (virtual network A) with the *AllowVirtualNetworkAccess* flag set to *false* on the peering link to another virtual network (virtual network B) can't use the *VirtualNetwork* tag to deny traffic from virtual network B accessing private endpoint resources. The customers need to explicitly place a block for virtual network B's address prefix to deny traffic to the private endpoint. | No mitigation information is available at this time. |
+| Dual port NSG rules are unsupported. | If multiple port ranges are used with NSG rules, only the first port range is honored for allow rules and deny rules. Rules with multiple port ranges are defaulted to *deny all* instead of denying only the specified ports. </br><br>For more information, see the dual port rule example in the next table. | No mitigation information is available at this time. |
+| | |
+
+The following table shows an example of a dual port NSG rule:
+
+| Priority | Source&nbsp;port&nbsp; | Destination&nbsp;port | Action | Effective&nbsp;action |
| -- | -- | - | -- | - |
| 10 | 10-12 | 10-12 | Allow/Deny | Single port range in source/destination ports will work as expected. |
| 10 | 10-12, 13-14 | 14-15, 16-17 | Allow | Only source ports 10-12 and destination ports 14-15 will be allowed. |
-| 10 | 10-12, 13-14 | 120-130, 140-150 | Deny | Traffic from all source ports will be denied to all dest ports since there are multiple source and destination port ranges. |
+| 10 | 10-12, 13-14 | 120-130, 140-150 | Deny | Traffic from all source ports will be denied to all destination ports, because there are multiple source and destination port ranges. |
| 10 | 10-12, 13-14 | 120-130 | Deny | Traffic from all source ports will be denied to destination ports 120-130 only. There are multiple source port ranges and a single destination port range. |
-
-**Table: Example dual port rule.**
-
-### UDR
+| | |
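To make the dual-port limitation concrete, here's an illustrative sketch of a rule that would trigger it; the rule name and address prefixes are placeholders, and the priority differs from the table above because NSG priorities start at 100. Per the rows above, a Deny rule like this ends up denying all ports rather than only the listed ranges:

```azurepowershell-interactive
# Illustrative NSG rule with multiple source and destination port ranges.
$rule = New-AzNetworkSecurityRuleConfig -Name 'denyDualPortRanges' -Priority 100 `
    -Direction Inbound -Access Deny -Protocol Tcp `
    -SourceAddressPrefix '*' -SourcePortRange '10-12','13-14' `
    -DestinationAddressPrefix '*' -DestinationPortRange '120-130','140-150'
```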
| Limitation | Description | Mitigation |
| - | -- | - |
-| Source Network Address Translation (SNAT) is recommended always. | Because of the variable nature of the private endpoint data-plane, it's recommended to SNAT traffic destined to a private endpoint to ensure return traffic is honored. | No information at this time. |
+| Source Network Address Translation (SNAT) is always recommended. | Because of the variable nature of the private-endpoint data plane, we recommend using SNAT for traffic that's destined to a private endpoint, which ensures that return traffic is honored. | No mitigation information is available at this time. |
+| | |
## Next steps

-- For more information on private endpoint and private link, see [What is Azure Private Link?](private-link-overview.md).
-- To get started creating a private endpoint for a web app, see [Quickstart - Create a Private Endpoint using the Azure portal](create-private-endpoint-portal.md).
+- For more information about private endpoints and Private Link, see [What is Azure Private Link?](private-link-overview.md).
+- To get started with creating a private endpoint for a web app, see [Quickstart: Create a private endpoint by using the Azure portal](create-private-endpoint-portal.md).
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
A collection is a tool Azure Purview uses to group assets, sources, and other ar
> [!NOTE]
> As of November 8th, 2021, ***Insights*** is accessible to Data Curators. Data Readers do not have access to Insights.
->
->
+
## Roles

Azure Purview uses a set of predefined roles to control who can access what within the account. These roles are currently:

-- **Collection admins** - a role for users that will need to assign roles to other users in Azure Purview or manage collections. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
+- **Collection administrator** - a role for users that will need to assign roles to other users in Azure Purview or manage collections. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
- **Data curators** - a role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.
- **Data readers** - a role that provides read-only access to data assets, classifications, classification rules, collections and glossary terms.
-- **Data source admins** - a role that allows a user to manage data sources and scans. If a user is granted only to **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must be also granted as either **Data reader** or **Data curator** roles.
+- **Data source administrators** - a role that allows a user to manage data sources and scans. If a user is granted only the **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must also be granted either the **Data reader** or **Data curator** role.
+- **Policy author (Preview)** - a role that allows a user to view, update, and delete Azure Purview policies through the policy management app within Azure Purview.
+
+> [!NOTE]
+> At this time, the Azure Purview Policy author role is not sufficient to create policies. The Azure Purview Data source admin role is also required.
## Who should be assigned to what role?

|User Scenario|Appropriate Role(s)|
|-|--|
-|I just need to find assets, I don't want to edit anything|Data Reader|
-|I need to edit information about assets, assign classifications, associate them with glossary entries, and so on.|Data Curator|
-|I need to edit the glossary or set up new classification definitions|Data Curator|
-|I need to view Insights to understand the governance posture of my data estate|Data Curator|
-|My application's Service Principal needs to push data to Azure Purview|Data Curator|
-|I need to set up scans via the Azure Purview Studio|Data Curator on the collection **or** Data Curator **And** Data Source Administrator where the source is registered|
-|I need to enable a Service Principal or group to set up and monitor scans in Azure Purview without allowing them to access the catalog's information |Data Source Admin|
-|I need to put users into roles in Azure Purview | Collection Admin |
-
+|I just need to find assets, I don't want to edit anything|Data reader|
+|I need to edit information about assets, assign classifications, associate them with glossary entries, and so on.|Data curator|
+|I need to edit the glossary or set up new classification definitions|Data curator|
+|I need to view Insights to understand the governance posture of my data estate|Data curator|
+|My application's Service Principal needs to push data to Azure Purview|Data curator|
+|I need to set up scans via the Azure Purview Studio|Data curator on the collection **or** data curator **and** data source administrator where the source is registered.|
+|I need to enable a Service Principal or group to set up and monitor scans in Azure Purview without allowing them to access the catalog's information |Data source administrator|
+|I need to put users into roles in Azure Purview | Collection administrator |
+|I need to create and publish access policies | Data source administrator and policy author |
+
+>[!NOTE]
+> **Data source administrator permissions on policies** - Data source administrators can also publish data policies.
## Understand how to use Azure Purview's roles and collections
purview Concept Elastic Data Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-elastic-data-map.md
The Data Map billing example below shows a Data Map with growing metadata storag
:::image type="content" source="./media/concept-elastic-data-map/operations-and-metadata.png" alt-text="Chart depicting number of operations and growth of metadata over time.":::
-Each Data Map capacity unit supports 25 operations/second and 10 GB of metadata storage. The Data Map is billed on an hourly basis. You are billed for the maximum Data Map capacity unit needed within the hour. At times, you may need more operations/second within the hour, and this will increase the number of capacity units needed within that hour. At other times, your operations/second usage may be low, but you may still need a large volume of metadata storage. The metadata storage is what determines how many capacity units you need within the hour.
+Each Data Map capacity unit supports 25 operations/second and 10 GB of metadata storage. The Data Map is billed hourly. You're billed for the maximum number of Data Map capacity units needed within the hour, with a minimum of one capacity unit. At times, you may need more operations/second within the hour, and this will increase the number of capacity units needed within that hour. At other times, your operations/second usage may be low, but you may still need a large volume of metadata storage. In that case, the metadata storage determines how many capacity units you need within the hour.
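To make the billing rule concrete, here's a minimal Python sketch of the hourly calculation described above. The constants come from the text; the function name and input values are illustrative only:

```python
import math

# From the text above: 1 capacity unit = 25 operations/second
# and 10 GB of metadata storage.
OPS_PER_UNIT = 25
STORAGE_GB_PER_UNIT = 10

def billable_capacity_units(peak_ops_per_second: float, metadata_storage_gb: float) -> int:
    """Capacity units billed for one hour: the maximum of the units
    needed for throughput and for storage, with a minimum of one."""
    units_for_ops = math.ceil(peak_ops_per_second / OPS_PER_UNIT)
    units_for_storage = math.ceil(metadata_storage_gb / STORAGE_GB_PER_UNIT)
    return max(1, units_for_ops, units_for_storage)

# Example: a peak of 60 operations/second needs 3 units, but 35 GB of
# metadata needs 4 units, so the hour is billed at 4 capacity units.
print(billable_capacity_units(60, 35))  # 4
```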
The table below shows the maximum number of operations/second and metadata storage used per hour for this billing example:
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
The Azure services that support each encryption model:
| Translator Text | Yes | Yes | - | | Power BI | Yes | Yes, RSA 4096-bit | - | | **Analytics** | | | |
-| Azure Stream Analytics | Yes | Yes\*\* | - |
+| Azure Stream Analytics | Yes | Yes\*\*, including Managed HSM | - |
| Event Hubs | Yes | Yes | - | | Functions | Yes | Yes | - | | Azure Analysis Services | Yes | - | - |
The Azure services that support each encryption model:
| Azure Monitor Application Insights | Yes | Yes | - | | Azure Monitor Log Analytics | Yes | Yes | - | | Azure Data Explorer | Yes | Yes | - |
-| Azure Data Factory | Yes | Yes | - |
+| Azure Data Factory | Yes | Yes, including Managed HSM | - |
| Azure Data Lake Store | Yes | Yes, RSA 2048-bit | - | | **Containers** | | | | | Azure Kubernetes Service | Yes | Yes | - | | Container Instances | Yes | Yes | - | | Container Registry | Yes | Yes | - | | **Compute** | | | |
-| Virtual Machines | Yes | Yes | - |
-| Virtual Machine Scale Set | Yes | Yes | - |
+| Virtual Machines | Yes | Yes, including Managed HSM | - |
+| Virtual Machine Scale Set | Yes | Yes, including Managed HSM | - |
| SAP HANA | Yes | Yes | - |
-| App Service | Yes | Yes\*\* | - |
-| Automation | Yes | Yes\*\* | - |
-| Azure Functions | Yes | Yes\*\* | - |
-| Azure portal | Yes | Yes\*\* | - |
+| App Service | Yes | Yes\*\*, including Managed HSM | - |
+| Automation | Yes | Yes | - |
+| Azure Functions | Yes | Yes\*\*, including Managed HSM | - |
+| Azure portal | Yes | Yes\*\*, including Managed HSM | - |
| Logic Apps | Yes | Yes | - |
-| Azure-managed applications | Yes | Yes\*\* | - |
+| Azure-managed applications | Yes | Yes\*\*, including Managed HSM | - |
| Service Bus | Yes | Yes | - | | Site Recovery | Yes | Yes | - | | **Databases** | | | | | SQL Server on Virtual Machines | Yes | Yes | Yes |
-| Azure SQL Database | Yes | Yes, RSA 3072-bit | Yes |
+| Azure SQL Database | Yes | Yes, RSA 3072-bit, including Managed HSM | Yes |
+| Azure SQL Database Managed Instance | Yes | Yes, RSA 3072-bit, including Managed HSM | Yes |
| Azure SQL Database for MariaDB | Yes | - | - | | Azure SQL Database for MySQL | Yes | Yes | - | | Azure SQL Database for PostgreSQL | Yes | Yes | - |
The Azure services that support each encryption model:
| Microsoft Defender for IoT | Yes | Yes | - | | Microsoft Sentinel | Yes | Yes | - | | **Storage** | | | |
-| Blob Storage | Yes | Yes | Yes |
-| Premium Blob Storage | Yes | Yes | Yes |
-| Disk Storage | Yes | Yes | - |
-| Ultra Disk Storage | Yes | Yes | - |
-| Managed Disk Storage | Yes | Yes | - |
-| File Storage | Yes | Yes | - |
-| File Premium Storage | Yes | Yes | - |
-| File Sync | Yes | Yes | - |
-| Queue Storage | Yes | Yes | Yes |
+| Blob Storage | Yes | Yes, including Managed HSM | Yes |
+| Premium Blob Storage | Yes | Yes, including Managed HSM | Yes |
+| Disk Storage | Yes | Yes, including Managed HSM | - |
+| Ultra Disk Storage | Yes | Yes, including Managed HSM | - |
+| Managed Disk Storage | Yes | Yes, including Managed HSM | - |
+| File Storage | Yes | Yes, including Managed HSM | - |
+| File Premium Storage | Yes | Yes, including Managed HSM | - |
+| File Sync | Yes | Yes, including Managed HSM | - |
+| Queue Storage | Yes | Yes, including Managed HSM | Yes |
+| Data Lake Storage Gen2 | Yes | Yes, including Managed HSM | Yes |
| Avere vFXT | Yes | - | - | | Azure Cache for Redis | Yes | N/A\* | - | | Azure NetApp Files | Yes | Yes | - |
security Tls Certificate Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/tls-certificate-changes.md
tags: azure-resource-manager
Previously updated : 09/13/2021 Last updated : 02/18/2022
All Azure services are impacted by this change. Here are some more details for s
- [Azure Active Directory](../../active-directory/index.yml) (Azure AD) services began this transition on July 7, 2020. - [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub) and [DPS](../../iot-dps/index.yml) will remain on Baltimore CyberTrust Root CA but their intermediate CAs will change. [Click here for details](https://techcommunity.microsoft.com/t5/internet-of-things/azure-iot-tls-changes-are-coming-and-why-you-should-care/ba-p/1658456).
+- [Azure Cosmos DB](../../cosmos-db/index.yml) will begin this transition in July 2022 with an expected completion in October 2022.
- For [Azure Storage](../../storage/index.yml), [click here for details](https://techcommunity.microsoft.com/t5/azure-storage/azure-storage-tls-critical-changes-are-almost-here-and-why-you/ba-p/2741581). - [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) will remain on Baltimore CyberTrust Root CA but their intermediate CAs will change. [Click here for details](../../azure-cache-for-redis/cache-whats-new.md). - For [Azure Instance Metadata Service](../../virtual-machines/linux/instance-metadata-service.md?tabs=linux), see [Azure Instance Metadata Service-Attested data TLS: Critical changes are almost here!](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-instance-metadata-service-attested-data-tls-critical/ba-p/2888953) for details.
service-bus-messaging Message Expiration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-expiration.md
Title: Azure Service Bus - message expiration description: This article explains about expiration and time to live (TTL) of Azure Service Bus messages. After such a deadline, the message is no longer delivered. Previously updated : 11/17/2021 Last updated : 02/18/2022
Service Bus queues, topics, and subscriptions can be created as temporary entiti
Automatic cleanup is useful in development and test scenarios in which entities are created dynamically and aren't cleaned up after use, due to some interruption of the test or debugging run. It's also useful when an application creates dynamic entities, such as a reply queue, for receiving responses back into a web server process, or into another relatively short-lived object where it's difficult to reliably clean up those entities when the object instance disappears. The feature is enabled using the **auto delete on idle** property on the namespace. This property is set to the duration for which an entity must be idle (unused) before it's automatically deleted. The minimum value for this property is 5 minutes.+
+> [!IMPORTANT]
+> Setting the Azure Resource Manager lock level to [`CanNotDelete`](../azure-resource-manager/management/lock-resources.md) on the namespace or at a higher level doesn't prevent entities with `AutoDeleteOnIdle` from being deleted. If you don't want the entity to be deleted, set the `AutoDeleteOnIdle` property to `DateTime.MaxValue`.
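As an illustration, here's a minimal sketch that sets this property when creating a queue, assuming the azure-servicebus Python SDK's administration client. The connection string and queue name are placeholders:

```python
from datetime import timedelta

from azure.servicebus.management import ServiceBusAdministrationClient

# Placeholder; replace with your namespace's connection string.
CONNECTION_STR = "<NAMESPACE CONNECTION STRING>"

admin_client = ServiceBusAdministrationClient.from_connection_string(CONNECTION_STR)

# The queue is deleted automatically after 10 minutes of idleness;
# the minimum allowed value for this property is 5 minutes.
admin_client.create_queue("reply-queue", auto_delete_on_idle=timedelta(minutes=10))
```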
+ ## Idleness
Here's what's considered idleness for entities (queues, topics, and subscriptions):
| Topic | <ul><li>No sends</li><li>No updates to the topic</li><li>No scheduled messages</li><li>No operations on the topic's subscriptions (see the next row)</li></ul> | | Subscription | <ul><li>No receives</li><li>No updates to the subscription</li><li>No new rules added to the subscription</li><li>No browse/peek</li></ul> |
-## SDKS
+## SDKs
- To set time-to-live on a message: [.NET](/dotnet/api/azure.messaging.servicebus.servicebusmessage.timetolive), [Java](/java/api/com.azure.messaging.servicebus.servicebusmessage.settimetolive), [Python](/python/api/azure-servicebus/azure.servicebus.servicebusmessage), [JavaScript](/javascript/api/@azure/service-bus/servicebusmessage#@azure-service-bus-servicebusmessage-timetolive) - To set the default time-to-live on a queue: [.NET](/dotnet/api/azure.messaging.servicebus.administration.createqueueoptions.defaultmessagetimetolive), [Java](/java/api/com.azure.messaging.servicebus.administration.models.createqueueoptions.setdefaultmessagetimetolive), [Python](/python/api/azure-servicebus/azure.servicebus.management.queueproperties), [JavaScript](/javascript/api/@azure/service-bus/queueproperties#@azure-service-bus-queueproperties-defaultmessagetimetolive)
To learn more about Service Bus messaging, see the following articles:
- [Geo-disaster recovery](service-bus-geo-dr.md) - [Asynchronous messaging patterns and high availability](service-bus-async-messaging.md) - [Handling outages and disasters](service-bus-outages-disasters.md)-- [Throttling](service-bus-throttling.md)
+- [Throttling](service-bus-throttling.md)
service-bus-messaging Service Bus Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-ip-filtering.md
Title: Configure IP firewall rules for Azure Service Bus description: How to use Firewall Rules to allow connections from specific IP addresses to Azure Service Bus. Previously updated : 01/04/2022 Last updated : 02/18/2022 # Allow access to Azure Service Bus namespace from specific IP addresses or ranges
By default, Service Bus namespaces are accessible from internet as long as the r
This feature is helpful in scenarios in which Azure Service Bus should be only accessible from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 addresses. For example, if you use Service Bus with [Azure Express Route][express-route], you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses or addresses of a corporate NAT gateway. ## IP firewall rules
-The IP firewall rules are applied at the Service Bus namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that does not match an allowed IP rule on the Service Bus namespace is rejected as unauthorized. The response does not mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
+The IP firewall rules are applied at the Service Bus namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that doesn't match an allowed IP rule on the Service Bus namespace is rejected as unauthorized. The response doesn't mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
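As a conceptual model (not an SDK call), the first-match behavior could be sketched in Python like this, using only the standard library:

```python
import ipaddress

def evaluate_ip_rules(rules, client_ip, default_action="Deny"):
    """First-match evaluation: the first rule whose range contains the
    client IP determines the accept or reject action."""
    ip = ipaddress.ip_address(client_ip)
    for ip_mask, action in rules:
        if ip in ipaddress.ip_network(ip_mask):
            return action
    return default_action

# Hypothetical rules, mirroring the template example later in this article.
rules = [("10.1.1.1", "Allow"), ("11.0.0.0/24", "Allow")]
print(evaluate_ip_rules(rules, "11.0.0.7"))   # Allow (matches 11.0.0.0/24)
print(evaluate_ip_rules(rules, "20.0.0.1"))   # Deny (no rule matches)
```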
## Important points - Firewalls and Virtual Networks are supported only in the **premium** tier of Service Bus. If upgrading to the **premium** tier isn't an option, we recommend that you keep the Shared Access Signature (SAS) token secure and share it with only authorized users. For information about SAS authentication, see [Authentication and authorization](service-bus-authentication-and-authorization.md#shared-access-signature).
This section shows you how to use the Azure portal to create IP firewall rules f
> If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only. :::image type="content" source="./media/service-bus-ip-filtering/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/service-bus-ip-filtering/selected-networks.png":::
- - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, Service Bus accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
:::image type="content" source="./media/service-bus-ip-filtering/firewall-all-networks-selected.png" alt-text="Screenshot of the Azure portal Networking page. The option to allow access from All networks is selected on the Firewalls and virtual networks tab."::: 1. To allow access from only specified IP address, select the **Selected networks** option if it isn't already selected. In the **Firewall** section, follow these steps:
This section has a sample Azure Resource Manager template that adds a virtual ne
**ipMask** is a single IPv4 address or a block of IP addresses in CIDR notation. For example, in CIDR notation 70.37.104.0/24 represents the 256 IPv4 addresses from 70.37.104.0 to 70.37.104.255, with 24 indicating the number of significant prefix bits for the range.
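As a quick check of that CIDR arithmetic, the following snippet uses only the Python standard library:

```python
import ipaddress

network = ipaddress.ip_network("70.37.104.0/24")

print(network.num_addresses)            # 256
print(network[0], "-", network[-1])     # 70.37.104.0 - 70.37.104.255
print(network.prefixlen)                # 24 significant prefix bits
```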
-When adding virtual network or firewalls rules, set the value of `defaultAction` to `Deny`.
+> [!NOTE]
+> The default value of `defaultAction` is `Allow`. When you add virtual network or firewall rules, make sure you set `defaultAction` to `Deny`.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "servicebusNamespaceName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Service Bus namespace"
+ "serviceBusNamespaceName": {
+ "defaultValue": "contososbusns",
+ "type": "String"
}
- },
- "location": {
- "type": "string",
- "metadata": {
- "description": "Location for Namespace"
- }
- }
- },
- "variables": {
- "namespaceNetworkRuleSetName": "[concat(parameters('servicebusNamespaceName'), concat('/', 'default'))]",
},
+ "variables": {},
"resources": [
- {
- "apiVersion": "2018-01-01-preview",
- "name": "[parameters('servicebusNamespaceName')]",
- "type": "Microsoft.ServiceBus/namespaces",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Premium",
- "tier": "Premium"
- },
- "properties": { }
- },
- {
- "apiVersion": "2018-01-01-preview",
- "name": "[variables('namespaceNetworkRuleSetName')]",
- "type": "Microsoft.ServiceBus/namespaces/networkrulesets",
- "dependsOn": [
- "[concat('Microsoft.ServiceBus/namespaces/', parameters('servicebusNamespaceName'))]"
- ],
- "properties": {
- "virtualNetworkRules": [<YOUR EXISTING VIRTUAL NETWORK RULES>],
- "ipRules":
- [
- {
- "ipMask":"10.1.1.1",
- "action":"Allow"
+ {
+ "type": "Microsoft.ServiceBus/namespaces",
+ "apiVersion": "2021-06-01-preview",
+ "name": "[parameters('serviceBusNamespaceName')]",
+ "location": "East US",
+ "sku": {
+ "name": "Premium",
+ "tier": "Premium",
+ "capacity": 1
},
- {
- "ipMask":"11.0.0.0/24",
- "action":"Allow"
+ "properties": {
+ "disableLocalAuth": false,
+ "zoneRedundant": true
+ }
+ },
+ {
+ "type": "Microsoft.ServiceBus/namespaces/networkRuleSets",
+ "apiVersion": "2021-06-01-preview",
+ "name": "[concat(parameters('serviceBusNamespaceName'), '/default')]",
+ "location": "East US",
+ "dependsOn": [
+ "[resourceId('Microsoft.ServiceBus/namespaces', parameters('serviceBusNamespaceName'))]"
+ ],
+ "properties": {
+ "publicNetworkAccess": "Enabled",
+                "defaultAction": "Deny",
+ "virtualNetworkRules": [],
+ "ipRules": [
+ {
+ "ipMask":"10.1.1.1",
+ "action":"Allow"
+ },
+ {
+ "ipMask":"11.0.0.0/24",
+ "action":"Allow"
+ }
+ ]
}
- ],
- "trustedServiceAccessEnabled": false,
- "defaultAction": "Deny"
}
- }
- ],
- "outputs": { }
- }
+ ]
+}
``` To deploy the template, follow the instructions for [Azure Resource Manager][lnk-deploy].
To deploy the template, follow the instructions for [Azure Resource Manager][lnk
> [!IMPORTANT] > If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
+## Default action and public network access
+
+### REST API
+
+The default value of the `defaultAction` property was `Deny` for API version **2021-01-01-preview and earlier**. However, the deny rule isn't enforced unless you set IP filters or virtual network (VNet) rules. That is, if you didn't have any IP filters or VNet rules, it's treated as `Allow`.
+
+From API version **2021-06-01-preview onwards**, the default value of the `defaultAction` property is `Allow`, to accurately reflect the service-side enforcement. If the default action is set to `Deny`, IP filters and VNet rules are enforced. If the default action is set to `Allow`, IP filters and VNet rules aren't enforced. The service remembers the rules when you turn them off and then back on again.
+
+API version **2021-06-01-preview** and later also introduce a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
+
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/preview/private-endpoint-connections/create-or-update).
+
+> [!NOTE]
+> None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by the `defaultAction`, `publicNetworkAccess`, and `privateEndpointConnections` settings.
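To summarize how these properties interact, here's a small conceptual model in Python. It's purely illustrative of the rules described above, not an SDK call:

```python
def connection_allowed(public_network_access: str,
                       default_action: str,
                       has_rules: bool,
                       rules_match_client: bool,
                       via_private_endpoint: bool) -> bool:
    """Conceptual model of the network checks. Authentication (SAS or
    Azure AD) is always validated afterwards, as noted above."""
    if via_private_endpoint:
        return True  # Private-link traffic isn't blocked by these settings.
    if public_network_access == "Disabled":
        return False  # Operations are restricted to private links only.
    if default_action == "Allow":
        return True  # IP filters and VNet rules aren't enforced.
    # default_action == "Deny": rules are enforced, but if no rules exist
    # at all, traffic still flows (see the important note earlier).
    return rules_match_client if has_rules else True
```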
+
+### Azure portal
+
+The Azure portal always uses the latest API version to get and set properties. If you had previously configured your namespace by using **2021-01-01-preview or earlier** with `defaultAction` set to `Deny` and zero IP filters and VNet rules, the portal would have previously shown **Selected networks** as selected on the **Networking** page of your namespace. Now, it shows **All networks** as selected.
+ ## Next steps
service-bus-messaging Service Bus Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-service-endpoints.md
Once configured to be bound to at least one virtual network subnet service endpo
The result is a private and isolated relationship between the workloads bound to the subnet and the respective Service Bus namespace, in spite of the observable network address of the messaging service endpoint being in a public IP range. ## Important points-- Virtual Networks are supported only in [Premium tier](service-bus-premium-messaging.md) Service Bus namespaces. When using VNet service endpoints with Service Bus, you should not enable these endpoints in applications that mix standard and premium tier Service Bus namespaces. Because the standard tier does not support VNets. The endpoint is restricted to Premium tier namespaces only.
+- Virtual Networks are supported only in [Premium tier](service-bus-premium-messaging.md) Service Bus namespaces. When using VNet service endpoints with Service Bus, you shouldn't enable these endpoints in applications that mix standard and premium tier Service Bus namespaces, because the standard tier doesn't support VNets. The endpoint is restricted to Premium tier namespaces only.
- Implementing Virtual Networks integration can prevent other Azure services from interacting with Service Bus. As an exception, you can allow access to Service Bus resources from certain **trusted services** even when network service endpoints are enabled. For a list of trusted services, see [Trusted services](#trusted-microsoft-services). The following Microsoft services are required to be on a virtual network
The result is a private and isolated relationship between the workloads bound to
Solutions that require tight and compartmentalized security, and where virtual network subnets provide the segmentation between the compartmentalized services, generally still need communication paths between services residing in those compartments.
-Any immediate IP route between the compartments, including those carrying HTTPS over TCP/IP, carries the risk of exploitation of vulnerabilities from the network layer on up. Messaging services provide completely insulated communication paths, where messages are even written to disk as they transition between parties. Workloads in two distinct virtual networks that are both bound to the same Service Bus instance can communicate efficiently and reliably via messages, while the respective network isolation boundary integrity is preserved.
+Any immediate IP route between the compartments, including those carrying HTTPS over TCP/IP, carries the risk of exploitation of vulnerabilities from the network layer on up. Messaging services provide insulated communication paths, where messages are even written to disk as they transition between parties. Workloads in two distinct virtual networks that are both bound to the same Service Bus instance can communicate efficiently and reliably via messages, while the respective network isolation boundary integrity is preserved.
That means your security sensitive cloud solutions not only gain access to Azure industry-leading reliable and scalable asynchronous messaging capabilities, but they can now use messaging to create communication paths between secure solution compartments that are inherently more secure than what is achievable with any peer-to-peer communication mode, including HTTPS and other TLS-secured socket protocols.
That means your security sensitive cloud solutions not only gain access to Azure
*Virtual network rules* are the firewall security feature that controls whether your Azure Service Bus server accepts connections from a particular virtual network subnet.
-Binding a Service Bus namespace to a virtual network is a two-step process. You first need to create a **Virtual Network service endpoint** on a Virtual Network subnet and enable it for **Microsoft.ServiceBus** as explained in the [service endpoint overview][vnet-sep]. Once you have added the service endpoint, you bind the Service Bus namespace to it with a **virtual network rule**.
+Binding a Service Bus namespace to a virtual network is a two-step process. You first need to create a **Virtual Network service endpoint** on a Virtual Network subnet and enable it for **Microsoft.ServiceBus** as explained in the [service endpoint overview][vnet-sep]. Once you've added the service endpoint, you bind the Service Bus namespace to it with a **virtual network rule**.
-The virtual network rule is an association of the Service Bus namespace with a virtual network subnet. While the rule exists, all workloads bound to the subnet are granted access to the Service Bus namespace. Service Bus itself never establishes outbound connections, does not need to gain access, and is therefore never granted access to your subnet by enabling this rule.
+The virtual network rule is an association of the Service Bus namespace with a virtual network subnet. While the rule exists, all workloads bound to the subnet are granted access to the Service Bus namespace. Service Bus itself never establishes outbound connections, doesn't need to gain access, and is therefore never granted access to your subnet by enabling this rule.
> [!NOTE] > Remember that a network service endpoint provides applications running in the virtual network the access to the Service Bus namespace. The virtual network controls the reachability of the endpoint, but not what operations can be done on Service Bus entities (queues, topics, or subscriptions). Use Azure Active Directory (Azure AD) to authorize operations that the applications can perform on the namespace and its entities. For more information, see [Authenticate and authorize an application with Azure AD to access Service Bus entities](authenticate-application.md).
This section shows you how to use Azure portal to add a virtual network service
> If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only. :::image type="content" source="./media/service-bus-ip-filtering/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/service-bus-ip-filtering/selected-networks.png":::
- - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, Service Bus accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
:::image type="content" source="./media/service-bus-ip-filtering/firewall-all-networks-selected.png" alt-text="Screenshot of the Azure portal Networking page. The option to allow access from All networks is selected on the Firewalls and virtual networks tab."::: 2. To restrict access to specific virtual networks, select the **Selected networks** option if it isn't already selected.
The following sample Resource Manager template adds a virtual network rule to an
The ID is a fully qualified Resource Manager path for the virtual network subnet. For example, `/subscriptions/{id}/resourceGroups/{rg}/providers/Microsoft.Network/virtualNetworks/{vnet}/subnets/default` for the default subnet of a virtual network.
-When adding virtual network or firewalls rules, set the value of `defaultAction` to `Deny`.
+> [!NOTE]
+> The default value of `defaultAction` is `Allow`. When you add virtual network or firewall rules, make sure you set `defaultAction` to `Deny`.
+ Template:
Template:
"[concat('Microsoft.ServiceBus/namespaces/', parameters('servicebusNamespaceName'))]" ], "properties": {
+ "publicNetworkAccess": "Enabled",
+ "defaultAction": "Deny",
"virtualNetworkRules": [ {
Template:
"ignoreMissingVnetServiceEndpoint": false } ],
- "ipRules":[<YOUR EXISTING IP RULES>],
- "trustedServiceAccessEnabled": false,
- "defaultAction": "Deny"
+ "ipRules":[],
+ "trustedServiceAccessEnabled": false
} } ],
To deploy the template, follow the instructions for [Azure Resource Manager][lnk
> [!IMPORTANT] > If there are no IP and virtual network rules, all the traffic flows into the namespace even if you set the `defaultAction` to `deny`. The namespace can be accessed over the public internet (using the access key). Specify at least one IP rule or virtual network rule for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network.
+## Default action and public network access
+
+### REST API
+
+The default value of the `defaultAction` property was `Deny` for API version **2021-01-01-preview and earlier**. However, the deny rule isn't enforced unless you set IP filters or virtual network (VNet) rules. That is, if you didn't have any IP filters or VNet rules, it's treated as `Allow`.
+
+From API version **2021-06-01-preview onwards**, the default value of the `defaultAction` property is `Allow`, to accurately reflect the service-side enforcement. If the default action is set to `Deny`, IP filters and VNet rules are enforced. If the default action is set to `Allow`, IP filters and VNet rules aren't enforced. The service remembers the rules when you turn them off and then back on again.
+
+API version **2021-06-01-preview** and later also introduce a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
+
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/preview/private-endpoint-connections/create-or-update).
+
+> [!NOTE]
+> None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by the `defaultAction`, `publicNetworkAccess`, and `privateEndpointConnections` settings.
+
+### Azure portal
+
+The Azure portal always uses the latest API version to get and set properties. If you had previously configured your namespace by using **2021-01-01-preview or earlier** with `defaultAction` set to `Deny` and zero IP filters and VNet rules, the portal would have previously shown **Selected networks** as selected on the **Networking** page of your namespace. Now, it shows **All networks** as selected.
++ ## Next steps For more information about virtual networks, see the following links:
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
Previously updated : 01/28/2022 Last updated : 02/17/2022 -+ - # Create an expiration policy for shared access signatures
-You can use a shared access signature (SAS) to delegate access to resources in your Azure Storage account. A SAS token includes the targeted resource, the permissions granted, and the interval over which access is permitted. Best practices recommend that you limit the interval for a SAS in case it is compromised. By setting a SAS expiration policy for your storage accounts, you can provide a recommended upper expiration limit when a user creates a SAS.
-
-A SAS expiration policy does not prevent a user from creating a SAS with an expiration that exceeds the limit recommended by the policy. When a user creates a SAS that violates the policy, they'll see a warning, together with the recommended maximum interval. If you have configured a diagnostic setting for logging with Azure Monitor, then Azure Storage writes to a property in the logs whenever a user creates a SAS that expires after the recommended interval.
+You can use a shared access signature (SAS) to delegate access to resources in your Azure Storage account. A SAS token includes the targeted resource, the permissions granted, and the interval over which access is permitted. Best practices recommend that you limit the interval for a SAS in case it is compromised. By setting a SAS expiration policy for your storage accounts, you can provide a recommended upper expiration limit when a user creates a service SAS or an account SAS.
For more information about shared access signatures, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md).
+## About SAS expiration policies
+
+You can configure a SAS expiration policy on the storage account. The SAS expiration policy specifies the recommended upper limit for the signed expiry field on a service SAS or an account SAS. The recommended upper limit is specified as a date/time value that is a combined number of days, hours, minutes, and seconds.
+
+The validity interval for the SAS is calculated by subtracting the date/time value of the signed start field from the date/time value of the signed expiry field. If the resulting value is less than or equal to the recommended upper limit, then the SAS is in compliance with the SAS expiration policy.
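For example, here's a minimal Python sketch of that compliance check; the policy limit and SAS field values are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical recommended upper limit: 1 day, 12 hours.
recommended_upper_limit = timedelta(days=1, hours=12)

# Hypothetical signed start and signed expiry fields of a SAS.
signed_start = datetime(2022, 2, 18, 9, 0, 0)
signed_expiry = datetime(2022, 2, 19, 17, 0, 0)

# Validity interval = signed expiry minus signed start (here, 1 day, 8 hours).
validity_interval = signed_expiry - signed_start

# The SAS complies when the interval is within the recommended upper limit.
print(validity_interval <= recommended_upper_limit)  # True
```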
+
+After you configure the SAS expiration policy, a user who creates a SAS with an interval that exceeds the recommended upper limit will see a warning.
+
+A SAS expiration policy does not prevent a user from creating a SAS with an expiration that exceeds the limit recommended by the policy. When a user creates a SAS that violates the policy, they'll see a warning, together with the recommended maximum interval. If you have configured a diagnostic setting for logging with Azure Monitor, then Azure Storage writes a message to the **SasExpiryStatus** property in the logs whenever a user creates or uses a SAS that expires after the recommended interval. The message indicates that the validity interval of the SAS exceeds the recommended interval.
+
+When a SAS expiration policy is in effect for the storage account, the signed start field is required for every SAS. If the signed start field is not included on the SAS, and you have configured a diagnostic setting for logging with Azure Monitor, then Azure Storage writes a message to the **SasExpiryStatus** property in the logs whenever a user creates or uses a SAS without a value for the signed start field.
+ ## Create a SAS expiration policy
-When you create a SAS expiration policy on a storage account, the policy applies to each type of SAS that you can create on that storage account, including a service SAS, user delegation SAS, or account SAS.
+When you create a SAS expiration policy on a storage account, the policy applies to each type of SAS that is signed with the account key. The types of shared access signatures that are signed with the account key are the service SAS and the account SAS.
To configure a SAS expiration policy for a storage account, use the Azure portal, PowerShell, or Azure CLI.
To bring a storage account into compliance, configure a SAS expiration policy fo
- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md) - [Create a service SAS](/rest/api/storageservices/create-service-sas)-- [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas) - [Create an account SAS](/rest/api/storageservices/create-account-sas)
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
The following recommendations for using shared access signatures can help mitiga
- **Have a revocation plan in place for a SAS.** Make sure you are prepared to respond if a SAS is compromised.
+- **Configure a SAS expiration policy for the storage account.** A SAS expiration policy specifies a recommended interval over which the SAS is valid. SAS expiration policies apply to a service SAS or an account SAS. When a user generates a service SAS or an account SAS with a validity interval that is larger than the recommended interval, they'll see a warning. If Azure Storage logging with Azure Monitor is enabled, then an entry is written to the Azure Storage logs. To learn more, see [Create an expiration policy for shared access signatures](sas-expiration-policy.md).
+ - **Define a stored access policy for a service SAS.** Stored access policies give you the option to revoke permissions for a service SAS without having to regenerate the storage account keys. Set the expiration on these very far in the future (or infinite) and make sure it's regularly updated to move it farther into the future. - **Use near-term expiration times on an ad hoc SAS service SAS or account SAS.** In this way, even if a SAS is compromised, it's valid only for a short time. This practice is especially important if you cannot reference a stored access policy. Near-term expiration times also limit the amount of data that can be written to a blob by limiting the time available to upload to it.
The following recommendations for using shared access signatures can help mitiga
- **Be careful with SAS start time.** If you set the start time for a SAS to the current time, failures might occur intermittently for the first few minutes. This is due to different machines having slightly different current times (known as clock skew). In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which will make it valid immediately in all cases. The same generally applies to expiry time as well--remember that you may observe up to 15 minutes of clock skew in either direction on any request. For clients using a REST version prior to 2012-02-12, the maximum duration for a SAS that does not reference a stored access policy is 1 hour. Any policies that specify a longer term than 1 hour will fail. -- **Be careful with SAS datetime format.** For some utilities (such as AzCopy), you need datetime formats to be '+%Y-%m-%dT%H:%M:%SZ'. This format specifically includes the seconds.
+- **Be careful with SAS datetime format.** For some utilities (such as AzCopy), date/time values must be formatted as '+%Y-%m-%dT%H:%M:%SZ'. This format specifically includes the seconds; see the formatting sketch after this list.
- **Be specific with the resource to be accessed.** A security best practice is to provide a user with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read/write/delete access to all entities. This also helps lessen the damage if a SAS is compromised because the SAS has less power in the hands of an attacker.
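Here's the formatting sketch referenced above: a minimal Python example that produces a correctly formatted UTC expiry value (the one-hour validity window is arbitrary):

```python
from datetime import datetime, timedelta

expiry = datetime.utcnow() + timedelta(hours=1)

# Matches the '%Y-%m-%dT%H:%M:%SZ' format expected by utilities such as
# AzCopy; note that the seconds component is included.
print(expiry.strftime("%Y-%m-%dT%H:%M:%SZ"))  # e.g. 2022-02-18T15:04:05Z
```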
storage Migration Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md
description: Basic functionality and comparison between tools used for migration
Previously updated : 08/04/2021 Last updated : 02/18/2022
The following comparison matrix shows basic functionality of different tools tha
| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | | |--|--||| | **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/marketplace/apps/datadobi1602192408529.datadobi-dobimigrate?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Intelligent Data Management](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) |
+| **Support provided by** | Microsoft | [Datadobi](https://support.datadobi.com/s/)<sub>1</sub> | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> |
| **Azure Files support (all tiers)** | Yes | Yes | Yes | Yes | | **Azure NetApp Files support** | No | Yes | Yes | Yes | | **Azure Blob Hot / Cool support** | No | Yes (via NFS preview) | Yes | Yes |
The following comparison matrix shows basic functionality of different tools tha
*List was last verified on March 31, 2021.*
+<sub>1</sub> Support provided by ISV, not Microsoft
## See also - [Storage migration overview](../../../common/storage-migration-overview.md)
stream-analytics Blob Storage Azure Data Lake Gen2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/blob-storage-azure-data-lake-gen2-output.md
When you're using Blob storage as output, a new file is created in the blob in t
* If the file exceeds the maximum number of allowed blocks (currently 50,000). You might reach the maximum allowed number of blocks without reaching the maximum allowed blob size. For example, if the output rate is high, you can see more bytes per block, and the file size is larger. If the output rate is low, each block has less data, and the file size is smaller. * If there's a schema change in the output, and the output format requires fixed schema (CSV, Avro, Parquet). * If a job is restarted, either externally by a user stopping it and starting it, or internally for system maintenance or error recovery.
-* If the query is fully partitioned, and a new file is created for each output partition.
+* If the query is fully partitioned, and a new file is created for each output partition. This comes from using PARTITION BY, or the native parallelization introduced in [compatibility level 1.2](stream-analytics-compatibility-level.md#parallel-query-execution-for-input-sources-with-multiple-partitions).
* If the user deletes a file or a container of the storage account. * If the output is time partitioned by using the path prefix pattern, and a new blob is used when the query moves to the next hour. * If the output is partitioned by a custom field, and a new blob is created per partition key if it does not exist.
synapse-analytics Data Explorer Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/data-explorer-compare.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Create Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/data-explorer-create-pool-portal.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Create Pool Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/data-explorer-create-pool-studio.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Monitor Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/data-explorer-monitor-pools.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/data-explorer-overview.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Data One Click https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-data-one-click.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Data Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-data-overview.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Data Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-data-pipeline.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Data Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-data-properties.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Data Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-data-streaming.md
Last updated 11/02/2021
- ms.devlang: csharp, golang, java, javascript, python
synapse-analytics Data Explorer Ingest Data Supported Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-data-supported-formats.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Event Grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-grid-overview.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Event Grid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-grid-portal.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Event Hub Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-csharp.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Event Hub One Click https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-one-click.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Event Hub Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-overview.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Event Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-portal.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Event Hub Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-python.md
Last updated 11/02/2021
-
synapse-analytics Data Explorer Ingest Event Hub Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-resource-manager.md
Last updated 11/02/2021
-
synapse-analytics Get Started Add Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-add-admin.md
Title: 'Quickstart: Get started add an Administrator' description: In this tutorial, you'll learn how to add another administrative user to your workspace.-
synapse-analytics Get Started Analyze Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-data-explorer.md
Last updated 09/30/2021
-
synapse-analytics Get Started Analyze Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-spark.md
Title: 'Quickstart: Get started analyzing with Spark' description: In this tutorial, you'll learn to analyze data with Apache Spark.-
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
Title: 'Tutorial: Get started analyze data with a serverless SQL pool' description: In this tutorial, you'll learn how to analyze data with a serverless SQL pool using data located in Spark databases.-
synapse-analytics Get Started Analyze Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-pool.md
Title: 'Tutorial: Get started analyze data with dedicated SQL pools' description: In this tutorial, you'll use the NYC Taxi sample data to explore SQL pool's analytic capabilities.-
synapse-analytics Get Started Analyze Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-storage.md
Title: 'Tutorial: Get started analyze data in Storage accounts' description: In this tutorial, you'll learn how to analyze data located in a storage account.-
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-create-workspace.md
Title: 'Quickstart: Get started - create a Synapse workspace' description: In this tutorial, you'll learn how to create a Synapse workspace, a dedicated SQL pool, and a serverless Apache Spark pool.-
synapse-analytics Get Started Knowledge Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-knowledge-center.md
Title: 'Tutorial: Get started explore the Synapse Knowledge center' description: In this tutorial, you'll learn how to use the Synapse Knowledge center.-
synapse-analytics Get Started Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-monitor.md
Title: 'Tutorial: Get started with Azure Synapse Analytics - monitor your Synapse workspace' description: In this tutorial, you'll learn how to monitor activities in your Synapse workspace.-
synapse-analytics Get Started Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-pipelines.md
Title: 'Tutorial: Get started integrate with pipelines' description: In this tutorial, you'll learn how to integrate pipelines and activities using Synapse Studio.-
synapse-analytics Get Started Visualize Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-visualize-power-bi.md
Title: 'Tutorial: Get started with Azure Synapse Analytics - visualize workspace data with Power BI' description: In this tutorial, you'll learn how to use Power BI to visualize data in Azure Synapse Analytics. -
synapse-analytics Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started.md
Title: 'Tutorial: Get started with Azure Synapse Analytics' description: In this tutorial, you'll learn the basic steps to set up and use Azure Synapse Analytics.-
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
Title: Cognitive Services in Azure Synapse Analytics description: Enrich your data with artificial intelligence (AI) in Azure Synapse Analytics using pretrained models from Azure Cognitive Services.-
synapse-analytics Quickstart Gallery Sample Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-gallery-sample-notebook.md
Title: 'Quickstart: Use a sample notebook from the Synapse Analytics gallery' description: Learn how to use a sample notebook from the Synapse Analytics gallery to explore data and build a machine learning model.-
synapse-analytics Quickstart Industry Ai Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-industry-ai-solutions.md
Title: Industry AI solutions description: Industry AI solutions in Azure Synapse Analytics-
synapse-analytics Quickstart Integrate Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning.md
Title: 'Quickstart: Link an Azure Machine Learning workspace' description: Link your Synapse workspace to an Azure Machine Learning workspace-
synapse-analytics Tutorial Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-automl.md
Title: 'Tutorial: Train a model by using automated machine learning' description: Tutorial on how to train a machine learning model without code in Azure Synapse Analytics.-
synapse-analytics Tutorial Build Applications Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-build-applications-use-mmlspark.md
Title: 'Tutorial: Build machine learning applications using Synapse Machine Learning' description: Learn how to use Synapse Machine Learning to create machine learning applications in Azure Synapse Analytics.-
synapse-analytics Tutorial Cognitive Services Anomaly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-anomaly.md
Title: 'Tutorial: Anomaly detection with Cognitive Services' description: Learn how to use Cognitive Services for anomaly detection in Azure Synapse Analytics.-
synapse-analytics Tutorial Cognitive Services Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment.md
Title: 'Tutorial: Sentiment analysis with Cognitive Services' description: Learn how to use Cognitive Services for sentiment analysis in Azure Synapse Analytics-
synapse-analytics Tutorial Computer Vision Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-computer-vision-use-mmlspark.md
Title: 'Tutorial: Computer Vision with Cognitive Service' description: Learn how to use computer vision in Azure Synapse Analytics.-
synapse-analytics Tutorial Configure Cognitive Services Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md
Title: 'Quickstart: Prerequisites for Cognitive Services in Azure Synapse Analytics' description: Learn how to configure the prerequisites for using Cognitive Services in Azure Synapse.-
synapse-analytics Tutorial Form Recognizer Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-form-recognizer-use-mmlspark.md
Title: 'Tutorial: Form Recognizer with Azure Applied AI Service' description: Learn how to use form recognizer in Azure Synapse Analytics.-
synapse-analytics Tutorial Score Model Predict Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool.md
Title: 'Tutorial: Score machine learning models with PREDICT in serverless Apache Spark pools' description: Learn how to use PREDICT functionality in serverless Apache Spark pools for predicting scores through machine learning models.-
synapse-analytics Tutorial Sql Pool Model Scoring Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-sql-pool-model-scoring-wizard.md
Title: 'Tutorial: Machine learning model scoring wizard for dedicated SQL pools' description: Tutorial for how to use the machine learning model scoring wizard to enrich data in dedicated SQL pools.-
synapse-analytics Tutorial Text Analytics Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md
Title: 'Tutorial: Text Analytics with Cognitive Service' description: Learn how to use text analytics in Azure Synapse Analytics.-
synapse-analytics Tutorial Translator Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-translator-use-mmlspark.md
Title: 'Tutorial: Translator with Cognitive Service' description: Learn how to use translator in Azure Synapse Analytics.-
synapse-analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/metadata/overview.md
Title: Shared metadata model description: Azure Synapse Analytics allows the different workspace computational engines to share databases and tables between its serverless Apache Spark pools and serverless SQL pool. -
synapse-analytics Overview Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/overview-terminology.md
Title: Terminology - Azure Synapse Analytics description: Reference guide walking users through Azure Synapse Analytics-
synapse-analytics Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/overview-what-is.md
Title: What is Azure Synapse Analytics? description: An Overview of Azure Synapse Analytics-
synapse-analytics Browse Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/browse-partners.md
Title: Discover third-party solutions from Azure Synapse partners through Synapse Studio description: Learn how to discover new third-party solutions from Azure Synapse partners that are tightly integrated with Azure Synapse-
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
Title: Business Intelligence partners description: Lists of third-party business intelligence partners with solutions that support Azure Synapse Analytics.-
synapse-analytics Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-integration.md
Title: Data integration partners description: Lists of third-party partners with data integration solutions that support Azure Synapse Analytics.-
synapse-analytics Quickstart Apache Spark Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-apache-spark-notebook.md
Title: 'Quickstart: Create a serverless Apache Spark pool using web tools' description: This quickstart shows how to use the web tools to create a serverless Apache Spark pool in Azure Synapse Analytics and how to run a Spark SQL query.-
synapse-analytics Quickstart Connect Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-connect-azure-data-explorer.md
Title: 'Quickstart: Connect Azure Data Explorer to an Azure Synapse Analytics workspace' description: Connect an Azure Data Explorer cluster to an Azure Synapse Analytics workspace by using Apache Spark for Azure Synapse Analytics.-
synapse-analytics Quickstart Connect Synapse Link Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-connect-synapse-link-cosmos-db.md
Title: 'Quickstart: Connect to Azure Synapse Link for Azure Cosmos DB' description: How to connect an Azure Cosmos DB database to a Synapse workspace with Synapse Link-
synapse-analytics Quickstart Copy Activity Load Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-copy-activity-load-sql-pool.md
Title: "Quickstart: to load data into dedicated SQL pool using the copy activity" description: Use the pipeline copy activity in Azure Synapse Analytics to load data into dedicated SQL pool.-
synapse-analytics Quickstart Create Apache Gpu Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-apache-gpu-pool-portal.md
Title: 'Quickstart: Create a serverless Apache Spark GPU pool' description: Create a serverless Apache Spark GPU pool using the Azure portal by following the steps in this guide.-
synapse-analytics Quickstart Create Apache Spark Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-apache-spark-pool-portal.md
Title: 'Quickstart: Create a serverless Apache Spark pool using the Azure portal' description: Create a serverless Apache Spark pool using the Azure portal by following the steps in this guide.-
synapse-analytics Quickstart Create Apache Spark Pool Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-apache-spark-pool-studio.md
Title: 'Quickstart: Create a serverless Apache Spark pool using Synapse Studio' description: Create a serverless Apache Spark pool using Synapse Studio by following the steps in this guide.-
synapse-analytics Quickstart Create Sql Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-sql-pool-portal.md
Title: 'Quickstart: Create a dedicated SQL pool using the Azure portal' description: Create a new dedicated SQL pool using the Azure portal by following the steps in this guide.-
synapse-analytics Quickstart Create Sql Pool Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-sql-pool-studio.md
Title: 'Quickstart: Create a dedicated SQL pool using Synapse Studio' description: Create a dedicated SQL pool using Synapse Studio by following the steps in this guide.-
synapse-analytics Quickstart Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-cli.md
Title: 'Quickstart: Create a Synapse workspace using Azure CLI' description: Create an Azure Synapse workspace using Azure CLI by following the steps in this guide.-
synapse-analytics Quickstart Create Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-powershell.md
Title: 'Quickstart: Create a Synapse workspace using Azure PowerShell' description: Create an Azure Synapse workspace using Azure PowerShell by following the steps in this guide.-
synapse-analytics Quickstart Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace.md
Title: 'Quickstart: Create a Synapse workspace' description: Create a Synapse workspace by following the steps in this guide.-
synapse-analytics Quickstart Load Studio Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-load-studio-sql-pool.md
Title: 'Quickstart: Bulk load data with a dedicated SQL pool' description: Use Synapse Studio to bulk load data into a dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Quickstart Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-power-bi.md
Title: 'Quickstart: Linking a Power BI workspace to a Synapse workspace' description: Link a Power BI workspace to an Azure Synapse Analytics workspace by following the steps in this guide.-
synapse-analytics Quickstart Read From Gen2 To Pandas Dataframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-read-from-gen2-to-pandas-dataframe.md
Title: 'Quickstart: Read data from ADLS Gen2 to Pandas dataframe' description: Read data from an Azure Data Lake Storage Gen2 account into a Pandas dataframe using Python in Synapse Studio in Azure Synapse Analytics.-
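As a rough sketch of the pattern this quickstart covers (not the article's own code): pandas can read an ADLS Gen2 file directly, assuming the optional `adlfs` fsspec backend is installed in the Synapse Python environment. The account, container, and file names below are placeholders.

```python
# Hypothetical example: read a CSV file from ADLS Gen2 into a pandas DataFrame.
# Requires the adlfs package, which provides the abfs:// filesystem for pandas.
import pandas as pd

df = pd.read_csv(
    "abfs://mycontainer@myaccount.dfs.core.windows.net/data/sample.csv",
    storage_options={"account_name": "myaccount"},
)
print(df.head())
```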
synapse-analytics Quickstart Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-serverless-sql-pool.md
Title: 'Quickstart: Use serverless SQL pool' description: In this quickstart, you'll learn how easy it is to query various types of files using serverless SQL pool.-
synapse-analytics Synapse Workspace Access Control Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-access-control-overview.md
Title: Azure Synapse workspace access control overview description: This article describes the mechanisms used to control access to a Synapse workspace and the resources and code artifacts it contains.-
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Title: Azure Synapse Runtime for Apache Spark 2.4 description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 2.4.-
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.1 description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.1.-
synapse-analytics Apache Spark Azure Machine Learning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-machine-learning-tutorial.md
Title: 'Tutorial: Train a model in Python with automated machine learning' description: Tutorial on how to train a machine learning model in Python by using Apache Spark and automated machine learning.-
synapse-analytics Apache Spark Azure Portal Add Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md
Title: Package management description: Learn how to add and manage libraries used by Apache Spark in Azure Synapse Analytics.-
synapse-analytics Apache Spark Custom Conda Channel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-custom-conda-channel.md
Title: Create custom Conda channel for package management description: Learn how to create a custom Conda channel for package management-
synapse-analytics Apache Spark Data Visualization Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-data-visualization-tutorial.md
Title: Visualize data with Apache Spark description: Create rich data visualizations by using Apache Spark and Azure Synapse Analytics notebooks-
synapse-analytics Apache Spark Delta Lake Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-delta-lake-overview.md
Title: Overview of how to use Linux Foundation Delta Lake in Apache Spark for Azure Synapse Analytics description: Learn how to use Delta Lake in Apache Spark for Azure Synapse Analytics to create and use tables with ACID properties.-
synapse-analytics Apache Spark Gpu Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-gpu-concept.md
Title: GPU-accelerated pools description: Introduction to GPUs inside Synapse Analytics.-
synapse-analytics Apache Spark Machine Learning Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-machine-learning-concept.md
Title: 'Machine Learning with Apache Spark' description: This article provides a conceptual overview of the machine learning and data science capabilities available through Apache Spark on Azure Synapse Analytics.-
synapse-analytics Apache Spark Machine Learning Mllib Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-machine-learning-mllib-notebook.md
Title: 'Tutorial: Build a machine learning app with Apache Spark MLlib' description: A tutorial on how to use Apache Spark MLlib to create a machine learning app that analyzes a dataset by using classification through logistic regression.-
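For context only, here is a minimal Spark MLlib classification sketch in the spirit of this tutorial; the input path and column names are placeholders, not the tutorial's actual dataset.

```python
# Hypothetical example: train a logistic regression classifier with Spark MLlib.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("/data/sample.csv", header=True, inferSchema=True)

# MLlib expects the feature columns packed into a single vector column.
assembler = VectorAssembler(inputCols=["feature1", "feature2"], outputCol="features")
train = assembler.transform(df).select("features", "label")

model = LogisticRegression(maxIter=10).fit(train)
print(model.coefficients)
```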
synapse-analytics Apache Spark Manage Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-manage-python-packages.md
Title: Manage Python libraries for Apache Spark description: Learn how to add and manage Python libraries used by Apache Spark in Azure Synapse Analytics.-
synapse-analytics Apache Spark Manage Scala Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-manage-scala-packages.md
Title: Manage Scala & Java libraries for Apache Spark description: Learn how to add and manage Scala and Java libraries in Azure Synapse Analytics.-
synapse-analytics Apache Spark Notebook Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-notebook-concept.md
Title: Overview of Azure Synapse Analytics notebooks description: This article provides an overview of the capabilities available through Azure Synapse Analytics notebooks.-
synapse-analytics Apache Spark Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-performance.md
Title: Optimize Spark jobs for performance description: This article provides an introduction to Apache Spark in Azure Synapse Analytics.-
synapse-analytics Apache Spark Rapids Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-rapids-gpu.md
Title: Apache Spark on GPU description: Introduction to core concepts for Apache Spark on GPUs inside Synapse Analytics.-
synapse-analytics Apache Spark To Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-to-power-bi.md
Title: 'Azure Synapse Studio notebooks' description: This tutorial provides an overview of how to create a Power BI dashboard using Apache Spark and a serverless SQL pool.-
synapse-analytics Apache Spark Troubleshoot Library Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-troubleshoot-library-errors.md
Title: Troubleshoot library installation errors description: This tutorial provides an overview of how to troubleshoot library installation errors.-
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
Title: Apache Spark version support description: Supported versions of Spark, Scala, Python, and .NET-
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
Title: Import and export data between serverless Apache Spark pools and SQL pools description: This article introduces the Synapse Dedicated SQL Pool Connector API for moving data between dedicated SQL pools and serverless Apache Spark pools.-
synapse-analytics Tutorial Spark Pool Filesystem Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/tutorial-spark-pool-filesystem-spec.md
Title: 'Tutorial: Use FSSPEC to read/write ADLS data in serverless Apache Spark pool in Synapse Analytics' description: Tutorial for how to use FSSPEC in PySpark notebook to read/write ADLS data in serverless Apache Spark pool.-
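As a loose illustration of the FSSPEC pattern this tutorial covers (a sketch, assuming the `adlfs` backend is available; account and path names are placeholders):

```python
# Hypothetical example: write and read an ADLS Gen2 file through fsspec.
import fsspec

path = "abfs://mycontainer@myaccount.dfs.core.windows.net/data/hello.txt"

# Extra keyword arguments (such as account_name) are passed to the filesystem.
with fsspec.open(path, mode="wt", account_name="myaccount") as f:
    f.write("hello from fsspec")

with fsspec.open(path, mode="rt", account_name="myaccount") as f:
    print(f.read())
```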
synapse-analytics Tutorial Use Pandas Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/tutorial-use-pandas-spark-pool.md
Title: 'Tutorial: Use Pandas to read/write ADLS data in serverless Apache Spark pool in Synapse Analytics' description: Tutorial for how to use Pandas in a PySpark notebook to read/write ADLS data in a serverless Apache Spark pool.-
synapse-analytics Analyze Your Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/analyze-your-workload.md
Title: Analyze your workload for dedicated SQL pool description: Techniques for analyzing query prioritization for dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md
Title: Backup and restore - snapshots, geo-redundant description: Learn how backup and restore works in Azure Synapse Analytics dedicated SQL pool. Use backups to restore your data warehouse to a restore point in the primary region. Use geo-redundant backups to restore to a different geographical region.-
synapse-analytics Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/cheat-sheet.md
Title: Cheat sheet for dedicated SQL pool (formerly SQL DW) description: Find links and best practices to quickly build your dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-
synapse-analytics Column Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/column-level-security.md
Title: Column-level security for dedicated SQL pool description: Column-level security lets you control access to database table columns based on the user's execution context or group membership, simplifying the design and coding of security in your application and letting you restrict access to specific columns.-
synapse-analytics Create Data Warehouse Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-azure-cli.md
Title: 'Quickstart: Create a Synapse SQL pool with Azure CLI' description: Quickly create a Synapse SQL pool with a server-level firewall rule using the Azure CLI.-
synapse-analytics Create Data Warehouse Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-portal.md
Title: 'Quickstart: Create and query a dedicated SQL pool (formerly SQL DW) (Azure portal)' description: Create and query a dedicated SQL pool (formerly SQL DW) using the Azure portal-
synapse-analytics Create Data Warehouse Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-powershell.md
Title: 'Quickstart: Create a dedicated SQL pool (formerly SQL DW) with Azure PowerShell' description: Quickly create a dedicated SQL pool (formerly SQL DW) with a server-level firewall rule using Azure PowerShell.-
synapse-analytics Design Elt Data Loading https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-elt-data-loading.md
Title: Instead of ETL, design ELT description: Implement flexible data loading strategies for dedicated SQL pools within Azure Synapse Analytics.-
synapse-analytics Design Guidance For Replicated Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md
Title: Design guidance for replicated tables description: Recommendations for designing replicated tables in Synapse SQL pool -
synapse-analytics Disable Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/disable-geo-backup.md
Title: Disable geo-backups description: How-to guide for disabling geo-backups for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics-
synapse-analytics Fivetran Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/fivetran-quickstart.md
Title: "Quickstart: Fivetran and dedicated SQL pool (formerly SQL DW)" description: Get started with Fivetran and dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -
synapse-analytics Gen2 Migration Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/gen2-migration-schedule.md
Title: Migrate your dedicated SQL pool (formerly SQL DW) to Gen2 description: Instructions for migrating an existing dedicated SQL pool (formerly SQL DW) to Gen2 and the migration schedule by region.-
synapse-analytics Load Data From Azure Blob Storage Using Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/load-data-from-azure-blob-storage-using-copy.md
Title: 'Tutorial: Load New York Taxicab data' description: This tutorial uses the Azure portal and SQL Server Management Studio to load New York Taxicab data from an Azure blob for Synapse SQL.-
synapse-analytics Load Data Wideworldimportersdw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/load-data-wideworldimportersdw.md
Title: 'Tutorial: Load data using Azure portal & SSMS' description: This tutorial uses the Azure portal and SQL Server Management Studio to load the WideWorldImportersDW data warehouse from a global Azure blob to an Azure Synapse Analytics SQL pool.-
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
Title: Maintenance schedules for Synapse SQL pool description: Maintenance scheduling enables customers to plan around the necessary scheduled maintenance events that Azure Synapse Analytics uses to roll out new features, upgrades, and patches. -
synapse-analytics Manage Compute With Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/manage-compute-with-azure-functions.md
Title: 'Tutorial: Manage compute with Azure Functions' description: How to use Azure Functions to manage the compute of your dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-
synapse-analytics Massively Parallel Processing Mpp Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/massively-parallel-processing-mpp-architecture.md
Title: Dedicated SQL pool (formerly SQL DW) architecture description: Learn how Dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics combines distributed query processing capabilities with Azure Storage to achieve high performance and scalability. -
synapse-analytics Memory Concurrency Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/memory-concurrency-limits.md
Title: Memory and concurrency limits description: View the memory and concurrency limits allocated to the various performance levels and resource classes for dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Pause And Resume Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-portal.md
Title: 'Quickstart: Pause and resume compute in dedicated SQL pool via the Azure portal' description: Use the Azure portal to pause compute for dedicated SQL pool to save costs. Resume compute when you are ready to use the data warehouse.-
synapse-analytics Pause And Resume Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-powershell.md
Title: 'Quickstart: Pause and resume compute in dedicated SQL pool (formerly SQL DW) with Azure PowerShell' description: You can use Azure PowerShell to pause and resume dedicated SQL pool (formerly SQL DW) compute resources.-
synapse-analytics Performance Tuning Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-materialized-views.md
Title: Performance tune with materialized views description: Learn about recommendations and considerations you should know as you use materialized views to improve your query performance. -
synapse-analytics Performance Tuning Ordered Cci https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-ordered-cci.md
Title: Performance tuning with ordered clustered columnstore index description: Recommendations and considerations you should know as you use ordered clustered columnstore index to improve your query performance in dedicated SQL pools.-
synapse-analytics Performance Tuning Result Set Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/performance-tuning-result-set-caching.md
Title: Performance tuning with result set caching description: Result set caching feature overview for dedicated SQL pool in Azure Synapse Analytics -
synapse-analytics Quickstart Bulk Load Copy Tsql Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql-examples.md
Title: Authentication mechanisms with the COPY statement description: Outlines the authentication mechanisms for bulk loading data-
synapse-analytics Quickstart Bulk Load Copy Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md
Title: 'Quickstart: Bulk load data using a single T-SQL statement' description: Bulk load data using the COPY statement-
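The quickstart itself is T-SQL; purely as an illustration, a COPY statement can be issued from Python with pyodbc. Every name below (server, database, table, storage path, credentials) is a placeholder, not a value from the article.

```python
# Hypothetical example: run a COPY INTO statement against a dedicated SQL pool.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myworkspace.sql.azuresynapse.net;Database=mypool;"
    "Uid=sqladmin;Pwd=myPassword"
)
# COPY INTO bulk loads files from storage into an existing table.
conn.execute(
    """
    COPY INTO dbo.Trip
    FROM 'https://myaccount.blob.core.windows.net/data/trips/*.csv'
    WITH (FILE_TYPE = 'CSV', FIRSTROW = 2)
    """
)
conn.commit()
conn.close()
```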
synapse-analytics Quickstart Configure Workload Isolation Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-portal.md
Title: 'Quickstart: Configure workload isolation - Portal' description: Use Azure portal to configure workload isolation for dedicated SQL pool.-
synapse-analytics Quickstart Configure Workload Isolation Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-tsql.md
Title: 'Quickstart: Configure workload isolation - T-SQL' description: Use T-SQL to configure workload isolation.-
synapse-analytics Quickstart Create A Workload Classifier Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-portal.md
Title: 'Quickstart: Create a workload classifier - Portal' description: Use Azure portal to create a workload classifier with high importance.-
synapse-analytics Quickstart Create A Workload Classifier Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-tsql.md
Title: 'Quickstart: Create a workload classifier - T-SQL' description: Use T-SQL to create a workload classifier with high importance.-
synapse-analytics Quickstart Scale Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md
Title: 'Quickstart: Scale compute for Synapse SQL pool (Azure portal)' description: You can scale compute for Synapse SQL pool (data warehouse) using the Azure portal.-
synapse-analytics Quickstart Scale Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-powershell.md
Title: 'Quickstart: Scale compute for dedicated SQL pool (formerly SQL DW) (Azure PowerShell)' description: You can scale compute for dedicated SQL pool (formerly SQL DW) using Azure PowerShell.-
synapse-analytics Quickstart Scale Compute Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-tsql.md
Title: 'Quickstart: Scale compute in dedicated SQL pool (formerly SQL DW) - T-SQL' description: Scale compute in dedicated SQL pool (formerly SQL DW) using T-SQL and SQL Server Management Studio (SSMS). Scale out compute for better performance, or scale back compute to save costs.-
synapse-analytics Release Notes 10 0 10106 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/release-notes-10-0-10106-0.md
Title: Release notes for dedicated SQL pool (formerly SQL DW) description: Release notes for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-
synapse-analytics Resource Classes For Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/resource-classes-for-workload-management.md
Title: Resource classes for workload management description: Guidance for using resource classes to manage concurrency and compute resources for queries in Azure Synapse Analytics.-
synapse-analytics Single Region Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/single-region-residency.md
Title: Single region residency description: How-to guide for configuring single region residency for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics-
synapse-analytics Sql Data Warehouse Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-authentication.md
Title: Authentication for dedicated SQL pool (formerly SQL DW) description: Learn how to authenticate to dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics by using Azure Active Directory (Azure AD) or SQL Server authentication.-
synapse-analytics Sql Data Warehouse Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-recommendations.md
Title: Dedicated SQL pool Azure Advisor recommendations description: Learn about Synapse SQL recommendations and how they are generated-
synapse-analytics Sql Data Warehouse Concept Resource Utilization Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md
Title: Manageability and monitoring - query activity, resource utilization description: Learn what capabilities are available to manage and monitor Azure Synapse Analytics. Use the Azure portal and Dynamic Management Views (DMVs) to understand query activity and resource utilization of your data warehouse.-
synapse-analytics Sql Data Warehouse Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-connect-overview.md
Title: Connect to a SQL pool in Azure Synapse description: Get connected to SQL pool.-
synapse-analytics Sql Data Warehouse Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-connection-strings.md
Title: Connection strings description: Connection strings for Synapse SQL pool-
synapse-analytics Sql Data Warehouse Continuous Integration And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-continuous-integration-and-deployment.md
Title: Continuous integration and deployment for dedicated SQL pool description: Enterprise-class Database DevOps experience for dedicated SQL pool in Azure Synapse Analytics with built-in support for continuous integration and deployment using Azure Pipelines.-
synapse-analytics Sql Data Warehouse Develop Best Practices Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-best-practices-transactions.md
Title: Optimizing transactions description: Learn how to optimize the performance of your transactional code in dedicated SQL pool while minimizing the risk of long rollbacks.-
synapse-analytics Sql Data Warehouse Develop Ctas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-ctas.md
Title: CREATE TABLE AS SELECT (CTAS) description: Explanation and examples of the CREATE TABLE AS SELECT (CTAS) statement in Synapse SQL for developing solutions.-
synapse-analytics Sql Data Warehouse Develop Dynamic Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-dynamic-sql.md
Title: Using dynamic SQL description: Tips for development solutions using dynamic SQL for dedicated SQL pools in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Develop Group By Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-group-by-options.md
Title: Using group by options description: Tips for implementing group by options for dedicated SQL pools in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Develop Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-label.md
Title: Using labels to instrument queries description: Tips for using labels to instrument queries for dedicated SQL pools in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Develop Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-loops.md
Title: Using T-SQL loops description: Tips for solution development using T-SQL loops and replacing cursors for dedicated SQL pools in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Develop Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-stored-procedures.md
Title: Using stored procedures description: Tips for developing solutions by implementing stored procedures for dedicated SQL pools in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Develop Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-transactions.md
Title: Use transactions in Azure Synapse Analytics SQL pool description: This article includes tips for implementing transactions and developing solutions in Synapse SQL pool.-
synapse-analytics Sql Data Warehouse Develop User Defined Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-user-defined-schemas.md
Title: Using user-defined schemas description: Tips for using T-SQL user-defined schemas to develop solutions for dedicated SQL pools in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Develop Variable Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-variable-assignment.md
Title: Assign variables description: In this article, you'll find essential tips for assigning T-SQL variables for dedicated SQL pools in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Encryption Tde Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-encryption-tde-tsql.md
Title: Transparent data encryption (T-SQL) description: Transparent data encryption (TDE) in Azure Synapse Analytics (T-SQL)-
synapse-analytics Sql Data Warehouse Encryption Tde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-encryption-tde.md
Title: Transparent Data Encryption (Portal) for dedicated SQL pool (formerly SQL DW) description: Transparent Data Encryption (TDE) for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics-
synapse-analytics Sql Data Warehouse Get Started Analyze With Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-analyze-with-azure-machine-learning.md
Title: Analyze data with Azure Machine Learning description: Use Azure Machine Learning to build a predictive machine learning model based on data stored in Azure Synapse.-
synapse-analytics Sql Data Warehouse Get Started Connect Sqlcmd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-connect-sqlcmd.md
Title: Connect with sqlcmd description: Use sqlcmd command-line utility to connect to and query a dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Get Started Create Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-get-started-create-support-ticket.md
Title: Request quota increases and get support description: How to create a support request in the Azure portal for Azure Synapse Analytics. Request quota increases or get problem resolution support.-
synapse-analytics Sql Data Warehouse How To Configure Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-configure-workload-importance.md
Title: Configure workload importance for dedicated SQL pool description: Learn how to set request level importance in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse How To Convert Resource Classes Workload Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-convert-resource-classes-workload-groups.md
Title: Convert resource class to a workload group description: Learn how to create a workload group that is similar to a resource class in a dedicated SQL pool.-
synapse-analytics Sql Data Warehouse How To Manage And Monitor Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md
Title: Manage and monitor workload importance in dedicated SQL pool description: Learn how to manage and monitor request level importance in dedicated SQL pool for Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse How To Monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache.md
Title: Optimize your Gen2 cache description: Learn how to monitor your Gen2 cache using the Azure portal.-
synapse-analytics Sql Data Warehouse Integrate Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-integrate-azure-stream-analytics.md
Title: Use Azure Stream Analytics in dedicated SQL pool description: Tips for using Azure Stream Analytics with dedicated SQL pool in Azure Synapse for developing real-time solutions.-
synapse-analytics Sql Data Warehouse Load From Azure Blob Storage With Polybase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md
Title: Load Contoso retail data to dedicated SQL pools description: Use PolyBase and T-SQL commands to load two tables from the Contoso retail data into dedicated SQL pools.-
synapse-analytics Sql Data Warehouse Load From Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store.md
Title: 'Tutorial load data from Azure Data Lake Storage' description: Use the COPY statement to load data from Azure Data Lake Storage for dedicated SQL pools.-
synapse-analytics Sql Data Warehouse Manage Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-overview.md
Title: Manage compute resources for dedicated SQL pool (formerly SQL DW) description: Learn about performance scale out capabilities for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. Scale out by adjusting DWUs, or lower costs by pausing the dedicated SQL pool (formerly SQL DW).-
synapse-analytics Sql Data Warehouse Manage Compute Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md
Title: Pause, resume, scale with REST APIs for dedicated SQL pool (formerly SQL DW) description: Manage compute power for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics through REST APIs.-
synapse-analytics Sql Data Warehouse Manage Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md
Title: Monitor your dedicated SQL pool workload using DMVs description: Learn how to monitor your Azure Synapse Analytics dedicated SQL pool workload and query execution using DMVs.-
synapse-analytics Sql Data Warehouse Memory Optimizations For Columnstore Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-memory-optimizations-for-columnstore-compression.md
Title: Improve columnstore index performance for dedicated SQL pool description: Reduce memory requirements or increase the available memory to maximize the number of rows within each rowgroup in dedicated SQL pool.-
synapse-analytics Sql Data Warehouse Monitor Workload Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md
Title: Monitor workload - Azure portal description: Monitor Synapse SQL using the Azure portal-
synapse-analytics Sql Data Warehouse Overview Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop.md
Title: Resources for developing a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics description: Development concepts, design decisions, recommendations, and coding techniques for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Overview Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-integrate.md
Title: Build integrated solutions description: Solution tools and partners that integrate with a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Overview Manageability Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manageability-monitoring.md
Title: Manageability and monitoring - overview description: Monitoring and manageability overview for resource utilization, log and query activity, recommendations, and data protection (backup and restore) with dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md
Title: What is dedicated SQL pool (formerly SQL DW)? description: Dedicated SQL pool (formerly SQL DW) is the enterprise data warehousing functionality in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Predict https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-predict.md
Title: Score machine learning models with PREDICT description: Learn how to score machine learning models using the T-SQL PREDICT function in dedicated SQL pool.-
synapse-analytics Sql Data Warehouse Query Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-query-ssms.md
Title: Connect to dedicated SQL pool (formerly SQL DW) with SSMS description: Use SQL Server Management Studio (SSMS) to connect to and query a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -
synapse-analytics Sql Data Warehouse Query Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-query-visual-studio.md
Title: Connect to dedicated SQL pool (formerly SQL DW) with Visual Studio description: Query dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics with Visual Studio.-
synapse-analytics Sql Data Warehouse Reference Collation Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-collation-types.md
Title: Data warehouse collation types description: Collation types supported for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Reference Powershell Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-powershell-cmdlets.md
Title: PowerShell & REST APIs for dedicated SQL pool (formerly SQL DW) description: Top PowerShell cmdlets for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics including how to pause and resume a database.-
synapse-analytics Sql Data Warehouse Reference Tsql Language Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-language-elements.md
Title: T-SQL language elements for dedicated SQL pool description: Links to the documentation for T-SQL language elements supported for dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Reference Tsql Statements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-statements.md
Title: T-SQL statements in dedicated SQL pool description: Links to the documentation for T-SQL statements supported for dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Reference Tsql System Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-tsql-system-views.md
Title: System views for dedicated SQL pool (formerly SQL DW) description: Links to the documentation for system views for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Restore Active Paused Dw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-active-paused-dw.md
Title: Restore an existing dedicated SQL pool (formerly SQL DW) description: How-to guide for restoring an existing dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Restore Deleted Dw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-deleted-dw.md
Title: Restore a deleted dedicated SQL pool (formerly SQL DW) description: How-to guide for restoring a deleted dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Restore From Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-geo-backup.md
Title: Restore a dedicated SQL pool from a geo-backup description: How-to guide for geo-restoring a dedicated SQL pool in Azure Synapse Analytics-
synapse-analytics Sql Data Warehouse Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-points.md
Title: User-defined restore points description: How to create a restore point for dedicated SQL pool (formerly SQL DW).-
synapse-analytics Sql Data Warehouse Service Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-service-capacity-limits.md
Title: Capacity limits for dedicated SQL pool description: Maximum values allowed for various components of dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-source-control-integration.md
Title: Source Control Integration description: Enterprise-class Database DevOps experience for dedicated SQL pool with native source control integration using Azure Repos (Git and GitHub).-
synapse-analytics Sql Data Warehouse Table Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-table-constraints.md
Title: Primary, foreign, and unique keys description: Support for table constraints using dedicated SQL pool in Azure Synapse Analytics-
synapse-analytics Sql Data Warehouse Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-data-types.md
Title: Table data types in dedicated SQL pool (formerly SQL DW) description: Recommendations for defining table data types for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -
synapse-analytics Sql Data Warehouse Tables Distribute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-distribute.md
Title: Distributed tables design guidance description: Recommendations for designing hash-distributed and round-robin distributed tables using dedicated SQL pool.-
synapse-analytics Sql Data Warehouse Tables Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity.md
Title: Using IDENTITY to create surrogate keys description: Recommendations and examples for using the IDENTITY property to create surrogate keys on tables in dedicated SQL pool.-
synapse-analytics Sql Data Warehouse Tables Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-index.md
Title: Indexing tables description: Recommendations and examples for indexing tables in dedicated SQL pool.-
synapse-analytics Sql Data Warehouse Tables Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-overview.md
Title: Designing tables description: Introduction to designing tables using dedicated SQL pool. -
synapse-analytics Sql Data Warehouse Tables Partition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-partition.md
Title: Partitioning tables in dedicated SQL pool description: Recommendations and examples for using table partitions in dedicated SQL pool.-
synapse-analytics Sql Data Warehouse Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-statistics.md
Title: Create and update statistics on tables description: Recommendations and examples for creating and updating query-optimization statistics on tables in dedicated SQL pool.-
synapse-analytics Sql Data Warehouse Tables Temporary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-temporary.md
Title: Temporary tables description: Essential guidance for using temporary tables in dedicated SQL pool, highlighting the principles of session level temporary tables. -
synapse-analytics Sql Data Warehouse Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot-connectivity.md
Title: Troubleshooting connectivity description: Troubleshooting connectivity in dedicated SQL pool (formerly SQL DW).-
synapse-analytics Sql Data Warehouse Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot.md
Title: Troubleshooting dedicated SQL pool (formerly SQL DW) description: Troubleshooting dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Workload Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-classification.md
Title: Workload classification for dedicated SQL pool description: Guidance for using classification to manage query concurrency, importance, and compute resources for dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-importance.md
Title: Workload importance description: Guidance for setting importance for dedicated SQL pool queries in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Workload Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-isolation.md
Title: Workload isolation description: Guidance for setting workload isolation with workload groups in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Workload Management Portal Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management-portal-monitor.md
Title: Workload management portal monitoring description: Guidance for workload management portal monitoring in Azure Synapse Analytics.-
synapse-analytics Sql Data Warehouse Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management.md
Title: Workload management description: Guidance for implementing workload management in Azure Synapse Analytics.-
synapse-analytics Striim Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/striim-quickstart.md
Title: Striim quick start description: Get started quickly with Striim and Azure Synapse Analytics.-
synapse-analytics Upgrade To Latest Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/upgrade-to-latest-generation.md
Title: Upgrade to the latest generation of dedicated SQL pool (formerly SQL DW) description: Upgrade Azure Synapse Analytics dedicated SQL pool (formerly SQL DW) to the latest generation of Azure hardware and storage architecture.-
synapse-analytics What Is A Data Warehouse Unit Dwu Cdwu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md
Title: Data Warehouse Units (DWUs) for dedicated SQL pool (formerly SQL DW) description: Recommendations on choosing the ideal number of data warehouse units (DWUs) to optimize price and performance, and how to change the number of units.-
synapse-analytics Workspace Connected Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/workspace-connected-create.md
Title: Enabling Synapse workspace features description: This document describes how a user can enable the Synapse workspace features on an existing dedicated SQL pool (formerly SQL DW). -
synapse-analytics Workspace Connected Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/workspace-connected-experience.md
Title: Enabling Synapse workspace features on a dedicated SQL pool (formerly SQL DW) description: This document describes how a customer can access and use their existing SQL DW standalone instance in the Workspace.-
synapse-analytics Workspace Connected Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/workspace-connected-regions.md
Title: Enable Synapse Workspace feature region availability description: This document details the regions where the Synapse workspace feature is not available. - -
synapse-analytics Active Directory Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/active-directory-authentication.md
Title: Azure Active Directory description: Learn about how to use Azure Active Directory for authentication with SQL Database, Managed Instance, and Synapse SQL-
synapse-analytics Best Practices Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-dedicated-sql-pool.md
Title: Best practices for dedicated SQL pools description: Recommendations and best practices you should know as you work with dedicated SQL pools. -
synapse-analytics Best Practices Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-serverless-sql-pool.md
Title: Best practices for serverless SQL pool description: Recommendations and best practices for working with serverless SQL pool. -
synapse-analytics Create External Table As Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-external-table-as-select.md
Title: Store query results from serverless SQL pool description: In this article, you'll learn how to store query results to storage using serverless SQL pool.-
synapse-analytics Create Use External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-external-tables.md
Title: Create and use external tables in Synapse SQL pool description: In this section, you'll learn how to create and use external tables in Synapse SQL pool.-
synapse-analytics Create Use Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-views.md
Title: Create and use views in serverless SQL pool description: In this section, you'll learn how to create and use views to wrap serverless SQL pool queries. Views will allow you to reuse those queries. Views are also needed if you want to use tools, such as Power BI, in conjunction with serverless SQL pool.-
synapse-analytics Data Load Columnstore Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/data-load-columnstore-compression.md
Title: Improve columnstore index performance description: Reduce memory requirements or increase the available memory to maximize the number of rows a columnstore index compresses into each rowgroup.-
synapse-analytics Data Loading Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/data-loading-best-practices.md
Title: Data loading best practices for dedicated SQL pools description: Recommendations and performance optimizations for loading data into a dedicated SQL pool in Azure Synapse Analytics.-
synapse-analytics Develop Dynamic Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-dynamic-sql.md
Title: Use dynamic SQL in Synapse SQL description: Tips for using dynamic SQL in Synapse SQL.-
synapse-analytics Develop Group By Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-group-by-options.md
Title: Use GROUP BY options in Synapse SQL description: Synapse SQL allows for developing solutions by implementing different GROUP BY options.-
synapse-analytics Develop Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-label.md
Title: Use query labels in Synapse SQL description: Included in this article are essential tips for using query labels in Synapse SQL.-
synapse-analytics Develop Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-loops.md
Title: Use T-SQL loops description: Tips for using T-SQL loops, replacing cursors, and developing related solutions with Synapse SQL in Azure Synapse Analytics.-
synapse-analytics Develop Materialized View Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-materialized-view-performance-tuning.md
Title: Performance tuning with materialized views description: Recommendations and considerations for materialized views to improve your query performance. -
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md
Title: How to use OPENROWSET in serverless SQL pool description: This article describes syntax of OPENROWSET in serverless SQL pool and explains how to use arguments.-
synapse-analytics Develop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-overview.md
Title: Resources for developing Synapse SQL features description: Development concepts, design decisions, recommendations, and coding techniques for Synapse SQL.-
synapse-analytics Develop Storage Files Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-overview.md
Title: Access files on storage in serverless SQL pool description: Describes querying storage files using serverless SQL pool in Azure Synapse Analytics.-
synapse-analytics Develop Stored Procedures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-stored-procedures.md
Title: Use stored procedures description: Tips for implementing stored procedures using Synapse SQL in Azure Synapse Analytics for solution development.-
synapse-analytics Develop Tables Cetas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-cetas.md
Title: CREATE EXTERNAL TABLE AS SELECT (CETAS) in Synapse SQL description: Using CREATE EXTERNAL TABLE AS SELECT (CETAS) with Synapse SQL-
synapse-analytics Develop Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-data-types.md
Title: Table data types in Synapse SQL description: Recommendations for defining table data types in Synapse SQL. -
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-external-tables.md
Title: Use external tables with Synapse SQL description: Reading or writing data files with external tables in Synapse SQL-
synapse-analytics Develop Tables Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-overview.md
Title: Design tables using Synapse SQL description: Introduction to designing tables in Synapse SQL. -
synapse-analytics Develop Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-statistics.md
Title: Create and update statistics using Azure Synapse SQL resources description: Recommendations and examples for creating and updating query-optimization statistics in Synapse SQL.-
synapse-analytics Develop Tables Temporary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-temporary.md
Title: Use temporary tables in Synapse SQL description: Essential guidance for using temporary tables in Synapse SQL. -
synapse-analytics Develop Transaction Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-transaction-best-practices.md
Title: Optimize transactions for dedicated SQL pool description: Learn how to optimize the performance of your transactional code in dedicated SQL pool.-
synapse-analytics Develop Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-transactions.md
Title: Use transactions description: Tips for implementing transactions with dedicated SQL pool in Azure Synapse Analytics for developing solutions.-
synapse-analytics Develop Variable Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-variable-assignment.md
Title: Assign variables with Synapse SQL description: In this article, you'll find tips for assigning T-SQL variables with Synapse SQL.-
synapse-analytics Develop Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-views.md
Title: T-SQL views using SQL pools description: Tips for using T-SQL views and developing solutions with dedicated SQL pool and serverless SQL pool in Azure Synapse Analytics.-
synapse-analytics Get Started Power Bi Professional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-power-bi-professional.md
Title: Connect to Synapse SQL with Power BI Professional description: In this tutorial, we walk through the steps to connect Power BI Desktop to serverless SQL pool.-
synapse-analytics Get Started Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-ssms.md
Title: Connect to Synapse SQL with SQL Server Management Studio (SSMS) description: Use SQL Server Management Studio (SSMS) to connect to and query Synapse SQL in Azure Synapse Analytics. -
synapse-analytics Load Data Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/load-data-overview.md
Title: Design a PolyBase data loading strategy for dedicated SQL pool description: Instead of ETL, design an Extract, Load, and Transform (ELT) process for loading data with dedicated SQL.-
synapse-analytics Mfa Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/mfa-authentication.md
Title: Using Multi-factor AAD authentication description: Synapse SQL supports connections from SQL Server Management Studio (SSMS) using Active Directory Universal Authentication. -
synapse-analytics Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-architecture.md
Title: Synapse SQL architecture description: Learn how Azure Synapse SQL combines distributed query processing capabilities with Azure Storage to achieve high performance and scalability. -
synapse-analytics Query Json Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-json-files.md
Title: Query JSON files using serverless SQL pool description: This section explains how to read JSON files using serverless SQL pool in Azure Synapse Analytics.-
synapse-analytics Query Parquet Nested Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-parquet-nested-types.md
Title: Query Parquet nested types using serverless SQL pool description: In this article, you'll learn how to query Parquet nested types by using serverless SQL pool.-
synapse-analytics Query Specific Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-specific-files.md
Title: Using file metadata in queries description: The OPENROWSET function provides file and path information about every file used in the query, so you can filter or analyze data based on file name and/or folder path.-
synapse-analytics Shared Databases Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/shared-databases-access-control.md
Title: How to set up access control on synchronized objects in serverless SQL pool description: Authorize access to shared databases for non-privileged Azure AD users in serverless SQL pool.- reviewer: vvasic-msft, jovanpop-msft, WilliamDAssafMSFT
synapse-analytics Sql Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/sql-authentication.md
Title: SQL Authentication description: Learn about SQL authentication in Azure Synapse Analytics.-
synapse-analytics Tutorial Data Analyst https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-data-analyst.md
Title: 'Tutorial: Use serverless SQL pool to analyze Azure Open Datasets in Synapse Studio' description: This tutorial shows you how to easily perform exploratory data analysis combining different Azure Open Datasets using serverless SQL pool and visualize the results in Synapse Studio.-
synapse-analytics Tutorial Logical Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-logical-data-warehouse.md
Title: 'Tutorial: Use serverless SQL pool to build a Logical Data Warehouse' description: This tutorial shows you how to easily create a logical data warehouse on Azure data sources using serverless SQL pool.-
synapse-analytics How To Connect Synapse Link Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-connect-synapse-link-cosmos-db.md
Title: Connect to Azure Synapse Link for Azure Cosmos DB description: Learn how to connect an Azure Cosmos DB database to an Azure Synapse workspace with Azure Synapse Link.-
synapse-analytics How To Copy To Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-copy-to-sql-pool.md
Title: Copy Synapse Link for Azure Cosmos DB data into a dedicated SQL pool using Apache Spark description: Load the data into a Spark dataframe, curate the data, and load it into a dedicated SQL pool table-
synapse-analytics How To Query Analytical Store Spark 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark-3.md
Title: Interact with Azure Cosmos DB using Apache Spark 3 in Azure Synapse Link description: How to interact with Azure Cosmos DB using Apache Spark 3 in Azure Synapse Link-
synapse-analytics How To Query Analytical Store Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md
Title: Interact with Azure Cosmos DB using Apache Spark 2 in Azure Synapse Link description: How to interact with Azure Cosmos DB using Apache Spark in Azure Synapse Link-
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Title: Previous monthly updates in Azure Synapse Analytics description: Archive of the new features and documentation improvements for Azure Synapse Analytics-
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Title: What's new? description: Learn about the new features and documentation improvements for Azure Synapse Analytics-
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
For best results, we recommend using autoscale with VMs you deployed with Azure
Before you create your first scaling plan, make sure you follow these guidelines:
- You can currently only configure autoscale with pooled existing host pools.
-- All host pools you autoscale must have a configured MaxSessionLimit parameter. Don't use the default value. You can configure this value in the host pool settings in the Azure portal or run the [New-AZWvdHostPool](/powershell/module/az.desktopvirtualization/new-azwvdhostpool?view=azps-5.7.0&preserve-view=true) or [Update-AZWvdHostPool](/powershell/module/az.desktopvirtualization/update-azwvdhostpool?view=azps-5.7.0&preserve-view=true) cmdlets in PowerShell.
+- You must create the scaling plan in the same region as the host pool you assign it to. You cannot assign a scaling plan in one region to a host pool in another region.
+- All host pools you use the autoscale feature for must have a configured MaxSessionLimit parameter. Don't use the default value. You can configure this value in the host pool settings in the Azure portal or run the [New-AZWvdHostPool](/powershell/module/az.desktopvirtualization/new-azwvdhostpool?view=azps-5.7.0&preserve-view=true) or [Update-AZWvdHostPool](/powershell/module/az.desktopvirtualization/update-azwvdhostpool?view=azps-5.7.0&preserve-view=true) cmdlets in PowerShell (see the sketch after this list).
- You must grant Azure Virtual Desktop access to manage power on your VM Compute resources.
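A minimal sketch of the MaxSessionLimit guidance above, using the `Update-AzWvdHostPool` cmdlet linked in the list (the resource names and limit value are illustrative assumptions):

```console
# Set an explicit session limit on an existing pooled host pool
Update-AzWvdHostPool -ResourceGroupName myResourceGroup -Name myHostPool -MaxSessionLimit 16
```

## Create a custom RBAC role in your subscription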
virtual-machines Flexible Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/flexible-virtual-machine-scale-sets.md
The following tables list the Flexible orchestration mode features and links to
| Azure Load Balancer Standard SKU | Yes |
| Application Gateway | Yes |
| Infiniband Networking | No |
-| Basic SLB | No |
+| Azure Load Balancer Basic SKU | No |
| Network Port Forwarding | Yes (NAT Rules for individual instances) |

### Backup and recovery
OutboundConnectivityNotEnabledOnVM. No outbound connectivity configured for virt
## Next steps > [!div class="nextstepaction"]
-> [Flexible orchestration mode for your scale sets with Azure portal.](flexible-virtual-machine-scale-sets-portal.md)
+> [Flexible orchestration mode for your scale sets with Azure portal.](flexible-virtual-machine-scale-sets-portal.md)
virtual-machines Oracle Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/oracle-create-upload-vhd.md
Preparing an Oracle Linux 7 virtual machine for Azure is very similar to Oracle
* Use a cloud-init directive baked into the image that will do this every time the VM is created:

  ```console
- echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+ echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
#cloud-config
# Generated by Azure cloud image build
Preparing an Oracle Linux 7 virtual machine for Azure is very similar to Oracle
filesystem: swap
mounts:
- ["ephemeral0.1", "/mnt"]
- - ["ephemeral0.2", "none", "swap", "sw", "0", "0"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
EOF
```
virtual-machines Nd Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nd-series-retirement.md
After 31 August 2022, any remaining ND size virtual machines remaining in your s
This VM size retirement only impacts the VM sizes in the [ND-series](nd-series.md). This does not impact the newer [NCv3](ncv3-series.md), [NC T4 v3](nct4-v3-series.md), and [ND v2](ndv2-series.md) series virtual machines. ## What actions should I take?
-You will need to resize or deallocate your NC virtual machines. We recommend moving your GPU workloads to another GPU Virtual Machine size. Learn more about migrating your workloads to another [GPU Accelerated Virtual Machine size](sizes-gpu.md).
+You will need to resize or deallocate your ND virtual machines. We recommend moving your GPU workloads to another GPU Virtual Machine size. Learn more about migrating your workloads to another [GPU Accelerated Virtual Machine size](sizes-gpu.md).
## Next steps [Learn more](n-series-migration.md) about migrating your workloads to other GPU Azure Virtual Machine sizes.
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
You can protect your data and guard against extended downtime by creating virtua
## About VM restore points
- An individual VM restore point stores the VM configuration and a disk restore point for each attached disk. A disk restore point consists of a snapshot of an individual managed disk.
+An individual VM restore point is a resource that stores the VM configuration and point-in-time, application-consistent snapshots of all the managed disks attached to the VM. You can use VM restore points to easily capture multi-disk-consistent backups. A VM restore point contains a disk restore point for each attached disk, and a disk restore point consists of a snapshot of an individual managed disk.
+
+VM restore points support application consistency for VMs running Windows operating systems and file system consistency for VMs running the Linux operating system. To get an application-consistent restore point, the application running in the VM must provide a VSS writer (on Windows) or pre- and post-scripts (on Linux) that bring the application data into a consistent state before the restore point is created.
VM restore points are organized into restore point collections. A restore point collection is an Azure Resource Management resource that contains the restore points for a specific VM. If you want to utilize ARM templates for creating restore points and restore point collections, visit the public [Virtual-Machine-Restore-Points](https://github.com/Azure/Virtual-Machine-Restore-Points) repository on GitHub.
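A hedged CLI sketch of the flow described above (hypothetical resource names; assumes the `az restore-point` command group available in a current Azure CLI):

```console
# Create a restore point collection bound to a VM, then capture a restore point in it
az restore-point collection create \
  --resource-group myRg \
  --collection-name myVmRestorePoints \
  --source-id "/subscriptions/<subscription-id>/resourceGroups/myRg/providers/Microsoft.Compute/virtualMachines/myVM"

az restore-point create \
  --resource-group myRg \
  --collection-name myVmRestorePoints \
  --name restorePoint1
```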
virtual-machines Set Up Hpc Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/hpc/set-up-hpc-vms.md
+
+ Title: Set up Azure HPC or AI VMs
+description: How to set up an Azure HPC or AI virtual machine with NVIDIA or AMD GPUs using the Azure portal.
+++++ Last updated : 02/10/2022++
+# Set up Azure HPC or AI VMs
+
+This how-to guide explains how to create a basic Azure virtual machine (VM) for HPC and AI with NVIDIA or AMD GPUs. These VM sizes are intended for workloads that require high-performance computing (HPC sizes), or GPU-accelerated computing (AI sizes).
+
+## Choose your VM size
+
+Azure VMs have many different options, called [VM sizes](../../sizes.md). There are different series of [VM sizes for HPC](../../sizes-hpc.md) and [VM sizes for GPU-optimized computing](../../sizes-gpu.md). Select the appropriate VM size for the workload you want to run. For help with selecting sizes, see the [VM selector tool](https://azure.microsoft.com/pricing/vm-selector/).
+
+Not all Azure products are available in all Azure regions. For more information, see the current list of [products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+
+## Create your VM
+
+Before you can deploy a workload, you need to create your VM through the Azure portal.
+
+Depending on your VM's operating system, review either the [Linux VM quickstart](../../linux/quick-create-portal.md) or [Windows VM quickstart](../../windows/quick-create-portal.md). Then, create your VM with the following settings (a CLI sketch follows these steps):
+
+1. For **Subscription**, select the Azure subscription that you want to use for this VM.
+
+1. For **Region**, select a region with capacity available for your VM size.
+
+1. For **Image**, select the image of the VM you chose in the previous section.
+
+ > [!NOTE]
+ > For the purpose of example, this guide uses the image **NVIDIA GPU-Optimized Image for AI & HPC – v21.04.1 – Gen 1**. If you're using another image, you might need to install other software, like the NVIDIA driver and Docker, before proceeding.
+
+1. For **Size**, select the HPC or GPU instance type. For more information, see [how to choose your VM size](#choose-your-vm-size).
+
+1. For **SSH public key source**, select **Generate a new key pair**.
+
+1. Wait for key validation to complete.
+
+1. When prompted, select **Download private key and create resource**.
+
+ > [!NOTE]
+ > Downloading the key pair is necessary to SSH into your VM for later configuration.
+
+1. For **Key pair name**, enter a name for your key pair.
+
+1. Under the **Networking** tab, make sure **Accelerated Networking** is disabled.
+
+1. Optionally, add a data disk to your VM. For more information, see how to add a data disk [to a Linux VM](../../linux/attach-disk-portal.md) or [to a Windows VM](../../windows/attach-managed-disk-portal.md).
+
+ > [!NOTE]
+ > Adding a data disk helps you store models, data sets, and other necessary components for benchmarking.
+
+1. Select **Review + create** to create your VM.
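A minimal CLI sketch of the same flow (hypothetical names; the image alias and GPU size are assumptions — substitute the HPC or AI image and size you chose earlier):

```console
# Create a resource group, then a GPU VM with generated SSH keys and Accelerated Networking disabled
az group create --name myHpcRg --location eastus

az vm create \
  --resource-group myHpcRg \
  --name myHpcVm \
  --size Standard_NC6s_v3 \
  --image UbuntuLTS \
  --generate-ssh-keys \
  --accelerated-networking false
```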
+
+## Connect to your VM
+
+Connect to your new VM using SSH, which allows you to perform further configuration. Some connection methods include:
+
+- [Connect over SSH on Linux or macOS](../../linux/mac-create-ssh-keys.md#ssh-into-your-vm)
+- [Connect over SSH on Windows](../../linux/ssh-from-windows.md#connect-to-your-vm)
+- [Connect over SSH using Azure Bastion](../../../bastion/bastion-connect-vm-ssh-linux.md)
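For example, connecting from a Linux or macOS client with the key pair downloaded during creation (file name and address are hypothetical):

```console
# Restrict key permissions, then open an SSH session to the VM's public IP
chmod 600 ~/Downloads/myKeyPair.pem
ssh -i ~/Downloads/myKeyPair.pem azureuser@<public-ip-address>
```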
+
+## Set up VM
+
+Set up your new VM for HPC or AI workloads. Install the newest NVIDIA or AMD GPU driver that matches your VM size; a verification sketch follows this list.
+
+- [Install NVIDIA GPU drivers on N-series VMs running Linux](../../linux/n-series-driver-setup.md)
+- [Install NVIDIA GPU drivers on N-series VMs running Windows](../../windows/n-series-driver-setup.md)
+- [Install AMD GPU drivers on N-series VMs running Windows](../../windows/n-series-amd-driver-setup.md)
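Once the driver is installed, a quick sanity check (NVIDIA example; AMD sizes expose different tooling):

```console
# List the GPUs the driver can see; all GPUs for the VM size should appear
nvidia-smi
```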
+
+## Next steps
+
+- [High performance computing VM sizes](../../sizes-hpc.md)
+- [GPU optimized virtual machine sizes](../../sizes-gpu.md)
virtual-machines Oracle Database Backup Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-strategies.md
Because Azure Files is designed to be a multi-user file share service, there are
## Azure NetApp Files
-The [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-solution-architectures.md) service is a complete storage solution for Oracle Databases in Azure VMs. Built on an enterprise-class, high-performance, metered file storage, it supports any workload type and is highly available by default. Together with the Oracle Direct NFS (dNFS) driver, Azure NetApp Files provides a highly optimized storage layer for the Oracle Database.
+The [Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-introduction) service is a complete storage solution for Oracle Databases in Azure VMs. Built on an enterprise-class, high-performance, metered file storage, it supports any workload type and is highly available by default. Together with the Oracle Direct NFS (dNFS) driver, Azure NetApp Files provides a highly optimized storage layer for the Oracle Database.
-Azure NetApp Files provides efficient storage-based snapshots copies on the underlying storage system that uses a Redirect on Write (RoW) mechanism. While snapshot copies are extremely fast to take and restore, they only serve as a first-line-of-defence, which can account for the vast majority of the required restore operations of any given organization, which is often recovery from human error. However, Snapshot copies are not a complete backup. To cover all backup and restore requirements, external snapshot replicas and/or other backup copies must be created in a remote geography to protect from regional outage.
-To learn more about using NetApp Files for Oracle Databases on Azure, read this [report](https://www.netapp.com/pdf.html?item=/media/17105-tr4780pdf.pdf).
+Azure NetApp Files provides efficient storage-based snapshots on the underlying storage system that uses a Redirect on Write (RoW) mechanism. While snapshots are extremely fast to take and restore, they serve only as a first line of defense, which can account for the vast majority of an organization's restore operations, most often recovery from human error. However, snapshots are not a complete backup. To cover all backup and restore requirements, [external snapshot replicas](/azure/azure-netapp-files/cross-region-replication-introduction) and/or other [backup vaults](/azure/azure-netapp-files/backup-introduction) must be created in a (remote) geography to protect from regional outage. Read more about [how Azure NetApp Files snapshots work](/azure/azure-netapp-files/snapshots-introduction).
+
+To ensure database-consistent snapshots, the backup process must be orchestrated between the database and the storage. The Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool that enables data protection for third-party databases by handling all the orchestration required to put them into an application-consistent state before taking a storage snapshot, after which it returns them to an operational state. Oracle databases have been supported by AzAcSnap since [version 5.1](/azure/azure-netapp-files/azacsnap-release-notes#azacsnap-v51-preview-build-2022012585030).
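As a hedged sketch of what that orchestration looks like in practice (the prefix and retention values are hypothetical; assumes AzAcSnap is installed and configured for the Oracle database and its volumes):

```console
# Take an application-consistent snapshot of the configured data volumes
azacsnap -c backup --volume data --prefix oracle_daily --retention 7
```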
+
+To learn more about using Azure NetApp Files for Oracle Databases on Azure, see the [Oracle solution architectures for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-solution-architectures#oracle).
## Azure Backup service
virtual-machines Automation Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
az logout
az login
export DEPLOYMENT_REPO_PATH=~/Azure_SAP_Automated_Deployment/sap-automation
+export ARM_SUBSCRIPTION_ID=<subscriptionID>
export subscriptionID=<subscriptionID>
export spn_id=<appID>
export spn_secret=<password>
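# Editorial note (assumption): ARM_SUBSCRIPTION_ID is the environment variable the Terraform
# AzureRM provider reads; the lower-case variables are consumed by the SAP automation scripts.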
virtual-machines Automation Deploy System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-system.md
webdispatcher_server_count=0
## Deploying the SAP system
-The sample SAP System configuration file `DEV-WEEU-SAP01-X01.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X01` folder.
+The sample SAP System configuration file `DEV-WEEU-SAP01-X01.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X01` folder.
Running the command below will deploy the SAP System.
virtual-machines Automation Deploy Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-workload-zone.md
az role assignment create --assignee <appId> --role "User Access Administrator"
## Deploying the SAP Workload zone
-The sample Workload Zone configuration file `DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder.
+The sample Workload Zone configuration file `DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder.
Running the command below will deploy the SAP Workload Zone.
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
If the VM is configured with Accelerated Networking, a second network interface
Different Azure hosts use different models of Mellanox physical NIC, so Linux automatically determines whether to use the "mlx4" or "mlx5" driver. Placement of the VM on an Azure host is controlled by the Azure infrastructure. With no customer option to specify which physical NIC a VM deployment uses, the VMs must include both drivers. If a VM is stopped/deallocated and then restarted, it might be redeployed on hardware with a different model of Mellanox physical NIC. Therefore, it might use the other Mellanox driver.
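To see which driver backs the Accelerated Networking interface on a given deployment (the interface name is hypothetical; synthetic and virtual-function interface names vary):

```console
# Report the kernel driver bound to the SR-IOV virtual function interface
ethtool -i enP1s1 | grep ^driver   # typically mlx4_en or mlx5_core
```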
-FreeBSD provides the same support for Accelerated Networking as Linux when running in Azure. The remainder of this article describes Linux and uses Linux examples, but the same functionality is available in FreeBSD.
+FreeBSD provides the same support for Accelerated Networking as Linux when running in Azure. The remainder of this article describes Linux and uses Linux examples, but the same functionality is available in FreeBSD.
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When this term is removed from the software, we'll remove it from this article.
## Bonding
vpn-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-overview.md
NAT on a gateway device translates the source and/or destination IP addresses, b
* **Dynamic NAT**: For dynamic NAT, an IP address can be translated to different target IP addresses based on availability, or with a different combination of IP address and TCP/UDP port. The latter is also called NAPT, Network Address and Port Translation. Dynamic rules will result in stateful translation mappings depending on the traffic flows at any given time.
+> [!NOTE]
+> When Dynamic NAT rules are used, traffic is unidirectional, which means communication must be initiated from the site represented in the Internal Mapping field of the rule. If traffic is initiated from the External Mapping, the connection will not be established. If you require bidirectional traffic initiation, use a static NAT rule to define a 1:1 mapping.
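A hedged illustration of the static alternative (names and address prefixes are hypothetical; assumes the `az network vnet-gateway nat-rule` commands available in a current Azure CLI):

```console
# Define a static 1:1 NAT rule on a VPN gateway so traffic can be initiated from either side
az network vnet-gateway nat-rule add \
  --resource-group myRg \
  --gateway-name myVpnGateway \
  --name StaticNatRule1 \
  --type Static \
  --mode EgressSnat \
  --internal-mappings 10.1.0.0/24 \
  --external-mappings 100.64.1.0/24
```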
+ Another consideration is the address pool size for translation. If the target address pool size is the same as the original address pool, use a static NAT rule to define a 1:1 mapping in sequential order. If the target address pool is smaller than the original address pool, use a dynamic NAT rule to accommodate the difference.

> [!IMPORTANT]
To implement the NAT configuration as shown above, first create the NAT rules in
## Next steps
-See [Configure NAT on Azure VPN gateways](nat-howto.md) for steps to configure NAT for your cross-premises connections.
+See [Configure NAT on Azure VPN gateways](nat-howto.md) for steps to configure NAT for your cross-premises connections.