Updates from: 05/30/2022 01:05:46
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Developer Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-glossary.md
Title: Microsoft identity platform developer glossary | Azure
-description: A list of terms for commonly used Microsoft identity platform developer concepts and features.
+ Title: Glossary of terms in the Microsoft identity platform
+description: Definitions of terms commonly found in Microsoft identity platform documentation, Azure portal, and authentication SDKs like the Microsoft Authentication Library (MSAL).
- Previously updated : 12/14/2021
+ Last updated : 05/28/2022
-# Microsoft identity platform developer glossary
+# Glossary: Microsoft identity platform
-This article contains definitions for some of the core developer concepts and terminology, which are helpful when learning about application development using Microsoft identity platform.
+You'll see these terms when you use our documentation, the Azure portal, our authentication libraries, and the Microsoft Graph API. Some terms are Microsoft-specific while others are related to protocols like OAuth or other technologies you use with the Microsoft identity platform.
## Access token
-A type of [security token](#security-token) issued by an [authorization server](#authorization-server), and used by a [client application](#client-application) in order to access a [protected resource server](#resource-server). Typically in the form of a [JSON Web Token (JWT)][JWT], the token embodies the authorization granted to the client by the [resource owner](#resource-owner), for a requested level of access. The token contains all applicable [claims](#claim) about the subject, enabling the client application to use it as a form of credential when accessing a given resource. This also eliminates the need for the resource owner to expose credentials to the client.
+A type of [security token](#security-token) issued by an [authorization server](#authorization-server) and used by a [client application](#client-application) to access a [protected resource server](#resource-server). Typically in the form of a [JSON Web Token (JWT)][JWT], the token embodies the authorization granted to the client by the [resource owner](#resource-owner), for a requested level of access. The token contains all applicable [claims](#claim) about the subject, enabling the client application to use it as a form of credential when accessing a given resource. This also eliminates the need for the resource owner to expose credentials to the client.
-Access tokens are only valid for a short period of time and cannot be revoked. An authorization server may also issue a [refresh token](#refresh-token) when the access token is issued. Refresh tokens are typically provided only to confidential client applications.
+Access tokens are only valid for a short period of time and can't be revoked. An authorization server may also issue a [refresh token](#refresh-token) when the access token is issued. Refresh tokens are typically provided only to confidential client applications.
Access tokens are sometimes referred to as "User+App" or "App-Only", depending on the credentials being represented. For example, when a client application uses the:
-* ["Authorization code" authorization grant](#authorization-grant), the end user authenticates first as the resource owner, delegating authorization to the client to access the resource. The client authenticates afterward when obtaining the access token. The token can sometimes be referred to more specifically as a "User+App" token, as it represents both the user that authorized the client application, and the application.
-* ["Client credentials" authorization grant](#authorization-grant), the client provides the sole authentication, functioning without the resource-owner's authentication/authorization, so the token can sometimes be referred to as an "App-Only" token.
+- ["Authorization code" authorization grant](#authorization-grant), the end user authenticates first as the resource owner, delegating authorization to the client to access the resource. The client authenticates afterward when obtaining the access token. The token can sometimes be referred to more specifically as a "User+App" token, as it represents both the user that authorized the client application, and the application.
+- ["Client credentials" authorization grant](#authorization-grant), the client provides the sole authentication, functioning without the resource-owner's authentication/authorization, so the token can sometimes be referred to as an "App-Only" token.
See the [access tokens reference][AAD-Tokens-Claims] for more details.

## Actor
-Another term for the [client application](#client-application) - this is the party acting on behalf of the subject, or [resource owner](#resource-owner).
+Another term for the [client application](#client-application). The actor is the party acting on behalf of a subject ([resource owner](#resource-owner)).
-## Application ID (client ID)
+## Application (client) ID
-The unique identifier Azure AD issues to an application registration that identifies a specific application and the associated configurations. This application ID ([client ID](https://tools.ietf.org/html/rfc6749#page-15)) is used when performing authentication requests and is provided to the authentication libraries in development time. The application ID (client ID) is not a secret.
+The application ID, or _[client ID](https://datatracker.ietf.org/doc/html/rfc6749#section-2.2)_, is a value the Microsoft identity platform assigns to your application when you register it in Azure AD. The application ID is a GUID value that uniquely identifies the application and its configuration within the identity platform. You add the app ID to your application's code, and authentication libraries include the value in their requests to the identity platform at application runtime. The application (client) ID isn't a secret - don't use it as a password or other credential.
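In practice, you pass this value to your authentication library when constructing the client. A minimal MSAL for iOS/macOS sketch, assuming a placeholder GUID in place of a real application (client) ID:

```swift
import MSAL

// Placeholder only; use the Application (client) ID from your own app registration.
let kClientID = "00000000-0000-0000-0000-000000000000"

do {
    let config = MSALPublicClientApplicationConfig(clientId: kClientID)
    let application = try MSALPublicClientApplication(configuration: config)
    // `application` can now request tokens from the Microsoft identity platform.
    _ = application
} catch {
    print("Unable to create MSAL application: \(error)")
}
```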
## Application manifest
A feature provided by the [Azure portal][AZURE-portal], which produces a JSON representation of the application's identity configuration, used as a mechanism for updating its associated [application object](#application-object).
## Application object
-When you register/update an application in the [Azure portal][AZURE-portal], the portal creates/updates both an application object and a corresponding [service principal object](#service-principal-object) for that tenant. The application object *defines* the application's identity configuration globally (across all tenants where it has access), providing a template from which its corresponding service principal object(s) are *derived* for use locally at run-time (in a specific tenant).
+When you register/update an application in the [Azure portal][AZURE-portal], the portal creates/updates both an application object and a corresponding [service principal object](#service-principal-object) for that tenant. The application object _defines_ the application's identity configuration globally (across all tenants where it has access), providing a template from which its corresponding service principal object(s) are _derived_ for use locally at run-time (in a specific tenant).
For more information, see [Application and Service Principal Objects][AAD-App-SP-Objects].

## Application registration
-In order to allow an application to integrate with and delegate Identity and Access Management functions to Azure AD, it must be registered with an Azure AD [tenant](#tenant). When you register your application with Azure AD, you are providing an identity configuration for your application, allowing it to integrate with Azure AD and use features such as:
+In order to allow an application to integrate with and delegate Identity and Access Management functions to Azure AD, it must be registered with an Azure AD [tenant](#tenant). When you register your application with Azure AD, you're providing an identity configuration for your application, allowing it to integrate with Azure AD and use features like:
-* Robust management of Single Sign-On using Azure AD Identity Management and [OpenID Connect][OpenIDConnect] protocol implementation
-* Brokered access to [protected resources](#resource-server) by [client applications](#client-application), via OAuth 2.0 [authorization server](#authorization-server)
-* [Consent framework](#consent) for managing client access to protected resources, based on resource owner authorization.
+- Robust management of Single Sign-On using Azure AD Identity Management and [OpenID Connect][OpenIDConnect] protocol implementation
+- Brokered access to [protected resources](#resource-server) by [client applications](#client-application), via OAuth 2.0 [authorization server](#authorization-server)
+- [Consent framework](#consent) for managing client access to protected resources, based on resource owner authorization.
See [Integrating applications with Azure Active Directory][AAD-Integrating-Apps] for more details.

## Authentication
-The act of challenging a party for legitimate credentials, providing the basis for creation of a security principal to be used for identity and access control. During an [OAuth2 authorization grant](#authorization-grant) for example, the party authenticating is filling the role of either [resource owner](#resource-owner) or [client application](#client-application), depending on the grant used.
+The act of challenging a party for legitimate credentials, providing the basis for creation of a security principal to be used for identity and access control. During an [OAuth 2.0 authorization grant](#authorization-grant) for example, the party authenticating is filling the role of either [resource owner](#resource-owner) or [client application](#client-application), depending on the grant used.
## Authorization

The act of granting an authenticated security principal permission to do something. There are two primary use cases in the Azure AD programming model:
-* During an [OAuth2 authorization grant](#authorization-grant) flow: when the [resource owner](#resource-owner) grants authorization to the [client application](#client-application), allowing the client to access the resource owner's resources.
-* During resource access by the client: as implemented by the [resource server](#resource-server), using the [claim](#claim) values present in the [access token](#access-token) to make access control decisions based upon them.
+- During an [OAuth 2.0 authorization grant](#authorization-grant) flow: when the [resource owner](#resource-owner) grants authorization to the [client application](#client-application), allowing the client to access the resource owner's resources.
+- During resource access by the client: as implemented by the [resource server](#resource-server), using the [claim](#claim) values present in the [access token](#access-token) to make access control decisions based upon them.
## Authorization code
-A short lived "token" provided to a [client application](#client-application) by the [authorization endpoint](#authorization-endpoint), as part of the "authorization code" flow, one of the four OAuth2 [authorization grants](#authorization-grant). The code is returned to the client application in response to authentication of a [resource owner](#resource-owner), indicating the resource owner has delegated authorization to access the requested resources. As part of the flow, the code is later redeemed for an [access token](#access-token).
+A short-lived value provided by the [authorization endpoint](#authorization-endpoint) to a [client application](#client-application) during the OAuth 2.0 _authorization code grant flow_, one of the four OAuth 2.0 [authorization grants](#authorization-grant). Also called an _auth code_, the authorization code is returned to the client application in response to the authentication of a [resource owner](#resource-owner). The auth code indicates the resource owner has delegated authorization to the client application to access their resources. As part of the flow, the auth code is later redeemed for an [access token](#access-token).
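For illustration only, a request for an auth code against the Microsoft identity platform looks roughly like the following sketch; the client ID, redirect URI, and state values are placeholders:

```
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
    client_id=00000000-0000-0000-0000-000000000000
    &response_type=code
    &redirect_uri=msauth.com.example.app%3A%2F%2Fauth
    &scope=user.read
    &state=12345
```

The authorization server returns the `code` to the redirect URI, and the client then exchanges it at the [token endpoint](#token-endpoint) for an access token.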
## Authorization endpoint
-One of the endpoints implemented by the [authorization server](#authorization-server), used to interact with the [resource owner](#resource-owner) in order to provide an [authorization grant](#authorization-grant) during an OAuth2 authorization grant flow. Depending on the authorization grant flow used, the actual grant provided can vary, including an [authorization code](#authorization-code) or [security token](#security-token).
+One of the endpoints implemented by the [authorization server](#authorization-server), used to interact with the [resource owner](#resource-owner) to provide an [authorization grant](#authorization-grant) during an OAuth 2.0 authorization grant flow. Depending on the authorization grant flow used, the actual grant provided can vary, including an [authorization code](#authorization-code) or [security token](#security-token).
-See the OAuth2 specification's [authorization grant types][OAuth2-AuthZ-Grant-Types] and [authorization endpoint][OAuth2-AuthZ-Endpoint] sections, and the [OpenIDConnect specification][OpenIDConnect-AuthZ-Endpoint] for more details.
+See the OAuth 2.0 specification's [authorization grant types][OAuth2-AuthZ-Grant-Types] and [authorization endpoint][OAuth2-AuthZ-Endpoint] sections, and the [OpenIDConnect specification][OpenIDConnect-AuthZ-Endpoint] for more details.
## Authorization grant
-A credential representing the [resource owner's](#resource-owner) [authorization](#authorization) to access its protected resources, granted to a [client application](#client-application). A client application can use one of the [four grant types defined by the OAuth2 Authorization Framework][OAuth2-AuthZ-Grant-Types] to obtain a grant, depending on client type/requirements: "authorization code grant", "client credentials grant", "implicit grant", and "resource owner password credentials grant". The credential returned to the client is either an [access token](#access-token), or an [authorization code](#authorization-code) (exchanged later for an access token), depending on the type of authorization grant used.
+A credential representing the [resource owner's](#resource-owner) [authorization](#authorization) to access its protected resources, granted to a [client application](#client-application). A client application can use one of the [four grant types defined by the OAuth 2.0 Authorization Framework][OAuth2-AuthZ-Grant-Types] to obtain a grant, depending on client type/requirements: "authorization code grant", "client credentials grant", "implicit grant", and "resource owner password credentials grant". The credential returned to the client is either an [access token](#access-token), or an [authorization code](#authorization-code) (exchanged later for an access token), depending on the type of authorization grant used.
## Authorization server
-As defined by the [OAuth2 Authorization Framework][OAuth2-Role-Def], the server responsible for issuing access tokens to the [client](#client-application) after successfully authenticating the [resource owner](#resource-owner) and obtaining its authorization. A [client application](#client-application) interacts with the authorization server at runtime via its [authorization](#authorization-endpoint) and [token](#token-endpoint) endpoints, in accordance with the OAuth2 defined [authorization grants](#authorization-grant).
+As defined by the [OAuth 2.0 Authorization Framework][OAuth2-Role-Def], the server responsible for issuing access tokens to the [client](#client-application) after successfully authenticating the [resource owner](#resource-owner) and obtaining its authorization. A [client application](#client-application) interacts with the authorization server at runtime via its [authorization](#authorization-endpoint) and [token](#token-endpoint) endpoints, in accordance with the OAuth 2.0 defined [authorization grants](#authorization-grant).
In the case of the Microsoft identity platform application integration, the Microsoft identity platform implements the authorization server role for Azure AD applications and Microsoft service APIs, for example [Microsoft Graph APIs][Microsoft-Graph].

## Claim
-A [security token](#security-token) contains claims, which provide assertions about one entity (such as a [client application](#client-application) or [resource owner](#resource-owner)) to another entity (such as the [resource server](#resource-server)). Claims are name/value pairs that relay facts about the token subject (for example, the security principal that was authenticated by the [authorization server](#authorization-server)). The claims present in a given token are dependent upon several variables, including the type of token, the type of credential used to authenticate the subject, the application configuration, etc.
+Claims are name/value pairs in a [security token](#security-token) that provide assertions made by one entity to another. These entities are typically the [client application](#client-application) or a [resource owner](#resource-owner) providing assertions to a [resource server](#resource-server). Claims relay facts about the token subject like the ID of the security principal that was authenticated by the [authorization server](#authorization-server). The claims present in a token can vary and depend on several factors like the type of token, the type of credential used for authenticating the subject, the application configuration, and others.
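For example, the decoded payload of a hypothetical access token might look like the following sketch; claim names such as `aud`, `iss`, `oid`, and `scp` are typical of Microsoft identity platform tokens, and all values here are placeholders:

```json
{
  "aud": "https://graph.microsoft.com",
  "iss": "https://login.microsoftonline.com/{tenant-id}/v2.0",
  "oid": "00000000-0000-0000-0000-000000000000",
  "scp": "User.Read Mail.Read",
  "exp": 1653955200
}
```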
See the [Microsoft identity platform token reference][AAD-Tokens-Claims] for more details.

## Client application
-Also known as the "[actor](#actor)". As defined by the [OAuth2 Authorization Framework][OAuth2-Role-Def], an application that makes protected resource requests on behalf of the [resource owner](#resource-owner). They receive permissions from the resource owner in the form of scopes. The term "client" does not imply any particular hardware implementation characteristics (for instance, whether the application executes on a server, a desktop, or other devices).
+Also known as the "[actor](#actor)". As defined by the [OAuth 2.0 Authorization Framework][OAuth2-Role-Def], an application that makes protected resource requests on behalf of the [resource owner](#resource-owner). They receive permissions from the resource owner in the form of scopes. The term "client" doesn't imply any particular hardware implementation characteristics (for instance, whether the application executes on a server, a desktop, or other devices).
-A client application requests [authorization](#authorization) from a resource owner to participate in an [OAuth2 authorization grant](#authorization-grant) flow, and may access APIs/data on the resource owner's behalf. The OAuth2 Authorization Framework [defines two types of clients][OAuth2-Client-Types], "confidential" and "public", based on the client's ability to maintain the confidentiality of its credentials. Applications can implement a [web client (confidential)](#web-client) which runs on a web server, a [native client (public)](#native-client) installed on a device, or a [user-agent-based client (public)](#user-agent-based-client) which runs in a device's browser.
+A client application requests [authorization](#authorization) from a resource owner to participate in an [OAuth 2.0 authorization grant](#authorization-grant) flow, and may access APIs/data on the resource owner's behalf. The OAuth 2.0 Authorization Framework [defines two types of clients][OAuth2-Client-Types], "confidential" and "public", based on the client's ability to maintain the confidentiality of its credentials. Applications can implement a [web client (confidential)](#web-client) which runs on a web server, a [native client (public)](#native-client) installed on a device, or a [user-agent-based client (public)](#user-agent-based-client) which runs in a device's browser.
## Consent
See [consent framework](consent-framework.md) for more information.
## ID token
-An [OpenID Connect][OpenIDConnect-ID-Token] [security token](#security-token) provided by an [authorization server's](#authorization-server) [authorization endpoint](#authorization-endpoint), which contains [claims](#claim) pertaining to the authentication of an end user [resource owner](#resource-owner). Like an access token, ID tokens are also represented as a digitally signed [JSON Web Token (JWT)][JWT]. Unlike an access token though, an ID token's claims are not used for purposes related to resource access and specifically access control.
+An [OpenID Connect][OpenIDConnect-ID-Token] [security token](#security-token) provided by an [authorization server's](#authorization-server) [authorization endpoint](#authorization-endpoint), which contains [claims](#claim) pertaining to the authentication of an end user [resource owner](#resource-owner). Like an access token, ID tokens are also represented as a digitally signed [JSON Web Token (JWT)][JWT]. Unlike an access token though, an ID token's claims aren't used for purposes related to resource access and specifically access control.
See the [ID token reference](id-tokens.md) for more details.
Eliminate the need for developers to manage credentials. Managed identities provide an identity for applications to use when connecting to resources that support Azure Active Directory (Azure AD) authentication.
## Microsoft identity platform
-The Microsoft identity platform is an evolution of the Azure Active Directory (Azure AD) identity service and developer platform. It allows developers to build applications that sign in all Microsoft identities, get tokens to call Microsoft Graph, other Microsoft APIs, or APIs that developers have built. ItΓÇÖs a full-featured platform that consists of an authentication service, libraries, application registration and configuration, full developer documentation, code samples, and other developer content. The Microsoft identity platform supports industry standard protocols such as OAuth 2.0 and OpenID Connect.
+The Microsoft identity platform is an evolution of the Azure Active Directory (Azure AD) identity service and developer platform. It allows developers to build applications that sign in all Microsoft identities, get tokens to call Microsoft Graph, other Microsoft APIs, or APIs that developers have built. It's a full-featured platform that consists of an authentication service, libraries, application registration and configuration, full developer documentation, code samples, and other developer content. The Microsoft identity platform supports industry standard protocols such as OAuth 2.0 and OpenID Connect.
## Multi-tenant application
See [How to sign in any Azure AD user using the multi-tenant application pattern
## Native client
-A type of [client application](#client-application) that is installed natively on a device. Since all code is executed on a device, it is considered a "public" client due to its inability to store credentials privately/confidentially. See [OAuth2 client types and profiles][OAuth2-Client-Types] for more details.
+A type of [client application](#client-application) that is installed natively on a device. Since all code is executed on a device, it's considered a "public" client due to its inability to store credentials privately/confidentially. See [OAuth 2.0 client types and profiles][OAuth2-Client-Types] for more details.
## Permissions

A [client application](#client-application) gains access to a [resource server](#resource-server) by declaring permission requests. Two types are available:
-* "Delegated" permissions, which specify [scope-based](#scopes) access using delegated authorization from the signed-in [resource owner](#resource-owner), are presented to the resource at run-time as ["scp" claims](#claim) in the client's [access token](#access-token). These indicate the permission granted to the [actor](#actor) by the [subject](#subject).
-* "Application" permissions, which specify [role-based](#roles) access using the client application's credentials/identity, are presented to the resource at run-time as ["roles" claims](#claim) in the client's access token. These indicate permissions granted to the [subject](#subject) by the tenant.
+- "Delegated" permissions, which specify [scope-based](#scopes) access using delegated authorization from the signed-in [resource owner](#resource-owner), are presented to the resource at run-time as ["scp" claims](#claim) in the client's [access token](#access-token). These indicate the permission granted to the [actor](#actor) by the [subject](#subject).
+- "Application" permissions, which specify [role-based](#roles) access using the client application's credentials/identity, are presented to the resource at run-time as ["roles" claims](#claim) in the client's access token. These indicate permissions granted to the [subject](#subject) by the tenant.
They also surface during the [consent](#consent) process, giving the administrator or resource owner the opportunity to grant/deny the client access to resources in their tenant.
Permission requests are configured on the **API permissions** page for an application, in the [Azure portal][AZURE-portal].
## Refresh token
-A type of [security token](#security-token) issued by an [authorization server](#authorization-server), and used by a [client application](#client-application) in order to request a new [access token](#access-token) before the access token expires. Typically in the form of a [JSON Web Token (JWT)][JWT].
+A type of [security token](#security-token) issued by an [authorization server](#authorization-server). Before an access token expires, a [client application](#client-application) includes its associated refresh token when it requests a new [access token](#access-token) from the authorization server. Refresh tokens are typically formatted as a [JSON Web Token (JWT)][JWT].
-Unlike access tokens, refresh tokens can be revoked. If a client application attempts to request a new access token using a refresh token that has been revoked, the authorization server will deny the request, and the client application will no longer have permission to access the [resource server](#resource-server) on behalf of the [resource owner](#resource-owner).
+Unlike access tokens, refresh tokens can be revoked. An authorization server denies any request from a client application that includes a refresh token that has been revoked. When the authorization server denies a request that includes a revoked refresh token, the client application loses the permission to access the [resource server](#resource-server) on behalf of the [resource owner](#resource-owner).
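As a sketch, redeeming a refresh token is a POST to the [token endpoint](#token-endpoint) using the `refresh_token` grant; all values below are placeholders:

```
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id=00000000-0000-0000-0000-000000000000
&grant_type=refresh_token
&refresh_token=<the-refresh-token>
&scope=user.read
```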
See the [refresh tokens](refresh-tokens.md) article for more details.

## Resource owner
-As defined by the [OAuth2 Authorization Framework][OAuth2-Role-Def], an entity capable of granting access to a protected resource. When the resource owner is a person, it is referred to as an end user. For example, when a [client application](#client-application) wants to access a user's mailbox through the [Microsoft Graph API][Microsoft-Graph], it requires permission from the resource owner of the mailbox. The "resource owner" is also sometimes called the [subject](#subject).
+As defined by the [OAuth 2.0 Authorization Framework][OAuth2-Role-Def], an entity capable of granting access to a protected resource. When the resource owner is a person, it's referred to as an end user. For example, when a [client application](#client-application) wants to access a user's mailbox through the [Microsoft Graph API][Microsoft-Graph], it requires permission from the resource owner of the mailbox. The "resource owner" is also sometimes called the [subject](#subject).
-Every [security token](#security-token) represents a resource owner. The resource owner is what the subject [claim](#claim), object ID claim, and personal data in the token represent. Resource owners are the party that grants delegated permissions to a client application, in the form of scopes. Resource owners are also the recipients of [roles](#roles) that indicate expanded permissions within a tenant or on an application.
+Every [security token](#security-token) represents a resource owner. The resource owner is what the subject [claim](#claim), object ID claim, and personal data in the token represent. Resource owners are the party that grants delegated permissions to a client application, in the form of scopes. Resource owners are also the recipients of [roles](#roles) that indicate expanded permissions within a tenant or on an application.
## Resource server
-As defined by the [OAuth2 Authorization Framework][OAuth2-Role-Def], a server that hosts protected resources, capable of accepting and responding to protected resource requests by [client applications](#client-application) that present an [access token](#access-token). Also known as a protected resource server, or resource application.
+As defined by the [OAuth 2.0 Authorization Framework][OAuth2-Role-Def], a server that hosts protected resources, capable of accepting and responding to protected resource requests by [client applications](#client-application) that present an [access token](#access-token). Also known as a protected resource server, or resource application.
A resource server exposes APIs and enforces access to its protected resources through [scopes](#scopes) and [roles](#roles), using the OAuth 2.0 Authorization Framework. Examples include the [Microsoft Graph API][Microsoft-Graph], which provides access to Azure AD tenant data, and the Microsoft 365 APIs that provide access to data such as mail and calendar.
Just like a client application, resource application's identity configuration is
## Roles
-Like [scopes](#scopes), app roles provide a way for a [resource server](#resource-server) to govern access to its protected resources. Unlike scopes, roles represent privileges that the [subject](#subject) has been granted beyond the baseline - this is why reading your own email is a scope, while being an email administrator that can read everyone's email is a role.
+Like [scopes](#scopes), app roles provide a way for a [resource server](#resource-server) to govern access to its protected resources. Unlike scopes, roles represent privileges that the [subject](#subject) has been granted beyond the baseline - this is why reading your own email is a scope, while being an email administrator that can read everyone's email is a role.
-App roles can support two assignment types: "user" assignment implements role-based access control for users/groups that require access to the resource, while "application" assignment implements the same for [client applications](#client-application) that require access. An app role can be defined as user-assignable, app-assignabnle, or both.
+App roles can support two assignment types: "user" assignment implements role-based access control for users/groups that require access to the resource, while "application" assignment implements the same for [client applications](#client-application) that require access. An app role can be defined as user-assignable, app-assignable, or both.
Roles are resource-defined strings (for example "Expense approver", "Read-only", "Directory.ReadWrite.All"), managed in the [Azure portal][AZURE-portal] via the resource's [application manifest](#application-manifest), and stored in the resource's [appRoles property][Graph-Sp-Resource]. The Azure portal is also used to assign users to "user" assignable roles, and configure client [application permissions](#permissions) to request "application" assignable roles.
A best practice naming convention is to use a "resource.operation.constraint" format.
## Security token
-A signed document containing claims, such as an OAuth2 token or SAML 2.0 assertion. For an OAuth2 [authorization grant](#authorization-grant), an [access token](#access-token) (OAuth2), [refresh token](#refresh-token), and an [ID Token](https://openid.net/specs/openid-connect-core-1_0.html#IDToken) are types of security tokens, all of which are implemented as a [JSON Web Token (JWT)][JWT].
+A signed document containing claims, such as an OAuth 2.0 token or SAML 2.0 assertion. For an OAuth 2.0 [authorization grant](#authorization-grant), an [access token](#access-token), a [refresh token](#refresh-token), and an [ID token](https://openid.net/specs/openid-connect-core-1_0.html#IDToken) are all types of security tokens, each implemented as a [JSON Web Token (JWT)][JWT].
## Service principal object
-When you register/update an application in the [Azure portal][AZURE-portal], the portal creates/updates both an [application object](#application-object) and a corresponding service principal object for that tenant. The application object *defines* the application's identity configuration globally (across all tenants where the associated application has been granted access), and is the template from which its corresponding service principal object(s) are *derived* for use locally at run-time (in a specific tenant).
+When you register/update an application in the [Azure portal][AZURE-portal], the portal creates/updates both an [application object](#application-object) and a corresponding service principal object for that tenant. The application object _defines_ the application's identity configuration globally (across all tenants where the associated application has been granted access), and is the template from which its corresponding service principal object(s) are _derived_ for use locally at run-time (in a specific tenant).
For more information, see [Application and Service Principal Objects][AAD-App-SP-Objects].

## Sign-in
-The process of a [client application](#client-application) initiating end-user authentication and capturing related state, for the purpose of acquiring a [security token](#security-token) and scoping the application session to that state. State can include artifacts such as user profile information, and information derived from token claims.
+The process of a [client application](#client-application) initiating end-user authentication and capturing related state for requesting a [security token](#security-token) and scoping the application session to that state. State can include artifacts like user profile information, and information derived from token claims.
The sign-in function of an application is typically used to implement single-sign-on (SSO). It may also be preceded by a "sign-up" function, as the entry point for an end user to gain access to an application (upon first sign-in). The sign-up function is used to gather and persist additional state specific to the user, and may require [user consent](#consent).
The process of unauthenticating an end user, detaching the user state associated with the application session during sign-in.
## Subject
-Also known as the [resource owner](#resource-owner).
+Also known as the [resource owner](#resource-owner).
## Tenant

An instance of an Azure AD directory is referred to as an Azure AD tenant. It provides several features, including:
-* a registry service for integrated applications
-* authentication of user accounts and registered applications
-* REST endpoints required to support various protocols including OAuth2 and SAML, including the [authorization endpoint](#authorization-endpoint), [token endpoint](#token-endpoint) and the "common" endpoint used by [multi-tenant applications](#multi-tenant-application).
+- a registry service for integrated applications
+- authentication of user accounts and registered applications
+- REST endpoints required to support various protocols including OAuth 2.0 and SAML, including the [authorization endpoint](#authorization-endpoint), [token endpoint](#token-endpoint) and the "common" endpoint used by [multi-tenant applications](#multi-tenant-application).
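For example, a tenant's OAuth 2.0 endpoints follow this pattern; the tenant ID is a placeholder, and `common` substitutes for a specific tenant in multi-tenant scenarios:

```
https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/authorize
https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
https://login.microsoftonline.com/common/oauth2/v2.0/authorize
```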
Azure AD tenants are created/associated with Azure and Microsoft 365 subscriptions during sign-up, providing Identity & Access Management features for the subscription. Azure subscription administrators can also create additional Azure AD tenants via the Azure portal. See [How to get an Azure Active Directory tenant][AAD-How-To-Tenant] for details on the various ways you can get access to a tenant. See [Associate or add an Azure subscription to your Azure Active Directory tenant][AAD-How-Subscriptions-Assoc] for details on the relationship between subscriptions and an Azure AD tenant, and for instructions on how to associate or add a subscription to an Azure AD tenant.

## Token endpoint
-One of the endpoints implemented by the [authorization server](#authorization-server) to support OAuth2 [authorization grants](#authorization-grant). Depending on the grant, it can be used to acquire an [access token](#access-token) (and related "refresh" token) to a [client](#client-application), or [ID token](#id-token) when used with the [OpenID Connect][OpenIDConnect] protocol.
+One of the endpoints implemented by the [authorization server](#authorization-server) to support OAuth 2.0 [authorization grants](#authorization-grant). Depending on the grant, it can be used to acquire an [access token](#access-token) (and related "refresh" token) to a [client](#client-application), or [ID token](#id-token) when used with the [OpenID Connect][OpenIDConnect] protocol.
## User-agent-based client
-A type of [client application](#client-application) that downloads code from a web server and executes within a user-agent (for instance, a web browser), such as a single-page application (SPA). Since all code is executed on a device, it is considered a "public" client due to its inability to store credentials privately/confidentially. For more information, see [OAuth2 client types and profiles][OAuth2-Client-Types].
+A type of [client application](#client-application) that downloads code from a web server and executes within a user-agent (for instance, a web browser), such as a single-page application (SPA). Since all code is executed on a device, it is considered a "public" client due to its inability to store credentials privately/confidentially. For more information, see [OAuth 2.0 client types and profiles][OAuth2-Client-Types].
## User principal
-Similar to the way a service principal object is used to represent an application instance, a user principal object is another type of security principal, which represents a user. The Microsoft Graph [User resource type][Graph-User-Resource] defines the schema for a user object, including user-related properties such as first and last name, user principal name, directory role membership, etc. This provides the user identity configuration for Azure AD to establish a user principal at run-time. The user principal is used to represent an authenticated user for Single Sign-On, recording [consent](#consent) delegation, making access control decisions, etc.
+Similar to the way a service principal object is used to represent an application instance, a user principal object is another type of security principal, which represents a user. The Microsoft Graph [User resource type][Graph-User-Resource] defines the schema for a user object, including user-related properties like first and last name, user principal name, directory role membership, etc. This provides the user identity configuration for Azure AD to establish a user principal at run-time. The user principal is used to represent an authenticated user for Single Sign-On, recording [consent](#consent) delegation, making access control decisions, etc.
## Web client
-A type of [client application](#client-application) that executes all code on a web server, and able to function as a "confidential" client by securely storing its credentials on the server. For more information, see [OAuth2 client types and profiles][OAuth2-Client-Types].
+A type of [client application](#client-application) that executes all code on a web server, functioning as a _confidential client_ because it can securely store its credentials on the server. For more information, see [OAuth 2.0 client types and profiles][OAuth2-Client-Types].
## Workload identity
-An identity used by a software workload (such as an application, service, script, or container) to authenticate and access other services and resources. In Azure AD, workload identities are apps, service principals, and managed identities. For more information, see [workload identity overview](workload-identities-overview.md).
+An identity used by a software workload like an application, service, script, or container to authenticate and access other services and resources. In Azure AD, workload identities are apps, service principals, and managed identities. For more information, see [workload identity overview](workload-identities-overview.md).
## Workload identity federation
Allows you to securely access Azure AD protected resources from external apps and services without needing to manage secrets (for supported scenarios).
## Next steps
-The [Microsoft identity platform Developer's Guide][AAD-Dev-Guide] is the landing page to use for all the Microsoft identity platform development-related topics, including an overview of [application integration][AAD-How-To-Integrate] and the basics of the [Microsoft identity platform authentication and supported authentication scenarios][AAD-Auth-Scenarios]. You can also find code samples & tutorials on how to get up and running quickly on [GitHub](https://github.com/azure-samples?utf8=%E2%9C%93&q=active%20directory&type=&language=).
+Many of the terms in this glossary are related to the OAuth 2.0 and OpenID Connect protocols. Though you don't need to know how the protocols work "on the wire" to use the identity platform, knowing some protocol basics can help you more easily build and debug authentication and authorization in your apps:
-Use the following comments section to provide feedback and help to refine and shape this content, including requests for new definitions or updating existing ones!
+- [OAuth 2.0 and OpenID Connect (OIDC) in the Microsoft identity platform](active-directory-v2-protocols.md)
<!--Image references-->
[OAuth2-Role-Def]: https://tools.ietf.org/html/rfc6749#page-6
[OpenIDConnect]: https://openid.net/specs/openid-connect-core-1_0.html
[OpenIDConnect-AuthZ-Endpoint]: https://openid.net/specs/openid-connect-core-1_0.html#AuthorizationEndpoint
-[OpenIDConnect-ID-Token]: https://openid.net/specs/openid-connect-core-1_0.html#IDToken
+[OpenIDConnect-ID-Token]: https://openid.net/specs/openid-connect-core-1_0.html#IDToken
active-directory Tutorial V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-ios.md
Title: "Tutorial: Create an iOS or macOS app that uses the Microsoft identity platform for authentication | Azure"-
-description: In this tutorial, you build an iOS or macOS app that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
-
+ Title: "Tutorial: Create an iOS or macOS app that uses the Microsoft identity platform for authentication"
+description: Build an iOS or macOS app that uses the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API on their behalf.
- Previously updated : 09/18/2020
+ Last updated : 05/28/2022
In this tutorial, you build an iOS or macOS app that integrates with the Microsoft identity platform to sign in users and get an access token to call the Microsoft Graph API.
-When you've completed the guide, your application will accept sign-ins of personal Microsoft accounts (including outlook.com, live.com, and others) and work or school accounts from any company or organization that uses Azure Active Directory. This tutorial is applicable to both iOS and macOS apps. Some steps are different between the two platforms.
+When you've completed the tutorial, your application will accept sign-ins of personal Microsoft accounts (including outlook.com, live.com, and others) and work or school accounts from any company or organization that uses Azure Active Directory. This tutorial is applicable to both iOS and macOS apps. Some steps are different between the two platforms.
In this tutorial:
If you'd like to download a completed version of the app you build in this tutor
1. Select **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)** under **Supported account types**.
1. Select **Register**.
1. Under **Manage**, select **Authentication** > **Add a platform** > **iOS/macOS**.
-1. Enter your project's Bundle ID. If you downloaded the code, this is `com.microsoft.identitysample.MSALiOS`. If you're creating your own project, select your project in Xcode and open the **General** tab. The bundle identifier appears in the **Identity** section.
-1. Select **Configure** and save the **MSAL Configuration** that appears in the **MSAL configuration** page so you can enter it when you configure your app later.
+1. Enter your project's Bundle ID. If you downloaded the code sample, the Bundle ID is `com.microsoft.identitysample.MSALiOS`. If you're creating your own project, select your project in Xcode and open the **General** tab. The bundle identifier appears in the **Identity** section.
+1. Select **Configure** and save the **MSAL Configuration** that appears in the **MSAL configuration** page so you can enter it when you configure your app later.
1. Select **Done**.

## Add MSAL
Choose one of the following ways to install the MSAL library in your app:
### CocoaPods
-1. If you're using [CocoaPods](https://cocoapods.org/), install `MSAL` by first creating an empty file called `podfile` in the same folder as your project's `.xcodeproj` file. Add the following to `podfile`:
+1. If you're using [CocoaPods](https://cocoapods.org/), install `MSAL` by first creating an empty file called _podfile_ in the same folder as your project's _.xcodeproj_ file. Add the following to _podfile_:
```
use_frameworks!

target '<your-target-here>' do
   pod 'MSAL'
end
```

2. Replace `<your-target-here>` with the name of your project.
-3. In a terminal window, navigate to the folder that contains the `podfile` you created and run `pod install` to install the MSAL library.
+3. In a terminal window, navigate to the folder that contains the _podfile_ you created and run `pod install` to install the MSAL library.
4. Close Xcode and open `<your project name>.xcworkspace` to reload the project in Xcode.

### Carthage
-If you're using [Carthage](https://github.com/Carthage/Carthage), install `MSAL` by adding it to your `Cartfile`:
+If you're using [Carthage](https://github.com/Carthage/Carthage), install `MSAL` by adding it to your _Cartfile_:
``` github "AzureAD/microsoft-authentication-library-for-objc" "master" ```
-From a terminal window, in the same directory as the updated `Cartfile`, run the following command to have Carthage update the dependencies in your project.
+From a terminal window, in the same directory as the updated _Cartfile_, run the following command to have Carthage update the dependencies in your project.
iOS:
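For iOS, the update command is typically the following sketch; use `--platform macOS` for a macOS target:

```
carthage update --platform iOS
```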
You can also use Git Submodule, or check out the latest release to use as a framework in your application.
Next, we'll add your app registration to your code.
-First, add the following import statement to the top of the `ViewController.swift`, as well as `AppDelegate.swift` or `SceneDelegate.swift` files:
+First, add the following import statement to the top of the _ViewController.swift_ file and either _AppDelegate.swift_ or _SceneDelegate.swift_:
```swift
import MSAL
```
-Then Add the following code to `ViewController.swift` prior to `viewDidLoad()`:
+Next, add the following code to _ViewController.swift_ before `viewDidLoad()`:
```swift
// Update the below to your client ID you received in the portal. The below is for running the demo only
var webViewParameters : MSALWebviewParameters?
var currentAccount: MSALAccount?
```
-The only value you modify above is the value assigned to `kClientID`to be your [Application ID](./developer-glossary.md#application-id-client-id). This value is part of the MSAL Configuration data that you saved during the step at the beginning of this tutorial to register the application in the Azure portal.
+The only value you modify above is the value assigned to `kClientID` to be your [Application ID](./developer-glossary.md#application-client-id). This value is part of the MSAL Configuration data that you saved during the step at the beginning of this tutorial to register the application in the Azure portal.
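For reference, a sketch of how these declarations typically look in such a sample; the GUID, authority, and scope values are placeholders you replace with your own configuration:

```swift
// Placeholder values; replace kClientID with your Application (client) ID.
let kClientID = "00000000-0000-0000-0000-000000000000"
let kAuthority = "https://login.microsoftonline.com/common"
let kScopes: [String] = ["user.read"]

var applicationContext : MSALPublicClientApplication?
```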
## Configure Xcode project settings
Add a new keychain group to your project **Signing & Capabilities**. The keychain group should be `com.microsoft.adalcache` on iOS and `com.microsoft.identity.universalstorage` on macOS.
## For iOS only, configure URL schemes
-In this step, you will register `CFBundleURLSchemes` so that the user can be redirected back to the app after sign in. By the way, `LSApplicationQueriesSchemes` also allows your app to make use of Microsoft Authenticator.
+In this step, you'll register `CFBundleURLSchemes` so that the user can be redirected back to the app after sign in. Adding `LSApplicationQueriesSchemes` also allows your app to use Microsoft Authenticator.
-In Xcode, open `Info.plist` as a source code file, and add the following inside of the `<dict>` section. Replace `[BUNDLE_ID]` with the value you used in the Azure portal. If you downloaded the code, the bundle identifier is `com.microsoft.identitysample.MSALiOS`. If you're creating your own project, select your project in Xcode and open the **General** tab. The bundle identifier appears in the **Identity** section.
+In Xcode, open _Info.plist_ as a source code file, and add the following inside of the `<dict>` section. Replace `[BUNDLE_ID]` with the value you used in the Azure portal. If you downloaded the code, the bundle identifier is `com.microsoft.identitysample.MSALiOS`. If you're creating your own project, select your project in Xcode and open the **General** tab. The bundle identifier appears in the **Identity** section.
```xml
<key>CFBundleURLTypes</key>
<!-- ... -->
```
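For reference, a sketch of the complete `<dict>` entries, assuming the standard MSAL URL scheme format of `msauth.` followed by your bundle ID:

```xml
<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>msauth.[BUNDLE_ID]</string>
        </array>
    </dict>
</array>
<key>LSApplicationQueriesSchemes</key>
<array>
    <string>msauthv2</string>
    <string>msauthv3</string>
</array>
```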
## Create your app's UI
-Now create a UI that includes a button to call the Microsoft Graph API, another to sign out, and a text view to see some output by adding the following code to the `ViewController`class:
+Now create a UI that includes a button to call the Microsoft Graph API, another to sign out, and a text view to see some output by adding the following code to the `ViewController` class:
### iOS UI
Next, also inside the `ViewController` class, replace the `viewDidLoad()` method with the following code:
### Initialize MSAL
-Add the following `initMSAL` method to the `ViewController` class:
+To the `ViewController` class, add the `initMSAL` method:
```swift
func initMSAL() throws {
    // ...
}
```
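A sketch of a typical `initMSAL` body, assuming the `kClientID` and `kAuthority` constants shown earlier:

```swift
func initMSAL() throws {
    guard let authorityURL = URL(string: kAuthority) else { return }

    let authority = try MSALAADAuthority(url: authorityURL)
    let msalConfiguration = MSALPublicClientApplicationConfig(clientId: kClientID,
                                                              redirectUri: nil,
                                                              authority: authority)
    self.applicationContext = try MSALPublicClientApplication(configuration: msalConfiguration)
    self.initWebViewParams()
}
```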
-Add the following after `initMSAL` method to the `ViewController` class.
+Still in the `ViewController` class and after the `initMSAL` method, add the `initWebViewParams` method:
### iOS code:
```swift
func initWebViewParams() {
    // ...
}
```
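On iOS, the body typically creates `MSALWebviewParameters` anchored to the current view controller; a sketch:

```swift
func initWebViewParams() {
    self.webViewParameters = MSALWebviewParameters(authPresentationViewController: self)
}
```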
-### For iOS only, handle the sign-in callback
+### Handle the sign-in callback (iOS only)
-Open the `AppDelegate.swift` file. To handle the callback after sign-in, add `MSALPublicClientApplication.handleMSALResponse` to the `appDelegate` class like this:
+Open the _AppDelegate.swift_ file. To handle the callback after sign-in, add `MSALPublicClientApplication.handleMSALResponse` to the `appDelegate` class like this:
```swift
// Inside AppDelegate...
func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
    return MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: options[UIApplication.OpenURLOptionsKey.sourceApplication] as? String)
}
```
-**If you are using Xcode 11**, you should place MSAL callback into the `SceneDelegate.swift` instead.
+**If you are using Xcode 11**, you should place MSAL callback into the _SceneDelegate.swift_ instead.
If you support both UISceneDelegate and UIApplicationDelegate for compatibility with older iOS, the MSAL callback needs to be placed into both files.

```swift
func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
    // Retrieve the URL from URLContexts and pass it to
    // MSALPublicClientApplication.handleMSALResponse, as in AppDelegate.
}
```
Now, we can implement the application's UI processing logic and get tokens interactively through MSAL.
-MSAL exposes two primary methods for getting tokens: `acquireTokenSilently()` and `acquireTokenInteractively()`:
+MSAL exposes two primary methods for getting tokens: `acquireTokenSilently()` and `acquireTokenInteractively()`.
-- `acquireTokenSilently()` attempts to sign in a user and get tokens without any user interaction as long as an account is present. `acquireTokenSilently()` requires providing a valid `MSALAccount` which can be retrieved by using one of MSAL account enumeration APIs. This sample uses `applicationContext.getCurrentAccount(with: msalParameters, completionBlock: {})` to retrieve current account.
+- `acquireTokenSilently()` attempts to sign in a user and get tokens without user interaction as long as an account is present. `acquireTokenSilently()` requires a valid `MSALAccount`, which can be retrieved by using one of MSAL's account enumeration APIs. This tutorial uses `applicationContext.getCurrentAccount(with: msalParameters, completionBlock: {})` to retrieve the current account. A sketch of the silent call follows this list.
- `acquireTokenInteractively()` always shows UI when attempting to sign in the user. It may use session cookies in the browser or an account in the Microsoft authenticator to provide an interactive-SSO experience.
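A sketch of the silent path, assuming the `applicationContext`, `currentAccount`, and `kScopes` members shown earlier:

```swift
guard let applicationContext = self.applicationContext,
      let account = self.currentAccount else { return }

let parameters = MSALSilentTokenParameters(scopes: kScopes, account: account)
applicationContext.acquireTokenSilent(with: parameters) { (result, error) in
    if let error = error as NSError?, error.domain == MSALErrorDomain,
       error.code == MSALError.interactionRequired.rawValue {
        // The server requires interaction (for example, consent or MFA);
        // fall back to acquiring the token interactively.
        return
    }
    guard let result = result else { return }
    // result.accessToken authorizes calls to the requested scopes.
    print("Silent token acquired for \(result.account.username ?? "unknown")")
}
```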
Add the following code to the `ViewController` class:
#### Get a token interactively
-The following code snippet gets a token for the first time by creating an `MSALInteractiveTokenParameters` object and calling `acquireToken`. Next you will add code that:
+The following code snippet gets a token for the first time by creating an `MSALInteractiveTokenParameters` object and calling `acquireToken`. Next you'll add code that:
1. Creates `MSALInteractiveTokenParameters` with scopes.
2. Calls `acquireToken()` with the created parameters (see the sketch that follows).
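A sketch of those two steps, assuming the `kScopes` constant and `applicationContext` member shown earlier:

```swift
let webParameters = MSALWebviewParameters(authPresentationViewController: self)
let parameters = MSALInteractiveTokenParameters(scopes: kScopes,
                                                webviewParameters: webParameters)

applicationContext?.acquireToken(with: parameters) { (result, error) in
    if let error = error {
        print("Could not acquire token: \(error)")
        return
    }
    guard let result = result else { return }
    self.currentAccount = result.account
    // result.accessToken is ready to use against Microsoft Graph.
}
```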
Add the following helper methods to the `ViewController` class to complete the s
}
```
-### For iOS only, get additional device information
+### iOS only: get additional device information
Use the following code to read the current device configuration, including whether the device is configured as shared:
### Multi-account applications
-This app is built for a single account scenario. MSAL also supports multi-account scenarios, but it requires some additional work from apps. You will need to create UI to help users select which account they want to use for each action that requires tokens. Alternatively, your app can implement a heuristic to select which account to use by querying all accounts from MSAL. For example, see `accountsFromDeviceForParameters:completionBlock:` [API](https://azuread.github.io/microsoft-authentication-library-for-objc/Classes/MSALPublicClientApplication.html#/c:objc(cs)MSALPublicClientApplication(im)accountsFromDeviceForParameters:completionBlock:)
+This app is built for a single account scenario. MSAL also supports multi-account scenarios, but it requires more application work. You'll need to create UI to help users select which account they want to use for each action that requires tokens. Alternatively, your app can implement a heuristic to select which account to use by querying all accounts from MSAL. For example, see `accountsFromDeviceForParameters:completionBlock:` [API](https://azuread.github.io/microsoft-authentication-library-for-objc/Classes/MSALPublicClientApplication.html#/c:objc(cs)MSALPublicClientApplication(im)accountsFromDeviceForParameters:completionBlock:)
## Test your app

Build and deploy the app to a test device or simulator. You should be able to sign in and get tokens for Azure AD or personal Microsoft accounts.
-The first time a user signs into your app, they will be prompted by Microsoft identity to consent to the permissions requested. While most users are capable of consenting, some Azure AD tenants have disabled user consent, which requires admins to consent on behalf of all users. To support this scenario, register your app's scopes in the Azure portal.
+The first time a user signs into your app, they'll be prompted by Microsoft identity to consent to the permissions requested. While most users are capable of consenting, some Azure AD tenants have disabled user consent, which requires admins to consent on behalf of all users. To support this scenario, register your app's scopes in the Azure portal.
After you sign in, the app will display the data returned from the Microsoft Graph `/me` endpoint.
active-directory Battery Management Information System Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/battery-management-information-system-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with BMIS - Battery Management Information System'
+description: Learn how to configure single sign-on between Azure Active Directory and BMIS - Battery Management Information System.
++++++++ Last updated : 05/27/2022++++
+# Tutorial: Azure AD SSO integration with BMIS - Battery Management Information System
+
+In this tutorial, you'll learn how to integrate BMIS - Battery Management Information System with Azure Active Directory (Azure AD). When you integrate BMIS - Battery Management Information System with Azure AD, you can:
+
+* Control in Azure AD who has access to BMIS - Battery Management Information System.
+* Enable your users to be automatically signed in to BMIS - Battery Management Information System with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* BMIS - Battery Management Information System single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* BMIS - Battery Management Information System supports **IDP** initiated SSO.
+
+## Add BMIS - Battery Management Information System from the gallery
+
+To configure the integration of BMIS - Battery Management Information System into Azure AD, you need to add BMIS - Battery Management Information System from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **BMIS - Battery Management Information System** in the search box.
+1. Select **BMIS - Battery Management Information System** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for BMIS - Battery Management Information System
+
+Configure and test Azure AD SSO with BMIS - Battery Management Information System using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in BMIS - Battery Management Information System.
+
+To configure and test Azure AD SSO with BMIS - Battery Management Information System, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure BMIS - Battery Management Information System SSO](#configure-bmisbattery-management-information-system-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create BMIS - Battery Management Information System test user](#create-bmisbattery-management-information-system-test-user)** - to have a counterpart of B.Simon in BMIS - Battery Management Information System that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **BMIS - Battery Management Information System** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot that shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. You only need to save the configuration by clicking the **Save** button.
+
+1. The BMIS - Battery Management Information System application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot that shows the list of default attributes.](common/default-attributes.png "Image")
+
+1. In addition to the above, the BMIS - Battery Management Information System application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute |
+ | --- | --- |
+ | email | user.mail |
+ | first_name | user.givenname |
+ | last_name | user.surname |
+ | user_name | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up BMIS - Battery Management Information System** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot that shows how to copy the appropriate configuration URLs.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to BMIS - Battery Management Information System.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **BMIS - Battery Management Information System**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure BMIS - Battery Management Information System SSO
+
+To configure single sign-on on the **BMIS - Battery Management Information System** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [BMIS - Battery Management Information System support team](mailto:bmissupport@midtronics.com). They use these values to set up the SAML SSO connection properly on both sides.
+
+### Create BMIS - Battery Management Information System test user
+
+In this section, you create a user called Britta Simon in BMIS - Battery Management Information System. Work with [BMIS - Battery Management Information System support team](mailto:bmissupport@midtronics.com) to add the users in the BMIS - Battery Management Information System platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the BMIS - Battery Management Information System for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the BMIS - Battery Management Information System tile in My Apps, you should be automatically signed in to the BMIS - Battery Management Information System for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure BMIS - Battery Management Information System, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory E2open Lsp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/e2open-lsp-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with E2open LSP'
+description: Learn how to configure single sign-on between Azure Active Directory and E2open LSP.
++++++++ Last updated : 05/23/2022++++
+# Tutorial: Azure AD SSO integration with E2open LSP
+
+In this tutorial, you'll learn how to integrate E2open LSP with Azure Active Directory (Azure AD). When you integrate E2open LSP with Azure AD, you can:
+
+* Control in Azure AD who has access to E2open LSP.
+* Enable your users to be automatically signed in to E2open LSP with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* E2open LSP single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* E2open LSP supports **SP** initiated SSO.
+
+## Add E2open LSP from the gallery
+
+To configure the integration of E2open LSP into Azure AD, you need to add E2open LSP from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **E2open LSP** in the search box.
+1. Select **E2open LSP** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for E2open LSP
+
+Configure and test Azure AD SSO with E2open LSP using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in E2open LSP.
+
+To configure and test Azure AD SSO with E2open LSP, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure E2open LSP SSO](#configure-e2open-lsp-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create E2open LSP test user](#create-e2open-lsp-test-user)** - to have a counterpart of B.Simon in E2open LSP that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **E2open LSP** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot that shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<Customer name>-<Environment>.tms-lsp.blujaysolutions.net/navi/saml/metadata`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<Customer name>-<Environment>.tms-lsp.blujaysolutions.net/navi/sam`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<Customer name>-<Environment>.tms-lsp.blujaysolutions.net/navi/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [E2open LSP Client support team](mailto:customersupport@e2open.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** value and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to E2open LSP.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **E2open LSP**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure E2open LSP SSO
+
+To configure single sign-on on the **E2open LSP** side, you need to send the **App Federation Metadata Url** to the [E2open LSP support team](mailto:customersupport@e2open.com). They use this value to set up the SAML SSO connection properly on both sides.
+
+### Create E2open LSP test user
+
+In this section, you create a user called Britta Simon in E2open LSP. Work with [E2open LSP support team](mailto:customersupport@e2open.com) to add the users in the E2open LSP platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the E2open LSP Sign-on URL, where you can initiate the login flow.
+
+* Go to the E2open LSP Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the E2open LSP tile in My Apps, you'll be redirected to the E2open LSP Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure E2open LSP, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Forcepoint Cloud Security Gateway Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/forcepoint-cloud-security-gateway-tutorial.md
Previously updated : 04/19/2022 Last updated : 05/26/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot that shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, perform the following steps:
Follow these steps to enable Azure AD SSO in the Azure portal.
c. In the **Sign-on URL** text box, type the URL: `https://mailcontrol.com`
-1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up Forcepoint Cloud Security Gateway - User Authentication** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot that shows how to copy the appropriate configuration URLs.](common/copy-configuration-urls.png "Authentication")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Forcepoint Cloud Security Gateway.
b. Select **Identity provider** from the dropdown.
- c. Open the downloaded **Certificate (Base64)** from the Azure portal and upload the file into the **File upload** textbox by clicking **Browse** option.
+ c. Upload the **Federation Metadata XML** file from the Azure portal into the **File upload** textbox by clicking the **Browse** option.
d. Click **Save**.
active-directory Github Ae Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-ae-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with GitHub AE | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and GitHub AE.
+ Title: 'Tutorial: Azure AD SSO integration with GitHub Enterprise Server'
+description: Learn how to configure single sign-on between Azure Active Directory and GitHub Enterprise Server.
Previously updated : 08/31/2021 Last updated : 05/20/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with GitHub AE
+# Tutorial: Azure AD SSO integration with GitHub Enterprise Server
-In this tutorial, you'll learn how to integrate GitHub AE with Azure Active Directory (Azure AD). When you integrate GitHub AE with Azure AD, you can:
+In this tutorial, you'll learn how to integrate GitHub Enterprise Server with Azure Active Directory (Azure AD). When you integrate GitHub Enterprise Server with Azure AD, you can:
-* Control in Azure AD who has access to GitHub AE.
-* Enable your users to be automatically signed-in to GitHub AE with their Azure AD accounts.
+* Control in Azure AD who has access to GitHub Enterprise Server.
+* Enable your users to be automatically signed in to GitHub Enterprise Server with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate GitHub AE with Azure Active Directory (Azure AD).
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* GitHub AE, ready for [initialization](https://docs.github.com/github-ae@latest/admin/configuration/initializing-github-ae).
+* GitHub Enterprise Server, ready for [initialization](https://docs.github.com/github-ae@latest/admin/configuration/initializing-github-ae).
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* GitHub AE supports **SP** and **IDP** initiated SSO.
-* GitHub AE supports **Just In Time** user provisioning.
-* GitHub AE supports [Automated user provisioning](github-ae-provisioning-tutorial.md).
+* GitHub Enterprise Server supports **SP** and **IDP** initiated SSO.
+* GitHub Enterprise Server supports **Just In Time** user provisioning.
+* GitHub Enterprise Server supports [Automated user provisioning](github-ae-provisioning-tutorial.md).
-## Adding GitHub AE from the gallery
+## Adding GitHub Enterprise Server from the gallery
-To configure the integration of GitHub AE into Azure AD, you need to add GitHub AE from the gallery to your list of managed SaaS apps.
+To configure the integration of GitHub Enterprise Server into Azure AD, you need to add GitHub Enterprise Server from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **GitHub AE** in the search box.
-1. Select **GitHub AE** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **GitHub Enterprise Server** in the search box.
+1. Select **GitHub Enterprise Server** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+## Configure and test Azure AD SSO for GitHub Enterprise Server
-## Configure and test Azure AD SSO for GitHub AE
+Configure and test Azure AD SSO with GitHub Enterprise Server using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in GitHub Enterprise Server.
-Configure and test Azure AD SSO with GitHub AE using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in GitHub AE.
-
-To configure and test Azure AD SSO with GitHub AE, complete the following building blocks:
+To configure and test Azure AD SSO with GitHub Enterprise Server, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
    1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
    1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure GitHub AE SSO](#configure-github-ae-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create GitHub AE test user](#create-github-ae-test-user)** - to have a counterpart of B.Simon in GitHub AE that is linked to the Azure AD representation of user.
+1. **[Configure GitHub Enterprise Server SSO](#configure-github-enterprise-server-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create GitHub Enterprise Server test user](#create-github-enterprise-server-test-user)** - to have a counterpart of B.Simon in GitHub Enterprise Server that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **GitHub AE** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **GitHub Enterprise Server** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot that shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern: `https://<YOUR-GITHUB-AE-HOSTNAME>`
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<YOUR-GITHUB-AE-HOSTNAME>/sso`

> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL, Reply URL and Identifier. Contact [GitHub AE Client support team](mailto:support@github.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [GitHub Enterprise Server Client support team](mailto:support@github.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. GitHub AE application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. The GitHub Enterprise Server application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot that shows the list of default attributes for the Enterprise Server application.](common/default-attributes.png "Attributes")
1. Edit **User Attributes & Claims**.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Save**.
- ![manage claim](./media/github-ae-tutorial/administrator.png)
+ ![Screenshot that shows how to manage claims for attributes.](./media/github-ae-tutorial/administrator.png "Claims")
> [!NOTE]
> For instructions on how to add a claim, follow the [link](https://docs.github.com/en/github-ae@latest/admin/authentication/configuring-authentication-and-provisioning-for-your-enterprise-using-azure-ad).

1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
-1. On the **Set up GitHub AE** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up GitHub Enterprise Server** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot that shows how to copy the appropriate configuration URLs.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to GitHub AE.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to GitHub Enterprise Server.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **GitHub AE**.
+1. In the applications list, select **GitHub Enterprise Server**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure GitHub AE SSO
+## Configure GitHub Enterprise Server SSO
-To configure SSO on GitHub AE side, you need to follow the instructions mentioned [here](https://docs.github.com/github-ae@latest/admin/authentication/configuring-saml-single-sign-on-for-your-enterprise#enabling-saml-sso).
+To configure SSO on the GitHub Enterprise Server side, you need to follow the instructions mentioned [here](https://docs.github.com/github-ae@latest/admin/authentication/configuring-saml-single-sign-on-for-your-enterprise#enabling-saml-sso).
-### Create GitHub AE test user
+### Create GitHub Enterprise Server test user
-In this section, a user called B.Simon is created in GitHub AE. GitHub AE supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in GitHub AE, a new one is created after authentication.
+In this section, a user called B.Simon is created in GitHub Enterprise Server. GitHub Enterprise Server supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in GitHub Enterprise Server, a new one is created after authentication.
-GitHub AE also supports automatic user provisioning, you can find more details [here](./github-ae-provisioning-tutorial.md) on how to configure automatic user provisioning.
+GitHub Enterprise Server also supports automatic user provisioning; you can find more details [here](./github-ae-provisioning-tutorial.md) on how to configure automatic user provisioning.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with the following options.
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to GitHub AE Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect you to the GitHub Enterprise Server Sign-on URL, where you can initiate the login flow.
-* Go to GitHub AE Sign-on URL directly and initiate the login flow from there.
+* Go to the GitHub Enterprise Server Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the GitHub AE for which you set up the SSO
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the GitHub Enterprise Server for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the GitHub AE tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the GitHub AE for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the GitHub Enterprise Server tile in My Apps, if configured in SP mode you'll be redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the GitHub Enterprise Server for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps

* [Configuring user provisioning for your enterprise](https://docs.github.com/github-ae@latest/admin/authentication/configuring-user-provisioning-for-your-enterprise).
-* Once you configure GitHub AE you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* Once you configure GitHub Enterprise Server, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory S4 Digitsec Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/s4-digitsec-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with S4 - Digitsec'
+description: Learn how to configure single sign-on between Azure Active Directory and S4 - Digitsec.
++++++++ Last updated : 05/23/2022+++
+# Tutorial: Azure AD SSO integration with S4 - Digitsec
+
+In this tutorial, you'll learn how to integrate S4 - Digitsec with Azure Active Directory (Azure AD). When you integrate S4 - Digitsec with Azure AD, you can:
+
+* Control in Azure AD who has access to S4 - Digitsec.
+* Enable your users to be automatically signed in to S4 - Digitsec with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* S4 - Digitsec single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* S4 - Digitsec supports **SP and IDP** initiated SSO.
+* S4 - Digitsec supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add S4 - Digitsec from the gallery
+
+To configure the integration of S4 - Digitsec into Azure AD, you need to add S4 - Digitsec from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **S4 - Digitsec** in the search box.
+1. Select **S4 - Digitsec** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for S4 - Digitsec
+
+Configure and test Azure AD SSO with S4 - Digitsec using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in S4 - Digitsec.
+
+To configure and test Azure AD SSO with S4 - Digitsec, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure S4 - Digitsec SSO](#configure-s4digitsec-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create S4 - Digitsec test user](#create-s4digitsec-test-user)** - to have a counterpart of B.Simon in S4 - Digitsec that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **S4 - Digitsec** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot that shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, you don't have to perform any steps as the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://s4.digitsec.com`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up S4 - Digitsec** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot that shows how to copy the appropriate configuration URLs.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to S4 - Digitsec.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **S4 - Digitsec**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure S4 - Digitsec SSO
+
+To configure single sign-on on the S4 - Digitsec side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [S4 - Digitsec support team](mailto:Support@digitsec.com). They use these values to set up the SAML SSO connection properly on both sides.
+
+### Create S4 - Digitsec test user
+
+In this section, a user called B.Simon is created in S4 - Digitsec. S4 - Digitsec supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in S4 - Digitsec, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the S4 - Digitsec Sign-on URL, where you can initiate the login flow.
+
+* Go to the S4 - Digitsec Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the S4 - Digitsec for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the S4 - Digitsec tile in My Apps, if configured in SP mode you'll be redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the S4 - Digitsec for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure S4 - Digitsec, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Standard For Success Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/standard-for-success-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Standard for Success K-12'
+description: Learn how to configure single sign-on between Azure Active Directory and Standard for Success K-12.
++++++++ Last updated : 05/27/2022++++
+# Tutorial: Azure AD SSO integration with Standard for Success K-12
+
+In this tutorial, you'll learn how to integrate Standard for Success K-12 with Azure Active Directory (Azure AD). When you integrate Standard for Success K-12 with Azure AD, you can:
+
+* Control in Azure AD who has access to Standard for Success K-12.
+* Enable your users to be automatically signed in to Standard for Success K-12 with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Standard for Success K-12 single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Standard for Success K-12 supports **SP** and **IDP** initiated SSO.
+
+## Add Standard for Success K-12 from the gallery
+
+To configure the integration of Standard for Success K-12 into Azure AD, you need to add Standard for Success K-12 from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Standard for Success K-12** in the search box.
+1. Select **Standard for Success K-12** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Standard for Success K-12
+
+Configure and test Azure AD SSO with Standard for Success K-12 using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Standard for Success K-12.
+
+To configure and test Azure AD SSO with Standard for Success K-12, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Standard for Success K-12 SSO](#configure-standard-for-success-k-12-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Standard for Success K-12 test user](#create-standard-for-success-k-12-test-user)** - to have a counterpart of B.Simon in Standard for Success K-12 that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Standard for Success K-12** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot that shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using the following pattern:
+ `api://<ApplicationId>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://edu.standardforsuccess.com/access/mssaml_consume?did=<INSTITUTION-ID>`
+
+1. Click **Set additional URLs** and perform the following steps if you wish to configure the application in SP initiated mode:
+
+ a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://edu.standardforsuccess.com/access/mssaml_int?did=<INSTITUTION-ID>`
+
+ b. In the **Relay State** text box, type a URL using the following pattern:
+ `https://edu.standardforsuccess.com/access/mssaml_consume?did=<INSTITUTION-ID>`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Relay State. Contact [Standard for Success K-12 Client support team](mailto:help@standardforsuccess.com) to get the INSTITUTION-ID value. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. In the **SAML Signing Certificate** section, click the **Edit** button to open the **SAML Signing Certificate** dialog.
+
+ ![Screenshot that shows how to edit the SAML Signing Certificate.](common/edit-certificate.png "Signing Certificate")
+
+1. In the **SAML Signing Certificate** section, copy the **Thumbprint Value** and save it on your computer.
+
+ ![Screenshot that shows how to copy the thumbprint value.](common/copy-thumbprint.png "Thumbprint")
+
+1. On the **Set up Standard for Success K-12** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot that shows how to copy the appropriate configuration URLs.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Standard for Success K-12.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Standard for Success K-12**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Standard for Success K-12 SSO
+
+1. Log in to your Standard for Success K-12 company site as an administrator with superuser access.
+
+1. From the menu, navigate to **Utilities** -> **Tools & Features**.
+
+1. Scroll down to **Single Sign On Settings**, click the **Microsoft Azure Single Sign On** link, and perform the following steps:
+
+ ![Screenshot that shows the Configuration Settings.](./media/standard-for-success-tutorial/settings.png "Configuration")
+
+ a. Select **Enable Azure Single Sign On** checkbox.
+
+ b. In the **Login URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ c. In the **Azure AD Identifier** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ d. Enter the **Application ID** in the **Application ID** text box.
+
+ e. In the **Certificate Thumbprint** text box, paste the **Thumbprint Value** that you copied from the Azure portal.
+
+ f. Click **Save**.
+
+### Create Standard for Success K-12 test user
+
+1. In a different web browser window, log into your Standard for Success K-12 website as an administrator with superuser privileges.
+
+1. From the menu, navigate to **Utilities** -> **Accounts Manager**, then click **Create New User** and perform the following steps:
+
+ ![Screenshot that shows the User Information fields.](./media/standard-for-success-tutorial/name.png "User Information")
+
+ a. In the **First Name** text box, enter the first name of the user.
+
+ b. In the **Last Name** text box, enter the last name of the user.
+
+ c. In the **Email** text box, enter the email address that you added in Azure.
+
+ d. Scroll to the bottom and click **Create User**.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Standard for Success K-12 Sign-on URL, where you can initiate the login flow.
+
+* Go to the Standard for Success K-12 Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Standard for Success K-12 for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Standard for Success K-12 tile in My Apps, if configured in SP mode you'll be redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Standard for Success K-12 for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Standard for Success K-12, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Tvu Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tvu-service-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with TVU Service'
+description: Learn how to configure single sign-on between Azure Active Directory and TVU Service.
++++++++ Last updated : 05/21/2022++++
+# Tutorial: Azure AD SSO integration with TVU Service
+
+In this tutorial, you'll learn how to integrate TVU Service with Azure Active Directory (Azure AD). When you integrate TVU Service with Azure AD, you can:
+
+* Control in Azure AD who has access to TVU Service.
+* Enable your users to be automatically signed in to TVU Service with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* TVU Service single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* TVU Service supports **IDP** initiated SSO.
+
+## Add TVU Service from the gallery
+
+To configure the integration of TVU Service into Azure AD, you need to add TVU Service from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **TVU Service** in the search box.
+1. Select **TVU Service** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for TVU Service
+
+Configure and test Azure AD SSO with TVU Service using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TVU Service.
+
+To configure and test Azure AD SSO with TVU Service, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure TVU Service SSO](#configure-tvu-service-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create TVU Service test user](#create-tvu-service-test-user)** - to have a counterpart of B.Simon in TVU Service that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **TVU Service** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are pre-populated by Azure. Save the configuration by clicking the **Save** button.
+
+1. The TVU Service application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of TVU Service application.](common/default-attributes.png "Attributes")
+
+1. In addition to the above, the TVU Service application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them according to your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | surname | user.surname |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TVU Service.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **TVU Service**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure TVU Service SSO
+
+To configure single sign-on on the **TVU Service** side, you need to send the **App Federation Metadata Url** to the [TVU Service support team](mailto:support@tvunetworks.com). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create TVU Service test user
+
+In this section, you create a user called Britta Simon in TVU Service. Work with the [TVU Service support team](mailto:support@tvunetworks.com) to add the users in the TVU Service platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the TVU Service for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the TVU Service tile in My Apps, you should be automatically signed in to the TVU Service for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure TVU Service you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster Previously updated : 01/12/2022 Last updated : 05/27/2022
As mentioned, virtual network peering is one way to access your private cluster.
3. In scenarios where the VNet containing your cluster has custom DNS settings (4), cluster deployment fails unless the private DNS zone is linked to the VNet that contains the custom DNS resolvers (5). This link can be created manually after the private zone is created during cluster provisioning or via automation upon detection of creation of the zone using event-based deployment mechanisms (for example, Azure Event Grid and Azure Functions).
+> [!NOTE]
+> Conditional Forwarding doesn't support subdomains.
+ > [!NOTE] > If you are using [Bring Your Own Route Table with kubenet](./configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet) and Bring Your Own DNS with Private Cluster, the cluster creation will fail. You will need to associate the [RouteTable](./configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet) in the node resource group with the subnet after the cluster creation fails, in order to make the creation successful.
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
Title: Connected Machine agent network requirements description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 03/14/2022 Last updated : 05/24/2022
Be sure to allow access to the following Service Tags:
* AzureResourceManager * AzureArcInfrastructure * Storage
+* WindowsAdminCenter (if [using Windows Admin Center to manage Arc-enabled servers](/windows-server/manage/windows-admin-center/azure/manage-arc-hybrid-machines))
For a list of IP addresses for each service tag/region, see the JSON file [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure Service and the IP ranges it uses. This information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, then the **AzureCloud** Service Tag should be used to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs; allow them as you would other Internet traffic.
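If you script your firewall configuration, the service tag file can be filtered programmatically. Below is a minimal Python sketch; the local file name and the schema keys (`values`, `name`, `properties.addressPrefixes`) are assumptions based on the published file's current shape:

```python
import json

# Minimal sketch: pull the address prefixes for one service tag out of the
# downloaded "Azure IP Ranges and Service Tags" file (file name varies weekly).
with open("ServiceTags_Public.json") as f:
    data = json.load(f)

for tag in data["values"]:
    if tag["name"] == "AzureArcInfrastructure":  # or any tag listed above
        for prefix in tag["properties"]["addressPrefixes"]:
            print(prefix)  # candidate entries for a firewall allowlist
```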
The table below lists the URLs that must be available in order to install and us
|`*.guestconfiguration.azure.com`| Extension management and guest configuration services |Always| Private | |`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Private | |`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public |
+|`*servicebus.windows.net`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure|Public|
|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured | |`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Last updated 05/24/2022 + #Customer intent: As a < type of user >, I want < what? > so that < why? >.
azure-functions Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-overview.md
description: Learn how Azure Functions can help build robust serverless apps.
ms.assetid: 01d6ca9f-ca3f-44fa-b0b9-7ffee115acd4 Previously updated : 11/20/2020 Last updated : 05/27/2022 -+ # Introduction to Azure Functions
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Title: Python developer reference for Azure Functions description: Understand how to develop functions with Python Previously updated : 05/19/2022 Last updated : 05/25/2022 ms.devlang: python-+ # Azure Functions Python developer guide
As a Python developer, you may also be interested in one of the following articl
| Getting started | Concepts| Scenarios/Samples | |--|--|--|
-| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> |
+| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> |
> [!NOTE] > While you can [develop your Python based Azure Functions locally on Windows](create-first-function-vs-code-python.md#run-the-function-locally), Python is only supported on a Linux based hosting plan when running in Azure. See the list of supported [operating system/runtime](functions-scale.md#operating-systemruntime) combinations.
Likewise, you can set the `status_code` and `headers` for the response message i
## Web frameworks
-You can apply WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
+You can use WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
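As a rough sketch of the end state (assuming the `AsgiMiddleware` helper in the `azure-functions` Python package and FastAPI as the ASGI framework), the HTTP trigger's entry point simply hands each request to the ASGI app:

```python
import azure.functions as func
from fastapi import FastAPI

# Illustrative ASGI app; any ASGI-compatible framework works similarly.
app = FastAPI()

@app.get("/sample")
async def index():
    return {"message": "handled by FastAPI inside an Azure Function"}

# HTTP-trigger entry point: delegate the request/response to the ASGI app.
def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    return func.AsgiMiddleware(app).handle(req, context)
```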
First, the function.json file must be updated to include a `route` in the HTTP trigger, as shown in the following example:
The runtime uses the available Python version when you run it locally.
To set a Python function app to a specific language version, you need to specify the language and the version of the language in the `linuxFxVersion` field in the site config. For example, to change a Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
-To learn more about Azure Functions runtime support policy, refer [article](./language-support-policy.md).
+To learn more about the Azure Functions runtime support policy, refer to this [article](./language-support-policy.md).
-To see the full list of supported Python versions functions apps, refer [article](./supported-languages.md).
+To see the full list of supported Python versions for function apps, refer to this [article](./supported-languages.md).
# [Azure CLI](#tab/azurecli-linux)
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
Title: Azure Functions scale and hosting
description: Learn how to choose between Azure Functions Consumption plan and Premium plan. ms.assetid: 5b63649c-ec7f-4564-b168-e0a74cb7e0f3 Previously updated : 03/24/2022 Last updated : 04/22/2022 -+ # Azure Functions hosting options
The following is a summary of the benefits of the three main hosting plans for F
| Plan | Benefits | | | | |**[Consumption plan]**| Scale automatically and only pay for compute resources when your functions are running.<br/><br/>On the Consumption plan, instances of the Functions host are dynamically added and removed based on the number of incoming events.<br/><br/> ✔ Default hosting plan.<br/>✔ Pay only when your functions are running.<br/>✔ Scales automatically, even during periods of high load.|
-|**[Premium plan]**|Automatically scales based on demand using pre-warmed workers which run applications with no delay after being idle, runs on more powerful instances, and connects to virtual networks. <br/><br/>Consider the Azure Functions Premium plan in the following situations: <br/><br/>✔ Your function apps run continuously, or nearly continuously.<br/>✔ You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.<br/>✔ You need more CPU or memory options than what is provided by the Consumption plan.<br/>✔ Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>✔ You require features that aren't available on the Consumption plan, such as virtual network connectivity.<br/>✔ You want to provide a custom Linux image on which to run your functions. |
+|**[Premium plan]**|Automatically scales based on demand using pre-warmed workers, which run applications with no delay after being idle, runs on more powerful instances, and connects to virtual networks. <br/><br/>Consider the Azure Functions Premium plan in the following situations: <br/><br/>✔ Your function apps run continuously, or nearly continuously.<br/>✔ You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.<br/>✔ You need more CPU or memory options than what is provided by the Consumption plan.<br/>✔ Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>✔ You require features that aren't available on the Consumption plan, such as virtual network connectivity.<br/>✔ You want to provide a custom Linux image on which to run your functions. |
|**[Dedicated plan]** |Run your functions within an App Service plan at regular [App Service plan rates](https://azure.microsoft.com/pricing/details/app-service/windows/).<br/><br/>Best for long-running scenarios where [Durable Functions](durable/durable-functions-overview.md) can't be used. Consider an App Service plan in the following situations:<br/><br/>✔ You have existing, underutilized VMs that are already running other App Service instances.<br/>✔ Predictive scaling and costs are required.| The comparison tables in this article also include the following hosting options, which provide the highest amount of control and isolation in which to run your function apps.
Maximum instances are given on a per-function app (Consumption) or per-plan (Pre
| **[ASE][Dedicated plan]**<sup>3</sup> | Manual/autoscale |100 | | **[Kubernetes]** | Event-driven autoscale for Kubernetes clusters using [KEDA](https://keda.sh). | Varies&nbsp;by&nbsp;cluster&nbsp;&nbsp;|
-<sup>1</sup> During scale out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/>
+<sup>1</sup> During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/>
<sup>2</sup> In some regions, Linux apps on a Premium plan can scale to 40 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/> <sup>3</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
Maximum instances are given on a per-function app (Consumption) or per-plan (Pre
| Plan | Details | | | | | **[Consumption plan]** | Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used. |
-| **[Premium plan]** | Premium plan is based on the number of core seconds and memory used across needed and pre-warmed instances. At least one instance per plan must be kept warm at all times. This plan provides the most predictable pricing. |
+| **[Premium plan]** | Premium plan is based on the number of core seconds and memory used across needed and pre-warmed instances. At least one instance per plan must always be kept warm. This plan provides the most predictable pricing. |
| **[Dedicated plan]** | You pay the same for function apps in an App Service Plan as you would for other App Service resources, like web apps.| | **[App Service Environment (ASE)][Dedicated plan]** | There's a flat monthly rate for an ASE that pays for the infrastructure and doesn't change with the size of the ASE. There's also a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU. | | **[Kubernetes]**| You pay only the costs of your Kubernetes cluster; no additional billing for Functions. Your function app runs as an application workload on top of your cluster, just like a regular app. |
azure-functions Functions Triggers Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-triggers-bindings.md
description: Learn to use triggers and bindings to connect your Azure Function t
Previously updated : 02/18/2019 Last updated : 05/25/2022 + # Azure Functions triggers and bindings concepts
-In this article you learn the high-level concepts surrounding functions triggers and bindings.
+In this article, you learn the high-level concepts surrounding functions triggers and bindings.
-Triggers are what cause a function to run. A trigger defines how a function is invoked and a function must have exactly one trigger. Triggers have associated data, which is often provided as the payload of the function.
+Triggers cause a function to run. A trigger defines how a function is invoked and a function must have exactly one trigger. Triggers have associated data, which is often provided as the payload of the function.
Binding to a function is a way of declaratively connecting another resource to the function; bindings may be connected as *input bindings*, *output bindings*, or both. Data from bindings is provided to the function as parameters.
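For instance, here's a hedged Python sketch (the parameter names `msg` and `outdoc` are illustrative assumptions; they would be declared as a queue trigger and an output binding in the function's *function.json*): the trigger payload and the output resource both surface as plain parameters, with no storage SDK calls in the function body.

```python
import azure.functions as func

# 'msg' is bound to a queue trigger and 'outdoc' to an output binding in
# function.json; both names are assumptions for this sketch.
def main(msg: func.QueueMessage, outdoc: func.Out[str]) -> None:
    payload = msg.get_body().decode("utf-8")  # trigger data arrives as the parameter
    outdoc.set(payload)                       # setting the Out parameter writes the output
```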
Consider the following examples of how you could implement different functions.
<sup>\*</sup> Represents different queues
-These examples are not meant to be exhaustive, but are provided to illustrate how you can use triggers and bindings together.
+These examples aren't meant to be exhaustive, but are provided to illustrate how you can use triggers and bindings together.
### Trigger and binding definitions
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
This article shows you how to create log alert rules and manage your alert instances. Azure Monitor log alerts allow users to use a [Log Analytics](../logs/log-analytics-tutorial.md) query to evaluate resource logs at a set frequency and fire an alert based on the results. Rules can trigger one or more actions using [alert processing rules](alerts-action-rules.md) and [action groups](./action-groups.md). Learn the concepts behind log alerts [here](alerts-types.md#log-alerts).
-When an alert is triggered by an alert rule,
-- Target: A specific Azure resource to monitor.-- Criteria: Logic to evaluate. If met, the alert fires. -- Action: Notifications or automation - email, SMS, webhook, and so on.
+You create an alert rule by combining:
+ - The resource(s) to be monitored.
+ - The signal or telemetry from the resource.
+ - The conditions to evaluate.
+
+And then defining these elements of the triggered alert:
+ - Alert processing rules
+ - Action groups
+ You can also [create log alert rules using Azure Resource Manager templates](../alerts/alerts-log-create-templates.md). ## Create a new log alert rule in the Azure portal
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
Create a new .json file - let's call it `template1.json` in this example. Copy t
"tags": {}, "properties": { "ApplicationId": "[parameters('appName')]",
- "retentionInDays": "[parameters('retentionInDays')]"
+ "retentionInDays": "[parameters('retentionInDays')]",
+ "ImmediatePurgeDataOn30Days": "[parameters('ImmediatePurgeDataOn30Days')]"
}, "dependsOn": [] },
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
In [`ApplicationInsights.config`](./configuration-with-applicationinsights-confi
The amount of telemetry to sample when the app has just started. Don't reduce this value while you're debugging.
-* `<ExcludedTypes>Trace;Exception</ExcludedTypes>`
+* `<ExcludedTypes>type;type</ExcludedTypes>`
A semi-colon delimited list of types that you do not want to be subject to sampling. Recognized types are: `Dependency`, `Event`, `Exception`, `PageView`, `Request`, `Trace`. All telemetry of the specified types is transmitted; the types that are not specified will be sampled.
-* `<IncludedTypes>Request;Dependency</IncludedTypes>`
+* `<IncludedTypes>type;type</IncludedTypes>`
A semi-colon delimited list of types that you do want to subject to sampling. Recognized types are: `Dependency`, `Event`, `Exception`, `PageView`, `Request`, `Trace`. The specified types will be sampled; all telemetry of the other types will always be transmitted.
azure-monitor Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/data-explorer.md
- Title: Azure Data Explorer Insights| Microsoft Docs
-description: This article describes how to use Azure Data Explorer Insights.
-- Previously updated : 01/05/2021---
-# Azure Data Explorer Insights
-
-Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures.
-
-It offers:
--- **At-scale perspective**. A snapshot view of your clusters' primary metrics helps you track performance of queries, ingestion, and export operations.-- **Drill-down analysis**. You can drill down into a particular Azure Data Explorer cluster to perform detailed analysis.-- **Customization**. You can change which metrics you want to see, modify or set thresholds that align with your limits, and save your own custom workbooks. Charts in a workbook can be pinned to Azure dashboards.-
-This article will help you understand how to onboard and use Azure Data Explorer Insights.
-
-## View from Azure Monitor (at-scale perspective)
-
-From Azure Monitor, you can view the main performance metrics for the cluster. These metrics include information about queries, ingestion, and export operations from multiple clusters in your subscription. They can help you identify performance problems.
-
-To view the performance of your clusters across all your subscriptions:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-2. Select **Monitor** from the left pane. In the **Insights Hub** section, select **Azure Data Explorer Clusters**.
-
-![Screenshot of selections for viewing the performance of Azure Data Explorer clusters.](./media/data-explorer/insights-hub.png)
-
-### Overview tab
-
-On the **Overview** tab for the selected subscription, the table displays interactive metrics for the Azure Data Explorer clusters grouped within the subscription. You can filter results based on the options that you select from the following dropdown lists:
-
-* **Subscriptions**: Only subscriptions that have Azure Data Explorer clusters are listed.
-
-* **Azure Data Explorer clusters**: By default, up to five clusters are pre-selected. If you select all or multiple clusters in the scope selector, up to 200 clusters will be returned.
-
-* **Time Range**: By default, the table displays the last 24 hours of information based on the corresponding selections made.
-
-The counter tile, under the dropdown list, gives the total number of Azure Data Explorer clusters in the selected subscriptions and shows how many are selected. There are conditional color codings for the columns: **Keep alive**, **CPU**, **Ingestion Utilization**, and **Cache Utilization**. Orange-coded cells have values that are not sustainable for the cluster.
-
-To better understand what each of the metrics represent, we recommend reading through the documentation on [Azure Data Explorer metrics](/azure/data-explorer/using-metrics#cluster-metrics).
-
-### Query Performance tab
-
-The **Query Performance** tab shows the query duration, the total number of concurrent queries, and the total number of throttled queries.
-
-![Screenshot of the Query Performance tab.](./media/data-explorer/query-performance.png)
-
-### Ingestion Performance tab
-
-The **Ingestion Performance** tab shows the ingestion latency, succeeded ingestion results, failed ingestion results, ingestion volume, and events processed for event hubs and IoT hubs.
-
-[![Screenshot of the Ingestion Performance tab.](./media/data-explorer/ingestion-performance.png)](./media/data-explorer/ingestion-performance.png#lightbox)
-
-### Streaming Ingest Performance tab
-
-The **Streaming Ingest Performance** tab provides information on the average data rate, average duration, and request rate.
-
-### Export Performance tab
-
-The **Export Performance** tab provides information on exported records, lateness, pending count, and utilization percentage for continuous export operations.
-
-## View from an Azure Data Explorer Cluster resource (drill-down analysis)
-
-To access Azure Data Explorer Insights directly from an Azure Data Explorer cluster:
-
-1. In the Azure portal, select **Azure Data Explorer Clusters**.
-
-2. From the list, choose an Azure Data Explorer cluster. In the monitoring section, select **Insights**.
-
-You can also access these views by selecting the resource name of an Azure Data Explorer cluster from within the Azure Monitor insights view.
-
-> [!NOTE]
-> Azure Data Explorer Insights combines both logs and metrics to provide a global monitoring solution. The inclusion of logs-based visualizations requires users to [enable diagnostic logging of their Azure Data Explorer cluster and send them to a Log Analytics workspace](/azure/data-explorer/using-diagnostic-logs?tabs=commands-and-queries#enable-diagnostic-logs). The diagnostic logs that should be enabled are **Command**, **Query**, **SucceededIngestion**, **FailedIngestion**, **IngestionBatching**, **TableDetails**, and **TableUsageStatistics**. (Enabling **SucceededIngestion** logs might be costly. Enable them only if you need to monitor successful ingestions.)
-
-![Screenshot of the button for configuring logs for monitoring.](./media/data-explorer/enable-logs.png)
-
-### Overview tab
-
-The **Overview** tab shows:
--- Metrics tiles that highlight the availability and overall status of the cluster for quick health assessment.--- A summary of active [Azure Advisor recommendations](/azure/data-explorer/azure-advisor) and [resource health](/azure/data-explorer/monitor-with-resource-health) status.--- Charts that show the top CPU and memory consumers and the number of unique users over time.-
-[![Screenshot of the view from an Azure Data Explorer cluster resource.](./media/data-explorer/overview.png)](./media/data-explorer/overview.png#lightbox)
-
-### Key Metrics tab
-
-The **Key Metrics** tab shows a unified view of some of the cluster's metrics. They're grouped into general metrics, query-related metrics, ingestion-related metrics, and streaming ingestion-related metrics.
-
-[![Screenshot of graphs on the Key Metrics tab.](./media/data-explorer/key-metrics.png)](./media/data-explorer/key-metrics.png#lightbox)
-
-### Usage tab
-
-The **Usage** tab allows users to deep dive into the performance of the cluster's commands and queries. On this tab, you can:
-
-- See which workload groups, users, and applications are sending the most queries or consuming the most CPU and memory. You can then understand which workloads are submitting the heaviest queries for the cluster to process.-- Identify top workload groups, users, and applications by failed queries.-- Identify recent changes in the number of queries, compared to the historical daily average (over the past 16 days), by workload group, user, and application.-- Identify trends and peaks in the number of queries, memory, and CPU consumption by workload group, user, application, and command type.-
-The **Usage** tab includes actions that are performed directly by users. Internal cluster operations are not included in this tab.
-
-[![Screenshot of the operations view with donut charts related to commands and queries.](./media/data-explorer/usage.png)](./media/data-explorer/usage.png#lightbox)
-
-[![Screenshot of the operations view with line charts related to queries and memory.](./media/data-explorer/usage-2.png)](./media/data-explorer/usage-2.png#lightbox)
-
-### Tables tab
-
-The **Tables** tab shows the latest and historical properties of tables in the cluster. You can see which tables are consuming the most space. You can also track growth history by table size, hot data, and the number of rows over time.
-
-### Cache tab
-
-The **Cache** tab allows users to analyze their actual queries' lookback window patterns and compare them to the configured cache policy (for each table). You can identify tables used by the most queries and tables that are not queried at all, and adapt the cache policy accordingly.
-
-You might get cache policy recommendations on specific tables in Azure Advisor. Currently, cache recommendations are available only from the [main Azure Advisor dashboard](/azure/data-explorer/azure-advisor#use-the-azure-advisor-recommendations). They're based on actual queries' lookback window in the past 30 days and an unoptimized cache policy for at least 95 percent of the queries.
-
-Cache reduction recommendations in Azure Advisor are available for clusters that are "bounded by data." That means the cluster has low CPU and low ingestion utilization, but because of high data capacity, the cluster can't scale in or scale down.
-
-[![Screenshot of cache details.](./media/data-explorer/cache-tab.png)](./media/data-explorer/cache-tab.png#lightbox)
-
-### Cluster Boundaries tab
-
-The **Cluster Boundaries** tab displays the cluster boundaries based on your usage. On this tab, you can inspect the CPU, ingestion, and cache utilization. These metrics are scored as **Low**, **Medium**, or **High**. These metrics and scores are important when you're deciding on the optimal SKU and instance count for your cluster. They're taken into account in Azure Advisor SKU/size recommendations.
-
-On this tab, you can select a metric tile and deep dive to understand its trend and how its score is decided. You can also view the Azure Advisor SKU/size recommendation for your cluster. For example, in the following image, you can see that all metrics are scored as **Low**. The cluster receives a cost recommendation that will allow it to scale in/down and save cost.
-
-> [!div class="mx-imgBorder"]
-> [![Screenshot of cluster boundaries.](./media/data-explorer/cluster-boundaries.png)](./media/data-explorer/cluster-boundaries.png#lightbox)
-
-## Pin to an Azure dashboard
-
-You can pin any one of the metric sections (of the "at-scale" perspective) to an Azure dashboard by selecting the pushpin icon at the upper right of the section.
-
-![Screenshot of the pin icon selected.](./media/data-explorer/pin.png)
-
-## Customize Azure Data Explorer Insights
-
-You can edit the workbook to customize it in support of your data analytics needs:
-* Scope the workbook to always select a particular subscription or Azure Data Explorer clusters.
-* Change metrics in the grid.
-* Change thresholds or color rendering/coding.
-
-You can begin customizations by selecting the **Customize** button on the top toolbar.
-
-![Screenshot of the Customize button.](./media/data-explorer/customize.png)
-
-Customizations are saved to a custom workbook to prevent overwriting the default configuration in a published workbook. Workbooks are saved within a resource group, either in the **My Reports** section that's private to you or in the **Shared Reports** section that's accessible to everyone with access to the resource group. After you save the custom workbook, go to the workbook gallery to open it.
-
-![Screenshot of the workbook gallery.](./media/data-explorer/gallery.png)
-
-## Troubleshooting
-
-For general troubleshooting guidance, see the [Troubleshooting workbook-based insights](troubleshoot-workbooks.md) article.
-
-The following sections will help you diagnose and troubleshoot some of the common problems that you might encounter when using Azure Data Explorer Insights.
-
-### Why don't I see all my subscriptions in the subscription picker?
-
-Azure Data Explorer Insights shows only subscriptions that contain Azure Data Explorer clusters chosen from the selected subscription filter. You select a subscription filter under **Directory + subscription** in the Azure portal.
-
-![Screenshot of selecting a subscription filter.](./media/key-vaults-insights-overview/Subscriptions.png)
-
-### Why don't I see any data for my Azure Data Explorer cluster under the Usage, Tables, or Cache section?
-
-To view your logs-based data, you need to [enable diagnostic logs](/azure/data-explorer/using-diagnostic-logs?tabs=commands-and-queries#enable-diagnostic-logs) for each Azure Data Explorer cluster that you want to monitor. You can do this under the diagnostic settings for each cluster. You'll need to send your data to a Log Analytics workspace. The diagnostic logs that should be enabled are **Command**, **Query**, **TableDetails**, and **TableUsageStatistics**.
-
-### I've already enabled logs for my Azure Data Explorer cluster. Why am I still unable to see my data under Commands and Queries?
-
-Currently, diagnostic logs don't work retroactively. The data will start appearing after actions have been taken in Azure Data Explorer. It might take some time, ranging from hours to a day, depending on how active your Azure Data Explorer cluster is.
-
-## Next steps
-
-Learn the scenarios that workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../visualize/workbooks-overview.md).
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
You can currently configure the following tables for Basic Logs:
## Set table configuration+
+# [Portal](#tab/portal-1)
+
+To configure a table for Basic Logs or Analytics Logs in the Azure portal:
+
+1. From the **Log Analytics workspaces** menu, select **Tables (preview)**.
+
+ The **Tables (preview)** screen lists all of the tables in the workspace.
+
+1. Select the context menu for the table you want to configure and select **Manage table**.
+
+ :::image type="content" source="media/basic-logs-configure/log-analytics-table-configuration.png" lightbox="media/basic-logs-configure/log-analytics-table-configuration.png" alt-text="Screenshot showing the Manage table button for one of the tables in a workspace.":::
+
+1. From the **Table plan** dropdown on the table configuration screen, select **Basic** or **Analytics**.
+
+ The **Table plan** dropdown is enabled only for [tables that support Basic Logs](#which-tables-support-basic-logs).
+
+ :::image type="content" source="media/basic-logs-configure/log-analytics-configure-table-plan.png" lightbox="media/basic-logs-configure/log-analytics-configure-table-plan.png" alt-text="Screenshot showing the Table plan dropdown on the table configuration screen.":::
+
+1. Select **Save**.
+ # [API](#tab/api-1) To configure a table for Basic Logs or Analytics Logs, call the **Tables - Update** API:
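As a rough Python sketch of such a call (not the article's own request syntax; the subscription, resource group, workspace, table name, API version, and token handling are all placeholder assumptions):

```python
import requests

# All identifiers below are illustrative placeholders.
url = (
    "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourcegroups/MyResourceGroup/providers/Microsoft.OperationalInsights"
    "/workspaces/MyWorkspace/tables/ContainerLogV2"
)

response = requests.patch(
    url,
    params={"api-version": "2021-12-01-preview"},  # assumed preview API version
    headers={"Authorization": "Bearer <token>"},   # Azure AD token acquisition elided
    json={"properties": {"plan": "Basic"}},        # "Basic" or "Analytics"
)
response.raise_for_status()
print(response.json())
```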
For example:
## Check table configuration
-# [Portal](#tab/portal-1)
+# [Portal](#tab/portal-2)
+
+To check table configuration in the Azure portal, you can open the table configuration screen, as described in [Set table configuration](#set-table-configuration).
-To check the configuration of a table in the Azure portal:
+Alternatively:
1. From the **Azure Monitor** menu, select **Logs** and select your workspace for the [scope](scope.md). See [Log Analytics tutorial](log-analytics-tutorial.md#view-table-information) for a walkthrough. 1. Open the **Tables** tab, which lists all tables in the workspace.
To check the configuration of a table in the Azure portal:
![Screenshot of the Basic Logs table icon in the table list.](./media/basic-logs-configure/table-icon.png#lightbox)
- You can also hover over a table name for the table information view. This will specify that the table is configured as Basic Logs:
+ You can also hover over a table name for the table information view, which indicates whether the table is configured as Basic Logs:
![Screenshot of the Basic Logs table indicator in the table details.](./media/basic-logs-configure/table-info.png#lightbox)
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
When you link your own storage (BYOS) to a workspace, the service stores *saved-search*
* You need to have "write" permissions on your workspace and Storage Account. * Make sure to create your Storage Account in the same region as your Log Analytics workspace is located. * The *saved searches* in storage are considered service artifacts and their format may change.
-* Existing *saves searches* are removed from your workspace. Copy and any *saves searches* that you need before the configuration. You can view your *saved-searches* using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch).
-* Query history isn't supported and you won't be able to see queries that you ran.
+* Existing *saved searches* are removed from your workspace. Copy any *saved searches* that you need before this configuration. You can view your *saved searches* using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch).
+* Query 'history' and 'pin to dashboard' aren't supported when linking a Storage Account for queries.
* You can link a single Storage Account to a workspace, which can be used for both *saved-searches* and *log alerts* queries.
-* Pin to dashboard isn't supported.
* Fired log alerts will not contain search results or the alert query. You can use [alert dimensions](../alerts/alerts-unified-log.md#split-by-alert-dimensions) to get context in the fired alerts. **Configure BYOS for saved-searches queries**
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
During the interactive retention period, data is available for monitoring, troub
> The archive feature is currently in public preview and can only be set at the table level, not at the workspace level. ## Configure the default workspace retention policy
-You can set the workspace default retention policy in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. To set a different policy, use the Resource Manager configuration method described below. If you're on the *free* tier, you need to upgrade to the paid tier to change the data retention period.
+You can set the workspace default retention policy in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. You can set a different policy for specific tables by [configuring retention and archive policy at the table level](#set-retention-and-archive-policy-by-table). If you're on the *free* tier, you'll need to upgrade to the paid tier to change the data retention period.
To set the default workspace retention policy:
To set the default workspace retention policy:
## Set retention and archive policy by table
-You can set retention policies for individual tables, except for workspaces in the legacy Free Trial pricing tier, using Azure Resource Manager APIs. You canΓÇÖt currently configure data retention for individual tables in the Azure portal.
+By default, all tables in your workspace inherit the workspace's interactive retention setting and have no archive policy. You can modify the retention and archive policies of individual tables, except for workspaces in the legacy Free Trial pricing tier.
You can keep data in interactive retention between 4 and 730 days. You can set the archive period for a total retention time of up to 2,555 days (seven years).
-Each table is a subresource of the workspace it's in. For example, you can address the `SecurityEvent` table in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
+# [Portal](#tab/portal-1)
-```
-/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent
-```
+To set the retention and archive duration for a table in the Azure portal:
+
+1. From the **Log Analytics workspaces** menu, select **Tables (preview)**.
+
+ The **Tables (preview)** screen lists all of the tables in the workspace.
+
+1. Select the context menu for the table you want to configure and select **Manage table**.
+
+ :::image type="content" source="media/basic-logs-configure/log-analytics-table-configuration.png" lightbox="media/basic-logs-configure/log-analytics-table-configuration.png" alt-text="Screenshot showing the Manage table button for one of the tables in a workspace.":::
-The table name is case-sensitive.
+1. Configure the retention and archive duration in **Data retention settings** section of the table configuration screen.
+
+ :::image type="content" source="media/data-retention-configure/log-analytics-configure-table-retention-archive.png" lightbox="media/data-retention-configure/log-analytics-configure-table-retention-archive.png" alt-text="Screenshot showing the data retention settings on the table configuration screen.":::
# [API](#tab/api-1)
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups
``` > [!NOTE]
-> You don't explicitly specify the archive duration in the API call. Instead, you set the total retention, which specifies the retention plus the archive duration.
+> You don't explicitly specify the archive duration in the API call. Instead, you set the total retention, which is the sum of the interactive retention plus the archive duration.
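For example (a sketch with assumed numbers): to keep 90 days of interactive retention plus 210 days of archive, the request body would set the total to 300, and the service derives the archive duration from the difference.

```python
# Illustrative request body: 90 days interactive + 210 days archive = 300 total.
properties = {
    "retentionInDays": 90,         # interactive retention
    "totalRetentionInDays": 300,   # interactive + archive
}
# PATCH {"properties": properties} to the table resource, as in the
# Tables - Update call; the archive duration is implied (300 - 90 = 210 days).
```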
You can use either PUT or PATCH, with the following difference:
az monitor log-analytics workspace table update --subscription ContosoSID --reso
## Get retention and archive policy by table
+# [Portal](#tab/portal-2)
+
+To view the retention and archive duration for a table in the Azure portal, from the **Log Analytics workspaces** menu, select **Tables (preview)**.
+
+The **Tables (preview)** screen shows the interactive retention and archive period for all of the tables in the workspace.
+++ # [API](#tab/api-2) To get the retention policy of a particular table (in this example, `SecurityEvent`), call the **Tables - Get** API:
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The table below lists the available curated visualizations and more detailed inf
| [Azure Monitor for Azure Cache for Redis (preview)](./insights/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health | | [Azure Cosmos DB Insights](./insights/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | | [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
-| [Azure Data Explorer insights](./insights/data-explorer.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
+| [Azure Data Explorer insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
| [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status| | [Azure IoT Edge](../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal using Azure Monitor Workbooks based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. | | [Azure Key Vault Insights (preview)](./insights/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
Use a rule with the following query.
```kusto Heartbeat
-| summarize TimeGenerated=max(TimeGenerated) by Computer
+| summarize TimeGenerated=max(TimeGenerated) by Computer, _ResourceId
| extend Duration = datetime_diff('minute',now(),TimeGenerated) | summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated,5m), _ResourceId ```
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
**Updated articles** -- [Azure Data Explorer Insights](insights/data-explorer.md)
+- [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights)
- [Agent Health solution in Azure Monitor](insights/solution-agenthealth.md) - [Monitoring solutions in Azure Monitor](insights/solutions.md) - [Monitor your SQL deployments with SQL Insights (preview)](insights/sql-insights-overview.md)
azure-percept Azure Percept Devkit Software Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-devkit-software-release-notes.md
This page provides information of changes and fixes for each Azure Percept DK OS
To download the update images, refer to [Azure Percept DK software releases for USB cable update](./software-releases-usb-cable-updates.md) or [Azure Percept DK software releases for OTA update](./software-releases-over-the-air-updates.md).
+## May (2205) Release
+
+- Operating System
+ - Latest security updates on BIND, Node.js, Cyrus SASL, libxml2, and OpenSSL packages.
+
## March (2203) Release - Operating System
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-usb-cable-updates.md
This page provides information and download links for all the dev kit OS/firmwar
## Latest releases - **Latest service release**
-March Service Release (2203): [Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip](<https://download.microsoft.com/download/c/6/f/c6f6b152-699e-4f60-85b7-17b3ea57c189/Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip>)
+May Service Release (2205): [Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](<https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip>)
- **Latest major update or known stable version** Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download.microsoft.com/download/6/4/d/64d53e60-f702-432d-a446-007920a4612c/Azure-Percept-DK-1.0.20210409.2055.zip)
Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download
|Release|Download Links|Note| |||::|
+|May Service Release (2205)|[Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](<https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip>)||
|March Service Release (2203)|[Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip](<https://download.microsoft.com/download/c/6/f/c6f6b152-699e-4f60-85b7-17b3ea57c189/Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip>)|| |February Service Release (2202)|[Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](<https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip>)|| |January Service Release (2201)|[Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](<https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip>)||
azure-video-indexer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md
To see an example of how to upload videos using URL, check out [this example](up
## Automatic Scaling of Media Reserved Units
-Starting August 1st 2021, Azure Video Indexer enabled [Reserved Units](/azure/azure/media-services/latest/concept-media-reserved-units)(MRUs) auto scaling by [Azure Media Services](/azure/azure/media-services/latest/media-services-overview) (AMS), as a result you do not need to manage them through Azure Video Indexer. That will allow price optimization, e.g. price reduction in many cases, based on your business needs as it is being auto scaled.
+Starting August 1st 2021, Azure Video Indexer enabled [Reserved Units](/azure/media-services/latest/concept-media-reserved-units) (MRUs) auto scaling by [Azure Media Services](/azure/media-services/latest/media-services-overview) (AMS). As a result, you do not need to manage them through Azure Video Indexer. This allows price optimization, for example price reduction in many cases, based on your business needs as MRUs are auto scaled.
## Respect throttling
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
The resource will be deployed to your subscription and will create the Azure Vid
## Prerequisites
-* An Azure Media Services (AMS) account. You can create one for free through the [Create AMS Account](/azure/azure/media-services/latest/account-create-how-to).
+* An Azure Media Services (AMS) account. You can create one for free through the [Create AMS Account](/azure/media-services/latest/account-create-how-to).
## Deploy the sample
azure-video-indexer Odrv Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/odrv-download.md
This article shows how to index videos stored on OneDrive by using the Azure Vid
## Supported file formats
-For a list of file formats that you can use with Azure Video Indexer, see [Standard Encoder formats and codecs](/azure/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
+For a list of file formats that you can use with Azure Video Indexer, see [Standard Encoder formats and codecs](/azure/media-services/latest/encode-media-encoder-standard-formats-reference).
## Index a video by using the website
When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-deta
After the indexing and encoding jobs are done, the video is published so you can also stream your video. The streaming endpoint from which you want to stream the video must be in the **Running** state. For `SingleBitrate`, the standard encoder cost will apply for the output. If the video height is greater than or equal to 720, Azure Video Indexer encodes it as 1280 x 720. Otherwise, it's encoded as 640 x 468.
-The default setting is [content-aware encoding](/azure/azure/media-services/latest/encode-content-aware-concept).
+The default setting is [content-aware encoding](/azure/media-services/latest/encode-content-aware-concept).
If you only want to index your video and not encode it, set `streamingPreset` to `NoStreaming`.
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Fixed bugs related to CSS, theming and accessibility:
### Automatic Scaling of Media Reserved Units
-Starting August 1st 2021, Azure Video Indexer enabled [Media Reserved Units (MRUs)](/azure/azure/media-services/latest/concept-media-reserved-units) auto scaling by [Azure Media Services](/azure/azure/media-services/latest/media-services-overview), as a result you do not need to manage them through Azure Video Indexer. That will allow price optimization, for example price reduction in many cases, based on your business needs as it is being auto scaled.
+Starting August 1st 2021, Azure Video Indexer enabled [Media Reserved Units (MRUs)](/azure/media-services/latest/concept-media-reserved-units) auto scaling by [Azure Media Services](/azure/media-services/latest/media-services-overview). As a result, you do not need to manage them through Azure Video Indexer. This allows price optimization, for example price reduction in many cases, based on your business needs as MRUs are auto scaled.
## June 2021
azure-video-indexer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/upload-index-videos.md
When you're uploading videos by using the API, you have the following options:
* Upload your video from a URL (preferred).
* Send the video file as a byte array in the request body.
-* Use existing an Azure Media Services asset by providing the [asset ID](/azure/azure/media-services/latest/assets-concept). This option is supported in paid accounts only.
+* Use an existing Azure Media Services asset by providing the [asset ID](/azure/media-services/latest/assets-concept). This option is supported in paid accounts only.
## Supported file formats
When you're using the [Upload Video](https://api-portal.videoindexer.ai/api-deta
After the indexing and encoding jobs are done, the video is published so you can also stream your video. The streaming endpoint from which you want to stream the video must be in the **Running** state. For `SingleBitrate`, the standard encoder cost will apply for the output. If the video height is greater than or equal to 720, Azure Video Indexer encodes it as 1280 x 720. Otherwise, it's encoded as 640 x 468.
-The default setting is [content-aware encoding](/azure/azure/media-services/latest/encode-content-aware-concept).
+The default setting is [content-aware encoding](/azure/media-services/latest/encode-content-aware-concept).
If you only want to index your video and not encode it, set `streamingPreset` to `NoStreaming`.
azure-video-indexer Video Indexer Output Json V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md
When a video is indexed, Azure Video Indexer produces the JSON content that contains details of the specified video insights. The insights include transcripts, optical character recognition elements (OCRs), faces, topics, blocks, and similar details. Each insight type includes instances of time ranges that show when the insight appears in the video.
-The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
+> [!TIP]
+> The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
To visually examine the video's insights, press the **Play** button on the video on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
To get insights produced by the API:
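The API call itself isn't reproduced in this digest. As a minimal sketch, assuming placeholder location, account ID, video ID, and access token:

```azurecli
# Sketch only: retrieve the insights JSON for an indexed video.
# All angle-bracket values are placeholders.
az rest --method get \
    --url "https://api.videoindexer.ai/<location>/Accounts/<accountId>/Videos/<videoId>/Index?accessToken=<accessToken>" \
    --skip-authorization-header
```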
This section shows a summary of the insights.
+> [!TIP]
+> The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
+ |Attribute | Description|
+ |---|---|
+ |`name`|The name of the video. For example: `Azure Monitor`.|
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
[!INCLUDE [regulation](./includes/regulation.md)]
+> [!NOTE]
+> The service is now rebranded from Azure Video Analyzer for Media to **Azure Video Indexer**. To read more, see the [Azure Video Indexer site](https://vi.microsoft.com).
+ Azure Video Indexer is a cloud application, part of Azure Applied AI Services, built on Azure Media Services and Azure Cognitive Services (such as Face, Translator, Computer Vision, and Speech). It enables you to extract insights from your videos using Azure Video Indexer video and audio models. To start extracting insights with Azure Video Indexer, you need to create an account and upload videos. When you upload your videos, Azure Video Indexer analyzes both visuals and audio by running different AI models, and returns the insights that those models extract.
The following list shows the supported browsers that you can use for the Azure V
You're ready to get started with Azure Video Indexer. For more information, see the following articles:
+- [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/)
- [Get started with the Azure Video Indexer website](video-indexer-get-started.md).
- [Process content with Azure Video Indexer REST API](video-indexer-use-apis.md).
- [Embed visual widgets in your application](video-indexer-embed-widgets.md).
batch Batch Cli Sample Add Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-add-application.md
Title: Azure CLI Script Example - Add an Application in Batch | Microsoft Docs
description: Learn how to add an application for use with an Azure Batch pool or a task using the Azure CLI.
Previously updated : 09/17/2021
Last updated : 05/24/2022
keywords: batch, azure cli samples, azure cli code samples, azure cli script samples

# CLI example: Add an application to an Azure Batch account
-This script demonstrates how to add an application for use with an Azure Batch pool or task. To set up an application to add to your Batch account, package your executable, together with any dependencies, into a zip file.
+This script demonstrates how to add an application for use with an Azure Batch pool or task. To set up an application to add to your Batch account, package your executable, together with any dependencies, into a zip file.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
+## Sample script
++
+### Create batch account and new application
++
+### Create batch application package
-## Example script
+An application can reference multiple application executable packages of different versions. The executables and any dependencies need to be zipped up for the package. Once uploaded, the CLI attempts to activate the package so that it's ready for use.
-[!code-azurecli-interactive[main](../../../cli_scripts/batch/add-application/add-application.sh "Add Application")]
+```azurecli
+az batch application package create \
+ --resource-group $resourceGroup \
+ --name $batchAccount \
+ --application-name "MyApplication" \
+ --package-file my-application-exe.zip \
+ --version-name 1.0
+```
+
+### Update the application
+
+Update the application to assign the newly added application package as the default version.
+
+```azurecli
+az batch application set \
+ --resource-group $resourceGroup \
+ --name $batchAccount \
+ --application-name "MyApplication" \
+ --default-version 1.0
+```
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the
-resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name myResourceGroup
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
batch Batch Cli Sample Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-account.md
Title: Azure CLI Script Example - Create Batch account - Batch service | Microsoft Docs
-description: Learn how to create a Batch account in Batch service mode with this Azure CLI script example. This also script shows how to query or update various properties of the account.
+description: Learn how to create a Batch account in Batch service mode with this Azure CLI script example. This script also shows how to query or update various properties of the account.
Previously updated : 09/17/2021
Last updated : 05/24/2022
keywords: batch, azure cli samples, azure cli code samples, azure cli script samples
keywords: batch, azure cli samples, azure cli code samples, azure cli script sam
# CLI example: Create a Batch account in Batch service mode

This script creates an Azure Batch account in Batch service mode and shows how to query or update various properties of the account. When you create a Batch account in the default Batch service mode, its compute nodes are assigned internally by the Batch
-service. Allocated compute nodes are subject to a separate vCPU (core) quota and the account can be
-authenticated either via shared key credentials or an Azure Active Directory token.
+service. Allocated compute nodes are subject to a separate vCPU (core) quota and the account can be authenticated either via shared key credentials or an Azure Active Directory token.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
-- This tutorial requires version 2.0.20 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+## Sample script
+
-## Example script
+### Run the script
-[!code-azurecli-interactive[main](../../../cli_scripts/batch/create-account/create-account.sh "Create Account")]
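The referenced script isn't reproduced in this digest. As a minimal sketch, assuming the `$resourceGroup` and `$batchAccount` variables used in these samples and a placeholder location, account creation might look like the following:

```azurecli
# Sketch only: create a resource group and a Batch account in the default
# Batch service mode. Location and variable values are placeholders.
az group create --name $resourceGroup --location eastus
az batch account create \
    --resource-group $resourceGroup \
    --name $batchAccount \
    --location eastus
```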
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the
-resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name myResourceGroup
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
batch Batch Cli Sample Create User Subscription Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-user-subscription-account.md
Title: Azure CLI Script Example - Create Batch account - user subscription | Microsoft Docs
description: Learn how to create an Azure Batch account in user subscription mode. This account allocates compute nodes into your subscription.
Previously updated : 09/17/2021
Last updated : 05/24/2022
keywords: batch, azure cli samples, azure cli examples, azure cli code samples
keywords: batch, azure cli samples, azure cli examples, azure cli code samples
This script creates an Azure Batch account in user subscription mode. An account that allocates compute nodes into your subscription must be authenticated via an Azure Active Directory token. The compute nodes allocated count toward your subscription's vCPU (core) quota.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
-- This tutorial requires version 2.0.20 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+## Sample script
+
-## Example script
+### Run the script
-[!code-azurecli-interactive[main](../../../cli_scripts/batch/create-account/create-account-user-subscription.sh "Create Account using user subscription")]
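The referenced script isn't reproduced here. As a minimal sketch, assuming a pre-existing Key Vault whose name is held in a placeholder `$keyVault` variable, a user subscription mode account might be created like this:

```azurecli
# Sketch only: create a Batch account that allocates compute nodes into your
# own subscription. $keyVault is a placeholder for a Key Vault in the same
# region; location and variable values are placeholders.
az batch account create \
    --resource-group $resourceGroup \
    --name $batchAccount \
    --location eastus \
    --keyvault $keyVault
```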
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name myResourceGroup
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
batch Batch Cli Sample Manage Linux Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-linux-pool.md
Title: Azure CLI Script Example - Linux Pool in Batch | Microsoft Docs
description: Learn the commands available in the Azure CLI to create and manage a pool of Linux compute nodes in Azure Batch.
Previously updated : 09/17/2021
Last updated : 05/24/2022
keywords: linux, azure cli samples, azure cli code samples, azure cli script samples
keywords: linux, azure cli samples, azure cli code samples, azure cli script sam
This script demonstrates some of the commands available in the Azure CLI to create and manage a pool of Linux compute nodes in Azure Batch.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
-- This tutorial requires version 2.0.20 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+## Sample script
++
+### To create a Linux pool in Azure Batch
++
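The create step itself is elided in this digest. A minimal sketch, assuming you've already signed in to the Batch account with `az batch account login`, and using a placeholder Ubuntu marketplace image and sizes:

```azurecli
# Sketch only: create a pool of Linux VMs, then list its compute nodes to
# find node IDs. Image, VM size, and node count are placeholder values.
az batch pool create \
    --id mypool-linux \
    --vm-size Standard_A1 \
    --target-dedicated-nodes 2 \
    --image canonical:ubuntuserver:18.04-lts \
    --node-agent-sku-id "batch.node.ubuntu 18.04"
az batch node list --pool-id mypool-linux
```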
+### To reboot a batch node
-## Example script
+If a particular node in the pool is having issues, it can be rebooted or reimaged. The ID of the node can be retrieved with the `az batch node list` command. A typical node ID is in the format `tvm-xxxxxxxxxx_1-<timestamp>`.
-[!code-azurecli-interactive[main](../../../cli_scripts/batch/manage-pool/manage-pool-linux.sh "Manage Linux Virtual Machine Pool")]
+```azurecli
+az batch node reboot \
+ --pool-id mypool-linux \
+ --node-id tvm-123_1-20170316t000000z
+```
+
+### To delete a batch node
+
+One or more compute nodes can be deleted from the pool, and any work already assigned to them can be reallocated to another node.
+
+```azurecli
+az batch node delete \
+ --pool-id mypool-linux \
+ --node-list tvm-123_1-20170316t000000z tvm-123_2-20170316t000000z \
+ --node-deallocation-option requeue
+```
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the
-resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name myResourceGroup
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
batch Batch Cli Sample Manage Windows Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-windows-pool.md
Title: Azure CLI Script Example - Windows Pool in Batch | Microsoft Docs
description: Learn some of the commands available in the Azure CLI to create and manage a pool of Windows compute nodes in Azure Batch.
Previously updated : 09/17/2021
Last updated : 05/24/2022
keywords: windows pool, azure cli samples, azure cli code samples, azure cli script samples
keywords: windows pool, azure cli samples, azure cli code samples, azure cli scr
# CLI example: Create and manage a Windows pool in Azure Batch

This script demonstrates some of the commands available in the Azure CLI to create and
-manage a pool of Windows compute nodes in Azure Batch. A Windows pool can be configured in two ways, with either a Cloud Services configuration
-or a Virtual Machine configuration. This example shows how to create a Windows pool with the Cloud Services configuration.
+manage a pool of Windows compute nodes in Azure Batch. A Windows pool can be configured in two ways, with either a Cloud Services configuration or a Virtual Machine configuration. This example shows how to create a Windows pool with the Cloud Services configuration.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
-- This tutorial requires version 2.0.20 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+## Sample script
+
-## Example script
+### Run the script
-[!code-azurecli-interactive[main](../../../cli_scripts/batch/manage-pool/manage-pool-windows.sh "Manage Windows Cloud Services Pool")]
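The referenced script isn't reproduced here. A minimal sketch of a Cloud Services configuration pool, assuming you've already signed in to the Batch account and using placeholder size and node-count values:

```azurecli
# Sketch only: create a Windows pool using the Cloud Services configuration.
# --os-family 5 and the "small" size are placeholder choices.
az batch pool create \
    --id mypool-windows \
    --os-family 5 \
    --target-dedicated-nodes 2 \
    --vm-size small
```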
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the
-resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name myResourceGroup
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
batch Batch Cli Sample Run Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-run-job.md
Title: Azure CLI Script Example - Run a Batch job | Microsoft Docs
description: Learn how to create a Batch job and add a series of tasks to the job using the Azure CLI. This article also shows how to monitor a job and its tasks.
Previously updated : 09/17/2021
Last updated : 05/24/2022
keywords: batch, batch job, monitor job, azure cli samples, azure cli code samples, azure cli script samples

# CLI example: Run a job and tasks with Azure Batch
-This script creates a Batch job and adds a series of tasks to the job. It also demonstrates
-how to monitor a job and its tasks.
+This script creates a Batch job and adds a series of tasks to the job. It also demonstrates how to monitor a job and its tasks.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)]
-- This tutorial requires version 2.0.20 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+## Sample script
++
+### Create a Batch account in Batch service mode
++
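The account-creation step is elided in this digest; it follows the same pattern as the create-account sample earlier. As a minimal sketch, assuming the `$resourceGroup` and `$batchAccount` variables used in these samples and a placeholder pool named `mypool`, you might sign in and create the job like this:

```azurecli
# Sketch only: authenticate to the Batch account with shared-key auth, then
# create the job that the tasks below are added to. mypool is a placeholder.
az batch account login \
    --resource-group $resourceGroup \
    --name $batchAccount \
    --shared-key-auth
az batch job create \
    --id myjob \
    --pool-id mypool
```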
+### To add many tasks at once
+
+To add many tasks at once, specify the tasks in a JSON file, and pass the file's absolute path to the command. For an example JSON file that shows the expected format, see https://github.com/Azure-Samples/azure-cli-samples/blob/master/batch/run-job/tasks.json.
+
+```azurecli
+az batch task create \
+ --job-id myjob \
+ --json-file tasks.json
+```
+
+### To update the job
+
+Update the job so that it is automatically marked as completed once all the tasks are finished.
+
+```azurecli
+az batch job set \
+    --job-id myjob \
+    --on-all-tasks-complete terminatejob
+```
+
+### To monitor the status of the job
+
+```azurecli
+az batch job show --job-id myjob
+```
-## Example script
+### To monitor the status of a task
-[!code-azurecli-interactive[main](../../../cli_scripts/batch/run-job/run-job.sh "Run Job")]
+```azurecli
+az batch task show \
+ --job-id myjob \
+ --task-id task1
+```
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the
-resource group and all resources associated with it.
-```azurecli-interactive
-az group delete --name myResourceGroup
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
chaos-studio Chaos Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-overview.md
Title: What is Azure Chaos Studio?
-description: Understand Azure Chaos Studio, an Azure service that helps you to measure, understand, and build application and service resilience to real world incidents using chaos engineering to inject faults against your service then monitor how the service responds to disruptions.
+ Title: What is Azure Chaos Studio (Preview)?
+description: Measure, understand, and build resilience to incidents by using chaos engineering to inject faults and monitor how your application responds.
Previously updated : 11/11/2021
Last updated : 05/27/2022
-# What is Azure Chaos Studio Preview?
+# What is Azure Chaos Studio (Preview)?
-Azure Chaos Studio is a managed service for improving resilience by injecting faults into your Azure applications. Running controlled fault injection experiments against your applications, a practice known as chaos engineering, helps you to measure, understand, and improve resilience against real-world incidents, such as a region outages or application failures causing high CPU utilization on a VM.
+[Azure Chaos Studio](https://azure.microsoft.com/services/chaos-studio) is a managed service that uses chaos engineering to help you measure, understand, and improve your cloud application and service resilience. Chaos engineering is a methodology by which you inject real-world faults into your application to run controlled fault injection experiments.
+
+Resilience is the capability of a system to handle and recover from disruptions. Application disruptions can cause errors and failures that can adversely affect your business or mission. Whether you're developing, migrating, or operating Azure applications, it's important to validate and improve your application's resilience.
+
+Chaos Studio helps you avoid negative consequences by validating that your application responds effectively to disruptions and failures. You can use Chaos Studio to test resilience against real-world incidents, like outages or high CPU utilization on virtual machines (VMs).
+
+The following video provides more background about Azure Chaos Studio:
> [!VIDEO https://aka.ms/docs/player?id=29017ee4-bdfa-491e-acfe-8876e93c505b]
-## Why should I use Chaos Studio?
+## Chaos Studio scenarios
+
+You can use chaos engineering for various resilience validation scenarios that span the service development and operations lifecycle. There are two types of scenarios:
+
+- *Shift right* scenarios use a production or pre-production environment. Usually, you do shift right scenarios with real customer traffic or simulated load.
+- *Shift left* scenarios can use a development or shared test environment. You can do shift left scenarios without any real customer traffic.
-Whether you are developing a new application that will be hosted on Azure, migrating an existing application to Azure, or operating an application that already runs on Azure, it is important to validate and improve your application's resilience. Resilience is the capability of a system to handle and recover from disruptions. Disruptions in your application's availability can result in errors and failures for users, which in turn can have negative consequences on your business or mission.
+You can use Chaos Studio for the following common chaos engineering scenarios:
-When running an application in the cloud, avoiding these negative consequences requires you to validate that your application responds effectively to disruptions that could be caused by a service you depend on, disruptions caused by a failure in the service itself, or even disruptions to incident response tooling and processes. Chaos experimentation enables you to test that your cloud-hosted application is resilient to failures.
+- Reproduce an incident that affected your application, to better understand the failure. Ensure that post-incident repairs prevent the incident from recurring.
+- Prepare for a major event or season with "game day" load, scale, performance, and resilience validation.
+- Do business continuity and disaster recovery (BCDR) drills to ensure that your application can recover quickly and preserve critical data in a disaster.
+- Run high availability (HA) drills to test application resilience against region outages, network configuration errors, high stress events, or noisy neighbor issues.
+- Develop application performance benchmarks.
+- Plan capacity needs for production environments.
+- Run stress tests or load tests.
+- Ensure that services migrated from an on-premises or other cloud environment remain resilient to known failures.
+- Build confidence in services built on cloud-native architectures.
+- Validate that live site tooling, observability data, and on-call processes still work in unexpected conditions.
-## When would I use Chaos Studio?
+For many of these scenarios, you first build resilience using ad-hoc chaos experiments. Then, you continuously validate that new deployments won't regress resilience, by running chaos experiments as deployment gates in your continuous integration/continuous deployment (CI/CD) pipelines.
-Chaos engineering can be used for a wide variety of resilience validation scenarios. These scenarios span the entire service development and operation lifecycle and can be categorized as either *shift right,* wherein the scenario is best validated in a production or pre-production environment, or *shift left,* wherein the scenario could be validated in a development environment or shared test environment. Typically shift right scenarios should be done with real customer traffic or simulated load whereas shift left scenarios can be done without any real customer traffic. Some common scenarios where chaos engineering can be applied are:
-* Reproducing an incident that impacted your application to better understand the failure mode or ensure that post-incident repair items will prevent the incident from recurring.
-* Running "game days" - load, scale, performance, and resilience validation of a service in preparation for a major user event or season.
-* Performing business continuity / disaster recovery (BCDR) drills to ensure that if your application were impacted by a major disaster it could recover quickly and critical data is preserved.
-* Running high availability drills to test application resilience against specific failures such as region outages, network configuration errors, high stress events, or noisy neighbor issues.
-* Developing application performance benchmarks.
-* Planning capacity needs for production environments.
-* Running stress tests or load tests.
-* Ensuring services migrated from an on-premises or other cloud environment remain resilient to known failures.
-* Building confidence in services built on cloud-native architectures.
-* Validating that live site tooling, observability data, and on-call processes work as expected under unexpected conditions.
+## How Chaos Studio works
-For many of these scenarios, you first build resilience using ad-hoc chaos experiments then continuously validate that new deployments won't regress resilience using chaos experiments as a deployment gate in your CI/CD pipeline.
+With Chaos Studio, you can orchestrate safe, controlled fault injection on your Azure resources. Chaos experiments are the core of Chaos Studio. A chaos experiment describes the faults to run and the resources to run against. You can organize faults to run in parallel or sequence, depending on your needs.
-## How does Chaos Studio work?
+Chaos Studio supports two types of faults:
-Chaos Studio enables you to orchestrate fault injection on your Azure resources in a safe and controlled way. At the core of Chaos Studio is chaos experiment. A chaos experiment is an Azure resource that describes the faults that should be run and the resources those faults should be run against. Faults can be organized to run in parallel or sequentially, depending on your needs. Chaos Studio supports two types of faults - *service-direct* faults, which run directly against an Azure resource without any installation or instrumentation (for example, rebooting an Azure Cache for Redis cluster or adding network latency to AKS pods), and *agent-based* faults, which run in virtual machines or virtual machine scale sets to perform in-guest failures (for example, applying virtual memory pressure or killing a process). Each fault has specific parameters you can control, like which process to kill or how much memory pressure to generate.
+- *Service-direct* faults run directly against an Azure resource, without any installation or instrumentation. Examples include rebooting an Azure Cache for Redis cluster, or adding network latency to Azure Kubernetes Service (AKS) pods.
+- *Agent-based* faults run in VMs or virtual machine scale sets to do in-guest failures. Examples include applying virtual memory pressure or killing a process.
-When you build a chaos experiment, you define one or more *steps* that execute sequentially, each step containing one or more *branches* that run in parallel within the step, and each branch containing one or more *actions* such as injecting a fault or waiting for a certain duration. Finally, you organize the resources (*targets*) that each fault will be run against into groups called selectors so that you can easily reference a group of resources in each action.
+Each fault has specific parameters you can configure, like which process to kill or how much memory pressure to generate.
+
+When you build a chaos experiment, you define one or more *steps* that execute sequentially. Each step contains one or more *branches* that run in parallel within the step. Each branch contains one or more *actions*, such as injecting a fault or waiting for a certain duration.
+
+You organize resource *targets* to run faults against into groups called *selectors*, so you can easily reference a group of resources in each action.
+
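As an illustration only (not this article's own example), here's a hedged sketch of an experiment definition created through the REST API; the subscription, resource group, target resource ID, API version, and fault URN are all placeholders or assumptions, and the exact schema and fault names are documented in the Chaos Studio fault library:

```azurecli
# Sketch only: create a chaos experiment with one step, one branch, and one
# continuous fault action. All IDs, the API version, and the fault URN are
# placeholders; check the Chaos Studio REST API reference for the real schema.
az rest --method put \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Chaos/experiments/myExperiment?api-version=2021-09-15-preview" \
    --body '{
      "location": "eastus",
      "properties": {
        "selectors": [
          { "id": "Selector1", "type": "List",
            "targets": [ { "type": "ChaosTarget", "id": "<target-resource-id>" } ] }
        ],
        "steps": [
          { "name": "Step1",
            "branches": [
              { "name": "Branch1",
                "actions": [
                  { "type": "continuous",
                    "name": "urn:csci:microsoft:virtualMachine:shutdown/1.0",
                    "duration": "PT10M",
                    "parameters": [],
                    "selectorId": "Selector1" } ] } ] }
        ]
      }
    }'
```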
+The following diagram shows the layout of a chaos experiment in Chaos Studio:
![Diagram showing the layout of a chaos experiment.](images/chaos-experiment.png)
-A chaos experiment is an Azure resource that lives in a subscription and resource group. You can use the Azure portal or the Chaos Studio REST API to create, update, start, cancel, and view the status of an experiment.
+A chaos experiment is an Azure resource in a subscription and resource group. You can use the Azure portal or the [Chaos Studio REST API](/rest/api/chaosstudio) to create, update, start, cancel, and view the status of experiments.
## Next steps
-Get started creating and running chaos experiments to improve application resilience with Chaos Studio using the links below.
+ - [Create and run your first experiment](chaos-studio-tutorial-service-direct-portal.md)
- [Learn more about chaos engineering](chaos-studio-chaos-engineering-overview.md)
cognitive-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
Previously updated : 11/02/2021
Last updated : 05/27/2022

# Tutorial: Extract key phrases from text stored in Power BI
cognitive-services Extract Excel Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/tutorials/extract-excel-information.md
Last updated 11/02/2021

# Extract information in Excel using Named Entity Recognition (NER) and Power Automate
cognitive-services Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/bot-service.md
Last updated 11/02/2021

# Tutorial: Create a FAQ bot
cognitive-services Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/multiple-domains.md
Last updated 11/02/2021

# Add multiple categories to your FAQ bot
cognitive-services Use Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/tutorials/use-kubernetes-service.md
Previously updated : 11/02/2021
Last updated : 05/27/2022

# Deploy a key phrase extraction container to Azure Kubernetes Service
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
Title: Connect to SQL databases
-description: Automate workflows for SQL databases on premises or in the cloud with Azure Logic Apps.
+description: Connect to SQL databases from workflows in Azure Logic Apps.
ms.suite: integration
Previously updated : 04/18/2022
Last updated : 06/01/2022
tags: connectors
-# Connect to a SQL database from Azure Logic Apps
+# Connect to a SQL database from workflows in Azure Logic Apps
-This article shows how to access your SQL database with the SQL Server connector in Azure Logic Apps. You can then create automated workflows that are triggered by events in your SQL database or other systems and manage your SQL data and resources.
+This article shows how to access your SQL database from a workflow in Azure Logic Apps with the SQL Server connector. You can then create automated workflows that run when triggered by events in your SQL database or in other systems and run actions to manage your SQL data and resources.
-For example, you can use actions that get, insert, and delete data along with running SQL queries and stored procedures. You can create workflow that checks for new records in a non-SQL database, does some processing work, creates new records in your SQL database using the results, and sends email alerts about the new records in your SQL database.
+For example, your workflow can run actions that get, insert, and delete data or that can run SQL queries and stored procedures. Your workflow can check for new records in a non-SQL database, do some processing work, use the results to create new records in your SQL database, and send email alerts about the new records.
- The SQL Server connector supports the following SQL editions:
+If you're new to Azure Logic Apps, review the following get started documentation:
+
+* [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)
+* [Quickstart: Create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+
+## Supported SQL editions
+
+The SQL Server connector supports the following SQL editions:
* [SQL Server](/sql/sql-server/sql-server-technical-documentation)
* [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview)
* [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview)
-If you're new to Azure Logic Apps, review the following documentation:
+## Connector technical reference
+
+The SQL Server connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-* [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)
-* [Quickstart: Create your first logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md)
+| Logic app | Environment | Connector version |
+|--|-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | [Managed connector - Standard class](managed.md). For more information, review the [SQL Server managed connector reference](/connectors/sql). |
+| **Consumption** | Integration service environment (ISE) | [Managed connector - Standard class](managed.md) and ISE version. For more information, review the [SQL Server managed connector reference](/connectors/sql). |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). <br><br>The built-in version differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. The action can directly connect to Azure virtual networks without the on-premises data gateway. <br><br>For the managed version, review the [SQL Server managed connector reference](/connectors/sql/). |
+||||
## Prerequisites
If you're new to Azure Logic Apps, review the following documentation:
* [SQL Server database](/sql/relational-databases/databases/create-a-database), [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart), or [SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
- The SQL connector requires that your tables contain data so that SQL connector operations can return results when called. For example, if you use Azure SQL Database, you can use the included sample databases to try the SQL connector operations.
+ The SQL Server connector requires that your tables contain data so that the connector operations can return results when called. For example, if you use Azure SQL Database, you can use the included sample databases to try the SQL Server connector operations.
* The information required to create a SQL database connection, such as your SQL server and database names. If you're using Windows Authentication or SQL Server Authentication to authenticate access, you also need your user name and password. You can usually find this information in the connection string.
If you're new to Azure Logic Apps, review the following documentation:
<a name="multi-tenant-or-ise"></a>
-* To connect to an on-premises SQL server, the following extra requirements apply based on whether you have a Consumption logic app workflow, either in multi-tenant Azure Logic Apps or an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), or if you have a Standard logic app workflow in [single-tenant Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
+* To connect to an on-premises SQL server, the following extra requirements apply, based on whether you have a Consumption or Standard logic app workflow.
* Consumption logic app workflow
If you're new to Azure Logic Apps, review the following documentation:
* Standard logic app workflow
- In single-tenant Azure Logic Apps, you can use the built-in SQL Server connector, which requires a connection string. If you want to use the managed SQL Server connector, you need follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
-
-## Connector technical reference
-
-This connector is available for logic app workflows in multi-tenant Azure Logic Apps, ISEs, and single-tenant Azure Logic Apps.
-
-* For Consumption logic app workflows in multi-tenant Azure Logic Apps, this connector is available only as a managed connector. For more information, review the [managed SQL Server connector operations](/connectors/sql).
-
-* For Consumption logic app workflows in an ISE, this connector is available as a managed connector and as an ISE connector that's designed to run in an ISE. For more information, review the [managed SQL Server connector operations](/connectors/sql).
-
-* For Standard logic app workflows in single-tenant Azure Logic Apps, this connector is available as a managed connector and as a built-in connector that's designed to run in the same process as the single-tenant Azure Logic Apps runtime. However, the built-in version differs in the following ways:
-
- * The built-in SQL Server connector has no triggers.
-
- * The built-in SQL Server connector has only one operation: **Execute Query**
-
-For the managed SQL Server connector technical information, such as trigger and action operations, limits, and known issues, review the [SQL Server connector's reference page](/connectors/sql/), which is generated from the Swagger description.
+ You can use the SQL Server built-in connector, which requires a connection string. If you want to use the SQL Server managed connector, you need to follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
<a name="add-sql-trigger"></a>
The following steps use the Azure portal, but with the appropriate Azure Logic A
1. In the Azure portal, open your blank logic app workflow in the designer.
-1. Find and select the [managed SQL Server connector trigger](/connectors/sql) that you want to use.
+1. Find and select the [SQL Server managed connector trigger](/connectors/sql) that you want to use.
- 1. Under the designer search box, select **All**.
+ 1. On the designer, under the search box, select **All**.
- 1. In the designer search box, enter **sql server**.
+ 1. In the search box, enter **sql server**.
1. From the triggers list, select the SQL trigger that you want. This example continues with the trigger named **When an item is created**.
- ![Screenshot showing the Azure portal, workflow designer for Consumption logic app, search box with "sql server", and the "When an item is created" trigger selected.](./media/connectors-create-api-sqlazure/select-sql-server-trigger-consumption.png)
+ ![Screenshot showing the Azure portal, Consumption logic app workflow designer, search box with "sql server", and "When an item is created" trigger selected.](./media/connectors-create-api-sqlazure/select-sql-server-trigger-consumption.png)
-1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+1. If the designer prompts you for connection information, [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
1. In the trigger, specify the interval and frequency for how often the trigger checks the table.
The following steps use the Azure portal, but with the appropriate Azure Logic A
For example, to view the data in this row, you can add other actions that create a file that includes the fields from the returned row, and then send email alerts. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
-1. On the designer toolbar, select **Save**.
+1. When you're done, save your workflow.
Although this step automatically enables and publishes your logic app live in Azure, the only action that your logic app currently takes is to check your database based on your specified interval and frequency.

### [Standard](#tab/standard)
-In Standard logic app workflows, only the managed SQL Server connector has triggers. The built-in SQL Server connector doesn't have any triggers.
+In Standard logic app workflows, only the SQL Server managed connector has triggers. The SQL Server built-in connector doesn't have any triggers.
1. In the Azure portal, open your blank logic app workflow in the designer.
-1. Find and select the [managed SQL Server connector trigger](/connectors/sql) that you want to use.
+1. Find and select the [SQL Server managed connector trigger](/connectors/sql) that you want to use.
- 1. Under the designer search box, select **Azure**.
+ 1. On the designer, select **Choose an operation**.
- 1. In the designer search box, enter **sql server**.
+ 1. Under the **Choose an operation** search box, select **Azure**.
+
+ 1. In the search box, enter **sql server**.
1. From the triggers list, select the SQL trigger that you want. This example continues with the trigger named **When an item is created**.
- ![Screenshot showing the Azure portal, workflow designer for Standard logic app, search box with "sql server", and the "When an item is created" trigger selected.](./media/connectors-create-api-sqlazure/select-sql-server-trigger-standard.png)
+ ![Screenshot showing Azure portal, Standard logic app workflow designer, search box with "sql server", and "When an item is created" trigger selected.](./media/connectors-create-api-sqlazure/select-sql-server-trigger-standard.png)
-1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+1. If the designer prompts you for connection information, [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
1. In the trigger, specify the interval and frequency for how often the trigger checks the table. 1. To add other properties available for this trigger, open the **Add new parameter** list and select those properties.
- This trigger returns only one row from the selected table, and nothing else. To perform other tasks, continue by adding either a [SQL connector action](#add-sql-action) or [another action](../connectors/apis-list.md) that performs the next task that you want in your logic app workflow.
+ This trigger returns only one row from the selected table, and nothing else. To perform other tasks, continue by adding either a [SQL Server connector action](#add-sql-action) or [another action](../connectors/apis-list.md) that performs the next task that you want in your logic app workflow.
For example, to view the data in this row, you can add other actions that create a file that includes the fields from the returned row, and then send email alerts. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
-1. On the designer toolbar, select **Save**.
+1. When you're done, save your workflow.
Although this step automatically enables and publishes your logic app live in Azure, the only action that your logic app currently takes is to check your database based on your specified interval and frequency.
In Standard logic app workflows, only the managed SQL Server connector has trigg
## Trigger recurrence shift and drift (daylight saving time)
-Recurring connection-based triggers where you need to create a connection first, such as the managed SQL Server trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). For recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
+Recurring connection-based triggers where you need to create a connection first, such as the SQL Server managed connector trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). For recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. In the Azure portal, open your logic app workflow in the designer.
-1. Find and select the [managed SQL Server connector action](/connectors/sql) that you want to use. This example continues with the action named **Get row**.
+1. Find and select the [SQL Server managed connector action](/connectors/sql) that you want to use. This example continues with the action named **Get row**.
1. Under the trigger or action where you want to add the SQL action, select **New step**. Or, to add an action between existing steps, move your mouse over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
- 1. In the **Choose an operation** box, under the designer search box, select **All**.
+ 1. Under the **Choose an operation** search box, select **All**.
- 1. In the designer search box, enter **sql server**.
+ 1. In the search box, enter **sql server**.
1. From the actions list, select the SQL Server action that you want. This example uses the **Get row** action, which gets a single record.

   ![Screenshot showing the Azure portal, workflow designer for Consumption logic app, the search box with "sql server", and "Get row" selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-get-row-action-consumption.png)
-1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+1. If the designer prompts you for connection information, [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
1. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want.
In this example, the logic app workflow starts with the [Recurrence trigger](../
![Screenshot showing Consumption workflow designer and the "Get row" action with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-consumption.png)
- This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions, for example, those that create a file that includes the fields from the returned row, and store that file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+ This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions. For example, such actions might create a file, include the fields from the returned row, and store the file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
-1. When you're done, on the designer toolbar, select **Save**.
+1. When you're done, save your workflow.
### [Standard](#tab/standard)
In this example, the logic app workflow starts with the [Recurrence trigger](../
1. Find and select the SQL Server connector action that you want to use.
- 1. Under the trigger or action where you want to add the SQL Server action, select **New step**.
+ 1. Under the trigger or action where you want to add the SQL Server action, select the plus sign (**+**), and then select **Add an action**.
- Or, to add an action between existing steps, move your mouse over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
+ Or, to add an action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
- 1. In the **Choose an operation** box, under the designer search box, select either of the following options:
+ 1. Under the **Choose an operation** search box, select either of the following options:
- * **Built-in** when you want to use built-in SQL Server actions such as **Execute Query**
+ * **Built-in** when you want to use SQL Server built-in actions such as **Execute Query**
![Screenshot showing the Azure portal, workflow designer for Standard logic app, and designer search box with "Built-in" selected underneath.](./media/connectors-create-api-sqlazure/select-built-in-category-standard.png)
- * **Azure** when you want to use [managed SQL Server connector actions](/connectors/sql) such as **Get row**
+ * **Azure** when you want to use [SQL Server managed connector actions](/connectors/sql) such as **Get row**
![Screenshot showing the Azure portal, workflow designer for Standard logic app, and designer search box with "Azure" selected underneath.](./media/connectors-create-api-sqlazure/select-azure-category-standard.png)
- 1. In the designer search box, enter **sql server**.
+ 1. In the search box, enter **sql server**.
1. From the actions list, select the SQL Server action that you want.
In this example, the logic app workflow starts with the [Recurrence trigger](../
![Screenshot showing the designer search box with "sql server" and "Azure" selected underneath with the "Get row" action selected in the "Actions" list.](./media/connectors-create-api-sqlazure/select-sql-get-row-action-standard.png)
-1. If you're connecting to your SQL database for the first time, you're prompted to [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
+1. If the designer prompts you for connection information, [create your SQL database connection now](#create-connection). After you create this connection, you can continue with the next step.
1. If you haven't already provided the SQL server name and database name, provide those values. Otherwise, from the **Table name** list, select the table that you want to use. In the **Row id** property, enter the ID for the record that you want.
In this example, the logic app workflow starts with the [Recurrence trigger](../
![Screenshot showing Standard workflow designer and "Get row" action with the example "Table name" property value and empty row ID.](./media/connectors-create-api-sqlazure/specify-table-row-id-standard.png)
- This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions, for example, those that create a file that includes the fields from the returned row, and store that file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
+ This action returns only one row from the selected table, and nothing else. To view the data in this row, add other actions. For example, such actions might create a file, include the fields from the returned row, and store the file in a cloud storage account. To learn about other available actions for this connector, see the [connector's reference page](/connectors/sql/).
-1. When you're done, on the designer toolbar, select **Save**.
+1. When you're done, save your workflow.
After you provide this information, continue with these steps:
To access a SQL Managed Instance without using the on-premises data gateway or integration service environment, you have to [set up the public endpoint on the SQL Managed Instance](/azure/azure-sql/managed-instance/public-endpoint-configure). The public endpoint uses port 3342, so make sure that you specify this port number when you create the connection from your logic app.
-The first time that you add either a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#add-sql-action), and you haven't previously created a connection to your database, you're prompted to complete these steps:
+When you add a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#add-sql-action) without a previously created and active database connection, complete the following steps:
1. For **Connection name**, provide a name to use for your connection.
The first time that you add either a [SQL Server trigger](#add-sql-trigger) or [
| Authentication | Description | |-|-|
- | **Service principal (Azure AD application)** | - Available only for the managed SQL Server connector. <br><br>- Requires an Azure AD application and service principal. For more information, see [Create an Azure AD application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). |
- | **Logic Apps Managed Identity** | - Available only for the managed SQL Server connector and ISE SQL Server connector. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). |
- | [**Azure AD Integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Available only for the managed SQL Server connector and ISE SQL Server connector. <br><br>- Requires a valid managed identity in Azure Active Directory (Azure AD) that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](/azure/azure-sql/database/authentication-aad-overview) |
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Available only for the managed SQL Server connector and ISE SQL Server connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
+ | **Service principal (Azure AD application)** | - Available only for the SQL Server managed connector. <br><br>- Requires an Azure AD application and service principal. For more information, see [Create an Azure AD application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). |
+ | **Logic Apps Managed Identity** | - Available only for the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). |
+ | [**Azure AD Integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Available only for the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires a valid managed identity in Azure Active Directory (Azure AD) that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](/azure/azure-sql/database/authentication-aad-overview) |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Available only for the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless of whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
This connection and authentication information box looks similar to the following example, which selects **Azure AD Integrated**:
### Connect to on-premises SQL Server
-The first time that you add either a [SQL trigger](#add-sql-trigger) or [SQL action](#add-sql-action), and you haven't previously created a connection to your database, you're prompted to complete these steps:
+When you add a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#add-sql-action) without a previously created and active database connection, complete the following steps:
1. For connections to your on-premises SQL server that require the on-premises data gateway, make sure that you've [completed these prerequisites](#multi-tenant-or-ise).
| Authentication | Description |
|-|-|
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Available only for the managed SQL Server connector and ISE SQL Server connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
- | [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Available only for the managed SQL Server connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid Windows user name and password to confirm your identity through your Windows account. <br><br>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication). |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Available only for the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless of whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
+ | [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Available only for the SQL Server managed connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless of whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid Windows user name and password to confirm your identity through your Windows account. <br><br>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication). |
|||

1. Select or provide the following values for your SQL database:
## Handle bulk data
-Sometimes, you have to work with result sets so large that the connector doesn't return all the results at the same time, or you want better control over the size and structure for your result sets. Here's some ways that you can handle such large result sets:
+Sometimes, you work with result sets so large that the connector doesn't return all the results at the same time. Or, you want better control over the size and structure for your result sets. The following list includes some ways that you can handle such large result sets:
* To help you manage results as smaller sets, turn on *pagination*. For more information, see [Get bulk data, records, and items by using pagination](../logic-apps/logic-apps-exceed-default-page-size-with-pagination.md) and [SQL Pagination for bulk data transfer with Logic Apps](https://social.technet.microsoft.com/wiki/contents/articles/40060.sql-pagination-for-bulk-data-transfer-with-logic-apps.aspx).
-* Create a [*stored procedure*](/sql/relational-databases/stored-procedures/stored-procedures-database-engine) that organizes the results the way that you want. The SQL connector provides many backend features that you can access by using Azure Logic Apps so that you can more easily automate business tasks that work with SQL database tables.
+* Create a [*stored procedure*](/sql/relational-databases/stored-procedures/stored-procedures-database-engine) that organizes the results the way that you want. The SQL Server connector provides many backend features that you can access by using Azure Logic Apps so that you can more easily automate business tasks that work with SQL database tables.
When a SQL action gets or inserts multiple rows, your logic app workflow can iterate through these rows by using an [*until loop*](../logic-apps/logic-apps-control-flow-loops.md#until-loop) within these [limits](../logic-apps/logic-apps-limits-and-config.md). However, when your logic app has to work with record sets so large that you want to minimize the costs resulting from calls to the database, for example, thousands or millions of rows, you can create a stored procedure that runs in your SQL instance and uses the **SELECT - ORDER BY** statement to organize the results in the way that you want. This solution gives you more control over the size and structure of your results. Your logic app calls the stored procedure by using the SQL Server connector's **Execute stored procedure** action. For more information, see [SELECT - ORDER BY Clause](/sql/t-sql/queries/select-order-by-clause-transact-sql).

> [!NOTE]
- > The SQL connector has a stored procedure timeout limit that's [less than 2-minutes](/connectors/sql/#known-issues-and-limitations).
+ >
+ > The SQL Server connector has a stored procedure timeout limit that's [less than 2 minutes](/connectors/sql/#known-issues-and-limitations).
> Some stored procedures might take longer than this limit to complete, causing a `504 Timeout` error. You can work around this problem
> by using a SQL completion trigger, native SQL pass-through query, a state table, and server-side jobs.
>
> [SQL Server on premises](/sql/sql-server/sql-server-technical-documentation)
> and [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview),
> you can use the [SQL Server Agent](/sql/ssms/agent/sql-server-agent). To learn more, see
- > [Handle long-running stored procedure timeouts in the SQL connector for Azure Logic Apps](../logic-apps/handle-long-running-stored-procedures-sql-connector.md).
+ > [Handle long-running stored procedure timeouts in the SQL Server connector for Azure Logic Apps](../logic-apps/handle-long-running-stored-procedures-sql-connector.md).
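To illustrate the **SELECT - ORDER BY** approach outside Logic Apps, the following C# sketch runs the kind of paged query that such a stored procedure might wrap. The table, columns, and page size are hypothetical, and the connection string is a placeholder:

```csharp
using System;
using Microsoft.Data.SqlClient;

// Hypothetical paged query: ORDER BY plus OFFSET/FETCH keeps each result set
// small and predictable, which is the same idea that a stored procedure called
// from the "Execute stored procedure" action would implement.
const string query = @"
    SELECT OrderId, CustomerName, OrderDate
    FROM dbo.Orders
    ORDER BY OrderId
    OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY;";

using var connection = new SqlConnection("<connection-string>");
using var command = new SqlCommand(query, connection);
command.Parameters.AddWithValue("@Offset", 0);      // first page
command.Parameters.AddWithValue("@PageSize", 1000); // rows per page

connection.Open();
using var reader = command.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine($"{reader["OrderId"]}: {reader["CustomerName"]}");
}
```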
### Handle dynamic bulk data
When you call a stored procedure by using the SQL Server connector, the returned output is sometimes dynamic. In this scenario, follow these steps:
1. View the output format by performing a test run. Copy and save your sample output.
-1. In the designer, under the action where you call the stored procedure, select **New step**.
+1. In the designer, under the action where you call the stored procedure, add a new action.
1. In the **Choose an operation** box, find and select the action named [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action).
1. In the **Enter or paste a sample JSON payload** box, paste your sample output, and select **Done**.

> [!NOTE]
- > If you get an error that Logic Apps can't generate a schema, check that your sample output's syntax is correctly formatted.
- > If you still can't generate the schema, in the **Schema** box, manually enter the schema.
+ >
+ > If you get an error that Azure Logic Apps can't generate a schema,
+ > check that your sample output's syntax is correctly formatted.
+ > If you still can't generate the schema, in the **Schema** box,
+ > manually enter the schema.
-1. On the designer toolbar, select **Save**.
+1. When you're done, save your workflow.
1. To reference the JSON content properties, click inside the edit boxes where you want to reference those properties so that the dynamic content list appears. In the list, under the [**Parse JSON**](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action) heading, select the data tokens for the JSON content properties that you want.
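If it helps to see the idea outside the designer, this C# sketch parses a hypothetical sample payload the same way the **Parse JSON** action does before exposing properties as tokens. The `ResultSets`/`Table1` shape is only an assumed example of stored procedure output:

```csharp
using System;
using System.Text.Json;

// Hypothetical sample of stored procedure output. The Parse JSON action
// generates a schema from a payload like this so that later steps can
// reference individual properties.
const string sampleOutput = @"{
  ""ResultSets"": {
    ""Table1"": [
      { ""OrderId"": 1, ""CustomerName"": ""Contoso"" }
    ]
  }
}";

using JsonDocument document = JsonDocument.Parse(sampleOutput);
JsonElement firstRow = document.RootElement
    .GetProperty("ResultSets")
    .GetProperty("Table1")[0];

Console.WriteLine(firstRow.GetProperty("CustomerName").GetString()); // Contoso
```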
### Connection problems
-Connection problems can commonly happen, so to troubleshoot and resolve these kinds of issues, review [Solving connectivity errors to SQL Server](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server). Here are some examples:
+Connection problems can commonly happen, so to troubleshoot and resolve these kinds of issues, review [Solving connectivity errors to SQL Server](https://support.microsoft.com/help/4009936/solving-connectivity-errors-to-sql-server). The following list provides some examples:
* **A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.**
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
ms.suite: integration Previously updated : 01/24/2022 Last updated : 05/27/2022 # Create, schedule, and run recurring tasks and workflows with the Recurrence trigger in Azure Logic Apps
For differences between this trigger and the Sliding Window trigger or for more
||||||

> [!IMPORTANT]
- > If you use the **Day** or **Week** frequency and specify a future date and time, make sure that you set up the recurrence in advance:
+ > If you use the **Day**, **Week**, or **Month** frequency, and you specify a future date and time, make sure that you set up the recurrence in advance:
>
> * **Day**: Set up the daily recurrence at least 24 hours in advance.
>
> * **Week**: Set up the weekly recurrence at least 7 days in advance.
>
+ > * **Month**: Set up the monthly recurrence at least one month in advance.
+ >
> Otherwise, the workflow might skip the first recurrence.
>
> If a recurrence doesn't specify a specific [start date and time](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#start-time), the first recurrence runs immediately
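For reference, in the underlying workflow definition, a weekly recurrence that's set up in advance looks something like the following sketch. The values are illustrative, and the `schedule` properties are optional:

```json
"triggers": {
    "Recurrence": {
        "type": "Recurrence",
        "recurrence": {
            "frequency": "Week",
            "interval": 1,
            "startTime": "2022-06-15T08:00:00",
            "timeZone": "Pacific Standard Time",
            "schedule": {
                "weekDays": [ "Monday" ],
                "hours": [ 8 ],
                "minutes": [ 0 ]
            }
        }
    }
}
```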
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Your enterprise may have more than one paying entity. If this is the case, you can onboard more than one subscription.
Before you subscribe, you should have a sense of how many devices you would like your subscriptions to cover.
-Users can also work with trial subscription, which supports monitoring a limited number of devices for 30 days. See [Microsoft Defender for IoT pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/#defenderforiot) information on committed device prices.
+Users can also work with a trial subscription, which supports monitoring a limited number of devices for 30 days. See [Microsoft Defender for IoT pricing](https://azure.microsoft.com/pricing/details/iot-defender/) for information on committed device prices.
## Requirements
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md
# Defender for IoT sensor and management console APIs
-Defender for IoT APIs are governed by [Microsoft API License and Terms of use](/legal/microsoft-apis/terms-of-use).
+Defender for IoT APIs are governed by the [Microsoft API License and Terms of use](/legal/microsoft-apis/terms-of-use).
Use an external REST API to access the data discovered by sensors and management consoles and perform actions with that data.
Define conditions under which alerts won't be sent. For example, define and upda
The APIs that you define here appear in the on-premises management console's Alert Exclusions window as a read-only exclusion rule.
+This API is supported for maintenance purposes only and isn't meant to replace [alert exclusion rules](/azure/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console#create-alert-exclusion-rules). Use this API for one-time maintenance operations only.
+
#### Method

- POST

#### Query parameters

- **ticketId**: Defines the maintenance ticket ID in the user's systems.
-- **ttl**: Defines the TTL (time to live), which is the duration of the maintenance window in minutes. After the period of time that this parameter defines, the system automatically starts sending alerts.
+- **ttl**: Required. Defines the TTL (time to live), which is the duration of the maintenance window in minutes. After the period of time that this parameter defines, the system automatically starts sending alerts.
- **engines**: Defines from which security engine to suppress alerts during the maintenance process:
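As a rough illustration only, a call that opens a 90-minute maintenance window might look like the following C# sketch. The console address, API path, and token are placeholders; only the POST method and the query parameters come from this reference:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Placeholders throughout: substitute your management console address, the
// documented API path, and a valid access token.
using var client = new HttpClient { BaseAddress = new Uri("https://<management-console>/") };
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>");

// ticketId tracks the maintenance ticket; ttl (in minutes) ends the window
// automatically, after which the system resumes sending alerts.
HttpResponseMessage response = await client.PostAsync(
    "<api-path>?ticketId=2987345&ttl=90", content: null);
response.EnsureSuccessStatusCode();
```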
defender-for-iot Resources Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-frequently-asked-questions.md
You can work with CLI [commands](references-work-with-defender-for-iot-cli-comma
## How do I check the sanity of my deployment?
-After installing the software for your sensor or on-premises management console, you will want to perform the [Post-installation validation](how-to-install-software.md#post-installation-validation).
+After installing the software for your sensor or on-premises management console, you'll want to perform the [Post-installation validation](how-to-install-software.md#post-installation-validation).
You can also use our [UI and CLI tools](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) to check system health and review your overall system statistics.
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
Azure Load Testing enables you to quickly create a load test from the Azure portal.
1. On the **Quickstart test** page, enter the **Test URL**.
- Enter the complete URL that you would like to run the test for. For example, https://www.example.com/login.
+ Enter the complete URL that you would like to run the test for. For example, `https://www.example.com/login`.
1. (Optional) Update the **Number of virtual users** to the total number of virtual users.
You now have an Azure Load Testing resource, which you used to load test an external website.
You can reuse this resource to learn how to identify performance bottlenecks in an Azure-hosted application by using server-side metrics.

> [!div class="nextstepaction"]
-> [Identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md)
+> [Identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md)
logic-apps Concepts Schedule Automated Recurring Tasks Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md
Title: Schedules for recurring triggers in workflows
-description: An overview about scheduling recurring automated workflows in Azure Logic Apps.
+ Title: About schedules for recurring triggers in workflows
+description: An overview about schedules for recurring workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 03/17/2022 Last updated : 05/27/2022 # Schedules for recurring triggers in Azure Logic Apps workflows
Here are the differences between these triggers:
If you select **Day** as the frequency, you can specify the hours of the day and minutes of the hour, for example, every day at 2:30. If you select **Week** as the frequency, you can also select days of the week, such as Wednesday and Saturday. You can also specify a start date and time along with a time zone for your recurrence schedule. For more information about time zone formatting, see [Add a Recurrence trigger](../connectors/connectors-native-recurrence.md#add-the-recurrence-trigger).

> [!IMPORTANT]
- > If you use the **Day** or **Week** frequency and specify a future date and time, make sure that you set up the recurrence in advance:
+ > If you use the **Day**, **Week**, or **Month** frequency, and you specify a future date and time, make sure that you set up the recurrence in advance:
>
> * **Day**: Set up the daily recurrence at least 24 hours in advance.
>
> * **Week**: Set up the weekly recurrence at least 7 days in advance.
- >
+ >
+ > * **Month**: Set up the monthly recurrence at least one month in advance.
+ >
> Otherwise, the workflow might skip the first recurrence.
>
> If a recurrence doesn't specify a specific [start date and time](#start-time), the first recurrence runs immediately
>
> If a recurrence doesn't specify any other advanced scheduling options such as specific times to run future recurrences,
> those recurrences are based on the last run time. As a result, the start times for those recurrences might drift due to
- > factors such as latency during storage calls. To make sure that your logic app doesn't miss a recurrence, especially when
+ > factors such as latency during storage calls. To make sure that your workflow doesn't miss a recurrence, especially when
> the frequency is in days or longer, try these options:
>
> * Provide a start date and time for the recurrence plus the specific times when to run subsequent recurrences by using the properties
Here are some patterns that show how you can control recurrence with the start date and time:
||--|-|
| {none} | Runs the first workload instantly. <p>Runs future workloads based on the last run time. | Runs the first workload instantly. <p>Runs future workloads based on the specified schedule. |
| Start time in the past | **Recurrence** trigger: Calculates run times based on the specified start time and discards past run times. <p><p>Runs the first workload at the next future run time. <p><p>Runs future workloads based on the last run time. <p><p>**Sliding Window** trigger: Calculates run times based on the specified start time and honors past run times. <p><p>Runs future workloads based on the specified start time. <p><p>For more explanation, see the example following this table. | Runs the first workload *no sooner* than the start time, based on the schedule calculated from the start time. <p><p>Runs future workloads based on the specified schedule. <p><p>**Note:** If you specify a recurrence with a schedule, but don't specify hours or minutes for the schedule, Azure Logic Apps calculates future run times by using the hours or minutes, respectively, from the first run time. |
-| Start time now or in the future | Runs the first workload at the specified start time. <p><p>**Recurrence** trigger: Runs future workloads based on the last run time. <p><p>**Sliding Window** trigger: Runs future workloads based on the specified start time. | Runs the first workload *no sooner* than the start time, based on the schedule calculated from the start time. <p><p>Runs future workloads based on the specified schedule. If you use the **Day** or **Week** frequency and specify a future date and time, make sure that you set up the recurrence in advance: <p>- **Day**: Set up the daily recurrence at least 24 hours in advance. <p>- **Week**: Set up the weekly recurrence at least 7 days in advance. <p>Otherwise, the workflow might skip the first recurrence. <p>**Note:** If you specify a recurrence with a schedule, but don't specify hours or minutes for the schedule, Azure Logic Apps calculates future run times by using the hours or minutes, respectively, from the first run time. |
+| Start time now or in the future | Runs the first workload at the specified start time. <p><p>**Recurrence** trigger: Runs future workloads based on the last run time. <p><p>**Sliding Window** trigger: Runs future workloads based on the specified start time. | Runs the first workload *no sooner* than the start time, based on the schedule calculated from the start time. <p><p>Runs future workloads based on the specified schedule. If you use the **Day**, **Week**, or **Month** frequency, and you specify a future date and time, make sure that you set up the recurrence in advance: <p>- **Day**: Set up the daily recurrence at least 24 hours in advance. <p>- **Week**: Set up the weekly recurrence at least 7 days in advance. <p>- **Month**: Set up the monthly recurrence at least one month in advance. <p>Otherwise, the workflow might skip the first recurrence. <p>**Note:** If you specify a recurrence with a schedule, but don't specify hours or minutes for the schedule, Azure Logic Apps calculates future run times by using the hours or minutes, respectively, from the first run time. |
||||

*Example for past start time and recurrence but no schedule*
mysql How To Deploy On Azure Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-deploy-on-azure-free-account.md
To complete this tutorial, you need:
## Create an Azure Database for MySQL - Flexible Server
-In this article, you'll use the Azure portal to create a Flexible Server with public access connectivity method. Alternatively, refer the respective quickstarts to create a Flexible Server using [Azure CLI](./quickstart-create-server-cli.md) or [ARM template](./quickstart-create-arm-template.md), or [within a VNET](./quickstart-create-connect-server-vnet.md).
+In this article, you'll use the Azure portal to create a Flexible Server with public access connectivity method. Alternatively, refer to the respective quickstarts to create a Flexible Server using [Azure CLI](./quickstart-create-server-cli.md), [ARM template](./quickstart-create-arm-template.md), [Terraform](./quickstart-create-terraform.md), or [within a VNET](./quickstart-create-connect-server-vnet.md).
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure free account.
mysql Quickstart Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-arm-template.md
Last updated 10/23/2020
# Quickstart: Use an ARM template to create an Azure Database for MySQL - Flexible Server
-[[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-Azure Database for MySQL - Flexible Server is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. You can use an Azure Resource Manager template (ARM template) to provision a flexible server to deploy multiple servers or multiple databases on a server.
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
mysql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-cli.md
Create an [Azure resource group](../../azure-resource-manager/management/overvie
az group create --name myresourcegroup --location eastus2
```
-Create a flexible server with the `az mysql flexible-server create` command. A server can contain multiple databases. The following command creates a server using service defaults and values from your Azure CLI's [local context](/cli/azure/local-context):
+Create a flexible server with the `az mysql flexible-server create` command. A server can contain multiple databases. The following command creates a server using service defaults and values from your Azure CLI's local context:
```azurecli-interactive
az mysql flexible-server create
mysql Quickstart Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-terraform.md
+
+ Title: 'Quickstart: Use Terraform to create an Azure Database for MySQL - Flexible Server'
+description: Learn how to deploy a database for Azure Database for MySQL Flexible Server using Terraform
++++++ Last updated : 5/27/2022++
+# Quickstart: Use Terraform to create an Azure Database for MySQL - Flexible Server
++
+Article tested with the following Terraform and Terraform provider versions:
+
+- [Terraform v1.2.1](https://releases.hashicorp.com/terraform/)
+- [AzureRM Provider v.2.99.0](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
+++
+In this article, you learn how to deploy an Azure Database for MySQL - Flexible Server database in a virtual network (VNet) using Terraform.
+
+> [!div class="checklist"]
+
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create an Azure VNet using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network)
+> * Create an Azure subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet)
+> * Define a private DNS zone within an Azure DNS using [azurerm_private_dns_zone](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/private_dns_zone)
+> * Define a private DNS zone VNet link using [azurerm_private_dns_zone_virtual_network_link](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/private_dns_zone_virtual_network_link)
+> * Deploy Flexible Server using [azurerm_mysql_flexible_server](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mysql_flexible_server)
+> * Deploy a database using [azurerm_mysql_flexible_database](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mysql_flexible_database)
+
+> [!NOTE]
+> The example code in this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/201-mysql-fs-db).
+
+## Prerequisites
+
+- [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-mysql-fs-db/providers.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](../../../terraform_samples/quickstart/201-mysql-fs-db/main.tf)]
+
+1. Create a file named `mysql-fs-db.tf` and insert the following code:
+
+ [!code-terraform[master](../../../terraform_samples/quickstart/201-mysql-fs-db/mysql-fs-db.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](../../../terraform_samples/quickstart/201-mysql-fs-db/variables.tf)]
+
+1. Create a file named `output.tf` and insert the following code:
+
+ [!code-terraform[master](../../../terraform_samples/quickstart/201-mysql-fs-db/output.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+Run [az mysql flexible-server db show](/cli/azure/mysql/flexible-server/db#az-mysql-flexible-server-db-show) to display the Azure MySQL database.
+
+```azurecli
+az mysql flexible-server db show \
+ --resource-group <resource_group_name> \
+ --server-name <azurerm_mysql_flexible_server> \
+ --database-name <mysql_flexible_server_database_name>
+```
+
+**Key points:**
+
+- The values for the `<resource_group_name>`, `<azurerm_mysql_flexible_server>`, and `<mysql_flexible_server_database_name>` are displayed in the `terraform apply` output. You can also run the [terraform output](https://www.terraform.io/cli/commands/output) command to view these output values.
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+Run [Get-AzMySqlFlexibleServerDatabase](/powershell/module/az.mysql/get-azmysqlflexibleserverdatabase) to display the Azure MySQL database.
+
+```azurepowershell
+Get-AzMySqlFlexibleServerDatabase `
+ -ResourceGroupName <resource_group_name> `
+ -ServerName <azurerm_mysql_flexible_server> `
+ -Name <mysql_flexible_server_database_name>
+```
+
+**Key points:**
+
+- The values for the `<resource_group_name>`, `<azurerm_mysql_flexible_server>`, and `<mysql_flexible_server_database_name>` are displayed in the `terraform apply` output. You can also run the [terraform output](https://www.terraform.io/cli/commands/output) command to view these output values.
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Connect Azure Database for MySQL Flexible Server with private access](/azure/mysql/flexible-server/quickstart-create-connect-server-vnet)
object-anchors Upgrade Unity Quickstart To 2020 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/quickstarts/upgrade-unity-quickstart-to-2020.md
Title: 'Quickstart: Upgrade Quickstart app to Unity 2020'
-description: In this quickstart, you learn how to upgrade Quickstart app to Unity 2020 build a HoloLens Unity app using Object Anchors.
+ Title: 'Quickstart: Upgrade Quickstart HoloLens app to Unity 2020'
+description: In this quickstart, you learn how to upgrade a Unity HoloLens app that uses Azure Object Anchors from Unity 2019 to Unity 2020.
Previously updated : 06/23/2021 Last updated : 05/27/2022 -+
+- mode-other
+- kr2b-contr-experiment
+ # Quickstart: Upgrade Quickstart app to Unity 2020
-In this quickstart, you'll upgrade a Unity HoloLens app that uses [Azure Object Anchors](../overview.md) from
-Unity 2019 to Unity 2020. Azure Object Anchors is a managed cloud service that converts 3D assets into AI models
-that enable object-aware mixed reality experiences for the HoloLens. When you're finished, you'll have a HoloLens
-app built with Unity that can detect objects in the physical world.
+In this quickstart, you upgrade a Unity HoloLens app that uses [Azure Object Anchors](../overview.md) from Unity 2019 to Unity 2020. Azure Object Anchors is a managed cloud service that converts 3D assets into AI models that enable object-aware mixed reality experiences for the HoloLens. When you finish, you'll have a HoloLens app built with Unity that can detect objects in the physical world.
You'll learn how to:
You'll learn how to:
To complete this quickstart, make sure you have:
-* All prerequisites from either the [Unity HoloLens](get-started-unity-hololens.md) or the [Unity HoloLens with MRTK](get-started-unity-hololens-mrtk.md) quickstarts.
+* All prerequisites from either the [Unity HoloLens](get-started-unity-hololens.md) or the [Unity HoloLens with MRTK](get-started-unity-hololens-mrtk.md) quickstarts
* <a href="https://unity3d.com/get-unity/download" target="_blank">Unity Hub with Unity 2020.3.8f1 or newer</a>

## Open and upgrade the sample project

Follow the steps from either the [Unity HoloLens](get-started-unity-hololens.md) or the [Unity HoloLens with MRTK](get-started-unity-hololens-mrtk.md) quickstarts to clone the [samples repository](https://github.com/Azure/azure-object-anchors), and download the Azure Object Anchors package for Unity.
-Open Unity Hub. Select the **Add** button and pick either the `quickstarts/apps/unity/basic` or the `quickstarts/apps/unity/mrtk` project. Then, under the **Unity Version** column, select the version of Unity 2020 in the dropdown that you've installed on your machine. Under the **Target Platform** column, select **Universal Windows Platform**. Finally, select the **Project Name** column and open the sample in Unity.
+1. Open Unity Hub. Select **Add** and pick either the `quickstarts/apps/unity/basic` or the `quickstarts/apps/unity/mrtk` project.
+1. Under the **Unity Version** column, select the version of Unity 2020 in the dropdown that you've installed on your computer.
+1. Under the **Target Platform** column, select **Universal Windows Platform**.
+1. Select the **Project Name** column and open the sample in Unity.
+ :::image type="content" source="./media/upgrade-unity-2020.png" alt-text="Screenshot shows a Unity page with Unity Version, Target Platform and ADD highlighted.":::
-You'll see a dialog asking for confirmation to upgrade your project. Select the **Confirm** button.
+ You'll see a dialog asking for confirmation to upgrade your project. Select the **Confirm** button.
+ :::image type="content" source="./media/confirm-unity-upgrade.png" alt-text="Screenshot shows a dialog confirming the upgrade with Confirm selected.":::
## Upgrade package dependencies
-Once the upgrade process completes, **Unity Editor** will open up.
+Once the upgrade process completes, **Unity Editor** opens.
+
+1. Follow the <a href="/windows/mixed-reality/develop/unity/welcome-to-mr-feature-tool" target="_blank">Mixed Reality Feature Tool</a> documentation to set up the tool and learn how to use it.
-Follow the <a href="/windows/mixed-reality/develop/unity/welcome-to-mr-feature-tool" target="_blank">Mixed Reality Feature Tool</a> documentation to set up the tool and learn how to use it.
+1. Under **Platform Support**, install the **Mixed Reality OpenXR Plugin** feature package, version 1.0.0 or newer, into the Unity project folder.
-Under the **Platform Support** section, install the **Mixed Reality OpenXR Plugin** feature package, version 1.0.0 or newer, into the Unity project folder. If you're working with the `quickstarts/apps/unity/mrtk` project, also open the **Mixed Reality Toolkit** section, locate the **Mixed Reality Toolkit Foundation** and **Mixed Reality Toolkit Tools** feature packages, and upgrade them to version 2.7.0 or newer.
+1. If you're working with the `quickstarts/apps/unity/mrtk` project, also open the **Mixed Reality Toolkit** section, locate the **Mixed Reality Toolkit Foundation** and **Mixed Reality Toolkit Tools** feature packages, and upgrade them to version 2.7.0 or newer.
-Go back to your **Unity Editor**. It might take a few minutes, while the **Mixed Reality Feature Tool** feature packages are installed.
+1. Go back to your **Unity Editor**. It might take a few minutes while the **Mixed Reality Feature Tool** feature packages are installed.
-You'll see a dialog asking for confirmation to enable the new input system. Select the **Yes** button.
+ You'll see a dialog asking for confirmation to enable the new input system. Select **Yes**.
+ :::image type="content" source="./media/new-input-system.png" alt-text="Screenshot shows a dialog that contains a warning with the Yes button highlighted.":::
- If you get a dialog asking you to overwrite MRTK shaders, select **Yes**.
+ If you get a dialog asking you to overwrite MRTK shaders, select **Yes**.
+ :::image type="content" source="./media/mrtk-shaders.png" alt-text="Screenshot shows the Mixed Reality Toolkit Standard Assets dialog.":::
-Once the install process completes, Unity will restart automatically.
+Once the install process completes, Unity restarts automatically.
## Update configuration settings
-Back in **Unity Editor**, follow the <a href="/windows/mixed-reality/develop/unity/xr-project-setup#configuring-xr-plugin-management-for-openxr" target="_blank">Configuring XR Plugin Management for OpenXR</a> documentation to set up the **XR Plugin Management** in your **Project Settings**. Then, follow the <a href="/windows/mixed-reality/develop/unity/xr-project-setup#optimization" target="_blank">Optimization</a> documentation to apply the recommended project settings for HoloLens 2.
+Back in **Unity Editor**, follow the <a href="/windows/mixed-reality/develop/unity/new-openxr-project-with-mrtk#configure-openxr-settings" target="_blank">Configuring XR Plugin Management for OpenXR</a> documentation to set up the **XR Plugin Management** in your **Project Settings**. Then, follow the <a href="/windows/mixed-reality/develop/unity/new-openxr-project-with-mrtk#optimization" target="_blank">Optimization</a> documentation to apply the recommended project settings for HoloLens 2.
## Update MRTK settings
-If you're working with the `quickstarts/apps/unity/mrtk` project, MRTK will also need some adjustments. In that case, follow the steps below. Otherwise, skip to the **Build, deploy and run the app** section.
+If you're working with the `quickstarts/apps/unity/mrtk` project, follow the steps below to adjust MRTK. Otherwise, skip to [Build, deploy, and run the app](#build-deploy-and-run-the-app).
-In **Unity Editor**, navigate to `Assets/MixedReality.AzureObjectAnchors/Scenes`, and open **AOASampleScene**. Under the **Hierarchy** pane, select the **MixedRealityToolkit** object.
+1. In **Unity Editor**, navigate to `Assets/MixedReality.AzureObjectAnchors/Scenes`, and open **AOASampleScene**. Under the **Hierarchy** pane, select **MixedRealityToolkit**.
+ :::image type="content" source="./media/open-sample-scene.png" alt-text="Screenshot shows the Unity Editor with the MixedRealityToolkit highlighted.":::
-Under the **Inspector** pane, select the **Camera** button, and change the profile from **ObsoleteXRSDKCameraProfile** to **DefaultMixedRealityCameraProfile**.
+1. Under the **Inspector** pane, select **Camera**, and change the profile from **ObsoleteXRSDKCameraProfile** to **DefaultMixedRealityCameraProfile**.
+ :::image type="content" source="./media/update-camera-profile.png" alt-text="Screenshot shows the Unity Editor with Camera and DefaultMixedRealityCameraProfile highlighted.":::
-Still under the **Inspector** pane, select the **Input** button, and expand the **Input Data Providers** dropdown. Then, follow the <a href="/windows/mixed-reality/mrtk-unity/configuration/getting-started-with-mrtk-and-xrsdk#configuring-mrtk-for-the-xr-sdk-pipeline" target="_blank">Configuring MRTK for the XR SDK pipeline</a> documentation to set up the proper input data providers (**OpenXRDeviceManager** and **WindowsMixedRealityDeviceManager**).
+1. Still in the **Inspector** pane, select **Input**, and expand the **Input Data Providers** dropdown list. Follow the <a href="/windows/mixed-reality/mrtk-unity/configuration/getting-started-with-mrtk-and-xrsdk#configuring-mrtk-for-the-xr-sdk-pipeline" target="_blank">Configuring MRTK for the XR SDK pipeline</a> documentation to set up the proper input data providers: **OpenXRDeviceManager** and **WindowsMixedRealityDeviceManager**.
+ :::image type="content" source="./media/update-input-profile.png" alt-text="Screenshot shows the Unity Editor with Input and Input Data Providers highlighted.":::
## Build, deploy, and run the app
service-fabric Concepts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/concepts-managed-identity.md
Title: Managed identities for Azure description: Learn about using Managed identities for Azure with Service Fabric.- Previously updated : 12/09/2019+ Last updated : 05/28/2022 # Using Managed identities for Azure with Service Fabric A common challenge when building cloud applications is how to securely manage the credentials in your code for authenticating to various services without saving them locally on a developer workstation or in source control. *Managed identities for Azure* solve this problem for all your resources in Azure Active Directory (Azure AD) by providing them with automatically managed identities within Azure AD. You can use a service's identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without any credentials stored in your code.
-*Managed identities for Azure resources* are free with Azure AD for Azure subscriptions. There's no additional cost.
+*Managed identities for Azure resources* are free with Azure AD for Azure subscriptions. There's no extra cost.
> [!NOTE]
> *Managed identities for Azure* is the new name for the service formerly known as Managed Service Identity (MSI).

## Concepts
-Managed identities for Azure is based upon several key concepts:
+Managed identities for Azure are based upon several key concepts:
-- **Client ID** - a unique identifier generated by Azure AD that is tied to an application and service principal during its initial provisioning (also see [application ID](../active-directory/develop/developer-glossary.md#application-id-client-id).)
+- **Client ID** - a unique identifier generated by Azure AD that is tied to an application and service principal during its initial provisioning (also see [Application (client) ID](../active-directory/develop/developer-glossary.md#application-client-id).)
-- **Principal ID** - the object ID of the service principal object for your Managed Identity that is used to grant role-based access to an Azure resource.
+- **Principal ID** - the object ID of the service principal object for your managed identity that is used to grant role-based access to an Azure resource.
-- **Service Principal** - an Azure Active Directory object, which represents the projection of an AAD application in a given tenant (also see [service principal](../active-directory/develop/developer-glossary.md#service-principal-object).)
+- **Service Principal** - an Azure Active Directory object, which represents the projection of an Azure AD application in a given tenant (also see [service principal](../active-directory/develop/developer-glossary.md#service-principal-object).)
There are two types of managed identities:
To further understand the difference between managed identity types, see [How do
## Supported scenarios for Service Fabric applications
-Managed identities for Service Fabric are only supported in Azure-deployed Service Fabric clusters, and only for applications deployed as Azure resources; an application that is not deployed as an Azure resource cannot be assigned an identity. Conceptually speaking, support for managed identities in an Azure Service Fabric cluster consists of two phases:
+Managed identities for Service Fabric are only supported in Azure-deployed Service Fabric clusters, and only for applications deployed as Azure resources. An application not deployed as an Azure resource can't be assigned an identity. Conceptually speaking, support for managed identities in an Azure Service Fabric cluster consists of two phases:
1. Assign one or more managed identities to the application resource; an application may be assigned a single system-assigned identity and up to 32 user-assigned identities.

2. Within the application's definition, map one of the identities assigned to the application to any individual service comprising the application.
-The system-assigned identity of an application is unique to that application; a user-assigned identity is a standalone resource, which may be assigned to multiple applications. Within an application, a single identity (whether system-assigned or user-assigned) can be assigned to multiple services of the application, but each individual service can only be assigned one identity. Lastly, a service must be assigned an identity explicitly to have access to this feature. In effect, the mapping of an application's identities to its constituent services allows for in-application isolation ΓÇö a service may only use the identity mapped to it.
+The system-assigned identity of an application is unique to that application; a user-assigned identity is a standalone resource, which may be assigned to multiple applications. Within an application, a single identity (whether system-assigned or user-assigned) can be assigned to multiple services of the application, but each individual service can only be assigned one identity. Lastly, a service must be assigned an identity explicitly to have access to this feature. In effect, the mapping of an application's identities to its constituent services allows for in-application isolation: a service may only use the identity mapped to it.
-Currently, the following scenarios are supported for this feature:
+The following scenarios are supported for this feature:
- Deploy a new application with one or more services and one or more assigned identities - Assign one or more managed identities to an existing (Azure-deployed) application in order to access Azure resources
-The following scenarios are not supported or not recommended; note these actions may not be blocked, but can lead to outages in your applications:
+The following scenarios are unsupported or not recommended. These actions may not be blocked, but can lead to outages in your applications:
-- Remove or change the identities assigned to an application; if you must make changes, submit separate deployments to first add a new identity assignment, and then to remove a previously assigned one. Removal of an identity from an existing application can have undesirable effects, including leaving your application in a state that is not upgradeable. It is safe to delete the application altogether if the removal of an identity is necessary; note this will delete the system-assigned identity (if so defined) associated with the application, and will remove any associations with the user-assigned identities assigned to the application.
+- Removing or changing the identities assigned to an application. If you need to make changes, submit separate deployments to first add a new identity assignment, and then to remove a previously assigned one. Removal of an identity from an existing application can have undesirable effects, including leaving your application in a non-upgradeable state. It's safe to delete the application altogether if the removal of an identity is necessary. Deleting the application deletes any system-assigned identity associated with the application and removes all associations with any user-assigned identities assigned to the application.
-- Service Fabric support for managed identities is not integrated at this time into the deprecated [AzureServiceTokenProvider](/dotnet/api/overview/azure/service-to-service-authentication). However, Service Fabric does support leveraging managed identities instead through the [Azure Identity SDK](./how-to-managed-identity-service-fabric-app-code.md)
+- Service Fabric doesn't support managed identities in the deprecated [AzureServiceTokenProvider](/dotnet/api/overview/azure/service-to-service-authentication). Instead, use managed identities in Service Fabric by using the [Azure Identity SDK](./how-to-managed-identity-service-fabric-app-code.md).
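For example, a service that's been assigned an identity can exchange it for tokens through the Azure Identity SDK without storing any secret. This minimal C# sketch, with a hypothetical vault URI and secret name, reads a Key Vault secret by using the system-assigned identity:

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// ManagedIdentityCredential uses the identity mapped to this service. To use
// a user-assigned identity instead, pass its client ID to the constructor.
var credential = new ManagedIdentityCredential();

// Hypothetical vault and secret names; the identity must already have been
// granted access to the vault.
var client = new SecretClient(
    new Uri("https://contoso-vault.vault.azure.net/"), credential);

KeyVaultSecret secret = client.GetSecret("sql-connection-string");
Console.WriteLine(secret.Value);
```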
## Next steps
- [Enable managed identity support in an existing Azure Service Fabric cluster](./configure-existing-cluster-enable-managed-identity-token-service.md)
- [Deploy an Azure Service Fabric application with a system-assigned managed identity](./how-to-deploy-service-fabric-application-system-assigned-managed-identity.md)
- [Deploy an Azure Service Fabric application with a user-assigned managed identity](./how-to-deploy-service-fabric-application-user-assigned-managed-identity.md)
-- [Leverage the managed identity of a Service Fabric application from service code](./how-to-managed-identity-service-fabric-app-code.md)
+- [Use the managed identity of a Service Fabric application from service code](./how-to-managed-identity-service-fabric-app-code.md)
- [Grant an Azure Service Fabric application access to other Azure resources](./how-to-grant-access-other-resources.md)
- [Declaring and using application secrets as KeyVaultReferences](./service-fabric-keyvault-references.md)
service-fabric Service Fabric Reliable Services Exception Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-exception-serialization.md
Title: Enabling Data Contract serialization for Remoting exceptions in Service Fabric
-description: Enabling Data Contract serialization for Remoting exceptions in Service Fabric
+ Title: Enable data contract serialization for remoting exceptions in Service Fabric
+description: Enable data contract serialization for remoting exceptions in Azure Service Fabric.
Last updated 03/30/2022
-# Remoting Exception Serialization Overview
-BinaryFormatter based serialization is not secure and Microsoft strongly recommends not to use BinaryFormatter for data processing. More details on the security implications can be found [here](/dotnet/standard/serialization/binaryformatter-security-guide).
-Service Fabric had been using BinaryFormatter for serializing Exceptions. Starting ServiceFabric v9.0, [Data Contract based serialization](/dotnet/api/system.runtime.serialization.datacontractserializer?view=net-6.0) for remoting exceptions is made available as an opt-in feature. It is strongly recommended to opt for DataContract remoting exception serialization by following the below mentioned steps.
+# Remoting exception serialization overview
-Support for BinaryFormatter based remoting exception serialization will be deprecated in the future.
+BinaryFormatter-based serialization isn't secure, so don't use BinaryFormatter for data processing. For more information on the security implications, see [Deserialization risks in the use of BinaryFormatter and related types](/dotnet/standard/serialization/binaryformatter-security-guide).
-## Steps to enable Data Contract Serialization for Remoting Exceptions
+Azure Service Fabric used BinaryFormatter for serializing exceptions. Starting with ServiceFabric v9.0, [data contract-based serialization](/dotnet/api/system.runtime.serialization.datacontractserializer?view=net-6.0) for remoting exceptions is available as an opt-in feature. We recommend that you opt for DataContract remoting exception serialization by following the steps in this article.
+
+Support for BinaryFormatter-based remoting exception serialization will be deprecated in the future.
+
+## Enable data contract serialization for remoting exceptions
>[!NOTE]
->Data Contract Serialization for Remoting Exceptions is only available for Remoting V2/V2_1 services.
+>Data contract serialization for remoting exceptions is only available for remoting V2/V2_1 services.
-You can enable Data Contract Serialization for Remoting Exceptions using the below steps
+To enable data contract serialization for remoting exceptions:
-1. Enable DataContract remoting exception serialization on the **Service** side by using `FabricTransportRemotingListenerSettings.ExceptionSerializationTechnique` while creating the remoting listener.
+1. Enable DataContract remoting exception serialization on the **Service** side by using `FabricTransportRemotingListenerSettings.ExceptionSerializationTechnique` while you create the remoting listener.
- - StatelessService
-```csharp
-protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
-{
- return new[]
- {
- new ServiceInstanceListener(serviceContext =>
- new FabricTransportServiceRemotingListener(
- serviceContext,
- this,
- new FabricTransportRemotingListenerSettings
- {
- ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
- }),
- "ServiceEndpointV2")
- };
-}
-```
- - StatefulService
-```csharp
-protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
-{
- return new[]
- {
- new ServiceReplicaListener(serviceContext =>
- new FabricTransportServiceRemotingListener(
- serviceContext,
- this,
- new FabricTransportRemotingListenerSettings
- {
- ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
- }),
- "ServiceEndpointV2")
- };
-}
-```
-
- - ActorService
-To enable DataContract remoting exception serialization on the ActorService, override `CreateServiceReplicaListeners()` by extending `ActorService`
-```csharp
-protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
-{
- return new List<ServiceReplicaListener>
- {
- new ServiceReplicaListener(_ =>
+ - StatelessService
+
+ ```csharp
+ protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
- return new FabricTransportActorServiceRemotingListener(
- this,
- new FabricTransportRemotingListenerSettings
- {
- ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
- });
- },
- "MyActorServiceEndpointV2")
- };
-}
-```
-
-If the original exception has multiple levels of inner exceptions, then you can control the number of levels of inner exceptions to be serialized by setting `FabricTransportRemotingListenerSettings.RemotingExceptionDepth`.
-
-2. Enable DataContract remoting exception serialization on the **Client** by using `FabricTransportRemotingSettings.ExceptionDeserializationTechnique` while creating the Client Factory
- - ServiceProxyFactory creation
-```csharp
-var serviceProxyFactory = new ServiceProxyFactory(
-(callbackClient) =>
-{
- return new FabricTransportServiceRemotingClientFactory(
- new FabricTransportRemotingSettings
+ return new[]
+ {
+ new ServiceInstanceListener(serviceContext =>
+ new FabricTransportServiceRemotingListener(
+ serviceContext,
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ }),
+ "ServiceEndpointV2")
+ };
+ }
+ ```
+
+ - StatefulService
+
+ ```csharp
+ protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+    {
- ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
- },
- callbackClient);
-});
-```
- - ActorProxyFactory
-```csharp
-var actorProxyFactory = new ActorProxyFactory(
-(callbackClient) =>
-{
- return new FabricTransportActorRemotingClientFactory(
- new FabricTransportRemotingSettings
+ return new[]
+ {
+ new ServiceReplicaListener(serviceContext =>
+ new FabricTransportServiceRemotingListener(
+ serviceContext,
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ }),
+ "ServiceEndpointV2")
+ };
+ }
+ ```
+
+ - ActorService
+
+    To enable DataContract remoting exception serialization on the actor service, override `CreateServiceReplicaListeners()` by extending `ActorService`.
+
+ ```csharp
+ protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+    {
- ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
- },
- callbackClient);
-});
-```
-
-3. DataContract remoting exception serialization converts Exception to Data Transfer Object(DTO) on the service side and the DTO is converted back to Exception on the client side. Users need to register `ExceptionConvertor` for converting desired exceptions to DTO objects and vice versa.
-Framework implements Convertors for the below list of the exceptions. If user service code depends on exceptions outside the below list for retry implementation, exception handling, etc., then user needs to implement and register convertors for such exceptions.
-
- * All service fabric exceptions(derived from `System.Fabric.FabricException`)
- * SystemExceptions(derived from `System.SystemException`)
- * System.AccessViolationException
- * System.AppDomainUnloadedException
- * System.ArgumentException
- * System.ArithmeticException
- * System.ArrayTypeMismatchException
- * System.BadImageFormatException
- * System.CannotUnloadAppDomainException
- * System.Collections.Generic.KeyNotFoundException
- * System.ContextMarshalException
- * System.DataMisalignedException
- * System.ExecutionEngineException
- * System.FormatException
- * System.IndexOutOfRangeException
- * System.InsufficientExecutionStackException
- * System.InvalidCastException
- * System.InvalidOperationException
- * System.InvalidProgramException
- * System.IO.InternalBufferOverflowException
- * System.IO.InvalidDataException
- * System.IO.IOException
- * System.MemberAccessException
- * System.MulticastNotSupportedException
- * System.NotImplementedException
- * System.NotSupportedException
- * System.NullReferenceException
- * System.OperationCanceledException
- * System.OutOfMemoryException
- * System.RankException
- * System.Reflection.AmbiguousMatchException
- * System.Reflection.ReflectionTypeLoadException
- * System.Resources.MissingManifestResourceException
- * System.Resources.MissingSatelliteAssemblyException
- * System.Runtime.InteropServices.ExternalException
- * System.Runtime.InteropServices.InvalidComObjectException
- * System.Runtime.InteropServices.InvalidOleVariantTypeException
- * System.Runtime.InteropServices.MarshalDirectiveException
- * System.Runtime.InteropServices.SafeArrayRankMismatchException
- * System.Runtime.InteropServices.SafeArrayTypeMismatchException
- * System.Runtime.Serialization.SerializationException
- * System.StackOverflowException
- * System.Threading.AbandonedMutexException
- * System.Threading.SemaphoreFullException
- * System.Threading.SynchronizationLockException
- * System.Threading.ThreadInterruptedException
- * System.Threading.ThreadStateException
- * System.TimeoutException
- * System.TypeInitializationException
- * System.TypeLoadException
- * System.TypeUnloadedException
- * System.UnauthorizedAccessException
- * System.ArgumentNullException
- * System.IO.FileNotFoundException
- * System.IO.DirectoryNotFoundException
- * System.ObjectDisposedException
- * System.AggregateException
-
-## Sample implementation of service side convertor for a custom exception
-
-Below is reference `IExceptionConvertor` implementation on the **Service** and **Client** side for a well known exception type `CustomException`.
+ return new List<ServiceReplicaListener>
+ {
+ new ServiceReplicaListener(_ =>
+ {
+ return new FabricTransportActorServiceRemotingListener(
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ });
+ },
+ "MyActorServiceEndpointV2")
+ };
+ }
+ ```
-- CustomException
-```csharp
-class CustomException : Exception
-{
- public CustomException(string message, string field1, string field2)
- : base(message)
- {
- this.Field1 = field1;
- this.Field2 = field2;
- }
+ If the original exception has multiple levels of inner exceptions, you can control the number of levels of inner exceptions to be serialized by setting `FabricTransportRemotingListenerSettings.RemotingExceptionDepth`.
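+
+    For example, a minimal sketch of listener settings that limit serialization to two levels of inner exceptions (the depth value here is illustrative):
+
+    ```csharp
+    var listenerSettings = new FabricTransportRemotingListenerSettings
+    {
+        ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+        // Serialize the original exception plus up to two levels of inner exceptions.
+        RemotingExceptionDepth = 2,
+    };
+    ```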
- public CustomException(string message, Exception innerEx, string field1, string field2)
- : base(message, innerEx)
- {
- this.Field1 = field1;
- this.Field2 = field2;
- }
+1. Enable DataContract remoting exception serialization on the **Client** by using `FabricTransportRemotingSettings.ExceptionDeserializationTechnique` while you create the client factory.
- public string Field1 { get; set; }
+ - ServiceProxyFactory creation
- public string Field2 { get; set; }
-}
-```
+ ```csharp
+ var serviceProxyFactory = new ServiceProxyFactory(
+ (callbackClient) =>
+ {
+ return new FabricTransportServiceRemotingClientFactory(
+ new FabricTransportRemotingSettings
+ {
+ ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
+ },
+ callbackClient);
+ });
+ ```
-- `IExceptionConvertor` implementation on **Service** side.
-```csharp
-class CustomConvertorService : Microsoft.ServiceFabric.Services.Remoting.V2.Runtime.IExceptionConvertor
-{
- public Exception[] GetInnerExceptions(Exception originalException)
- {
- return originalException.InnerException == null ? null : new Exception[] { originalException.InnerException };
- }
+ - ActorProxyFactory
- public bool TryConvertToServiceException(Exception originalException, out ServiceException serviceException)
- {
- serviceException = null;
- if (originalException is CustomException customEx)
+ ```csharp
+ var actorProxyFactory = new ActorProxyFactory(
+ (callbackClient) =>
+    {
- serviceException = new ServiceException(customEx.GetType().FullName, customEx.Message);
- serviceException.ActualExceptionStackTrace = originalException.StackTrace;
- serviceException.ActualExceptionData = new Dictionary<string, string>()
+ return new FabricTransportActorRemotingClientFactory(
+ new FabricTransportRemotingSettings
+            {
- { "Field1", customEx.Field1 },
- { "Field2", customEx.Field2 },
- };
+ ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
+ },
+ callbackClient);
+ });
+ ```
+
+1. DataContract remoting exception serialization converts an exception to a data transfer object (DTO) on the service side. The DTO is converted back to an exception on the client side. Users need to register an `ExceptionConvertor` to convert the desired exceptions to DTO objects and vice versa.
+
+    The framework implements convertors for the following list of exceptions. If your service code depends on exceptions outside this list for retry implementation or exception handling, implement and register convertors for those exceptions.
+
+ * All Service Fabric exceptions derived from `System.Fabric.FabricException`
+ * SystemExceptions derived from `System.SystemException`
+ * System.AccessViolationException
+ * System.AppDomainUnloadedException
+ * System.ArgumentException
+ * System.ArithmeticException
+ * System.ArrayTypeMismatchException
+ * System.BadImageFormatException
+ * System.CannotUnloadAppDomainException
+ * System.Collections.Generic.KeyNotFoundException
+ * System.ContextMarshalException
+ * System.DataMisalignedException
+ * System.ExecutionEngineException
+ * System.FormatException
+ * System.IndexOutOfRangeException
+ * System.InsufficientExecutionStackException
+ * System.InvalidCastException
+ * System.InvalidOperationException
+ * System.InvalidProgramException
+ * System.IO.InternalBufferOverflowException
+ * System.IO.InvalidDataException
+ * System.IO.IOException
+ * System.MemberAccessException
+ * System.MulticastNotSupportedException
+ * System.NotImplementedException
+ * System.NotSupportedException
+ * System.NullReferenceException
+ * System.OperationCanceledException
+ * System.OutOfMemoryException
+ * System.RankException
+ * System.Reflection.AmbiguousMatchException
+ * System.Reflection.ReflectionTypeLoadException
+ * System.Resources.MissingManifestResourceException
+ * System.Resources.MissingSatelliteAssemblyException
+ * System.Runtime.InteropServices.ExternalException
+ * System.Runtime.InteropServices.InvalidComObjectException
+ * System.Runtime.InteropServices.InvalidOleVariantTypeException
+ * System.Runtime.InteropServices.MarshalDirectiveException
+ * System.Runtime.InteropServices.SafeArrayRankMismatchException
+ * System.Runtime.InteropServices.SafeArrayTypeMismatchException
+ * System.Runtime.Serialization.SerializationException
+ * System.StackOverflowException
+ * System.Threading.AbandonedMutexException
+ * System.Threading.SemaphoreFullException
+ * System.Threading.SynchronizationLockException
+ * System.Threading.ThreadInterruptedException
+ * System.Threading.ThreadStateException
+ * System.TimeoutException
+ * System.TypeInitializationException
+ * System.TypeLoadException
+ * System.TypeUnloadedException
+ * System.UnauthorizedAccessException
+ * System.ArgumentNullException
+ * System.IO.FileNotFoundException
+ * System.IO.DirectoryNotFoundException
+ * System.ObjectDisposedException
+ * System.AggregateException
+
+## Sample implementation of a service-side convertor for a custom exception
+
+The following example is a reference `IExceptionConvertor` implementation on the **Service** and **Client** sides for a well-known exception type, `CustomException`.
- return true;
+- CustomException
+
+ ```csharp
+ class CustomException : Exception
+ {
+ public CustomException(string message, string field1, string field2)
+ : base(message)
+ {
+ this.Field1 = field1;
+ this.Field2 = field2;
+        }
- return false;
+
+ public CustomException(string message, Exception innerEx, string field1, string field2)
+ : base(message, innerEx)
+ {
+ this.Field1 = field1;
+ this.Field2 = field2;
+ }
+
+ public string Field1 { get; set; }
+
+ public string Field2 { get; set; }
+    }
-}
-```
-Actual exception observed during the execution of the remoting call is passed as input to `TryConvertToServiceException`. If the type of the exception is a well known one, then `TryConvertToServiceException` should convert the original exception to `ServiceException`
- and return it as an out parameter. A true value should be returned if the original exception type is well known one and original exception is successfully converted to the `ServiceException`, false otherwise.
+ ```
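+
+    For context, here's a hypothetical remoting method (not part of the original sample) that surfaces `CustomException` to callers:
+
+    ```csharp
+    public Task<string> ProcessAsync(string input)
+    {
+        if (string.IsNullOrEmpty(input))
+        {
+            // Thrown on the service. With the convertors below registered, this exception
+            // is serialized as a ServiceException DTO and rebuilt as CustomException on the client.
+            throw new CustomException("Input is required.", "validation", "ProcessAsync");
+        }
+
+        return Task.FromResult(input.ToUpperInvariant());
+    }
+    ```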
- A list of inner exceptions at the current level should be returned by `GetInnerExceptions()`.
+- `IExceptionConvertor` implementation on the **Service** side:
-- `IExceptionConvertor` implementation on **Client** side.
-```csharp
-class CustomConvertorClient : Microsoft.ServiceFabric.Services.Remoting.V2.Client.IExceptionConvertor
-{
- public bool TryConvertFromServiceException(ServiceException serviceException, out Exception actualException)
+ ```csharp
+ class CustomConvertorService : Microsoft.ServiceFabric.Services.Remoting.V2.Runtime.IExceptionConvertor
+    {
- return this.TryConvertFromServiceException(serviceException, (Exception)null, out actualException);
+ public Exception[] GetInnerExceptions(Exception originalException)
+ {
+ return originalException.InnerException == null ? null : new Exception[] { originalException.InnerException };
+ }
+
+ public bool TryConvertToServiceException(Exception originalException, out ServiceException serviceException)
+ {
+ serviceException = null;
+ if (originalException is CustomException customEx)
+ {
+ serviceException = new ServiceException(customEx.GetType().FullName, customEx.Message);
+ serviceException.ActualExceptionStackTrace = originalException.StackTrace;
+ serviceException.ActualExceptionData = new Dictionary<string, string>()
+ {
+ { "Field1", customEx.Field1 },
+ { "Field2", customEx.Field2 },
+ };
+
+ return true;
+ }
+
+ return false;
+ }
+    }
+ ```
- public bool TryConvertFromServiceException(ServiceException serviceException, Exception innerException, out Exception actualException)
- {
- actualException = null;
- if (serviceException.ActualExceptionType == typeof(CustomException).FullName)
- {
- actualException = new CustomException(
- serviceException.Message,
- innerException,
- serviceException.ActualExceptionData["Field1"],
- serviceException.ActualExceptionData["Field2"]);
+The actual exception observed during the execution of the remoting call is passed as input to `TryConvertToServiceException`. If the exception is of a well-known type, `TryConvertToServiceException` should convert the original exception to `ServiceException` and return it as an out parameter. The method should return true if the original exception is of a well-known type and was successfully converted to `ServiceException`, and false otherwise.
- return true;
- }
+ A list of inner exceptions at the current level should be returned by `GetInnerExceptions()`.
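+
+    For example, a sketch (an assumption beyond the original sample) of a `GetInnerExceptions` implementation that also handles `AggregateException`, which carries multiple inner exceptions at one level (requires `using System.Linq;`):
+
+    ```csharp
+    public Exception[] GetInnerExceptions(Exception originalException)
+    {
+        // AggregateException holds several inner exceptions at the same level;
+        // return them all so that each one is converted.
+        if (originalException is AggregateException aggregateException)
+        {
+            return aggregateException.InnerExceptions.ToArray();
+        }
+
+        return originalException.InnerException == null
+            ? null
+            : new Exception[] { originalException.InnerException };
+    }
+    ```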
- return false;
- }
+- `IExceptionConvertor` implementation on the **Client** side:
- public bool TryConvertFromServiceException(ServiceException serviceException, Exception[] innerExceptions, out Exception actualException)
+ ```csharp
+ class CustomConvertorClient : Microsoft.ServiceFabric.Services.Remoting.V2.Client.IExceptionConvertor
+    {
- throw new NotImplementedException();
+ public bool TryConvertFromServiceException(ServiceException serviceException, out Exception actualException)
+ {
+ return this.TryConvertFromServiceException(serviceException, (Exception)null, out actualException);
+ }
+
+ public bool TryConvertFromServiceException(ServiceException serviceException, Exception innerException, out Exception actualException)
+ {
+ actualException = null;
+ if (serviceException.ActualExceptionType == typeof(CustomException).FullName)
+ {
+ actualException = new CustomException(
+ serviceException.Message,
+ innerException,
+ serviceException.ActualExceptionData["Field1"],
+ serviceException.ActualExceptionData["Field2"]);
+
+ return true;
+ }
+
+ return false;
+ }
+
+ public bool TryConvertFromServiceException(ServiceException serviceException, Exception[] innerExceptions, out Exception actualException)
+ {
+ throw new NotImplementedException();
+ }
+    }
-}
-```
-`ServiceException` is passed as a parameter to `TryConvertFromServiceException` along with converted `innerException[s]`. If the actual exception type(`ServiceException.ActualExceptionType`) is a known one, then the convertor should create an actual exception object from the `ServiceException` and `innerException[s]`.
+ ```
-- `IExceptionConvertor` registration on the **Service** side.
+`ServiceException` is passed as a parameter to `TryConvertFromServiceException` along with converted `innerException[s]`. If the actual exception type, `ServiceException.ActualExceptionType`, is a known one, the convertor should create an actual exception object from `ServiceException` and `innerException[s]`.
- To register convertors, `CreateServiceInstanceListeners` has to be overridden and list of `IExceptionConvertor` has to be passed while creating RemotingListener instance.
+- `IExceptionConvertor` registration on the **Service** side:
- - *StatelessService*
-```csharp
-protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
-{
- return new[]
- {
- new ServiceInstanceListener(serviceContext =>
- new FabricTransportServiceRemotingListener(
- serviceContext,
- this,
- new FabricTransportRemotingListenerSettings
- {
- ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
- },
- exceptionConvertors: new[]
- {
- new CustomConvertorService(),
- }),
- "ServiceEndpointV2")
- };
-}
-```
+ To register convertors, `CreateServiceInstanceListeners` must be overridden and the list of `IExceptionConvertor` classes must be passed while you create the `RemotingListener` instance.
+
+ - *StatelessService*
+
+ ```csharp
+ protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
+ {
+ return new[]
+ {
+ new ServiceInstanceListener(serviceContext =>
+ new FabricTransportServiceRemotingListener(
+ serviceContext,
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ },
+ exceptionConvertors: new[]
+ {
+ new CustomConvertorService(),
+ }),
+ "ServiceEndpointV2")
+ };
+ }
+ ```
- *StatefulService*
-```csharp
-protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
-{
- return new[]
+
+ ```csharp
+ protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+    {
- new ServiceReplicaListener(serviceContext =>
- new FabricTransportServiceRemotingListener(
- serviceContext,
- this,
- new FabricTransportRemotingListenerSettings
- {
- ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
- },
- exceptionConvertors: new []
- {
- new CustomConvertorService(),
- }),
- "ServiceEndpointV2")
- };
-}
-```
+ return new[]
+ {
+ new ServiceReplicaListener(serviceContext =>
+ new FabricTransportServiceRemotingListener(
+ serviceContext,
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ },
+ exceptionConvertors: new []
+ {
+ new CustomConvertorService(),
+ }),
+ "ServiceEndpointV2")
+ };
+ }
+ ```
- *ActorService*
-```csharp
-protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
-{
- return new List<ServiceReplicaListener>
+
+ ```csharp
+ protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
+    {
- new ServiceReplicaListener(_ =>
+ return new List<ServiceReplicaListener>
{
- return new FabricTransportActorServiceRemotingListener(
- this,
- new FabricTransportRemotingListenerSettings
- {
- ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
- },
- exceptionConvertors: new[]
- {
- new CustomConvertorService(),
- });
- },
- "MyActorServiceEndpointV2")
- };
-}
-```
-- `IExceptionConvertor` registration on the **Client** side.
+ new ServiceReplicaListener(_ =>
+ {
+ return new FabricTransportActorServiceRemotingListener(
+ this,
+ new FabricTransportRemotingListenerSettings
+ {
+ ExceptionSerializationTechnique = FabricTransportRemotingListenerSettings.ExceptionSerialization.Default,
+ },
+ exceptionConvertors: new[]
+ {
+ new CustomConvertorService(),
+ });
+ },
+ "MyActorServiceEndpointV2")
+ };
+ }
+ ```
- To register convertors, list of `IExceptionConvertor`s has to be passed while creating ClientFactory instance.
+- `IExceptionConvertor` registration on the **Client** side:
+
+ To register convertors, the list of `IExceptionConvertor` classes must be passed while you create the `ClientFactory` instance.
- *ServiceProxyFactory creation*
-```csharp
-var serviceProxyFactory = new ServiceProxyFactory(
-(callbackClient) =>
-{
- return new FabricTransportServiceRemotingClientFactory(
- new FabricTransportRemotingSettings
- {
- ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
- },
- callbackClient,
- exceptionConvertors: new[]
- {
- new CustomConvertorClient(),
- });
-});
-```
+
+ ```csharp
+ var serviceProxyFactory = new ServiceProxyFactory(
+ (callbackClient) =>
+ {
+ return new FabricTransportServiceRemotingClientFactory(
+ new FabricTransportRemotingSettings
+ {
+ ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
+ },
+ callbackClient,
+ exceptionConvertors: new[]
+ {
+ new CustomConvertorClient(),
+ });
+ });
+ ```
- *ActorProxyFactory creation*
-```csharp
-var actorProxyFactory = new ActorProxyFactory(
-(callbackClient) =>
-{
- return new FabricTransportActorRemotingClientFactory(
- new FabricTransportRemotingSettings
- {
- ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
- },
- callbackClient,
- exceptionConvertors: new[]
- {
- new CustomConvertorClient(),
- });
-});
-```
+
+ ```csharp
+ var actorProxyFactory = new ActorProxyFactory(
+ (callbackClient) =>
+ {
+ return new FabricTransportActorRemotingClientFactory(
+ new FabricTransportRemotingSettings
+ {
+ ExceptionDeserializationTechnique = FabricTransportRemotingSettings.ExceptionDeserialization.Default,
+ },
+ callbackClient,
+ exceptionConvertors: new[]
+ {
+ new CustomConvertorClient(),
+ });
+ });
+ ```
+ >[!NOTE]
->If the framework finds the convertor for the exception, then the converted(actual) exception is wrapped inside AggregateException and is thrown at the remoting API(proxy). If the framework fails to find the convertor, then ServiceException which contains all the details of the actual exception is wrapped inside AggregateException and is thrown.
+>If the framework finds the convertor for the exception, the converted (actual) exception is wrapped inside `AggregateException` and is thrown at the remoting API (proxy). If the framework fails to find the convertor, then `ServiceException`, which contains all the details of the actual exception, is wrapped inside `AggregateException` and is thrown.
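+
+For example, a sketch (with a hypothetical `proxy` variable and method) of handling the converted exception at the call site:
+
+```csharp
+try
+{
+    string result = await proxy.ProcessAsync("example input");
+}
+catch (AggregateException aggEx) when (aggEx.InnerException is CustomException customEx)
+{
+    // The framework rebuilt CustomException through the registered convertor and
+    // wrapped it in an AggregateException thrown at the proxy.
+    Console.WriteLine($"{customEx.Message} ({customEx.Field1}, {customEx.Field2})");
+}
+```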
+
+### Upgrade an existing service to enable data contract serialization for remoting exceptions
+
+Existing services must upgrade in the following order (*Service first*). Failure to follow this order could result in misbehavior in retry logic and exception handling.
+
+1. Implement the **Service** side `ExceptionConvertor` classes for the desired exceptions, if any. Update the remoting listener registration logic with `ExceptionSerializationTechnique` and the list of `IExceptionConvertor` classes. Upgrade the existing service to apply the exception serialization changes.
-### Step to upgrade an existing service to enable DataContract serialization for remoting exceptions
-Existing services must follow the below order(*Service first*) to upgrade. Failure to follow the below order could result in misbehavior in retry logic, exception handling, etc.
-1. Implement the **Service** side `ExceptionConvertor`s for the desired exceptions(if any). Update the remoting listener registration logic with `ExceptionSerializationTechnique` and list of `IExceptionConvertor`s. Upgrade the existing service to apply the exception serialization changes
-2. Implement the **Client** side `ExceptionConvertor`s for the desired exceptions(if any). Update the ProxyFactory creation logic with `ExceptionSerializationTechnique` and list of `IExceptionConvertor`s. Upgrade the existing client to apply the exception serialization changes
+1. Implement the **Client** side `ExceptionConvertor` classes for the desired exceptions, if any. Update the ProxyFactory creation logic with `ExceptionSerializationTechnique` and the list of `IExceptionConvertor` classes. Upgrade the existing client to apply the exception serialization changes.
## Next steps
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-enable.md
To enable blob soft delete for your storage account by using the Azure portal, follow these steps:
1. Install the latest **PowerShellGet** module. Then, close and reopen the PowerShell console.

    ```powershell
- install-Module PowerShellGet -Repository PSGallery -Force
+ Install-Module PowerShellGet -Repository PSGallery -Force
    ```

2. Install the **Az.Storage** preview module.
storage Scalability Targets Standard Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-standard-account.md
Previously updated : 05/09/2022 Last updated : 05/25/2022
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Previously updated : 05/18/2022 Last updated : 05/26/2022
To create an Azure storage account with the Azure portal, follow these steps:
1. From the left portal menu, select **Storage accounts** to display a list of your storage accounts. If the portal menu isn't visible, click the menu button to toggle it on.
- :::image type="content" source="media/storage-account-create/menu-expand-sml.png" alt-text="Image of the Azure Portal homepage showing the location of the Menu button near the top left corner of the browser" lightbox="media/storage-account-create/menu-expand-lrg.png":::
+ :::image type="content" source="media/storage-account-create/menu-expand-sml.png" alt-text="Image of the Azure Portal homepage showing the location of the Menu button near the top left corner of the browser." lightbox="media/storage-account-create/menu-expand-lrg.png":::
1. On the **Storage accounts** page, select **Create**.
- :::image type="content" source="media/storage-account-create/create-button-sml.png" alt-text="Image showing the location of the create button within the Azure Portal Storage Accounts page" lightbox="media/storage-account-create/create-button-lrg.png":::
+ :::image type="content" source="media/storage-account-create/create-button-sml.png" alt-text="Image showing the location of the create button within the Azure Portal Storage Accounts page." lightbox="media/storage-account-create/create-button-lrg.png":::
Options for your new storage account are organized into tabs in the **Create a storage account** page. The following sections describe each of the tabs and their options.
The following table describes the fields on the **Basics** tab.
The following image shows a standard configuration of the basic properties for a new storage account.

### Advanced tab
The following table describes the fields on the **Advanced** tab.
The following image shows a standard configuration of the advanced properties for a new storage account.

### Networking tab
The following table describes the fields on the **Networking** tab.
| Section | Field | Required or optional | Description |
|--|--|--|--|
| Network connectivity | Connectivity method | Required | By default, incoming network traffic is routed to the public endpoint for your storage account. You can specify that traffic must be routed to the public endpoint through an Azure virtual network. You can also configure private endpoints for your storage account. For more information, see [Use private endpoints for Azure Storage](storage-private-endpoints.md). |
+| Network connectivity | Endpoint type | Required | Azure Storage supports two types of endpoints: standard endpoints (the default) and Azure DNS zone endpoints (preview). Within a given subscription, you can create up to 250 accounts with standard endpoints per region, and up to 5000 accounts with Azure DNS zone endpoints per region. To learn how to view the service endpoints for an existing storage account, see [Get service endpoints for the storage account](storage-account-get-info.md#get-service-endpoints-for-the-storage-account). |
| Network routing | Routing preference | Required | The network routing preference specifies how network traffic is routed to the public endpoint of your storage account from clients over the internet. By default, a new storage account uses Microsoft network routing. You can also choose to route network traffic through the POP closest to the storage account, which may lower networking costs. For more information, see [Network routing preference for Azure Storage](network-routing-preference.md). |

The following image shows a standard configuration of the networking properties for a new storage account.
+> [!IMPORTANT]
+> Azure DNS zone endpoints are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
### Data protection tab
The following table describes the fields on the **Data protection** tab.
The following image shows a standard configuration of the data protection properties for a new storage account.

### Encryption tab
On the **Encryption** tab, you can configure options that relate to how your data is encrypted.

The following image shows a standard configuration of the encryption properties for a new storage account.

### Tags tab
On the **Tags** tab, you can specify Resource Manager tags to help organize your Azure resources.

The following image shows a standard configuration of the index tag properties for a new storage account.

### Review + create tab
If validation fails, then the portal indicates which settings need to be modified.

The following image shows the **Review** tab data prior to the creation of a new storage account.

# [PowerShell](#tab/azure-powershell)

To create a general-purpose v2 storage account with PowerShell, first create a new resource group by calling the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command:
-```azurepowershell-interactive
+```azurepowershell
$resourceGroup = "<resource-group>"
$location = "<location>"
New-AzResourceGroup -Name $resourceGroup -Location $location
If you're not sure which region to specify for the `-Location` parameter, you can retrieve a list of supported regions for your subscription with the [Get-AzLocation](/powershell/module/az.resources/get-azlocation) command:
-```azurepowershell-interactive
+```azurepowershell
Get-AzLocation | select Location
```

Next, create a standard general-purpose v2 storage account with read-access geo-redundant storage (RA-GRS) by using the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) command. Remember that the name of your storage account must be unique across Azure, so replace the placeholder value in brackets with your own unique value:
-```azurepowershell-interactive
+```azurepowershell
New-AzStorageAccount -ResourceGroupName $resourceGroup `
  -Name <account-name> `
  -Location $location `
  -Kind StorageV2
```
-To enable a hierarchical namespace for the storage account to use [Azure Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), set the `EnableHierarchicalNamespace' parameter to `$True` on the call to the **New-AzStorageAccount** command.
+To create an account with Azure DNS zone endpoints (preview), follow these steps:
+
+1. Register for the preview as described in [Azure DNS zone endpoints (preview)](storage-account-overview.md#azure-dns-zone-endpoints-preview).
+
+1. Make sure you have the latest version of PowerShellGet installed.
+
+ ```azurepowershell
+    Install-Module PowerShellGet -Repository PSGallery -Force
+ ```
+
+1. Close and reopen the PowerShell console.
+
+1. Install version [4.4.2-preview](https://www.powershellgallery.com/packages/Az.Storage/4.4.2-preview) or later of the Az.Storage PowerShell module. You may need to uninstall other versions of the PowerShell module. For more information about installing Azure PowerShell, see [Install Azure PowerShell with PowerShellGet](/powershell/azure/install-az-ps).
+
+ ```azurepowershell
+    Install-Module Az.Storage -Repository PSGallery -RequiredVersion 4.4.2-preview -AllowClobber -AllowPrerelease -Force
+ ```
+
+Next, create the account, specifying `AzureDnsZone` for the `-DnsEndpointType` parameter. After the account is created, you can see the service endpoints by getting the `PrimaryEndpoints` and `SecondaryEndpoints` properties for the storage account.
+
+```azurepowershell
+$rgName = "<resource-group>"
+$accountName = "<storage-account>"
+
+$account = New-AzStorageAccount -ResourceGroupName $rgName `
+ -Name $accountName `
+ -SkuName Standard_RAGRS `
+ -Location <location> `
+ -Kind StorageV2 `
+ -DnsEndpointType AzureDnsZone
+
+$account.PrimaryEndpoints
+$account.SecondaryEndpoints
+```
+
+To enable a hierarchical namespace for the storage account to use [Azure Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), set the `EnableHierarchicalNamespace` parameter to `$True` on the call to the **New-AzStorageAccount** command.
The following table shows which values to use for the `SkuName` and `Kind` parameters to create a particular type of storage account with the desired redundancy configuration.
To create a general-purpose v2 storage account with Azure CLI, first create a new resource group:
```azurecli-interactive
az group create \
    --name storage-resource-group \
- --location westus
+ --location eastus
```

If you're not sure which region to specify for the `--location` parameter, you can retrieve a list of supported regions for your subscription with the [az account list-locations](/cli/azure/account#az-account-list) command.
Next, create a standard general-purpose v2 storage account with read-access geo-redundant storage (RA-GRS):
az storage account create \
    --name <account-name> \
    --resource-group storage-resource-group \
- --location westus \
+ --location eastus \
    --sku Standard_RAGRS \
    --kind StorageV2
```
+To create an account with Azure DNS zone endpoints (preview), first register for the preview as described in [Azure DNS zone endpoints (preview)](storage-account-overview.md#azure-dns-zone-endpoints-preview). Next, install the preview extension for the Azure CLI if it's not already installed:
+
+```azurecli
+az extension add --name storage-preview
+```
+
+Next, create the account, specifying `AzureDnsZone` for the `--dns-endpoint-type` parameter. After the account is created, you can see the service endpoints by getting the `PrimaryEndpoints` property of the storage account.
+
+```azurecli
+az storage account create \
+ --name <account-name> \
+ --resource-group <resource-group> \
+ --location <location> \
+ --dns-endpoint-type AzureDnsZone
+```
+
+After the account is created, you can return the service endpoints by getting the `primaryEndpoints` and `secondaryEndpoints` properties for the storage account.
+
+```azurecli
+az storage account show \
+ --resource-group <resource-group> \
+ --name <account-name> \
+ --query '[primaryEndpoints, secondaryEndpoints]'
+```
+ To enable a hierarchical namespace for the storage account to use [Azure Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), set the `enable-hierarchical-namespace` parameter to `true` on the call to the **az storage account create** command. Creating a hierarchical namespace requires Azure CLI version 2.0.79 or later. The following table shows which values to use for the `sku` and `kind` parameters to create a particular type of storage account with the desired redundancy configuration.
To learn how to modify this Bicep file or create new ones, see:
You can use either Azure PowerShell or Azure CLI to deploy a Resource Manager template to create a storage account. The template used in this how-to article is from [Azure Resource Manager quickstart templates](https://azure.microsoft.com/resources/templates/storage-account-create/). To run the scripts, select **Try it** to open the Azure Cloud Shell. To paste the script, right-click the shell, and then select **Paste**.
-```azurepowershell-interactive
+```azurepowershell
$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
az storage account delete --name storageAccountName --resource-group resourceGro
To delete the storage account, use either Azure PowerShell or Azure CLI.
-```azurepowershell-interactive
+```azurepowershell
$storageResourceGroupName = Read-Host -Prompt "Enter the resource group name"
$storageAccountName = Read-Host -Prompt "Enter the storage account name"
Remove-AzStorageAccount -Name $storageAccountName -ResourceGroupName $storageResourceGroupName
storage Storage Account Get Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-get-info.md
Previously updated : 06/23/2021 Last updated : 05/26/2022
az storage account show \
+## Get service endpoints for the storage account
+
+The service endpoints for a storage account provide the base URL for any blob, queue, table, or file object in Azure Storage. Use this base URL to construct the address for any given resource.
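+
+For example, a minimal C# sketch (with hypothetical account, container, and blob names) that composes a blob URL from the Blob Storage endpoint:
+
+```csharp
+// Hypothetical values; substitute your own account, container, and blob names.
+string blobEndpoint = "https://mystorageaccount.blob.core.windows.net";
+string containerName = "mycontainer";
+string blobName = "myblob.txt";
+
+// The object's address is the service endpoint plus the object's path.
+var blobUri = new Uri($"{blobEndpoint}/{containerName}/{blobName}");
+Console.WriteLine(blobUri); // https://mystorageaccount.blob.core.windows.net/mycontainer/myblob.txt
+```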
+
+# [Azure portal](#tab/portal)
+
+To get the service endpoints for a storage account in the Azure portal, follow these steps:
+
+1. Navigate to your storage account in the Azure portal.
+1. In the **Settings** section, locate the **Endpoints** setting.
+1. On the **Endpoints** page, you'll see the service endpoint for each Azure Storage service, as well as the resource ID.
+
+ :::image type="content" source="media/storage-account-get-info/service-endpoints-portal-sml.png" alt-text="Screenshot showing how to retrieve service endpoints for a storage account." lightbox="media/storage-account-get-info/service-endpoints-portal-lrg.png":::
+
+If the storage account is geo-replicated, the secondary endpoints will also appear on this page.
+
+# [PowerShell](#tab/powershell)
+
+To get the service endpoints for a storage account with PowerShell, call [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) and return the `PrimaryEndpoints` property. If the storage account is geo-replicated, then the `SecondaryEndpoints` property returns the secondary endpoints.
+
+```azurepowershell
+(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).PrimaryEndpoints
+(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).SecondaryEndpoints
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To get the service endpoints for a storage account with Azure CLI, call [az storage account show](/cli/azure/storage/account#az-storage-account-show) and return the `primaryEndpoints` property. If the storage account is geo-replicated, then the `secondaryEndpoints` property returns the secondary endpoints.
+
+```azurecli
+az storage account show \
+ --resource-group <resource-group> \
+ --name <account-name> \
+ --query '[primaryEndpoints, secondaryEndpoints]'
+```
+++
+## Get a connection string for the storage account
+
+You can use a connection string to authorize access to Azure Storage with the account access keys (Shared Key authorization). To learn more about connection strings, see [Configure Azure Storage connection strings](storage-configure-connection-string.md).
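+
+For example, a sketch of authorizing a client with a connection string, assuming the `Azure.Storage.Blobs` client library (the connection string value is a placeholder):
+
+```csharp
+using Azure.Storage.Blobs;
+
+// Placeholder; copy the real value from the portal, PowerShell, or Azure CLI as shown below.
+string connectionString = "DefaultEndpointsProtocol=https;AccountName=<storage-account>;AccountKey=<account-key>;EndpointSuffix=core.windows.net";
+
+// The client parses the endpoints and credentials from the connection string.
+var blobServiceClient = new BlobServiceClient(connectionString);
+```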
++
+# [Portal](#tab/portal)
+
+To get a connection string in the Azure portal, follow these steps:
+
+1. Navigate to your storage account in the Azure portal.
+1. In the **Security + networking** section, locate the **Access keys** setting.
+1. To display the account keys and associated connection strings, select the **Show keys** button at the top of the page.
+1. To copy a connection string to the clipboard, select the **Copy** button to the right of the connection string.
+
+# [PowerShell](#tab/powershell)
+
+To get a connection string with PowerShell, first get a `StorageAccountContext` object, then retrieve the `ConnectionString` property.
+
+```azurepowershell
+$rgName = "<resource-group>"
+$accountName = "<storage-account>"
+
+(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).Context.ConnectionString
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To get a connection string with Azure CLI, call the [az storage account show-connection-string](/cli/azure/storage/account#az-storage-account-show-connection-string) command.
+
+```azurecli
+az storage account show-connection-string --resource-group <resource-group> --name <storage-account>
+```
+++ ## Next steps - [Storage account overview](storage-account-overview.md)
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
Previously updated : 04/05/2022 Last updated : 05/26/2022
The service-level agreement (SLA) for Azure Storage accounts is available at [SL
> [!NOTE] > You can't change a storage account to a different type after it's created. To move your data to a storage account of a different type, you must create a new account and copy the data to the new account.
-## Storage account endpoints
-
-A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure Storage has an address that includes your unique account name. The combination of the account name and the Azure Storage service endpoint forms the endpoints for your storage account.
+## Storage account name
When naming your storage account, keep these rules in mind: - Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. - Your storage account name must be unique within Azure. No two storage accounts can have the same name.
-The following table lists the format of the endpoint for each of the Azure Storage services.
+## Storage account endpoints
+
+A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure Storage has a URL address that includes your unique account name. The combination of the account name and the service endpoint forms the endpoints for your storage account.
+
+There are two types of service endpoints available for a storage account:
+
+- Standard endpoints (recommended). You can create up to 250 storage accounts per region with standard endpoints in a given subscription.
+- Azure DNS zone endpoints (preview). You can create up to 5000 storage accounts per region with Azure DNS zone endpoints in a given subscription.
+
+Within a single subscription, you can create accounts with either standard or Azure DNS zone endpoints, for a maximum of 5250 accounts per subscription.
+
+> [!IMPORTANT]
+> Azure DNS zone endpoints are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+You can configure your storage account to use a custom domain for the Blob Storage endpoint. For more information, see [Configure a custom domain name for your Azure Storage account](../blobs/storage-custom-domain-name.md).
+
+### Standard endpoints
+
+A standard service endpoint in Azure Storage includes the protocol (HTTPS is recommended), the storage account name as the subdomain, and a fixed domain that includes the name of the service.
+
+The following table lists the format for the standard endpoints for each of the Azure Storage services.
| Storage service | Endpoint | |--|--| | Blob Storage | `https://<storage-account>.blob.core.windows.net` |
+| Static website (Blob Storage) | `https://<storage-account>.web.core.windows.net` |
| Data Lake Storage Gen2 | `https://<storage-account>.dfs.core.windows.net` | | Azure Files | `https://<storage-account>.file.core.windows.net` | | Queue Storage | `https://<storage-account>.queue.core.windows.net` | | Table Storage | `https://<storage-account>.table.core.windows.net` |
-Construct the URL for accessing an object in a storage account by appending the object's location in the storage account to the endpoint. For example, the URL for a blob will be similar to:
+When your account is created with standard endpoints, you can easily construct the URL for an object in Azure Storage by appending the object's location in the storage account to the endpoint. For example, the URL for a blob will be similar to:
`https://*mystorageaccount*.blob.core.windows.net/*mycontainer*/*myblob*`
-You can also configure your storage account to use a custom domain for blobs. For more information, see [Configure a custom domain name for your Azure Storage account](../blobs/storage-custom-domain-name.md).
+### Azure DNS zone endpoints (preview)
+
+When you create an Azure Storage account with Azure DNS zone endpoints (preview), Azure Storage dynamically selects an Azure DNS zone and assigns it to the account. The new storage account's endpoints are created in the dynamically selected Azure DNS zone. For more information about Azure DNS zones, see [DNS zones](../../dns/dns-zones-records.md#dns-zones).
+
+An Azure DNS zone service endpoint in Azure Storage includes the protocol (HTTPS is recommended), the storage account name as the subdomain, and a domain that includes the name of the service and the identifier for the DNS zone. The identifier for the DNS zone always begins with `z` and can range from `z00` to `z99`.
+
+The following table lists the format of Azure DNS zone endpoints for each of the Azure Storage services, where `z[00-99]` represents the identifier for the DNS zone.
+
+| Storage service | Endpoint |
+|--|--|
+| Blob Storage | `https://<storage-account>.z[00-99].blob.core.windows.net` |
+| Static website (Blob Storage) | `https://<storage-account>.z[00-99].web.core.windows.net` |
+| Data Lake Storage Gen2 | `https://<storage-account>.z[00-99].dfs.core.windows.net` |
+| Azure Files | `https://<storage-account>.z[00-99].file.core.windows.net` |
+| Queue Storage | `https://<storage-account>.z[00-99].queue.core.windows.net` |
+| Table Storage | `https://<storage-account>.z[00-99].table.core.windows.net` |
+
+> [!IMPORTANT]
+> You can create up to 5000 accounts with Azure DNS zone endpoints per subscription. However, you may need to update your application code to query for the account endpoint at runtime. You can call the [Get Properties](/rest/api/storagerp/storage-accounts/get-properties) operation to query for the storage account endpoints.
+
+Azure DNS zone endpoints are supported for accounts created with the Azure Resource Manager deployment model only. For more information, see [Azure Resource Manager overview](../../azure-resource-manager/management/overview.md).
+
+To learn how to create a storage account with Azure DNS zone endpoints, see [Create a storage account](storage-account-create.md).
+
+#### About the preview
+
+The Azure DNS zone endpoints preview is available in all public regions. The preview is not available in any government cloud regions.
+
+To register for the preview, follow the instructions provided in [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md#register-preview-feature). Specify `PartitionedDnsPublicPreview` as the feature name and `Microsoft.Storage` as the provider namespace.
## Migrate a storage account
The following table describes the legacy storage account types. These account types aren't recommended by Microsoft, but may be used in certain scenarios:
| Standard general-purpose v1 | Blob Storage, Queue Storage, Table Storage, and Azure Files | LRS/GRS/RA-GRS | Resource Manager, classic | General-purpose v1 accounts may not have the latest features or the lowest per-gigabyte pricing. Consider using it for these scenarios:<br /><ul><li>Your applications require the Azure [classic deployment model](../../azure-portal/supportability/classic-deployment-model-quota-increase-requests.md).</li><li>Your applications are transaction-intensive or use significant geo-replication bandwidth, but don't require large capacity. In this case, a general-purpose v1 account may be the most economical choice.</li><li>You use a version of the Azure Storage REST API that is earlier than February 14, 2014, or a client library with a version lower than 4.x, and you can't upgrade your application.</li><li>You're selecting a storage account to use as a cache for Azure Site Recovery. Because Site Recovery is transaction-intensive, a general-purpose v1 account may be more cost-effective. For more information, see [Support matrix for Azure VM disaster recovery between Azure regions](../../site-recovery/azure-to-azure-support-matrix.md#cache-storage).</li></ul> | | Standard Blob Storage | Blob Storage (block blobs and append blobs only) | LRS/GRS/RA-GRS | Resource Manager | Microsoft recommends using standard general-purpose v2 accounts instead when possible. |
+## Scalability targets for standard storage accounts
++ ## Next steps - [Create a storage account](storage-account-create.md)
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
Previously updated : 04/14/2022 Last updated : 05/26/2022
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
If cloud tiering is enabled, solutions that directly back up the server endpoint
If you prefer to use an on-premises backup solution, backups should be performed on a server in the sync group that has cloud tiering disabled. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will be synced to all endpoints in the sync group and existing files will be replaced with the version restored from backup. Volume-level restores will not replace newer file versions in the Azure file share or other server endpoints.
-> [!WARNING]
-> If you need to use Robocopy /B with an Azure File Sync agent running on either source or target server, please upgrade to Azure File Sync agent version v12.0 or above. Using Robocopy /B with agent versions less than v12.0 will lead to the corruption of tiered files during the copy.
- > [!Note] > Bare-metal (BMR) restore can cause unexpected results and is not currently supported. > [!Note]
-> With Version 9 of the Azure File Sync agent, VSS snapshots (including Previous Versions tab) are now supported on volumes which have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. [Learn how](file-sync-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
+> VSS snapshots (including Previous Versions tab) are supported on volumes which have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. [Learn how](file-sync-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
## Data Classification

If you have data classification software installed, enabling cloud tiering may result in increased cost for two reasons:
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot.md
This error occurs if the firewall and virtual network settings are enabled on th
| **Error string** | ERROR_ACCESS_DENIED | | **Remediation required** | Yes |
-This error can occur if the Azure File Sync cannot access the storage account due to security settings or if the NT AUTHORITY\SYSTEM account does not have permissions to the System Volume Information folder on the volume where the server endpoint is located. Note, if individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section.
+This error can occur if Azure File Sync cannot access the storage account due to security settings or if the NT AUTHORITY\SYSTEM account does not have permissions to the System Volume Information folder on the volume where the server endpoint is located. Note, if individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section.
1. Verify the **SMB security settings** on the storage account are allowing **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings)
virtual-machines High Availability Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide.md
[virtual-machines-manage-availability]:../../windows/manage-availability.md [virtual-machines-ps-create-preconfigure-windows-resource-manager-vms]:../../virtual-machines-windows-ps-create.md [virtual-machines-sizes]:../../virtual-machines-windows-sizes.md
-[virtual-machines-windows-portal-sql-alwayson-availability-groups-manual]:../../windows/sql/virtual-machines-windows-portal-sql-alwayson-availability-groups-manual.md
[virtual-machines-windows-portal-sql-alwayson-int-listener]:/azure/azure-sql/virtual-machines/windows/availability-group-load-balancer-portal-configure [virtual-machines-upload-image-windows-resource-manager]:../../virtual-machines-windows-upload-image.md [virtual-machines-windows-tutorial]:../../virtual-machines-windows-hero-tutorial.md
_**Figure 7:** Example of a high-availability SAP DBMS, with SQL Server Always O
For more information about clustering SQL Server in Azure by using the Azure Resource Manager deployment model, see these articles:
-* [Configure Always On availability group in Azure Virtual Machines manually by using Resource Manager][virtual-machines-windows-portal-sql-alwayson-availability-groups-manual]
+* [Configure Always On availability group in Azure Virtual Machines manually by using Resource Manager](/azure/azure-sql/virtual-machines/windows/availability-group-overview)
* [Configure an Azure internal load balancer for an Always On availability group in Azure][virtual-machines-windows-portal-sql-alwayson-int-listener]

## <a name="045252ed-0277-4fc8-8f46-c5a29694a816"></a> End-to-end high-availability deployment scenarios
virtual-machines Sap High Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios.md
[planning-guide-9.1]:planning-guide.md#6f0a47f3-a289-4090-a053-2521618a28c3 [planning-guide-azure-premium-storage]:planning-guide.md#ff5ad0f9-f7f4-4022-9102-af07aef3bc92
-[virtual-machines-windows-portal-sql-alwayson-availability-groups-manual]:../../windows/sql/virtual-machines-windows-portal-sql-alwayson-availability-groups-manual.md
[virtual-machines-windows-portal-sql-alwayson-int-listener]:/azure/azure-sql/virtual-machines/windows/availability-group-load-balancer-portal-configure [sap-ha-bc-virtual-env-hyperv-vmware-white-paper]:https://scn.sap.com/docs/DOC-44415
_**Figure 3:** Example of a high-availability SAP DBMS, with SQL Server AlwaysOn
For more information about clustering SQL Server DBMS in Azure by using the Azure Resource Manager deployment model, see these articles:
-* [Configure an AlwaysOn availability group in Azure virtual machines manually by using Resource Manager][virtual-machines-windows-portal-sql-alwayson-availability-groups-manual]
+* [Configure an AlwaysOn availability group in Azure virtual machines manually by using Resource Manager](/azure/azure-sql/virtual-machines/windows/availability-group-overview)
* [Configure an Azure internal load balancer for an AlwaysOn availability group in Azure][virtual-machines-windows-portal-sql-alwayson-int-listener]