Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Json Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/json-transformations.md | In the following example, the claims transformation extracts the following claims: - **active**: true - **birthDate**: 2005-09-23T00:00:00Z ++## GetClaimsFromJsonArrayV2 ++Gets a list of specified elements from a string collection of JSON elements. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-transformation/json#getclaimsfromjsonarrayv2) of this claims transformation. ++| Element | TransformationClaimType | Data Type | Notes | +| - | - | - | - | +| InputClaim | jsonSourceClaim | stringCollection | The string collection claim with the JSON payloads. This claim is used by the claims transformation to get the claims. | +| InputParameter | errorOnMissingClaims | boolean | Specifies whether to throw an error if one of the claims is missing. | +| InputParameter | includeEmptyClaims | boolean | Specifies whether to include empty claims. | +| InputParameter | jsonSourceKeyName | string | Element key name. | +| InputParameter | jsonSourceValueName | string | Element value name. | +| OutputClaim | Collection | string, int, boolean, and datetime | List of claims to extract. The name of the claim should be equal to the one specified in the _jsonSourceClaim_ input claim. | ++### Example of GetClaimsFromJsonArrayV2 ++In the following example, the claims transformation extracts the following claims: email (string), displayName (string), membershipID (int), active (boolean), and birthDate (datetime) from the JSON data. ++```xml +<ClaimsTransformation Id="GetClaimsFromJson" TransformationMethod="GetClaimsFromJsonArrayV2"> + <InputClaims> + <InputClaim ClaimTypeReferenceId="jsonSourceClaim" TransformationClaimType="jsonSource" /> + </InputClaims> + <InputParameters> + <InputParameter Id="errorOnMissingClaims" DataType="boolean" Value="false" /> + <InputParameter Id="includeEmptyClaims" DataType="boolean" Value="false" /> + <InputParameter Id="jsonSourceKeyName" DataType="string" Value="key" /> + <InputParameter Id="jsonSourceValueName" DataType="string" Value="value" /> + </InputParameters> + <OutputClaims> + <OutputClaim ClaimTypeReferenceId="email" /> + <OutputClaim ClaimTypeReferenceId="displayName" /> + <OutputClaim ClaimTypeReferenceId="membershipID" /> + <OutputClaim ClaimTypeReferenceId="active" /> + <OutputClaim ClaimTypeReferenceId="birthDate" /> + </OutputClaims> +</ClaimsTransformation> +``` ++- Input claims: + - **jsonSourceClaim[0]** (string collection first element): + + ```json + { + "key": "email", + "value": "someone@example.com" + } + ``` ++ - **jsonSourceClaim[1]** (string collection second element): ++ ```json + { + "key": "displayName", + "value": "Someone" + } + ``` ++ - **jsonSourceClaim[2]** (string collection third element): + + ```json + { + "key": "membershipID", + "value": 6353399 + } + ``` ++ - **jsonSourceClaim[3]** (string collection fourth element): ++ ```json + { + "key": "active", + "value": true + } + ``` ++ - **jsonSourceClaim[4]** (string collection fifth element): + + ```json + { + "key": "birthDate", + "value": "2005-09-23T00:00:00Z" + } + ``` ++- Input parameters: + - **errorOnMissingClaims**: false + - **includeEmptyClaims**: false + - **jsonSourceKeyName**: key + - **jsonSourceValueName**: value +- Output claims: + - **email**: "someone@example.com" + - **displayName**: "Someone" + - **membershipID**: 6353399 + - **active**: true + - **birthDate**: 2005-09-23T00:00:00Z ++## GetNumericClaimFromJson
Gets a specified numeric (long) element from JSON data. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-transformation/json#getnumericclaimfromjson) of this claims transformation. |
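Following the pattern of the other JSON transformations in this article, here is a hypothetical sketch of what a `GetNumericClaimFromJson` transformation could look like. The parameter name `claimToExtract` and the claim names are assumptions, not the authoritative schema; the live demo linked above shows the exact form:

```xml
<!-- Hypothetical sketch: parameter and claim names are assumptions, not the documented schema. -->
<ClaimsTransformation Id="GetMembershipId" TransformationMethod="GetNumericClaimFromJson">
  <InputClaims>
    <!-- The claim holding the JSON payload to read from -->
    <InputClaim ClaimTypeReferenceId="jsonSourceClaim" TransformationClaimType="inputJson" />
  </InputClaims>
  <InputParameters>
    <!-- Key of the numeric (long) element to extract -->
    <InputParameter Id="claimToExtract" DataType="string" Value="membershipID" />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="membershipID" TransformationClaimType="extractedClaim" />
  </OutputClaims>
</ClaimsTransformation>
```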
active-directory | User Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md | In Azure Active Directory (Azure AD), the term *app provisioning* refers to automatically creating user identities and roles in the applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into SaaS applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and many more. -Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://aka.ms/scimoverview), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](./on-premises-scim-provisioning.md) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](./on-premises-ldap-connector-configure.md) user store or a [SQL](./tutorial-ecma-sql-connector.md) database, Azure AD can support those as well. +Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. Your application must support [SCIM](https://aka.ms/scimoverview). Or, you must build a SCIM gateway to connect to your legacy application. If so, you can use the Azure AD Provisioning agent to [directly connect](./on-premises-scim-provisioning.md) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](./on-premises-ldap-connector-configure.md) user store or a [SQL](./tutorial-ecma-sql-connector.md) database, Azure AD can support these applications as well. App provisioning lets you: |
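For context on what "supports SCIM" means in practice, here is a minimal sketch of the JSON body carried by a SCIM 2.0 create-user request (`POST /Users`). The attribute values are illustrative, and a real provisioning request typically carries additional mapped attributes:

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "someone@contoso.com",
  "name": {
    "givenName": "Some",
    "familyName": "One"
  },
  "emails": [
    { "value": "someone@contoso.com", "type": "work", "primary": true }
  ],
  "active": true
}
```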
active-directory | Concept Authentication Oath Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md | Users may have a combination of up to five OATH hardware tokens or authenticator apps. > [!IMPORTANT] >Make sure to only assign each token to a single user.-In the future, support for the assignment of a single token to multiple users will stop to prevent a security risk. +>In the future, support for the assignment of a single token to multiple users will stop to prevent a security risk. ## Determine OATH token registration type in mysecurityinfo |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/overview.md | When licenses required for Conditional Access expire, policies aren't automatica [Security defaults](../fundamentals/concept-fundamentals-security-defaults.md) help protect against identity-related attacks and are available for all customers. + ## Next steps - [Building a Conditional Access policy piece by piece](concept-conditional-access-policies.md) |
active-directory | Howto Handle Samesite Cookie Changes Chrome Browser | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-handle-samesite-cookie-changes-chrome-browser.md | By default, the `SameSite` value is NOT set in browsers and that's why there are Recent [updates to the standards on SameSite](https://tools.ietf.org/html/draft-west-cookie-incrementalism-00) propose protecting apps by making the default behavior of `SameSite` when no value is set to Lax. This mitigation means cookies will be restricted on HTTP requests except GET made from other sites. Additionally, a value of **None** is introduced to remove restrictions on cookies being sent. These updates will soon be released in an upcoming version of the Chrome browser. -When web apps authenticate with the Microsoft Identity platform using the response mode "form_post", the login server responds to the application using an HTTP POST to send the tokens or auth code. Because this request is a cross-domain request (from `login.microsoftonline.com` to your domain - for instance `https://contoso.com/auth`), cookies that were set by your app now fall under the new rules in Chrome. The cookies that need to be used in cross-site scenarios are cookies that hold the *state* and *nonce* values, that are also sent in the login request. There are other cookies dropped by Azure AD to hold the session. +When web apps authenticate with the Microsoft identity platform using the response mode "form_post", the login server responds to the application using an HTTP POST to send the tokens or auth code. Because this request is a cross-domain request (from `login.microsoftonline.com` to your domain - for instance `https://contoso.com/auth`), cookies that were set by your app now fall under the new rules in Chrome. The cookies that need to be used in cross-site scenarios are cookies that hold the _state_ and _nonce_ values, that are also sent in the login request. There are other cookies dropped by Azure Active Directory (Azure AD) to hold the session. If you don't update your web apps, this new behavior will result in authentication failures. To overcome the authentication failures, web apps authenticating with the Micros Other browsers (see [here](https://www.chromium.org/updates/same-site/incompatible-clients) for a complete list) follow the previous behavior of `SameSite` and won't include the cookies if `SameSite=None` is set. That's why, to support authentication on multiple browsers web apps will have to set the `SameSite` value to `None` only on Chrome and leave the value empty on other browsers. -This approach is demonstrated in our code samples below. +This approach is demonstrated in the following sample code. # [.NET](#tab/dotnet) -The table below presents the pull requests that worked around the SameSite changes in our ASP.NET and ASP.NET Core samples. +The following table presents the pull requests that worked around the SameSite changes in our ASP.NET and ASP.NET Core samples. 
-| Sample | Pull request | -| | | -| [ASP.NET Core web app incremental tutorial](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2) | [Same site cookie fix #261](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/pull/261) | -| [ASP.NET MVC web app sample](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [Same site cookie fix #35](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/pull/35) | -| [active-directory-dotnet-admin-restricted-scopes-v2](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) | [Same site cookie fix #28](https://github.com/Azure-Samples/active-directory-dotnet-admin-restricted-scopes-v2/pull/28) | +| Sample | Pull request | +| -- | -- | +| [ASP.NET Core web app incremental tutorial](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2) | [Same site cookie fix #261](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/pull/261) | +| [ASP.NET MVC web app sample](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | [Same site cookie fix #35](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/pull/35) | +| [active-directory-dotnet-admin-restricted-scopes-v2](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) | [Same site cookie fix #28](https://github.com/Azure-Samples/active-directory-dotnet-admin-restricted-scopes-v2/pull/28) | For details on how to handle SameSite cookies in ASP.NET and ASP.NET Core, see also: # [Python](#tab/python) -| Sample | -| | -| [ms-identity-python-webapp](https://github.com/Azure-Samples/ms-identity-python-webapp) | +| Sample | +| | +| [ms-identity-python-webapp](https://github.com/Azure-Samples/ms-identity-python-webapp) | # [Java](#tab/java) -| Sample | Pull request | -| | | -| [ms-identity-java-webapp](https://github.com/Azure-Samples/ms-identity-java-webapp) | [Same site cookie fix #24](https://github.com/Azure-Samples/ms-identity-java-webapp/pull/24) -| [ms-identity-java-webapi](https://github.com/Azure-Samples/ms-identity-java-webapi) | [Same site cookie fix #4](https://github.com/Azure-Samples/ms-identity-java-webapi/pull/4) +| Sample | Pull request | +| -- | -- | +| [ms-identity-java-webapp](https://github.com/Azure-Samples/ms-identity-java-webapp) | [Same site cookie fix #24](https://github.com/Azure-Samples/ms-identity-java-webapp/pull/24) | +| [ms-identity-java-webapi](https://github.com/Azure-Samples/ms-identity-java-webapi) | [Same site cookie fix #4](https://github.com/Azure-Samples/ms-identity-java-webapi/pull/4) | Learn more about SameSite and the Web app scenario: - [Chromium SameSite page](https://www.chromium.org/updates/same-site) -- [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md)+- [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md) |
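As a sketch of the user-agent detection the article describes — set `SameSite=None` only for browsers that understand it, and omit the attribute for known-incompatible clients — here is a minimal Python/Flask example. The pattern list is deliberately abbreviated and the cookie name is illustrative; the complete list of incompatible clients is on the Chromium page linked above:

```python
import re

from flask import Flask, make_response, request

app = Flask(__name__)

# Abbreviated patterns from the Chromium incompatible-clients list (illustrative).
INCOMPATIBLE_UA_PATTERNS = [
    r"CPU iPhone OS 12",                              # iOS 12 rejects SameSite=None
    r"Macintosh;.*Mac OS X 10_14.*Version/.*Safari",  # Safari on macOS 10.14
    r"Chrome/5[1-9]\.",                               # Chrome 51-59 treat None as Strict
    r"Chrome/6[0-6]\.",                               # Chrome 60-66 treat None as Strict
]


def supports_same_site_none(user_agent: str) -> bool:
    return not any(re.search(p, user_agent) for p in INCOMPATIBLE_UA_PATTERNS)


@app.route("/auth/start")
def auth_start():
    resp = make_response("Redirecting to sign-in...")
    cookie_args = dict(secure=True, httponly=True)
    if supports_same_site_none(request.headers.get("User-Agent", "")):
        # Modern browsers: None is required so the cookie survives the
        # cross-site form_post back from login.microsoftonline.com.
        cookie_args["samesite"] = "None"
    # Legacy browsers: leave SameSite unset so the cookie is still sent.
    resp.set_cookie("auth_state", "opaque-state-value", **cookie_args)
    return resp
```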
active-directory | Scenario Desktop Acquire Token Wam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md | Applications cannot remove accounts from Windows! - Removes app-only (not OS-wide) accounts. >[!NOTE]-> Ony users can remove OS accounts, whereas apps themselves cannot. If an OS account is passed into `RemoveAsync`, and then `GetAccounts` is called with `ListWindowsWorkAndSchoolAccounts` enabled, the same OS accounts will still be returned. +> Only users can remove OS accounts, whereas apps themselves cannot. If an OS account is passed into `RemoveAsync`, and then `GetAccounts` is called with `ListWindowsWorkAndSchoolAccounts` enabled, the same OS accounts will still be returned. ## Other considerations |
active-directory | Scenario Web App Call Api Call Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-call-api.md | public async Task<IActionResult> Profile() > [!NOTE] > You can use the same principle to call any web API. >-> Most Azure web APIs provide an SDK that simplifies calling the API as is the case for Microsoft Graph. See, for instance, [Create a web application that authorizes access to Blob storage with Azure AD](../../storage/common/storage-auth-aad-app.md?tabs=dotnet&toc=%2fazure%2fstorage%2fblobs%2ftoc.json) for an example of a web app using Microsoft.Identity.Web and using the Azure Storage SDK. +> Most Azure web APIs provide an SDK that simplifies calling the API as is the case for Microsoft Graph. # [Java](#tab/java) |
active-directory | V2 Oauth2 Client Creds Grant Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md | -You can use the OAuth 2.0 client credentials grant specified in [RFC 6749](https://tools.ietf.org/html/rfc6749#section-4.4), sometimes called *two-legged OAuth*, to access web-hosted resources by using the identity of an application. This type of grant is commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. These types of applications are often referred to as *daemons* or *service accounts*. +The OAuth 2.0 client credentials grant flow permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. The grant specified in [RFC 6749](https://tools.ietf.org/html/rfc6749#section-4.4), sometimes called *two-legged OAuth*, can be used to access web-hosted resources by using the identity of an application. This type is commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user, and is often referred to as *daemons* or *service accounts*. -This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md). As a side note, refresh tokens will never be granted with this flow as `client_id` and `client_secret` (which would be required to obtain a refresh token) can be used to obtain an access token instead. +In the client credentials flow, permissions are granted directly to the application itself by an administrator. When the app presents a token to a resource, the resource enforces that the app itself has authorization to perform an action since there is no user involved in the authentication. This article covers both the steps needed to: -The OAuth 2.0 client credentials grant flow permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. For a higher level of assurance, the Microsoft identity platform also allows the calling service to authenticate using a [certificate](#second-case-access-token-request-with-a-certificate) or federated credential instead of a shared secret. Because the application's own credentials are being used, these credentials must be kept safe - _never_ publish that credential in your source code, embed it in web pages, or use it in a widely distributed native application. +- [Authorize an application to call an API](#application-permissions) +- [How to get the tokens needed to call that API](#get-a-token). -In the client credentials flow, permissions are granted directly to the application itself by an administrator. When the app presents a token to a resource, the resource enforces that the app itself has authorization to perform an action since there is no user involved in the authentication. This article covers both the steps needed to [authorize an application to call an API](#application-permissions), as well as [how to get the tokens needed to call that API](#get-a-token). 
+This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). You can also refer to the [sample apps that use MSAL](sample-v2-code.md). As a side note, refresh tokens will never be granted with this flow as `client_id` and `client_secret` (which would be required to obtain a refresh token) can be used to obtain an access token instead. ++For a higher level of assurance, the Microsoft identity platform also allows the calling service to authenticate using a [certificate](#second-case-access-token-request-with-a-certificate) or federated credential instead of a shared secret. Because the application's own credentials are being used, these credentials must be kept safe. _Never_ publish that credential in your source code, embed it in web pages, or use it in a widely distributed native application. [!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)] These two methods are the most common in Azure AD and we recommend them for clie A resource provider might enforce an authorization check based on a list of application (client) IDs that it knows and grants a specific level of access to. When the resource receives a token from the Microsoft identity platform, it can decode the token and extract the client's application ID from the `appid` and `iss` claims. Then it compares the application against an access control list (ACL) that it maintains. The ACL's granularity and method might vary substantially between resources. -A common use case is to use an ACL to run tests for a web application or for a web API. The web API might grant only a subset of full permissions to a specific client. To run end-to-end tests on the API, create a test client that acquires tokens from the Microsoft identity platform and then sends them to the API. The API then checks the ACL for the test client's application ID for full access to the API's entire functionality. If you use this kind of ACL, be sure to validate not only the caller's `appid` value but also validate that the `iss` value of the token is trusted. +A common use case is to use an ACL to run tests for a web application or for a web API. The web API might grant only a subset of full permissions to a specific client. To run end-to-end tests on the API, you can create a test client that acquires tokens from the Microsoft identity platform and then sends them to the API. The API then checks the ACL for the test client's application ID for full access to the API's entire functionality. If you use this kind of ACL, be sure to validate not only the caller's `appid` value but also validate that the `iss` value of the token is trusted. This type of authorization is common for daemons and service accounts that need to access data owned by consumer users who have personal Microsoft accounts. For data owned by organizations, we recommend that you get the necessary authorization through application permissions. If you'd like to prevent applications from getting role-less app-only access tok ### Application permissions -Instead of using ACLs, you can use APIs to expose a set of **application permissions**. An application permission is granted to an application by an organization's administrator, and can be used only to access data owned by that organization and its employees. 
For example, Microsoft Graph exposes several application permissions to do the following: +Instead of using ACLs, you can use APIs to expose a set of **application permissions**. These are granted to an application by an organization's administrator, and can be used only to access data owned by that organization and its employees. For example, Microsoft Graph exposes several application permissions to do the following: * Read mail in all mailboxes * Read and write mail in all mailboxes For more information about application permissions, see [Permissions and consent #### Recommended: Sign the admin into your app to have app roles assigned -Typically, when you build an application that uses application permissions, the app requires a page or view on which the admin approves the app's permissions. This page can be part of the app's sign-in flow, part of the app's settings, or it can be a dedicated "connect" flow. In many cases, it makes sense for the app to show this "connect" view only after a user has signed in with a work or school Microsoft account. +Typically, when you build an application that uses application permissions, the app requires a page or view on which the admin approves the app's permissions. This page can be part of the app's sign-in flow, part of the app's settings, or a dedicated *connect* flow. It often makes sense for the app to show this *connect* view only after a user has signed in with a work or school Microsoft account. If you sign the user into your app, you can identify the organization to which the user belongs before you ask the user to approve the application permissions. Although not strictly necessary, it can help you create a more intuitive experience for your users. To sign the user in, follow the [Microsoft identity platform protocol tutorials](active-directory-v2-protocols.md). Pro tip: Try pasting the following request in a browser. https://login.microsoftonline.com/common/adminconsent?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&state=12345&redirect_uri=http://localhost/myapp/permissions ``` -| Parameter | Condition | Description | -| | | | -| `tenant` | Required | The directory tenant that you want to request permission from. This can be in GUID or friendly name format. If you don't know which tenant the user belongs to and you want to let them sign in with any tenant, use `common`. | -| `client_id` | Required | The **Application (client) ID** that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. | -| `redirect_uri` | Required | The redirect URI where you want the response to be sent for your app to handle. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL-encoded, and it can have additional path segments. | -| `state` | Recommended | A value that's included in the request that's also returned in the token response. It can be a string of any content that you want. The state is used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | +| Parameter | Condition | Description | +| -- | -- | -- | +| `tenant` | Required | The directory tenant that you want to request permission from. This can be in GUID or friendly name format. If you don't know which tenant the user belongs to and you want to let them sign in with any tenant, use `common`. 
| +| `client_id` | Required | The **Application (client) ID** that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. | +| `redirect_uri` | Required | The redirect URI where you want the response to be sent for your app to handle. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL-encoded, and it can have additional path segments. | +| `state` | Recommended | A value that's included in the request that's also returned in the token response. It can be a string of any content that you want. The state is used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | -At this point, Azure AD enforces that only a tenant administrator can sign into complete the request. The administrator will be asked to approve all the direct application permissions that you have requested for your app in the app registration portal. +At this point, Azure AD enforces that only a tenant administrator can sign in to complete the request. The administrator will be asked to approve all the direct application permissions that you have requested for your app in the app registration portal. ##### Successful response If the admin approves the permissions for your application, the successful respo GET http://localhost/myapp/permissions?tenant=a8990e1f-ff32-408a-9f8e-78d3b9139b95&state=12345&admin_consent=True ``` -| Parameter | Description | -| | | -| `tenant` | The directory tenant that granted your application the permissions that it requested, in GUID format. | -| `state` | A value that is included in the request that also is returned in the token response. It can be a string of any content that you want. The state is used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | +| Parameter | Description | +| | -- | +| `tenant` | The directory tenant that granted your application the permissions that it requested, in GUID format. | +| `state` | A value that is included in the request that also is returned in the token response. It can be a string of any content that you want. The state is used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | | `admin_consent` | Set to **True**. | ##### Error response If the admin does not approve the permissions for your application, the failed r GET http://localhost/myapp/permissions?error=permission_denied&error_description=The+admin+canceled+the+request ``` -| Parameter | Description | -| | | -| `error` | An error code string that you can use to classify types of errors, and which you can use to react to errors. | +| Parameter | Description | +| - | -- | +| `error` | An error code string that you can use to classify types of errors, and which you can use to react to errors. | | `error_description` | A specific error message that can help you identify the root cause of an error. | After you've received a successful response from the app provisioning endpoint, your app has gained the direct application permissions that it requested. Now you can request a token for the resource that you want. ## Get a token -After you've acquired the necessary authorization for your application, proceed with acquiring access tokens for APIs. 
To get a token by using the client credentials grant, send a POST request to the `/token` Microsoft identity platform: +After you've acquired the necessary authorization for your application, proceed with acquiring access tokens for APIs. To get a token by using the client credentials grant, send a POST request to the Microsoft identity platform `/token` endpoint. There are a few different cases: ++- [Access token request with a shared secret](#first-case-access-token-request-with-a-shared-secret) +- [Access token request with a certificate](#second-case-access-token-request-with-a-certificate) +- [Access token request with a federated credential](#third-case-access-token-request-with-a-federated-credential) ### First case: Access token request with a shared secret client_id=535fb089-9ff3-47b6-9bfb-4f1264799865 &grant_type=client_credentials ``` -```Bash +```bash # Replace {tenant} with your tenant! curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d 'client_id=535fb089-9ff3-47b6-9bfb-4f1264799865&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default&client_secret=qWgdYAmab0YSkuL1qKv5bPX&grant_type=client_credentials' 'https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token' ``` -| Parameter | Condition | Description | -| | | | -| `tenant` | Required | The directory tenant the application plans to operate against, in GUID or domain-name format. | -| `client_id` | Required | The application ID that's assigned to your app. You can find this information in the portal where you registered your app. | -| `scope` | Required | The value passed for the `scope` parameter in this request should be the resource identifier (application ID URI) of the resource you want, affixed with the `.default` suffix. For the Microsoft Graph example, the value is `https://graph.microsoft.com/.default`. <br/>This value tells the Microsoft identity platform that of all the direct application permissions you have configured for your app, the endpoint should issue a token for the ones associated with the resource you want to use. To learn more about the `/.default` scope, see the [consent documentation](v2-permissions-and-consent.md#the-default-scope). | -| `client_secret` | Required | The client secret that you generated for your app in the app registration portal. The client secret must be URL-encoded before being sent. The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. | -| `grant_type` | Required | Must be set to `client_credentials`. | +| Parameter | Condition | Description | +| | | -- | +| `tenant` | Required | The directory tenant the application plans to operate against, in GUID or domain-name format. | +| `client_id` | Required | The application ID that's assigned to your app. You can find this information in the portal where you registered your app. | +| `scope` | Required | The value passed for the `scope` parameter in this request should be the resource identifier (application ID URI) of the resource you want, affixed with the `.default` suffix. For the Microsoft Graph example, the value is `https://graph.microsoft.com/.default`. <br/>This value tells the Microsoft identity platform that of all the direct application permissions you have configured for your app, the endpoint should issue a token for the ones associated with the resource you want to use. 
To learn more about the `/.default` scope, see the [consent documentation](v2-permissions-and-consent.md#the-default-scope). | +| `client_secret` | Required | The client secret that you generated for your app in the app registration portal. The client secret must be URL-encoded before being sent. The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. | +| `grant_type` | Required | Must be set to `client_credentials`. | ### Second case: Access token request with a certificate scope=https%3A%2F%2Fgraph.microsoft.com%2F.default &grant_type=client_credentials ``` -| Parameter | Condition | Description | -| | | | -| `tenant` | Required | The directory tenant the application plans to operate against, in GUID or domain-name format. | -| `client_id` | Required |The application (client) ID that's assigned to your app. | -| `scope` | Required | The value passed for the `scope` parameter in this request should be the resource identifier (application ID URI) of the resource you want, affixed with the `.default` suffix. For the Microsoft Graph example, the value is `https://graph.microsoft.com/.default`. <br/>This value informs the Microsoft identity platform that of all the direct application permissions you have configured for your app, it should issue a token for the ones associated with the resource you want to use. To learn more about the `/.default` scope, see the [consent documentation](v2-permissions-and-consent.md#the-default-scope). | -| `client_assertion_type` | Required | The value must be set to `urn:ietf:params:oauth:client-assertion-type:jwt-bearer`. | -| `client_assertion` | Required | An assertion (a JSON web token) that you need to create and sign with the certificate you registered as credentials for your application. Read about [certificate credentials](active-directory-certificate-credentials.md) to learn how to register your certificate and the format of the assertion.| -| `grant_type` | Required | Must be set to `client_credentials`. | +| Parameter | Condition | Description | +| -- | | -- | +| `tenant` | Required | The directory tenant the application plans to operate against, in GUID or domain-name format. | +| `client_id` | Required | The application (client) ID that's assigned to your app. | +| `scope` | Required | The value passed for the `scope` parameter in this request should be the resource identifier (application ID URI) of the resource you want, affixed with the `.default` suffix. For the Microsoft Graph example, the value is `https://graph.microsoft.com/.default`. <br/>This value informs the Microsoft identity platform that of all the direct application permissions you have configured for your app, it should issue a token for the ones associated with the resource you want to use. To learn more about the `/.default` scope, see the [consent documentation](v2-permissions-and-consent.md#the-default-scope). | +| `client_assertion_type` | Required | The value must be set to `urn:ietf:params:oauth:client-assertion-type:jwt-bearer`. | +| `client_assertion` | Required | An assertion (a JSON web token) that you need to create and sign with the certificate you registered as credentials for your application. Read about [certificate credentials](active-directory-certificate-credentials.md) to learn how to register your certificate and the format of the assertion.| +| `grant_type` | Required | Must be set to `client_credentials`. 
| The parameters for the certificate-based request differ in only one way from the shared secret-based request: the `client_secret` parameter is replaced by the `client_assertion_type` and `client_assertion` parameters. scope=https%3A%2F%2Fgraph.microsoft.com%2F.default &grant_type=client_credentials ``` -| Parameter | Condition | Description | -| | | | -| `client_assertion` | Required | An assertion (a JWT, or JSON web token) that your application gets from another identity provider outside of Microsoft identity platform, like Kubernetes. The specifics of this JWT must be registered on your application as a [federated identity credential](workload-identity-federation-create-trust.md). Read about [workload identity federation](workload-identity-federation.md) to learn how to setup and use assertions generated from other identity providers.| +| Parameter | Condition | Description | +| | | -- | +| `client_assertion` | Required | An assertion (a JWT, or JSON web token) that your application gets from another identity provider outside of Microsoft identity platform, like Kubernetes. The specifics of this JWT must be registered on your application as a [federated identity credential](workload-identity-federation-create-trust.md). Read about [workload identity federation](workload-identity-federation.md) to learn how to set up and use assertions generated from other identity providers.| -Everything in the request is the same as the certificate-based flow above, with one crucial exception - the source of the `client_assertion`. In this flow, your application does not create the JWT assertion itself. Instead, your app uses a JWT created by another identity provider. This is called "[workload identity federation](workload-identity-federation.md)", where your apps identity in another identity platform is used to acquire tokens inside the Microsoft identity platform. This is best suited for cross-cloud scenarios, such as hosting your compute outside Azure but accessing APIs protected by Microsoft identity platform. For information about the required format of JWTs created by other identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format). +Everything in the request is the same as the certificate-based flow, with the crucial exception of the source of the `client_assertion`. In this flow, your application does not create the JWT assertion itself. Instead, your app uses a JWT created by another identity provider. This is called *[workload identity federation](workload-identity-federation.md)*, where your app's identity in another identity platform is used to acquire tokens inside the Microsoft identity platform. This is best suited for cross-cloud scenarios, such as hosting your compute outside Azure but accessing APIs protected by Microsoft identity platform. For information about the required format of JWTs created by other identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format). ### Successful response A successful response from any method looks like this: } ``` -| Parameter | Description | -| | | +| Parameter | Description | +| -- | -- | | `access_token` | The requested access token. The app can use this token to authenticate to the secured resource, such as to a web API. |-| `token_type` | Indicates the token type value. The only type that the Microsoft identity platform supports is `bearer`. | -| `expires_in` | The amount of time that an access token is valid (in seconds). 
| +| `token_type` | Indicates the token type value. The only type that the Microsoft identity platform supports is `bearer`. | +| `expires_in` | The amount of time that an access token is valid (in seconds). | [!INCLUDE [remind-not-to-validate-access-tokens](includes/remind-not-to-validate-access-tokens.md)] An error response (400 Bad Request) looks like this: "error_codes": [ 70011 ],- "timestamp": "2016-01-09 02:02:12Z", + "timestamp": "YYYY-MM-DD HH:MM:SSZ", "trace_id": "255d1aef-8c98-452f-ac51-23d051240864", "correlation_id": "fb3d2015-bc17-4bb9-bb85-30c5cf1aaaa7" } ``` -| Parameter | Description | -| | | -| `error` | An error code string that you can use to classify types of errors that occur, and to react to errors. | +| Parameter | Description | +| - | -- | +| `error` | An error code string that you can use to classify types of errors that occur, and to react to errors. | | `error_description` | A specific error message that might help you identify the root cause of an authentication error. |-| `error_codes` | A list of STS-specific error codes that might help with diagnostics. | -| `timestamp` | The time when the error occurred. | -| `trace_id` | A unique identifier for the request to help with diagnostics. | -| `correlation_id` | A unique identifier for the request to help with diagnostics across components. | +| `error_codes` | A list of STS-specific error codes that might help with diagnostics. | +| `timestamp` | The time when the error occurred. | +| `trace_id` | A unique identifier for the request to help with diagnostics. | +| `correlation_id` | A unique identifier for the request to help with diagnostics across components. | ## Use a token GET /v1.0/me/messages Host: https://graph.microsoft.com Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q... ```+Try the following command in your terminal, making sure to replace the token with your own. ```bash-# Pro tip: Try the following command! (Replace the token with your own.) - curl -X GET -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbG...." 'https://graph.microsoft.com/v1.0/me/messages' ``` Read the [client credentials overview documentation](https://aka.ms/msal-net-cli | Sample | Platform |Description | |--|-||-|[active-directory-dotnetcore-daemon-v2](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) | .NET Core 2.1 Console | A simple .NET Core application that displays the users of a tenant querying the Microsoft Graph using the identity of the application, instead of on behalf of a user. The sample also illustrates the variation using certificates for authentication. | +|[active-directory-dotnetcore-daemon-v2](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) | .NET Core 6.0+ | A .NET Core application that displays the users of a tenant querying the Microsoft Graph using the identity of the application, instead of on behalf of a user. The sample also illustrates the variation using certificates for authentication. | |[active-directory-dotnet-daemon-v2](https://github.com/Azure-Samples/active-directory-dotnet-daemon-v2)| ASP.NET MVC | A web application that syncs data from the Microsoft Graph using the identity of the application, instead of on behalf of a user. 
|-|[ms-identity-javascript-nodejs-console](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console)| Node.js Console | A simple Node.js application that displays the users of a tenant by querying the Microsoft Graph using the identity of the application | +|[ms-identity-javascript-nodejs-console](https://github.com/Azure-Samples/ms-identity-javascript-nodejs-console)| Node.js Console | A Node.js application that displays the users of a tenant by querying the Microsoft Graph using the identity of the application | |
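Since the article steers you toward MSAL rather than hand-rolling the protocol, here is a minimal client-credentials sketch using MSAL for Python. The tenant ID, client ID, and secret are placeholders, and a real app should load the secret from a secret store:

```python
# pip install msal
import msal

TENANT_ID = "your-tenant-id"          # placeholder
CLIENT_ID = "your-client-id"          # placeholder
CLIENT_SECRET = "your-client-secret"  # placeholder; never commit to source control

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# MSAL builds the /token request, caches the result, and refreshes near expiry.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    print(f"Token acquired; expires in {result['expires_in']} seconds")
else:
    print(f"Error: {result.get('error')}: {result.get('error_description')}")
```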
active-directory | B2b Tutorial Require Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md | If you don't have an Azure subscription, create a [free account](https://azure To complete the scenario in this tutorial, you need: -- **Access to Azure AD Premium edition**, which includes Conditional Access policy capabilities. To enforce MFA, you need to create an Azure AD Conditional Access policy. MFA policies are always enforced at your organization, regardless of whether the partner has MFA capabilities.+- **Access to [Azure AD Premium edition](/security/business/identity-access/azure-active-directory-pricing)**, which includes Conditional Access policy capabilities. To enforce MFA, you need to create an Azure AD Conditional Access policy. MFA policies are always enforced at your organization, regardless of whether the partner has MFA capabilities. - **A valid external email account** that you can add to your tenant directory as a guest user and use to sign in. If you don't know how to create a guest account, see [Add a B2B guest user in the Azure portal](add-users-administrator.md). ## Create a test guest user in Azure AD |
active-directory | Current Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/current-limitations.md | +# Customer intent: As a tenant administrator, I want to know about the current limitations for Azure AD B2B collaboration. # Limitations of Azure AD B2B collaboration Azure Active Directory (Azure AD) B2B collaboration is currently subject to the With Azure AD B2B, you can enforce multi-factor authentication at the resource organization (the inviting organization). The reasons for this approach are detailed in [Conditional Access for B2B collaboration users](authentication-conditional-access.md). If a partner already has multi-factor authentication set up and enforced, their users might have to perform the authentication once in their home organization and then again in yours. ## Instant-on-In the B2B collaboration flows, we add users to the directory and dynamically update them during invitation redemption, app assignment, and so on. The updates and writes ordinarily happen in one directory instance and must be replicated across all instances. Replication is completed once all instances are updated. Sometimes when the object is written or updated in one instance and the call to retrieve this object is to another instance, replication latencies can occur. If that happens, refresh or retry to help. If you are writing an app using our API, then retries with some back-off is a good, defensive practice to alleviate this issue. +In the B2B collaboration flows, we add users to the directory and dynamically update them during invitation redemption, app assignment, and so on. The updates and writes ordinarily happen in one directory instance and must be replicated across all instances. Replication is completed once all instances are updated. Sometimes when the object is written or updated in one instance and the call to retrieve this object is to another instance, replication latencies can occur. If that happens, refresh or retry to help. If you're writing an app using our API, then retries with some back-off is a good, defensive practice to alleviate this issue. ## Azure AD directories Azure AD B2B is subject to Azure AD service directory limits. For details about the number of directories a user can create and the number of directories to which a user or guest user can belong, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md). |
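The replication-latency advice above ("retries with some back-off is a good, defensive practice") might look like the following generic Python sketch; which status codes to treat as transient is a judgment call for your app:

```python
import random
import time

import requests


def get_with_backoff(url: str, headers: dict, max_retries: int = 5) -> requests.Response:
    """Retry reads that may hit directory replication latency."""
    resp = None
    for attempt in range(max_retries):
        resp = requests.get(url, headers=headers, timeout=30)
        # A 404 shortly after invitation redemption can be replication latency;
        # 429 and 5xx are ordinary transient failures.
        if resp.status_code not in (404, 429, 500, 502, 503, 504):
            return resp
        time.sleep(2 ** attempt + random.uniform(0, 1))  # exponential back-off with jitter
    return resp
```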
active-directory | Concept Azure Ad Connect Sync Declarative Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-declarative-provisioning.md | In the attribute flows there is a setting to determine if multi-valued attribute  -There is also **Merge** and **MergeCaseInsensitive**. These options allow you to merge values from different sources. For example, it can be used to merge the member or proxyAddresses attribute from several different forests. When you use this option, all sync rules in scope for an object must use the same merge type. You cannot define **Update** from one Connector and **Merge** from another. If you try, you receive an error. +There is also **Merge** and **MergeCaseInsensitive**. These options allow you to merge values from different sources. For example, it can be used to merge the proxyAddresses attribute from several different forests. When you use this option, all sync rules in scope for an object must use the same merge type. You cannot define **Update** from one Connector and **Merge** from another. If you try, you receive an error. The difference between **Merge** and **MergeCaseInsensitive** is how to process duplicate attribute values. The sync engine makes sure duplicate values are not inserted into the target attribute. With **MergeCaseInsensitive**, duplicate values with only a difference in case are not going to be present. For example, you should not see both "SMTP:bob@contoso.com" and "smtp:bob@contoso.com" in the target attribute. **Merge** is only looking at the exact values and multiple values where there only is a difference in case might be present. In *Out to AD - User Exchange hybrid* the following flow can be found: This expression should be read as: if the user mailbox is located in Azure AD, then flow the attribute from Azure AD to AD. If not, do not flow anything back to Active Directory. In this case, it would keep the existing value in AD. ### ImportedValue-The function ImportedValue is different than all other functions since the attribute name must be enclosed in quotes rather than square brackets: ++The function ImportedValue is different than all other functions since the attribute name must be enclosed in quotes rather than square brackets: + `ImportedValue("proxyAddresses")`. -Usually during synchronization an attribute uses the expected value, even if it hasn't been exported yet or an error was received during export ("top of the tower"). An inbound synchronization assumes that an attribute that hasn't yet reached a connected directory eventually reaches it. In some cases, it is important to only synchronize a value that has been confirmed by the connected directory ("hologram and delta import tower"). +Inbound synchronization has a concept of assuming that an attribute that hasn't yet reached a connected directory will eventually reach it at some point so, normally, synchronization gets an attribute value from the respective connector space, even if it hasn't been yet exported or an error occurred during export. +In some cases, however, it is important to only synchronize a value that has been exported and confirmed during import from the connected directory. This function can be found in multiple "In From AD/AAD" out-of-box transformation rules where the attribute should only be synchronized when it has been confirmed that the value was exported successfully. ++An example of this function can be found in the out-of-box Synchronization Rule *In from AD – User Common from Exchange*, for ProxyAddresses attribute flow with Hybrid Exchange. E.g., when a user's ProxyAddresses is added, the ImportedValue function will only return the new value after it has been confirmed from the following import step: -An example of this function can be found in the out-of-box Synchronization Rule *In from AD – User Common from Exchange*. In Hybrid Exchange, the value added by Exchange online should only be synchronized when it has been confirmed that the value was exported successfully: `proxyAddresses` <- `RemoveDuplicates(Trim(ImportedValue("proxyAddresses")))` +This function is required when the target directory might change or discard an exported attribute value silently, and we want the synchronization to only process confirmed attribute values. + ## Precedence When several sync rules try to contribute the same attribute value to the target, the precedence value is used to determine the winner. The rule with highest precedence, lowest numeric value, is going to contribute the attribute in a conflict. This ordering can be used to define more precise attribute flows for a small sub 
++An example of this function can be found in the out-of-box Synchronization Rule *In from AD ΓÇô User Common from Exchange*, for ProxyAddresses attribute flow with Hybrid Exchange. E.g., when a userΓÇÖs ProxyAddresses is added, the ImportedValue function will only return the new value after it has been confirmed from the following import step: -An example of this function can be found in the out-of-box Synchronization Rule *In from AD ΓÇô User Common from Exchange*. In Hybrid Exchange, the value added by Exchange online should only be synchronized when it has been confirmed that the value was exported successfully: `proxyAddresses` <- `RemoveDuplicates(Trim(ImportedValue("proxyAddresses")))` +This function is required when the target directory might change or discard an exported attribute value silently, and we want the synchronization to only process confirmed attribute values. + ## Precedence When several sync rules try to contribute the same attribute value to the target, the precedence value is used to determine the winner. The rule with highest precedence, lowest numeric value, is going to contribute the attribute in a conflict. This ordering can be used to define more precise attribute flows for a small sub Precedence can be defined between Connectors. That allows Connectors with better data to contribute values first. ### Multiple objects from the same connector space-If you have several objects in the same connector space joined to the same metaverse object, precedence must be adjusted. If several objects are in scope of the same sync rule, then the sync engine is not able to determine precedence. It is ambiguous which source object should contribute the value to the metaverse. This configuration is reported as ambiguous even if the attributes in the source have the same value. - +It is not possible to have several objects in the same connector space joined to the same metaverse object. This configuration is reported as ambiguous even if the attributes in the source have the same value. -For this scenario, you need to change the scope of the sync rules so the source objects have different sync rules in scope. That allows you to define different precedence. - + ## Next steps * Read more about the expression language in [Understanding Declarative Provisioning Expressions](concept-azure-ad-connect-sync-declarative-provisioning-expressions.md). |
active-directory | How To Connect Group Writeback V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md | These limitations and known issues are specific to group writeback: - Nested cloud groups that are members of writeback enabled groups must also be enabled for writeback to remain nested in AD. - Group Writeback setting to manage new security group writeback at scale is not yet available. You will need to configure writeback for each group.  - If you have a nested group like this, you'll see an export error in Azure AD Connect with the message "A universal group cannot have a local group as a member." The resolution is to remove the member with the **Domain local** scope from the Azure AD group, or update the nested group member scope in Active Directory to **Global** or **Universal**. -- Group writeback supports writing back groups to only a single organizational unit (OU). After the feature is enabled, you can't change the OU that you selected. A workaround is to disable group writeback entirely in Azure AD Connect and then select a different OU when you re-enable the feature.  -- Nested cloud groups that are members of writeback-enabled groups must also be enabled for writeback to remain nested in Active Directory. -- A group writeback setting to manage new security group writeback at scale is not yet available. You need to configure writeback for each group.  - ## Next steps - [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md) |
active-directory | How To Connect Health Agent Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-agent-install.md | Title: Install the Connect Health agents in Azure Active Directory -description: This Azure AD Connect Health article describes agent installation for Active Directory Federation Services (AD FS) and for Sync. + Title: Install the Azure AD Connect Health agents in Azure Active Directory +description: Learn how to install the Azure AD Connect Health agents for Active Directory Federation Services (AD FS) and for sync. -# Azure AD Connect Health agent installation +# Install the Azure AD Connect Health agents -In this article, you'll learn how to install and configure the Azure Active Directory (Azure AD) Connect Health agents. To download the agents, see [these instructions](how-to-connect-install-roadmap.md#download-and-install-azure-ad-connect-health-agent). +In this article, you learn how to install and configure the Azure AD Connect Health agents. ++Learn how to [download the agents](how-to-connect-install-roadmap.md#download-and-install-azure-ad-connect-health-agent). > [!NOTE]-> Azure AD Connect Health is not available in the China sovereign cloud +> Azure AD Connect Health is not available in the China sovereign cloud. ## Requirements -The following table lists requirements for using Azure AD Connect Health. +The following table lists requirements for using Azure AD Connect Health: | Requirement | Description | | | |-| There is an Azure AD Premium (P1 or P2) Subsciption. |Azure AD Connect Health is a feature of Azure AD Premium (P1 or P2). For more information, see [Sign up for Azure AD Premium](../fundamentals/active-directory-get-started-premium.md). <br /><br />To start a free 30-day trial, see [Start a trial](https://azure.microsoft.com/trial/get-started-active-directory/). | -| You're a Hybrid Identity Administrator in Azure AD. |By default, only Hybrid Identity Administrators or global administrators can install and configure the health agents, access the portal, and do any operations within Azure AD Connect Health. For more information, see [Administering your Azure AD directory](../fundamentals/active-directory-whatis.md). <br /><br /> By using Azure role-based access control (Azure RBAC), you can allow other users in your organization to access Azure AD Connect Health. For more information, see [Azure RBAC for Azure AD Connect Health](how-to-connect-health-operations.md#manage-access-with-azure-rbac). <br /><br />**Important**: Use a work or school account to install the agents. You can't use a Microsoft account. For more information, see [Sign up for Azure as an organization](../fundamentals/sign-up-organization.md). | -| The Azure AD Connect Health agent is installed on each targeted server. | Health agents must be installed and configured on targeted servers so that they can receive data and provide monitoring and analytics capabilities. <br /><br />For example, to get data from your Active Directory Federation Services (AD FS) infrastructure, you must install the agent on the AD FS server and the Web Application Proxy server. Similarly, to get data from your on-premises Azure AD Domain Services (Azure AD DS) infrastructure, you must install the agent on the domain controllers. | -| The Azure service endpoints have outbound connectivity. | During installation and runtime, the agent requires connectivity to Azure AD Connect Health service endpoints. 
If firewalls block outbound connectivity, add the [outbound connectivity endpoints](how-to-connect-health-agent-install.md#outbound-connectivity-to-the-azure-service-endpoints) to the allow list. | +| You have an Azure Active Directory (Azure AD) Premium (P1 or P2) Subscription. |Azure AD Connect Health is a feature of Azure AD Premium (P1 or P2). For more information, see [Sign up for Azure AD Premium](../fundamentals/active-directory-get-started-premium.md). <br /><br />To start a free 30-day trial, see [Start a trial](https://azure.microsoft.com/trial/get-started-active-directory/). | +| You're a hybrid identity administrator in Azure AD. |By default, only Hybrid Identity Administrator and Global Administrator accounts can install and configure health agents, access the portal, and do any operations within Azure AD Connect Health. For more information, see [Administering your Azure AD directory](../fundamentals/active-directory-whatis.md). <br /><br /> By using Azure role-based access control (Azure RBAC), you can allow other users in your organization to access Azure AD Connect Health. For more information, see [Azure RBAC for Azure AD Connect Health](how-to-connect-health-operations.md#manage-access-with-azure-rbac). <br /><br />**Important**: Use a work or school account to install the agents. You can't use a Microsoft account to install the agents. For more information, see [Sign up for Azure as an organization](../fundamentals/sign-up-organization.md). | +| The Azure AD Connect Health agent is installed on each targeted server. | Health agents must be installed and configured on targeted servers so that they can receive data and provide monitoring and analytics capabilities. <br /><br />For example, to get data from your Active Directory Federation Services (AD FS) infrastructure, you must install the agent on the AD FS server and on the Web Application Proxy server. Similarly, to get data from your on-premises Azure Active Directory Domain Services (Azure AD DS) infrastructure, you must install the agent on the domain controllers. | +| The Azure service endpoints have outbound connectivity. | During installation and runtime, the agent requires connectivity to Azure AD Connect Health service endpoints. If firewalls block outbound connectivity, add the [outbound connectivity endpoints](how-to-connect-health-agent-install.md#outbound-connectivity-to-azure-service-endpoints) to an allowlist. | |Outbound connectivity is based on IP addresses. | For information about firewall filtering based on IP addresses, see [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=56519).| | TLS inspection for outbound traffic is filtered or disabled. | The agent registration step or data upload operations might fail if there's TLS inspection or termination for outbound traffic at the network layer. For more information, see [Set up TLS inspection](/previous-versions/tn-archive/ee796230(v=technet.10)). |-| Firewall ports on the server are running the agent. |The agent requires the following firewall ports to be open so that it can communicate with the Azure AD Connect Health service endpoints: <br /><li>TCP port 443</li><li>TCP port 5671</li> <br />The latest version of the agent doesn't require port 5671. Upgrade to the latest version so that only port 443 is required. For more information, see [Hybrid identity required ports and protocols](./reference-connect-ports.md). | -| If Internet Explorer enhanced security is enabled, allow specified websites. 
|If Internet Explorer enhanced security is enabled, then allow the following websites on the server where you install the agent:<br /><li>https:\//login.microsoftonline.com</li><li>https:\//secure.aadcdn.microsoftonline-p.com</li><li>https:\//login.windows.net</li><li>https:\//aadcdn.msftauth.net</li><li>The federation server for your organization that's trusted by Azure AD (for example, https:\//sts.contoso.com)</li> <br />For more information, see [How to configure Internet Explorer](https://support.microsoft.com/help/815141/internet-explorer-enhanced-security-configuration-changes-the-browsing). If you have a proxy in your network, then see the note that appears at the end of this table.| -| PowerShell version 5.0 or newer is installed. | Windows Server 2016 includes PowerShell version 5.0. -+| Firewall ports on the server are running the agent. |The agent requires the following firewall ports to be open so that it can communicate with the Azure AD Connect Health service endpoints: <br />- TCP port 443 <br />- TCP port 5671 <br /><br />The latest version of the agent doesn't require port 5671. Upgrade to the latest version so that only port 443 is required. For more information, see [Hybrid identity required ports and protocols](./reference-connect-ports.md). | +| If Internet Explorer enhanced security is enabled, allow specified websites. |If Internet Explorer enhanced security is enabled, allow the following websites on the server where you install the agent:<br />- `https://login.microsoftonline.com` <br />- `https://secure.aadcdn.microsoftonline-p.com` <br />- `https://login.windows.net` <br />- `https://aadcdn.msftauth.net` <br />- The federation server for your organization that's trusted by Azure AD (for example, `https://sts.contoso.com`). <br /><br />For more information, see [How to configure Internet Explorer](https://support.microsoft.com/help/815141/internet-explorer-enhanced-security-configuration-changes-the-browsing). If you have a proxy in your network, see the note that appears at the end of this table.| +| PowerShell version 5.0 or later is installed. | Windows Server 2016 includes PowerShell version 5.0. | > [!IMPORTANT] > Windows Server Core doesn't support installing the Azure AD Connect Health agent. > [!NOTE]-> If you have a highly locked-down and restricted environment, you need to add more URLs than the ones the table lists for Internet Explorer enhanced security. Also add URLs that are listed in the table in the next section. +> If you have a highly locked-down and restricted environment, you need to add more URLs than the URLs the table lists for Internet Explorer enhanced security. Also add URLs that are listed in the table in the next section. ++### New versions of the agent and auto upgrade ++If a new version of the health agent is released, any existing, installed agents are automatically updated. -### New versions of the agent and Auto upgrade -If a new version of the Health agent is released, any existing installed agents are automatically updated. +<a name="outbound-connectivity-to-the-azure-service-endpoints"></a> -### Outbound connectivity to the Azure service endpoints +### Outbound connectivity to Azure service endpoints -During installation and runtime, the agent needs connectivity to Azure AD Connect Health service endpoints. If firewalls block outbound connectivity, make sure that the URLs in the following table aren't blocked by default. +During installation and runtime, the agent needs connectivity to Azure AD Connect Health service endpoints. 
If firewalls block outbound connectivity, make sure that the URLs in the following table aren't blocked by default. -Don't disable security monitoring or inspection of these URLs. Instead, allow them as you would allow other internet traffic. +Don't disable security monitoring or inspection of these URLs. Instead, allow them as you would allow other internet traffic. -These URLs allow communication with Azure AD Connect Health service endpoints. Later in this article, you'll learn how to [check outbound connectivity](#test-connectivity-to-azure-ad-connect-health-service) by using `Test-AzureADConnectHealthConnectivity`. +These URLs allow communication with Azure AD Connect Health service endpoints. Later in this article, you'll learn how to [check outbound connectivity](#test-connectivity-to-the-azure-ad-connect-health-service) by using `Test-AzureADConnectHealthConnectivity`. | Domain environment | Required Azure service endpoints | | | |-| General public | <li>*.blob.core.windows.net </li><li>*.aadconnecthealth.azure.com </li><li>**.servicebus.windows.net - Port: 5671 (If 5671 is blocked, the agent falls back to 443, but using 5671 is recommended. This endpoint isn't required in the latest version of the agent.)</li><li>*.adhybridhealth.azure.com/</li><li>https:\//management.azure.com </li><li>https:\//policykeyservice.dc.ad.msft.net/</li><li>https:\//login.windows.net</li><li>https:\//login.microsoftonline.com</li><li>https:\//secure.aadcdn.microsoftonline-p.com </li><li>https:\//www.office.com (This endpoint is used only for discovery purposes during registration.)</li> <li>https://aadcdn.msftauth.net</li><li>https://aadcdn.msauth.net</li> | -| Azure Germany | <li>*.blob.core.cloudapi.de </li><li>*.servicebus.cloudapi.de </li> <li>*.aadconnecthealth.microsoftazure.de </li><li>https:\//management.microsoftazure.de </li><li>https:\//policykeyservice.aadcdi.microsoftazure.de </li><li>https:\//login.microsoftonline.de </li><li>https:\//secure.aadcdn.microsoftonline-p.de </li><li>https:\//www.office.de (This endpoint is used only for discovery purposes during registration.)</li> <li>https://aadcdn.msftauth.net</li><li>https://aadcdn.msauth.net</li> | -| Azure Government | <li>*.blob.core.usgovcloudapi.net </li> <li>*.servicebus.usgovcloudapi.net </li> <li>*.aadconnecthealth.microsoftazure.us </li> <li>https:\//management.usgovcloudapi.net </li><li>https:\//policykeyservice.aadcdi.azure.us </li><li>https:\//login.microsoftonline.us </li><li>https:\//secure.aadcdn.microsoftonline-p.com </li><li>https:\//www.office.com (This endpoint is used only for discovery purposes during registration.)</li> <li>https://aadcdn.msftauth.net</li><li>https://aadcdn.msauth.net</li> | +| General public | - `*.blob.core.windows.net` <br />- `*.aadconnecthealth.azure.com` <br />- `**.servicebus.windows.net` - Port: 5671 (If 5671 is blocked, the agent falls back to 443, but we recommend that you use port 5671. 
This endpoint isn't required in the latest version of the agent.)<br />- `*.adhybridhealth.azure.com/`<br />- `https://management.azure.com` <br />- `https://policykeyservice.dc.ad.msft.net/` <br />- `https://login.windows.net` <br />- `https://login.microsoftonline.com` <br />- `https://secure.aadcdn.microsoftonline-p.com` <br />- `https://www.office.com` (This endpoint is used only for discovery purposes during registration.)<br />- `https://aadcdn.msftauth.net` <br />- `https://aadcdn.msauth.net` | +| Azure Germany | - `*.blob.core.cloudapi.de` <br />- `*.servicebus.cloudapi.de` <br />- `*.aadconnecthealth.microsoftazure.de` <br />- `https://management.microsoftazure.de` <br />- `https://policykeyservice.aadcdi.microsoftazure.de` <br />- `https://login.microsoftonline.de` <br />- `https://secure.aadcdn.microsoftonline-p.de` <br />- `https://www.office.de` (This endpoint is used only for discovery purposes during registration.)<br />- `https://aadcdn.msftauth.net` <br />- `https://aadcdn.msauth.net` | +| Azure Government | - `*.blob.core.usgovcloudapi.net` <br />- `*.servicebus.usgovcloudapi.net` <br />- `*.aadconnecthealth.microsoftazure.us` <br />- `https://management.usgovcloudapi.net` <br />- `https://policykeyservice.aadcdi.azure.us` <br />- `https://login.microsoftonline.us` <br />- `https://secure.aadcdn.microsoftonline-p.com` <br />- `https://www.office.com` (This endpoint is used only for discovery purposes during registration.)<br />- `https://aadcdn.msftauth.net` <br />- `https://aadcdn.msauth.net` | +## Download the agents -## Install the agent +To download and install the Azure AD Connect Health agent: -To download and install the Azure AD Connect Health agent: --* Make sure that you satisfy the [requirements](how-to-connect-health-agent-install.md#requirements) for Azure AD Connect Health. -* Get started using Azure AD Connect Health for AD FS: - * [Download the Azure AD Connect Health agent for AD FS](https://go.microsoft.com/fwlink/?LinkID=518973). - * See the [installation instructions](#install-the-agent-for-ad-fs). -* Get started using Azure AD Connect Health for Sync: - * [Download and install the latest version of Azure AD Connect](https://go.microsoft.com/fwlink/?linkid=615771). The health agent for Sync is installed as part of the Azure AD Connect installation (version 1.0.9125.0 or later). -* Get started using Azure AD Connect Health for Azure AD DS: - * [Download the Azure AD Connect Health agent for Azure AD DS](https://go.microsoft.com/fwlink/?LinkID=820540). - * See the [installation instructions](#install-the-agent-for-azure-ad-ds). +- Make sure that you satisfy the [requirements](how-to-connect-health-agent-install.md#requirements) to install Azure AD Connect Health. +- Get started using Azure AD Connect Health for AD FS: + - [Download the Azure AD Connect Health agent for AD FS](https://go.microsoft.com/fwlink/?LinkID=518973). + - See the [installation instructions](#install-the-agent-for-ad-fs). +- Get started using Azure AD Connect Health for sync: + - [Download and install the latest version of Azure AD Connect](https://go.microsoft.com/fwlink/?linkid=615771). The health agent for sync is installed as part of the Azure AD Connect installation (version 1.0.9125.0 or later). +- Get started using Azure AD Connect Health for Azure AD DS: + - [Download the Azure AD Connect Health agent for Azure AD DS](https://go.microsoft.com/fwlink/?LinkID=820540). + - See the [installation instructions](#install-the-agent-for-azure-ad-ds). 
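Before installing an agent, it can help to spot-check outbound connectivity to a few of the required endpoints. This is a hedged sketch rather than part of the official procedure; the two hostnames are a sample from the endpoint tables above, not the full list.

```powershell
# Quick outbound reachability check on port 443 for a sample of endpoints.
foreach ($endpoint in 'login.microsoftonline.com', 'management.azure.com') {
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```

A successful `TcpTestSucceeded` result confirms only the TCP path; the TLS inspection issues noted in the requirements can still block the agent.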
## Install the agent for AD FS > [!NOTE]-> Your AD FS server should be different from your Sync server. Don't install the AD FS agent on your Sync server. +> Your AD FS server should be separate from your sync server. Don't install the AD FS agent on your sync server. > Before you install the agent, make sure your AD FS server host name is unique and isn't present in the AD FS service.-To start the agent installation, double-click the *.exe* file that you downloaded. In the first window, select **Install**. - +To start the agent installation, double-click the *.exe* file you downloaded. In the first dialog, select **Install**. + After the installation finishes, select **Configure Now**. - A PowerShell window opens to start the agent registration process. When you're prompted, sign in by using an Azure AD account that has permissions to register the agent. By default, the Hybrid Identity Administrator account has permissions. - -After you sign in, PowerShell continues. When it finishes, you can close PowerShell. The configuration is complete. +After you sign in, PowerShell continues the installation. When it finishes, you can close PowerShell. Configuration is complete. -At this point, the agent services should start automatically to allow the agent to securely upload the required data to the cloud service. +At this point, the agent services should start automatically, allowing the agent to securely upload the required data to the cloud service. -If you haven't met all of the prerequisites, warnings appear in the PowerShell window. Be sure to complete the [requirements](how-to-connect-health-agent-install.md#requirements) before you install the agent. The following screenshot shows an example of these warnings. +If you haven't met all the prerequisites, warnings appear in the PowerShell window. Be sure to complete the [requirements](how-to-connect-health-agent-install.md#requirements) before you install the agent. The following screenshot shows an example of these warnings. - To verify that the agent was installed, look for the following services on the server. If you completed the configuration, they should already be running. Otherwise, they're stopped until the configuration is complete. -* Azure AD Connect Health AD FS Diagnostics Service -* Azure AD Connect Health AD FS Insights Service -* Azure AD Connect Health AD FS Monitoring Service -- +- Azure AD Connect Health AD FS Diagnostics Service +- Azure AD Connect Health AD FS Insights Service +- Azure AD Connect Health AD FS Monitoring Service ### Enable auditing for AD FS > [!NOTE]-> This section applies only to AD FS servers. You don't have to follow these steps on the Web Application Proxy servers. +> This section applies only to AD FS servers. You don't have to complete these steps on Web Application Proxy servers. > -The Usage Analytics feature needs to gather and analyze data. So the Azure AD Connect Health agent needs the information in the AD FS audit logs. These logs aren't enabled by default. Use the following procedures to enable AD FS auditing and to locate the AD FS audit logs on your AD FS servers. +The Usage Analytics feature needs to gather and analyze data, so the Azure AD Connect Health agent needs the information in the AD FS audit logs. These logs aren't enabled by default. Use the following procedures to enable AD FS auditing and to locate the AD FS audit logs on your AD FS servers. #### To enable auditing for AD FS on Windows Server 2012 R2 -1. 
On the Start screen, open **Server Manager**, and then open **Local Security Policy**. Or on the taskbar, open **Server Manager**, and then select **Tools/Local Security Policy**. -2. Go to the *Security Settings\Local Policies\User Rights Assignment* folder. Then double-click **Generate security audits**. -3. On the **Local Security Setting** tab, verify that the AD FS service account is listed. If it's not listed, then select **Add User or Group**, and add it to the list. Then select **OK**. -4. To enable auditing, open a Command Prompt window with elevated privileges. Then run the following command: - +1. On the Start screen, open **Server Manager**, and then open **Local Security Policy**. Or, on the taskbar, open **Server Manager**, and then select **Tools/Local Security Policy**. +1. Go to the *Security Settings\Local Policies\User Rights Assignment* folder. Double-click **Generate security audits**. +1. On the **Local Security Setting** tab, verify that the AD FS service account is listed. If it's not listed, select **Add User or Group**, and add the AD FS service account to the list. Then select **OK**. +1. To enable auditing, open a Command Prompt window as administrator, and then run the following command: + `auditpol.exe /set /subcategory:{0CCE9222-69AE-11D9-BED3-505054503030} /failure:enable /success:enable`+ 1. Close **Local Security Policy**.- >[!Important] - >The following steps are required only for primary AD FS servers. ++ > [!IMPORTANT] + > The remaining steps are required only for primary AD FS servers. + 1. Open the **AD FS Management** snap-in. (In **Server Manager**, select **Tools** > **AD FS Management**.) 1. In the **Actions** pane, select **Edit Federation Service Properties**.-1. In the **Federation Service Properties** dialog box, select the **Events** tab. -1. Select the **Success audits and Failure audits** check boxes, and then select **OK**. -1. To enable verbose logging through PowerShell, use the following command: +1. In the **Federation Service Properties** dialog, select the **Events** tab. +1. Select the **Success audits** and **Failure audits** checkboxes, and then select **OK**. +1. To enable verbose logging through PowerShell, use the following command: `Set-AdfsProperties -LOGLevel Verbose` #### To enable auditing for AD FS on Windows Server 2016 -1. On the Start screen, open **Server Manager**, and then open **Local Security Policy**. Or on the taskbar, open **Server Manager**, and then select **Tools/Local Security Policy**. -2. Go to the *Security Settings\Local Policies\User Rights Assignment* folder, and then double-click **Generate security audits**. -3. On the **Local Security Setting** tab, verify that the AD FS service account is listed. If it's not listed, then select **Add User or Group**, and add the AD FS service account to the list. Then select **OK**. -4. To enable auditing, open a Command Prompt window with elevated privileges. Then run the following command: +1. On the Start screen, open **Server Manager**, and then open **Local Security Policy**. Or, on the taskbar, open **Server Manager**, and then select **Tools/Local Security Policy**. +1. Go to the *Security Settings\Local Policies\User Rights Assignment* folder. Double-click **Generate security audits**. +1. On the **Local Security Setting** tab, verify that the AD FS service account is listed. If it's not listed, select **Add User or Group**, and add the AD FS service account to the list. Then select **OK**. +1. 
To enable auditing, open a Command Prompt window as administrator, and then run the following command: `auditpol.exe /set /subcategory:{0CCE9222-69AE-11D9-BED3-505054503030} /failure:enable /success:enable`+ 1. Close **Local Security Policy**.- >[!Important] - >The following steps are required only for primary AD FS servers. ++ > [!IMPORTANT] + > The remaining steps are required only for primary AD FS servers. + 1. Open the **AD FS Management** snap-in. (In **Server Manager**, select **Tools** > **AD FS Management**.) 1. In the **Actions** pane, select **Edit Federation Service Properties**.-1. In the **Federation Service Properties** dialog box, select the **Events** tab. -1. Select the **Success audits and Failure audits** check boxes, and then select **OK**. Success audits and failure audits should be enabled by default. -1. Open a PowerShell window and run the following command: +1. In the **Federation Service Properties** dialog, select the **Events** tab. +1. Select the **Success audits** and **Failure audits** checkboxes, and then select **OK**. Success audits and failure audits should be enabled by default. +1. Open a PowerShell window and run the following command: `Set-AdfsProperties -AuditLevel Verbose` The "basic" audit level is enabled by default. For more information, see [AD FS audit enhancement in Windows Server 2016](/windows-server/identity/ad-fs/technical-reference/auditing-enhancements-to-ad-fs-in-windows-server). - #### To locate the AD FS audit logs 1. Open **Event Viewer**. 2. Go to **Windows Logs**, and then select **Security**.-3. On the right, select **Filter Current Logs**. +3. In the right pane, select **Filter Current Logs**. 4. For **Event sources**, select **AD FS Auditing**. For more information about audit logs, see [Operations questions](./reference-connect-health-faq.yml). -  + :::image type="content" source="media/how-to-connect-health-agent-install/adfsaudit.png" alt-text="Screenshot that shows the Filter Current Log window, with AD FS auditing selected."::: > [!WARNING] > A group policy can disable AD FS auditing. If AD FS auditing is disabled, usage analytics about login activities are unavailable. Ensure that you have no group policy that disables AD FS auditing. > +## Install the agent for sync -## Install the agent for Sync +The Azure AD Connect Health agent for sync is installed automatically in the latest version of Azure AD Connect. To use Azure AD Connect for sync, [download the latest version of Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) and install it. -The Azure AD Connect Health agent for Sync is installed automatically in the latest version of Azure AD Connect. To use Azure AD Connect for Sync, [download the latest version of Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) and install it. +To verify that the agent has been installed, look for the following services on the server. If you completed the configuration, the services should already be running. Otherwise, the services are stopped until the configuration is complete. -To verify the agent has been installed, look for the following services on the server. If you completed the configuration, the services should already be running. Otherwise, the services are stopped until the configuration is complete. 
+- Azure AD Connect Health Sync Insights Service +- Azure AD Connect Health Sync Monitoring Service -* Azure AD Connect Health Sync Insights Service -* Azure AD Connect Health Sync Monitoring Service -- > [!NOTE] > Remember that you must have Azure AD Premium (P1 or P2) to use Azure AD Connect Health. If you don't have Azure AD Premium, you can't complete the configuration in the Azure portal. For more information, see the [requirements](how-to-connect-health-agent-install.md#requirements).-> -> -## Manually register Azure AD Connect Health for Sync +## Manually register Azure AD Connect Health for sync -If the Azure AD Connect Health for Sync agent registration fails after you successfully install Azure AD Connect, then you can use a PowerShell command to manually register the agent. +If the Azure AD Connect Health for sync agent registration fails after you successfully install Azure AD Connect, you can use a PowerShell command to manually register the agent. > [!IMPORTANT] > Use this PowerShell command only if the agent registration fails after you install Azure AD Connect.-> -> -Manually register the Azure AD Connect Health agent for Sync by using the following PowerShell command. The Azure AD Connect Health services will start after the agent has been successfully registered. +Manually register the Azure AD Connect Health agent for sync by using the following PowerShell command. The Azure AD Connect Health services will start after the agent has been successfully registered. `Register-AzureADConnectHealthSyncAgent -AttributeFiltering $true -StagingMode $false` The command takes following parameters: -* **AttributeFiltering**: `$true` (default) if Azure AD Connect isn't syncing the default attribute set and has been customized to use a filtered attribute set. Otherwise, use `$false`. -* **StagingMode**: `$false` (default) if the Azure AD Connect server is *not* in staging mode. If the server is configured to be in staging mode, use `$true`. +- `AttributeFiltering`: `$true` (default) if Azure AD Connect isn't syncing the default attribute set and has been customized to use a filtered attribute set. Otherwise, use `$false`. +- `StagingMode`: `$false` (default) if the Azure AD Connect server is *not* in staging mode. If the server is configured to be in staging mode, use `$true`. -When you're prompted for authentication, use the same Global Administrator account (such as admin@domain.onmicrosoft.com) that you used to configure Azure AD Connect. +When you're prompted for authentication, use the same Global Administrator account (such as `admin@domain.onmicrosoft.com`) that you used to configure Azure AD Connect. ## Install the agent for Azure AD DS To start the agent installation, double-click the *.exe* file that you downloaded. In the first window, select **Install**. - When the installation finishes, select **Configure Now**. - A Command Prompt window opens. PowerShell runs `Register-AzureADConnectHealthADDSAgent`. When you're prompted, sign in to Azure. - After you sign in, PowerShell continues. When it finishes, you can close PowerShell. The configuration is complete. -At this point, the services should be started automatically, allowing the agent to monitor and gather data. If you haven't met all of the prerequisites outlined in the previous sections, then warnings appear in the PowerShell window. Be sure to complete the [requirements](how-to-connect-health-agent-install.md#requirements) before you install the agent. The following screenshot shows an example of these warnings. 
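To script the service check described next, one option is a quick `Get-Service` query. This is a sketch that assumes the agent services keep the `Azure AD Connect Health` display-name prefix used throughout this article:

```powershell
# List the Connect Health agent services and their current state.
Get-Service -DisplayName 'Azure AD Connect Health*' |
    Format-Table -Property DisplayName, Status -AutoSize
```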
+At this point, the services should be started automatically, allowing the agent to monitor and gather data. If you haven't met all the prerequisites outlined in the previous sections, warnings appear in the PowerShell window. Be sure to complete the [requirements](how-to-connect-health-agent-install.md#requirements) before you install the agent. The following screenshot shows an example of these warnings. - To verify that the agent is installed, look for the following services on the domain controller: -* Azure AD Connect Health AD DS Insights Service -* Azure AD Connect Health AD DS Monitoring Service +- Azure AD Connect Health AD DS Insights Service +- Azure AD Connect Health AD DS Monitoring Service If you completed the configuration, these services should already be running. Otherwise, they're stopped until the configuration finishes. - ### Quickly install the agent on multiple servers -1. Create a user account in Azure AD. Secure it by using a password. -2. Assign the **Owner** role for this local Azure AD account in Azure AD Connect Health by using the portal. Follow [these steps](how-to-connect-health-operations.md#manage-access-with-azure-rbac). Assign the role to all service instances. -3. Download the *.exe* MSI file in the local domain controller for the installation. -4. Run the following script. Replace the parameters with your new user account and its password. +1. Create a user account in Azure AD. Secure the account by using a password. +1. [Assign the Owner role](how-to-connect-health-operations.md#manage-access-with-azure-rbac) for this local Azure AD account in Azure AD Connect Health by using the portal. Assign the role to all service instances. +1. Download the *.exe* MSI file in the local domain controller for the installation. +1. Run the following script. Replace the parameters with your new user account and its password. ```powershell AdHealthAddsAgentSetup.exe /quiet If you completed the configuration, these services should already be running. Ot Register-AzureADConnectHealthADDSAgent -Credential $myCreds ``` -When you finish, you can remove access for the local account by doing one or more of the following tasks: -* Remove the role assignment for the local account for Azure AD Connect Health. -* Rotate the password for the local account. -* Disable the Azure AD local account. -* Delete the Azure AD local account. +When you finish, you can remove access for the local account by completing one or more of the following tasks: ++- Remove the role assignment for the local account for Azure AD Connect Health. +- Rotate the password for the local account. +- Disable the Azure AD local account. +- Delete the Azure AD local account. ## Register the agent by using PowerShell -After you install the appropriate agent *setup.exe* file, you can register the agent by using the following PowerShell commands, depending on the role. Open a PowerShell window and run the appropriate command: +After you install the relevant agent *setup.exe* file, you can register the agent by using the following PowerShell commands, depending on the role. Open PowerShell as administrator and run the relevant command: ```powershell Register-AzureADConnectHealthADFSAgent Register-AzureADConnectHealthSyncAgent > Register-AzureADConnectHealthSyncAgent -UserPrincipalName upn-of-the-user > ``` -These commands accept `Credential` as a parameter to complete the registration noninteractively or to complete the registration on a machine that runs Server Core. 
Keep in mind that: -* You can capture `Credential` in a PowerShell variable that's passed as a parameter. -* You can provide any Azure AD identity that has permissions to register the agents and that does *not* have multifactor authentication enabled. -* By default, global admins have permissions to register the agents. You can also allow less-privileged identities to do this step. For more information, see [Azure RBAC](how-to-connect-health-operations.md#manage-access-with-azure-rbac). +These commands accept `Credential` as a parameter to complete the registration non-interactively or to complete the registration on a computer that runs Server Core. Keep these factors in mind: ++- You can capture `Credential` in a PowerShell variable that's passed as a parameter. +- You can provide any Azure AD identity that has permissions to register the agents, and which does *not* have multifactor authentication enabled. +- By default, global admins have permissions to register the agents. You can also allow less-privileged identities to do this step. For more information, see [Azure RBAC](how-to-connect-health-operations.md#manage-access-with-azure-rbac). ```powershell $cred = Get-Credential These commands accept `Credential` as a parameter to complete the registration n You can configure Azure AD Connect Health agents to work with an HTTP proxy. > [!NOTE]-> * `Netsh WinHttp set ProxyServerAddress` is not supported. The agent uses System.Net instead of Windows HTTP Services to make web requests. -> * The configured HTTP proxy address is used to pass-through encrypted HTTPS messages. -> * Authenticated proxies (using HTTPBasic) are not supported. -> >+> - `Netsh WinHttp set ProxyServerAddress` isn't supported. The agent uses System.Net instead of Windows HTTP Services to make web requests. +> - The configured HTTP proxy address is used to pass through encrypted HTTPS messages. +> - Authenticated proxies (using HTTPBasic) aren't supported. ### Change the agent proxy configuration To configure the Azure AD Connect Health agent to use an HTTP proxy, you can:-* Import existing proxy settings. -* Specify proxy addresses manually. -* Clear the existing proxy configuration. ++- Import existing proxy settings. +- Specify proxy addresses manually. +- Clear the existing proxy configuration. > [!NOTE]-> To update the proxy settings, you must restart all Azure AD Connect Health agent services. Run the following command: +> To update the proxy settings, you must restart all Azure AD Connect Health agent services. To restart all the agents, run the following command: > > `Restart-Service AdHealthAdfs*` #### Import existing proxy settings -You can import Internet Explorer HTTP proxy settings so that the Azure AD Connect Health agents can use the settings. On each of the servers that run the health agent, run the following PowerShell command: +You can import Internet Explorer HTTP proxy settings so that Azure AD Connect Health agents can use the settings. On each of the servers that run the health agent, run the following PowerShell command: ```powershell Set-AzureAdConnectHealthProxySettings -ImportFromInternetSettings You can manually specify a proxy server. On each of the servers that run the hea Set-AzureAdConnectHealthProxySettings -HttpsProxyAddress address:port ``` -Here's an example: +Here's an example: `Set-AzureAdConnectHealthProxySettings -HttpsProxyAddress myproxyserver: 443` In this example:-* The `address` setting can be a DNS-resolvable server name or an IPv4 address. -* You can omit `port`. 
If you do, then 443 is the default port. ++- The `address` setting can be a DNS-resolvable server name or an IPv4 address. +- You can omit `port`. If you do, 443 is the default port. #### Clear the existing proxy configuration You can read the current proxy settings by running the following command: ```powershell Get-AzureAdConnectHealthProxySettings ```+<a name="test-connectivity-to-azure-ad-connect-health-service"></a> -## Test connectivity to Azure AD Connect Health service +## Test connectivity to the Azure AD Connect Health service -Occasionally, the Azure AD Connect Health agent can lose connectivity with the Azure AD Connect Health service. Causes of this connectivity loss can include network problems, permission problems, and various other problems. +Occasionally, the Azure AD Connect Health agent loses connectivity with the Azure AD Connect Health service. Causes of this connectivity loss might include network problems, permissions problems, and various other problems. -If the agent can't send data to the Azure AD Connect Health service for longer than two hours, the following alert appears in the portal: "Health Service data is not up to date." +If the agent can't send data to the Azure AD Connect Health service for longer than two hours, the following alert appears in the portal: **Health Service data is not up to date**. You can find out whether the affected Azure AD Connect Health agent can upload data to the Azure AD Connect Health service by running the following PowerShell command: You can find out whether the affected Azure AD Connect Health agent can upload d Test-AzureADConnectHealthConnectivity -Role ADFS ``` -The role parameter currently takes the following values: +The `Role` parameter currently takes the following values: -* ADFS -* Sync -* ADDS +- `ADFS` +- `Sync` +- `ADDS` > [!NOTE]-> To use the connectivity tool, you must first register the agent. If you can't complete the agent registration, make sure that you have met all of the [requirements](how-to-connect-health-agent-install.md#requirements) for Azure AD Connect Health. Connectivity is tested by default during agent registration. -> -> +> To use the connectivity tool, you must first register the agent. If you can't complete the agent registration, make sure that you meet all the [requirements](how-to-connect-health-agent-install.md#requirements) for Azure AD Connect Health. Connectivity is tested by default during agent registration. 
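Putting the preceding commands together, here's a minimal end-to-end sketch for an AD FS agent: register it noninteractively, configure a proxy, restart the agent services, and test connectivity. The proxy address is a placeholder (note the `address:port` form, with no space around the colon), and the supplied identity must not have multifactor authentication enabled.

```powershell
# Register the AD FS health agent noninteractively.
$cred = Get-Credential
Register-AzureADConnectHealthADFSAgent -Credential $cred

# Route agent traffic through an HTTP proxy, then restart the agent
# services so the new setting takes effect.
Set-AzureAdConnectHealthProxySettings -HttpsProxyAddress 'myproxyserver:443'
Restart-Service AdHealthAdfs*

# Confirm that the registered agent can reach the Connect Health service.
Test-AzureADConnectHealthConnectivity -Role ADFS
```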
## Next steps Check out the following related articles: -* [Azure AD Connect Health](./whatis-azure-ad-connect.md) -* [Azure AD Connect Health operations](how-to-connect-health-operations.md) -* [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md) -* [Using Azure AD Connect Health for Sync](how-to-connect-health-sync.md) -* [Using Azure AD Connect Health with Azure AD DS](how-to-connect-health-adds.md) -* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml) -* [Azure AD Connect Health version history](reference-connect-health-version-history.md) +- [Azure AD Connect Health](./whatis-azure-ad-connect.md) +- [Azure AD Connect Health operations](how-to-connect-health-operations.md) +- [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md) +- [Using Azure AD Connect Health for sync](how-to-connect-health-sync.md) +- [Using Azure AD Connect Health with Azure AD DS](how-to-connect-health-adds.md) +- [Azure AD Connect Health FAQ](reference-connect-health-faq.yml) +- [Azure AD Connect Health version history](reference-connect-health-version-history.md) |
active-directory | How To Connect Pta Security Deep Dive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md | Title: Azure Active Directory Pass-through Authentication security deep dive| Microsoft Docs -description: This article describes how Azure Active Directory (Azure AD) Pass-through Authentication protects your on-premises accounts + Title: Azure Active Directory pass-through authentication security deep dive +description: Learn how Azure Active Directory pass-through authentication protects your on-premises accounts. -keywords: Azure AD Connect Pass-through Authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on +keywords: Azure AD Connect pass-through authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on -# Azure Active Directory Pass-through Authentication security deep dive +# Azure Active Directory pass-through authentication security deep dive -This article provides a more detailed description of how Azure Active Directory (Azure AD) Pass-through Authentication works. It focuses on the security aspects of the feature. This article is for security and IT administrators, chief compliance and security officers, and other IT professionals who are responsible for IT security and compliance at small-to-medium sized organizations or large enterprises. +This article provides a more detailed description of how Azure Active Directory (Azure AD) pass-through authentication works. It focuses on the security aspects of the feature. This article is for security and IT administrators, chief compliance and security officers, and other IT professionals who are responsible for IT security and compliance at organizations or enterprises of any size. The topics addressed include:-- Detailed technical information about how to install and register the Authentication Agents.-- Detailed technical information about the encryption of passwords during user sign-in.-- The security of the channels between on-premises Authentication Agents and Azure AD.-- Detailed technical information about how to keep the Authentication Agents operationally secure.-- Other security-related topics. -## Key security capabilities +- Detailed technical information about how to install and register authentication agents. +- Detailed technical information about password encryption during user sign-in. +- The security of the channels between on-premises authentication agents and Azure AD. +- Detailed technical information about how to keep the authentication agents operationally secure. ++## Pass-through authentication key security capabilities ++Pass-through authentication has these key security capabilities: -These are the key security aspects of this feature: - It's built on a secure multi-tenanted architecture that provides isolation of sign-in requests between tenants. - On-premises passwords are never stored in the cloud in any form.-- On-premises Authentication Agents that listen for, and respond to, password validation requests only make outbound connections from within your network. There is no requirement to install these Authentication Agents in a perimeter network (DMZ). 
As best practice, treat all servers running Authentication Agents as Tier 0 systems (see [reference](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material)).-- Only standard ports (80 and 443) are used for outbound communication from the Authentication Agents to Azure AD. You don't need to open inbound ports on your firewall. +- On-premises authentication agents that listen for and respond to password validation requests make only outbound connections from within your network. There's no requirement to install these authentication agents in a perimeter network (also known as *DMZ*, *demilitarized zone*, and *screened subnet*). As a best practice, treat all servers that are running authentication agents as Tier 0 systems (see [reference](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material)). +- Only standard ports (port 80 and port 443) are used for outbound communication from the authentication agents to Azure AD. You don't need to open inbound ports on your firewall. - Port 443 is used for all authenticated outbound communication.- - Port 80 is used only for downloading the Certificate Revocation Lists (CRLs) to ensure that none of the certificates used by this feature have been revoked. - - For the complete list of the network requirements, see [Azure Active Directory Pass-through Authentication: Quickstart](how-to-connect-pta-quick-start.md#step-1-check-the-prerequisites). -- Passwords that users provide during sign-in are encrypted in the cloud before the on-premises Authentication Agents accept them for validation against Active Directory.-- The HTTPS channel between Azure AD and the on-premises Authentication Agent is secured by using mutual authentication.-- Protects your user accounts by working seamlessly with [Azure AD Conditional Access policies](../conditional-access/overview.md), including Multi-Factor Authentication (MFA), [blocking legacy authentication](../conditional-access/concept-conditional-access-conditions.md) and by [filtering out brute force password attacks](../authentication/howto-password-smart-lockout.md).+ - Port 80 is used only for downloading certificate revocation lists (CRLs) to ensure that none of the certificates this feature uses have been revoked. + - For the complete list of the network requirements, see the [Azure Active Directory pass-through authentication quickstart](how-to-connect-pta-quick-start.md#step-1-check-the-prerequisites). +- Passwords that users provide during sign-in are encrypted in the cloud before the on-premises authentication agents accept them for validation against Windows Server Active Directory (Windows Server AD). +- The HTTPS channel between Azure AD and the on-premises authentication agent is secured by using mutual authentication. +- Pass-through authentication protects your user accounts by working seamlessly with [Azure AD Conditional Access policies](../conditional-access/overview.md), including multifactor authentication (MFA), [blocking legacy authentication](../conditional-access/concept-conditional-access-conditions.md), and by [filtering out brute force password attacks](../authentication/howto-password-smart-lockout.md). ++## Components involved in pass-through authentication -## Components involved +For general details about operational, service, and data security for Azure AD, see the [Trust Center](https://azure.microsoft.com/support/trust-center/). 
The following components are involved when you use pass-through authentication for user sign-in: -For general details about Azure AD operational, service, and data security, see the [Trust Center](https://azure.microsoft.com/support/trust-center/). The following components are involved when you use Pass-through Authentication for user sign-in: -- **Azure AD STS**: A stateless security token service (STS) that processes sign-in requests and issues security tokens to users' browsers, clients, or services as required.+- **Azure AD Security Token Service (Azure AD STS)**: A stateless STS that processes sign-in requests and issues security tokens to user browsers, clients, or services as required. - **Azure Service Bus**: Provides cloud-enabled communication with enterprise messaging and relays communication that helps you connect on-premises solutions with the cloud. - **Azure AD Connect Authentication Agent**: An on-premises component that listens for and responds to password validation requests.-- **Azure SQL Database**: Holds information about your tenant's Authentication Agents, including their metadata and encryption keys.-- **Active Directory**: On-premises Active Directory, where your user accounts and their passwords are stored.+- **Azure SQL Database**: Holds information about your tenant's authentication agents, including their metadata and encryption keys. +- **Windows Server AD**: On-premises Active Directory, where user accounts and their passwords are stored. ++## Installation and registration of authentication agents ++Authentication agents are installed and registered with Azure AD when you take one of the following actions: -## Installation and registration of the Authentication Agents +- [Enable pass-through authentication through Azure AD Connect](./how-to-connect-pta-quick-start.md#step-2-enable-the-feature) +- [Add more authentication agents to ensure the high availability of sign-in requests](./how-to-connect-pta-quick-start.md#step-4-ensure-high-availability) -Authentication Agents are installed and registered with Azure AD when you either: - - [Enable Pass-through Authentication through Azure AD Connect](./how-to-connect-pta-quick-start.md#step-2-enable-the-feature) - - [Add more Authentication Agents to ensure the high availability of sign-in requests](./how-to-connect-pta-quick-start.md#step-4-ensure-high-availability) - -Getting an Authentication Agent working involves three main phases: +Getting an authentication agent operational involves three main phases: -1. Authentication Agent installation -2. Authentication Agent registration -3. Authentication Agent initialization +- Installation +- Registration +- Initialization The following sections discuss these phases in detail. -### Authentication Agent installation +### Authentication agent installation -Only Hybrid Identity Administrators can install an Authentication Agent (by using Azure AD Connect or standalone) on an on-premises server. Installation adds two new entries to the **Control Panel** > **Programs** > **Programs and Features** list: -- The Authentication Agent application itself. This application runs with [NetworkService](/windows/win32/services/networkservice-account) privileges.-- The Updater application that's used to auto-update the Authentication Agent. 
This application runs with [LocalSystem](/windows/win32/services/localsystem-account) privileges.+Only a Hybrid Identity Administrator account can install an authentication agent (by using Azure AD Connect or a standalone instance) on an on-premises server. ->[!IMPORTANT] ->From a security standpoint, administrators should treat the server running the PTA agent as if it were a domain controller. The PTA agent servers should be hardened along the same lines as outlined in [Securing Domain Controllers Against Attack](/windows-server/identity/ad-ds/plan/security-best-practices/securing-domain-controllers-against-attack) -### Authentication Agent registration +Installation adds two new entries to the list in **Control Panel** > **Programs** > **Programs and Features**: -After you install the Authentication Agent, it needs to register itself with Azure AD. Azure AD assigns each Authentication Agent a unique, digital-identity certificate that it can use for secure communication with Azure AD. +- The authentication agent application itself. This application runs with [NetworkService](/windows/win32/services/networkservice-account) privileges. +- The Updater application that's used to auto update the authentication agent. This application runs with [LocalSystem](/windows/win32/services/localsystem-account) privileges. -The registration procedure also binds the Authentication Agent with your tenant. This ensures that Azure AD knows that this specific Authentication Agent is the only one authorized to handle password validation requests for your tenant. This procedure is repeated for each new Authentication Agent that you register. +> [!IMPORTANT] +> From a security standpoint, administrators should treat the server running the pass-through authentication agent as if it were a domain controller. The pass-through authentication agent servers should be hardened as outlined in [Secure domain controllers against attack](/windows-server/identity/ad-ds/plan/security-best-practices/securing-domain-controllers-against-attack). -The Authentication Agents use the following steps to register themselves with Azure AD: +### Authentication agent registration - +After you install the authentication agent, it registers itself with Azure AD. Azure AD assigns each authentication agent a unique, digital identity certificate that it can use for secure communication with Azure AD. -1. Azure AD first requests that a Hybrid Identity Administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the -2. The Authentication Agent then generates a key pair: a public key and a private key. - - The key pair is generated through standard RSA 2048-bit encryption. - - The private key stays on the on-premises server where the Authentication Agent resides. -3. The Authentication Agent makes a "registration" request to Azure AD over HTTPS, with the following components included in the request: - - The access token acquired in step 1. - - The public key generated in step 2. - - A Certificate Signing Request (CSR or Certificate Request). 
This request applies for a digital identity certificate, with Azure AD as its certificate authority (CA). -4. Azure AD validates the access token in the registration request and verifies that the request came from a Hybrid Identity Administrator. -5. Azure AD then signs and sends a digital identity certificate back to the Authentication Agent. - - The root CA in Azure AD is used to sign the certificate. +The registration procedure also binds the authentication agent with your tenant. Then, Azure AD knows that this specific authentication agent is the only one that's authorized to handle password validation requests for your tenant. This procedure is repeated for each new authentication agent that you register. ++The authentication agents use the following steps to register themselves with Azure AD: +++1. Azure AD first requests that a hybrid identity administrator sign in to Azure AD with their credentials. During sign-in, the authentication agent acquires an access token that it can use on behalf of the user. +1. The authentication agent then generates a key pair: a public key and a private key. + - The key pair is generated through standard RSA 2,048-bit encryption. + - The private key stays on the on-premises server where the authentication agent resides. +1. The authentication agent makes a registration request to Azure AD over HTTPS, with the following components included in the request: + - The access token that the agent acquired. + - The public key that was generated. + - A Certificate Signing Request (*CSR* or *Certificate Request*). This request applies for a digital identity certificate, with Azure AD as its certificate authority (CA). +1. Azure AD validates the access token in the registration request and verifies that the request came from a hybrid identity administrator. +1. Azure AD then signs a digital identity certificate and sends it back to the authentication agent. + - The root CA in Azure AD is used to sign the certificate. > [!NOTE]- > This CA is _not_ in the Windows Trusted Root Certificate Authorities store. - - The CA is used only by the Pass-through Authentication feature. The CA is used only to sign CSRs during the Authentication Agent registration. - - None of the other Azure AD services use this CA. - - The certificate's subject (Distinguished Name or DN) is set to your tenant ID. This DN is a GUID that uniquely identifies your tenant. This DN scopes the certificate for use only with your tenant. -6. Azure AD stores the public key of the Authentication Agent in a database in Azure SQL Database, which only Azure AD has access to. -7. The certificate (issued in step 5) is stored on the on-premises server in the Windows certificate store (specifically in the [CERT_SYSTEM_STORE_LOCAL_MACHINE](/windows/win32/seccrypto/system-store-locations#CERT_SYSTEM_STORE_LOCAL_MACHINE) location). It is used by both the Authentication Agent and the Updater applications. + > This CA is *not* in the Windows Trusted Root Certificate Authorities store. + - The CA is used only by the pass-through authentication feature. The CA is used only to sign CSRs during the authentication agent registration. + - No other Azure AD service uses this CA. + - The certificate's subject (also called *Distinguished Name* or *DN*) is set to your tenant ID. This DN is a GUID that uniquely identifies your tenant. This DN scopes the certificate for use only with your tenant. +1. Azure AD stores the public key of the authentication agent in a database in Azure SQL Database. Only Azure AD can access the database. +1. The certificate that's issued is stored on the on-premises server in the Windows certificate store (specifically, in [CERT_SYSTEM_STORE_LOCAL_MACHINE](/windows/win32/seccrypto/system-store-locations#CERT_SYSTEM_STORE_LOCAL_MACHINE)). 
The certificate is used by both the authentication agent and the Updater application. ++### Authentication agent initialization -### Authentication Agent initialization +When the authentication agent starts, either for the first time after registration or after a server restart, it needs a way to communicate securely with the Azure AD service so that it can start to accept password validation requests. -When the Authentication Agent starts, either for the first time after registration or after a server restart, it needs a way to communicate securely with the Azure AD service and start accepting password validation requests. - +Here's how authentication agents are initialized: -Here is how Authentication Agents are initialized: +1. The authentication agent makes an outbound bootstrap request to Azure AD. -1. The Authentication Agent makes an outbound bootstrap request to Azure AD. - - This request is made over port 443 and is over a mutually authenticated HTTPS channel. The request uses the same certificate that was issued during the Authentication Agent registration. -2. Azure AD responds to the request by providing an access key to an Azure Service Bus queue that's unique to your tenant and that's identified by your tenant ID. -3. The Authentication Agent makes a persistent outbound HTTPS connection (over port 443) to the queue. - - The Authentication Agent is now ready to retrieve and handle password-validation requests. + This request is made over port 443 and is over a mutually authenticated HTTPS channel. The request uses the same certificate that was issued during authentication agent registration. +1. Azure AD responds to the request by providing an access key to a Service Bus queue that's unique to your tenant, and which is identified by your tenant ID. +1. The authentication agent makes a persistent outbound HTTPS connection (over port 443) to the queue. -If you have multiple Authentication Agents registered on your tenant, then the initialization procedure ensures that each one connects to the same Service Bus queue. +The authentication agent is now ready to retrieve and handle password validation requests. -## Process sign-in requests +If you have multiple authentication agents registered on your tenant, the initialization procedure ensures that each agent connects to the same Service Bus queue. -The following diagram shows how Pass-through Authentication processes user sign-in requests. +## How pass-through authentication processes sign-in requests - +The following diagram shows how pass-through authentication processes user sign-in requests: -Pass-through Authentication handles a user sign-in request as follows: ++How pass-through authentication handles a user sign-in request: 1. A user tries to access an application, for example, [Outlook Web App](https://outlook.office365.com/owa).-2. If the user is not already signed in, the application redirects the browser to the Azure AD sign-in page. -3. The Azure AD STS service responds back with the **User sign-in** page. -4. The user enters their username into the **User sign-in** page, and then selects the **Next** button. -5. The user enters their password into the **User sign-in** page, and then selects the **Sign-in** button. -6. The username and password are submitted to Azure AD STS in an HTTPS POST request. -7. Azure AD STS retrieves public keys for all the Authentication Agents registered on your tenant from Azure SQL Database and encrypts the password by using them. 
- - It produces "N" encrypted password values for "N" Authentication Agents registered on your tenant. -8. Azure AD STS places the password validation request, which consists of the username and the encrypted password values, onto the Service Bus queue specific to your tenant. -9. Because the initialized Authentication Agents are persistently connected to the Service Bus queue, one of the available Authentication Agents retrieves the password validation request. -10. The Authentication Agent locates the encrypted password value that's specific to its public key, by using an identifier, and decrypts it by using its private key. -11. The Authentication Agent attempts to validate the username and the password against on-premises Active Directory by using the [Win32 LogonUser API](/windows/win32/api/winbase/nf-winbase-logonusera) with the **dwLogonType** parameter set to **LOGON32_LOGON_NETWORK**. - - This API is the same API that is used by Active Directory Federation Services (AD FS) to sign in users in a federated sign-in scenario. +1. If the user isn't already signed in, the application redirects the browser to the Azure AD sign-in page. +1. The Azure AD STS service responds back with the **User sign-in** page. +1. The user enters their username in the **User sign-in** page, and then selects the **Next** button. +1. The user enters their password in the **User sign-in** page, and then selects the **Sign-in** button. +1. The username and password are submitted to Azure AD STS in an HTTPS POST request. +1. Azure AD STS retrieves public keys for all the authentication agents that are registered on your tenant from Azure SQL Database and encrypts the password by using the keys. ++ It produces one encrypted password value for each authentication agent registered on your tenant. +1. Azure AD STS places the password validation request, which consists of the username and the encrypted password values, in the Service Bus queue that's specific to your tenant. +1. Because the initialized authentication agents are persistently connected to the Service Bus queue, one of the available authentication agents retrieves the password validation request. +1. The authentication agent uses an identifier to locate the encrypted password value that's specific to its public key. It decrypts that encrypted password value by using its private key. +1. The authentication agent attempts to validate the username and the password against Windows Server AD by using the [Win32 LogonUser API](/windows/win32/api/winbase/nf-winbase-logonusera) with the `dwLogonType` parameter set to `LOGON32_LOGON_NETWORK`. + - This API is the same API that's used by Active Directory Federation Services (AD FS) to sign in users in a federated sign-in scenario. - This API relies on the standard resolution process in Windows Server to locate the domain controller.-12. The Authentication Agent receives the result from Active Directory, such as success, username or password incorrect, or password expired. +1. The authentication agent receives the result from Windows Server AD, such as success, username or password is incorrect, or password is expired. > [!NOTE]- > If the Authentication Agent fails during the sign-in process, the whole sign-in request is dropped. There is no hand-off of sign-in requests from one Authentication Agent to another Authentication Agent on-premises. These agents only communicate with the cloud, and not with each other. - -13.
The Authentication Agent forwards the result back to Azure AD STS over an outbound mutually authenticated HTTPS channel over port 443. Mutual authentication uses the certificate previously issued to the Authentication Agent during registration. -14. Azure AD STS verifies that this result correlates with the specific sign-in request on your tenant. -15. Azure AD STS continues with the sign-in procedure as configured. For example, if the password validation was successful, the user might be challenged for Multi-Factor Authentication or redirected back to the application. + > If the authentication agent fails during the sign-in process, the entire sign-in request is dropped. Sign-in requests aren't handed off from one on-premises authentication agent to another on-premises authentication agent. These agents communicate only with the cloud, and not with each other. ++1. The authentication agent forwards the result back to Azure AD STS over an outbound mutually authenticated HTTPS channel over port 443. Mutual authentication uses the certificate that was issued to the authentication agent during registration. +1. Azure AD STS verifies that this result correlates with the specific sign-in request on your tenant. +1. Azure AD STS continues with the sign-in procedure as configured. For example, if the password validation was successful, the user might be challenged for MFA or be redirected back to the application. ++<a name="operational-security-of-the-authentication-agents"></a> -## Operational security of the Authentication Agents +## Authentication agent operational security -To ensure that Pass-through Authentication remains operationally secure, Azure AD periodically renews Authentication Agents' certificates. Azure AD triggers the renewals. The renewals are not governed by the Authentication Agents themselves. +To ensure that pass-through authentication remains operationally secure, Azure AD periodically renews authentication agent certificates. Azure AD triggers the renewals. The renewals aren't governed by the authentication agents themselves. - -To renew an Authentication Agent's trust with Azure AD: +To renew an authentication agent's trust with Azure AD: -1. The Authentication Agent periodically pings Azure AD every few hours to check if it's time to renew its certificate. The certificate is renewed 30 days prior to its expiration. - - This check is done over a mutually authenticated HTTPS channel and uses the same certificate that was issued during registration. -2. If the service indicates that it's time to renew, the Authentication Agent generates a new key pair: a public key and a private key. - - These keys are generated through standard RSA 2048-bit encryption. +1. The authentication agent pings Azure AD every few hours to check if it's time to renew its certificate. The certificate is renewed 30 days before it expires. ++ This check is done over a mutually authenticated HTTPS channel and uses the same certificate that was issued during registration. +1. If the service indicates that it's time to renew, the authentication agent generates a new key pair: a public key and a private key. + - These keys are generated through standard RSA 2,048-bit encryption. - The private key never leaves the on-premises server.-3. 
The Authentication Agent then makes a "certificate renewal" request to Azure AD over HTTPS, with the following components included in the request: - - The existing certificate that's retrieved from the CERT_SYSTEM_STORE_LOCAL_MACHINE location on the Windows certificate store. There is no global administrator involved in this procedure, so there is no access token needed on behalf of the global administrator. +1. The authentication agent then makes a certificate renewal request to Azure AD over HTTPS. The following components are included in the request: + - The existing certificate that's retrieved from the CERT_SYSTEM_STORE_LOCAL_MACHINE location in the Windows certificate store. No global administrator is involved in this procedure, so no access token is required for a global administrator. - The public key generated in step 2.- - A Certificate Signing Request (CSR or Certificate Request). This request applies for a new digital identity certificate, with Azure AD as its certificate authority. -4. Azure AD validates the existing certificate in the certificate renewal request. Then it verifies that the request came from an Authentication Agent registered on your tenant. -5. If the existing certificate is still valid, Azure AD then signs a new digital identity certificate, and issues the new certificate back to the Authentication Agent. -6. If the existing certificate has expired, Azure AD deletes the Authentication Agent from your tenant's list of registered Authentication Agents. Then a global administrator or hybrid identity administrator needs to manually install and register a new Authentication Agent. + - A CSR. This request applies for a new digital identity certificate, with Azure AD as its CA. +1. Azure AD validates the existing certificate in the certificate renewal request. Then it verifies that the request came from an authentication agent that's registered on your tenant. +1. If the existing certificate is still valid, Azure AD signs a new digital identity certificate and issues the new certificate back to the authentication agent. +1. If the existing certificate has expired, Azure AD deletes the authentication agent from your tenant's list of registered authentication agents. Then a global admin or a hybrid identity administrator must manually install and register a new authentication agent. - Use the Azure AD root CA to sign the certificate.- - Set the certificate's subject (Distinguished Name or DN) to your tenant ID, a GUID that uniquely identifies your tenant. The DN scopes the certificate to your tenant only. -6. Azure AD stores the new public key of the Authentication Agent in a database in Azure SQL Database that only it has access to. It also invalidates the old public key associated with the Authentication Agent. -7. The new certificate (issued in step 5) is then stored on the server in the Windows certificate store (specifically in the [CERT_SYSTEM_STORE_CURRENT_USER](/windows/win32/seccrypto/system-store-locations#CERT_SYSTEM_STORE_CURRENT_USER) location). - - Because the trust renewal procedure happens non-interactively (without the presence of the global administrator or hybrid identity administrator), the Authentication Agent no longer has access to update the existing certificate in the CERT_SYSTEM_STORE_LOCAL_MACHINE location. - + - Set the certificate's DN to your tenant ID, a GUID that uniquely identifies your tenant. The DN scopes the certificate to your tenant only. +1.
Azure AD stores the new public key of the authentication agent in a database in Azure SQL Database that only it has access to. It also invalidates the old public key associated with the authentication agent. +1. The new certificate (issued in step 5) is then stored on the server in the Windows certificate store (specifically, in the [CERT_SYSTEM_STORE_CURRENT_USER](/windows/win32/seccrypto/system-store-locations#CERT_SYSTEM_STORE_CURRENT_USER) location). ++ Because the trust renewal procedure happens non-interactively (without the presence of the global administrator or hybrid identity administrator), the authentication agent no longer has access to update the existing certificate in the CERT_SYSTEM_STORE_LOCAL_MACHINE location. + > [!NOTE] > This procedure does not remove the certificate itself from the CERT_SYSTEM_STORE_LOCAL_MACHINE location.-8. The new certificate is used for authentication from this point on. Every subsequent renewal of the certificate replaces the certificate in the CERT_SYSTEM_STORE_LOCAL_MACHINE location. +1. From this point, the new certificate is used for authentication. Every subsequent renewal of the certificate replaces the certificate in the CERT_SYSTEM_STORE_LOCAL_MACHINE location. ++## Authentication agent auto-update -## Auto-update of the Authentication Agents -The Updater application automatically updates the Authentication Agent when a new version (with bug fixes or performance enhancements) is released. The Updater application does not handle any password validation requests for your tenant. +The Updater application automatically updates the authentication agent when a new version (with bug fixes or performance enhancements) is released. The Updater application doesn't handle any password validation requests for your tenant. +Azure AD hosts the new version of the software as a signed Windows Installer package (MSI). The MSI is signed by using [Microsoft Authenticode](/previous-versions/windows/internet-explorer/ie-developer/platform-apis/ms537359(v=vs.85)) with SHA-256 as the digest algorithm. -Azure AD hosts the new version of the software as a signed **Windows Installer package (MSI)**. The MSI is signed by using [Microsoft Authenticode](/previous-versions/windows/internet-explorer/ie-developer/platform-apis/ms537359(v=vs.85)) with SHA256 as the digest algorithm. - +To auto-update an authentication agent: -To auto-update an Authentication Agent: +1. The Updater application pings Azure AD every hour to check if a new version of the authentication agent is available. -1. The Updater application pings Azure AD every hour to check if there is a new version of the Authentication Agent available. - - This check is done over a mutually authenticated HTTPS channel by using the same certificate that was issued during registration. The Authentication Agent and the Updater share the certificate stored on the server. -2. If a new version is available, Azure AD returns the signed MSI back to the Updater. -3. The Updater verifies that the MSI is signed by Microsoft. -4. The Updater runs the MSI. This action involves the following steps: + This check is done over a mutually authenticated HTTPS channel by using the same certificate that was issued during registration. The authentication agent and the Updater share the certificate that is stored on the server. +1. If a new version is available, Azure AD returns the signed MSI back to the Updater. +1. The Updater verifies that the MSI is signed by Microsoft. +1. The Updater runs the MSI.
In this process, the Updater application: > [!NOTE] > The Updater runs with [Local System](/windows/win32/services/localsystem-account) privileges. - - Stops the Authentication Agent service - - Installs the new version of the Authentication Agent on the server - - Restarts the Authentication Agent service -->[!NOTE] ->If you have multiple Authentication Agents registered on your tenant, Azure AD does not renew their certificates or update them at the same time. Instead, Azure AD does so one at a time to ensure the high availability of sign-in requests. -> + 1. Stops the authentication agent service. + 1. Installs the new version of the authentication agent on the server. + 1. Restarts the authentication agent service. +> [!NOTE] +> If you have multiple authentication agents registered on your tenant, Azure AD doesn't renew their certificates or update them at the same time. Instead, Azure AD renews the certificates one at a time to ensure high availability for sign-in requests. ## Next steps-- [Current limitations](how-to-connect-pta-current-limitations.md): Learn which scenarios are supported and which ones are not.-- [Quickstart](how-to-connect-pta-quick-start.md): Get up and running on Azure AD Pass-through Authentication.-- [Migrate your apps to Azure AD](../manage-apps/migration-resources.md): Resources to help you migrate application access and authentication to Azure AD.++- [Current limitations](how-to-connect-pta-current-limitations.md): Learn what scenarios are supported. +- [Quickstart](how-to-connect-pta-quick-start.md): Get set up with Azure AD pass-through authentication. +- [Migrate from AD FS to pass-through authentication](https://aka.ms/adfstoptadpdownload): Review this detailed guide that helps you migrate from AD FS or other federation technologies to pass-through authentication. - [Smart Lockout](../authentication/howto-password-smart-lockout.md): Configure the Smart Lockout capability on your tenant to protect user accounts.-- [How it works](how-to-connect-pta-how-it-works.md): Learn the basics of how Azure AD Pass-through Authentication works.-- [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions.-- [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature.-- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature.-- [Hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources. +- [How it works](how-to-connect-pta-how-it-works.md): Learn the basics of how Azure AD pass-through authentication works. +- [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to common questions. +- [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with pass-through authentication. +- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about the complementary Azure AD feature Seamless single sign-on. |
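As a quick illustration of the certificate lifecycle in the article above: the agent's identity certificate sits in the local computer store with the tenant ID (a GUID) as its subject. The following is a minimal PowerShell sketch, not part of the product, that assumes only that subject format and lists candidate certificates for inspection:

```powershell
# Minimal sketch, assuming the agent certificate's subject is the tenant ID
# (a GUID), as the article above describes. The regex filter is illustrative,
# not a documented interface.
Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.Subject -match '^CN=[0-9a-fA-F-]{36}$' } |
    Select-Object Subject, NotBefore, NotAfter, Thumbprint
```

NotAfter is the value worth watching: per the renewal procedure above, certificates are renewed about 30 days before they expire, so an expiration closer than that can indicate a renewal problem.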
active-directory | How To Connect Sso Quick Start | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md | Title: 'Azure AD Connect: Seamless Single Sign-On - quickstart | Microsoft Docs' -description: This article describes how to get started with Azure Active Directory Seamless Single Sign-On + Title: 'Quickstart: Azure Active Directory Seamless single sign-on' +description: Learn how to get started with Azure Active Directory Seamless single sign-on by using Azure AD Connect. keywords: what is Azure AD Connect, install Active Directory, required components for Azure AD, SSO, Single Sign-on na-# Azure Active Directory Seamless Single Sign-On: Quickstart +# Quickstart: Azure Active Directory Seamless single sign-on -## Deploy Seamless Single Sign-On +Azure Active Directory (Azure AD) Seamless single sign-on (Seamless SSO) automatically signs in users when they're using their corporate desktops that are connected to your corporate network. Seamless SSO provides your users with easy access to your cloud-based applications without using any other on-premises components. -Azure Active Directory (Azure AD) Seamless Single Sign-On (Seamless SSO) automatically signs in users when they are on their corporate desktops that are connected to your corporate network. Seamless SSO provides your users with easy access to your cloud-based applications without needing any additional on-premises components. +To deploy Seamless SSO for Azure AD by using Azure AD Connect, complete the steps that are described in the following sections. -To deploy Seamless SSO, follow these steps. +<a name="step-1-check-the-prerequisites"></a> -## Step 1: Check the prerequisites +## Check the prerequisites Ensure that the following prerequisites are in place: -* **Set up your Azure AD Connect server**: If you use [Pass-through Authentication](how-to-connect-pta.md) as your sign-in method, no additional prerequisite check is required. If you use [password hash synchronization](how-to-connect-password-hash-synchronization.md) as your sign-in method, and if there is a firewall between Azure AD Connect and Azure AD, ensure that: - - You use version 1.1.644.0 or later of Azure AD Connect. - - If your firewall or proxy allows, add the connections to the allowed list for **\*.msappproxy.net** URLs over port 443. If you require a specific URL rather than a wildcard for proxy configuration, you can configure **tenantid.registration.msappproxy.net**, where tenantid is the GUID of the tenant where you are configuring the feature. If URL-based proxy exceptions are not possible in your organization, you can instead allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly. This prerequisite is applicable only when you enable the feature. It is not required for actual user sign-ins. +- **Set up your Azure AD Connect server**: If you use [pass-through authentication](how-to-connect-pta.md) as your sign-in method, no other prerequisite check is required. If you use [password hash synchronization](how-to-connect-password-hash-synchronization.md) as your sign-in method and there's a firewall between Azure AD Connect and Azure AD, ensure that: + - You use Azure AD Connect version 1.1.644.0 or later. + - If your firewall or proxy allows, add the connections to your allowlist for `*.msappproxy.net` URLs over port 443. 
If you require a specific URL instead of a wildcard for proxy configuration, you can configure `tenantid.registration.msappproxy.net`, where `tenantid` is the GUID of the tenant for which you're configuring the feature. If URL-based proxy exceptions aren't possible in your organization, you can instead allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly. This prerequisite is applicable only when you enable the Seamless SSO feature. It isn't required for direct user sign-ins. - >[!NOTE] - >Azure AD Connect versions 1.1.557.0, 1.1.558.0, 1.1.561.0, and 1.1.614.0 have a problem related to password hash synchronization. If you _don't_ intend to use password hash synchronization in conjunction with Pass-through Authentication, read the [Azure AD Connect release notes](./reference-connect-version-history.md) to learn more. - - >[!NOTE] - >If you have an outgoing HTTP proxy, make sure this URL, autologon.microsoftazuread-sso.com, is on the allowed list. You should specify this URL explicitly since wildcard may not be accepted. + > [!NOTE] + > + > - Azure AD Connect versions 1.1.557.0, 1.1.558.0, 1.1.561.0, and 1.1.614.0 have a problem related to password hash sync. If you *don't* intend to use password hash sync in conjunction with pass-through authentication, review the [Azure AD Connect release notes](./reference-connect-version-history.md) to learn more. + > - If you have an outgoing HTTP proxy, make sure that the URL `autologon.microsoftazuread-sso.com` is on your allowlist. You should specify this URL explicitly because the wildcard might not be accepted. -* **Use a supported Azure AD Connect topology**: Ensure that you are using one of Azure AD Connect's supported topologies described [here](plan-connect-topologies.md). +- **Use a supported Azure AD Connect topology**: Ensure that you're using one of the Azure AD Connect [supported topologies](plan-connect-topologies.md). - >[!NOTE] - >Seamless SSO supports multiple AD forests, whether there are AD trusts between them or not. + > [!NOTE] + > Seamless SSO supports multiple on-premises Windows Server Active Directory (Windows Server AD) forests, whether or not there are Windows Server AD trusts between them. -* **Set up domain administrator credentials**: You need to have domain administrator credentials for each Active Directory forest that: - * You synchronize to Azure AD through Azure AD Connect. - * Contains users you want to enable for Seamless SSO. - -* **Enable modern authentication**: You need to enable [modern authentication](/office365/enterprise/modern-auth-for-office-2013-and-2016) on your tenant for this feature to work. +- **Set up domain administrator credentials**: You must have domain administrator credentials for each Windows Server AD forest that: -* **Use the latest versions of Microsoft 365 clients**: To get a silent sign-on experience with Microsoft 365 clients (Outlook, Word, Excel, and others), your users need to use versions 16.0.8730.xxxx or above. + - You sync to Azure AD through Azure AD Connect. + - Contains users you want to enable Seamless SSO for. -## Step 2: Enable the feature +- **Enable modern authentication**: To use this feature, you must enable [modern authentication](/office365/enterprise/modern-auth-for-office-2013-and-2016) on your tenant. 
++- **Use the latest versions of Microsoft 365 clients**: To get a silent sign-on experience with Microsoft 365 clients (for example, with Outlook, Word, or Excel), your users must use versions 16.0.8730.xxxx or later. ++## Enable the feature Enable Seamless SSO through [Azure AD Connect](whatis-hybrid-identity.md). ->[!NOTE] -> You can also [enable Seamless SSO using PowerShell](tshoot-connect-sso.md#manual-reset-of-the-feature) if Azure AD Connect doesn't meet your requirements. Use this option if you have more than one domain per Active Directory forest, and you want to be more targeted about the domain you want to enable Seamless SSO for. +> [!NOTE] +> If Azure AD Connect doesn't meet your requirements, you can [enable Seamless SSO by using PowerShell](tshoot-connect-sso.md#manual-reset-of-the-feature). Use this option if you have more than one domain per Windows Server AD forest, and you want to target the domain to enable Seamless SSO for. -If you're doing a fresh installation of Azure AD Connect, choose the [custom installation path](how-to-connect-install-custom.md). At the **User sign-in** page, select the **Enable single sign on** option. +If you're doing a *fresh installation of Azure AD Connect*, choose the [custom installation path](how-to-connect-install-custom.md). On the **User sign-in** page, select the **Enable single sign on** option. ->[!NOTE] -> The option will be available for selection only if the Sign On method is **Password Hash Synchronization** or **Pass-through Authentication**. - +> [!NOTE] +> The option is available to select only if the sign-on method that's selected is **Password Hash Synchronization** or **Pass-through Authentication**. -If you already have an installation of Azure AD Connect, select the **Change user sign-in** page in Azure AD Connect, and then select **Next**. If you are using Azure AD Connect versions 1.1.880.0 or above, the **Enable single sign on** option will be selected by default. If you are using older versions of Azure AD Connect, select the **Enable single sign on** option. +If you *already have an installation of Azure AD Connect*, in **Additional tasks**, select **Change user sign-in**, and then select **Next**. If you're using Azure AD Connect versions 1.1.880.0 or later, the **Enable single sign on** option is selected by default. If you're using an earlier version of Azure AD Connect, select the **Enable single sign on** option. - -Continue through the wizard until you get to the **Enable single sign on** page. Provide domain administrator credentials for each Active Directory forest that: +Continue through the wizard to the **Enable single sign on** page. Provide Domain Administrator credentials for each Windows Server AD forest that: -* You synchronize to Azure AD through Azure AD Connect. -* Contains users you want to enable for Seamless SSO. +- You sync to Azure AD through Azure AD Connect. +- Contains users you want to enable Seamless SSO for. -After completion of the wizard, Seamless SSO is enabled on your tenant. +When you complete the wizard, Seamless SSO is enabled on your tenant. ->[!NOTE] -> The domain administrator credentials are not stored in Azure AD Connect or in Azure AD. They're used only to enable the feature. +> [!NOTE] +> The Domain Administrator credentials are not stored in Azure AD Connect or in Azure AD. They're used only to enable the feature. -Follow these instructions to verify that you have enabled Seamless SSO correctly: +To verify that you have enabled Seamless SSO correctly: -1. 
Sign in to the [Azure Active Directory administrative center](https://aad.portal.azure.com) with the Hybrid Identity Administrator or hybrid identity administrator credentials for your tenant. -2. Select **Azure Active Directory** in the left pane. -3. Select **Azure AD Connect**. -4. Verify that the **Seamless single sign-on** feature appears as **Enabled**. +1. Sign in to the [Azure Active Directory administrative center](https://aad.portal.azure.com) with the Hybrid Identity Administrator account credentials for your tenant. +1. In the left menu, select **Azure Active Directory**. +1. Select **Azure AD Connect**. +1. Verify that **Seamless single sign-on** is set to **Enabled**. - ->[!IMPORTANT] -> Seamless SSO creates a computer account named `AZUREADSSOACC` in your on-premises Active Directory (AD) in each AD forest. The `AZUREADSSOACC` computer account needs to be strongly protected for security reasons. Only Domain Admins should be able to manage the computer account. Ensure that Kerberos delegation on the computer account is disabled, and that no other account in Active Directory has delegation permissions on the `AZUREADSSOACC` computer account. Store the computer account in an Organization Unit (OU) where they are safe from accidental deletions and where only Domain Admins have access. +> [!IMPORTANT] +> Seamless SSO creates a computer account named `AZUREADSSOACC` in each Windows Server AD forest in your on-premises Windows Server AD directory. The `AZUREADSSOACC` computer account must be strongly protected for security reasons. Only Domain Administrator accounts should be allowed to manage the computer account. Ensure that Kerberos delegation on the computer account is disabled, and that no other account in Windows Server AD has delegation permissions on the `AZUREADSSOACC` computer account. Store the computer accounts in an organizational unit (OU) so that they're safe from accidental deletions and only Domain Administrators can access them. ->[!NOTE] -> If you are using Pass-the-Hash and Credential Theft Mitigation architectures in your on-premises environment, make appropriate changes to ensure that the `AZUREADSSOACC` computer account doesn't end up in the Quarantine container. +> [!NOTE] +> If you're using Pass-the-Hash and Credential Theft Mitigation architectures in your on-premises environment, make appropriate changes to ensure that the `AZUREADSSOACC` computer account doesn't end up in the Quarantine container. -## Step 3: Roll out the feature +<a name="step-3-roll-out-the-feature"></a> -You can gradually roll out Seamless SSO to your users using the instructions provided below. You start by adding the following Azure AD URL to all or selected users' Intranet zone settings by using Group Policy in Active Directory: +## Roll out the feature -- `https://autologon.microsoftazuread-sso.com`+You can gradually roll out Seamless SSO to your users by using the instructions provided in the next sections. You start by adding the following Azure AD URL to all or selected user intranet zone settings through Group Policy in Windows Server AD: -In addition, you need to enable an Intranet zone policy setting called **Allow updates to status bar via script** through Group Policy. +`https://autologon.microsoftazuread-sso.com` ->[!NOTE] -> The following instructions work only for Internet Explorer, Microsoft Edge, and Google Chrome on Windows (if it shares a set of trusted site URLs with Internet Explorer).
Read the next section for instructions on how to set up Mozilla Firefox and Google Chrome on macOS. +You also must enable an intranet zone policy setting called **Allow updates to status bar via script** through Group Policy. -### Why do you need to modify users' Intranet zone settings? +> [!NOTE] +> The following instructions work only for Internet Explorer, Microsoft Edge, and Google Chrome on Windows (if Google Chrome shares a set of trusted site URLs with Internet Explorer). Learn how to set up [Mozilla Firefox](#mozilla-firefox-all-platforms) and [Google Chrome on macOS](#google-chrome-all-platforms). -By default, the browser automatically calculates the correct zone, either Internet or Intranet, from a specific URL. For example, `http://contoso/` maps to the Intranet zone, whereas `http://intranet.contoso.com/` maps to the Internet zone (because the URL contains a period). Browsers will not send Kerberos tickets to a cloud endpoint, like the Azure AD URL, unless you explicitly add the URL to the browser's Intranet zone. +### Why you need to modify user intranet zone settings -There are two ways to modify users' Intranet zone settings: +By default, a browser automatically calculates the correct zone, either internet or intranet, from a specific URL. For example, `http://contoso/` maps to the *intranet* zone, and `http://intranet.contoso.com/` maps to the *internet* zone (because the URL contains a period). Browsers don't send Kerberos tickets to a cloud endpoint, like to the Azure AD URL, unless you explicitly add the URL to the browser's intranet zone. ++There are two ways you can modify user intranet zone settings: | Option | Admin consideration | User experience | | | | |-| Group policy | Admin locks down editing of Intranet zone settings | Users cannot modify their own settings | -| Group policy preference | Admin allows editing on Intranet zone settings | Users can modify their own settings | +| Group policy | Admin locks down editing of intranet zone settings | Users can't modify their own settings | +| Group policy preference | Admin allows editing of intranet zone settings | Users can modify their own settings | -### "Group policy" option - Detailed steps +### Group policy detailed steps 1. Open the Group Policy Management Editor tool.-2. Edit the group policy that's applied to some or all your users. This example uses **Default Domain Policy**. -3. Browse to **User Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Internet Explorer** > **Internet Control Panel** > **Security Page**. Then select **Site to Zone Assignment List**. -  -4. Enable the policy, and then enter the following values in the dialog box: +1. Edit the group policy that's applied to some or all your users. This example uses **Default Domain Policy**. +1. Go to **User Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Internet Explorer** > **Internet Control Panel** > **Security Page**. Select **Site to Zone Assignment List**. ++ :::image type="content" source="media/how-to-connect-sso-quick-start/sso6.png" alt-text="Screenshot that shows the Security Page with Site to Zone Assignment List selected."::: +1. Enable the policy, and then enter the following values in the dialog: + - **Value name**: The Azure AD URL where the Kerberos tickets are forwarded.- - **Value** (Data): **1** indicates the Intranet zone. + - **Value** (Data): **1** indicates the intranet zone. 
- The result looks like this: + The result looks like this example: Value name: `https://autologon.microsoftazuread-sso.com` Value (Data): 1 - >[!NOTE] - > If you want to disallow some users from using Seamless SSO (for instance, if these users sign in on shared kiosks), set the preceding values to **4**. This action adds the Azure AD URL to the Restricted zone, and fails Seamless SSO all the time. + > [!NOTE] + > If you want to prevent some users from using Seamless SSO (for instance, if these users sign in on shared kiosks), set the preceding values to **4**. This action adds the Azure AD URL to the restricted zone, and Seamless SSO always fails for those users. > -5. Select **OK**, and then select **OK** again. --  +1. Select **OK**, and then select **OK** again. -6. Browse to **User Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Internet Explorer** > **Internet Control Panel** > **Security Page** > **Intranet Zone**. Then select **Allow updates to status bar via script**. + :::image type="content" source="media/how-to-connect-sso-quick-start/sso7.png" alt-text="Screenshot that shows the Show Contents window with a zone assignment selected."::: -  +1. Go to **User Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Internet Explorer** > **Internet Control Panel** > **Security Page** > **Intranet Zone**. Select **Allow updates to status bar via script**. -7. Enable the policy setting, and then select **OK**. + :::image type="content" source="media/how-to-connect-sso-quick-start/sso11.png" alt-text="Screenshot that shows the Intranet Zone page with Allow updates to status bar via script selected." lightbox="media/how-to-connect-sso-quick-start/sso11.png"::: +1. Enable the policy setting, and then select **OK**. -  + :::image type="content" source="media/how-to-connect-sso-quick-start/sso12.png" alt-text="Screenshot that shows the Allow updates to status bar via script window with the policy setting enabled."::: -### "Group policy preference" option - Detailed steps +### Group policy preference detailed steps 1. Open the Group Policy Management Editor tool.-2. Edit the group policy that's applied to some or all your users. This example uses **Default Domain Policy**. -3. Browse to **User Configuration** > **Preferences** > **Windows Settings** > **Registry** > **New** > **Registry item**. +1. Edit the group policy that's applied to some or all your users. This example uses **Default Domain Policy**. +1. Go to **User Configuration** > **Preferences** > **Windows Settings** > **Registry** > **New** > **Registry item**. ++ :::image type="content" source="media/how-to-connect-sso-quick-start/sso15.png" alt-text="Screenshot that shows Registry selected and Registry Item selected."::: +1. Enter or select the following values as demonstrated, and then select **OK**. + - **Key Path**: Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\microsoftazuread-sso.com\autologon + - **Value name**: https + - **Value type**: REG_DWORD + - **Value data**: 00000001 -  4. Enter the following values in appropriate fields and click **OK**.
- - **Key Path**: ***Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\microsoftazuread-sso.com\autologon*** - - **Value name**: ***https*** - - **Value type**: ***REG_DWORD*** - - **Value data**: ***00000001*** - -  - -  + :::image type="content" source="media/how-to-connect-sso-quick-start/sso16.png" alt-text="Screenshot that shows the New Registry Properties window."::: ++ :::image type="content" source="media/how-to-connect-sso-quick-start/sso17.png" alt-text="Screenshot that shows the new values listed in Registry Editor."::: ### Browser considerations +The next sections have information about Seamless SSO that's specific to different types of browsers. + #### Mozilla Firefox (all platforms) -If you are using the [Authentication](https://github.com/mozill#authentication) policy settings in your environment, ensure that you add Azure AD's URL (`https://autologon.microsoftazuread-sso.com`) to the **SPNEGO** section. You can also set the **PrivateBrowsing** option to true to allow seamless SSO in private browsing mode. +If you're using the [Authentication](https://github.com/mozill#authentication) policy settings in your environment, ensure that you add the Azure AD URL (`https://autologon.microsoftazuread-sso.com`) to the **SPNEGO** section. You can also set the **PrivateBrowsing** option to **true** to allow Seamless SSO in private browsing mode. #### Safari (macOS) -Ensure that the machine running the macOS is joined to AD. Instructions for AD-joining your macOS device is outside the scope of this article. +Ensure that the machine running the macOS is joined to Windows Server AD. ++Instructions for joining your macOS device to Windows Server AD are outside the scope of this article. #### Microsoft Edge based on Chromium (all platforms) -If you have overridden the [AuthNegotiateDelegateAllowlist](/DeployEdge/microsoft-edge-policies#authnegotiatedelegateallowlist) or the [AuthServerAllowlist](/DeployEdge/microsoft-edge-policies#authserverallowlist) policy settings in your environment, ensure that you add Azure AD's URL (`https://autologon.microsoftazuread-sso.com`) to them as well. +If you've overridden the [AuthNegotiateDelegateAllowlist](/DeployEdge/microsoft-edge-policies#authnegotiatedelegateallowlist) or [AuthServerAllowlist](/DeployEdge/microsoft-edge-policies#authserverallowlist) policy settings in your environment, ensure that you also add the Azure AD URL (`https://autologon.microsoftazuread-sso.com`) to these policy settings. #### Microsoft Edge based on Chromium (macOS and other non-Windows platforms) -For Microsoft Edge based on Chromium on macOS and other non-Windows platforms, refer to [the Microsoft Edge based on Chromium Policy List](/DeployEdge/microsoft-edge-policies#authserverallowlist) for information on how to add the Azure AD URL for integrated authentication to your allow-list. +For Microsoft Edge based on Chromium on macOS and other non-Windows platforms, see the [Microsoft Edge based on Chromium Policy List](/DeployEdge/microsoft-edge-policies#authserverallowlist) for information on how to add the Azure AD URL for integrated authentication to your allowlist. 
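If you manage these Chromium policies directly in the registry rather than through Group Policy, a minimal sketch follows. It assumes the documented Microsoft Edge policy location under `HKLM\SOFTWARE\Policies\Microsoft\Edge`; `AuthServerAllowlist` takes comma-separated server name patterns, so merge with any existing value instead of overwriting it as this example does:

```powershell
# Minimal sketch, assuming the documented Microsoft Edge policy registry
# location. This overwrites AuthServerAllowlist; merge your existing
# patterns first if you already maintain an allowlist.
$edgePolicy = 'HKLM:\SOFTWARE\Policies\Microsoft\Edge'
New-Item -Path $edgePolicy -Force | Out-Null
New-ItemProperty -Path $edgePolicy -Name 'AuthServerAllowlist' -PropertyType String `
    -Value 'autologon.microsoftazuread-sso.com' -Force | Out-Null
```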
#### Google Chrome (all platforms) -If you have overridden the [AuthNegotiateDelegateAllowlist](https://chromeenterprise.google/policies/#AuthNegotiateDelegateAllowlist) or the [AuthServerAllowlist](https://chromeenterprise.google/policies/#AuthServerAllowlist) policy settings in your environment, ensure that you add Azure AD's URL (`https://autologon.microsoftazuread-sso.com`) to them as well. +If you've overridden the [AuthNegotiateDelegateAllowlist](https://chromeenterprise.google/policies/#AuthNegotiateDelegateAllowlist) or [AuthServerAllowlist](https://chromeenterprise.google/policies/#AuthServerAllowlist) policy settings in your environment, ensure that you also add the Azure AD URL (`https://autologon.microsoftazuread-sso.com`) to these policy settings. #### macOS -The use of third-party Active Directory Group Policy extensions to roll out the Azure AD URL to Firefox and Google Chrome to macOS users is outside the scope of this article. +The use of third-party Active Directory Group Policy extensions to roll out the Azure AD URL to Firefox and Google Chrome for macOS users is outside the scope of this article. #### Known browser limitations -Seamless SSO doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode. Seamless SSO supports the next version of Microsoft Edge based on Chromium and it works in InPrivate and Guest mode by design. Microsoft Edge (legacy) is no longer supported. +Seamless SSO doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode. Seamless SSO supports the next version of Microsoft Edge based on Chromium, and it works in InPrivate and Guest mode by design. Microsoft Edge (legacy) is no longer supported. ++You might need to configure `AmbientAuthenticationInPrivateModesEnabled` for InPrivate or guest users based on the corresponding documentation: - `AmbientAuthenticationInPrivateModesEnabled`may need to be configured for InPrivate and / or guest users based on the corresponding documentations: - - - [Microsoft Edge Chromium](/DeployEdge/microsoft-edge-policies#ambientauthenticationinprivatemodesenabled) - - [Google Chrome](https://chromeenterprise.google/policies/?policy=AmbientAuthenticationInPrivateModesEnabled) +- [Microsoft Edge Chromium](/DeployEdge/microsoft-edge-policies#ambientauthenticationinprivatemodesenabled) +- [Google Chrome](https://chromeenterprise.google/policies/?policy=AmbientAuthenticationInPrivateModesEnabled) -## Step 4: Test the feature +## Test Seamless SSO To test the feature for a specific user, ensure that all the following conditions are in place:- - The user signs in on a corporate device. - - The device is joined to your Active Directory domain. The device _doesn't_ need to be [Azure AD Joined](../devices/overview.md). - - The device has a direct connection to your domain controller (DC), either on the corporate wired or wireless network or via a remote access connection, such as a VPN connection. - - You have [rolled out the feature](#step-3-roll-out-the-feature) to this user through Group Policy. -To test the scenario where the user enters only the username, but not the password: - - Sign in to https://myapps.microsoft.com/. Be sure to either clear the browser cache or use a new private browser session with any of the supported browsers in private mode. +- The user signs in on a corporate device. +- The device is joined to your Windows Server AD domain. The device *doesn't* need to be [Azure AD Joined](../devices/overview.md). 
+- The device has a direct connection to your domain controller, either on the corporate wired or wireless network or via a remote access connection, such as a VPN connection. +- You've [rolled out the feature](#roll-out-the-feature) to this user through Group Policy. ++To test a scenario in which the user enters a username, but not a password: ++- Sign in to [https://myapps.microsoft.com](https://myapps.microsoft.com/). Be sure to either clear the browser cache or use a new private browser session with any of the supported browsers in private mode. ++To test a scenario in which the user doesn't have to enter a username or password, use one of these steps: -- Sign in to `https://myapps.microsoft.com/contoso.onmicrosoft.com` Be sure to either clear the browser cache or use a new private browser session with any of the supported browsers in private mode. Replace *contoso* with your tenant's name. - - Sign in to `https://myapps.microsoft.com/contoso.com` in a new private browser session. Replace *contoso.com* with a verified domain (not a federated domain) on your tenant. +- Sign in to `https://myapps.microsoft.com/contoso.onmicrosoft.com`. Be sure to either clear the browser cache or use a new private browser session with any of the supported browsers in private mode. Replace `contoso` with your tenant name. +- Sign in to `https://myapps.microsoft.com/contoso.com` in a new private browser session. Replace `contoso.com` with a verified domain (not a federated domain) on your tenant. -## Step 5: Roll over keys +## Roll over keys -In Step 2, Azure AD Connect creates computer accounts (representing Azure AD) in all the Active Directory forests on which you have enabled Seamless SSO. To learn more, see [Azure Active Directory Seamless Single Sign-On: Technical deep dive](how-to-connect-sso-how-it-works.md). +In [Enable the feature](#enable-the-feature), Azure AD Connect creates computer accounts (representing Azure AD) in all the Windows Server AD forests on which you enabled Seamless SSO. To learn more, see [Azure Active Directory Seamless single sign-on: Technical deep dive](how-to-connect-sso-how-it-works.md). ->[!IMPORTANT] ->The Kerberos decryption key on a computer account, if leaked, can be used to generate Kerberos tickets for any user in its AD forest. Malicious actors can then impersonate Azure AD sign-ins for compromised users. We highly recommend that you periodically roll over these Kerberos decryption keys - at least once every 30 days. +> [!IMPORTANT] +> The Kerberos decryption key on a computer account, if leaked, can be used to generate Kerberos tickets for any user in its Windows Server AD forest. Malicious actors can then impersonate Azure AD sign-ins for compromised users. We highly recommend that you periodically roll over these Kerberos decryption keys, at least once every 30 days. -For instructions on how to roll over keys, see [Azure Active Directory Seamless Single Sign-On: Frequently asked questions](how-to-connect-sso-faq.yml). +For instructions on how to roll over keys, see [Azure Active Directory Seamless single sign-on: Frequently asked questions](how-to-connect-sso-faq.yml). ->[!IMPORTANT] ->You don't need to do this step _immediately_ after you have enabled the feature.
Roll over the Kerberos decryption keys at least once every 30 days. ## Next steps -- [Technical deep dive](how-to-connect-sso-how-it-works.md): Understand how the Seamless Single Sign-On feature works.-- [Frequently asked questions](how-to-connect-sso-faq.yml): Get answers to frequently asked questions about Seamless Single Sign-On.-- [Troubleshoot](tshoot-connect-sso.md): Learn how to resolve common problems with the Seamless Single Sign-On feature.+- [Technical deep dive](how-to-connect-sso-how-it-works.md): Understand how the Seamless single sign-on feature works. +- [Frequently asked questions](how-to-connect-sso-faq.yml): Get answers to frequently asked questions about Seamless single sign-on. +- [Troubleshoot](tshoot-connect-sso.md): Learn how to resolve common problems with the Seamless single sign-on feature. - [UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789): Use the Azure Active Directory Forum to file new feature requests. |
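As a complement to the `AZUREADSSOACC` hardening guidance in the article above, a minimal sketch (assuming the RSAT ActiveDirectory module is installed) can confirm that Kerberos delegation is disabled on the computer account:

```powershell
# Minimal sketch, assuming the RSAT ActiveDirectory module. Per the guidance
# above, both delegation flags should report False.
Import-Module ActiveDirectory
Get-ADComputer -Identity 'AZUREADSSOACC' `
    -Properties TrustedForDelegation, TrustedToAuthForDelegation |
    Select-Object Name, TrustedForDelegation, TrustedToAuthForDelegation
```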
active-directory | How To Connect Sync Configure Filtering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-configure-filtering.md | The filtering configuration is retained when you install or upgrade to a newer v If you have more than one forest, then you must apply the filtering configurations that are described in this topic to every forest (assuming that you want the same configuration for all of them). -### Disable the scheduled task +### Disable the synchronization scheduler To disable the built-in scheduler that triggers a synchronization cycle every 30 minutes, follow these steps: -1. Go to a PowerShell prompt. -2. Run `Set-ADSyncScheduler -SyncCycleEnabled $False` to disable the scheduler. -3. Make the changes that are documented in this article. -4. Run `Set-ADSyncScheduler -SyncCycleEnabled $True` to enable the scheduler again. +1. Open Windows PowerShell, import the ADSync module, and disable the scheduler by using the following commands: -**If you use an Azure AD Connect build before 1.1.105.0** -To disable the scheduled task that triggers a synchronization cycle every three hours, follow these steps: +```powershell +Import-Module ADSync +Set-ADSyncScheduler -SyncCycleEnabled $False +``` -1. Start **Task Scheduler** from the **Start** menu. -2. Directly under **Task Scheduler Library**, find the task named **Azure AD Sync Scheduler**, right-click, and select **Disable**. -  -3. You can now make configuration changes and run the sync engine manually from the **Synchronization Service Manager** console. +2. Make the changes that are documented in this article. Then re-enable the scheduler with the following command: -After you've completed all your filtering changes, don't forget to come back and **Enable** the task again. +```powershell +Set-ADSyncScheduler -SyncCycleEnabled $True +``` ## Filtering options You can apply the following filtering configuration types to the directory synchronization tool: |
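To confirm that the scheduler change above took effect, you can query the scheduler state with `Get-ADSyncScheduler`, which ships with Azure AD Connect alongside `Set-ADSyncScheduler`; the property names below reflect its standard output:

```powershell
# Check whether the sync scheduler is enabled and when the next cycle is due.
# Requires the ADSync module that's installed with Azure AD Connect.
Import-Module ADSync
Get-ADSyncScheduler | Select-Object SyncCycleEnabled, NextSyncCycleStartTimeInUTC
```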
active-directory | How To Dirsync Upgrade Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-dirsync-upgrade-get-started.md | Title: 'Azure AD Connect: Upgrade from DirSync | Microsoft Docs' -description: Learn how to upgrade from DirSync to Azure AD Connect. This articles describes the steps for upgrading from DirSync to Azure AD Connect. + Title: 'Azure AD Connect: Upgrade from DirSync' +description: Learn how to upgrade from DirSync to Azure AD Connect. This article describes the steps for upgrading from DirSync to Azure AD Connect. editor: '' na-Azure AD Connect is the successor to DirSync. You find the ways you can upgrade from DirSync in this topic. These steps do not work for upgrading from another release of Azure AD Connect or from Azure AD Sync. -DirSync and Azure AD Sync are not supported and will no longer work. If you are still using these you MUST upgrade to AADConnect to resume your sync process. +Azure AD Connect is the successor of DirSync. In this article, learn how to upgrade to Azure AD Connect from DirSync. The steps described in this article don't work for upgrading from a different version of Azure AD Connect or from Azure Active Directory (Azure AD) Sync. -Before you start installing Azure AD Connect, make sure to [download Azure AD Connect](https://go.microsoft.com/fwlink/?LinkId=615771) and complete the pre-requisite steps in [Azure AD Connect: Hardware and prerequisites](how-to-connect-install-prerequisites.md). In particular, you want to read about the following, since these areas are different from DirSync: +DirSync and Azure AD Sync aren't supported and no longer work. If you're still using DirSync or Azure AD Sync, you *must* upgrade to Azure AD Connect to resume your sync process. -* The required version of .NET and PowerShell. Newer versions are required to be on the server than what DirSync needed. -* The proxy server configuration. If you use a proxy server to reach the internet, this setting must be configured before you upgrade. DirSync always used the proxy server configured for the user installing it, but Azure AD Connect uses machine settings instead. -* The URLs required to be open in the proxy server. For basic scenarios, those scenarios also supported by DirSync, the requirements are the same. If you want to use any of the new features included with Azure AD Connect, some new URLs must be opened. +Before you start installing Azure AD Connect, make sure you [download Azure AD Connect](https://go.microsoft.com/fwlink/?LinkId=615771) and complete the prerequisite steps described in [Azure AD Connect: Hardware and prerequisites](how-to-connect-install-prerequisites.md). Pay special attention to the following requirements for Azure AD Connect because they're different from DirSync: > [!NOTE] > Once you have enabled your new Azure AD Connect server to start synchronizing changes to Azure AD, you must not roll back to using DirSync or Azure AD Sync. Downgrading from Azure AD Connect to legacy clients including DirSync and Azure AD Sync is not supported and can lead to issues such as data loss in Azure AD. +- **Required versions of .NET and PowerShell**: Newer versions than what DirSync required must be on the server for Azure AD Connect. +- **Proxy server configuration**: If you use a proxy server to reach the internet, this setting must be configured before you upgrade.
DirSync always used the proxy server that was configured for the user who installed it, but Azure AD Connect uses machine settings instead. +- **URLs required to be open in the proxy server**: For basic scenarios that were also supported by DirSync, the requirements are the same. If you want to use any of the new features in Azure AD Connect, some new URLs must be opened. ++> [!WARNING] +> After you have enabled your new Azure AD Connect server to start syncing changes to Azure AD, you must not roll back to using DirSync or Azure AD Sync. Downgrading from Azure AD Connect to legacy clients, including DirSync and Azure AD Sync, is not supported and can lead to issues like data loss in Azure AD. -If you are not upgrading from DirSync, see related documentation for other scenarios. +If you aren't upgrading from DirSync, see related documentation for other scenarios. ## Upgrade from DirSync-Depending on your current DirSync deployment, there are different options for the upgrade. If the expected upgrade time is less than three hours, then the recommendation is to do an in-place upgrade. If the expected upgrade time is more than three hours, then the recommendation is to do a parallel deployment on another server. It is estimated that if you have more than 50,000 objects it takes more than three hours to do the upgrade. -| Scenario | -| | -| [In-place upgrade](#in-place-upgrade) | -| [Parallel deployment](#parallel-deployment) | +Depending on your current DirSync deployment, you have different options for the upgrade. If the expected upgrade time is less than three hours, then we recommend that you do an in-place upgrade. If the expected upgrade time is more than three hours, then we recommend that you do a parallel deployment on a separate server. We estimate that if you have 50,000 or more objects, it takes more than three hours to do the upgrade. ++The upgrade scenarios are summarized in the following table: ++| Expected upgrade time | Number of objects | Upgrade option to use | +|-|-|-| +| Less than three hours | Fewer than 50,000 | [In-place upgrade](#in-place-upgrade) | +| More than three hours | 50,000 or more | [Parallel deployment](#parallel-deployment) | > [!NOTE]-> When you plan to upgrade from DirSync to Azure AD Connect, do not uninstall DirSync yourself before the upgrade. Azure AD Connect will read and migrate the configuration from DirSync and uninstall after inspecting the server. +> When you plan to upgrade from DirSync to Azure AD Connect, do not uninstall DirSync yourself before the upgrade. Azure AD Connect will read and migrate the configuration from DirSync and uninstall it after it inspects the server. -**In-place upgrade** -The expected time to complete the upgrade is displayed by the wizard. This estimate is based on the assumption that it takes three hours to complete an upgrade for a database with 50,000 objects (users, contacts, and groups). If the number of objects in your database is less than 50,000, then Azure AD Connect recommends an in-place upgrade. If you decide to continue, your current settings are automatically applied during upgrade and your server automatically resumes active synchronization. +- **In-place upgrade**. The wizard displays the expected time to complete the upgrade. This estimate is based on the assumption that it takes three hours to complete an upgrade for a database with 50,000 objects (users, contacts, and groups). If the number of objects in your database is fewer than 50,000, then Azure AD Connect recommends an in-place upgrade. 
If you decide to continue, your current settings are automatically applied during upgrade and your server automatically resumes active sync. -If you want to do a configuration migration and do a parallel deployment, then you can override the in-place upgrade recommendation. You might for example take the opportunity to refresh the hardware and operating system. For more information, see the [parallel deployment](#parallel-deployment) section. + If you want to do a configuration migration *and* do a parallel deployment, you can override the in-place upgrade recommendation. For example, you might use the upgrade as an opportunity to refresh the hardware and operating system. For more information, see [Parallel deployment](#parallel-deployment). +- **Parallel deployment**. If you have 50,000 or more objects, then we recommend a parallel deployment. This type of deployment avoids any operational delays for your users. The Azure AD Connect installation attempts to estimate the downtime for the upgrade, but if you've upgraded DirSync in the past, your own experience is likely to be the best guide for how long the upgrade will take. -**Parallel deployment** -If you have more than 50,000 objects, then a parallel deployment is recommended. This deployment avoids any operational delays experienced by your users. The Azure AD Connect installation attempts to estimate the downtime for the upgrade, but if you've upgraded DirSync in the past, your own experience is likely to be the best guide. +### DirSync configurations supported for upgrade -### Supported DirSync configurations to be upgraded -The following configuration changes are supported with upgraded DirSync: +The following configuration changes are supported for upgrading from DirSync: -* Domain and OU filtering -* Alternate ID (UPN) -* Password sync and Exchange hybrid settings -* Your forest/domain and Azure AD settings -* Filtering based on user attributes +- Domain and organizational unit (OU) filtering +- Alternate ID (UPN) +- Password sync and Exchange hybrid settings +- Your forest, domain, and Azure AD settings +- Filtering based on user attributes -The following change cannot be upgraded. If you have this configuration, the upgrade is blocked: +The following change can't be upgraded. If you have this configuration, the upgrade is blocked: -* Unsupported DirSync changes, for example removed attributes and using a custom extension DLL +- Unsupported DirSync changes, for example, removed attributes and using a custom extension DLL - + :::image type="content" source="media/how-to-dirsync-upgrade-get-started/analysisblocked.png" alt-text="Screenshot that shows that the upgrade is blocked because of DirSync configurations."::: -In those cases, the recommendation is to install a new Azure AD Connect server in [staging mode](how-to-connect-sync-staging-server.md) and verify the old DirSync and new Azure AD Connect configuration. Reapply any changes using custom configuration, as described in [Azure AD Connect Sync custom configuration](how-to-connect-sync-whatis.md). + In unsupported upgrade scenarios, we recommend that you install a new Azure AD Connect server in [staging mode](how-to-connect-sync-staging-server.md) and verify the old DirSync and new Azure AD Connect configurations. Reapply any changes by using custom configuration as described in [Azure AD Connect Sync custom configuration](how-to-connect-sync-whatis.md). -The passwords used by DirSync for the service accounts cannot be retrieved and are not migrated.
These passwords are reset during the upgrade. +The passwords that DirSync uses for the service accounts can't be retrieved and they aren't migrated. These passwords are reset during the upgrade. ### High-level steps for upgrading from DirSync to Azure AD Connect+ 1. Welcome to Azure AD Connect-2. Analysis of current DirSync configuration -3. Collect Azure AD Hybrid Identity Administrator password -4. Collect credentials for an enterprise admin account (only used during the installation of Azure AD Connect) -5. Installation of Azure AD Connect - * Uninstall DirSync (or temporarily disable it) - * Install Azure AD Connect - * Optionally begin synchronization +1. Analysis of current DirSync configuration +1. Collect the Azure AD Hybrid Identity Administrator account password +1. Collect credentials for an Enterprise Admins account (used only during installation of Azure AD Connect) +1. Installation of Azure AD Connect: + 1. Uninstall DirSync (or temporarily disable it) + 1. Install Azure AD Connect + 1. Optionally begin sync -Additional steps are required when: +More steps are required when: -* You're currently using Full SQL Server - local or remote -* You have more than 50,000 objects in scope for synchronization +- You're currently using the full version of SQL Server, whether local or remote. +- You have 50,000 or more objects in scope for synchronization. ## In-place upgrade-1. Launch the Azure AD Connect installer (MSI). -2. Review and agree to license terms and privacy notice. -  -3. Click next to begin analysis of your existing DirSync installation. -  -4. When the analysis completes, you see the recommendations on how to proceed. - * If you use SQL Server Express and have less than 50,000 objects, the following screen is shown: -  - * If you use a full SQL Server for DirSync, you see this page instead: -  - The information regarding the existing SQL Server database server being used by DirSync is displayed. Make appropriate adjustments if needed. Click **Next** to continue the installation. - * If you have more than 50,000 objects, you see this screen instead: -  - To proceed with an in-place upgrade, click the checkbox next to this message: **Continue upgrading DirSync on this computer.** - To do a [parallel deployment](#parallel-deployment) instead, you export the DirSync configuration settings and move the configuration to the new server. -5. Provide the password for the account you currently use to connect to Azure AD. This must be the account currently used by DirSync. -  - If you receive an error and have problems with connectivity, see [Troubleshoot connectivity problems](tshoot-connect-connectivity.md). -6. Provide an enterprise admin account for Active Directory. -  -7. You're now ready to configure. When you click **Upgrade**, DirSync is uninstalled and Azure AD Connect is configured and begins synchronizing. -  -8. After the installation has completed, sign out and sign in again to Windows before you use Synchronization Service Manager, Synchronization Rule Editor, or try to make any other configuration changes. ++To do an in-place upgrade: ++1. Open the Azure AD Connect installer (an MSI file). +1. Review and agree to the license terms and privacy notice. ++ :::image type="content" source="media/how-to-dirsync-upgrade-get-started/welcome.png" alt-text="Screenshot that shows the Welcome to Azure AD Connect page."::: ++1. Select **Next** to begin analysis of your existing DirSync installation. 
++ :::image type="content" source="media/how-to-dirsync-upgrade-get-started/analyze.png" alt-text="Screenshot that shows Azure AD Connect when it's analyzing an existing DirSync installation."::: ++1. When the analysis is finished, recommendations for how to proceed are shown. ++ - If you use SQL Server Express and have fewer than 50,000 objects, this page is shown: ++ :::image type="content" source="media/how-to-dirsync-upgrade-get-started/analysisready.png" alt-text="Screenshot that shows the analysis completed and you're ready to upgrade from DirSync."::: ++ - If you use a full version of SQL Server for DirSync, this page is shown: ++ :::image type="content" source="media/how-to-dirsync-upgrade-get-started/analysisreadyfullsql.png" alt-text="Screenshot that shows the existing SQL database server that's being used."::: ++ Information about the existing SQL Server database server that DirSync is using is shown. Make adjustments if needed. Select **Next** to continue the installation. ++ - If you have 50,000 or more objects, this page is shown: ++ :::image type="content" source="media/how-to-dirsync-upgrade-get-started/analysisrecommendparallel.png" alt-text="Screenshot that shows the page you see when you have 50,000 or more objects to upgrade."::: ++ To proceed with an in-place upgrade, select the **Continue upgrading DirSync on this computer** checkbox. ++ To do a [parallel deployment](#parallel-deployment), export the DirSync configuration settings and move the configuration to the new server. ++1. Enter the password for the account you currently use to connect to Azure AD. This must be the account that DirSync uses. ++ :::image type="content" source="media/how-to-dirsync-upgrade-get-started/connecttoazuread.png" alt-text="Screenshot that shows where you enter your Azure AD credentials."::: ++ If an error message appears or if you have problems with connectivity, see [Troubleshoot connectivity problems](tshoot-connect-connectivity.md). ++1. Enter an Enterprise Admins account for Active Directory Domain Services (AD DS). ++ :::image type="content" source="media/how-to-dirsync-upgrade-get-started/connecttoadds.png" alt-text="Screenshot that shows where you enter your AD DS credentials."::: ++1. You're now ready to configure. When you select **Upgrade**, DirSync is uninstalled and Azure AD Connect is configured and begins syncing. ++ :::image type="content" source="media/how-to-dirsync-upgrade-get-started/readytoconfigure.png" alt-text="Screenshot that shows the Ready to configure page."::: ++1. When installation is finished, sign out of Windows and then sign in again before you use Synchronization Service Manager or Synchronization Rule Editor, or before you try to make any other configuration changes. ## Parallel deployment++To use parallel deployment to upgrade, complete the following tasks. + ### Export the DirSync configuration-**Parallel deployment with more than 50,000 objects** -If you have more than 50,000 objects, then the Azure AD Connect installation recommends a parallel deployment. +**Parallel deployment with 50,000 or more objects** ++If you have 50,000 or more objects, the Azure AD Connect installation wizard recommends a parallel deployment. ++A page similar to the following example appears: +++If you want to proceed with parallel deployment, complete the following steps: ++- Select **Export settings**.
When you install Azure AD Connect on a separate server, these settings are migrated from your current DirSync instance to your new Azure AD Connect installation. ++After your settings are successfully exported, you can exit the Azure AD Connect wizard on the DirSync server. Continue with the next step to install Azure AD Connect on a separate server. ++**Parallel deployment with fewer than 50,000 objects** ++If you have fewer than 50,000 objects, but you still want to do a parallel deployment: ++1. Run the Azure AD Connect installer. ++1. In **Welcome to Azure AD Connect**, exit the installation wizard by selecting the "X" in the top-right corner of the window. ++1. Open a Command Prompt window. ++1. In the installation location of Azure AD Connect (the default is *C:\Program Files\Microsoft Azure Active Directory Connect*), run the following command: ++ `AzureADConnect.exe /ForceExport` ++1. Select **Export settings**. When you install Azure AD Connect on a separate server, these settings are migrated from your current DirSync instance to your new Azure AD Connect installation. -A screen similar to the following is displayed: - + :::image type="content" source="media/how-to-dirsync-upgrade-get-started/forceexport.png" alt-text="Screenshot that shows the Export settings option for migrating your settings to the new Azure AD Connect installation."::: -If you want to proceed with parallel deployment, you need to perform the following steps: +After your settings are successfully exported, you can exit the Azure AD Connect wizard on the DirSync server. Continue with the next step to install Azure AD Connect on a separate server. -* Click the **Export settings** button. When you install Azure AD Connect on a separate server, these settings are migrated from your current DirSync to your new Azure AD Connect installation. +### Install Azure AD Connect on a separate server -Once your settings have been successfully exported, you can exit the Azure AD Connect wizard on the DirSync server. Continue with the next step to install Azure AD Connect on a separate server +When you install Azure AD Connect on a new server, the assumption is that you want to perform a clean install of Azure AD Connect. To use the DirSync configuration, there are some extra steps to take: -**Parallel deployment with less than 50,000 objects** +1. Run the Azure AD Connect installer. -If you have less than 50,000 objects but still want to do a parallel deployment, then do the following: +1. In **Welcome to Azure AD Connect**, exit the installation wizard by selecting the "X" in the top-right corner of the window. -1. Run the Azure AD Connect installer (MSI). -2. When you see the **Welcome to Azure AD Connect** screen, exit the installation wizard by clicking the "X" in the top right corner of the window. -3. Open a command prompt. -4. From the install location of Azure AD Connect (Default: C:\Program Files\Microsoft Azure Active Directory Connect) execute the following command: - `AzureADConnect.exe /ForceExport`. -5. Click the **Export settings** button. When you install Azure AD Connect on a separate server, these settings are migrated from your current DirSync to your new Azure AD Connect installation. +1. Open a Command Prompt window. - +1. In the installation location of Azure AD Connect (the default is *C:\Program Files\Microsoft Azure Active Directory Connect*), run the following command: -Once your settings have been successfully exported, you can exit the Azure AD Connect wizard on the DirSync server. 
Continue with the next step to install Azure AD Connect on a separate server. + `AzureADConnect.exe /migrate` -### Install Azure AD Connect on separate server -When you install Azure AD Connect on a new server, the assumption is that you want to perform a clean install of Azure AD Connect. Since you want to use the DirSync configuration, there are some extra steps to take: + The Azure AD Connect installation wizard starts and the following page appears: -1. Run the Azure AD Connect installer (MSI). -2. When you see the **Welcome to Azure AD Connect** screen, exit the installation wizard by clicking the "X" in the top right corner of the window. -3. Open a command prompt. -4. From the install location of Azure AD Connect (Default: C:\Program Files\Microsoft Azure Active Directory Connect) execute the following command: - `AzureADConnect.exe /migrate`. - The Azure AD Connect installation wizard starts and presents you with the following screen: -  -5. Select the settings file that exported from your DirSync installation. -6. Configure any advanced options including: - * A custom installation location for Azure AD Connect. - * An existing instance of SQL Server (Default: Azure AD Connect installs SQL Server 2019 Express). Do not use the same database instance as your DirSync server. - * A service account used to connect to SQL Server (If your SQL Server database is remote then this account must be a domain service account). - These options can be seen on this screen: -  -7. Click **Next**. -8. On the **Ready to configure** page, leave the **Start the synchronization process as soon as the configuration completes** checked. The server is now in [staging mode](how-to-connect-sync-staging-server.md) so changes are not exported to Azure AD. -9. Click **Install**. -10. After the installation has completed, sign out and sign in again to Windows before you use Synchronization Service Manager, Synchronization Rule Editor, or try to make any other configuration changes. + :::image type="content" source="media/how-to-dirsync-upgrade-get-started/importsettings.png" alt-text="Screenshot that shows where to import the settings file when you upgrade."::: ++1. Select the settings file that you exported from your DirSync installation. ++1. Configure any advanced options, including: ++ - A custom installation location for Azure AD Connect. + - An existing instance of SQL Server (by default, Azure AD Connect installs SQL Server 2019 Express). Don't use the same database instance your DirSync server uses. + - A service account that's used to connect to SQL Server. (If your SQL Server database is remote, this account must be a domain service account.) + + The following figure shows other options that are on this page: ++ :::image type="content" source="media/how-to-dirsync-upgrade-get-started/advancedsettings.png" alt-text="Screenshot that shows the advanced configuration options for upgrading from DirSync."::: ++1. Select **Next**. ++1. In **Ready to configure**, leave the **Start the synchronization process as soon as the configuration completes** option selected. The server is now in [staging mode](how-to-connect-sync-staging-server.md), so changes aren't exported to Azure AD. ++1. Select **Install**. ++1. When installation is finished, sign out of Windows and then sign in again before you use Synchronization Service Manager or Synchronization Rule Editor, or before you try to make any other configuration changes.
> [!NOTE]-> Synchronization between Windows Server Active Directory and Azure Active Directory begins, but no changes are exported to Azure AD. Only one synchronization tool can be actively exporting changes at a time. This state is called [staging mode](how-to-connect-sync-staging-server.md). +> At this point, sync between on-premises Windows Server Active Directory (Windows Server AD) and Azure AD begins, but no changes are exported to Azure AD. Only one sync tool at a time can actively export changes. This state is called [staging mode](how-to-connect-sync-staging-server.md). ++### Verify that Azure AD Connect is ready to begin sync -### Verify that Azure AD Connect is ready to begin synchronization -To verify that Azure AD Connect is ready to take over from DirSync, you need to open **Synchronization Service Manager** in the group **Azure AD Connect** from the start menu. +To verify that Azure AD Connect is ready to take over from DirSync, on the Start menu, select **Azure AD Connect** > **Synchronization Service Manager**. -In the application, go to the **Operations** tab. On this tab, confirm that the following operations have completed: +In the application, go to the **Operations** tab. On this tab, confirm that the following operations show successful completion: -* Import on the AD Connector -* Import on the Azure AD Connector -* Full Sync on the AD Connector -* Full Sync on the Azure AD Connector +- **Full Import** on the Windows Server AD connector +- **Full Import** on the Azure AD connector +- **Full Synchronization** on the Windows Server AD connector +- **Full Synchronization** on the Azure AD connector - -Review the result from these operations and ensure there are no errors. +Review the results from these operations, and ensure that there are no errors. -If you want to see and inspect the changes that are about to be exported to Azure AD, then read how to verify the configuration under [staging mode](how-to-connect-sync-staging-server.md). Make required configuration changes until you do not see anything unexpected. +If you want to see and inspect the changes that are about to be exported to Azure AD, review how to [verify the configuration in staging mode](how-to-connect-sync-staging-server.md). Make required configuration changes until you don't see anything unexpected. -You are ready to switch from DirSync to Azure AD when you have completed these steps and are happy with the result. +You're ready to switch from DirSync to Azure AD when you've completed these steps and are confident with the results. ### Uninstall DirSync (old server)-* In **Programs and features** find **Windows Azure Active Directory sync tool** -* Uninstall **Windows Azure Active Directory sync tool** -* The uninstallation might take up to 15 minutes to complete. -If you prefer to uninstall DirSync later, you can also temporarily shut down the server or disable the service. If something goes wrong, this method allows you to re-enable it. However, it is not expected that the next step will fail so this should not be needed. +Next, uninstall DirSync: -With DirSync uninstalled or disabled, there is no active server exporting to Azure AD. The next step to enable Azure AD Connect must be completed before any changes in your on-premises Active Directory will continue to be synchronized to Azure AD. +1. In **Programs and features**, find and select **Windows Azure Active Directory sync tool**. +1. In the command bar, select **Uninstall**. 
-### Enable Azure AD Connect (new server) -After installation, reopening Azure AD connect will allow you to make additional configuration changes. Start **Azure AD Connect** from the start menu or from the shortcut on the desktop. Make sure you do not try to run the installation MSI again. +Uninstalling might take up to 15 minutes to complete. -You should see the following: - +If you prefer to uninstall DirSync later, you can temporarily shut down the server or disable the service. Using this method allows you to re-enable the service if something goes wrong. -* Select **Configure staging mode**. -* Turn off staging by unchecking the **Enabled staging mode** checkbox. +With DirSync uninstalled or disabled, you don't have an active server exporting to Azure AD. The next step to enable Azure AD Connect must be completed before any changes in your on-premises instance of Windows Server AD will continue to be synced to Azure AD. - +### Enable Azure AD Connect (new server) -* Click the **Next** button -* On the confirmation page, click the **install** button. +After installation, reopen Azure AD Connect to make more configuration changes. Open Azure AD Connect from the Start menu or from the shortcut on the desktop. *Make sure that you don't run the installation MSI file again*. -Azure AD Connect is now your active server and you must not switch back to using your existing DirSync server. +1. In **Additional tasks**, select **Configure staging mode**. +1. In **Configure staging mode**, turn off staging by clearing the **Enabled staging mode** checkbox. -## Next steps -Now that you have Azure AD Connect installed you can [verify the installation and assign licenses](how-to-connect-post-installation.md). + :::image type="content" source="media/how-to-dirsync-upgrade-get-started/configurestaging.png" alt-text="Screenshot that shows the option to enable staging mode."::: ++1. Select **Next**. +1. On the confirmation page, select **Install**. -Learn more about these new features, which were enabled with the installation: [Automatic upgrade](how-to-connect-install-automatic-upgrade.md), [Prevent accidental deletes](how-to-connect-sync-feature-prevent-accidental-deletes.md), and [Azure AD Connect Health](how-to-connect-health-sync.md). +Azure AD Connect is now your active server. Ensure that you don't switch back to using your existing DirSync server. -Learn more about these common topics: [scheduler and how to trigger sync](how-to-connect-sync-feature-scheduler.md). +## Next steps -Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md). +- Now that you have Azure AD Connect installed, you can [verify the installation and assign licenses](how-to-connect-post-installation.md). +- Learn more about these Azure AD Connect features: [Automatic upgrade](how-to-connect-install-automatic-upgrade.md), [prevent accidental deletes](how-to-connect-sync-feature-prevent-accidental-deletes.md), and [Azure AD Connect Health](how-to-connect-health-sync.md). +- Learn about the [scheduler and how to trigger sync](how-to-connect-sync-feature-scheduler.md). +- Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md). |
active-directory | Migrate From Federation To Cloud Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md | Your support team should understand how to troubleshoot any authentication issue. Migration requires assessing how the application is configured on-premises, and then mapping that configuration to Azure AD. +> [!VIDEO https://www.youtube.com/embed/D0M-N-RQw0I] + If you plan to keep using AD FS with on-premises & SaaS Applications using the SAML / WS-Fed or OAuth protocol, you'll use both AD FS and Azure AD after you convert the domains for user authentication. In this case, you can protect your on-premises applications and resources with Secure Hybrid Access (SHA) through [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) or one of [Azure AD partner integrations](../manage-apps/secure-hybrid-access.md). Using Application Proxy or one of our partners can provide secure remote access to your on-premises applications. Users benefit by easily connecting to their applications from any device after a [single sign-on](../manage-apps/add-application-portal-setup-sso.md). You can move SaaS applications that are currently federated with AD FS to Azure AD. Reconfigure to authenticate with Azure AD either via a built-in connector from the [Azure App gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps), or by [registering the application in Azure AD](../develop/quickstart-register-app.md). |
active-directory | Delete Application Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md | To delete an enterprise application, you need: 1. Delete the enterprise application. ```powershell- Remove-AzureADServicePrincipal $ObjectId 'd4142c52-179b-4d31-b5b9-08940873507b' + Remove-AzureADServicePrincipal -ObjectId 'd4142c52-179b-4d31-b5b9-08940873507b' ``` :::zone-end |
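If you manage the tenant from the Azure CLI rather than the Azure AD PowerShell module, a rough equivalent is sketched below. The object ID is the same placeholder value used in the article; substitute the object ID of your own enterprise application's service principal.

```azurecli-interactive
# Delete the enterprise application's service principal by its object ID.
az ad sp delete --id 'd4142c52-179b-4d31-b5b9-08940873507b'
```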
active-directory | Overview Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md | Title: What is Azure Active Directory recommendations? | Microsoft Docs + Title: What are Azure Active Directory recommendations? | Microsoft Docs description: Provides a general overview of Azure Active Directory recommendations. -# What is Azure Active Directory recommendations? +# What are Azure Active Directory recommendations? -Keeping track of all the settings and resources in your tenant can be overwhelming. The Azure Active Directory (Azure AD) recommendations feature helps monitor the status of your tenant so you don't have to. Azure AD recommendations helps ensure your tenant is in a secure and healthy state while also helping you maximize the value of the features available in Azure AD. +Keeping track of all the settings and resources in your tenant can be overwhelming. The Azure Active Directory (Azure AD) recommendations feature helps monitor the status of your tenant so you don't have to. The Azure AD recommendations feature helps ensure your tenant is in a secure and healthy state while also helping you maximize the value of the features available in Azure AD. The Azure AD recommendations feature provides you with personalized insights with actionable guidance to: The Azure AD recommendations feature provides you with personalized insights wit - Improve the state of your Azure AD tenant. - Optimize the configurations for your scenarios. -This article gives you an overview of how you can use Azure AD recommendations. As an administrator, you should review your tenant's recommendations, and their associated resources periodically. +This article gives you an overview of how you can use Azure AD recommendations. As an administrator, you should review your tenant's Azure AD recommendations and their associated resources periodically. ## What it is -Azure AD recommendations is the Azure AD specific implementation of [Azure Advisor](../../advisor/advisor-overview.md), which is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. Azure Advisor analyzes your resource configuration and usage data to recommend solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources. +The Azure AD recommendations feature is the Azure AD-specific implementation of [Azure Advisor](../../advisor/advisor-overview.md), which is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. Azure Advisor analyzes your resource configuration and usage data to recommend solutions that can help you improve the cost effectiveness, performance, reliability, and security of your Azure resources. -*Azure AD recommendations* uses similar data to support you with the roll-out and management of Microsoft's best practices for Azure AD tenants to keep your tenant in a secure and healthy state. Azure AD recommendations provide a holistic view into your tenant's security, health, and usage. +*Azure AD recommendations* use similar data to support you with the roll-out and management of Microsoft's best practices for Azure AD tenants to keep your tenant in a secure and healthy state. Azure AD recommendations provide a holistic view into your tenant's security, health, and usage. ## How it works -On a daily basis, Azure AD analyzes the configuration of your tenant.
During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the **Recommendations** section of the Azure AD Overview area. Recommendations are listed in order of priority so you can quickly determine where to focus first. +On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the **Recommendations** section of the Azure AD Overview area. The recommendations are listed in order of priority so you can quickly determine where to focus first. -Recommendations contain a description, a summary of the value of addressing the recommendation, and a step-by-step action plan. If applicable, impacted resources associated with the recommendation are listed, so you can resolve each affected area. If a recommendation doesn't have any associated resources, the impacted resource type is *Tenant level*. so your step-by-step action plan impacts the entire tenant and not just a specific resource. +Each recommendation contains a description, a summary of the value of addressing the recommendation, and a step-by-step action plan. If applicable, impacted resources associated with the recommendation are listed, so you can resolve each affected area. If a recommendation doesn't have any associated resources, the impacted resource type is *Tenant level*, so your step-by-step action plan impacts the entire tenant and not just a specific resource.  The **Priority** of a recommendation could be low, medium, or high. These values - **Medium**: Should do. No severe risk if action isn't taken. - **Low**: Might do. No security risks or health concerns if action isn't taken. -The **Impacted resources** for a recommendation could be things like applications or users. This detail gives you an idea of what type of resources you'll need to address. The impacted resource could also be at the tenant level, so you may need to make a global change. +The **Impacted resources** for a recommendation could be things like applications or users. This detail gives you an idea of what type of resources you need to address. The impacted resource could also be at the tenant level, so you may need to make a global change. The **Status description** tells you the date the recommendation status changed and if it was changed by the system or a user. The following roles provide *update and read-only* access to recommendations: - Cloud apps Administrator - Apps Administrator -Azure AD recommendations is automatically enabled. If you'd like to disable this feature, go to **Azure AD** > **Preview features**. Locate the **Recommendations** feature, and change the **State**. +The Azure AD recommendations feature is automatically enabled. If you'd like to disable this feature, go to **Azure AD** > **Preview features**. Locate the **Recommendations** feature, and change the **State**. Azure AD only displays the recommendations that apply to your tenant, so you may not see all supported recommendations listed. Some recommendations are available in all tenants, regardless of the license type, but others require the [Workload Identities premium license](../identity-protection/concept-workload-identity-risk.md).
The recommendations listed in the following table are available to all Azure AD | [Migrate to Microsoft Authenticator](recommendation-migrate-to-authenticator.md) | Users | Preview | | [Minimize MFA prompts from known devices](recommendation-migrate-apps-from-adfs-to-azure-ad.md) | Users | Generally available | -### Recommendations available for Workload Identities premium licenses --The recommendations listed in the following table are available to Azure AD tenants with a Workload Identities premium license. --| Recommendation | Impacted resources | Availability | -|- |- |- | -| Remove unused applications | Applications | Preview | -| Remove unused credentials from applications | Applications | Preview | -| Renew expiring application credentials | Applications | Preview | -| Renew expiring service principal credentials | Applications | Preview | - ## How to use Azure AD recommendations 1. Go to **Azure AD** > **Recommendations**. The recommendations listed in the following table are available to Azure AD tena 1. Follow the **Action plan**. -1. If applicable, right-click on a resource in a recommendation, select **Mark as**, then select a status. +1. If applicable, *right-click on the status* of a resource in a recommendation, select **Mark as**, then select a status. + - The status for the resource appears as regular text, but you can right-click on the status to open the menu. + - You can set each resource to a different status as needed. +  -1. If you need to manually change the status of a recommendation, select **Mark as** from the top of the page and select a status. +1. The recommendation service automatically marks the recommendation as complete, but if you need to manually change the status of a recommendation, select **Mark as** from the top of the page and select a status. ++  - Mark a recommendation as **Completed** if all impacted resources have been addressed. - Active resources may still appear in the list of resources for manually completed recommendations. If the resource is completed, the service will update the status the next time the service runs. |
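The recommendations shown in the portal can also be read programmatically. A minimal sketch follows, assuming your signed-in account holds one of the roles listed above; the Microsoft Graph `beta` endpoint used here may change before general availability.

```azurecli-interactive
# List the tenant's Azure AD recommendations through Microsoft Graph (beta).
az rest --method GET \
    --url "https://graph.microsoft.com/beta/directory/recommendations"
```

Each returned recommendation carries the same priority, status, and impacted-resources details that the portal displays.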
active-directory | Amazon Web Service Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-web-service-tutorial.md | You can also use Microsoft My Apps to test the application in any mode. When you ## Known issues -* AWS Single-Account Access provisioning integration can be used only to connect to AWS public cloud endpoints. AWS Single-Account Access provisioning integration can't be used to access AWS Government environments, or the AWS China regions. +* AWS Single-Account Access provisioning integration cannot be used in the AWS China regions. * In the **Provisioning** section, the **Mappings** subsection shows a "Loading..." message, and never displays the attribute mappings. The only provisioning workflow supported today is the import of roles from AWS into Azure AD for selection during a user or group assignment. The attribute mappings for this are predetermined, and aren't configurable. |
aks | Node Upgrade Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md | Download and sign in to the Azure CLI. steps: - name: Azure Login- uses: Azure/login@v1.1 + uses: Azure/login@v1.4.3 with: creds: ${{ secrets.AZURE_CREDENTIALS }} ``` Download and sign in to the Azure CLI. ```output {- "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", - "displayName": "azure-cli-xxxx-xx-xx-xx-xx-xx", - "name": "http://azure-cli-xxxx-xx-xx-xx-xx-xx", - "password": "xXxXxXxXx", - "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" + "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", + "clientSecret": "xXxXxXxXx", + "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", + "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", + "activeDirectoryEndpointUrl": "https://login.microsoftonline.com", + "resourceManagerEndpointUrl": "https://management.azure.com/", + "activeDirectoryGraphResourceId": "https://graph.windows.net/", + "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/", + "galleryEndpointUrl": "https://gallery.azure.com/", + "managementEndpointUrl": "https://management.core.windows.net/" } ``` To create the steps to execute Azure CLI commands. steps: - name: Azure Login- uses: Azure/login@v1.1 + uses: Azure/login@v1.4.3 with: creds: ${{ secrets.AZURE_CREDENTIALS }} - name: Upgrade node images- uses: Azure/cli@v1.0.0 + uses: Azure/cli@v1.0.6 with: inlineScript: az aks upgrade -g {resourceGroupName} -n {aksClusterName} --node-image-only --yes ``` jobs: steps: - name: Azure Login- uses: Azure/login@v1.1 + uses: Azure/login@v1.4.3 with: creds: ${{ secrets.AZURE_CREDENTIALS }} |
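The credential JSON shown above matches the output format of `az ad sp create-for-rbac --sdk-auth`, which is what the `Azure/login` action's `creds` input expects. A hedged sketch of generating that secret follows; the service principal name and scope are placeholders, and `--sdk-auth` is deprecated in newer CLI releases.

```azurecli-interactive
# Create a service principal scoped to the cluster's resource group and print
# its credentials in the SDK auth format. Save the JSON output as the
# AZURE_CREDENTIALS secret in your GitHub repository.
az ad sp create-for-rbac \
    --name "github-actions-aks" \
    --role Contributor \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
    --sdk-auth
```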
aks | Resize Node Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md | This lack of persistence also applies to the resize operation, thus, resizing AK ## Example resources -Suppose you want to resize an existing node pool, called `nodepool1`, from SKU size Standard_DS2_v2 to Standard_DS3_v2. To accomplish this task, you'll need to create a new node pool using Standard_DS3_v2, move workloads from `nodepool1` to the new node pool, and remove `nodepool1`. In this example, we'll call this new node pool `mynodepool`. +Assume you want to resize an existing node pool, called `nodepool1`, from SKU size Standard_DS2_v2 to Standard_DS3_v2. To accomplish this task, you'll need to create a new node pool using Standard_DS3_v2, move workloads from `nodepool1` to the new node pool, and remove `nodepool1`. In this example, we'll call this new node pool `mynodepool`. :::image type="content" source="./media/resize-node-pool/node-pool-ds2.png" alt-text="Screenshot of the Azure portal page for the cluster, navigated to Settings > Node pools. One node pool, named node pool 1, is shown."::: After resizing a node pool by cordoning and draining, learn more about [using mu [empty-dir]: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir [specify-disruption-budget]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ [disruptions]: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/-[use-multiple-node-pools]: use-multiple-node-pools.md +[use-multiple-node-pools]: use-multiple-node-pools.md |
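A condensed sketch of that flow, reusing the article's example pool names (`mynodepool`, `nodepool1`); the resource group and cluster names are placeholders. Between the two commands you would cordon and drain the `nodepool1` nodes with kubectl so workloads reschedule onto the new pool.

```azurecli-interactive
# Create the replacement node pool with the larger VM size.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 3 \
    --node-vm-size Standard_DS3_v2

# After cordoning and draining the old nodes, remove the old pool.
az aks nodepool delete \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1
```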
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | Aim to run the latest patch release of the minor version you're running. For exa ## Alias minor version > [!NOTE]-> Alias minor version requires Azure CLI version 2.37 or above. Use `az upgrade` to install the latest version of the CLI. +> Alias minor version requires Azure CLI version 2.37 or above as well as API version 2022-02-01 or above. Use `az upgrade` to install the latest version of the CLI. With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster will run the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*. |
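For example, with a supported CLI version you can pin only the alias minor version and let AKS resolve the patch; the resource names below are placeholders.

```azurecli-interactive
# Create a cluster pinned to the 1.21 alias. AKS resolves the alias to the
# latest GA patch of 1.21 (for example, 1.21.7) at creation time.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version 1.21 \
    --generate-ssh-keys
```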
api-management | Api Management Policy Expressions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md | The `context` variable is implicitly available in every policy [expression](api- |<a id="ref-context-user"></a>`context.User`|`Email`: `string`<br /><br /> `FirstName`: `string`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Identities`: `IEnumerable<`[`IUserIdentity`](#ref-iuseridentity)`>`<br /><br /> `LastName`: `string`<br /><br /> `Note`: `string`<br /><br /> `RegistrationDate`: `DateTime`| |<a id="ref-iapi"></a>`IApi`|`Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Protocols`: `IEnumerable<string>`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `SubscriptionKeyParameterNames`: [`ISubscriptionKeyParameterNames`](#ref-isubscriptionkeyparameternames)| |<a id="ref-igroup"></a>`IGroup`|`Id`: `string`<br /><br /> `Name`: `string`|-|<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(bool preserveContent = false): Where T: string, byte[], JObject, JToken, JArray, XNode, XElement, XDocument` <br /><br /> - The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods read a request or response message body in specified type `T`. <br/><br/> - Or - <br/><br/>`AsFormUrlEncodedContent(bool preserveContent = false)` <br/></br>- The `context.Request.Body.AsFormUrlEncodedContent()` and `context.Response.Body.AsFormUrlEncodedContent()` methods read URL-encoded form data in a request or response message body and return an `IDictionary<string, IList<string>>` object. The decoded object supports `IDictionary` operations and the following expressions: `ToQueryString()`, `JsonConvert.SerializeObject()`, `ToFormUrlEncodedContent().` <br/><br/> By default, the `As<T>` and `AsFormUrlEncodedContent()` methods:<br /><ul><li>Use the original message body stream.</li><li>Render it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as shown in examples for the [set-body](set-body-policy.md#examples) policy.| +|<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(bool preserveContent = false): Where T: string, byte[], JObject, JToken, JArray, XNode, XElement, XDocument` <br /><br /> - The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods read a request or response message body in specified type `T`. <br/><br/> - Or - <br/><br/>`AsFormUrlEncodedContent(bool preserveContent = false)` <br/></br>- The `context.Request.Body.AsFormUrlEncodedContent()` and `context.Response.Body.AsFormUrlEncodedContent()` methods read URL-encoded form data in a request or response message body and return an `IDictionary<string, IList<string>>` object.
The decoded object supports `IDictionary` operations and the following expressions: `ToQueryString()`, `JsonConvert.SerializeObject()`, `ToFormUrlEncodedContent().` <br/><br/> By default, the `As<T>` and `AsFormUrlEncodedContent()` methods:<br /><ul><li>Use the original message body stream.</li><li>Render it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as shown in examples for the [set-body](set-body-policy.md#examples) policy.| |<a id="ref-iprivateendpointconnection"></a>`IPrivateEndpointConnection`|`Name`: `string`<br /><br /> `GroupId`: `string`<br /><br /> `MemberName`: `string`<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).| |<a id="ref-iurl"></a>`IUrl`|`Host`: `string`<br /><br /> `Path`: `string`<br /><br /> `Port`: `int`<br /><br /> [`Query`](#ref-iurl-query): `IReadOnlyDictionary<string, string[]>`<br /><br /> `QueryString`: `string`<br /><br /> `Scheme`: `string`| |<a id="ref-iuseridentity"></a>`IUserIdentity`|`Id`: `string`<br /><br /> `Provider`: `string`| |
app-service | Configure Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md | This guide provides key concepts and instructions for containerization of Linux ::: zone pivot="container-windows" +> [!NOTE] +> Service Principal is no longer supported for Windows container image pull authentication. The recommended way is to use Managed Identity for both Windows and Linux containers. + ## Supported parent images For your custom Windows image, you must choose the right [parent image (base image)](https://docs.docker.com/develop/develop-images/baseimages/) for the framework you want: If the app changes compute instances for any reason, such as scaling up and down ## Configure port number -By default, App Service assumes your custom container is listening on either port 80 or port 8080. If your container listens to a different port, set the `WEBSITES_PORT` app setting in your App Service app. You can set it via the [Cloud Shell](https://shell.azure.com). In Bash: +By default, App Service assumes your custom container is listening on port 80. If your container listens to a different port, set the `WEBSITES_PORT` app setting in your App Service app. You can set it via the [Cloud Shell](https://shell.azure.com). In Bash: ```azurecli-interactive az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_PORT=8000 |
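For the managed identity image-pull recommendation in the note above, one commonly documented approach is the site setting sketched below. The names are placeholders, and the sketch assumes the app's system-assigned identity has already been granted the `AcrPull` role on the registry.

```azurecli-interactive
# Tell App Service to pull the container image with the app's managed identity.
az webapp config set \
    --resource-group <group-name> \
    --name <app-name> \
    --generic-configurations '{"acrUseManagedIdentityCreds": true}'
```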
app-service | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md | Title: Migrate to App Service Environment v3 by using the migration feature description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 1/27/2023 Last updated : 02/14/2023 At this time, App Service Environment migrations to v3 using the migration featu - Australia East - Australia Central+- Australia Central 2 - Australia Southeast - Brazil South - Canada Central At this time, App Service Environment migrations to v3 using the migration featu - East US - East US 2 - France Central+- France South - Germany North - Germany West Central - Japan East At this time, App Service Environment migrations to v3 using the migration featu - North Europe - Norway East - Norway West+- South Africa North +- South Africa West - South Central US - South India - Southeast Asia At this time, App Service Environment migrations to v3 using the migration featu - UK West - West Central US - West Europe+- West India - West US - West US 2 - West US 3 +### Azure Government: ++- US Gov Virginia + The following App Service Environment configurations can be migrated using the migration feature. The table gives the App Service Environment v3 configuration you'll end up with when using the migration feature based on your existing App Service Environment. All supported App Service Environments can be migrated to a [zone redundant App Service Environment v3](../../availability-zones/migrate-app-service-environment.md) using the migration feature as long as the environment is [in a region that supports zone redundancy](./overview.md#regions). You can [configure zone redundancy](#choose-your-app-service-environment-v3-configurations) during the migration process. |Configuration |App Service Environment v3 Configuration | |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md | Title: App Service Environment overview description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 12/2/2022 Last updated : 02/14/2023 App Service Environment v3 is available in the following regions: | East US | ✅ | ✅ | ✅ | | East US 2 | ✅ | ✅ | ✅ | | France Central | ✅ | ✅ | ✅ | -| France South | | | ✅ | +| France South | ✅ | | ✅ | | Germany North | ✅ | | ✅ | | Germany West Central | ✅ | ✅ | ✅ | | Japan East | ✅ | ✅ | ✅ | App Service Environment v3 is available in the following regions: | North Central US | ✅ | | ✅ | | North Europe | ✅ | ✅ | ✅ | | Norway East | ✅ | ✅ | ✅ | -| Norway West | | | ✅ | +| Norway West | ✅ | | ✅ | | Qatar Central | ✅ | ✅ | | | South Africa North | ✅ | ✅ | ✅ |-| South Africa West | | | ✅ | +| South Africa West | ✅ | | ✅ | | South Central US | ✅ | ✅ | ✅ | | South India | ✅ | | ✅ | | Southeast Asia | ✅ | ✅ | ✅ | | Sweden Central | ✅ | ✅ | | | Switzerland North | ✅ | ✅ | ✅ |-| Switzerland West | | | ✅ | +| Switzerland West | ✅ | | ✅ | | UAE Central | | | ✅ | | UAE North | ✅ | | ✅ | | UK South | ✅ | ✅ | ✅ | |
application-gateway | Application Gateway Backend Health Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md | This behavior can occur for one or more of the following reasons: 2. Check whether your UDR has a default route (0.0.0.0/0) with the next hop not set as **Internet**: a. Follow steps 1a and 1b to determine your subnet.- b. Check whether there's any UDR configured. If there is, search for the resource on the search bar or under **All resources**. - c. Check whether there are any default routes (0.0.0.0/0) with the next hop not set as **Internet**. If the setting is either **Virtual Appliance** or **Virtual Network Gateway**, you must make sure that your virtual appliance, or the on-premises device, can properly route the packet back to the internet destination without modifying the packet. + b. Check to see if a UDR is configured. If there is, search for the resource on the search bar or under **All resources**. + c. Check to see if there are any default routes (0.0.0.0/0) with the next hop not set as **Internet**. If the setting is either **Virtual Appliance** or **Virtual Network Gateway**, you must make sure that your virtual appliance, or the on-premises device, can properly route the packet back to the Internet destination without modifying the packet. If probes are routed through a virtual appliance and modified, the backend resource will display a **200** status code and the Application Gateway health status can display as **Unknown**. This doesn't indicate an error. Traffic should still be routing through the Application Gateway without issue. d. Otherwise, change the next hop to **Internet**, select **Save**, and verify the backend health. 3. Default route advertised by the ExpressRoute/VPN connection to the virtual network over BGP: |
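The UDR checks in step 2 can also be run from the CLI. A sketch follows, assuming placeholder names for the resource group, virtual network, subnet, and route table.

```azurecli-interactive
# Find the route table attached to the Application Gateway subnet.
az network vnet subnet show \
    --resource-group <group-name> \
    --vnet-name <vnet-name> \
    --name <subnet-name> \
    --query routeTable.id

# List any default routes (0.0.0.0/0) and their next hop types. A next hop
# other than Internet needs the review described in step 2c.
az network route-table route list \
    --resource-group <group-name> \
    --route-table-name <route-table-name> \
    --query "[?addressPrefix=='0.0.0.0/0'].{name:name, nextHop:nextHopType}"
```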
applied-ai-services | Concept Invoice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-invoice.md | The following tools are supported by Form Recognizer v2.1: ## Try invoice data extraction -See how data, including customer information, vendor details, and line items, is extracted from invoices. You'll need the following resources: +See how data, including customer information, vendor details, and line items, is extracted from invoices. You need the following resources: * An Azure subscription (you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)) See how data, including customer information, vendor details, and line items, is :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot showing the select-form-type dropdown menu."::: -1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document. +1. Select **Run analysis**. The Form Recognizer Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document. 1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected. See how data, including customer information, vendor details, and line items, is | Supported languages | Details | |:-|:| -| • English | United States (us), Australia (-au), Canada (-ca), Great Britain (-gb), India (-in)| -| • Spanish |Spain (es)| -| • German | Germany (de)| -| • French | France (fr) | -| • Italian | Italy (it)| -| • Portuguese | Portugal (-pt), Brazil (-br)| -| • Dutch | Netherlands (de)| +| • English (en) | United States (us), Australia (-au), Canada (-ca), Great Britain (-gb), India (-in)| +| • Spanish (es) |Spain (es)| +| • German (de) | Germany (de)| +| • French (fr) | France (fr) | +| • Italian (it) | Italy (it)| +| • Portuguese (pt) | Portugal (pt), Brazil (br)| +| • Dutch (nl) | Netherlands (nl)| ## Field extraction See how data, including customer information, vendor details, and line items, is | ShippingAddress | String | Explicit shipping address for the customer | | | ShippingAddressRecipient | String | Name associated with the ShippingAddress | | | PaymentTerm | String | The terms of payment for the invoice | |-| SubTotal | Number | Subtotal field identified on this invoice | Integer | + | SubTotal | Number | Subtotal field identified on this invoice | Integer | | TotalTax | Number | Total tax field identified on this invoice | Integer | | InvoiceTotal | Number (USD) | Total new charges associated with this invoice | Integer | | AmountDue | Number (USD) | Total Amount Due to the vendor | Integer | See how data, including customer information, vendor details, and line items, is | ServiceStartDate | Date | First date for the service period (for example, a utility bill service period) | yyyy-mm-dd | | ServiceEndDate | Date | End date for the service period (for example, a utility bill service period) | yyyy-mm-dd| | PreviousUnpaidBalance | Number | Explicit previously unpaid balance | Integer |-| CurrencyCode | String | The Currency Code associated with an extracted amount | | | PaymentOptions | Array | An array that holds Payment Option details such as `IBAN` and `SWIFT` | | | TotalDiscount | Number | The total discount applied to an invoice | Integer | | TaxItems (en-IN only) | Array | An array that holds added tax information such as `CGST`, `IGST`, and `SGST`.
This line item is currently only available for the en-in locale | | The invoice key-value pairs and line items extracted are in the `documentResults The prebuilt invoice **2022-06-30** and later releases support returning key-value pairs at no extra cost. Key-value pairs are specific spans within the invoice that identify a label or key and its associated response or value. In an invoice, these pairs could be the label and the value the user entered for that field or telephone number. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures. -Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. key-value pairs are always spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key will be either customer or user (based on context). +Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field may be left blank on a form in some instances. Key-value pairs are always spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key is either customer or user (based on context). ::: moniker-end Keys can also exist in isolation when the model detects that a key exists, with ## Fields extracted -The Invoice service will extract the text, tables, and 26 invoice fields. Following are the fields extracted from an invoice in the JSON output response (the following output uses this [sample invoice](media/sample-invoice.jpg)). +The Invoice service extracts the text, tables, and 26 invoice fields. Following are the fields extracted from an invoice in the JSON output response (the following output uses this [sample invoice](media/sample-invoice.jpg)). |Name| Type | Description | Text | Value (standardized output) | |:--|:-|:-|:-| :-| The Invoice service will extract the text, tables, and 26 invoice fields. Follow | BillingAddressRecipient | string | Name associated with the BillingAddress | Microsoft Services | | | ShippingAddress | string | Explicit shipping address for the customer | 123 Ship Street, Redmond WA, 98052 | | | ShippingAddressRecipient | string | Name associated with the ShippingAddress | Microsoft Delivery | |-| SubTotal | number | Subtotal field identified on this invoice | $100.00 | 100 | +| SubTotal | number | Subtotal field identified on this invoice | $100.00 | 100 | | TotalTax | number | Total tax field identified on this invoice | $10.00 | 10 | | InvoiceTotal | number | Total new charges associated with this invoice | $110.00 | 110 | | AmountDue | number | Total Amount Due to the vendor | $610.00 | 610 |
It's where you'll find all the fields from the invoice such as invoice ID, ship to, bill to, customer, total, line items and lots more. +* `"documentResults"` node contains the invoice-specific values and line items that the model discovered. It's where you find all the fields from the invoice, such as invoice ID, ship to, bill to, customer, total, line items, and more. ## Migration guide |
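The three-part JSON output described in the invoice article above can be sketched as follows. This is a minimal illustration of the response shape only; the node names come from the list above, while the specific fields and values are placeholders rather than output from the sample invoice:

```json
{
  "status": "succeeded",
  "analyzeResult": {
    "readResults": [ { "page": 1, "lines": [] } ],
    "pageResults": [ { "page": 1, "tables": [] } ],
    "documentResults": [
      {
        "docType": "prebuilt:invoice",
        "fields": {
          "InvoiceId": { "type": "string", "text": "INV-100", "valueString": "INV-100" },
          "InvoiceTotal": { "type": "number", "text": "$110.00", "valueNumber": 110 }
        }
      }
    ]
  }
}
```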
applied-ai-services | Concept Receipt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md | The following tools are supported by Form Recognizer v2.1: ### Try receipt data extraction -See how data, including time and date of transactions, merchant information, and amount totals, is extracted from receipts. You'll need the following resources: +See how data, including time and date of transactions, merchant information, and amount totals, is extracted from receipts. You need the following resources: * An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/) See how data, including time and date of transactions, merchant information, and :::image type="content" source="media/fott-select-form-type.png" alt-text="Screenshot of the select-form-type dropdown menu."::: -1. Select **Run analysis**. The Form Recognizer Sample Labeling tool will call the Analyze Prebuilt API and analyze the document. +1. Select **Run analysis**. The Form Recognizer Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document. 1. View the results - see the key-value pairs extracted, line items, highlighted text extracted, and tables detected. The receipt model supports all English receipts and the following locales: |Supported Languages| Details | |:--|:-:| |• English| United States (-us), Australia (-au), Great Britain (-gb), India (-in), United Arab Emirates (-ae)|-|• Dutch| Netherlands (nl)| -|• French | France (fr) | -|• Japanese | Japan (ja)| -|• Portuguese| Portugal (-pt), Brazil (-br)| -|• Spanish | Spain (es) | +|• Dutch| Netherlands (nl-nl)| +|• French | France (fr-fr), Canada (fr-ca) | +|• German | Germany (de-de) | +|• Italian | Italy (it-it) | +|• Japanese | Japan (ja-jp)| +|• Portuguese| Portugal (pt-pt), Brazil (pt-br)| +|• Spanish | Spain (es-es) | ::: moniker-end ::: moniker range="form-recog-2.1.0" |
applied-ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md | Language| Locale code | >[!NOTE] > It's not necessary to specify a locale. This is an optional parameter. The Form Recognizer deep-learning technology will auto-detect the language of the text in your image. -Receipt supports all English receipts and the following locales: - |Language| Locale code | |:--|:-:| |English (Australia)|`en-au`| |English (Canada)|`en-ca`| |English (United Kingdom)|`en-gb`|-|English (India|`en-in`| +|English (India)|`en-in`| |English (United States)| `en-us`|-|French | `fr` | -| Spanish | `es` | +|French (France) | `fr` | +|French (Canada)| `fr-ca`| +|German | `de`| +|Italian| `it`| +|Spanish | `es` | ## Business card model |
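Because the locale is an optional query parameter on the analyze request, a call against the v2.1 prebuilt receipt endpoint might look like the following sketch; the resource name, key, and source URL are placeholders, not values from the article:

```console
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/v2.1/prebuilt/receipt/analyze?locale=en-gb" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  --data '{"source": "https://example.com/receipt.jpg"}'
```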
applied-ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md | Form Recognizer service is updated on an ongoing basis. Bookmark this page to st The **prebuilt receipt model** now has added support for the following languages: - * English - United Arab Emirates (en-AE) - * Dutch - Netherlands (nl-NL) - * French - Canada (fr-CA) - * Japanese - Japan (ja-JP) - * Portuguese - Brazil (pt-BR) + * English - United Arab Emirates (en-ae) + * Dutch - Netherlands (nl-nl) + * French - Canada (fr-ca) + * German - Germany (de-de) + * Italian - Italy (it-it) + * Japanese - Japan (ja-jp) + * Portuguese - Brazil (pt-br) * **[Prebuilt invoice model](concept-invoice.md)—additional language support and field extractions** |
azure-arc | Active Directory Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-introduction.md | Azure Arc-enabled data services support Active Directory (AD) for Identity and A This article describes how to enable Azure Arc-enabled SQL Managed Instance with Active Directory (AD) Authentication. The article demonstrates two possible AD integration modes: - Customer-managed keytab (CMK) -- System-managed keytab (SMK) +- Service-managed keytab (SMK) The notion of Active Directory (AD) integration mode describes the process for keytab management, including: - Creating AD account used by SQL Managed Instance The notion of Active Directory (AD) integration mode describes the process for ke To enable Active Directory authentication for SQL Server on Linux and Linux containers, use a [keytab file](/sql/linux/sql-server-linux-ad-auth-understanding#what-is-a-keytab-file). The keytab file is a cryptographic file containing service principal names (SPNs), account names, and hostnames. SQL Server uses the keytab file for authenticating itself to the Active Directory (AD) domain and authenticating its clients using Active Directory (AD). Follow these steps to enable Active Directory authentication for Arc-enabled SQL Managed Instance: - [Deploy data controller](create-data-controller-indirect-cli.md) -- [Deploy a customer-managed keytab AD connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a system-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md)+- [Deploy a customer-managed keytab AD connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a service-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md) - [Deploy SQL managed instances](deploy-active-directory-sql-managed-instance.md) The following diagram shows how to enable Active Directory authentication for Azure Arc-enabled SQL Managed Instance: What is the difference between the two Active Directory integration modes? To enable Active Directory authentication for Arc-enabled SQL Managed Instance, you need an Active Directory connector where you specify the Active Directory integration deployment mode. The two Active Directory integration modes are: - Customer-managed keytab-- System-managed keytab +- Service-managed keytab The following section compares these modes. |
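For a sense of what keytab management involves in the customer-managed mode described above, here is a minimal sketch using the MIT Kerberos `ktutil` tool; the account name, realm, encryption type, and output path are illustrative assumptions, not values from the article:

```console
ktutil
ktutil:  addent -password -p arcuser@CONTOSO.LOCAL -k 1 -e aes256-cts-hmac-sha1-96
Password for arcuser@CONTOSO.LOCAL:
ktutil:  wkt /tmp/arcuser.keytab
ktutil:  quit
```

The resulting file would then be supplied to the AD connector as a Kubernetes secret (see the `--keytab-secret` parameter used when deploying AD-integrated servers later in this digest).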
azure-arc | Backup Restore Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-restore-postgresql.md | -Automated backups can be enabled by including the `--storage-class-backups` argument when creating an Azure Arc-enabled PostgreSQL server. Restore is not supported in the current preview release. +Automated backups can be enabled by including the `--storage-class-backups` argument when creating an Azure Arc-enabled PostgreSQL server. Specify the retention period for backups with the `--retention-days` parameter when creating or updating an Arc-enabled PostgreSQL server. The retention period can be between 0 and 35 days. If backups are enabled but no retention period is specified, the default is seven days. ++Restoring an Azure Arc-enabled PostgreSQL server creates a new server by copying the configuration of the existing server (for example, resource requests/limits and extensions). Configurations that could cause conflicts (for example, the primary endpoint port) aren't copied. The storage configuration for the new resource can be defined by passing `--storage-class*` and `--volume-size-*` parameters to the `restore` command. ++Restore an Azure Arc-enabled PostgreSQL server to a new server with the `restore` command: +```azurecli +az postgres server-arc restore -n <destination-server-name> --source-server <source-server-name> --k8s-namespace <namespace> --use-k8s +``` ++## Examples ++Create a new Arc-enabled PostgreSQL server `pg02` by restoring `pg01` using the latest backups: +```azurecli +az postgres server-arc restore -n pg02 --source-server pg01 --k8s-namespace arc --use-k8s +``` ++Create a new Arc-enabled PostgreSQL server `pg02` by restoring `pg01` using the latest backups, defining new storage requirements for pg02: +```azurecli +az postgres server-arc restore -n pg02 --source-server pg01 --k8s-namespace arc --storage-class-data azurefile-csi-premium --volume-size-data 10Gi --storage-class-logs azurefile-csi-premium --volume-size-logs 2Gi --use-k8s --storage-class-backups azurefile-csi-premium --volume-size-backups 15Gi +``` ++Create a new Arc-enabled PostgreSQL server `pg02` by restoring `pg01` to its state at `2023-02-01T00:00:00Z`: +```azurecli +az postgres server-arc restore -n pg02 --source-server pg01 --k8s-namespace arc -t 2023-02-01T00:00:00Z --use-k8s +``` ++For details about all the parameters available for restore, review the output of the command: +```azurecli +az postgres server-arc restore --help +``` - Read about [scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server. |
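As a sketch of the retention setting described above, the retention period can also be changed on an existing server; the server name and namespace are placeholders:

```azurecli
az postgres server-arc update -n pg01 --k8s-namespace arc --retention-days 14 --use-k8s
```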
azure-arc | Configure Transparent Data Encryption Sql Managed Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-transparent-data-encryption-sql-managed-instance.md | + + Title: Turn on transparent data encryption in Azure Arc-enabled SQL Managed Instance (preview) +description: How-to guide to turn on transparent data encryption in an Azure Arc-enabled SQL Managed Instance (preview) +++++++ Last updated : 01/20/2023++++# Enable transparent data encryption on Azure Arc-enabled SQL Managed Instance (preview) ++This article describes how to enable and disable transparent data encryption (TDE) at-rest on an Azure Arc-enabled SQL Managed Instance. In this article, the term *managed instance* refers to a deployment of Azure Arc-enabled SQL Managed Instance, and enabling or disabling TDE applies to all databases running on a managed instance. ++Enabling service-managed transparent data encryption requires the managed instance to use a service-managed database master key as well as the service-managed server certificate. These credentials are created automatically when service-managed transparent data encryption is enabled. For more information about TDE, see [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption). +++Turning on the TDE feature does the following: ++- All existing databases are encrypted automatically. +- All newly created databases are encrypted automatically. +++## Prerequisites ++Before you proceed with this article, you must have created an Azure Arc-enabled SQL Managed Instance resource and be able to connect to it. ++- [An Azure Arc-enabled SQL Managed Instance created](./create-sql-managed-instance.md) +- [Connect to Azure Arc-enabled SQL Managed Instance](./connect-managed-instance.md) ++## Limitations ++The following limitations must be considered when deploying service-managed TDE: ++- Only the General Purpose tier is supported. +- Failover groups are not supported. ++## Turn on transparent data encryption on the managed instance ++Turning on TDE on the managed instance results in the following operations: ++1. Adding the service-managed database master key in the `master` database. +2. Adding the service-managed certificate protector. +3. Adding the associated Database Encryption Keys (DEK) on all databases on the managed instance. +4. Enabling encryption on all databases on the managed instance. ++### [Service-managed mode](#tab/service-managed-mode) ++Run kubectl patch to enable service-managed TDE: ++```console +kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "ServiceManaged" } } } }' +``` ++Example: ++```console +kubectl patch sqlmi contososqlmi --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": "ServiceManaged" } } } }' +``` +++## Turn off transparent data encryption on the managed instance ++Turning off TDE on the managed instance results in the following operations: ++1. Disabling encryption on all databases on the managed instance. +2. Dropping the associated DEKs on all databases on the managed instance. +3. Dropping the service-managed certificate protector. +4. Dropping the service-managed database master key in the `master` database. 
++### [Service-managed mode](#tab/service-managed-mode) ++Run kubectl patch to disable service-managed TDE: ++```console +kubectl patch sqlmi <sqlmi-name> --namespace <namespace> --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": null } } } }' +``` ++Example: +```console +kubectl patch sqlmi contososqlmi --namespace arc --type merge --patch '{ "spec": { "security": { "transparentDataEncryption": { "mode": null } } } }' +``` +++## Back up a transparent data encryption credential ++When you back up credentials from the managed instance, the credentials are stored within the container. To store credentials on a persistent volume, specify the mount path in the container. For example, `/var/opt/mssql/data`. The following example backs up a certificate from the managed instance: ++> [!NOTE] +> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. ++1. Back up the certificate from the container to `/var/opt/mssql/data`. ++ ```sql + USE master; + GO ++ BACKUP CERTIFICATE <cert-name> TO FILE = '<cert-path>' + WITH PRIVATE KEY ( FILE = '<private-key-path>', + ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); + ``` ++ Example: ++ ```sql + USE master; + GO ++ BACKUP CERTIFICATE MyServerCert TO FILE = '/var/opt/mssql/data/servercert.crt' + WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', + ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>'); + ``` ++2. Copy the certificate from the container to your file system. ++ ### [Windows](#tab/windows) ++ ```console + kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-certificate-path> > <local-certificate-path> + ``` ++ Example: ++ ```console + kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.crt > $HOME\sqlcerts\servercert.crt + ``` ++ ### [Linux](#tab/linux) + ```console + kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-certificate-path> <local-certificate-path> + ``` ++ Example: ++ ```console + kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.crt $HOME/sqlcerts/servercert.crt + ``` + ++3. Copy the private key from the container to your file system. ++ ### [Windows](#tab/windows) + ```console + kubectl exec -n <namespace> -c arc-sqlmi <pod-name> -- cat <pod-private-key-path> > <local-private-key-path> + ``` ++ Example: ++ ```console + kubectl exec -n arc-ns -c arc-sqlmi sql-0 -- cat /var/opt/mssql/data/servercert.key > $HOME\sqlcerts\servercert.key + ``` ++ ### [Linux](#tab/linux) + ```console + kubectl cp --namespace <namespace> --container arc-sqlmi <pod-name>:<pod-private-key-path> <local-private-key-path> + ``` ++ Example: ++ ```console + kubectl cp --namespace arc-ns --container arc-sqlmi sql-0:/var/opt/mssql/data/servercert.key $HOME/sqlcerts/servercert.key + ``` + ++4. Delete the certificate and private key from the container. ++ ```console + kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path>" + ``` ++ Example: ++ ```console + kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" + ``` ++## Restore a transparent data encryption credential to a managed instance ++To restore the credentials, copy them into the container as shown above, and then run the corresponding T-SQL. 
+++> [!NOTE] +> If the `kubectl cp` command is run from Windows, the command may fail when using absolute Windows paths. Use relative paths or the commands specified below. +> To restore database backups that have been taken before enabling TDE, you would need to disable TDE on the SQL Managed Instance, restore the database backup, and enable TDE again. ++1. Copy the certificate from your file system to the container. + ### [Windows](#tab/windows) + ```console + type <local-certificate-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-certificate-path> + ``` ++ Example: ++ ```console + type $HOME\sqlcerts\servercert.crt | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.crt + ``` ++ ### [Linux](#tab/linux) + ```console + kubectl cp --namespace <namespace> --container arc-sqlmi <local-certificate-path> <pod-name>:<pod-certificate-path> + ``` ++ Example: ++ ```console + kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.crt sql-0:/var/opt/mssql/data/servercert.crt + ``` + ++2. Copy the private key from your file system to the container. + ### [Windows](#tab/windows) + ```console + type <local-private-key-path> | kubectl exec -i -n <namespace> -c arc-sqlmi <pod-name> -- tee <pod-private-key-path> + ``` ++ Example: ++ ```console + type $HOME\sqlcerts\servercert.key | kubectl exec -i -n arc-ns -c arc-sqlmi sql-0 -- tee /var/opt/mssql/data/servercert.key + ``` ++ ### [Linux](#tab/linux) + ```console + kubectl cp --namespace <namespace> --container arc-sqlmi <local-private-key-path> <pod-name>:<pod-private-key-path> + ``` ++ Example: ++ ```console + kubectl cp --namespace arc-ns --container arc-sqlmi $HOME/sqlcerts/servercert.key sql-0:/var/opt/mssql/data/servercert.key + ``` + ++3. Create the certificate using file paths from `/var/opt/mssql/data`. ++ ```sql + USE master; + GO ++ CREATE CERTIFICATE <certificate-name> + FROM FILE = '<certificate-path>' + WITH PRIVATE KEY ( FILE = '<private-key-path>', + DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); + ``` ++ Example: ++ ```sql + USE master; + GO ++ CREATE CERTIFICATE MyServerCertRestored + FROM FILE = '/var/opt/mssql/data/servercert.crt' + WITH PRIVATE KEY ( FILE = '/var/opt/mssql/data/servercert.key', + DECRYPTION BY PASSWORD = '<UseStrongPasswordHere>' ); + ``` ++4. Delete the certificate and private key from the container. ++ ```console + kubectl exec -it --namespace <namespace> --container arc-sqlmi <pod-name> -- bash -c "rm <certificate-path> <private-key-path>" + ``` ++ Example: ++ ```console + kubectl exec -it --namespace arc-ns --container arc-sqlmi sql-0 -- bash -c "rm /var/opt/mssql/data/servercert.crt /var/opt/mssql/data/servercert.key" + ``` ++## Next steps ++[Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption) |
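Not part of the original article, but a quick way to confirm the outcome of turning TDE on or off is to query the encryption flag in `sys.databases` from a connected SQL client:

```sql
-- Lists each database with its encryption state; is_encrypted is 1 when TDE is on.
SELECT name, is_encrypted
FROM sys.databases;
```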
azure-arc | Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md | Some Azure-attached services are only available when they can be directly reache |**Automatic backup and restore**|Supported<br/>Automatic local backup and restore.|Supported<br/>In addition to automated local backup and restore, you can _optionally_ send backups to Azure blob storage for long-term, off-site retention.| |**Monitoring**|Supported<br/>Local monitoring using Grafana and Kibana dashboards.|Supported<br/>In addition to local monitoring dashboards, you can _optionally_ send monitoring data and logs to Azure Monitor for at-scale monitoring of multiple sites in one place. | |**Authentication**|Use local username/password for data controller and dashboard authentication. Use SQL and Postgres logins or Active Directory (AD is not currently supported) for connectivity to database instances. Use Kubernetes authentication providers for authentication to the Kubernetes API.|In addition to or instead of the authentication methods for the indirectly connected mode, you can _optionally_ use Azure Active Directory.|-|**Role-based access control (RBAC)**|Use Kubernetes RBAC on Kubernetes API. Use SQL and Postgres RBAC for database instances.|You can use Azure Active Directory and Azure RBAC. **Pending availability in directly connected mode**| +|**Role-based access control (RBAC)**|Use Kubernetes RBAC on Kubernetes API. Use SQL and Postgres RBAC for database instances.|You can use Azure Active Directory and Azure RBAC.| ## Connectivity requirements |
azure-arc | Deploy Active Directory Postgresql Server Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-postgresql-server-cli.md | + + Title: Deploy Active Directory integrated Azure Arc-enabled PostgreSQL server using Azure CLI +description: Explains how to deploy Active Directory integrated Azure Arc-enabled PostgreSQL server using Azure CLI ++++++ Last updated : 02/10/2023++++# Deploy Active Directory integrated Azure Arc-enabled PostgreSQL using Azure CLI ++This article explains how to deploy Azure Arc-enabled PostgreSQL server with Active Directory (AD) authentication using Azure CLI. ++See these articles for specific instructions: ++- [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md) ++### Prerequisites ++Before you proceed, install the following tools: ++- The [Azure CLI (az)](/cli/azure/install-azure-cli) +- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md) ++For further details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md) +++## Deploy and update Active Directory integrated Azure Arc-enabled PostgreSQL server ++### Customer-managed keytab mode ++#### Create an Azure Arc-enabled PostgreSQL server ++To view available options for the create command for Azure Arc-enabled PostgreSQL server, use the following command: ++```azurecli +az postgres server-arc create --help +``` ++To create an Azure Arc-enabled PostgreSQL server, use `az postgres server-arc create`. See the following example: ++```azurecli +az postgres server-arc create \ +--name < PostgreSQL server name > \ +--k8s-namespace < namespace > \ +--ad-connector-name < your AD connector name > \ +--keytab-secret < PostgreSQL server keytab secret name > \ +--ad-account-name < PostgreSQL server AD user account > \ +--dns-name < PostgreSQL server primary endpoint DNS name > \ +--port < PostgreSQL server primary endpoint port number > \ +--use-k8s +``` ++Example: ++```azurecli +az postgres server-arc create \ +--name contosopg \ +--k8s-namespace arc \ +--ad-connector-name adarc \ +--keytab-secret arcuser-keytab-secret \ +--ad-account-name arcuser \ +--dns-name arcpg.contoso.local \ +--port 31432 \ +--use-k8s +``` ++#### Update an Azure Arc-enabled PostgreSQL server ++To update an Arc-enabled PostgreSQL server, use `az postgres server-arc update`. See the following example: ++```azurecli +az postgres server-arc update \ +--name < PostgreSQL server name > \ +--k8s-namespace < namespace > \ +--keytab-secret < PostgreSQL server keytab secret name > \ +--use-k8s +``` ++Example: ++```azurecli +az postgres server-arc update \ +--name contosopg \ +--k8s-namespace arc \ +--keytab-secret arcuser-keytab-secret \ +--use-k8s +``` ++## Next steps +- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM. + |
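As an illustrative sketch of connecting once deployment completes, a domain user with a valid Kerberos ticket could use `psql`; this reuses the example values above, and the exact user-name mapping is an assumption rather than a step from the article:

```console
kinit arcuser@CONTOSO.LOCAL
psql -h arcpg.contoso.local -p 31432 -U arcuser@CONTOSO.LOCAL postgres
```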
azure-arc | Limitations Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-postgresql.md | This article describes limitations of Azure Arc-enabled PostgreSQL. [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] -## Backup and restore --Enable automated backups. Include the `--storage-class-backups` argument when you create an Azure Arc-enabled PostgreSQL server. Restore has been temporarily removed as we finalize designs and experiences. - ## High availability Configuring high availability to recover from infrastructure failures isn't yet available. |
azure-arc | Plan Azure Arc Data Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md | You can deploy Azure Arc-enabled data services on various types of Kubernetes cl > [!IMPORTANT] > * The minimum supported version of Kubernetes is v1.21. For more information, see the "Known issues" section of [Release notes - Azure Arc-enabled data services](./release-notes.md#known-issues). > * The minimum supported version of OCP is 4.8.-> * OCP 4.11 is not supported. > * If you're using Azure Kubernetes Service, your cluster's worker node virtual machine (VM) size should be at least Standard_D8s_v3 and use Premium Disks. > * The cluster should not span multiple availability zones. > * For more information, see the "Known issues" section of [Release notes - Azure Arc-enabled data services](./release-notes.md#known-issues). |
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | +## February 14, 2023 ++### Image tag ++`v1.16.0_2023-02-14` ++For complete release version information, see [Version log](version-log.md#february-14-2023). ++New for this release: ++- Arc data + - Extended Events | Initial functionality (preview) ++- Arc-SQL MI + - [Enabled service-managed Transparent Data Encryption (TDE) (preview)](configure-transparent-data-encryption-sql-managed-instance.md). + - Backups | Produce automated backups from readable secondary + - The built-in automatic backups are performed on secondary replicas when available. ++- Arc PostgreSQL + - Automated Backups + - Settings via configuration framework + - Point-in-Time Restore + - Turn backups on/off + - Require client connections to use SSL + - Active Directory | Customer-managed bring your own keytab + - Active Directory | Configure in Azure command line client + - Enable Extensions via Kubernetes Custom Resource Definition ++- Azure CLI Extension + - Optional `imageTag` for controller creation; defaults to the image tag of the bootstrapper ++ ## January 13, 2023 ### Image tag |
azure-arc | Supported Versions Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/supported-versions-postgresql.md | -The list of supported versions evolves over time as we progress on ensuring parity with Postgres managed services in Azure PaaS. Today, the major versions that is supported is PostgreSQL 14. +The list of supported versions evolves over time as we progress on ensuring parity with PostgreSQL managed services in Azure PaaS. Today, the major version that is supported is PostgreSQL 14. [!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## How to choose between versions?-It is recommend you look at what versions your applications have been designed for and what are the capabilities of each of the versions. +It's recommended that you consider which versions your applications were designed for and the capabilities of each version. To learn more, read about each version on the official PostgreSQL site: - [PostgreSQL 14 (default)](https://www.postgresql.org/docs/14/index.html) ## How to create a particular version in Azure Arc-enabled PostgreSQL server?-At creation time, you have the possibility to indicate what version to create by passing the _--engine-version_ parameter. -If you do not indicate a version information, by default, a server group of PostgreSQL version 14 will be created. +Currently only PostgreSQL version 14 is supported. -Note that there is only one PostgreSQL Custom Resource Definition (CRD) in your Kubernetes cluster no matter what versions we support. +There's only one PostgreSQL Custom Resource Definition (CRD) in your Kubernetes cluster no matter what versions we support. For example, run the following command: ```console kubectl get crds ``` -It will return an output like: +It returns an output like: ```console NAME CREATED AT dags.sql.arcdata.microsoft.com 2021-10-12T23:53:40Z sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com 2021-10-12T23:5 sqlmanagedinstances.sql.arcdata.microsoft.com 2021-10-12T23:53:37Z ``` -In this example, this output indicates there are one CRD related to PostgreSQL: postgresqls.arcdata.microsoft.com, shortname postgresqls. The CRD is not a PostgreSQL server. The presence of a CRD is not an indication that you have - or not - created a server. The CRD is an indication of what kind of resources can be created in the Kubernetes cluster. +In this example, the output indicates there is one CRD related to PostgreSQL: `postgresqls.arcdata.microsoft.com`, shortname `postgresqls`. The CRD isn't a PostgreSQL server. The presence of a CRD isn't an indication that you have - or not - created a server. The CRD is an indication of what kind of resources can be created in the Kubernetes cluster. ## How can I be notified when other versions are available?-Come back and read this article. It will be updated as appropriate. +Come back and read this article. It's updated as appropriate. ## Next steps: |
azure-arc | Using Extensions In Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/using-extensions-in-postgresql-server.md | PostgreSQL is at its best when you use it with extensions. ## Supported extensions For this preview, the following standard [`contrib`](https://www.postgresql.org/docs/14/contrib.html) extensions are already deployed in the containers of your Azure Arc-enabled PostgreSQL server:-- adminpack-- amcheck-- autoinc-- bloombtree_gin-- btree_gist-- citext-- cube-- dblink-- dict_int-- dict_xsyn-- earthdistance-- file_fdw-- fuzzystrmatch-- hstore-- insert_username-- intagg-- intarray-- isn-- lo-- ltree-- moddatetime-- old_snapshot-- pageinspect-- pg_buffercache-- pg_freespacemap-- pg_prewarm-- pg_stat_statements-- pg_surgery-- pg_trgm-- pg_visibility-- pgcrypto-- pgrowlocks-- pgstattuple-- postgres_fdw-- refint-- seg-- sslinfo-- tablefunc-- tcn-- tsm_system_rows-- tsm_system_time-- unaccent-- xml2+- `address_standardizer_data_us` 3.3.1 +- `adminpack` 2.1 +- `amcheck` 1.3 +- `autoinc` 1 +- `bloom` 1 +- `btree_gin` 1.3 +- `btree_gist` 1.6 +- `citext` 1.6 +- `cube` 1.5 +- `dblink` 1.2 +- `dict_int` 1 +- `dict_xsyn` 1 +- `earthdistance` 1.1 +- `file_fdw` 1 +- `fuzzystrmatch` 1.1 +- `hstore` 1.8 +- `hypopg` 1.3.1 +- `insert_username` 1 +- `intagg` 1.1 +- `intarray` 1.5 +- `isn` 1.2 +- `lo` 1.1 +- `ltree` 1.2 +- `moddatetime` 1 +- `old_snapshot` 1 +- `orafce` 4 +- `pageinspect` 1.9 +- `pg_buffercache` 1.3 +- `pg_cron` 1.4-1 +- `pg_freespacemap` 1.2 +- `pg_partman` 4.7.1 +- `pg_prewarm` 1.2 +- `pg_repack` 1.4.8 +- `pg_stat_statements` 1.9 +- `pg_surgery` 1 +- `pg_trgm` 1.6 +- `pg_visibility` 1.2 +- `pgaudit` 1.7 +- `pgcrypto` 1.3 +- `pglogical` 2.4.2 +- `pglogical_origin` 1.0.0 +- `pgrouting` 3.4.1 +- `pgrowlocks` 1.2 +- `pgstattuple` 1.5 +- `plpgsql` 1 +- `postgis` 3.3.1 +- `postgis_raster` 3.3.1 +- `postgis_tiger_geocoder` 3.3.1 +- `postgis_topology` 3.3.1 +- `postgres_fdw` 1.1 +- `refint` 1 +- `seg` 1.4 +- `sslinfo` 1.2 +- `tablefunc` 1 +- `tcn` 1 +- `timescaledb` 2.8.1 +- `tsm_system_rows` 1 +- `tsm_system_time` 1 +- `unaccent` 1.1 Updates to this list will be posted as it evolves over time. -> [!IMPORTANT] -> While you may bring to your server an extension other than those listed above, in this Preview, it will not be persisted to your system. It means that it will not be available after a restart of the system and you would need to bring it again. +## Create an Arc-enabled PostgreSQL server with extensions enabled +You can create an Arc-enabled PostgreSQL server with any of the supported extensions enabled by passing a comma separated list of extensions to the `--extensions` parameter of the `create` command. *NOTE:* Extensions are enabled on the database for the admin user that was supplied when the server was created: +```azurecli +az postgres server-arc create -n <name> --k8s-namespace <namespace> --extensions "pgaudit,pg_partman" --use-k8s +``` +## Add or remove extensions +You can add or remove extensions from an existing Arc-enabled PostgreSQL server. 
-## Create extensions -Connect to your server with the client tool of your choice and run the standard PostgreSQL query: +First describe the server to get the current list of extensions: ```console-CREATE EXTENSION <extension name>; +kubectl describe postgresqls <server-name> -n <namespace> ```--## Show the list of extensions created -Connect to your server with the client tool of your choice and run the standard PostgreSQL query: -```console -select * from pg_extension; +If there are extensions enabled, the output contains a section like this: +```yml + config: + postgreSqlExtensions: pgaudit,pg_partman +``` +Add new extensions by appending them to the existing list, or remove extensions by removing them from the existing list. Pass the desired list to the update command. For example, to add `pgcrypto` and remove `pg_partman` from the server in the example above: +```azurecli +az postgres server-arc update -n <name> --k8s-namespace <namespace> --extensions "pgaudit,pgcrypto" --use-k8s ``` -## Drop an extension +## Show the list of enabled extensions Connect to your server with the client tool of your choice and run the standard PostgreSQL query: ```console-drop extension <extension name>; +select * from pg_extension; ``` ## Next steps |
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md | To see how all Azure Arc-enabled components are validated, see [Validation progr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|+| PowerStore T|1.25.4|1.15.0_2023-01-10|16.0.816.19223 |Not validated| | Dell EMC PowerFlex |1.21.5|1.4.1_2022-03-08|15.0.2255.119 | postgres 12.3 (Ubuntu 12.3-1) | | PowerFlex version 3.6 |1.21.5|1.4.1_2022-03-08|15.0.2255.119 | postgres 12.3 (Ubuntu 12.3-1) | | PowerFlex CSI version 1.4 |1.21.5|1.4.1_2022-03-08 | 15.0.2255.119 | postgres 12.3 (Ubuntu 12.3-1) | | PowerStore X|1.20.6|1.0.0_2021-07-30|15.0.2148.140 |postgres 12.3 (Ubuntu 12.3-1) |-| PowerStore T|1.23.5|1.9.0_2022-07-12|16.0.312.4243 |postgres 12.3 (Ubuntu 12.3-1)| + ### HPE |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-|HPE Superdome Flex 280|1.20.0|1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1) |HPE Apollo 4200 Gen10 Plus | 1.22.6 | 1.11.0_2022-09-13 |16.0.312.4243|12.3 (Ubuntu 12.3-1)|+|HPE Superdome Flex 280|1.20.0|1.8.0_2022-06-14|16.0.41.7339|12.3 (Ubuntu 12.3-1) ### Kublr To see how all Azure Arc-enabled components are validated, see [Validation progr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|+|Lenovo ThinkAgile MX1020 |1.24.6| 1.14.0_2022-12-13 |16.0.816.19223|Not validated| |Lenovo ThinkAgile MX3520 |AKS on Azure Stack HCI 21H2| 1.10.0_2022-08-09 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1)| + ### Nutanix |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version To see how all Azure Arc-enabled components are validated, see [Validation progr |Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|-| TKGm v1.5.3 | 1.22.8 | 1.9.0_2022-07-12 | 16.0.312.4243 | postgres 12.3 (Ubuntu 12.3-1)| +| TKG 2.1.0 | 1.26.0 | 1.15.0_2023-01-10 | 16.0.816.19223 | postgres 14.5 (Ubuntu 20.04) | TKG-1.6.0 | 1.23.8 | 1.11.0_2022-09-13 | 16.0.312.4243 | postgres 12.3 (Ubuntu 12.3-1)+| TKGm v1.5.3 | 1.22.8 | 1.9.0_2022-07-12 | 16.0.312.4243 | postgres 12.3 (Ubuntu 12.3-1)| ### Wind River |
azure-arc | Version Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md | +## February 14, 2023 ++|Component|Value| +|--|--| +|Container images tag |`v1.16.0_2023-02-14`| +|**CRD names and version:**| | +|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1| +|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v6| +|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| +|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| +|`kafkas.arcdata.microsoft.com`| v1beta1, v1beta2, v1beta3| +|`monitors.arcdata.microsoft.com`| v1beta1, v1, v2| +|`postgresqls.arcdata.microsoft.com`| v1beta1, v1beta2, v1beta3, v1beta4, v1beta5| +|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| +|`redis.arcdata.microsoft.com`| v1beta1, v1beta2| +|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v10| +|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| +|`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`| v1beta1| +|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| +|`telemetrycollectors.arcdata.microsoft.com` *used to be `otelcollectors`*| v1beta1, v1beta2, v1beta3, v1beta4| +|`telemetryrouters.arcdata.microsoft.com`| v1beta1, v1beta2, v1beta3, v1beta4, v1beta5| +|Azure Resource Manager (ARM) API version|2022-06-15-preview| +|`arcdata` Azure CLI extension version|1.4.11 ([Download](https://aka.ms/az-cli-arcdata-ext))| +|Arc-enabled Kubernetes helm chart extension version|1.16.0| +|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))<br/>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| + ## January 13, 2023 |Component|Value| |
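Not from the original log, but one way to check which of these releases a deployed data controller runs is to read its image tag from the `datacontrollers` resource named in the CRD table above; the `spec.docker.imageTag` path is an assumption about the data controller resource layout:

```console
kubectl get datacontrollers -n <namespace> -o jsonpath='{.items[0].spec.docker.imageTag}'
```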
azure-arc | What Is Azure Arc Enabled Postgresql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/what-is-azure-arc-enabled-postgresql.md | -## Compare Postgres solutions provided by Microsoft in Azure +## Compare PostgreSQL solutions provided by Microsoft in Azure -Microsoft offers Postgres database services in Azure in two ways: +Microsoft offers PostgreSQL database services in Azure in two ways: - As a managed service in **[Azure PaaS](https://portal.azure.com/#create/Microsoft.PostgreSQLServer)** (Platform As A Service) - As a semi-managed service with Azure Arc, operated by customers or their partners/vendors ### Features -- Manage Postgres simply+- Manage PostgreSQL simply - Simplify monitoring, back up, patching/upgrade, access control & more-- Deploy Postgres on any [Kubernetes](https://kubernetes.io/) infrastructure+- Deploy PostgreSQL on any [Kubernetes](https://kubernetes.io/) infrastructure - On-premises - Cloud providers like AWS, GCP, and Azure - Edge deployments (including lightweight Kubernetes [K3S](https://k3s.io/)) |
azure-arc | Tutorial Use Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md | az k8s-configuration flux create -g flux-demo-rg \ --scope cluster \ -u https://github.com/Azure/gitops-flux2-kustomize-helm-mt \ --branch main \kustomization-name=infra path=./infrastructure prune=true \kustomization-name=apps path=./apps/staging prune=true dependsOn=\["infra"\]+--kustomization name=infra path=./infrastructure prune=true \ +--kustomization name=apps path=./apps/staging prune=true dependsOn=\["infra"\] ``` The `microsoft.flux` extension will be installed on the cluster (if it hasn't already been installed due to a previous GitOps deployment). az k8s-extension delete -g <resource-group> -c <cluster-name> -n flux -t managed ## Next steps * Read more about [configurations and GitOps](conceptual-gitops-flux2.md).-* Learn how to [use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md). +* Learn how to [use Azure Policy to enforce GitOps at scale](./use-azure-policy-flux-2.md). |
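Once the corrected `create` command above succeeds, the configuration's compliance state can be verified. This sketch assumes the example resource group and an Azure Arc-enabled (`connectedClusters`) cluster, with the cluster and configuration names as placeholders:

```azurecli
az k8s-configuration flux show -g flux-demo-rg -c <cluster-name> -t connectedClusters -n <configuration-name>
```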
azure-functions | Create First Function Cli Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-powershell.md | Before you begin, you must have the following: + One of the following tools for creating Azure resources: - + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 5.9.0 or later. + + The Azure [Az PowerShell module](/powershell/azure/install-az-ps) version 9.4.0 or later. + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. -+ The [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) ++ The [.NET 6.0 SDK](https://dotnet.microsoft.com/download)+++ [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows) ### Prerequisite check Verify your prerequisites, which depend on whether you are using Azure CLI or Az + In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. -+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later. ++ Run `(Get-Module -ListAvailable Az).Version` and verify version 9.4.0 or later. + Run `Connect-AzAccount` to sign in to Azure and verify an active subscription. |
azure-functions | Dotnet Isolated In Process Differences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md | Use the following table to compare feature and functional differences between th | [Supported .NET versions](dotnet-isolated-process-guide.md#supported-versions) | Long Term Support (LTS) versions | [All supported versions](dotnet-isolated-process-guide.md#supported-versions) + .NET Framework | | Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | | Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | -| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported](durable/durable-functions-dotnet-isolated-overview.md) | -| Model types exposed by bindings | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient)<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations | +| Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported (public preview)](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions) | +| Model types exposed by bindings | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient)<br/>`IAsyncCollector` (for output bindings) | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>[Some service-specific SDK types](dotnet-isolated-process-guide.md#sdk-types-preview) | | HTTP trigger model types| [HttpRequest](/dotnet/api/system.net.http.httpclient) / [ObjectResult](/dotnet/api/microsoft.aspnetcore.mvc.objectresult) | [HttpRequestData](/dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true) / [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata?view=azure-dotnet&preserve-view=true) | | Output binding interaction | Return values (single output only)<br/>`out` parameters<br/>`IAsyncCollector` | Return values (expanded model with single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)) | | Imperative bindings<sup>1</sup> | [Supported](functions-dotnet-class-library.md#binding-at-runtime) | Not supported | |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | The trigger attribute specifies the trigger type and binds input data to a metho The `Function` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name. -Because .NET isolated projects run in a separate worker process, bindings can't take advantage of rich binding classes, such as `ICollector<T>`, `IAsyncCollector<T>`, and `CloudBlockBlob`. There's also no direct support for types inherited from underlying service SDKs, such as [DocumentClient] and [BrokeredMessage]. Instead, bindings rely on strings, arrays, and serializable types, such as plain old class objects (POCOs). +Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). You can also bind to [types from some service SDKs](#sdk-types-preview). For HTTP triggers, you must use [HttpRequestData] and [HttpResponseData] to access the request and response data. This is because you don't have access to the original HTTP request and response objects when using .NET Functions isolated worker process. The data written to an output binding is always the return value of the function The response from an HTTP trigger is always considered an output, so a return value attribute isn't required. +### SDK types (preview) ++For some service-specific binding types, binding data can be provided using types from service SDKs and frameworks. These provide additional capability beyond what a serialized string or plain-old CLR object (POCO) may offer. Support for SDK types is currently in preview with limited scenario coverage. ++To use SDK type bindings, your project must reference [Microsoft.Azure.Functions.Worker 1.12.1-preview1 or later][sdk-types-worker-version] and [Microsoft.Azure.Functions.Worker.Sdk 1.9.0-preview1 or later][sdk-types-worker-sdk-version]. Specific package versions will be needed for each of the service extensions as well. When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`. ++The following service-specific bindings are currently included in the preview: ++| Service | Trigger | Input binding | Output binding | +|-|-|-|-| +| [Azure Blobs][blob-sdk-types] | Preview support | Preview support | Not yet supported | ++[blob-sdk-types]: ./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types ++The [SDK type binding samples](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/WorkerBindingSamples) show examples of working with the various supported types. ++> [!NOTE] +> When using [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data, SDK types for the trigger itself are not supported. 
++[sdk-types-worker-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.12.1-preview1 +[sdk-types-worker-sdk-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.9.0-preview1 + ### HTTP trigger An HTTP trigger translates the incoming HTTP request message into an [HttpRequestData] object that is passed to the function. This object provides data from the request, including `Headers`, `Cookies`, `Identities`, `URL`, and optionally a message `Body`. This object is a representation of the HTTP request object and not the request itself. |
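As an illustration of the SDK type preview discussed above, the following minimal isolated worker sketch binds a blob trigger to `BlobClient`; the container path and function name are illustrative, not taken from the article:

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class BlobSdkTypeSample
{
    private readonly ILogger<BlobSdkTypeSample> _logger;

    public BlobSdkTypeSample(ILogger<BlobSdkTypeSample> logger) => _logger = logger;

    [Function(nameof(BlobSdkTypeSample))]
    public async Task Run(
        [BlobTrigger("samples-workitems/{name}")] BlobClient client)
    {
        // Binding to the SDK client exposes service operations, not just the blob contents.
        var properties = await client.GetPropertiesAsync();
        _logger.LogInformation("Blob {Name} is {Length} bytes", client.Name, properties.Value.ContentLength);
    }
}
```

Binding to the client rather than the contents lets the function query metadata or issue further service calls without downloading the whole blob.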
azure-functions | Functions Bindings Storage Blob Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md | See the [Example section](#example) for complete examples. ::: zone pivot="programming-language-csharp" -The usage of the Blob input binding depends on the extension package version, and the C# modality used in your function app, which can be one of the following: +The binding types supported by Blob input depend on the extension package version and the C# modality used in your function app. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types). +Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage). ++If you get an error message when trying to bind to one of the Storage SDK types, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-blob.md#tabpanel_2_functionsv1_in-process). + ::: zone-end ::: zone pivot="programming-language-java" |
azure-functions | Functions Bindings Storage Blob Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md | See the [Example section](#example) for complete examples. ## Usage ::: zone pivot="programming-language-csharp" -The usage of the Blob output binding depends on the extension package version, and the C# modality used in your function app, which can be one of the following: +The binding types supported by Blob output depend on the extension package version and the C# modality used in your function app. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types). +Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage). +If you get an error message when trying to bind to one of the Storage SDK types, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-blob.md#tabpanel_2_functionsv1_in-process). ::: zone-end <!--Any of the below pivots can be combined if the usage info is identical.--> ::: zone pivot="programming-language-java" |
azure-functions | Functions Bindings Storage Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md | The Blob storage trigger starts a function when a new or updated blob is detecte There are several ways to execute your function code based on changes to blobs in a storage container. Use the following table to determine which function trigger best fits your needs: -| | Blob Storage (standard) | Blob Storage (event-based) | Queue Storage | Event Grid | +| Consideration | Blob Storage (standard) | Blob Storage (event-based) | Queue Storage | Event Grid | | -- | -- | -- | -- | - | | Latency | High (up to 10 min) | Low | Medium | Low | | [Storage account](../storage/common/storage-account-overview.md#types-of-storage-accounts) limitations | Blob-only accounts not supported¹ | general purpose v1 not supported | none | general purpose v1 not supported | Metadata is available through the `$TriggerMetadata` parameter. ## Usage ::: zone pivot="programming-language-csharp" +The binding types supported by Blob trigger depend on the extension package version and the C# modality used in your function app. For more information, see [Binding types](./functions-bindings-storage-blob.md#binding-types). -The usage of the Blob trigger depends on the extension package version, and the C# modality used in your function app, which can be one of the following: +Binding to `string` or `Byte[]` is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a `Stream` or `BlobClient` type. For more information, see [Concurrency and memory usage](./functions-bindings-storage-blob-trigger.md#concurrency-and-memory-usage). +If you get an error message when trying to bind to one of the Storage SDK types, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-blob.md#tabpanel_2_functionsv1_in-process). ::: zone-end ::: zone pivot="programming-language-java" |
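As a sketch of the `Stream`-based pattern recommended above, here is a minimal in-process C# blob trigger; the container path and function name are illustrative:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessBlobUpload
{
    [FunctionName("ProcessBlobUpload")]
    public static void Run(
        [BlobTrigger("samples-workitems/{name}")] Stream blobStream,
        string name,
        ILogger log)
    {
        // The blob content is streamed rather than fully buffered, keeping memory usage flat for large blobs.
        log.LogInformation("Processing blob {Name} ({Length} bytes)", name, blobStream.Length);
    }
}
```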
azure-functions | Functions Bindings Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md | Functions 1.x apps automatically have a reference to the extension. ::: zone-end +## Binding types ++The binding types supported for .NET depend on both the extension version and C# execution mode, which can be one of the following: + +# [In-process class library](#tab/in-process) ++An in-process class library is a compiled C# function that runs in the same process as the Functions runtime. + +# [Isolated process](#tab/isolated-process) ++An isolated worker process class library is a compiled C# function that runs in a process isolated from the runtime. + +# [C# script](#tab/csharp-script) ++C# script is used primarily when creating C# functions in the Azure portal. ++++Choose a version to see binding type details for the mode and version. ++# [Extension 5.x and higher](#tab/extensionv5/in-process) ++The Azure Blobs extension supports parameter types according to the table below. ++| Binding | Parameter types | +|-|-| +| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup>| +| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup>| +| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | ++<sup>1</sup> The client types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. ++<sup>2</sup> `IEnumerable<T>` provides an enumeration of blobs in the container. Here, `T` can be any of the other supported types. ++For examples using these types, see [the GitHub repository for the extension](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Microsoft.Azure.WebJobs.Extensions.Storage.Blobs#examples). Learn more about how these new types are different and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md). ++# [Functions 2.x and higher](#tab/functionsv2/in-process) ++Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.Storage.Blob] namespace. Newer types from [Azure.Storage.Blobs] are exclusive to **extension 5.x and higher**. ++This version of the Azure Blobs extension supports parameter types according to the table below. ++| Binding | Parameter types | +|-|-| +| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| +| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup>| +| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | ++<sup>1</sup> These types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. ++<sup>2</sup> `IEnumerable<T>` provides an enumeration of blobs in the container. Here, `T` can be any of the other supported types. 
++# [Functions 1.x](#tab/functionsv1/in-process) ++Functions 1.x exposed types from the now deprecated [Microsoft.Azure.Storage.Blob] namespace. Newer types from [Azure.Storage.Blobs] are exclusive to later host versions with **extension 5.x and higher**. ++Functions 1.x supports parameter types according to the table below. ++| Binding | Parameter types | +|-|-| +| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| +| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup>| +| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | ++<sup>1</sup> These types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. ++<sup>2</sup> `IEnumerable<T>` provides an enumeration of blobs in the container. Here, `T` can be any of the other supported types. ++# [Extension 5.x and higher](#tab/extensionv5/isolated-process) ++The isolated worker process supports parameter types according to the table below. Binding to string parameters is currently the only option that is generally available. Support for binding to `Byte[]`, to `Stream`, and to types from [Azure.Storage.Blobs] is in preview. ++| Binding | Parameter types | Preview parameter types<sup>1</sup> | +|-|-|-| +| Blob trigger | `string` | `Byte[]`<br/>[Stream]<br/>[BlobClient]<br/>[BlockBlobClient]<br/>[PageBlobClient]<br/>[AppendBlobClient]<br/>[BlobBaseClient]<br/>[BlobContainerClient]<br/>JSON serializable types<sup>2</sup>| +| Blob input | `string` | `Byte[]`<br/>[Stream]<br/>[BlobClient]<br/>[BlockBlobClient]<br/>[PageBlobClient]<br/>[AppendBlobClient]<br/>[BlobBaseClient]<br/>[BlobContainerClient]<sup>3</sup><br/>JSON serializable types<sup>2</sup>| +| Blob output | `string` | No preview types<sup>4</sup> | ++<sup>1</sup> Preview types require use of [Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs 5.1.0-preview1 or later][sdk-types-extension-version], [Microsoft.Azure.Functions.Worker 1.12.1-preview1 or later][sdk-types-worker-version], and [Microsoft.Azure.Functions.Worker.Sdk 1.9.0-preview1 or later][sdk-types-worker-sdk-version]. When developing on your local machine, you will need [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). Collections of preview types, such as arrays and `IEnumerable<T>`, are not supported. When using a preview type, [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data are not supported. ++[sdk-types-extension-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs/5.1.0-preview1 +[sdk-types-worker-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/1.12.1-preview1 +[sdk-types-worker-sdk-version]: https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/1.9.0-preview1 ++<sup>2</sup> Blobs containing JSON data can be deserialized into known plain-old CLR object (POCO) types. ++<sup>3</sup> The `BlobPath` configuration for an input binding to [BlobContainerClient] currently requires the presence of a blob name. It is not sufficient to provide just the container name. A placeholder value may be used and will not change the behavior. 
For example, setting `[BlobInput("samples-workitems/placeholder.txt")] BlobContainerClient containerClient` doesn't check whether `placeholder.txt` exists; the client works with the overall "samples-workitems" container. ++<sup>4</sup> Support for SDK type bindings doesn't currently extend to output bindings. ++# [Functions 2.x and higher](#tab/functionsv2/isolated-process) ++Earlier versions of extensions in the isolated worker process only support binding to string parameters. Additional options are available with **extension 5.x and higher**. ++# [Functions 1.x](#tab/functionsv1/isolated-process) ++Functions version 1.x doesn't support the isolated worker process model. ++# [Extension 5.x and higher](#tab/extensionv5/csharp-script) ++The Azure Blobs extension supports parameter types according to the table below. ++| Binding | Parameter types | +|-|-| +| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup>| +| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[BlobClient]<sup>1</sup><br/>[BlockBlobClient]<sup>1</sup><br/>[PageBlobClient]<sup>1</sup><br/>[AppendBlobClient]<sup>1</sup><br/>[BlobBaseClient]<sup>1</sup><br/>`IEnumerable<T>`<sup>2</sup>| +| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | ++<sup>1</sup> The client types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. ++<sup>2</sup> `IEnumerable<T>` provides an enumeration of blobs in the container. Here, `T` can be any of the other supported types. ++# [Functions 2.x and higher](#tab/functionsv2/csharp-script) ++Earlier versions of the extension exposed types from the now deprecated [Microsoft.Azure.Storage.Blob] namespace. Newer types from [Azure.Storage.Blobs] are exclusive to **extension 5.x and higher**. ++This version of the Azure Blobs extension supports parameter types according to the table below. ++| Binding | Parameter types | +|-|-| +| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| +| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| +| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | ++<sup>1</sup> These types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. ++# [Functions 1.x](#tab/functionsv1/csharp-script) ++Functions 1.x exposed types from the now deprecated [Microsoft.Azure.Storage.Blob] namespace. Newer types from [Azure.Storage.Blobs] are exclusive to later host versions with **extension 5.x and higher**. ++Functions 1.x supports parameter types according to the table below.
++| Binding | Parameter types | +|-|-| +| Blob trigger | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| +| Blob input | [Stream]<br/>`TextReader`<br/>`string`<br/>`byte[]`<br/>[ICloudBlob]<sup>1</sup><br/>[CloudBlockBlob]<sup>1</sup><br/>[CloudPageBlob]<sup>1</sup><br/>[CloudAppendBlob]<sup>1</sup>| +| Blob output |[Stream]<br/>`TextWriter`<br/>`string`<br/>`byte[]` | ++<sup>1</sup> These types require the `Access` property of the attribute to be set to `FileAccess.ReadWrite`. ++++[Stream]: /dotnet/api/system.io.stream ++[Azure.Storage.Blobs]: /dotnet/api/azure.storage.blobs +[BlobClient]: /dotnet/api/azure.storage.blobs.blobclient +[BlockBlobClient]: /dotnet/api/azure.storage.blobs.specialized.blockblobclient +[PageBlobClient]: /dotnet/api/azure.storage.blobs.specialized.pageblobclient +[AppendBlobClient]: /dotnet/api/azure.storage.blobs.specialized.appendblobclient +[BlobBaseClient]: /dotnet/api/azure.storage.blobs.specialized.blobbaseclient +[BlobContainerClient]: /dotnet/api/azure.storage.blobs.blobcontainerclient ++[Microsoft.Azure.Storage.Blob]: /dotnet/api/microsoft.azure.storage.blob +[ICloudBlob]: /dotnet/api/microsoft.azure.storage.blob.icloudblob +[CloudBlockBlob]: /dotnet/api/microsoft.azure.storage.blob.cloudblockblob +[CloudPageBlob]: /dotnet/api/microsoft.azure.storage.blob.cloudpageblob +[CloudAppendBlob]: /dotnet/api/microsoft.azure.storage.blob.cloudappendblob + ## host.json settings |
azure-functions | Functions Bindings Triggers Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-triggers-python.md | Durable Functions also provides preview support of the V2 programming model. To > [!NOTE] > Using [Extension Bundles](./functions-bindings-register.md#extension-bundles) is not currently supported when trying out the Python V2 programming model with Durable Functions, so you will need to manage your extensions manually.-> To do this, remove the `extensionBundles` section of your `host.json` as described [here](./functions-run-local.md#install-extensions) and run `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` on your terminal. This will install the Durable Functions extension for your app and will allow you to try out the new experience. +> To do this, remove the `extensionBundle` section of your `host.json` as described [here](./functions-run-local.md#install-extensions) and run `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` in your terminal. This installs the Durable Functions extension for your app and allows you to try out the new experience. The Durable Functions triggers and bindings can be accessed from an instance of `DFApp`, a subclass of `FunctionApp` that additionally exports Durable Functions-specific decorators. def entity_function(context): + [Python developer guide](./functions-reference-python.md) + [Get started with Visual Studio](./create-first-function-vs-code-python.md)-+ [Get started command prompt](./create-first-function-cli-python.md) ++ [Get started command prompt](./create-first-function-cli-python.md) |
azure-functions | Functions Bindings Warmup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-warmup.md | zone_pivot_groups: programming-languages-set-functions-lang-workers # Azure Functions warmup trigger -This article explains how to work with the warmup trigger in Azure Functions. A warmup trigger is invoked when an instance is added to scale a running function app. The warmup trigger lets you define a function that's run when a new instance of your function app is started. You can use a warmup trigger to pre-load custom dependencies during the pre-warming process so your functions are ready to start processing requests immediately. Some actions for a warmup trigger might include opening connections, loading dependencies, or running any other custom logic before your app begins receiving traffic. To learn more, see [pre-warmed instances](./functions-premium-plan.md#pre-warmed-instances). +This article explains how to work with the warmup trigger in Azure Functions. A warmup trigger is invoked when an instance is added to scale a running function app. The warmup trigger lets you define a function that's run when a new instance of your function app is started. You can use a warmup trigger to pre-load custom dependencies during the pre-warming process so your functions are ready to start processing requests immediately. Some actions for a warmup trigger might include opening connections, loading dependencies, or running any other custom logic before your app begins receiving traffic. The following considerations apply when using a warmup trigger: |
azure-functions | Functions Create Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-vnet.md | Title: Use private endpoints to integrate Azure Functions with a virtual network description: This tutorial shows you how to connect a function to an Azure virtual network and lock it down by using private endpoints. Previously updated : 2/22/2021 Last updated : 2/10/2023 #Customer intent: As an enterprise developer, I want to create a function that can connect to a virtual network with private endpoints to secure my function app. You'll create a .NET function app in the Premium plan because this tutorial uses | **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. | |**Publish**| Code | Choose to publish code files or a Docker container. | | **Runtime stack** | .NET | This tutorial uses .NET. |- | **Version** | 3.1 | This tutorial uses .NET Core 3.1 | + | **Version** | 6 | This tutorial uses .NET 6.0 running [in the same process as the Functions host](./functions-dotnet-class-library.md). | |**Region**| Preferred region | Choose a [region](https://azure.microsoft.com/regions/) near you or near other services that your functions access. | 1. Select **Next: Hosting**. On the **Hosting** page, enter the following settings. To use your function app with virtual networks, you need to join it to a subnet. | **Repository** | functions-vnet-tutorial | The repository forked from https://github.com/Azure-Samples/functions-vnet-tutorial. | | **Branch** | main | The main branch of the repository you created. | | **Runtime stack** | .NET | The sample code is in C#. |- | **Version** | .NET Core 3.1 | The runtime version. | + | **Version** | v4.0 | The runtime version. | 1. Select **Save**. |
azure-functions | Functions Develop Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md | These prerequisites are only required to [run and debug your functions locally]( * The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 2.x or later. The Core Tools package is downloaded and installed automatically when you start the project locally. Core Tools include the entire Azure Functions runtime, so download and installation might take some time. -* [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions). +* [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows) recommended. For version information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions). -* Both [.NET Core 3.1 runtime](https://dotnet.microsoft.com/download) and [.NET Core 2.1 runtime](https://dotnet.microsoft.com/download/dotnet/2.1). +* [.NET 6.0 runtime](https://dotnet.microsoft.com/download). * The [PowerShell extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell). |
azure-functions | Functions Identity Based Connections Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-based-connections-tutorial.md | After you complete this tutorial, you should complete the follow-on tutorial tha + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -+ The [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) ++ The [.NET 6.0 SDK](https://dotnet.microsoft.com/download) -+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x. ++ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x. ## Why use identity? |
azure-functions | Functions Run Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md | The specific prerequisites for Core Tools depend on the features you plan to use **[Publish](#publish)**: Core Tools currently depends on either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) for authenticating with your Azure account. This means that you must install one of these tools to be able to [publish to Azure](#publish) from Azure Functions Core Tools. -**[Install extensions](#install-extensions)**: To manually install extensions by using Core Tools, you must have the [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) installed. The .NET Core SDK is used by Core Tools to install extensions from NuGet. You don't need to know .NET to use Azure Functions extensions. +**[Install extensions](#install-extensions)**: To manually install extensions by using Core Tools, you must have the [.NET 6.0 SDK](https://dotnet.microsoft.com/download) installed. The .NET SDK is used by Core Tools to install extensions from NuGet. You don't need to know .NET to use Azure Functions extensions. ## <a name="v2"></a>Core Tools versions There are four versions of Azure Functions Core Tools. The version you use depends on your local development environment, [choice of language](supported-languages.md), and level of support required. -Choose a version tab below to learn about each specific version and for detailed installation instructions: +Choose one of the following version tabs to learn about each specific version and for detailed installation instructions: # [Version 4.x](#tab/v4) Supports [version 4.x](functions-versions.md) of the Functions runtime. This ver # [Version 3.x](#tab/v3) -Supports [version 3.x](functions-versions.md) of the Azure Functions runtime. This version supports Windows, macOS, and Linux, and uses platform-specific package managers or npm for installation. +Supports [version 3.x](functions-versions.md) of the Azure Functions runtime, which reached end of life (EOL) for extended support on December 13, 2022. Use version 4.x instead. # [Version 2.x](#tab/v2) -Supports [version 2.x](functions-versions.md) of the Azure Functions runtime. This version supports Windows, macOS, and Linux, and uses platform-specific package managers or npm for installation. +Supports [version 2.x](functions-versions.md) of the Azure Functions runtime, which reached end of life (EOL) for extended support on December 13, 2022. Use version 4.x instead. # [Version 1.x](#tab/v1) Supports version 1.x of the Azure Functions runtime. This version of the tools i -You can only install one version of Core Tools on a given computer. Unless otherwise noted, the examples in this article are for version 3.x. +You can only install one version of Core Tools on a given computer. Unless otherwise noted, the examples in this article are for version 4.x. ## Install the Azure Functions Core Tools Version 1.x of the Core Tools isn't supported on Linux. Use version 2.x or a lat ## Changing Core Tools versions -When changing to a different version of Core Tools, you should use the same package manager as the original installation to move to a different package version.
For example, if you installed Core Tools version 2.x using npm, you should use the following command to upgrade to version 3.x: +When changing to a different version of Core Tools, you should use the same package manager as the original installation to move to a different package version. For example, if you installed Core Tools version 3.x using npm, you should use the following command to upgrade to version 4.x: ```bash-npm install -g azure-functions-core-tools@3 --unsafe-perm true +npm install -g azure-functions-core-tools@4 --unsafe-perm true ``` If you used Windows installer (MSI) to install Core Tools on Windows, you should uninstall the old version from Add Remove Programs before installing a different version. In the terminal window or from a command prompt, run the following command to cr func init MyFunctionProj ``` -This example creates a Functions project in a new `MyFunctionProj` folder. You are prompted to choose a default language for your project. +This example creates a Functions project in a new `MyFunctionProj` folder. You're prompted to choose a default language for your project. The following considerations apply to project initialization: The following considerations apply to project initialization: + If you plan to publish your project to a custom Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function on Linux using a custom image](functions-create-function-linux-custom-image.md). -Certain languages may have additional considerations: +Certain languages may have more considerations: # [C\#](#tab/csharp) + Core Tools lets you create function app projects for the .NET runtime as both [in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md) C# class library projects (.csproj). These projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure. -+ Use the `--csx` parameter if you want to work locally with C# script (.csx) files. These are the same files you get when you create functions in the Azure portal and when using version 1.x of Core Tools. To learn more, see the [func init reference](functions-core-tools-reference.md#func-init). ++ Use the `--csx` parameter if you want to work locally with C# script (.csx) files. These files are the same ones you get when you create functions in the Azure portal and when using version 1.x of Core Tools. To learn more, see the [func init reference](functions-core-tools-reference.md#func-init). # [Java](#tab/java) Certain languages may have additional considerations: # [PowerShell](#tab/powershell) -There are no additional considerations for PowerShell. +There are no other considerations for PowerShell. # [Python](#tab/python) There are no additional considerations for PowerShell. ## Register extensions -Starting with runtime version 2.x, [Functions triggers and bindings](functions-triggers-bindings.md) are implemented as .NET extension (NuGet) packages. For compiled C# projects, you simply reference the NuGet extension packages for the specific triggers and bindings you are using. HTTP bindings and timer triggers don't require extensions. +Starting with runtime version 2.x, [Functions triggers and bindings](functions-triggers-bindings.md) are implemented as .NET extension (NuGet) packages. For compiled C# projects, you simply reference the NuGet extension packages for the specific triggers and bindings you're using. 
HTTP bindings and timer triggers don't require extensions. -To improve the development experience for non-C# projects, Functions lets you reference a versioned extension bundle in your host.json project file. [Extension bundles](functions-bindings-register.md#extension-bundles) makes all extensions available to your app and removes the chance of having package compatibility issues between extensions. Extension bundles also removes the requirement of installing the .NET Core 3.1 SDK and having to deal with the extensions.csproj file. +To improve the development experience for non-C# projects, Functions lets you reference a versioned extension bundle in your host.json project file. [Extension bundles](functions-bindings-register.md#extension-bundles) make all extensions available to your app and remove the chance of package compatibility issues between extensions. Extension bundles also remove the requirement to install the .NET SDK and to manage an extensions.csproj file. -Extension bundles is the recommended approach for functions projects other than C# complied projects, as well as C# script. For these projects, the extension bundle setting is generated in the _host.json_ file during initialization. If bundles aren't enabled, you need to update the project's host.json file. +Extension bundles are the recommended approach for Functions projects other than compiled C# projects, and for C# script. For these projects, the extension bundle setting is generated in the _host.json_ file during initialization. If bundles aren't enabled, you need to update the project's host.json file. [!INCLUDE [Register extensions](../../includes/functions-extension-bundles.md)] There may be cases in a non-.NET project when you can't use extension bundles, s [!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)] -By default, these settings are not migrated automatically when the project is published to Azure. Use the [`--publish-local-settings` option][func azure functionapp publish] when you publish to make sure these settings are added to the function app in Azure. Values in the `ConnectionStrings` section are never published. +By default, these settings aren't migrated automatically when the project is published to Azure. Use the [`--publish-local-settings` option][func azure functionapp publish] when you publish to make sure these settings are added to the function app in Azure. Values in the `ConnectionStrings` section are never published. -The function app settings values can also be read in your code as environment variables. For more information, see the Environment variables section of these language-specific reference topics: +The function app settings values can also be read in your code as environment variables. For more information, see the Environment variables section of these language-specific reference articles: * [C# precompiled](functions-dotnet-class-library.md#environment-variables) * [C# script (.csx)](functions-reference-csharp.md#environment-variables) To create a function in an existing project, run the following command: func new ``` -In version 3.x/2.x, when you run `func new` you are prompted to choose a template in the default language of your function app. Next, you're prompted to choose a name for your function. In version 1.x, you are also required to choose the language. +When you run `func new`, you're prompted to choose a template in the default language of your function app.
Next, you're prompted to choose a name for your function. In version 1.x, you're also required to choose the language. You can also specify the function name and template in the `func new` command. The following example uses the `--template` option to create an HTTP trigger named `MyHttpTrigger`: You must have already [created a function app in your Azure subscription](functi To learn how to create a function app from the command prompt or terminal window using the Azure CLI or Azure PowerShell, see [Create a Function App for serverless execution](./scripts/functions-cli-create-serverless.md). >[!IMPORTANT]-> When you create a function app in the Azure portal, it uses version 3.x of the Function runtime by default. To make the function app use version 1.x of the runtime, follow the instructions in [Run on version 1.x](functions-versions.md#creating-1x-apps). +> When you create a function app in the Azure portal, it uses version 4.x of the Function runtime by default. To make the function app use version 1.x of the runtime, follow the instructions in [Run on version 1.x](functions-versions.md#creating-1x-apps). > You can't change the runtime version for a function app that has existing functions. func extensions install The command reads the *function.json* file to see which packages you need, installs them, and rebuilds the extensions project (extensions.csproj). It adds any new bindings at the current version but doesn't update existing bindings. Use the `--force` option to update existing bindings to the latest version when installing new ones. To learn more, see the [`func extensions install` command](functions-core-tools-reference.md#func-extensions-install). -If your function app uses bindings or NuGet packages that Core Tools does not recognize, you must manually install the specific extension. +If your function app uses bindings or NuGet packages that Core Tools doesn't recognize, you must manually install the specific extension. ### Install a specific extension This type of streaming logs requires that Application Insights integration be en [!INCLUDE [functions-x86-emulation-on-arm64](../../includes/functions-x86-emulation-on-arm64.md)] -If you are using Visual Studio Code, you can integrate Rosetta with the built-in Terminal. For more information, see [Enable emulation in Visual Studio Code](./functions-develop-vs-code.md#enable-emulation-in-visual-studio-code). +If you're using Visual Studio Code, you can integrate Rosetta with the built-in Terminal. For more information, see [Enable emulation in Visual Studio Code](./functions-develop-vs-code.md#enable-emulation-in-visual-studio-code). ## Next steps |
azure-functions | Python Scale Performance Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-scale-performance-reference.md | Title: Improve throughput performance of Python apps in Azure Functions -description: Learn how to develop Azure Functions apps using Python that are highly performant and scale well under load. +description: Learn how to develop Azure Functions apps using Python that are highly performant and scale well under load. Previously updated : 10/13/2020 Last updated : 02/13/2023 ms.devlang: python As real-world function workloads are usually a mix of I/O and CPU bound, you sho ### Performance-specific configurations -After understanding the workload profile of your function app, the following are configurations that you can use to improve the throughput performance of your functions. +After you understand the workload profile of your function app, you can use the following configurations to improve the throughput performance of your functions. * [Async](#async) * [Multiple language worker](#use-multiple-language-worker-processes) Here are a few examples of client libraries that have implemented async patterns ##### Understanding async in Python worker -When you define `async` in front of a function signature, Python marks the function as a coroutine. When calling the coroutine, it can be scheduled as a task into an event loop. When you call `await` in an async function, it registers a continuation into the event loop, which allows the event loop to process the next task during the wait time. +When you define `async` in front of a function signature, Python marks the function as a coroutine. When you call the coroutine, it can be scheduled as a task into an event loop. When you call `await` in an async function, it registers a continuation into the event loop, which allows the event loop to process the next task during the wait time. In our Python worker, the worker shares the event loop with the customer's `async` function and is capable of handling multiple requests concurrently. We strongly encourage our customers to use asyncio-compatible libraries, such as [aiohttp](https://pypi.org/project/aiohttp/) and [pyzmq](https://pypi.org/project/pyzmq/). Following these recommendations increases your function's throughput compared to using those libraries synchronously. By default, every Functions host instance has a single language worker process. For CPU-bound apps, you should set the number of language workers to be the same as or higher than the number of cores that are available per function app. To learn more, see [Available instance SKUs](functions-premium-plan.md#available-instance-skus). -I/O-bound apps may also benefit from increasing the number of worker processes beyond the number of cores available. Keep in mind that setting the number of workers too high can impact overall performance due to the increased number of required context switches. +I/O-bound apps may also benefit from increasing the number of worker processes beyond the number of cores available. Keep in mind that setting the number of workers too high can affect overall performance due to the increased number of required context switches. -The `FUNCTIONS_WORKER_PROCESS_COUNT` applies to each host that Functions creates when scaling out your application to meet demand. +The `FUNCTIONS_WORKER_PROCESS_COUNT` setting applies to each host that Azure Functions creates when scaling out your application to meet demand.
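For illustration, here's one way you could apply this setting with the Azure CLI; this is a sketch, and `<APP_NAME>` and `<RESOURCE_GROUP>` are hypothetical placeholders:

```bash
# Sets four language worker processes per Functions host instance.
az functionapp config appsettings set \
  --name <APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --settings FUNCTIONS_WORKER_PROCESS_COUNT=4
```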
> [!NOTE] > Multiple Python workers are not supported by the Python v2 programming model at this time. This means that enabling intelligent concurrency and setting `FUNCTIONS_WORKER_PROCESS_COUNT` greater than 1 is not supported for functions developed using the v2 model. You can set the value of maximum workers allowed for running sync functions usin For CPU-bound apps, you should keep the setting to a low number, starting from 1 and increasing as you experiment with your workload. This suggestion reduces the time spent on context switches and allows CPU-bound tasks to finish. -For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. the recommendation is to start with the Python default (the number of cores) + 4 and then tweak based on the throughput values you're seeing. +For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. The recommendation is to start with the Python default (the number of cores) + 4 and then tweak based on the throughput values you're seeing. For mixed-workload apps, you should balance both the `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize throughput. To understand what your function apps spend the most time on, we recommend profiling them and setting the values according to their behaviors. To learn about these application settings, see [Use multiple worker processes](#use-multiple-language-worker-processes). > [!NOTE]-> Although these recommendations apply to both HTTP and non-HTTP triggered functions, you might need to adjust other trigger specific configurations for non-HTTP triggered functions to get the expected performance from your function apps. For more information about this, please refer to this [article](functions-best-practices.md). +> Although these recommendations apply to both HTTP and non-HTTP triggered functions, you might need to adjust other trigger-specific configurations for non-HTTP triggered functions to get the expected performance from your function apps. For more information, see [Best practices for reliable Azure Functions](functions-best-practices.md). #### Managing event loop async def main(req: func.HttpRequest) -> func.HttpResponse: mimetype='application/json') ``` #### Vertical scaling-For more processing units especially in CPU-bound operation, you might be able to get this by upgrading to premium plan with higher specifications. With higher processing units, you can adjust the number of worker processes count according to the number of cores available and achieve higher degree of parallelism. +You might be able to get more processing units, especially for CPU-bound operations, by upgrading to a Premium plan with higher specifications. With more processing units, you can adjust the worker process count according to the number of available cores and achieve a higher degree of parallelism. ## Next steps |
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | The following Azure Bot Service **features aren't currently available** in Azure - Microsoft Search Channel (Preview) - Kik Channel (deprecated) -For more information, see [How do I create a bot that uses US Government data center](/azure/bot-service/bot-service-resources-faq-ecosystem#how-do-i-create-a-bot-that-uses-the-us-government-data-center). +For information on how to deploy Bot Framework and Azure Bot Service bots to Azure Government, see [Configure Bot Framework bots for US Government customers](/azure/bot-service/how-to-deploy-gov-cloud-high). ### [Azure Machine Learning](../machine-learning/index.yml) The following Azure Database for MySQL **features aren't currently available** i ### [Azure Database for PostgreSQL](../postgresql/index.yml) +For Flexible Server availability in Azure Government regions, see [Azure Database for PostgreSQL – Flexible Server](../postgresql/flexible-server/overview.md#azure-regions). + The following Azure Database for PostgreSQL **features aren't currently available** in Azure Government: -- Hyperscale (Citus) deployment option-- The following features of the Single server deployment option+- Azure Cosmos DB for PostgreSQL, formerly Azure Database for PostgreSQL – Hyperscale (Citus). For more information about supported regions, see [Regional availability for Azure Cosmos DB for PostgreSQL](../cosmos-db/postgresql/resources-regions.md). +- The following features of the Single Server deployment option - Advanced Threat Protection - Backup with long-term retention The following Azure Database for PostgreSQL **features aren't currently availabl The following Azure SQL Managed Instance **features aren't currently available** in Azure Government: -- Long-term retention+- Long-term backup retention ## Developer tools |
azure-maps | How To Dev Guide Js Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md | npm init To use the Azure Maps JavaScript SDK, you need to install the search package. Each of the Azure Maps services, including search, routing, rendering, and geolocation, is in its own package. ```powershell-npm install @azure/maps-search +npm install @azure-rest/maps-search ``` Once the package is installed, create a `search.js` file in the `mapsDemo` directory: MAPS_CLIENT_ID="<maps-client-id>" Once your environment variables are created, you can access them in your JavaScript code: ```JavaScript-const { MapsSearchClient } = require("@azure/maps-search"); +const MapsSearch = require("@azure-rest/maps-search").default; const { DefaultAzureCredential } = require("@azure/identity"); require("dotenv").config(); const credential = new DefaultAzureCredential(); -const client = new MapsSearchClient(credential, process.env.MAPS_CLIENT_ID); +const client = MapsSearch(credential, process.env.MAPS_CLIENT_ID); ``` ### Using a subscription key credential You can authenticate with your Azure Maps subscription key. Your subscription ke :::image type="content" source="./media/rest-sdk-dev-guides/subscription-key.png" alt-text="A screenshot showing the subscription key in the Authentication section of an Azure Maps account." lightbox="./media/rest-sdk-dev-guides/subscription-key.png"::: -You need to pass the subscription key to the `AzureKeyCredential` class provided by the [Azure Maps Search client library for JavaScript/TypeScript][JS-SDK]. For security reasons, it's better to specify the key as an environment variable than to include it in your source code. +You need to pass the subscription key to the `AzureKeyCredential` class provided by the [Azure Core Authentication Package][core auth package]. For security reasons, it's better to specify the key as an environment variable than to include it in your source code. You can accomplish this by using a `.env` file to store the subscription key variable.
You'll need to install the [dotenv][dotenv] package to retrieve the value: MAPS_SUBSCRIPTION_KEY="<subscription-key>" Once your environment variable is created, you can access it in your JavaScript code: ```JavaScript-const { MapsSearchClient, AzureKeyCredential } = require("@azure/maps-search"); +const MapsSearch = require("@azure-rest/maps-search").default; +const { AzureKeyCredential } = require("@azure/core-auth"); require("dotenv").config(); const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);-const client = new MapsSearchClient(credential); +const client = MapsSearch(credential); ``` ## Fuzzy search an entity The following code snippet demonstrates how, in a simple console application, to ```JavaScript -const { MapsSearchClient, AzureKeyCredential } = require("@azure/maps-search"); -require("dotenv").config(); - -async function main() { - // Authenticate with Azure Map Subscription Key - const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY); - const client = new MapsSearchClient(credential); - - // Setup the fuzzy search query - const response = await client.fuzzySearch({ - query: "Starbucks", - coordinates: [47.61010, -122.34255], - countryCodeFilter: ["US"], - }); - - // Log the result - console.log(`Starbucks search result nearby Seattle:`); - response.results.forEach((result) => { - console.log(`\ - * ${result.address.streetNumber} ${result.address.streetName} - ${result.address.municipality} ${result.address.countryCode} ${result.address.postalCode} - Coordinate: (${result.position[0].toFixed(4)}, ${result.position[1].toFixed(4)})\ - `); -} - -main().catch((err) => { - console.error(err); -}); +const MapsSearch = require("@azure-rest/maps-search").default; +const { isUnexpected } = require("@azure-rest/maps-search"); +const { AzureKeyCredential } = require("@azure/core-auth"); +require("dotenv").config(); -``` +async function main() { + // Authenticate with Azure Map Subscription Key + const credential = new AzureKeyCredential( + process.env. MAPS_SUBSCRIPTION_KEY + ); + const client = MapsSearch(credential); ++ // Setup the fuzzy search query + const response = await client.path("/search/fuzzy/{format}", "json").get({ + queryParameters: { + query: "Starbucks", + lat: 47.61559, + lon: -122.33817, + countrySet: ["US"], + }, + }); ++ // Handle the error response + if (isUnexpected(response)) { + throw response.body.error; + } + // Log the result + console.log(`Starbucks search result nearby Seattle:`); + response.body.results.forEach((result) => { + console.log(`\ + * ${result.address.streetNumber} ${result.address.streetName} + ${result.address.municipality} ${result.address.countryCode} ${ + result.address.postalCode + } + Coordinate: (${result.position.lat.toFixed(4)}, ${result.position.lon.toFixed(4)})\ + `); + }); +} ++main().catch((err) => { + console.error(err); +}); -In the above code snippet, you create a `MapsSearchClient` object using your Azure credentials. This is done using your Azure Maps subscription key, however you could use the [Azure AD credential](#using-an-azure-ad-credential) discussed in the previous section. You then pass the search query and options to the `fuzzySearch` method. Search for Starbucks (`query: "Starbucks"`) near Seattle (`coordinates: [47.61010, -122.34255], countryFilter: ["US"]`). For more information, see [FuzzySearchRequest][FuzzySearchRequest] in the [Azure Maps Search client library for JavaScript/TypeScript][JS-SDK]. 
-The method `fuzzySearch` provided by `MapsSearchClient` will forward the request to Azure Maps REST endpoints. When the results are returned, they're written to the console. For more information, see [SearchAddressResult][SearchAddressResult]. +``` +The code snippet above shows how to use the `MapsSearch` method from the Azure Maps Search client library to create a `client` object with your Azure credentials. You can use either your Azure Maps subscription key or the [Azure AD credential](#using-an-azure-ad-credential) from the previous section. The `path` parameter specifies the API endpoint, which is "/search/fuzzy/{format}" in this case. The `get` method sends an HTTP GET request with the query parameters, such as `query`, `lat`, `lon`, and `countrySet`. The query searches for Starbucks locations near Seattle in the US. The SDK returns the results as a [FuzzySearchResult][FuzzySearchResult] object and writes them to the console. For more details, see the [FuzzySearchRequest][FuzzySearchRequest] documentation. + Run `search.js` with Node.js: ```powershell node search.js ## Search an Address -The [searchAddress][searchAddress] method can be used to get the coordinates of an address. Modify the `search.js` from the sample as follows: +The [searchAddress][searchAddress] query can be used to get the coordinates of an address. Modify the `search.js` from the sample as follows: ```JavaScript-const { MapsSearchClient, AzureKeyCredential } = require("@azure/maps-search"); +const MapsSearch = require("@azure-rest/maps-search").default; +const { isUnexpected } = require("@azure-rest/maps-search"); +const { AzureKeyCredential } = require("@azure/core-auth"); require("dotenv").config(); async function main() {- const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY); - const client = new MapsSearchClient(credential); -- const response = await client.searchAddress( - "1912 Pike Pl, Seattle, WA 98101, US" + const credential = new AzureKeyCredential( + process.env.MAPS_SUBSCRIPTION_KEY );+ const client = MapsSearch(credential); - console.log(`The coordinate is: ${response.results[0].position}`);} + const response = await client.path("/search/address/{format}", "json").get({ + queryParameters: { + query: "1301 Alaskan Way, Seattle, WA 98101, US", + }, + }); + if (isUnexpected(response)) { + throw response.body.error; + } + const { lat, lon } = response.body.results[0].position; + console.log(`The coordinate is: (${lat}, ${lon})`); +} main().catch((err) => { console.error(err); });+ ``` -The results returned from `client.searchAddress` are ordered by confidence score and in this example only the first result returned with be displayed to the screen. +The results are ordered by confidence score and, in this example, only the first result returned is displayed to the screen. ## Batch reverse search -Azure Maps Search also provides some batch query methods. These methods will return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so you can wait until completion or query the result periodically. The example below demonstrates how to call batched reverse search method: +Azure Maps Search also provides some batch query methods. These methods return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so you can use the poller to wait until completion.
The following example demonstrates how to call the batched reverse search method: ```JavaScript- const poller = await client.beginReverseSearchAddressBatch([ + const batchItems = createBatchItems([ // This is an invalid query- { coordinates: [148.858561, 2.294911] }, - { - coordinates: [47.61010, -122.34255], - }, - { coordinates: [47.6155, -122.33817] }, - options: { radiusInMeters: 5000 }, + { query: [148.858561, 2.294911] }, + { query: [47.61010, -122.34255] }, + { query: [47.61559, -122.33817], radius: 5000 }, ]);+ const initialResponse = await client.path("/search/address/reverse/batch/{format}", "json").post({ + body: { batchItems }, + }); ``` -In this example, three queries are passed into the _batched reverse search_ request. The first query is invalid, see [Handing failed requests](#handing-failed-requests) for an example showing how to handle the invalid query. --Use the `getResult` method from the poller to check the current result. You check the status using `getOperationState` to see if the poller is still running. If it is, you can keep calling `poll` until the operation is finished: --```JavaScript - while (poller.getOperationState().status === "running") { - const partialResponse = poller.getResult(); - logResponse(partialResponse) - await poller.poll(); - } -``` +In this example, three queries are passed to the helper function `createBatchItems`, which is imported from `@azure-rest/maps-search`. This helper function composes the body of the batch request. The first query is invalid; see [Handling failed requests](#handling-failed-requests) for an example showing how to handle the invalid query. -Alternatively, you can wait until the operation has completed, by using `pollUntilDone()`: +Use the `getLongRunningPoller` method with the `initialResponse` to get the poller. Then you can use `pollUntilDone` to get the final result: ```JavaScript-const response = await poller.pollUntilDone(); -logResponse(response) + const poller = getLongRunningPoller(client, initialResponse); + const response = await poller.pollUntilDone(); + logResponseBody(response.body); ``` -A common scenario for LRO is to resume a previous operation later. Do that by serializing the poller's state with the `toString` method, and rehydrating the state with a new poller using `resumeReverseSearchAddressBatch`: +A common scenario for LRO is to resume a previous operation later.
Do that by serializing the poller's state with the `toString` method, and rehydrating the state with a new poller using the `resumeFrom` option in `getLongRunningPoller`: ```JavaScript const serializedState = poller.toString();- const rehydratedPoller = await client.resumeReverseSearchAddressBatch( - serializedState - ); - const response = await rehydratedPoller.pollUntilDone(); - logResponse(response); + const rehydratedPoller = getLongRunningPoller(client, initialResponse, { + resumeFrom: serializedState, + }); ++ const resumeResponse = await rehydratedPoller.pollUntilDone(); + logResponseBody(resumeResponse.body); ``` Once you get the response, you can log it: ```JavaScript-function logResponse(response) { - console.log( - `${response.totalSuccessfulRequests}/${response.totalRequests} succeed.` - ); - response.batchItems.forEach((item, idx) => { - console.log(`The result for ${idx + 1}th request:`); - // Check if the request is failed - if (item.response.error) { - console.error(item.response.error); + +function logResponseBody(resBody) { + const { summary, batchItems } = resBody; ++ const { totalRequests, successfulRequests } = summary; + console.log(`${successfulRequests} out of ${totalRequests} requests are successful.`); ++ batchItems.forEach(({ response }, idx) => { + if (response.error) { + console.log(`Error in ${idx + 1} request: ${response.error.message}`); } else {- item.response.results.forEach((result) => { - console.log(result.address.freeformAddress); + console.log(`Results in ${idx + 1} request:`); + response.addresses.forEach(({ address }) => { + console.log(` ${address.freeformAddress}`); }); } });-} +} + ``` ### Handling failed requests -Handle failed requests by checking for the `error` property in the response batch item. See the `logResponse` function in the completed batch reverse search example below. +Handle failed requests by checking for the `error` property in the response batch item. See the `logResponseBody` function in the completed batch reverse search example below.
### Completed batch reverse search example The complete code for the reverse address batch search example: ```JavaScript-const { MapsSearchClient, AzureKeyCredential } = require("@azure/maps-search"); +const MapsSearch = require("@azure-rest/maps-search").default, + { createBatchItems, getLongRunningPoller } = require("@azure-rest/maps-search"); +const { AzureKeyCredential } = require("@azure/core-auth"); require("dotenv").config(); async function main() { const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);- const client = new MapsSearchClient(credential); + const client = MapsSearch(credential); - const poller = await client.beginReverseSearchAddressBatch([ + const batchItems = createBatchItems([ // This is an invalid query- { coordinates: [148.858561, 2.294911] }, + { query: [148.858561, 2.294911] }, {- coordinates: [47.61010, -122.34255], + query: [47.6101, -122.34255], },- { coordinates: [47.6155, -122.33817] }, - options: { radiusInMeters: 5000 }, + { query: [47.6155, -122.33817], radius: 5000 }, ]); - // Get the partial result and keep polling - while (poller.getOperationState().status === "running") { - const partialResponse = poller.getResult(); - logResponse(partialResponse); - await poller.poll(); - } + const initialResponse = await client.path("/search/address/reverse/batch/{format}", "json").post({ + body: { batchItems }, + }); + const poller = getLongRunningPoller(client, initialResponse); - // You can simply wait for the operation is done - // const response = await poller.pollUntilDone(); - // logResponse(response) + const response = await poller.pollUntilDone(); + logResponseBody(response.body); - // Resume the poller const serializedState = poller.toString();- const rehydratedPoller = await client.resumeReverseSearchAddressBatch( - serializedState - ); - const response = await rehydratedPoller.pollUntilDone(); - logResponse(response); + const rehydratedPoller = getLongRunningPoller(client, initialResponse, { + resumeFrom: serializedState, + }); + const resumeResponse = await rehydratedPoller.pollUntilDone(); + logResponseBody(resumeResponse.body); } -function logResponse(response) { - console.log( - `${response.totalSuccessfulRequests}/${response.totalRequests} succeed.` - ); - response.batchItems.forEach((item, idx) => { - console.log(`The result for ${idx + 1}th request:`); - if (item.response.error) { - console.error(item.response.error); +function logResponseBody(resBody) { + const { summary, batchItems } = resBody; ++ const { totalRequests, successfulRequests } = summary; + console.log(`${successfulRequests} out of ${totalRequests} requests are successful.`); ++ batchItems.forEach(({ response }, idx) => { + if (response.error) { + console.log(`Error in ${idx + 1} request: ${response.error.message}`); } else {- item.response.results.forEach((result) => { - console.log(result.address.freeformAddress); + console.log(`Results in ${idx + 1} request:`); + response.addresses.forEach(({ address }) => { + console.log(` ${address.freeformAddress}`); }); } }); } -main().catch((err) => { - console.error(err); -}); +main().catch(console.error); + ``` ## Additional information - The [Azure Maps Search client library for JavaScript/TypeScript][JS-SDK]. 
-[JS-SDK]: /javascript/api/overview/azure/maps-search-readme?view=azure-node-preview +[JS-SDK]: /javascript/api/@azure-rest/maps-search [defaultazurecredential]: https://github.com/Azure/azure-sdk-for-js/tree/@azure/maps-search_1.0.0-beta.1/sdk/identity/identity#defaultazurecredential -[searchAddress]: /javascript/api/@azure/maps-search/mapssearchclient?view=azure-node-preview#@azure-maps-search-mapssearchclient-searchaddress +[searchAddress]: /javascript/api/@azure-rest/maps-search/searchaddress -[FuzzySearchRequest]: /javascript/api/@azure/maps-search/fuzzysearchrequest?view=azure-node-preview +[FuzzySearchRequest]: /javascript/api/@azure-rest/maps-search/fuzzysearch + +[FuzzySearchResult]: /javascript/api/@azure-rest/maps-search/searchfuzzysearch200response -[SearchAddressResult]: /javascript/api/@azure/maps-search/searchaddressresult?view=azure-node-preview [search]: /rest/api/maps/search [Node.js Release]: https://github.com/nodejs/release#release-schedule main().catch((err) => { [authentication]: azure-maps-authentication.md [Identity library]: /javascript/api/overview/azure/identity-readme+[core auth package]: /javascript/api/@azure/core-auth/ [Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources [dotenv]: https://github.com/motdotla/dotenv#readme |
azure-monitor | Alerts Create New Alert Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md | Then you define these elements for the resulting alert actions by using: ### [Resource Health alert](#tab/resource-health) - 1. Enter values for the **Alert rule name** and the **Alert rule description**. - 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. + 1. Enter values for the **Alert rule name** and the **Alert rule description**. + 1. Select the **Region**. + 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. + 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload. ++ > [!NOTE] + > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for resource health alerts. ++ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: ### [Service Health alert](#tab/service-health) 1. Enter values for the **Alert rule name** and the **Alert rule description**.+ 1. Select the **Region**. 1. (Optional) In the **Advanced options** section, select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.+ 1. (Optional) If you've configured action groups for this alert rule, you can add custom properties to the alert payload to add more information to the payload. In the **Custom properties** section, add the property **Name** and **Value** for the custom property you want included in the payload. + > [!NOTE] + > The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for service health alerts. ++ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: + 1. On the **Tags** tab, set any required tags on the alert rule resource. |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | telemetry.trackEvent({name: "WinGame"}); ### Custom events in Log Analytics -The telemetry is available in the `customEvents` table on the [Application Insights Logs tab](../logs/log-query-overview.md) or [usage experience](usage-overview.md). Events might come from `trackEvent(..)` or the [Click Analytics Auto-collection plug-in](javascript-click-analytics-plugin.md). +The telemetry is available in the `customEvents` table on the [Application Insights Logs tab](../logs/log-query-overview.md) or [usage experience](usage-overview.md). Events might come from `trackEvent(..)` or the [Click Analytics Auto-collection plug-in](javascript-feature-extensions.md). If [sampling](./sampling.md) is in operation, the `itemCount` property shows a value greater than `1`. For example, `itemCount==10` means that of 10 calls to `trackEvent()`, the sampling process transmitted only one of them. To get a correct count of custom events, use code such as `customEvents | summarize sum(itemCount)`. |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Application Insights provides other features including, but not limited to: - [Live Metrics](live-stream.md) – observe activity from your deployed application in real time with no effect on the host environment - [Availability](availability-overview.md) – also known as "Synthetic Transaction Monitoring", probe your application's external endpoint(s) to test the overall availability and responsiveness over time-- [GitHub or Azure DevOps integration](work-item-integration.md) – create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/?view=azure-devops) work items in context of Application Insights data +- [GitHub or Azure DevOps integration](work-item-integration.md) – create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/) work items in the context of Application Insights data - [Usage](usage-overview.md) – understand which features are popular with users and how users interact and use your application - [Smart Detection](proactive-diagnostics.md) – automatic failure and anomaly detection through proactive telemetry analysis Supported platforms and frameworks are listed here. * [Node.js](./nodejs.md) * [Python](./opencensus-python.md) * [JavaScript - web](./javascript.md)- * [React](./javascript-react-plugin.md) - * [React Native](./javascript-react-native-plugin.md) - * [Angular](./javascript-angular-plugin.md) + * [React](./javascript-framework-extensions.md) + * [React Native](./javascript-framework-extensions.md) + * [Angular](./javascript-framework-extensions.md) * [Windows desktop applications, services, and worker roles](https://github.com/Microsoft/appcenter) * [Universal Windows app](https://github.com/Microsoft/appcenter) (App Center) * [Android](https://github.com/Microsoft/appcenter) (App Center) |
azure-monitor | Convert Classic Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md | Workspace-based resources: > - Are available in all commercial regions and [Azure US Government](../../azure-government/index.yml). > - Don't require changing instrumentation keys after migration from a classic resource. +> [!IMPORTANT] +> * On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation. +> * When you [migrate to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](export-telemetry.md#diagnostic-settings-based-export) for exporting telemetry. All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry). +> * Diagnostic settings export might increase costs. For more information, see [Diagnostic settings-based export](export-telemetry.md#diagnostic-settings-based-export). + ## New capabilities Workspace-based Application Insights resources allow you to take advantage of the latest capabilities of Azure Monitor and Log Analytics: When you migrate to a workspace-based resource, no data is transferred from your classic resource's storage to the new workspace-based storage. Choosing to migrate changes the destination of new data to a Log Analytics workspace while preserving access to your classic resource data. -Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). +Your classic resource data persists and is subject to the retention settings on your classic Application Insights resource. All new data ingested post migration is subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). *The migration process is permanent and can't be reversed.* After you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. After you migrate, you can change the target workspace as often as needed. If you don't need to migrate an existing resource, and instead want to create a - Workspace-based Application Insights resources aren't compatible with workspaces set to the dedicated **workspace-based permissions** setting. To learn more about Log Analytics workspace access control, see the [Access control mode guidance](../logs/manage-access.md#access-control-mode). - If you don't already have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md). -- **Continuous export** isn't supported for workspace-based resources and must be disabled. 
After the migration is finished, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs.+- **Continuous export** isn't compatible with workspace-based resources and must be disabled. After the migration is finished, you can use [diagnostic settings](../essentials/diagnostic-settings.md) to configure data archiving to a storage account or streaming to Azure Event Hubs. > [!CAUTION] > * Diagnostic settings use a different export format/schema than continuous export. Migrating breaks any existing integrations with Azure Stream Analytics. You might have multiple Application Insights resources that store telemetry in o - Go to your Application Insights resource and select the **Logs** tab. All queries from this tab automatically pull data from the selected Application Insights resource. - Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and select the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in `_ResourceId` property that's available in all application-specific tables. -If you query directly from the Log Analytics workspace, you'll only see data that's ingested post migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the **Logs** tab from within your migrated Application Insights resource. +When you query directly from the Log Analytics workspace, you only see data that's ingested post migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the **Logs** tab from within your migrated Application Insights resource. > [!NOTE] > If you rename your Application Insights resource after you migrate to the workspace-based model, the Application Insights **Logs** tab no longer shows the telemetry collected before renaming. You can see all old and new data on the **Logs** tab of the associated Log Analytics resource. To access the preview Application Insights Azure CLI commands, you first need to az extension add -n application-insights ``` -If you don't run the `az extension add` command, you'll see an error message that states `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.` +If you don't run the `az extension add` command, you see an error message that states `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.` Now you can run the following code to create your Application Insights resource: This section provides answers to common questions. There's usually no difference, with a couple of exceptions: - Migrated Application Insights resources can use [Log Analytics commitment tiers](../logs/cost-logs.md#commitment-tiers) to reduce cost if the data volumes in the workspace are high enough.+ - Grandfathered Application Insights resources no longer get 1 GB per month free from the original Application Insights pricing model. ### How will telemetry capping work? There's no strict billing capping available. There are no changes to ingestion-based sampling. -### Will there be any gap in data collected during migration? +### Are there gaps in data collected during migration? No. We merge data during query time. -### Will my old log queries continue to work? +### Do old log queries continue to work? 
-Yes, they'll continue to work. +Yes, they continue to work. ### Will my dashboards that have pinned metric and log charts continue to work after migration? -Yes, they'll continue to work. +Yes, they continue to work. -### Will migration affect AppInsights API accessing data? +### Does migration affect AppInsights API access to data? -No. Migration won't affect existing API access to data. After migration, you can access data directly from a workspace by using a [slightly different schema](#workspace-based-resource-changes). +No. Migration doesn't affect existing API access to data. After migration, you can access data directly from a workspace by using a [slightly different schema](#workspace-based-resource-changes). -### Will there be any impact on Live Metrics or other monitoring experiences? +### Is there any impact on Live Metrics or other monitoring experiences? No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor-and-diagnose-with-1-second-latency) or other monitoring experiences. The legacy **Continuous export** functionality isn't supported for workspace-bas  - - After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings won't be saved, select **OK**. This prompt doesn't pertain to disabling or enabling continuous export. + - After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings aren't saved, select **OK**. This prompt doesn't pertain to disabling or enabling continuous export. - After you've successfully migrated your Application Insights resource to workspace-based, you can use diagnostic settings to replace the functionality that continuous export used to provide. Select **Diagnostics settings** > **Add diagnostic setting** in your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account or stream to Azure Event Hubs. For more information on diagnostic settings, see the [Azure Monitor diagnostic settings guidance](../essentials/diagnostic-settings.md). |
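The `_ResourceId` filtering described in this entry can also be scripted. Below is a hedged sketch using the `@azure/monitor-query` and `@azure/identity` npm packages, which this entry doesn't itself mention; the workspace ID, resource ID, and query are placeholders, not a definitive recipe.

```javascript
// Sketch: query a Log Analytics workspace and scope the results to one
// migrated Application Insights resource via the built-in _ResourceId column.
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient } from "@azure/monitor-query";

const client = new LogsQueryClient(new DefaultAzureCredential());

async function main() {
  const result = await client.queryWorkspace(
    "<workspace-id>", // placeholder: the target Log Analytics workspace
    `AppRequests
     | where _ResourceId == "/subscriptions/<sub>/resourcegroups/<rg>/providers/microsoft.insights/components/<app>"
     | summarize count() by bin(TimeGenerated, 1h)`,
    { duration: "P1D" } // ISO 8601 duration: query the last day of data
  );
  // On success, the result contains one table per query statement.
  for (const table of result.tables ?? []) {
    console.log(table.rows);
  }
}

main().catch(console.error);
```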
azure-monitor | Javascript Angular Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-angular-plugin.md | - Title: Angular plug-in for Application Insights JavaScript SDK -description: Learn how to install and use the Angular plug-in for the Application Insights JavaScript SDK. --- Previously updated : 01/10/2023----# Angular plug-in for the Application Insights JavaScript SDK --The Angular plug-in for the Application Insights JavaScript SDK enables: --- Tracking of router changes.-- Tracking uncaught exceptions.--> [!WARNING] -> The Angular plug-in *isn't* ECMAScript 3 (ES3) compatible. --When we add support for a new Angular version, our npm package becomes incompatible with down-level Angular versions. Continue to use older npm packages until you're ready to upgrade your Angular version. --## Get started --Install an npm package: --```bash -npm install @microsoft/applicationinsights-angularplugin-js @microsoft/applicationinsights-web --save -``` --## Basic usage --Set up an instance of Application Insights in the entry component in your app: ---```js -import { Component } from '@angular/core'; -import { ApplicationInsights } from '@microsoft/applicationinsights-web'; -import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js'; -import { Router } from '@angular/router'; --@Component({ - selector: 'app-root', - templateUrl: './app.component.html', - styleUrls: ['./app.component.css'] -}) -export class AppComponent { - constructor( - private router: Router - ){ - var angularPlugin = new AngularPlugin(); - const appInsights = new ApplicationInsights({ config: { - connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', - extensions: [angularPlugin], - extensionConfig: { - [angularPlugin.identifier]: { router: this.router } - } - } }); - appInsights.loadAppInsights(); - } -} -``` --To track uncaught exceptions, set up `ApplicationinsightsAngularpluginErrorService` in `app.module.ts`: --```js -import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applicationinsights-angularplugin-js'; --@NgModule({ - ... - providers: [ - { - provide: ErrorHandler, - useClass: ApplicationinsightsAngularpluginErrorService - } - ] - ... -}) -export class AppModule { } -``` --## Enable correlation --Correlation generates and sends data that enables distributed tracing and powers [Application Map](../app/app-map.md), the [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. --In JavaScript, correlation is turned off by default to minimize the telemetry we send by default. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). --### Route tracking --The Angular plug-in automatically tracks route changes and collects other Angular-specific telemetry. --> [!NOTE] -> Set `enableAutoRouteTracking` to `false`. If it's set to `true`, when the route changes, duplicate `PageViews` might be sent. --### PageView --If a custom `PageView` duration isn't provided, the `PageView` duration defaults to a value of `0`. --## Next steps --- To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md).-- See the [Angular plug-in on GitHub](https://github.com/microsoft/applicationinsights-angularplugin-js). |
azure-monitor | Javascript Feature Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md | + + Title: Feature extensions for Application Insights JavaScript SDK (Click Analytics) +description: Learn how to install and use JavaScript feature extensions (Click Analytics) for Application Insights JavaScript SDK. ++ ibiza + Last updated : 02/13/2023+ms.devlang: javascript ++++# Feature extensions for Application Insights JavaScript SDK (Click Analytics) ++Application Insights JavaScript SDK feature extensions are plug-ins that can be added to the SDK to enhance its functionality. ++In this article, we cover the Click Analytics plugin that automatically tracks click events on web pages and uses data-* attributes on HTML elements to populate event telemetry. +++## Getting started ++Users can set up the Click Analytics Auto-collection plugin via npm. ++### npm setup ++Install the npm package: ++```bash +npm install --save @microsoft/applicationinsights-clickanalytics-js @microsoft/applicationinsights-web +``` ++```js ++import { ApplicationInsights } from '@microsoft/applicationinsights-web'; +import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js'; ++const clickPluginInstance = new ClickAnalyticsPlugin(); +// Click Analytics configuration +const clickPluginConfig = { + autoCapture: true +}; +// Application Insights Configuration +const configObj = { + instrumentationKey: "YOUR INSTRUMENTATION KEY", + extensions: [clickPluginInstance], + extensionConfig: { + [clickPluginInstance.identifier]: clickPluginConfig + }, +}; ++const appInsights = new ApplicationInsights({ config: configObj }); +appInsights.loadAppInsights(); +``` ++## Snippet setup (skip if using the npm setup) ++```html +<script type="text/javascript" src="https://js.monitor.azure.com/scripts/b/ext/ai.clck.2.6.2.min.js"></script> +<script type="text/javascript"> + var clickPluginInstance = new Microsoft.ApplicationInsights.ClickAnalyticsPlugin(); + // Click Analytics configuration + var clickPluginConfig = { + autoCapture : true, + dataTags: { + useDefaultContentNameOrId: true + } + } + // Application Insights Configuration + var configObj = { + instrumentationKey: "YOUR INSTRUMENTATION KEY", + extensions: [ + clickPluginInstance + ], + extensionConfig: { + [clickPluginInstance.identifier] : clickPluginConfig + }, + }; + // Application Insights Snippet code + !function(T,l,y){<!-- Removed the Snippet code for brevity -->}(window,document,{ + src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", + crossOrigin: "anonymous", + cfg: configObj + }); +</script> +``` ++## How to effectively use the plugin ++1. Telemetry data generated from the click events is stored as `customEvents` in the Application Insights section of the Azure portal. +2. The `name` of the customEvent is populated based on the following rules: + 1. The `id` provided in the `data-*-id` attribute is used as the customEvent name. For example, if the clicked HTML element has the attribute "data-sample-id"="button1", then "button1" is the customEvent name. + 2. If no such attribute exists and if the `useDefaultContentNameOrId` is set to `true` in the configuration, then the clicked element's HTML attribute `id` or content name of the element is used as the customEvent name. If both `id` and content name are present, precedence is given to `id`. + 3. If `useDefaultContentNameOrId` is false, then the customEvent name is "not_specified". 
++ > [!TIP] + > We recommend setting `useDefaultContentNameOrId` to `true` for generating meaningful data. +3. `parentDataTag` does two things: + 1. If this tag is present, the plugin fetches the `data-*` attributes and values from all the parent HTML elements of the clicked element. + 2. To improve efficiency, the plugin uses this tag as a flag; when it's encountered, the plugin stops further processing of the DOM (Document Object Model) upwards. + + > [!CAUTION] + > Once `parentDataTag` is used, the SDK will begin looking for parent tags across your entire application and not just the HTML element where you used it. +4. The `customDataPrefix` provided by the user should always start with `data-`, for example `data-sample-`. In HTML, the `data-*` global attributes are called custom data attributes that allow proprietary information to be exchanged between the HTML and its DOM representation by scripts. Older browsers (Internet Explorer, Safari) drop attributes that they don't understand, unless they start with `data-`. ++ The `*` in `data-*` may be replaced by any name following the [production rule of XML names](https://www.w3.org/TR/REC-xml/#NT-Name) with the following restrictions: + - The name must not start with "xml", whatever case is used for these letters. + - The name must not contain a colon (U+003A). + - The name must not contain capital letters. ++ A configuration sketch that combines these tagging options appears at the end of this entry. ++## What data does the plugin collect ++The following are some of the key properties captured by default when the plugin is enabled: ++### Custom Event Properties +| Name | Description | Sample | +| | |--| +| Name | The `name` of the customEvent. For more information on how the name gets populated, see [How to effectively use the plugin](#how-to-effectively-use-the-plugin).| About | +| itemType | Type of event. | customEvent | +|sdkVersion | Version of Application Insights SDK along with click plugin|JavaScript:2.6.2_ClickPlugin2.6.2| ++### Custom Dimensions +| Name | Description | Sample | +| | |--| +| actionType | Action type that caused the click event. Can be left or right click. | CL | +| baseTypeSource | Base Type source of the custom event. | ClickEvent | +| clickCoordinates | Coordinates where the click event is triggered. | 659X47 | +| content | Placeholder to store extra `data-*` attributes and values. | [{sample1:value1, sample2:value2}] | +| pageName | Title of the page where the click event is triggered. | Sample Title | +| parentId | ID or name of the parent element | navbarContainer | ++### Custom Measurements +| Name | Description | Sample | +| | |--| +| timeToAction | Time taken in milliseconds for the user to click the element since initial page load | 87407 | ++## Configuration ++| Name | Type | Default | Description | +| | --| --| - | +| autoCapture | Boolean | True | Automatic capture configuration. | +| callback | [IValueCallback](#ivaluecallback) | Null | Callbacks configuration. | +| pageTags | String | Null | Page tags. | +| dataTags | [ICustomDataTags](#icustomdatatags)| Null | Custom Data Tags provided to override default tags used to capture click data. | +| urlCollectHash | Boolean | False | Enables the logging of values after a "#" character of the URL. | +| urlCollectQuery | Boolean | False | Enables the logging of the query string of the URL. | +| behaviorValidator | Function | Null | Callback function to use for the `data-*-bhvr` value validation. For more information, see the [behaviorValidator section](#behaviorvalidator).| +| defaultRightClickBhvr | String (or) number | '' | Default behavior value when a right-click event has occurred. 
This value is overridden if the element has the `data-*-bhvr` attribute. | +| dropInvalidEvents | Boolean | False | Flag to drop events that don't have useful click data. | ++### IValueCallback ++| Name | Type | Default | Description | +| | -- | - | | +| pageName | Function | Null | Function to override the default pageName capturing behavior. | +| pageActionPageTags | Function | Null | A callback function to augment the default pageTags collected during a pageAction event. | +| contentName | Function | Null | A callback function to populate customized contentName. | ++### ICustomDataTags ++| Name | Type | Default | Default Tag to Use in HTML | Description | +|||--|-|-| +| useDefaultContentNameOrId | Boolean | False | N/A |Collects the standard HTML attribute for contentName when a particular element isn't tagged with the default customDataPrefix or when the customDataPrefix isn't provided by the user. | +| customDataPrefix | String | `data-` | `data-*`| Automatically captures the content name and value of elements that are tagged with the provided prefix. For example, `data-*-id`, `data-<yourcustomattribute>` can be used in the HTML tags. | +| aiBlobAttributeTag | String | `ai-blob` | `data-ai-blob`| The plugin supports a JSON blob attribute instead of individual `data-*` attributes. | +| metaDataPrefix | String | Null | N/A | Automatically captures the name and content of HTML head meta elements that have the provided prefix. For example, `custom-` can be used in the HTML meta tag. | +| captureAllMetaDataContent | Boolean | False | N/A | Automatically captures all of the HTML head's meta element names and content. If enabled, it overrides the provided metaDataPrefix. | +| parentDataTag | String | Null | N/A | Stops traversing up the DOM to capture content names and values when an element with this tag is encountered. For example, `data-<yourparentDataTag>` can be used in the HTML tags.| +| dntDataTag | String | `ai-dnt` | `data-ai-dnt`| HTML elements with this attribute are ignored by the plugin for capturing telemetry data.| ++### behaviorValidator ++The behaviorValidator functions automatically check that tagged behaviors in code conform to a predefined list, which ensures tagged behaviors are consistent with your enterprise's established taxonomy. It isn't required or expected that most Azure Monitor customers use these functions, but they're available for advanced scenarios. There are three different behaviorValidator callback functions exposed as part of this extension, and users can also supply their own callback functions if the exposed functions don't meet their requirements. The intent is that you bring your own behaviors data structure; the plugin then uses the validator function while extracting the behaviors from the data tags. ++| Name | Description | +| - | --| +| BehaviorValueValidator | Use this callback function if your behaviors data structure is an array of strings.| +| BehaviorMapValidator | Use this callback function if your behaviors data structure is a dictionary. | +| BehaviorEnumValidator | Use this callback function if your behaviors data structure is an Enum. 
| ++#### Sample usage with behaviorValidator ++```js +var clickPlugin = Microsoft.ApplicationInsights.ClickAnalyticsPlugin; +var clickPluginInstance = new clickPlugin(); ++// Behavior enum values +var behaviorMap = { + UNDEFINED: 0, // default, Undefined ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Page Experience [1-19] + /////////////////////////////////////////////////////////////////////////////////////////////////// + NAVIGATIONBACK: 1, // Advancing to the previous index position within a webpage + NAVIGATION: 2, // Advancing to a specific index position within a webpage + NAVIGATIONFORWARD: 3, // Advancing to the next index position within a webpage + APPLY: 4, // Applying filter(s) or making selections + REMOVE: 5, // Applying filter(s) or removing selections + SORT: 6, // Sorting content + EXPAND: 7, // Expanding content or content container + REDUCE: 8, // Reducing content or content container + CONTEXTMENU: 9, // Context Menu + TAB: 10, // Tab control + COPY: 11, // Copy the contents of a page + EXPERIMENTATION: 12, // Used to identify a third party experimentation event + PRINT: 13, // User printed page + SHOW: 14, // Displaying an overlay + HIDE: 15, // Hiding an overlay + MAXIMIZE: 16, // Maximizing an overlay + MINIMIZE: 17, // Minimizing an overlay + BACKBUTTON: 18, // Clicking the back button ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Scenario Process [20-39] + /////////////////////////////////////////////////////////////////////////////////////////////////// + STARTPROCESS: 20, // Initiate a web process unique to adopter + PROCESSCHECKPOINT: 21, // Represents a checkpoint in a web process unique to adopter + COMPLETEPROCESS: 22, // Page Actions that complete a web process unique to adopter + SCENARIOCANCEL: 23, // Actions resulting from cancelling a process/scenario ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Download [40-59] + /////////////////////////////////////////////////////////////////////////////////////////////////// + DOWNLOADCOMMIT: 40, // Initiating an unmeasurable off-network download + DOWNLOAD: 41, // Initiating a download ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Search [60-79] + /////////////////////////////////////////////////////////////////////////////////////////////////// + SEARCHAUTOCOMPLETE: 60, // Auto-completing a search query during user input + SEARCH: 61, // Submitting a search query + SEARCHINITIATE: 62, // Initiating a search query + TEXTBOXINPUT: 63, // Typing or entering text in the text box ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Commerce [80-99] + /////////////////////////////////////////////////////////////////////////////////////////////////// + VIEWCART: 82, // Viewing the cart + ADDWISHLIST: 83, // Adding a physical or digital good or services to a wishlist + FINDSTORE: 84, // Finding a physical store + CHECKOUT: 85, // Before you fill in credit card info + REMOVEFROMCART: 86, // Remove an item from the cart + PURCHASECOMPLETE: 87, // Used to track the pageView event that happens when the CongratsPage or Thank You page loads after a successful purchase + VIEWCHECKOUTPAGE: 88, // View the checkout page + VIEWCARTPAGE: 89, // View the cart page + VIEWPDP: 90, // View a PDP + UPDATEITEMQUANTITY: 91, // Update an item's quantity + 
INTENTTOBUY: 92, // User has the intent to buy an item + PUSHTOINSTALL: 93, // User has selected the push to install option ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Authentication [100-119] + /////////////////////////////////////////////////////////////////////////////////////////////////// + SIGNIN: 100, // User sign-in + SIGNOUT: 101, // User sign-out ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Social [120-139] + /////////////////////////////////////////////////////////////////////////////////////////////////// + SOCIALSHARE: 120, // "Sharing" content for a specific social channel + SOCIALLIKE: 121, // "Liking" content for a specific social channel + SOCIALREPLY: 122, // "Replying" content for a specific social channel + CALL: 123, // Click on a "call" link + EMAIL: 124, // Click on an "email" link + COMMUNITY: 125, // Click on a "community" link ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Feedback [140-159] + /////////////////////////////////////////////////////////////////////////////////////////////////// + VOTE: 140, // Rating content or voting for content + SURVEYCHECKPOINT: 145, // Reaching the survey page/form ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Registration, Contact [160-179] + /////////////////////////////////////////////////////////////////////////////////////////////////// + REGISTRATIONINITIATE: 161, // Initiating a registration process + REGISTRATIONCOMPLETE: 162, // Completing a registration process + CANCELSUBSCRIPTION: 163, // Canceling a subscription + RENEWSUBSCRIPTION: 164, // Renewing a subscription + CHANGESUBSCRIPTION: 165, // Changing a subscription + REGISTRATIONCHECKPOINT: 166, // Reaching the registration page/form ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Chat [180-199] + /////////////////////////////////////////////////////////////////////////////////////////////////// + CHATINITIATE: 180, // Initiating a chat experience + CHATEND: 181, // Ending a chat experience ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Trial [200-209] + /////////////////////////////////////////////////////////////////////////////////////////////////// + TRIALSIGNUP: 200, // Signing-up for a trial + TRIALINITIATE: 201, // Initiating a trial ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Signup [210-219] + /////////////////////////////////////////////////////////////////////////////////////////////////// + SIGNUP: 210, // Signing-up for a notification or service + FREESIGNUP: 211, // Signing-up for a free service ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Referrals [220-229] + /////////////////////////////////////////////////////////////////////////////////////////////////// + PARTNERREFERRAL: 220, // Navigating to a partner's web property ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Intents [230-239] + /////////////////////////////////////////////////////////////////////////////////////////////////// + LEARNLOWFUNNEL: 230, // Engaging in learning behavior on a commerce page (ex. 
"Learn more click") + LEARNHIGHFUNNEL: 231, // Engaging in learning behavior on a non-commerce page (ex. "Learn more click") + SHOPPINGINTENT: 232, // Shopping behavior prior to landing on a commerce page ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Video [240-259] + /////////////////////////////////////////////////////////////////////////////////////////////////// + VIDEOSTART: 240, // Initiating a video + VIDEOPAUSE: 241, // Pausing a video + VIDEOCONTINUE: 242, // Pausing or resuming a video. + VIDEOCHECKPOINT: 243, // Capturing predetermined video percentage complete. + VIDEOJUMP: 244, // Jumping to a new video location. + VIDEOCOMPLETE: 245, // Completing a video (or % proxy) + VIDEOBUFFERING: 246, // Capturing a video buffer event + VIDEOERROR: 247, // Capturing a video error + VIDEOMUTE: 248, // Muting a video + VIDEOUNMUTE: 249, // Unmuting a video + VIDEOFULLSCREEN: 250, // Making a video full screen + VIDEOUNFULLSCREEN: 251, // Making a video return from full screen to original size + VIDEOREPLAY: 252, // Making a video replay + VIDEOPLAYERLOAD: 253, // Loading the video player + VIDEOPLAYERCLICK: 254, // Click on a button within the interactive player + VIDEOVOLUMECONTROL: 255, // Click on video volume control + VIDEOAUDIOTRACKCONTROL: 256, // Click on audio control within a video + VIDEOCLOSEDCAPTIONCONTROL: 257, // Click on the closed caption control + VIDEOCLOSEDCAPTIONSTYLE: 258, // Click to change closed caption style + VIDEORESOLUTIONCONTROL: 259, // Click to change resolution ++ /////////////////////////////////////////////////////////////////////////////////////////////////// + // Advertisement Engagement [280-299] + /////////////////////////////////////////////////////////////////////////////////////////////////// + ADBUFFERING: 283, // Ad is buffering + ADERROR: 284, // Ad error + ADSTART: 285, // Ad start + ADCOMPLETE: 286, // Ad complete + ADSKIP: 287, // Ad skipped + ADTIMEOUT: 288, // Ad timed-out + OTHER: 300 // Other +}; ++// Application Insights Configuration +var configObj = { + instrumentationKey: "YOUR INSTRUMENTATION KEY", + extensions: [clickPluginInstance], + extensionConfig: { + [clickPluginInstance.identifier]: { + behaviorValidator: Microsoft.ApplicationInsights.BehaviorMapValidator(behaviorMap), + defaultRightClickBhvr: 9 + }, + }, +}; +var appInsights = new Microsoft.ApplicationInsights.ApplicationInsights({ + config: configObj +}); +appInsights.loadAppInsights(); +``` ++## Enable Correlation ++Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. ++JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation, reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). ++## Sample app ++[Simple web app with Click Analytics Auto-collection Plugin enabled](https://go.microsoft.com/fwlink/?linkid=2152871). ++## Next steps ++- Check out the [documentation on utilizing HEART Workbook](usage-heart.md) for expanded product analytics. +- Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto-Collection Plugin. 
+- Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions. +- Find click data under the content field within the customDimensions attribute in the CustomEvents table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). For more information, see [Sample App](https://go.microsoft.com/fwlink/?linkid=2152871). +- Build a [Workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data. |
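The configuration sketch promised in the tagging discussion above: it combines the `dataTags` options documented in this entry into one npm-based setup. The prefix and parent tag names are illustrative assumptions, not prescribed values.

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';

const clickPluginInstance = new ClickAnalyticsPlugin();
const clickPluginConfig = {
  autoCapture: true,
  dataTags: {
    useDefaultContentNameOrId: true,  // fall back to the element's id or content name
    customDataPrefix: 'data-sample-', // so data-sample-id="button1" names the event "button1"
    parentDataTag: 'parent-group',    // stop DOM traversal at elements tagged data-parent-group
    dntDataTag: 'ai-dnt'              // elements tagged data-ai-dnt aren't tracked
  }
};
const appInsights = new ApplicationInsights({ config: {
  instrumentationKey: 'YOUR INSTRUMENTATION KEY',
  extensions: [clickPluginInstance],
  extensionConfig: { [clickPluginInstance.identifier]: clickPluginConfig }
}});
appInsights.loadAppInsights();
```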
azure-monitor | Javascript Framework Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md | + + Title: Framework extensions for Application Insights JavaScript SDK +description: Learn how to install and use JavaScript framework extensions for the Application Insights JavaScript SDK. ++ ibiza + Last updated : 02/13/2023+ms.devlang: javascript ++++# Framework extensions for Application Insights JavaScript SDK ++In addition to the core SDK, there are also plugins available for specific frameworks, such as the [React plugin](javascript-framework-extensions.md?tabs=react), the [React Native plugin](javascript-framework-extensions.md?tabs=reactnative), and the [Angular plugin](javascript-framework-extensions.md?tabs=angular). ++These plugins provide extra functionality and integration with the specific framework. ++## [React](#tab/react) ++### React Application Insights JavaScript SDK plug-in ++The React plug-in for the Application Insights JavaScript SDK enables: ++- Tracking of route changes. +- React components usage statistics. ++### Get started ++Install the npm package: ++```bash ++npm install @microsoft/applicationinsights-react-js @microsoft/applicationinsights-web --save ++``` ++### Basic usage ++Initialize a connection to Application Insights: +++```javascript +import React from 'react'; +import { ApplicationInsights } from '@microsoft/applicationinsights-web'; +import { ReactPlugin, withAITracking } from '@microsoft/applicationinsights-react-js'; +import { createBrowserHistory } from "history"; +const browserHistory = createBrowserHistory({ basename: '' }); +var reactPlugin = new ReactPlugin(); +var appInsights = new ApplicationInsights({ + config: { + connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', + extensions: [reactPlugin], + extensionConfig: { + [reactPlugin.identifier]: { history: browserHistory } + } + } +}); +appInsights.loadAppInsights(); +``` ++Wrap your component with the higher-order component function to enable Application Insights on it: ++```javascript +import React from 'react'; +import { withAITracking } from '@microsoft/applicationinsights-react-js'; +import { reactPlugin, appInsights } from './AppInsights'; ++// To instrument various React components usage tracking, apply the `withAITracking` higher-order +// component function. ++class MyComponent extends React.Component { + ... +} ++// withAITracking takes 4 parameters (reactPlugin, Component, ComponentName, className). +// The first two are required and the other two are optional. ++export default withAITracking(reactPlugin, MyComponent); +``` ++For `react-router v6` or other scenarios where router history isn't exposed, Application Insights configuration `enableAutoRouteTracking` can be used to auto-track router changes: ++```javascript +var reactPlugin = new ReactPlugin(); +var appInsights = new ApplicationInsights({ + config: { + connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', + enableAutoRouteTracking: true, + extensions: [reactPlugin] + } +}); +appInsights.loadAppInsights(); +``` ++### Configuration ++| Name | Default | Description | +|||-| +| history | null | React router history. For more information, see the [React router package documentation](https://reactrouter.com/en/main). To learn how to access the history object outside of components, see the [React router FAQ](https://github.com/ReactTraining/react-router/blob/master/FAQ.md#how-do-i-access-the-history-object-outside-of-components). 
| ++#### React components usage tracking ++To instrument various React components usage tracking, apply the `withAITracking` higher-order component function. ++It measures time from the `ComponentDidMount` event through the `ComponentWillUnmount` event. To make the result more accurate, it subtracts the time in which the user was idle by using `React Component Engaged Time = ComponentWillUnmount timestamp - ComponentDidMount timestamp - idle time`. ++To see this metric in the Azure portal, go to the Application Insights resource and select the **Metrics** tab. Configure the empty charts to display the custom metric name `React Component Engaged Time (seconds)`. Select the aggregation (for example, sum or avg) of your metric and split by `Component Name`. ++ ++You can also run custom queries to divide Application Insights data to generate reports and visualizations as per your requirements. In the Azure portal, go to the Application Insights resource, select **Analytics** from the **Overview** tab, and run your query. ++ ++> [!NOTE] +> It can take up to 10 minutes for new custom metrics to appear in the Azure portal. ++### Use React Hooks ++[React Hooks](https://reactjs.org/docs/hooks-reference.html) are an approach to state and lifecycle management in a React application without relying on class-based React components. The Application Insights React plug-in provides several Hooks integrations that operate in a similar way to the higher-order component approach. ++#### Use React Context ++The React Hooks for Application Insights are designed to use [React Context](https://reactjs.org/docs/context.html) as a containing aspect for it. To use Context, initialize Application Insights, and then import the Context object: ++```javascript +import React from "react"; +import { AppInsightsContext } from "@microsoft/applicationinsights-react-js"; +import { reactPlugin } from "./AppInsights"; ++const App = () => { + return ( + <AppInsightsContext.Provider value={reactPlugin}> + /* your application here */ + </AppInsightsContext.Provider> + ); +}; +``` ++This Context Provider makes Application Insights available as a `useContext` Hook within all children components of it: ++```javascript +import React from "react"; +import { useAppInsightsContext } from "@microsoft/applicationinsights-react-js"; ++const MyComponent = () => { + const appInsights = useAppInsightsContext(); + const metricData = { + average: engagementTime, + name: "React Component Engaged Time (seconds)", + sampleCount: 1 + }; + const additionalProperties = { "Component Name": 'MyComponent' }; + appInsights.trackMetric(metricData, additionalProperties); + + return ( + <h1>My Component</h1> + ); +} +export default MyComponent; +``` ++#### useTrackMetric ++The `useTrackMetric` Hook replicates the functionality of the `withAITracking` higher-order component, without adding another component to the component structure. The Hook takes two arguments. First is the Application Insights instance, which can be obtained from the `useAppInsightsContext` Hook. The second is an identifier for the component for tracking, such as its name. 
++```javascript +import React from "react"; +import { useAppInsightsContext, useTrackMetric } from "@microsoft/applicationinsights-react-js"; ++const MyComponent = () => { + const appInsights = useAppInsightsContext(); + const trackComponent = useTrackMetric(appInsights, "MyComponent"); + + return ( + <h1 onHover={trackComponent} onClick={trackComponent}>My Component</h1> + ); +} +export default MyComponent; +``` ++It operates like the higher-order component, but it responds to Hooks lifecycle events rather than a component lifecycle. The Hook needs to be explicitly provided to user events if there's a need to run on particular interactions. ++#### useTrackEvent ++The `useTrackEvent` Hook is used to track any custom event that an application might need to track, such as a button click or other API call. It takes four arguments: ++- Application Insights instance, which can be obtained from the `useAppInsightsContext` Hook. +- Name for the event. +- Event data object that encapsulates the changes that have to be tracked. +- skipFirstRun (optional) flag to skip calling the `trackEvent` call on initialization. The default value is set to `true` to mimic more closely the way the non-Hook version works. With `useEffect` Hooks, the effect is triggered on each value update _including_ the initial setting of the value. As a result, tracking starts too early, which causes potentially unwanted events to be tracked. ++```javascript +import React, { useState, useEffect } from "react"; +import { useAppInsightsContext, useTrackEvent } from "@microsoft/applicationinsights-react-js"; ++const MyComponent = () => { + const appInsights = useAppInsightsContext(); + const [cart, setCart] = useState([]); + const trackCheckout = useTrackEvent(appInsights, "Checkout", cart); + const trackCartUpdate = useTrackEvent(appInsights, "Cart Updated", cart); + useEffect(() => { + trackCartUpdate({ cartCount: cart.length }); + }, [cart]); + + const performCheckout = () => { + trackCheckout(); + // submit data + }; + + return ( + <div> + <ul> + <li>Product 1 <button onClick={() => setCart([...cart, "Product 1"])}>Add to Cart</button></li> + <li>Product 2 <button onClick={() => setCart([...cart, "Product 2"])}>Add to Cart</button></li> + <li>Product 3 <button onClick={() => setCart([...cart, "Product 3"])}>Add to Cart</button></li> + <li>Product 4 <button onClick={() => setCart([...cart, "Product 4"])}>Add to Cart</button></li> + </ul> + <button onClick={performCheckout}>Checkout</button> + </div> + ); +} ++export default MyComponent; +``` ++When the Hook is used, a data payload can be provided to it to add more data to the event when it's stored in Application Insights. ++### React error boundaries ++[Error boundaries](https://reactjs.org/docs/error-boundaries.html) provide a way to gracefully handle an exception when it occurs within a React application. When such an error occurs, it's likely that the exception needs to be logged. The React plug-in for Application Insights provides an error boundary component that automatically logs the error when it occurs. ++```javascript +import React from "react"; +import { reactPlugin } from "./AppInsights"; +import { AppInsightsErrorBoundary } from "@microsoft/applicationinsights-react-js"; ++const App = () => { + return ( + <AppInsightsErrorBoundary onError={() => <h1>I believe something went wrong</h1>} appInsights={reactPlugin}> + /* app here */ + </AppInsightsErrorBoundary> + ); +}; +``` ++The `AppInsightsErrorBoundary` requires two props to be passed to it. 
They're the `ReactPlugin` instance created for the application and a component to be rendered when an error occurs. When an unhandled error occurs, `trackException` is called with the information provided to the error boundary, and the `onError` component appears. ++### Enable correlation ++Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. ++In JavaScript, correlation is turned off by default to minimize the telemetry we send. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). ++#### Route tracking ++The React plug-in automatically tracks route changes and collects other React-specific telemetry. ++> [!NOTE] +> `enableAutoRouteTracking` should be set to `false`. If it's set to `true`, then when the route changes, duplicate `PageViews` can be sent. ++For `react-router v6` or other scenarios where router history isn't exposed, you can add `enableAutoRouteTracking: true` to your [setup configuration](#basic-usage). ++#### PageView ++If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of `0`. ++### Sample app ++Check out the [Application Insights React demo](https://github.com/Azure-Samples/application-insights-react-demo). ++## [React Native](#tab/reactnative) ++### React Native plugin for Application Insights JavaScript SDK ++The React Native plugin for Application Insights JavaScript SDK collects device information. By default, this plugin automatically collects: ++- **Unique Device ID** (Also known as Installation ID.) +- **Device Model Name** (Such as iPhone X, Samsung Galaxy Fold, Huawei P30 Pro etc.) +- **Device Type** (For example, handset, tablet, etc.) ++### Requirements ++You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin works in react-native apps. It doesn't work with [apps using the Expo framework](https://docs.expo.io/), so it doesn't work with Create React Native App. ++### Getting started ++Install and link the [react-native-device-info](https://www.npmjs.com/package/react-native-device-info) package. Keep the `react-native-device-info` package up to date so that your app collects the latest device names. ++```zsh ++npm install --save @microsoft/applicationinsights-react-native @microsoft/applicationinsights-web +npm install --save react-native-device-info +react-native link react-native-device-info ++``` ++### Initializing the plugin ++To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance. +++```typescript +import { ApplicationInsights } from '@microsoft/applicationinsights-web'; +import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native'; ++var RNPlugin = new ReactNativePlugin(); +var appInsights = new ApplicationInsights({ + config: { + connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', + extensions: [RNPlugin] + } +}); +appInsights.loadAppInsights(); ++``` ++### Enable Correlation ++Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. ++JavaScript correlation is turned off by default to minimize the telemetry we send. 
To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). ++#### PageView ++If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of 0. ++ +## [Angular](#tab/angular) ++## Angular plugin for Application Insights JavaScript SDK ++The Angular plugin for the Application Insights JavaScript SDK enables: ++- Tracking of router changes +- Tracking uncaught exceptions ++> [!WARNING] +> The Angular plugin is *not* ECMAScript 3 (ES3) compatible. ++> [!IMPORTANT] +> When we add support for a new Angular version, our NPM package becomes incompatible with down-level Angular versions. Continue to use older NPM packages until you're ready to upgrade your Angular version. ++### Getting started ++Install the npm package: ++```bash +npm install @microsoft/applicationinsights-angularplugin-js @microsoft/applicationinsights-web --save +``` ++### Basic usage ++Set up an instance of Application Insights in the entry component in your app: +++```js +import { Component } from '@angular/core'; +import { ApplicationInsights } from '@microsoft/applicationinsights-web'; +import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js'; +import { Router } from '@angular/router'; ++@Component({ + selector: 'app-root', + templateUrl: './app.component.html', + styleUrls: ['./app.component.css'] +}) +export class AppComponent { + constructor( + private router: Router + ){ + var angularPlugin = new AngularPlugin(); + const appInsights = new ApplicationInsights({ config: { + connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', + extensions: [angularPlugin], + extensionConfig: { + [angularPlugin.identifier]: { router: this.router } + } + } }); + appInsights.loadAppInsights(); + } +} +``` ++To track uncaught exceptions, set up `ApplicationinsightsAngularpluginErrorService` in `app.module.ts`: ++```js +import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applicationinsights-angularplugin-js'; ++@NgModule({ + ... + providers: [ + { + provide: ErrorHandler, + useClass: ApplicationinsightsAngularpluginErrorService + } + ] + ... +}) +export class AppModule { } +``` ++### Enable Correlation ++Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. ++JavaScript correlation is turned off by default to minimize the telemetry we send. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). ++#### Route tracking ++The Angular Plugin automatically tracks route changes and collects other Angular-specific telemetry. ++> [!NOTE] +> `enableAutoRouteTracking` should be set to `false`. If it's set to `true`, duplicate `PageViews` may be sent when the route changes. ++#### PageView ++If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of 0. ++++## Next steps ++- To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md). +- To learn about the Kusto Query Language and querying data in Log Analytics, see the [Log query overview](../../azure-monitor/logs/log-query-overview.md). |
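One detail the React `useTrackEvent` discussion in this entry describes but doesn't demonstrate is the optional fourth `skipFirstRun` argument. A minimal sketch, assuming the same React plug-in setup shown in the entry; the component and event names are illustrative.

```javascript
import React, { useState } from "react";
import { useAppInsightsContext, useTrackEvent } from "@microsoft/applicationinsights-react-js";

const CartBadge = () => {
  const appInsights = useAppInsightsContext();
  const [cart, setCart] = useState([]);
  // Passing skipFirstRun = false opts in to tracking the initial value of
  // `cart` as well, instead of skipping it as the default (true) does.
  const trackCartUpdate = useTrackEvent(appInsights, "Cart Updated", cart, false);

  const addItem = (item) => {
    setCart([...cart, item]);
    trackCartUpdate({ cartCount: cart.length + 1 });
  };

  return <button onClick={() => addItem("Product 1")}>Add to cart ({cart.length})</button>;
};

export default CartBadge;
```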
azure-monitor | Javascript React Native Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-native-plugin.md | - Title: React Native plugin for Application Insights JavaScript SDK -description: How to install and use the React Native plugin for Application Insights JavaScript SDK. --- Previously updated : 11/14/2022----# React Native plugin for Application Insights JavaScript SDK --The React Native plugin for Application Insights JavaScript SDK collects device information, by default this plugin automatically collects: --- **Unique Device ID** (Also known as Installation ID.)-- **Device Model Name** (Such as iPhone X, Samsung Galaxy Fold, Huawei P30 Pro etc.)-- **Device Type** (For example, handset, tablet, etc.)--## Requirements --You must be using a version >= 2.0.0 of `@microsoft/applicationinsights-web`. This plugin will only work in react-native apps. It will not work with [apps using the Expo framework](https://docs.expo.io/), therefore it will not work with Create React Native App. --## Getting started --Install and link the [react-native-device-info](https://www.npmjs.com/package/react-native-device-info) package. Keep the `react-native-device-info` package up to date to collect the latest device names using your app. --```zsh --npm install --save @microsoft/applicationinsights-react-native @microsoft/applicationinsights-web -npm install --save react-native-device-info -react-native link react-native-device-info --``` --## Initializing the plugin --To use this plugin, you need to construct the plugin and add it as an `extension` to your existing Application Insights instance. ---```typescript -import { ApplicationInsights } from '@microsoft/applicationinsights-web'; -import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native'; --var RNPlugin = new ReactNativePlugin(); -var appInsights = new ApplicationInsights({ - config: { - connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE', - extensions: [RNPlugin] - } -}); -appInsights.loadAppInsights(); --``` --## Enable Correlation --Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. --In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). --### PageView --If a custom `PageView` duration is not provided, `PageView` duration defaults to a value of 0. --## Next steps --- To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md).-- To learn about the Kusto query language and querying data in Log Analytics, see the [Log query overview](../../azure-monitor/logs/log-query-overview.md). |
azure-monitor | Javascript Sdk Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-upgrade.md | + + Title: Azure Application Insights JavaScript SDK upgrade information +description: Azure Application Insights JavaScript SDK upgrade information + Last updated : 02/13/2023+ms.devlang: javascript +++++# Upgrade from old versions of the Application Insights JavaScript SDK ++Upgrading to the new version of the Application Insights JavaScript SDK can provide several advantages, such as: ++> [!div class="checklist"] +> - Improved performance and bug fixes +> - New features and functionalities +> - Better compatibility with other technologies +> - Enhanced security and data privacy ++## Breaking changes in the SDK V2 version ++- To allow for better API signatures, some of the API calls, such as trackPageView and trackException, have been updated. Running in Internet Explorer 8 and earlier versions of the browser isn't supported. +- The telemetry envelope has field name and structure changes due to data schema updates. +- Moved `context.operation` to `context.telemetryTrace`. Some fields were also changed (`operation.id` --> `telemetryTrace.traceID`). + + To manually refresh the current pageview ID, for example, in single-page applications, use `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. ++ > [!NOTE] + > To keep the trace ID unique, where you previously used `Util.newId()`, now use `Util.generateW3CId()`. Both ultimately end up being the operation ID. ++If you're using the current Application Insights production SDK (1.0.20) and want to see if the new SDK works at runtime, update the URL depending on your current SDK loading scenario. ++- Download via CDN scenario: Update the code snippet that you currently use to point to the following URL: + ``` + "https://js.monitor.azure.com/scripts/b/ai.2.min.js" + ``` ++- npm scenario: Call `downloadAndSetup` to download the full ApplicationInsights script from CDN and initialize it with a connection string: ++ ```ts + appInsights.downloadAndSetup({ + connectionString: "Copy connection string from Application Insights Resource Overview", + url: "https://js.monitor.azure.com/scripts/b/ai.2.min.js" + }); + ``` ++Test in an internal environment to verify the monitoring telemetry is working as expected. If all works, update your API signatures appropriately to SDK v2 and deploy in your production environments. ++## Next steps +- To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md). +- To learn about the Kusto Query Language and querying data in Log Analytics, see the [Log query overview](../../azure-monitor/logs/log-query-overview.md). |
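The pageview-ID refresh quoted in this entry is easiest to see in context. A minimal sketch for a single-page application, assuming the snippet-based setup (a global `Microsoft.ApplicationInsights` and an `appInsights` instance); `onRouteChange` is a hypothetical hook for whatever router the app uses.

```javascript
// Sketch: refresh the pageview ID on each client-side route change, using
// the v2 API line quoted in this entry.
function onRouteChange(newPath) {
  appInsights.properties.context.telemetryTrace.traceID =
    Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId();
  appInsights.trackPageView({ name: newPath }); // then log the new view
}
```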
azure-monitor | Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md | For single-page applications, reference plug-in documentation for guidance speci | Plug-ins | ||-| [React](javascript-react-plugin.md#enable-correlation)| -| [React Native](javascript-react-native-plugin.md#enable-correlation)| -| [Angular](javascript-angular-plugin.md#enable-correlation)| -| [Click Analytics Auto-collection](javascript-click-analytics-plugin.md#enable-correlation)| +| [React](javascript-framework-extensions.md#enable-correlation)| +| [React Native](javascript-framework-extensions.md#enable-correlation)| +| [Angular](javascript-framework-extensions.md#enable-correlation)| +| [Click Analytics Auto-collection](javascript-feature-extensions.md#enable-correlation)| ### Advanced correlation When you use an npm-based configuration, a location must be determined to store | Extensions | ||-| [React](javascript-react-plugin.md)| -| [React Native](javascript-react-native-plugin.md)| -| [Angular](javascript-angular-plugin.md)| -| [Click Analytics Auto-collection](javascript-click-analytics-plugin.md)| +| [React](javascript-framework-extensions.md)| +| [React Native](javascript-framework-extensions.md)| +| [Angular](javascript-framework-extensions.md)| +| [Click Analytics Auto-collection](javascript-feature-extensions.md)| ## Explore browser/client-side data |
azure-monitor | Usage Heart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md | These dimensions are measured independently, but they interact with each other a | pageViews | operation_Id | Correlate telemetry events | | pageViews | user_Id | Unique user identifier | -*Use the [Click Analytics Auto collection plugin](javascript-click-analytics-plugin.md) via npm to emit these attributes. +*Use the [Click Analytics Auto collection plugin](javascript-feature-extensions.md) via npm to emit these attributes. >[!TIP]-> To understand how to effectively use the Click Analytics plugin, please refer to [this section](javascript-click-analytics-plugin.md#how-to-effectively-use-the-plugin). +> To understand how to effectively use the Click Analytics plugin, please refer to [this section](javascript-feature-extensions.md#how-to-effectively-use-the-plugin). ### Open the workbook The workbook can be found in the gallery under 'public templates'. The workbook will be shown in the section titled **"Product Analytics using the Click Analytics Plugin"** as shown in the following image: Engagement is a measure of user activity, specifically intentional user actions Measuring engagement can vary based on the type of product being used. For example, a product like Microsoft Teams is expected to have a high daily usage, making it an important metric to track. But for a product like a paycheck portal, measurement would make more sense at a monthly or weekly level. >[!IMPORTANT]->A user who does an intentional action such as clicking a button or typing an input is counted as an active user. For this reason, Engagement metrics require the [Click Analytics plugin for Application Insights](javascript-click-analytics-plugin.md) implemented in the application. +>A user who does an intentional action such as clicking a button or typing an input is counted as an active user. For this reason, Engagement metrics require the [Click Analytics plugin for Application Insights](javascript-feature-extensions.md) implemented in the application. A Retained user is a user who was active in a specified reporting period and its | Retention | Proportion of active users from the previous period who are also Active this period | What percent of users are staying engaged with the product? | >[!IMPORTANT]->Since active users must have at least one telemetry event with an actionType, Retention metrics require the [Click Analytics plugin for Application Insights](javascript-click-analytics-plugin.md) implemented in the application. +>Since active users must have at least one telemetry event with an actionType, Retention metrics require the [Click Analytics plugin for Application Insights](javascript-feature-extensions.md) implemented in the application. ### Task success A successful task meets three requirements: A task is considered unsuccessful if any of the above requirements isn't met. >[!IMPORTANT]->Task success metrics require the [Click Analytics plugin for Application Insights](javascript-click-analytics-plugin.md) implemented in the application. +>Task success metrics require the [Click Analytics plugin for Application Insights](javascript-feature-extensions.md) implemented in the application. Set up a custom task using the below parameters. 
For more on editing workbook templates, refer to the [Azure Workbook templates]( ## Next steps-- Set up the [Click Analytics Auto Collection Plugin](javascript-click-analytics-plugin.md) via npm.+- Set up the [Click Analytics Auto Collection Plugin](javascript-feature-extensions.md) via npm. - Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto Collection Plugin. - Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions. - Find click data under content field within customDimensions attribute in CustomEvents table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). See [Sample App](https://go.microsoft.com/fwlink/?linkid=2152871) for more guidance. |
azure-monitor | Usage Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md | For more information about the Retention workbook, see [User retention analysis To get a clear understanding of what users do with your app, it's useful to insert lines of code to log custom events, as shown in the example below. These events can track anything from detailed user actions, such as selecting specific buttons, to more significant business events, such as making a purchase or winning a game. -You can also use the [Click Analytics Auto-collection plug-in](javascript-click-analytics-plugin.md) to collect custom events. +You can also use the [Click Analytics Auto-collection plug-in](javascript-feature-extensions.md) to collect custom events. In some cases, page views can represent useful events, but that isn't true in general. A user can open a product page without buying the product. |
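For example, a minimal sketch of such instrumentation with the Application Insights JavaScript SDK, assuming an already initialized `appInsights` instance; the event names and properties are illustrative, not prescribed.

```typescript
// Log a simple custom event (hypothetical event name).
appInsights.trackEvent({ name: "PurchaseCompleted" });

// Log an event with custom dimensions attached (illustrative properties).
appInsights.trackEvent({
  name: "GameWon",
  properties: { difficulty: "hard", durationSeconds: 312 }
});
```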
azure-monitor | Change Analysis Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md | foreach ($webapp in $webapp_list) - Learn about [visualizations in Change Analysis](change-analysis-visualizations.md) - Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)-- Enable Application Insights for [Azure App Services apps](../../azure-monitor/app/azure-web-apps.md).+- Enable Application Insights for [Azure web apps](../../azure-monitor/app/azure-web-apps.md). - Enable Application Insights for [Azure VM and Azure virtual machine scale set IIS-hosted apps](../../azure-monitor/app/azure-vm-vmss-apps.md). |
azure-monitor | Change Analysis Visualizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md | Use the [View change history](../essentials/activity-log.md#view-change-history) - Resource level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md). - Resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md).-- In-guest changes from PaaS services, such as App Services web app.+- In-guest changes from PaaS services, such as a web app. 1. From within your resource, select **Activity Log** from the side menu. 1. Select a change from the list. |
azure-monitor | Change Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md | Azure Monitor Change Analysis service supports resource property level changes i Azure Monitor's Change Analysis queries for: - [Azure Resource Manager resource properties.](#azure-resource-manager-resource-properties-changes) - [Resource configuration changes.](#resource-configuration-changes)-- [App Service Function and Web App in-guest changes.](#changes-in-azure-app-services-function-and-web-apps-in-guest-changes) +- [App Service Function and Web App in-guest changes.](#changes-in-azure-function-and-web-apps-in-guest-changes) Change Analysis also tracks [resource dependency changes](#dependency-changes) to diagnose and monitor an application end-to-end. In addition to the settings set via Azure Resource Manager, you can set configur These setting changes aren't captured by Azure Resource Graph. Change Analysis fills this gap by capturing snapshots of changes in those main configuration properties, such as changes to the connection string. Snapshots of configuration changes and change details are taken up to every 6 hours. [See known limitations.](#limitations) -### Changes in Azure App Services Function and Web Apps (in-guest changes) +### Changes in Azure Function and Web Apps (in-guest changes) Every 30 minutes, Change Analysis captures the configuration state of a web application. For example, it can detect changes in the application environment variables, configuration files, and WebJobs. The tool computes the differences and presents the changes. Currently the following dependencies are supported in **Web App Diagnose and sol ## Limitations -- **OS environment**: For Azure App Services Function and Web App in-guest changes, Change Analysis currently only works with Windows environments, not Linux.+- **OS environment**: For Azure Function and Web App in-guest changes, Change Analysis currently only works with Windows environments, not Linux. - **Web app deployment changes**: Code deployment change information might not be available immediately in the Change Analysis tool. To view the latest changes in Change Analysis, select **Refresh**.-- **App Services file changes**: File changes take up to 30 minutes to display.-- **App Services configuration changes**: Due to the snapshot approach to configuration changes, timestamps of configuration changes could take up to 6 hours to display from when the change actually happened.+- **Function and Web App file changes**: File changes take up to 30 minutes to display. +- **Function and Web App configuration changes**: Due to the snapshot approach to configuration changes, timestamps of configuration changes could take up to 6 hours to display from when the change actually happened. - **Web app deployment and configuration changes**: Since these changes are collected by a site extension and stored on disk space owned by your application, data collection and storage is subject to your application's behavior. Check to see if a misbehaving application is affecting the results. - **Snapshot retention for all changes**: The Change Analysis data for resources is tracked by Azure Resource Graph (ARG). ARG keeps snapshot history of tracked resources only for 14 days. 
Currently the following dependencies are supported in **Web App Diagnose and sol - Learn about [enabling Change Analysis](change-analysis-enable.md) - Learn about [visualizations in Change Analysis](change-analysis-visualizations.md) - Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)-- Enable Application Insights for [Azure App Services apps](../../azure-monitor/app/azure-web-apps.md).+- Enable Application Insights for [Azure web apps](../../azure-monitor/app/azure-web-apps.md). - Enable Application Insights for [Azure VM and Azure virtual machine scale set IIS-hosted apps](../../azure-monitor/app/azure-vm-vmss-apps.md). |
azure-monitor | Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md | Currently, there are two category groups: - **All**: Every resource log offered by the resource. - **Audit**: All resource logs that record customer interactions with data or the settings of the service. Note that Audit logs are an attempt by each resource provider to provide the most relevant audit data, but may not be considered sufficient from an auditing standards perspective. +> [!NOTE] +> Selecting the *Audit* category group for Azure SQL Database doesn't enable database auditing. To enable database auditing, you have to enable it from the auditing blade for Azure SQL Database. + ### Activity log See the [Activity log settings](#activity-log-settings) section. |
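When configuring this programmatically, the category group can be referenced by name rather than listing individual log categories. Below is a minimal sketch, assuming the `@azure/arm-monitor` and `@azure/identity` packages and an SDK/API version whose log settings accept `categoryGroup`; the IDs and setting name are placeholders.

```typescript
import { MonitorClient } from "@azure/arm-monitor";
import { DefaultAzureCredential } from "@azure/identity";

async function enableAuditLogs(resourceId: string): Promise<void> {
  const client = new MonitorClient(new DefaultAzureCredential(), "<subscription-id>");

  // Send every log in the "audit" category group to a Log Analytics workspace.
  await client.diagnosticSettings.createOrUpdate(resourceId, "audit-to-workspace", {
    workspaceId: "<log-analytics-workspace-resource-id>",
    logs: [{ categoryGroup: "audit", enabled: true }]
  });
}
```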
azure-monitor | Logs Ingestion Api Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-ingestion-api-overview.md | The Logs Ingestion API can send data to any custom table that you create and to ### Built-in tables -The Logs Ingestion API can send data to the following built-in tables. Other tables may be added to this list as support for them is implemented. Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table don't need this suffix. Column names can consist of alphanumeric characters and the characters `_` and `-`, and they must start with a letter. +The Logs Ingestion API can send data to the following built-in tables. Other tables may be added to this list as support for them is implemented. Columns extended on top of built-in tables must have the suffix `_CF`. Columns in a custom table don't need this suffix. - [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) - [SecurityEvents](/azure/azure-monitor/reference/tables/securityevent) - [Syslog](/azure/azure-monitor/reference/tables/syslog) - [WindowsEvents](/azure/azure-monitor/reference/tables/windowsevent) -+> [!NOTE] +> Column names can consist of alphanumeric characters and the characters `_` and `-`, and they must start with a letter. ## Authentication |
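For example, a minimal sketch of uploading rows with the JavaScript client library, assuming the `@azure/monitor-ingestion` and `@azure/identity` packages; the endpoint, DCR immutable ID, stream name, and row shape are placeholders that must match what your data collection rule declares.

```typescript
import { LogsIngestionClient } from "@azure/monitor-ingestion";
import { DefaultAzureCredential } from "@azure/identity";

async function sendRows(): Promise<void> {
  const client = new LogsIngestionClient(
    "https://<data-collection-endpoint>.ingest.monitor.azure.com",
    new DefaultAzureCredential()
  );

  // Stream name and columns must match the data collection rule's declaration.
  await client.upload("<dcr-immutable-id>", "Custom-MyTable_CL", [
    { TimeGenerated: new Date().toISOString(), RawData: "sample event" }
  ]);
}
```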
azure-resource-manager | Move Resource Group And Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md | There are some important steps to do before moving a resource. By verifying thes * [App Services move guidance](./move-limitations/app-service-move-limitations.md) * [Azure DevOps Services move guidance](/azure/devops/organizations/billing/change-azure-subscription?toc=/azure/azure-resource-manager/toc.json) * [Classic deployment model move guidance](./move-limitations/classic-model-move-limitations.md) - Classic Compute, Classic Storage, Classic Virtual Networks, and Cloud Services+ * [Cloud Services (extended support) move guidance](./move-limitations/classic-model-move-limitations.md) * [Networking move guidance](./move-limitations/networking-move-limitations.md) * [Recovery Services move guidance](../../backup/backup-azure-move-recovery-services-vault.md?toc=/azure/azure-resource-manager/toc.json) * [Virtual Machines move guidance](./move-limitations/virtual-machines-move-limitations.md) |
azure-resource-manager | Move Support Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md | Before starting your move operation, review the [checklist](./move-resource-grou > | virtualmachines / extensions | **Yes** | **Yes** | No | > | virtualmachinescalesets | **Yes** | **Yes** | No | ++> [!IMPORTANT] +> See [Cloud Services (extended support) deployment move guidance](./move-limitations/classic-model-move-limitations.md). Cloud Services (extended support) deployment resources can be moved across subscriptions with an operation specific to that scenario. ++> [!div class="mx-tableFixed"] +> | Resource type | Resource group | Subscription | Region move | +> | - | -- | - | -- | +> | capabilities | No | No | No | +> | domainnames | **Yes** | No | No | +> | quotas | No | No | No | +> | resourcetypes | No | No | No | +> | validatesubscriptionmoveavailability | No | No | No | +> | virtualmachines | **Yes** | **Yes** | No | ++ ## Microsoft.Confluent > [!div class="mx-tableFixed"] |
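For resource types the tables mark as movable, the move itself can be performed programmatically. Below is a minimal sketch, assuming the `@azure/arm-resources` and `@azure/identity` packages; all IDs are placeholders. The same operation group also exposes a validate-move call that tests the request without performing it.

```typescript
import { ResourceManagementClient } from "@azure/arm-resources";
import { DefaultAzureCredential } from "@azure/identity";

async function moveResources(): Promise<void> {
  const client = new ResourceManagementClient(
    new DefaultAzureCredential(),
    "<source-subscription-id>"
  );

  // Moves the listed resources into the target resource group (which may be
  // in another subscription) and waits for the long-running operation.
  await client.resources.beginMoveResourcesAndWait("<source-resource-group>", {
    resources: ["<resource-id-1>", "<resource-id-2>"],
    targetResourceGroup:
      "/subscriptions/<target-subscription-id>/resourceGroups/<target-resource-group>"
  });
}
```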
azure-vmware | Azure Vmware Solution Platform Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md | description: Learn about the platform updates to Azure VMware Solution. Previously updated : 12/22/2022 Last updated : 2/03/2023 # What's new in Azure VMware Solution Microsoft will regularly apply important updates to the Azure VMware Solution for new features and software lifecycle management. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management). +## February 2023 ++VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. HCX Enterprise brings valuable [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html) such as Replication Assisted vMotion (RAV) and Mobility Optimized Networking (MON). HCX Enterprise is now automatically installed for all new HCX add-on requests, and existing HCX Advanced customers can upgrade to HCX Enterprise using the Azure portal. Learn how to [Install and activate VMware HCX in Azure VMware Solution](https://learn.microsoft.com/azure/azure-vmware/install-vmware-hcx). ++**Log analytics - monitor Azure VMware Solution** ++The data in Azure Log Analytics offers insights into issues by searching using the Kusto Query Language. ++**New SKU availability - AV36P and AV52 nodes** ++The AV36P is now available in the West US Region. This node size is used for memory and storage workloads by offering increased memory and NVMe-based SSDs. ++AV52 is now available in the East US 2 Region. This node size is used for intensive workloads with a higher physical core count, additional memory, and larger-capacity NVMe-based SSDs. ++**Customer-managed keys using Azure Key Vault** ++You can use customer-managed keys to bring and manage your master encryption keys to encrypt vSAN. Azure Key Vault allows you to store your privately managed keys securely to access your Azure VMware Solution data. ++**Azure NetApp Files - more storage options available** ++You can use Azure NetApp Files volumes as a file share for Azure VMware Solution workloads using Network File System (NFS) or Server Message Block (SMB). ++**Stretched clusters - increase uptime with Stretched Clusters (Preview)** ++Stretched clusters for Azure VMware Solution provide 99.99% uptime for mission-critical applications that require the highest availability. ++For more information, see [Azure Migration and Modernization blog](https://techcommunity.microsoft.com/t5/azure-migration-and/bg-p/AzureMigrationBlog). + ## November 2022 -AV36P and AV52 node sizes available in Azure VMware Solution. The new node sizes increase memory and storage options to optimize your workloads. The gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of the new nodes allows for large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure. +AV36P and AV52 node sizes available in Azure VMware Solution. The new node sizes increase memory and storage options to optimize your workloads. The gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. 
The availability of the new nodes allows for large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure. + For pricing and region availability, see the [Azure VMware Solution pricing page](https://azure.microsoft.com/pricing/details/azure-vmware/) and see the [Products available by region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware&regions=all). ## July 2022 HCX cloud manager in Azure VMware Solution can now be accessible over a public IP address. You can pair HCX sites and create a service mesh from on-premises to Azure VMware Solution private cloud using Public IP.-HCX with public IP is especially useful in cases where On-premises sites are not connected to Azure via Express Route or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections. For more information, please see [Enable HCX over the internet](./enable-hcx-access-over-internet.md) +HCX with public IP is especially useful in cases where on-premises sites aren't connected to Azure via Express Route or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections. For more information, please see [Enable HCX over the internet](./enable-hcx-access-over-internet.md) ++All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c. -All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c. -Any existing private clouds will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html). - You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. +Any existing private clouds will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html). ++You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. ## June 2022 Any existing private clouds in the above mentioned regions will also be upgraded ## May 2022 All new Azure VMware Solution private clouds in regions (Germany West Central, Australia East, Central US and UK West), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c. -Any existing private clouds in the previously mentioned regions will be upgraded to those versions. 
For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html). You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. +Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html). ++You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. All new Azure VMware Solution private clouds in regions (France Central, Brazil South, Japan West, Australia Southeast, Canada East, East Asia, and Southeast Asia), are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.+ Any existing private clouds in the previously mentioned regions will be upgraded to those versions. For more information, please see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html). + You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. ## February 2022 No further action is required. ## December 2021 Azure VMware Solution (AVS) has completed maintenance activities to address critical vulnerabilities in Apache Log4j. The fixes documented in the VMware security advisory [VMSA-2021-0028.6](https://www.vmware.com/security/advisories/VMSA-2021-0028.html) to address CVE-2021-44228 and CVE-2021-45046 have been applied to these AVS managed VMware products: vCenter Server, NSX-T Data Center, SRM and HCX. We strongly encourage customers to apply the fixes to on-premises HCX connector appliances. - We also recommend customers to review the security advisory and apply the fixes for other affected VMware products or workloads. - If you need any assistance or have questions, please [contact us](https://portal.azure.com/#home). -VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228. Azure VMware Solution is actively monitoring this issue. We are addressing this issue by applying VMware recommended workarounds or patches for AVS managed VMware components as they become available. 
Please note that you may experience intermittent connectivity to these components when we apply a fix. We strongly recommend that you read the advisory and patch or apply the recommended workarounds for any additional VMware products that you may have deployed in Azure VMware Solution. If you need any assistance or have questions, please [contact us](https://portal.azure.com). + We also recommend that customers review the security advisory and apply the fixes for other affected VMware products or workloads. + + If you need any assistance or have questions, [contact us](https://portal.azure.com/#home). ++VMware has announced a security advisory [VMSA-2021-0028](https://www.vmware.com/security/advisories/VMSA-2021-0028.html), addressing a critical vulnerability in Apache Log4j identified by CVE-2021-44228. Azure VMware Solution is actively monitoring this issue. We're addressing this issue by applying VMware recommended workarounds or patches for AVS managed VMware components as they become available. ++ Note that you may experience intermittent connectivity to these components when we apply a fix. We strongly recommend that you read the advisory and patch or apply the recommended workarounds for other VMware products you may have deployed in Azure VMware Solution. If you need any assistance or have questions, [contact us](https://portal.azure.com). ## November 2021 All new Azure VMware Solution private clouds are now deployed with ESXi versio ## July 2021 -All new Azure VMware Solution private clouds are now deployed with NSX-T Data Center version [!INCLUDE [nsxt-version](includes/nsxt-version.md)]. NSX-T Data Center version in existing private clouds will be upgraded through September, 2021 to NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] release. +All new Azure VMware Solution private clouds are now deployed with NSX-T Data Center version [!INCLUDE [nsxt-version](includes/nsxt-version.md)]. NSX-T Data Center version in existing private clouds will be upgraded through September 2021 to NSX-T Data Center [!INCLUDE [nsxt-version](includes/nsxt-version.md)] release. You'll receive an email with the planned maintenance date and time. You can reschedule an upgrade. The email also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services. For more information on this NSX-T Data Center version, see [VMware NSX-T Data C Per VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), multiple vulnerabilities in VMware ESXi and vSphere Client (HTML5) have been reported to VMware. To address the vulnerabilities ([CVE-2021-21985](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21985) and [CVE-2021-21986](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21986)) reported in VMware security advisory [VMSA-2021-0010](https://www.vmware.com/security/advisories/VMSA-2021-0010.html), vCenter Server has been updated in all Azure VMware Solution private clouds. No further action is required. -Azure VMware Solution service will do maintenance work through May 23, 2021, to apply important updates to the vCenter Server in your private cloud. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance for your private cloud. During this time, VMware vCenter Server will be unavailable and you won't be able to manage VMs (stop, start, create, or delete). 
It's recommended that, during this time, you don't plan any other activities like scaling up private cloud, creating new networks, and so on, in your private cloud. There is no impact to workloads running in your private cloud. +Azure VMware Solution service will do maintenance work through May 23, 2021, to apply important updates to the vCenter Server in your private cloud. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance for your private cloud. During this time, VMware vCenter Server will be unavailable and you won't be able to manage VMs (stop, start, create, or delete). It's recommended that, during this time, you don't plan any other activities like scaling up private cloud, creating new networks, and so on, in your private cloud. There's no impact to workloads running in your private cloud. ## April 2021 |
azure-vmware | Configure Vmware Hcx | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md | -In this how-to, you'll: +In this tutorial, you'll learn how to do the following tasks: * Pair your on-premises VMware HCX Connector with your Azure VMware Solution HCX Cloud Manager * Configure the network profile, compute profile, and service mesh After you complete these steps, you'll have a production-ready environment for c - [VMware HCX Connector](install-vmware-hcx.md) has been installed. -- If you plan to use VMware HCX Enterprise, make sure you've enabled the [VMware HCX Enterprise](https://cloud.vmware.com/community/2019/08/08/introducing-hcx-enterprise/) add-on through a [support request](https://portal.azure.com/#create/Microsoft.Support). VMware HCX Enterprise edition is available and supported on Azure VMware Solution, at no additional cost.+- VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. HCX Enterprise is automatically installed for all new HCX add-on requests, and existing HCX Advanced customers can upgrade to HCX Enterprise using the Azure portal. - If you plan to [enable VMware HCX MON](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html), make sure you have: - NSX-T Data Center or vSphere Distributed Switch (vDS) on-premises for HCX Network Extension (vSphere Standard Switch not supported) - - One or more active stretched network segment + - One or more active stretched network segments - [VMware software version requirements](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html) have been met. In your data center, you can connect or pair the VMware HCX Cloud Manager in Azu 1. Under **Infrastructure**, select **Site Pairing** and select the **Connect To Remote Site** option (in the middle of the screen). -1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address that you noted earlier `https://x.x.x.9` and the credentials for a user which holds the CloudAdmin role in your private cloud. Then select **Connect**. +1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address that you noted earlier `https://x.x.x.9` and the credentials for a user that holds the CloudAdmin role in your private cloud. Then select **Connect**. > [!NOTE] > To successfully establish a site pair: In your data center, you can connect or pair the VMware HCX Cloud Manager in Azu ## Create network profiles -VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. When you create your network profiles, you use the IP segments you identified during the [planning phase](plan-private-cloud-deployment.md#define-vmware-hcx-network-segments). You'll create four network profiles: +VMware HCX Connector deploys a subset of virtual appliances (automated) that requires multiple IP segments. When you create your network profiles, you use the IP segments you identified during the [planning phase](plan-private-cloud-deployment.md#define-vmware-hcx-network-segments). You'll create four network profiles: - Management - vMotion |
azure-vmware | Install Vmware Hcx | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md | description: Install VMware HCX in your Azure VMware Solution private cloud. Previously updated : 12/05/2022 Last updated : 2/14/2023 # Install and activate VMware HCX in Azure VMware Solution Last updated 12/05/2022 VMware HCX has two component -In this article, you'll learn how to install and activate the VMware HCX Cloud Manager and VMware HCX Connector components. +This article shows you how to install and activate the VMware HCX Cloud Manager and VMware HCX Connector components. HCX Cloud Manager is typically deployed as the destination (cloud side), but it can also be used as the source in cloud-to-cloud deployments. HCX Connector is deployed at the source (on-premises environment). A download link is provided for deploying the HCX Connector appliance from within the HCX Cloud Manager. -In this how-to, you'll: +This article also teaches you how to do the following tasks: * Install VMware HCX Cloud through the Azure portal. * Download and deploy the VMware HCX Connector on-premises. After HCX is deployed, follow the recommended [Next steps](#next-steps). 1. Select **Get started** for **HCX Workload Mobility**. - :::image type="content" source="media/tutorial-vmware-hcx/deployed-hcx-migration-get-started.png" alt-text="Screenshot showing the Get started button for HCX Workload Mobility."::: + :::image type="content" source="media/tutorial-vmware-hcx/add-hcx-workload-mobility.png" alt-text="Screenshot showing the Get started button for HCX Workload Mobility." lightbox="media/tutorial-vmware-hcx/add-hcx-workload-mobility.png"::: 1. Select the **I agree with terms and conditions** checkbox and then select **Install**. - Once installed, you'll see the HCX Cloud Manager URL and the HCX keys required for the HCX on-premises connector site pairing on the **Migration using HCX** tab. + Once installed, you should see the HCX Manager IP and the HCX keys required for the HCX on-premises connector site pairing on the **Migration using HCX** tab. > [!IMPORTANT] > If you don't see the HCX key after installing, click the **ADD** button to generate the key which you can then use for site pairing. - :::image type="content" source="media/tutorial-vmware-hcx/deployed-hcx-migration-using-hcx-tab.png" alt-text="Screenshot showing the Migration using HCX tab under Connectivity."::: + :::image type="content" source="media/tutorial-vmware-hcx/configure-hcx-appliance-for-migration-using-hcx-tab.png" alt-text="Screenshot showing the Migration using HCX tab under Connectivity." lightbox="media/tutorial-vmware-hcx/configure-hcx-appliance-for-migration-using-hcx-tab.png"::: ## HCX license edition -HCX offers various [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html#GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED) based on the type of license installed with the system. Advanced delivers basic connectivity and mobility services to enable hybrid interconnect and migration services. HCX Enterprise offers more services than what standard licenses provide, such as Mobility Groups, Replication assisted vMotion (RAV), Mobility Optimized Networking, Network Extension High availability, OS assisted Migration, etc. 
+HCX offers various [services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html#GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED) based on the type of license installed with the system. Advanced delivers basic connectivity and mobility services to enable hybrid interconnect and migration services. HCX Enterprise offers more services than what standard licenses provide. Some of those services include: Mobility Groups, Replication Assisted vMotion (RAV), Mobility Optimized Networking, Network Extension High Availability, OS Assisted Migration, and others. >[!Note] > VMware HCX Enterprise is available for Azure VMware Solution customers at no additional cost. -- After HCX is deployed, you can upgrade the license from Advanced to Enterprise using a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled. -- Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and you aren't using the Enterprise features. If you plan to downgrade, ensure that no scheduled migrations, [Enterprise services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html#GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED) like RAV and HCX MON, etc. aren't in use. Open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to request downgrade. +- HCX is now installed as Enterprise for all new HCX installations in Azure VMware Solution. +- Existing HCX Advanced customers can upgrade to HCX Enterprise using the Azure portal. Use the following steps to upgrade to HCX Enterprise in the Azure portal. ++ 1. Under **Manage** in the left navigation, select **Add-ons**, then the **Migration using HCX** tab. + 2. Select the **Upgrade to HCX Enterprise** button to enable HCX Enterprise edition. ++ :::image type="content" source="media/tutorial-vmware-hcx/upgrade-to-hcx-enterprise-on-migration-using-hcx-tab.png" alt-text="Screenshot showing upgrade to HCX Enterprise using HCX tab under Add-ons." lightbox="media/tutorial-vmware-hcx/upgrade-to-hcx-enterprise-on-migration-using-hcx-tab.png"::: ++ 3. Confirm the update to HCX Enterprise edition by selecting **Yes**. ++ :::image type="content" source="media/tutorial-vmware-hcx/update-to-hcx-enterprise-edition-on-migration-using-hcx-tab.png" alt-text="Screenshot showing confirmation to update to HCX Enterprise using HCX tab under Add-ons." lightbox="media/tutorial-vmware-hcx/update-to-hcx-enterprise-edition-on-migration-using-hcx-tab.png"::: ++ >[!IMPORTANT] + > If you upgraded HCX from Advanced to Enterprise, enable the new features in the compute profile and resync the service mesh to select a new feature such as Replication Assisted vMotion (RAV). ++ 4. Change the compute profile after upgrading to HCX Enterprise. ++ 1. In the HCX UI, select **Infrastructure** > **Interconnect**, then select **Edit**. + 2. Select the services you want activated, such as Replication Assisted vMotion (RAV) and OS Assisted Migration, which are available with HCX Enterprise only. ++ 3. Select **Continue**, review the settings, then select **Finish** to create the Compute Profile. ++ 5. If the compute profile is used in one or more service meshes, resync the service mesh. ++ 1. Go to **Interconnect** > **Service Mesh**. + 1. Select **Resync**, then verify that the changes appear in the Service Mesh configuration. 
++- Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. ++ 1. Verify that you've reverted to an HCX Advanced configuration state and you aren't using the Enterprise features. + 1. If you plan to downgrade, verify that no scheduled migrations or [Enterprise services](https://docs.vmware.com/en/VMware-HCX/4.5/hcx-user-guide/GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED.html#GUID-32AF32BD-DE0B-4441-95B3-DF6A27733EED) such as RAV and HCX MON are in use. Open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to request the downgrade. ## Download and deploy the VMware HCX Connector on-premises -In this step, you'll download the VMware HCX Connector OVA file, and then you'll deploy the VMware HCX Connector to your on-premises vCenter Server. +Use the following steps to download the VMware HCX Connector OVA file, and then deploy the VMware HCX Connector to your on-premises vCenter Server. 1. Open a browser window, sign in to the Azure VMware Solution HCX Manager on `https://x.x.x.9` port 443 with the **cloudadmin\@vsphere.local** user credentials - - 1. Under **Administration** > **System Updates**, select **Request Download Link**. If the box is grayed out, wait a few seconds for it to generate a link. - - 1. Either download or receive a link for the VMware HCX Connector OVA file you deploy on your local vCenter Server. 1. In your on-premises vCenter Server, select an [OVF template](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vm_admin.doc/GUID-17BEDA21-43F6-41F4-8FB2-E01D275FE9B4.html) to deploy the VMware HCX Connector to your on-premises vSphere cluster. 1. Navigate to and select the OVA file that you downloaded and then select **Open**. In this step, you'll download the VMware HCX Connector OVA file, and then you'll 1. Select the [VMware HCX management network segment](plan-private-cloud-deployment.md#define-vmware-hcx-network-segments) that you defined during the planning phase. Then select **Next**. 1. In **Customize template**, enter all required information and then select **Next**. - - :::image type="content" source="media/tutorial-vmware-hcx/customize-template.png" alt-text="Screenshot of the boxes for customizing a template." lightbox="media/tutorial-vmware-hcx/customize-template.png"::: - 1. Verify and then select **Finish** to deploy the VMware HCX Connector OVA. >[!IMPORTANT] In this step, you'll download the VMware HCX Connector OVA file, and then you'll ## Activate VMware HCX -After deploying the VMware HCX Connector OVA on-premises and starting the appliance, you're ready to activate it. First, you'll need to get a license key from the Azure VMware Solution portal, and then you'll activate it in VMware HCX Manager. Finally, you'll need a key for each on-premises HCX connector deployed. +After deploying the VMware HCX Connector OVA on-premises and starting the appliance, you're ready to activate it. First, you need to get a license key from the Azure VMware Solution portal and activate it in VMware HCX Manager. Then you need a key for each on-premises HCX connector deployed. 1. In your Azure VMware Solution private cloud, select **Manage** > **Add-ons** > **Migration using HCX**. Then copy the **Activation key**. After deploying the VMware HCX Connector OVA on-premises and starting the applia 1. Sign in to the on-premises VMware HCX Manager at `https://HCXManagerIP:9443` with the `admin` credentials. Make sure to include the `9443` port number with the VMware HCX Manager IP address. 
>[!TIP]- >You defined the **admin** user password during the VMware HCX Manager OVA file deployment. + > You defined the **admin** user password during the VMware HCX Manager OVA file deployment. 1. In **Licensing**, enter your key for **HCX Advanced Key** and select **Activate**. >[!IMPORTANT] After deploying the VMware HCX Connector OVA on-premises and starting the applia >Typically, it's the same as your vCenter Server FQDN or IP address. 1. Verify that the information entered is correct and select **Restart**. >[!NOTE] - >You'll experience a delay after restarting before being prompted for the next step. + > You'll experience a delay after restarting before being prompted for the next step. After the services restart, you'll see vCenter Server displayed as green on the screen that appears. Both vCenter Server and SSO must have the appropriate configuration parameters, which should be the same as the previous screen. |
backup | Backup Azure Arm Restore Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md | Title: Restore VMs by using the Azure portal description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 12/06/2022 Last updated : 02/14/2023 Currently, secondary region [RPO](azure-backup-glossary.md#rpo-recovery-point-ob >[!NOTE] >->- You can cancel the restore job till the data transfer phase. Once it enters VM creation phase, you can't cancel the restore job. >- The Cross Region Restore feature restores CMK (customer-managed keys) enabled Azure VMs, which aren't backed-up in a CMK enabled Recovery Services vault, as non-CMK enabled VMs in the secondary region. >- The Azure roles needed to restore in the secondary region are the same as those in the primary region. >- While restoring an Azure VM, Azure Backup configures the virtual network settings in the secondary region automatically. If you are [restoring disks](#restore-disks) while deploying the template, ensure to provide the virtual network settings, corresponding to the secondary region. In summary, the **Availability Zone** will only appear when  >[!Note]->Cross-region restore jobs can't be canceled. +>Cross-region restore jobs, once triggered, can't be canceled. ### Monitoring secondary region restore jobs |
backup | Backup Azure Arm Userestapi Createorupdatepolicy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-userestapi-createorupdatepolicy.md | Title: Create backup policies using REST API description: In this article, you'll learn how to create and manage backup policies (schedule and retention) using REST API.- Previously updated : 06/13/2022+ Last updated : 02/14/2023 ms.assetid: 5ffc4115-0ae5-4b85-a18c-8a942f6d4870 For the complete list of definitions in the request body, refer to the [backup p #### For Azure VM backup -The following request body defines a backup policy for Azure VM backups. +The following request body defines a standard backup policy for Azure VM backups. This policy: } ``` +The following request body defines an enhanced backup policy for Azure VM backups that creates multiple backups a day. ++This policy: ++- Takes a backup every 4 hours from 3:30 PM UTC every day +- Retains instant recovery snapshots for 7 days +- Retains the daily backups for 180 days +- Retains the backups taken on the Sunday of every week for 12 weeks +- Retains the backups taken on the first Sunday of every month for 12 months ++```json +{ + "properties": { + "backupManagementType": "AzureIaasVM", + "policyType": "V2", + "instantRPDetails": {}, + "schedulePolicy": { + "schedulePolicyType": "SimpleSchedulePolicyV2", + "scheduleRunFrequency": "Hourly", + "hourlySchedule": { + "interval": 4, + "scheduleWindowStartTime": "2023-02-06T15:30:00Z", + "scheduleWindowDuration": 24 + } + }, + "retentionPolicy": { + "retentionPolicyType": "LongTermRetentionPolicy", + "dailySchedule": { + "retentionTimes": [ + "2023-02-06T15:30:00Z" + ], + "retentionDuration": { + "count": 180, + "durationType": "Days" + } + }, + "weeklySchedule": { + "daysOfTheWeek": [ + "Sunday" + ], + "retentionTimes": [ + "2023-02-06T15:30:00Z" + ], + "retentionDuration": { + "count": 12, + "durationType": "Weeks" + } + }, + "monthlySchedule": { + "retentionScheduleFormatType": "Weekly", + "retentionScheduleWeekly": { + "daysOfTheWeek": [ + "Sunday" + ], + "weeksOfTheMonth": [ + "First" + ] + }, + "retentionTimes": [ + "2023-02-06T15:30:00Z" + ], + "retentionDuration": { + "count": 12, + "durationType": "Months" + } + } + }, + "tieringPolicy": { + "ArchivedRP": { + "tieringMode": "DoNotTier", + "duration": 0, + "durationType": "Invalid" + } + }, + "instantRpRetentionRangeInDays": 7, + "timeZone": "UTC", + "protectedItemsCount": 0 + } +} +``` ++ > [!IMPORTANT] > The time formats for schedule and retention support only DateTime. They don't support Time format alone. ++ #### For SQL in Azure VM backup The following is an example request body for SQL in Azure VM backup. |
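A minimal sketch of submitting one of the request bodies above, assuming Node.js 18+ for the global `fetch`; the API version shown and the token acquisition are assumptions to adapt to your environment.

```typescript
const subscriptionId = "<subscription-id>";
const resourceGroupName = "<resource-group>";
const vaultName = "<recovery-services-vault>";
const policyName = "<policy-name>";
const accessToken = "<arm-access-token>"; // Obtain via your preferred auth flow.
const policyRequestBody = { /* one of the JSON bodies above */ };

// PUT creates or updates the backup policy; the api-version is an assumption.
const url =
  `https://management.azure.com/subscriptions/${subscriptionId}` +
  `/resourceGroups/${resourceGroupName}/providers/Microsoft.RecoveryServices` +
  `/vaults/${vaultName}/backupPolicies/${policyName}?api-version=2021-12-01`;

const response = await fetch(url, {
  method: "PUT",
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify(policyRequestBody)
});

console.log(response.status); // 200 or 202 indicates the request was accepted.
```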
bastion | Bastion Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md | Azure Bastion doesn't move or store customer data out of the region it's deploye ### <a name="vwan"></a>Does Azure Bastion support Virtual WAN? -Yes, you can use Azure Bastion for Virtual WAN deployments. However, deploying Azure Bastion within a Virtual WAN hub isn't supported. You can deploy Azure Bastion in a spoke VNet and use the [IP-based connection](connect-ip-address.md) feature to connect to virtual machines deployed across a different VNet via the Virtual WAN hub. If the Azure Virtual WAN hub will be integrated with Azure Firewall as a [Secured Virtual Hub](https://learn.microsoft.com/azure/firewall-manager/secured-virtual-hub), default 0.0.0.0/0 route must not be overwritten. +Yes, you can use Azure Bastion for Virtual WAN deployments. However, deploying Azure Bastion within a Virtual WAN hub isn't supported. You can deploy Azure Bastion in a spoke VNet and use the [IP-based connection](connect-ip-address.md) feature to connect to virtual machines deployed across a different VNet via the Virtual WAN hub. If the Azure Virtual WAN hub will be integrated with Azure Firewall as a [Secured Virtual Hub](../firewall-manager/secured-virtual-hub.md), the default 0.0.0.0/0 route must not be overwritten. ### <a name="dns"></a>Can I use Azure Bastion with Azure Private DNS Zones? To establish the correct key mappings for your target language, you must set the To set your target language as your keyboard layout on a Windows workstation, navigate to Settings > Time & Language > Language & Region. Under "Preferred languages," select "Add a language" and add your target language. You'll then be able to see your keyboard layouts on your toolbar. To set English (United States) as your keyboard layout, select "ENG" on your toolbar or click Windows + Spacebar to open keyboard layouts. +### <a name="shortcut"></a>Is there a keyboard solution to toggle focus between a VM and browser? ++Users can use "Ctrl+Shift+Alt" to effectively switch focus between the VM and the browser. + ### <a name="res"></a>What is the maximum screen resolution supported via Bastion? Currently, 1920x1080 (1080p) is the maximum supported resolution. ### <a name="timezone"></a>Does Azure Bastion support timezone configuration or timezone redirection for target VMs? -Azure Bastion currently doesn't support timezone redirection and isn't timezone configurable. +Azure Bastion currently doesn't support timezone redirection and isn't timezone configurable. Timezone settings for a VM can be manually updated after successfully connecting to the Guest OS. ### <a name="disconnect"></a>Will an existing session disconnect during maintenance on the Bastion host? Make sure the user has **read** access to both the VM, and the peered VNet. Addi |Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet|Action| |Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action| +### My privatelink.azure.com cannot resolve to management.privatelink.azure.com ++This can happen when a private DNS zone for privatelink.azure.com is linked to the Bastion virtual network, which causes management.azure.com CNAMEs to resolve to management.privatelink.azure.com behind the scenes. 
Create a CNAME record in your privatelink.azure.com zone that maps management.privatelink.azure.com to arm-frontdoor-prod.trafficmanager.net to enable successful DNS resolution, as shown in the sketch after this entry. +++ ## Next steps For more information, see [What is Azure Bastion](bastion-overview.md). |
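A minimal sketch of creating that CNAME record programmatically, assuming the `@azure/arm-privatedns` and `@azure/identity` packages; the subscription and resource group values are placeholders.

```typescript
import { PrivateDnsManagementClient } from "@azure/arm-privatedns";
import { DefaultAzureCredential } from "@azure/identity";

async function createCname(): Promise<void> {
  const client = new PrivateDnsManagementClient(
    new DefaultAzureCredential(),
    "<subscription-id>"
  );

  await client.recordSets.createOrUpdate(
    "<resource-group>",
    "privatelink.azure.com", // The private DNS zone linked to the Bastion VNet.
    "CNAME",
    "management",            // Relative name => management.privatelink.azure.com
    { ttl: 3600, cnameRecord: { cname: "arm-frontdoor-prod.trafficmanager.net" } }
  );
}
```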
cognitive-services | Batch Transcription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md | Batch transcription jobs are scheduled on a best-effort basis. You can't estimat ## Next steps - [Locate audio files for batch transcription](batch-transcription-audio-data.md)-- [Review quotas and limits](speech-services-quotas-and-limits.md#batch-transcription)+- [Create a batch transcription](batch-transcription-create.md) - [Get batch transcription results](batch-transcription-get.md) |
cognitive-services | Captioning Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md | The [SRT](https://docs.fileformat.com/video/srt/) (SubRip Text) timespan output Welcome to applied Mathematics course 201. ``` -The [WebVTT](https://www.w3.org/TR/webvtt1/#introduction) (Web Video Text Tracks) timespan output format is `hh:mm:ss,fff`. +The [WebVTT](https://www.w3.org/TR/webvtt1/#introduction) (Web Video Text Tracks) timespan output format is `hh:mm:ss.fff`. ``` WEBVTT |
cognitive-services | How To Custom Speech Human Labeled Transcriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-human-labeled-transcriptions.md | Human-labeled transcriptions are word-by-word transcriptions of an audio file. Y A large sample of transcription data is required to improve recognition. We suggest providing between 1 and 20 hours of audio data. The Speech service will use up to 20 hours of audio for training. This guide is broken up by locale, with sections for US English, Mandarin Chinese, and German. -The transcriptions for all WAV files are contained in a single plain-text file. Each line of the transcription file contains the name of one of the audio files, followed by the corresponding transcription. The file name and transcription are separated by a tab (`\t`). +The transcriptions for all WAV files are contained in a single plain-text file (.txt or .tsv). Each line of the transcription file contains the name of one of the audio files, followed by the corresponding transcription. The file name and transcription are separated by a tab (`\t`). For example: -```tsv +```txt speech01.wav speech recognition is awesome speech02.wav the quick brown fox jumped all over the place speech03.wav the lazy dog was not amused |
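A minimal sketch of producing this file programmatically, assuming Node.js; the file names and transcripts reuse the example above, and the output file name is a placeholder.

```typescript
import { writeFileSync } from "node:fs";

// Each entry pairs an audio file name with its human-labeled transcription.
const labeled: Array<[audioFile: string, transcript: string]> = [
  ["speech01.wav", "speech recognition is awesome"],
  ["speech02.wav", "the quick brown fox jumped all over the place"],
  ["speech03.wav", "the lazy dog was not amused"]
];

// One line per audio file: "<name>\t<transcription>".
writeFileSync(
  "trans.txt",
  labeled.map(([file, text]) => `${file}\t${text}`).join("\n"),
  "utf8"
);
```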
cognitive-services | Pronunciation Assessment Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md | The complete transcription is shown in the `text` attribute. You can see accurac ## Next steps - Use [pronunciation assessment with the Speech SDK](how-to-pronunciation-assessment.md)-- Read the blog about [use cases](https://techcommunity.microsoft.com/t5/azure-ai-blog/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501)+- Read the blog about [use cases](https://aka.ms/pronunciationassessment/techblog) |
cognitive-services | Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/developer-guide.md | As you use these features in your application, use the following documentation a | Language → Latest GA version |Reference documentation |Samples | ||||-| [C#/.NET → v5.2.0](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0) | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) | -| [Java → v5.2.0](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) | -| [JavaScript → v5.1.0](https://www.npmjs.com/package/@azure/ai-text-analytics/v/5.1.0) | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) | +| [C#/.NET → v5.2.0](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0) | [C# documentation](/dotnet/api/azure.ai.textanalytics) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) | +| [Java → v5.2.0](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0) | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) | +| [JavaScript → v1.0.0](https://www.npmjs.com/package/@azure/ai-language-text/v/1.0.0) | [JavaScript documentation](/javascript/api/overview/azure/ai-language-text-readme) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) | | [Python → v5.2.0](https://pypi.org/project/azure-ai-textanalytics/5.2.0/) | [Python documentation](/python/api/overview/azure/ai-language-conversations-readme) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) | |
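For instance, a minimal sketch with the GA JavaScript package listed above (`@azure/ai-language-text` v1.0.0); the endpoint and key are placeholders.

```typescript
import { TextAnalysisClient, AzureKeyCredential } from "@azure/ai-language-text";

async function analyzeSentiment(): Promise<void> {
  const client = new TextAnalysisClient(
    "https://<your-language-resource>.cognitiveservices.azure.com",
    new AzureKeyCredential("<your-key>")
  );

  // Run one of the prebuilt actions over a batch of documents.
  const [result] = await client.analyze("SentimentAnalysis", [
    "The rooms were beautiful and the staff was helpful."
  ]);

  if (result.error === undefined) {
    console.log(result.sentiment); // e.g. "positive"
  }
}
```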
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md | |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md | |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md | |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md | |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md | |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md | |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md | |
communication-services | Teams User Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md | The following list presents the set of features that are currently available in | | Interact with a poll | ❌ | | | Interact with a Q&A | ❌ | | Accessibility | Receive closed captions | ❌ |-| Advanced call routing | Does start a call and add user operations honor forwarding rules | ✔️ | +| Advanced call routing | Start a call and add user operations honor forwarding rules | ✔️ | | | Read and configure call forwarding rules | ❌ |-| | Does start a call and add user operations honor simultaneous ringing | ✔️ | +| | Start a call and add user operations honor simultaneous ringing | ✔️ | | | Read and configure simultaneous ringing | ❌ |+| | Start a call and add user operations honor "Do not disturb" status | ✔️ | | | Placing participant on hold plays music on hold | ❌ | | | Being placed by Teams user on Teams client on hold plays music on hold | ✔️ | | | Park a call | ❌ | |
communication-services | Get Started Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-live-stream.md | + + Title: Quickstart - Add live stream to your app ++description: In this quickstart, you'll learn how to add live stream calling capabilities to your app using Azure Communication Services. ++++ Last updated : 06/30/2022+++++++# Live stream quick start ++Live streaming enables Contoso to engage thousands of online attendees by adding interactive live audio and video streaming functionality into their web and
mobile applications, reaching audiences no matter where they are. Interactive live streaming is the ability to broadcast media content to thousands of online
attendees while enabling some attendees to share their live audio and video, interact via chat, and engage with metadata content such as reactions, polls, quizzes, and ads. ++## Prerequisites ++- A [Rooms](../rooms/get-started-rooms.md) meeting is needed for role-based streaming. +- The quickstart examples here are available with the private preview version [1.11.0-alpha.20230124.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.11.0-alpha.20230124.1) of the calling Web SDK. Make sure to use that version or higher when trying this quickstart. ++## Live streaming with Rooms +Room participants can be assigned one of the following roles: **Presenter**, **Attendee**, and **Consumer**. By default, a user is assigned the **Consumer** role if no other role is assigned. ++Participants with the `Consumer` role receive only the live stream. They can't speak or share video or a screen, so developers shouldn't show the unmute, share video, or share screen options to these users. Live stream supports both open and closed Rooms. In open Rooms, the default role is `Consumer`. +Participants with other roles receive both the real-time and live streams, and developers can choose either stream to play. +Check [participant roles and permissions](../../concepts/rooms/room-concept.md#predefined-participant-roles-and-permissions) to learn more about each role's capabilities. ++### Place a Rooms call (start live streaming) +Live streaming starts when the Rooms call starts. ++```js +const context = { roomId: '<RoomId>' } ++const call = callAgent.join(context); +``` ++### Receive live stream +Contoso can use `Features.LiveStream` to get the live stream and play it. ++```typescript +call.feature(Features.LiveStream).on('liveStreamsUpdated', e => { + // Subscribe to new live video streams that were added. + e.added.forEach(liveVideoStream => { + subscribeToLiveVideoStream(liveVideoStream) + }); + // Unsubscribe from live video streams that were removed. + e.removed.forEach(liveVideoStream => { + console.log('Live video stream was removed.'); + }); +}); ++const subscribeToLiveVideoStream = async (liveVideoStream) => { + // Create a video stream renderer for the live video stream. + let videoStreamRenderer = new VideoStreamRenderer(liveVideoStream); + let view; + const renderVideo = async () => { + try { + // Create a renderer view for the live video stream. + view = await videoStreamRenderer.createView(); + // Attach the renderer view to the UI. + liveVideoContainer.hidden = false; + liveVideoContainer.appendChild(view.target); + } catch (e) { + console.warn(`Failed to createView, reason=${e.message}, code=${e.code}`); + } + } ++ // Live video stream is available during initialization.
+ await renderVideo(); +}; ++``` ++### Count participants in both the real-time and streaming media lanes +The Web SDK already exposes `Call.totalParticipantCount` (available in the beta release), which counts all participants (Presenter, Attendee, Consumer, participants in the lobby, and so on). We've added a new API, `Call.feature(Features.LiveStream).participantCount`, under the `LiveStream` feature to provide the count of streaming participants. `Call.feature(Features.LiveStream).participantCount` represents the number of participants receiving the streaming media only. ++```typescript +call.feature(Features.LiveStream).on('participantCountChanged', e => { + // Get the current streaming participant count. + const streamingParticipantCount = call.feature(Features.LiveStream).participantCount; +}); +``` ++`call.feature(Features.LiveStream).participantCount` represents the total count of participants in the streaming media lane. Contoso can find the count of participants in the real-time media lane by subtracting it from the total: number of real-time media participants = `call.totalParticipantCount` - `call.feature(Features.LiveStream).participantCount`. ++## Next steps +For more information, see the following articles: ++- Check out our [calling hero sample](../../samples/calling-hero-sample.md) +- Get started with the [UI Library](https://aka.ms/acsstorybook) +- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web) +- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md) |
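To make the subtraction concrete, here's a minimal sketch, assuming the `call` object and `Features` import from the snippets above and the beta `totalParticipantCount` API:

```typescript
const liveStreamFeature = call.feature(Features.LiveStream);

liveStreamFeature.on('participantCountChanged', () => {
  // Participants receiving streaming media only.
  const streamingCount = liveStreamFeature.participantCount;
  // Real-time media participants = total participants - streaming participants.
  const realTimeCount = call.totalParticipantCount - streamingCount;
  console.log(`Real-time participants: ${realTimeCount}, streaming participants: ${streamingCount}`);
});
```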
connectors | Connectors Create Api Mq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md | tags: connectors [!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)] -This article shows how to access an MQ server that's either on premises or in Azure from a workflow in Azure Logic Apps with the MQ connector. You can then create automated workflows that receive and send messages stored in your MQ server. For example, your workflow can browse for a single message in a queue and then run other actions. The MQ connector includes a Microsoft MQ client that communicates with a remote MQ server across a TCP/IP network. +This article shows how to access an Azure-hosted or on-premises MQ server from a workflow in Azure Logic Apps using the MQ connector. You can then create automated workflows that receive and send messages stored in your MQ server. For example, your workflow can browse for a single message in a queue and then run other actions. ++The MQ connector provides a wrapper around a Microsoft MQ client, which includes all the messaging capabilities to communicate with a remote MQ server across a TCP/IP network. This connector defines the connections, operations, and parameters to call the MQ client. ## Supported IBM WebSphere MQ versions The MQ connector has different versions, based on [logic app type and host envir | Logic app | Environment | Connection version | |--|-|--|-| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. This connector provides only actions, not triggers. For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [Managed connectors in Azure Logic Apps](managed.md) | -| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label. For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [Managed connectors in Azure Logic Apps](managed.md) | -| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and built-in connector, which appears in the designer under the **Built-in** label and is service provider based. The built-in version differs in the following ways: <br><br>- The built-in version includes actions *and* triggers. <br><br>- The built-in version can connect directly to an MQ server and access Azure virtual networks. You don't need an on-premises data gateway. <br><br>- The built-in version supports Transport Layer Security (TLS) encryption for data in transit, message encoding for both the send and receive operations, and Azure virtual network integration when your logic app uses the Azure Functions Premium plan <br><br>For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [MQ built-in connector reference](/azure/logic-apps/connectors/built-in/reference/mq/) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) | +| **Consumption** | Multi-tenant Azure Logic Apps and Integration Service Environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label. This connector provides only actions, not triggers. 
In on-premises MQ server scenarios, the managed connector supports server-only authentication with TLS (SSL) encryption. <br><br>For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [Managed connectors in Azure Logic Apps](managed.md) | +| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (ASE v3 with Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and built-in connector, which appears in the designer under the **Built-in** label and is service provider based. The built-in version differs in the following ways: <br><br>- The built-in version includes actions *and* triggers. <br><br>- The built-in version can connect directly to an MQ server and access Azure virtual networks. You don't need an on-premises data gateway. <br><br>- The built-in version supports both server authentication and server-client authentication with TLS (SSL) encryption for data in transit, message encoding for both the send and receive operations, and Azure virtual network integration. <br><br>For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [MQ built-in connector reference](/azure/logic-apps/connectors/built-in/reference/mq/) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) | ++## Authentication with TLS (SSL) encryption ++Based on whether you use the MQ managed connector (Consumption or Standard workflows) or the MQ built-in connector (Standard workflows only), the MQ connector supports one or both of the following authentication directions: ++| Authentication | Supported logic app type and MQ connector | Process | +|-|-|| +| Server-only <br>(one-way) | - Consumption: Managed only <br><br>- Standard: Managed or built-in | For server authentication, your MQ server sends a private key certificate, either publicly trusted or non-publicly trusted, to your logic app client for validation. The MQ connector validates the incoming server certificate for authenticity against public key certificates, also known as "signer" certificates, by using standard .NET SSL stream validation. <br><br>The logic app doesn't send a client certificate. | +| Server-client <br>(two-way) | - Consumption: Not supported <br><br>- Standard: Built-in only | For server authentication, see the previous row. <br><br>For client authentication, the logic app client sends a private key certificate to your MQ server for validation. The MQ server validates the incoming client certificate for authenticity, also by using a public key certificate. | ++### Notes about private key and public key certificates ++- The certificate that requires validation is always a private key certificate. The certificate used to perform the validation is always a public key certificate. ++- A publicly trusted private key certificate is issued by a recognized [Certificate Authority](https://www.ssl.com/faqs/what-is-a-certificate-authority/). A non-publicly trusted private key certificate includes self-signed, private CA, and similar certificates. ++- To validate a private key certificate sent from your MQ server, the MQ connector uses public key certificates that usually exist on your logic app's virtual machine host in the host's [Trusted Root Certification Authorities (CA) Store](/windows-hardware/drivers/install/trusted-root-certification-authorities-certificate-store).
++ However, if the host doesn't have all the required public key certificates, or if your MQ server sends a non-publicly trusted private key certificate, you need to take extra steps. For more information, see [Prerequisites](#prerequisites). ++- To validate a client's private key certificate sent from your Standard logic app, the MQ server uses public key certificates that exist in your MQ server's certificate store. To add a private key certificate for your logic app to use as a client certificate, see [Add a private key certificate](#add-private-key-certificate). ## Limitations -* The MQ connector doesn't support segmented messages. ++* Authentication with TLS (SSL) encryption ++ | MQ connector | Supported authentication direction | + |--|| + | Managed | Server-only (one-way) | + | Built-in | - Server-client (two-way) <br>- Server-only (one-way) | ++* Server certificate validation ++ The MQ built-in connector doesn't validate the server certificate's expiration date or the certificate chain. ++* Character set conversions -* The MQ connector doesn't use the message's **Format** field and doesn't make any character set conversions. The connector only puts whatever data appears in the message field into a JSON message and sends the message along. + - The MQ managed connector doesn't make any character set conversions or use the message's **Format** field. The connector only copies whatever data appears in the message field and sends the message along. ++ - The MQ built-in connector can make character set conversions, but only when the data format is a string. If you supply a different character set ID (code page), the connector attempts to convert the data to the new code page. ++* The MQ connector doesn't support segmented messages. For more information, review the [MQ managed connector reference](/connectors/mq) or the [MQ built-in connector reference](/azure/logic-apps/connectors/built-in/reference/mq/). For more information, review the [MQ managed connector reference](/connectors/mq * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* If you're using an on-premises MQ server, [install the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a server within your network. For the MQ connector to work, the server with the on-premises data gateway also must have .NET Framework 4.6 installed. +* To connect with an on-premises MQ server, you must [install the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a server within your network. For the MQ connector to work, the server with the on-premises data gateway also must have .NET Framework 4.6 installed. After you install the gateway, you must also create a data gateway resource in Azure. The MQ connector uses this resource to access your MQ server. For more information, review [Set up the data gateway connection](../logic-apps/logic-apps-gateway-connection.md). For more information, review the [MQ managed connector reference](/connectors/mq > * Your MQ server is publicly available or available in Azure. > * You're going to use the MQ built-in connector, not the managed connector. -* The logic app workflow where you want to access your MQ server. +* The logic app resource and workflow where you want to access your MQ server. ++ * To use the MQ managed connector with the on-premises data gateway, your logic app resource must use the same location as your gateway resource in Azure.
- * If you're using the MQ managed connector, which doesn't provide any triggers, make sure that your workflow already starts with a trigger or that you first add a trigger to your workflow. For example, you can use the [Recurrence trigger](../connectors/connectors-native-recurrence.md). + * To use the MQ managed connector, which doesn't provide any triggers, make sure that your workflow starts with a trigger or that you first add a trigger to your workflow. For example, you can use the [Recurrence trigger](../connectors/connectors-native-recurrence.md). - * If you're using a trigger from the MQ built-in connector, make sure that you start with a blank workflow. + * To use a trigger from the MQ built-in connector, make sure that you start with a blank workflow. - * If you're using the on-premises data gateway, your logic app resource must use the same location as your gateway resource in Azure. +* Certificate requirements for authentication with TLS (SSL) encryption ++ * MQ managed connector ++ | MQ server | Requirements | + |--|--| + | Azure-hosted MQ server | The MQ server must send a private key certificate that's issued by a trusted [certificate authority](https://www.ssl.com/faqs/what-is-a-certificate-authority/) to your logic app client for validation. | + | On-premises MQ server using on-premises data gateway | To send a non-publicly trusted private key certificate such as a self-signed or private CA certificate, you have to add the certificate to the [Trusted Root Certification Authorities (CA) Store](/windows-hardware/drivers/install/trusted-root-certification-authorities-certificate-store) on the local computer with the on-premises data gateway installation. For this task, you can use [Windows Certificate Manager (certmgr.exe)](/dotnet/framework/tools/certmgr-exe-certificate-manager-tool). | ++ * MQ built-in connector ++ Standard logic apps use [Azure App Service](../app-service/overview.md) as the host platform and to handle certificates. For Standard logic apps on any [WS* plan](../logic-apps/logic-apps-pricing.md#standard-pricing-tiers), you can add public, private, custom, or self-signed certificates to the [local machine certificate store](/windows-hardware/drivers/install/local-machine-and-current-user-certificate-stores). However, if you have to add certificates to the Trusted Root CA Store on the virtual machine host where your Standard logic app runs, App Service requires that your logic app run in an isolated [App Service Environment v3 (ASE) with Windows plans only](../app-service/environment/overview.md) and an [ASE-based App Service plan](../app-service/overview-hosting-plans.md). For more information, see [Certificates and the App Service Environment](../app-service/environment/overview-certificates.md). ++ * MQ server authentication ++ The following table describes the certificate prerequisites, based on your scenario: ++ | Incoming MQ server certificate | Requirements | + |--|| + | Publicly trusted private key certificate issued by a trusted [certificate authority](https://www.ssl.com/faqs/what-is-a-certificate-authority/) | Usually, your logic app doesn't need any other setup because your logic app's virtual machine host typically has the required public key certificates to validate the incoming MQ server's private key certificate. To check that these public key certificates exist, follow the steps to [View and confirm thumbprints for existing public key certificates](#view-existing-public-key-certificates).
<br><br>If the virtual machine host doesn't have all the required public key certificates to validate the incoming MQ server's private key certificate and any chaining certificates, complete the following steps: <br><br>1. Recreate your Standard logic app using an [Azure App Service Environment v3 (ASE) with Windows plans only and an ASE-based App Service plan](../app-service/environment/overview.md). <br><br>2. Manually [add the required public key certificates to the host's Trusted Root CA Store](#view-existing-public-key-certificates). | + | Non-publicly trusted private key certificate, such as a self-signed or private CA certificate | Your logic app's virtual machine host won't have the required public key certificates in the host's Trusted Root CA Store to validate the MQ server's certificate chain. In this case, complete the following steps: <br><br>1. Recreate your Standard logic app using an [Azure App Service Environment v3 (ASE) with Windows plans only and an ASE-based App Service plan](../app-service/environment/overview.md). <br><br>2. Manually [add the required public key certificates to the host's Trusted Root CA Store](#view-existing-public-key-certificates). <br><br>For more information, see the following documentation: <br>- [Certificate bindings and the App Service Environment](../app-service/environment/certificates.md) <br>- [Add and manage TLS/SSL certificates in Azure App Service](../app-service/configure-ssl-certificate.md) | ++ * Logic app client authentication ++ You can add a private key certificate to send as the client certificate and then specify the certificate's thumbprint value in the connection details for the MQ built-in connector. For more information, see [Add a private key certificate](#add-private-key-certificate). ++ **Recommendation**: Upgrade to MQ server 9.0 or later. Also, on your MQ server, make sure to set up the server-connection channel with a cipher suite that matches the cipher specification used by your client connection, for example, **ANY_TLS12_OR_HIGHER**. For more information, see the next item about [Cipher requirements](#cipher-requirements). ++<a name="cipher-requirements"></a> ++* Cipher specification requirements ++ The MQ server requires that you define the cipher specification for connections that use TLS (SSL) encryption. This cipher specification must match the cipher suites that are supported, chosen, and used by the operating system where the MQ server runs. Ultimately, the cipher specification used by the client connection must match the cipher suites set up on the server-connection channel on the MQ server. ++ For more information, see [Connection and authentication problems](#connection-problems). <a name="add-trigger"></a> These steps use the Azure portal, but with the appropriate Azure Logic Apps exte 1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. -1. On the designer, select **Choose an operation**, if not already selected. +1. On the designer, select **Choose an operation**, if not selected. 1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **mq**. The steps to add and use an MQ action differ based on whether your workflow uses ## Test your workflow -To check that your workflow returns the results that you expect, run your workflow and then review the outputs from your workflow's run history.
+To check that your workflow returns the results that you expect, run your workflow, and then review the outputs from your workflow's run history. 1. Run your workflow. To check that your workflow returns the results that you expect, run your workfl * To review more output details, select **Show raw outputs**. If you set **IncludeInfo** to **true**, more output is included. +<a name="view-add-certificates"></a> ++## View and add certificates for authentication with TLS (SSL) encryption ++The following information applies only to Standard logic app workflows for the MQ built-in connector using either server-only or server-client authentication with TLS (SSL) encryption. ++<a name="view-existing-public-key-certificates"></a> ++### View and confirm thumbprints for existing public key certificates ++To check that the thumbprints for the required public key certificates exist on your Standard logic app's virtual machine host in the Trusted Root CA Store, follow these steps to run the [`cert` PowerShell script](/powershell/module/microsoft.powershell.security/about/about_certificate_provider) from your Standard logic app's resource menu. ++1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource. On the logic app resource menu, under **Development Tools**, select **Advanced Tools** > **Go**. ++1. From the Kudu **Debug console** menu, select **PowerShell**. ++1. After the PowerShell window appears, from the PowerShell command prompt, run the following script: ++ `dir cert:\localmachine\root` ++ The PowerShell window lists the existing thumbprints and descriptions, for example: ++  ++<a name="add-public-key-certificate"></a> ++### Add a public key certificate ++To add a public key certificate to the Trusted Root CA Store on the virtual machine host where your Standard logic app runs, follow these steps: ++1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource. On the logic app resource menu, under **Settings**, select **TLS/SSL settings (classic)**. ++1. On the **TLS/SSL settings (classic)** page, select the **Public Key Certificates (.cer)** tab, and then select **Upload Public Key Certificate**. ++1. On the **Add Public Key Certificate (.cer)** pane that opens, enter a name to describe the certificate. Find and select the public key certificate file (.cer). When you're done, select **Upload**. ++1. After you add the certificate, from the **Thumbprint** column, copy the certificate's thumbprint value. ++  ++1. On the logic app resource menu, select **Configuration**. ++1. On the **Application settings** tab, select **New application setting**. Add a new application setting named **WEBSITE_LOAD_ROOT_CERTIFICATES**, and enter the certificate's thumbprint value that you previously copied. If you have multiple certificate thumbprint values, make sure to separate each value with a comma (**,**). ++ For more information, see [Edit host and app settings for Standard logic apps in single-tenant Azure Logic Apps](../logic-apps/edit-app-settings-host-settings.md#manage-app-settings). ++ > [!NOTE] + > + > If you specify a thumbprint for a private CA certificate, the MQ built-in connector doesn't run any certificate validation, + > such as checking the certificate's expiration date or source. If standard .NET SSL validation fails, the connector + > only compares any thumbprint value that's passed in against the value in the **WEBSITE_LOAD_ROOT_CERTIFICATES** setting. ++1. 
If the added certificate doesn't appear in the public key certificates list, on the toolbar, select **Refresh**. ++<a name="add-private-key-certificate"></a> ++### Add a private key certificate ++To add a private key certificate to the Trusted Root CA Store on the virtual machine host where your Standard logic app runs, follow these steps: ++1. In the [Azure portal](https://portal.azure.com), open your logic app resource. On the logic app resource menu, under **Settings**, select **TLS/SSL settings (classic)**. ++1. On the **TLS/SSL settings (classic)** page, select the **Private Key Certificates (.pfx)** tab, and then select **Upload Certificate**. ++1. On the **Add Private Key Certificate (.pfx)** pane that opens, find and select the private key certificate file (.pfx), and then enter the certificate password. When you're done, select **Upload**. ++1. After you add the certificate, from the **Thumbprint** column, copy the certificate's thumbprint value. ++  ++1. On the logic app resource menu, select **Configuration**. ++1. On the **Application settings** tab, select **New application setting**. Add a new application setting named **WEBSITE_LOAD_CERTIFICATES**, and enter the certificate's thumbprint value that you previously copied. ++ For more information, see [Edit host and app settings for Standard logic apps in single-tenant Azure Logic Apps](../logic-apps/edit-app-settings-host-settings.md#manage-app-settings). ++1. If the added certificate doesn't appear in the private key certificates list, on the toolbar, select **Refresh**. ++1. When you create a connection using the MQ built-in connector, in the connection information box, select **Use TLS**. ++1. In the **Client Cert Thumbprint** property, enter the previously copied thumbprint value for the private key certificate, which enables server-client (two-way) authentication. If you don't enter a thumbprint value, the connector uses server-only (one-way) authentication. ++  + ## Troubleshoot problems ### Failures with browse or receive actions If you run a browse or receive action on an empty queue, the action fails with t ### Connection and authentication problems -When your workflow tries connecting to your on-premises MQ server, you might get the following error: +When your workflow uses the MQ managed connector to connect to your on-premises MQ server, you might get the following error: `"MQ: Could not Connect the Queue Manager '<queue-manager-name>': The Server was expecting an SSL connection."` -* If you're using the MQ connector directly in Azure, the MQ server needs to use a certificate that's issued by a trusted [certificate authority](https://www.ssl.com/faqs/what-is-a-certificate-authority/). +* The MQ server needs to provide a certificate that's issued by a trusted [certificate authority](https://www.ssl.com/faqs/what-is-a-certificate-authority/). * The MQ server requires that you define the cipher specification to use with TLS connections. However, for security purposes and to include the best security suites, the Windows operating system sends a set of supported cipher specifications. The operating system where the MQ server runs chooses the suites to use. To make the configuration match, you have to change your MQ server setup so that the cipher specification matches the option chosen in the TLS negotiation. - When you try to connect, the MQ server logs an event message that the connection attempt failed because the MQ server chose the incorrect cipher specification. 
The event message contains the cipher specification that the MQ server chose from the list. In the channel configuration, update the cipher specification to match the cipher specification in the event message. + When you try to connect, the MQ server logs an event message that the connection attempt failed because the MQ server chose the incorrect cipher specification. The event message contains the cipher specification that the MQ server chose from the list. In the server-connection channel configuration, update the cipher specification to match the cipher specification in the event message. ## Next steps |
connectors | Connectors Native Recurrence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md | The Recurrence trigger is part of the built-in Schedule connector and runs nativ > the first recurrence runs immediately when you save or deploy the logic app, despite your trigger's recurrence setup. To avoid this behavior, > provide a start date and time for when you want the first recurrence to run. >+ > If you deploy a disabled Consumption workflow that has a Recurrence trigger using an ARM template, the trigger + > instantly fires when you enable the workflow unless you set the **Start time** parameter before deployment. + > > If a recurrence doesn't specify any other advanced scheduling options such as specific times to run future recurrences, > those recurrences are based on the last run time. As a result, the start times for those recurrences might drift due to > factors such as latency during storage calls. To make sure that your logic app doesn't miss a recurrence, especially when > the frequency is in days or longer, try the following options:- > + > > * Provide a start date and time for the recurrence and the specific times to run subsequent recurrences. You can use the > properties named **At these hours** and **At these minutes**, which are available only for the **Day** and **Week** frequencies. > |
container-apps | Azure Resource Manager Api Spec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md | The following example ARM template deploys a container app. "secretRef": "mysecret" } ],+ "command": [ + "npm", + "start" + ], "resources": { "cpu": 0.5, "memory": "1Gi" properties: registries: - passwordSecretRef: myregistrypassword server: myregistry.azurecr.io- username: myregistrye + username: myregistry dapr: appId: mycontainerapp appPort: 80 properties: value: 80 - name: secret_name secretRef: mysecret+ command: + - npm + - start resources: cpu: 0.5 memory: 1Gi |
container-apps | Storage Mounts Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/storage-mounts-azure-files.md | In this tutorial, you learn how to: - Install the latest version of the [Azure CLI](/cli/azure/install-azure-cli). -## Set up +## Set up the environment The following commands help you define variables and ensure your Container Apps extension is up to date. -1. Log in to the Azure CLI. +1. Sign in to the Azure CLI. # [Bash](#tab/bash) Now you can update the container app configuration to support the storage mount. 1. Open *app.yaml* in a code editor. -1. Add a reference to the storage volumes to the `template` definition. +1. Replace the `volumes: null` definition in the `template` section with a `volumes:` definition referencing the storage volume. The template section should look like the following: ```yml template: Now you can update the container app configuration to support the storage mount. - name: my-azure-file-volume storageName: mystoragemount storageType: AzureFile+ containers: + - image: nginx + name: my-container-app + volumeMounts: + - volumeName: my-azure-file-volume + mountPath: /var/log/nginx + resources: + cpu: 0.5 + ephemeralStorage: 3Gi + memory: 1Gi + initContainers: null + revisionSuffix: '' + scale: + maxReplicas: 1 + minReplicas: 1 + rules: null ``` The new `template.volumes` section includes the following properties. |
cosmos-db | Docker Emulator Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/docker-emulator-linux.md | Since the Azure Cosmos DB Emulator provides an emulated environment that runs on - The Linux emulator supports a maximum ID property size of 254 characters. +- The Linux emulator supports a maximum of five JOIN statements per query. + ## Run the Linux Emulator on macOS > [!NOTE] |
cosmos-db | Aggregate Count | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/aggregate-count.md | COUNT(<scalar_expr>) ## Arguments *scalar_expr* - Is any scalar expression + Any expression that results in a scalar value ## Return types -Returns a numeric expression. +Returns a numeric (scalar) value ## Examples The following example returns the total count of items in a container: ```sql SELECT COUNT(1) FROM c-``` -COUNT can take any scalar expression as input. The below query will produce an equivalent results: +``` ++In the first example, the parameter of the `COUNT` function is any scalar value or expression, but the parameter doesn't influence the result. The first example passes the scalar value `1` to the `COUNT` function. The second example passes the scalar expression `2 + 3` instead and produces an identical result. ```sql-SELECT COUNT(2) +SELECT COUNT(2 + 3) FROM c ``` |
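To see the count query run end to end, here's a hedged sketch using the `@azure/cosmos` JavaScript SDK; the endpoint, key, database, and container names are placeholders. It uses `VALUE` to unwrap the aggregate so the result arrives as a plain number instead of a wrapped object:

```typescript
import { CosmosClient } from "@azure/cosmos";

// Placeholder connection details; substitute your account's values.
const client = new CosmosClient({
  endpoint: "https://<your-account>.documents.azure.com",
  key: "<your-key>",
});

async function countItems() {
  const container = client.database("<database>").container("<container>");

  // VALUE unwraps the aggregate, so the query returns a bare number.
  const { resources } = await container.items
    .query("SELECT VALUE COUNT(1) FROM c")
    .fetchAll();

  console.log(`Item count: ${resources[0]}`);
}

countItems().catch(console.error);
```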
cost-management-billing | Enterprise Rest Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/enterprise-rest-apis.md | Title: Azure Enterprise REST APIs description: This article describes the REST APIs for use with your Azure enterprise enrollment. Previously updated : 11/27/2022 Last updated : 02/14/2023 -+ # Azure Enterprise REST APIs In the Manage API Access Keys window, you can perform the following tasks: - View start and end dates for access keys - Disable access keys +> [!NOTE] +> 1. If you're an Enrollment Admin, you can generate the keys only from the Usage & Charges blade at the enrollment level, not at the account or department level. +> 2. If you're a Department owner only, you can generate the keys at the department level and at the account level for the accounts that you own. +> 3. If you're an Account owner only, you can generate the keys at the account level only. + ### Generate the primary or secondary API key 1. Sign in to the Azure portal as an enterprise administrator. You might receive 400 and 404 (unavailable) errors returned from an API call whe ## Next steps - Azure EA portal administrators should read [Azure EA portal administration](ea-portal-administration.md) to learn about common administrative tasks.-- If you need help with troubleshooting Azure EA portal issues, see [Troubleshoot Azure EA portal access](ea-portal-troubleshoot.md).+- If you need help with troubleshooting Azure EA portal issues, see [Troubleshoot Azure EA portal access](ea-portal-troubleshoot.md). |
data-factory | Azure Ssis Integration Runtime Express Virtual Network Injection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-express-virtual-network-injection.md | |
data-factory | Azure Ssis Integration Runtime Standard Virtual Network Injection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-standard-virtual-network-injection.md | |
data-factory | Azure Ssis Integration Runtime Virtual Network Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.md | |
data-factory | Built In Preinstalled Components Ssis Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/built-in-preinstalled-components-ssis-integration-runtime.md | Last updated 02/15/2022 # Built-in and preinstalled components on Azure-SSIS Integration Runtime This article lists all built-in and preinstalled components, such as clients, drivers, providers, connection managers, data sources/destinations/transformations, and tasks on SSIS Integration Runtime (IR) in Azure Data Factory (ADF) or Synapse Pipelines. To provision SSIS IR in ADF, follow the instructions in [Provision Azure-SSIS IR](./tutorial-deploy-ssis-packages-azure.md). |
data-factory | Concept Managed Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md | With Managed Airflow, Azure Data Factory now offers multi-orchestration capabili * NorthEurope * WestEurope * SouthEastAsia-* EastUS2 -* WestUS2 -* GermanyWestCentral -* AustraliaEast +* EastUS2 (coming soon) +* WestUS2 (coming soon) +* GermanyWestCentral (coming soon) +* AustraliaEast (coming soon) > [!NOTE] > By GA, all ADF regions will be supported. The Airflow environment region is defaulted to the Data Factory region and is not configurable, so ensure you use a Data Factory in the above supported region to be able to access the Managed Airflow preview. |
data-factory | Configure Azure Ssis Integration Runtime Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/configure-azure-ssis-integration-runtime-performance.md | |
data-factory | Configure Bcdr Azure Ssis Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/configure-bcdr-azure-ssis-integration-runtime.md | Last updated 02/15/2022 # Configure Azure-SSIS integration runtime for business continuity and disaster recovery (BCDR) Azure SQL Database/Managed Instance and SQL Server Integration Services (SSIS) in Azure Data Factory (ADF) or Synapse Pipelines can be combined as the recommended all-Platform as a Service (PaaS) solution for SQL Server migration. You can deploy your SSIS projects into SSIS catalog database (SSISDB) hosted by Azure SQL Database/Managed Instance and run your SSIS packages on Azure SSIS integration runtime (IR) in ADF or Synapse Pipelines. |
data-factory | Connector Azure Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md | For general information about Azure Storage service principal authentication, se To use service principal authentication, follow these steps: -1. Register an application entity in Azure Active Directory (Azure AD) by following [Register your application with an Azure AD tenant](../storage/common/storage-auth-aad-app.md#register-your-application-with-an-azure-ad-tenant). Make note of these values, which you use to define the linked service: +1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service: - Application ID - Application key |
data-factory | Connector Azure Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md | The Azure Cosmos DB for NoSQL connector supports the following authentication ty To use service principal authentication, follow these steps. -1. Register an application entity in Azure Active Directory (Azure AD) by following the steps in [Register your application with an Azure AD tenant](../storage/common/storage-auth-aad-app.md#register-your-application-with-an-azure-ad-tenant). Make note of the following values, which you use to define the linked service: +1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service: - Application ID - Application key |
data-factory | Connector Azure Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-explorer.md | The Azure Data Explorer connector supports the following authentication types. S To use service principal authentication, follow these steps to get a service principal and to grant permissions: -1. Register an application entity in Azure Active Directory by following the steps in [Register your application with an Azure AD tenant](../storage/common/storage-auth-aad-app.md#register-your-application-with-an-azure-ad-tenant). Make note of the following values, which you use to define the linked service: +1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service: - Application ID - Application key |
data-factory | Connector Azure Data Lake Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md | To use storage account key authentication, the following properties are supporte To use service principal authentication, follow these steps. -1. Register an application entity in Azure Active Directory (Azure AD) by following the steps in [Register your application with an Azure AD tenant](../storage/common/storage-auth-aad-app.md#register-your-application-with-an-azure-ad-tenant). Make note of the following values, which you use to define the linked service: +1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service: - Application ID - Application key |
data-factory | Connector Dynamics Ax | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-ax.md | The following sections provide details about properties you can use to define Da To use service principal authentication, follow these steps: -1. Register an application entity in Azure Active Directory (Azure AD) by following [Register your application with an Azure AD tenant](../storage/common/storage-auth-aad-app.md#register-your-application-with-an-azure-ad-tenant). Make note of the following values, which you use to define the linked service: +1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service: - Application ID - Application key |
data-factory | Connector Sharepoint Online List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sharepoint-online-list.md | Specifically, this SharePoint List Online connector uses service principal authe The SharePoint List Online connector uses service principal authentication to connect to SharePoint. Follow these steps to set it up: -1. Register an application entity in Azure Active Directory (Azure AD) by following [Register your application with an Azure AD tenant](../storage/common/storage-auth-aad-app.md#register-your-application-with-an-azure-ad-tenant). Make note of the following values, which you use to define the linked service: +1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service: - Application ID - Application key |
data-factory | Continuous Integration Delivery Improvements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md | Follow these steps to get started: displayName: 'Install npm package' # Validates all of the Data Factory resources in the repository. You'll get the same validation errors as when "Validate All" is selected.- # Enter the appropriate subscription and name for the source factory. + # Enter the appropriate subscription and name for the source factory. Either the "Validate" or the "Validate and Generate ARM template" option is required to perform validation; running both is unnecessary. - task: Npm@1 inputs: Follow these steps to get started: 5. Save and run. If you used the YAML, it gets triggered every time the main branch is updated. > [!NOTE]-> The generated artifacts already contain pre and post deployment scripts for the triggers so it isn't necessary to add these manually. +> The generated artifacts already contain pre- and post-deployment scripts for the triggers, so it isn't necessary to add these manually. However, when deploying, you still need to follow the [documentation on stopping and starting triggers](continuous-integration-delivery-sample-script.md#script-execution-and-parameters) to execute the provided script. ## Next steps |
data-factory | Create Azure Ssis Integration Runtime Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-portal.md | |
data-factory | Create Azure Ssis Integration Runtime Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-powershell.md | |
data-factory | Create Azure Ssis Integration Runtime Resource Manager Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-resource-manager-template.md | |
data-factory | Create Azure Ssis Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime.md | |
data-factory | How To Clean Up Ssisdb Logs With Elastic Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-clean-up-ssisdb-logs-with-elastic-jobs.md | |
data-factory | How To Configure Azure Ssis Ir Custom Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md | Last updated 08/09/2022 # Customize the setup for an Azure-SSIS Integration Runtime You can customize your Azure-SQL Server Integration Services (SSIS) Integration Runtime (IR) in Azure Data Factory (ADF) or Synapse Pipelines via custom setups. They allow you to add your own steps during the provisioning or reconfiguration of your Azure-SSIS IR. |
data-factory | How To Configure Azure Ssis Ir Enterprise Edition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition.md | |
data-factory | How To Develop Azure Ssis Ir Licensed Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-develop-azure-ssis-ir-licensed-components.md | Last updated 08/09/2022 # Install paid or licensed custom components for the Azure-SSIS integration runtime This article describes how an ISV can develop and install paid or licensed custom components for SQL Server Integration Services (SSIS) packages that run in Azure in the Azure-SSIS integration runtime, and proxy with self-hosted integration runtime. |
data-factory | How To Invoke Ssis Package Managed Instance Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-managed-instance-agent.md | Last updated 08/09/2022 # Run SSIS packages by using Azure SQL Managed Instance Agent This article describes how to run a SQL Server Integration Services (SSIS) package by using Azure SQL Managed Instance Agent. This feature provides behaviors that are similar to when you schedule SSIS packages by using SQL Server Agent in your on-premises environment. |
data-factory | How To Invoke Ssis Package Ssdt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssdt.md | Last updated 08/09/2022 # Execute SSIS packages in Azure from SSDT This article describes the feature of Azure-enabled SQL Server Integration Services (SSIS) projects on SQL Server Data Tools (SSDT). It allows you to assess the cloud compatibility of your SSIS packages and run them on Azure-SSIS Integration Runtime (IR) in Azure Data Factory (ADF). You can use this feature to test your existing packages before you lift & shift/migrate them to Azure or to develop new packages to run in Azure. |
data-factory | How To Invoke Ssis Package Ssis Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity.md | Last updated 08/09/2022 # Run an SSIS package with the Execute SSIS Package activity in Azure portal This article describes how to run a SQL Server Integration Services (SSIS) package in an Azure Data Factory pipeline by using the Execute SSIS Package activity in Azure Data Factory and Synapse Pipelines portal. |
data-factory | How To Invoke Ssis Package Stored Procedure Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-stored-procedure-activity.md | |
data-factory | How To Schedule Azure Ssis Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md | |
data-factory | How To Use Sql Managed Instance With Ir | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-sql-managed-instance-with-ir.md | Last updated 08/10/2022 # Use Azure SQL Managed Instance with SQL Server Integration Services (SSIS) in Azure Data Factory or Azure Synapse Analytics You can now move your SQL Server Integration Services (SSIS) projects, packages, and workloads to the Azure cloud. Deploy, run, and manage SSIS projects and packages on Azure SQL Database or SQL Managed Instance with familiar tools such as SQL Server Management Studio (SSMS). This article highlights the following specific areas when using Azure SQL Managed Instance with Azure-SSIS integration runtime (IR): |
data-factory | Join Azure Ssis Integration Runtime Virtual Network Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-powershell.md | |
data-factory | Join Azure Ssis Integration Runtime Virtual Network Ui | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-ui.md | |
data-factory | Join Azure Ssis Integration Runtime Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network.md | |
data-factory | Manage Azure Ssis Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/manage-azure-ssis-integration-runtime.md | |
data-factory | Scenario Ssis Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-overview.md | Last updated 08/18/2022 # Migrate on-premises SSIS workloads to SSIS in ADF or Synapse Pipelines ## Overview |
data-factory | Scenario Ssis Migration Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-rules.md | Last updated 08/18/2022 # SSIS migration assessment rules When planning a migration of on-premises SSIS to SSIS in Azure Data Factory (ADF) or Synapse Pipelines, assessment will help identify issues with the source SSIS packages that would prevent a successful migration. |
data-factory | Self Hosted Integration Runtime Proxy Ssis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md | Last updated 08/18/2022 # Configure a self-hosted IR as a proxy for an Azure-SSIS IR This article describes how to run SQL Server Integration Services (SSIS) packages on an Azure-SSIS Integration Runtime (Azure-SSIS IR) with a self-hosted integration runtime (self-hosted IR) configured as a proxy. |
data-factory | Ssis Azure Connect With Windows Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-azure-connect-with-windows-auth.md | |
data-factory | Ssis Azure Files File Shares | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-azure-files-file-shares.md | |
data-factory | Ssis Integration Runtime Diagnose Connectivity Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-diagnose-connectivity-faq.md | Last updated 09/22/2022 # Use the diagnose connectivity feature in the SSIS integration runtime You might find connectivity problems while executing SQL Server Integration Services (SSIS) packages in the SSIS integration runtime. These problems occur especially if your SSIS integration runtime joins the Azure virtual network. |
data-factory | Ssis Integration Runtime Management Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-management-troubleshoot.md | Last updated 09/22/2022 # Troubleshoot SSIS Integration Runtime management This article provides troubleshooting guidance for management issues in Azure-SQL Server Integration Services (SSIS) Integration Runtime (IR), also known as SSIS IR. |
data-factory | Ssis Integration Runtime Ssis Activity Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-ssis-activity-faq.md | Last updated 09/22/2022 # Troubleshoot package execution in the SSIS integration runtime This article includes the most common errors that you might find when you're executing SQL Server Integration Services (SSIS) packages in the SSIS integration runtime. It describes the potential causes and actions to solve the errors. |
data-factory | Tutorial Deploy Ssis Packages Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md | |
data-factory | Tutorial Deploy Ssis Packages Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure.md | |
databox-online | Azure Stack Edge Gpu Deploy Configure Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md | |
databox-online | Azure Stack Edge Pro 2 Deploy Configure Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md | |
defender-for-iot | Cli Ot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md | Version: 22.2.5.9-r-2121448 #### Update sensor software from CLI -For more information, see [Update your sensors](update-ot-software.md#update-your-sensors). +For more information, see [Update your sensors](update-ot-software.md#update-ot-sensors). ### Date, time, and NTP #### Show current system date/time |
defender-for-iot | How To Activate And Set Up Your On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-on-premises-management-console.md | After activating an on-premises management console, you'll need to apply new activation files as follows: |Location |Activation process | ||| |**On-premises management console** | Apply a new activation file on your on-premises management console if you've [modified the number of committed devices](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks) in your subscription. |-|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>However, you'll also need to apply a new activation file when [updating your sensor software](update-ot-software.md#download-and-apply-a-new-activation-file) from a legacy version to version 22.2.x. | +|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software), you'll need to re-activate your updated sensor. | | **Locally-managed** | Apply a new activation file to locally managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. | For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md). |
defender-for-iot | How To Activate And Set Up Your Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md | After activating a sensor, you'll need to apply new activation files as follows: |Location |Activation process | |||-|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>However, you'll also need to apply a new activation file when [updating your sensor software](update-ot-software.md#download-and-apply-a-new-activation-file) from a legacy version to version 22.2.x. | +|**Cloud-connected sensors** | Cloud-connected sensors remain activated for as long as your Azure subscription with your Defender for IoT plan is active. <br><br>If you're [updating an OT sensor from a legacy version](update-ot-software.md#update-legacy-ot-sensor-software), you'll need to re-activate your updated sensor. | | **Locally managed** | Apply a new activation file to locally managed sensors every year. After a sensor's activation file has expired, the sensor will continue to monitor your network, but you'll see a warning message when signing in to the sensor. | For more information, see [Manage Defender for IoT subscriptions](how-to-manage-subscriptions.md) and [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). |
defender-for-iot | How To Manage Individual Sensors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md | If there are any connection issues, a disconnection message is shown in the **Ge :::image type="content" source="media/how-to-manage-individual-sensors/system-messages.png" alt-text="Screenshot of the system messages pane." lightbox="media/how-to-manage-individual-sensors/system-messages.png"::: ++## Download software for OT sensors ++You may need to download software for your OT sensor if you're [installing Defender for IoT software](ot-deploy/install-software-ot-sensor.md) on your own appliances, or [updating software versions](update-ot-software.md). ++In Defender for IoT in the Azure portal, use one of the following options: ++- For a new installation, select **Getting started** > **Sensor**. Select a version in the **Purchase an appliance and install software** area, and then select **Download**. ++- If you're updating your OT sensor, use the options in the **Sites and sensors** page > **Sensor update (Preview)** menu. +++For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). + ## Manage sensor activation files Your sensor was onboarded with Microsoft Defender for IoT from the Azure portal. Each sensor was onboarded as either a locally connected sensor or a cloud-connected sensor. |
defender-for-iot | How To Manage Sensors From The On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-from-the-on-premises-management-console.md | You can define the following sensor system settings from the management console: 1. Select **Save**. -## Update threat intelligence packages --The data package for threat intelligence is provided with each new Defender for IoT version, or if needed between releases. The package contains signatures (including malware signatures), CVEs, and other security content. --You can manually upload this file in the Azure portal and automatically update it to sensors. ---**To update the threat intelligence data:** --1. Go to the Defender for IoT **Updates** page. --1. Download and save the file. --1. Sign in to the management console. --1. On the side menu, select **System Settings**. --1. Select the sensors that should receive the update in the **Sensor Engine Configuration** section. --1. In the **Select Threat Intelligence Data** section, select the plus sign (**+**). --1. Upload the package that you downloaded from the Defender for IoT **Updates** page. - ## Understand sensor disconnection events The **Site Manager** window displays disconnection information if sensors disconnect from their assigned on-premises management console. The following sensor disconnection information is available: For more information, see: - [Track sensor activity](how-to-track-sensor-activity.md) - [Update OT system software](update-ot-software.md) - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)-- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)+- [Maintain threat intelligence packages on OT network sensors](how-to-work-with-threat-intelligence-packages.md) - [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md) |
defender-for-iot | How To Manage Sensors On The Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md | Details about each sensor are listed in the following columns: |**Sensor health**| Displays a [sensor health message](sensor-health-messages.md). For more information, see [Understand sensor health](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health).| |**Last connected (UTC)**| Displays how long ago the sensor was last connected.| |**Threat Intelligence version**| Displays the [Threat Intelligence version](how-to-work-with-threat-intelligence-packages.md) installed on an OT sensor. The name of the version is based on the day the package was built by Defender for IoT. |-|**Threat Intelligence mode**| Displays whether the Threat Intelligence update mode is manual or automatic. If it's manual that means that you can [push newly released packages directly to sensors](how-to-work-with-threat-intelligence-packages.md) as needed. Otherwise, the new packages will be automatically installed on all OT, cloud-connected sensors. | +|**Threat Intelligence mode**| Displays whether the Threat Intelligence update mode is manual or automatic. If it's manual that means that you can [push newly released packages directly to sensors](how-to-work-with-threat-intelligence-packages.md) as needed. Otherwise, the new packages are automatically installed on all OT, cloud-connected sensors. | |**Threat Intelligence update status**| Displays the update status of the Threat Intelligence package on an OT sensor. The status can be either **Failed**, **In Progress**, **Update Available**, or **Ok**.| ## Site management options from the Azure portal Use the options on the **Sites and sensor** page and a sensor details page to do |Task |Description | |||-|:::image type="icon" source="medi). | +| :::image type="icon" source="medi). | +|:::image type="icon" source="medi). | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit automatic threat intelligence updates** | Individual, OT sensors only. <br><br>Available from the **...** options menu or a sensor details page. <br><br>Select **Edit** and then toggle the **Automatic Threat Intelligence Updates (Preview)** option on or off as needed. Select **Submit** to save your changes. |-|:::image type="icon" source="medi#download-and-apply-a-new-activation-file) | +|:::image type="icon" source="medi#update-legacy-ot-sensor-software). | ### Sensor deployment and access Use the options on the **Sites and sensor** page and a sensor details page to do |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-edit.png" border="false"::: **Edit a sensor zone** | For individual sensors only, from the **...** options menu or a sensor details page. <br><br>Select **Edit**, and then select a new zone from the **Zone** menu or select **Create new zone**. Select **Submit** to save your changes. | | **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up OT sensor health monitoring via SNMP](how-to-set-up-snmp-mib-monitoring.md).| |:::image type="icon" source="medi#install-enterprise-iot-sensor-software). |-|<a name="endpoint"></a> **Download endpoint details** (Public preview) | Available from the **Sites and sensors** toolbar **More actions** menu, for OT sensor versions 22.x only. 
<br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. | -| **Define OT network sensor settings** (Preview) | Define selected sensor settings for one or more cloud-connected OT network sensors. For more information, see [Define and view OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md). <br><br>Other settings are also available directly from the [OT sensor console](how-to-manage-individual-sensors.md), or the [on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md).| +|<a name="endpoint"></a> **Download endpoint details** (Public preview) | OT sensors only, with versions 22.x and higher only.<br><br>Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. | ### Sensor maintenance and troubleshooting |Task |Description | |||+| :::image type="icon" source="medi).| |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-export.png" border="false"::: **Export sensor data** | Available from the **Sites and sensors** toolbar only, to download a CSV file with details about all the sensors listed. | |:::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-delete.png" border="false"::: **Delete a sensor** | For individual sensors only, from the **...** options menu or a sensor details page. | | :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support](#upload-a-diagnostics-log-for-support).| ## Retrieve forensics data stored on the sensor -Use Azure Monitor workbooks on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data is stored locally on OT sensors, for devices detected by that sensor: +Use Azure Monitor workbooks on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data are stored locally on OT sensors, for devices detected by that sensor: - Device data - Alert data Use Azure Monitor workbooks on an OT network sensor to retrieve forensic data fr - Event timeline data - Log files -Each type of data has a different retention period and maximum capacity.
For more information, see [Visualize Microsoft Defender for IoT data with Azure Monitor workbooks](workbooks.md) and [Data retention across Microsoft Defender for IoT](references-data-retention.md). ## Reactivate an OT sensor This procedure describes how to view sensor health data from the Azure portal. S - Sensor fails regular sanity tests - No traffic detected by the sensor - Sensor software version is no longer supported- - A [remote sensor upgrade from the Azure portal](update-ot-software.md#update-your-sensors) fails + - A [remote sensor upgrade from the Azure portal](update-ot-software.md#update-ot-sensors) fails For more information, see our [Sensor health message reference](sensor-health-messages.md). |
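The downloaded endpoint list that this row describes lends itself to a quick connectivity smoke test from the sensor's network segment. A hedged sketch in Windows PowerShell follows: the CSV file name and its `Endpoint` column header are assumptions about the exported file's layout, so adjust them to match the actual download.

```powershell
# Sketch: check that each endpoint in the downloaded list answers on TCP 443.
# The file name and "Endpoint" column are assumptions -- match them to the
# actual exported file before running.
$endpoints = Import-Csv ".\sensor-endpoints.csv"
foreach ($e in $endpoints) {
    $test = Test-NetConnection -ComputerName $e.Endpoint -Port 443 `
        -WarningAction SilentlyContinue
    $state = if ($test.TcpTestSucceeded) { "reachable" } else { "BLOCKED" }
    "{0,-60} {1}" -f $e.Endpoint, $state
}
```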
defender-for-iot | How To Manage The On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md | -You onboard the on-premises management console from the Azure portal. - ## Download software for the on-premises management console -You may need to download software for your on-premises management console if you're installing Defender for IoT software on your own appliances, or updating software versions. +You may need to download software for your on-premises management console if you're [installing Defender for IoT software](ot-deploy/install-software-on-premises-management-console.md) on your own appliances, or [updating software versions](update-ot-software.md). -**To download on-premises management console software**: +In Defender for IoT in the Azure portal, use one of the following options: -1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **On-premises management console** or **Updates**. +- For a new installation or standalone update, select **Getting started** > **On-premises management console**. -1. Select **Download** for your on-premises management console software update. Save your `management-secured-patcher-<version>.tar` file locally. For example: + - For a new installation, select a version in the **Purchase an appliance and install software** area, and then select **Download**. + - For an update, select your update scenario in the **On-premises management console** area and then select **Download**. - :::image type="content" source="media/update-ot-software/on-premises-download.png" alt-text="Screenshot of the Download option for the on-premises management console." lightbox="media/update-ot-software/on-premises-download.png"::: +- If you're updating your on-premises management console together with connected OT sensors, use the options in the **Sites and sensors** page > **Sensor update (Preview)** menu. [!INCLUDE [root-of-trust](includes/root-of-trust.md)] +For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md#update-an-on-premises-management-console). ## Upload an activation file When you first sign in, an activation file for the on-premises management console is downloaded. This file contains the aggregate committed devices that are defined during the onboarding process. The list includes sensors associated with multiple subscriptions. |
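The root-of-trust include referenced in this row concerns verifying downloaded files before installing them. As a minimal sketch of that check, assuming a SHA-256 value is published alongside the download (the file name and expected hash below are placeholders):

```powershell
# Sketch: verify a downloaded update package against its published SHA-256
# value before installing it. File name and expected hash are placeholders.
$file         = ".\management-secured-patcher-22.2.x.tar"
$expectedHash = "<SHA-256 value published with the download>"

$actualHash = (Get-FileHash -Path $file -Algorithm SHA256).Hash
if ($actualHash -eq $expectedHash) {
    "Package integrity verified."
} else {
    Write-Error "Hash mismatch -- discard and re-download the package."
}
```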
defender-for-iot | How To Work With Threat Intelligence Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md | Title: Update threat intelligence data -description: The threat intelligence data package is provided with each new Defender for IoT version, or if needed between releases. Previously updated : 11/16/2022+ Title: Maintain threat intelligence packages on OT network sensors - Microsoft Defender for IoT +description: Learn how to maintain threat intelligence packages on OT network sensors. Last updated : 02/09/2023 -# Threat intelligence research and packages -## Overview -Security teams at Microsoft carry out proprietary ICS threat intelligence and vulnerability research. These teams include MSTIC (Microsoft Threat Intelligence Center), DART (Microsoft Detection and Response Team), DCU (Digital Crimes Unit), and Section 52 (IoT/OT/ICS domain experts that track ICS-specific zero-days, reverse-engineering malware, campaigns, and adversaries) +# Maintain threat intelligence packages on OT network sensors -The teams provide security detection, analytics, and response to Microsoft's: +Microsoft security teams continually run proprietary ICS threat intelligence and vulnerability research. Security research provides security detection, analytics, and response to Microsoft's cloud infrastructure and services, traditional products and devices, and internal corporate resources. -- Cloud infrastructure and services.-- Traditional products and devices.-- Internal corporate resources.+Microsoft Defender for IoT regularly delivers threat intelligence package updates for OT network sensors, providing increased protection from known and relevant threats and insights that can help your teams triage and prioritize alerts. -Security teams gain the benefit of: +Threat intelligence packages contain signatures, such as malware signatures, CVEs, and other security content. -- Protection from known and relevant threats.-- Insights that help you triage and prioritize.-- An understanding of the full context of threats before they're affected.-- More relevant, accurate, and actionable data.+> [!TIP] +> We recommend ensuring that your OT network sensors always have the latest threat intelligence package installed so that you always have the full context of a threat before an environment is affected, and increased relevancy, accuracy, and actionable recommendations. +> +> Announcements about new packages are available from our [TechCommunity blog](https://techcommunity.microsoft.com/t5/azure-defender-for-iot/bd-p/AzureDefenderIoT). -This intelligence provides contextual information to enrich Microsoft platform analytics and supports the company's managed services for incident response and breach investigation. Threat intelligence packages contain signatures (including malware signatures), CVEs, and other security content. +## Permissions -## When are packages delivered +To perform the procedures in this article, make sure that you have: -Threat intelligence packages are provided approximately once a month, or if needed more frequently. Announcements about new packages are available from: https://techcommunity.microsoft.com/t5/azure-defender-for-iot/bd-p/AzureDefenderIoT. +- One or more OT sensors [onboarded](onboard-sensors.md) to Azure. -You can also see the most current package delivered from the **Threat intelligence update** section of the **Updates** page on Defender for IoT in the Azure portal.
+- Relevant permissions on the Azure portal and any OT network sensors or on-premises management console you want to update. -## Update threat intelligence packages to your sensors + - **To download threat intelligence packages from the Azure portal**, you need access to the Azure portal as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role. -Three options are available for updating threat intelligence packages to your sensors: + - **To push threat intelligence updates to cloud-connected OT sensors from the Azure portal**, you need access to Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role. -- Automatically push packages to sensors as they're delivered by Defender for IoT.-- Manually push threat intelligence package to sensors as required.-- Download a package and then upload it to a sensor or multiple sensors.+ - **To manually upload threat intelligence packages to OT sensors or on-premises management consoles**, you need access to the OT sensor or on-premises management console as an **Admin** user. -Users with Defender for IoT Security Reader permissions can automatically and manually push packages to sensors. +For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). -### Automatically push threat intelligence updates to sensors -Threat intelligence packages can be automatically updated to *cloud connected* sensors as they're released by Defender for IoT. Ensure automatic package update by onboarding your cloud connected sensor with the **Automatic Threat Intelligence Updates** option enabled. For more information, see [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor). +## View the most recent threat intelligence package -### Manually push threat intelligence updates to sensors +To view the most recent package delivered, in the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**. -Your *cloud connected* sensors can be automatically updated with threat intelligence packages. However, if you would like to take a more conservative approach, you can push packages from Defender for IoT to sensors only when you feel it's required. This gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors. +Details about the most recent package available are shown in the **Sensor TI update** pane. For example: -**To manually push packages:** -1. Go to the Microsoft Defender for IoT **Sites and Sensors** page. -1. Select the ellipsis (...) for a sensor and then select **Push Threat Intelligence update**. The **Threat Intelligence update status** field displays the update progress. +## Update threat intelligence packages -#### Change the threat intelligence update mode +Update threat intelligence packages on your OT sensors using any of the following methods: -You can change the sensor threat intelligence update mode after initial onboarding. 
+- [Have updates pushed](#automatically-push-updates-to-cloud-connected-sensors) to cloud-connected OT sensors automatically as they're released +- [Manually push](#manually-push-updates-to-cloud-connected-sensors) updates to cloud-connected OT sensors +- [Download an update package](#manually-update-locally-managed-sensors) and manually upload it to your OT sensor. Alternately, upload the package to an on-premises management console and push the updates from there to any connected OT sensors. -**To change the update mode:** +### Automatically push updates to cloud-connected sensors -1. Select the ellipsis (...) for a sensor and then select **Edit**. -1. Enable or disable the **Automatic Threat Intelligence Updates** toggle. +Threat intelligence packages can be automatically updated to *cloud connected* sensors as they're released by Defender for IoT. -### Download packages and upload to sensors +Ensure automatic package update by onboarding your cloud connected sensor with the **Automatic Threat Intelligence Updates** option enabled. For more information, see [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor). -Packages can be downloaded the Azure portal and manually uploaded to individual sensors. If the on-premises management console manages your sensors, you can download threat intelligence packages to the management console and push them to multiple sensors simultaneously. +**To change the update mode after you've onboarded your OT sensor**: +1. In Defender for IoT on the Azure portal, select **Sites and sensors**, and then locate the sensor you want to change. +1. Select the options (**...**) menu for the selected OT sensor > **Edit**. +1. Toggle on or toggle off the **Automatic Threat Intelligence Updates** option as needed. -This option is available for both *cloud connected* and *locally managed* sensors. +### Manually push updates to cloud-connected sensors +Your *cloud connected* sensors can be automatically updated with threat intelligence packages. However, if you would like to take a more conservative approach, you can push packages from Defender for IoT to sensors only when you feel it's required. Pushing updates manually gives you the ability to control when a package is installed, without the need to download and then upload it to your sensors. ++**To manually push updates to a single OT sensor**: ++1. In Defender for IoT on the Azure portal, select **Sites and sensors**, and locate the OT sensor you want to update. +1. Select the options (**...**) menu for the selected sensor and then select **Push Threat Intelligence update**. ++The **Threat Intelligence update status** field displays the update progress. ++**To manually push updates to multiple OT sensors**: ++1. In Defender for IoT on the Azure portal, select **Sites and sensors**. Locate and select the OT sensors you want to update. +1. Select **Threat intelligence updates (Preview)** > **Remote update**. ++The **Threat Intelligence update status** field displays the update progress for each selected sensor. ++### Manually update locally managed sensors ++If you're working with locally managed OT sensors, you need to download the updated threat intelligence packages and upload them manually on your sensors. ++If you're also working with an on-premises management console, we recommend that you upload the threat intelligence package to the on-premises management console and push the update from there. 
-**To upload to a single sensor:** +> [!TIP] +> This option can also be used for cloud-connected sensors if you don't want to push the updates from the Azure portal. +> -1. In Defender for IoT on the Azure portal, go to the **Get started** > **Updates** tab. +**To download threat intelligence packages**: -1. In the **Sensor threat intelligence update** box, select **Download file** to download the latest threat intelligence package. +1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**. -1. Sign in to the sensor console, and then select **System settings** > **Threat intelligence**. +1. In the **Sensor TI update** pane, select **Download** to download the latest threat intelligence file. For example: +++**To update a single sensor:** ++1. Sign into your OT sensor and then select **System settings** > **Threat intelligence**. 1. In the **Threat intelligence** pane, select **Upload file**. For example: This option is available for both *cloud connected* and *locally managed* sensor 1. Browse to and select the package you'd downloaded from the Azure portal and upload it to the sensor. -**To upload to multiple sensors simultaneously:** --1. In Defender for IoT on the Azure portal, go to the **Get started** > **Updates** tab. +**To update multiple sensors simultaneously:** -1. In the **Sensor threat intelligence update** box, select **Download file** to download the latest threat intelligence package. --1. Sign in to the management console and select **System settings**. +1. Sign in to your on-premises management console and select **System settings**. 1. In the **Sensor Engine Configuration** area, select the sensors that you want to receive the updated packages. For example: This option is available for both *cloud connected* and *locally managed* sensor :::image type="content" source="media/how-to-work-with-threat-intelligence-packages/save-changes-management-console.png" alt-text="Screenshot of where you can save changes made to selected sensors on the management console." lightbox="media/how-to-work-with-threat-intelligence-packages/save-changes-management-console.png"::: -## Review package update status on the sensor --The package update status and version information are displayed in the sensor **System Settings**, **Threat Intelligence** section. --## Review package information for cloud connected sensors --Review the following information about threat intelligence packages for your cloud connected sensors: --- Package version installed-- Threat intelligence update mode-- Threat intelligence update status+## Review threat intelligence update statuses -**To review threat intelligence information**: On each OT sensor, the threat intelligence update status and version information are shown in the sensor's **System settings > Threat intelligence** settings. -1. Go to the Microsoft Defender for IoT **Sites and Sensors** page. +For cloud-connected OT sensors, threat intelligence data is also shown in the **Sites and sensors** page. To view threat intelligence statuses from the Azure portal: -1. Review the **Threat Intelligence version** installed on each sensor. Version naming is based on the day the package was built by Defender for IoT. +1. In Defender for IoT on the Azure portal, select **Sites and sensors**. -1. Review the **Threat Intelligence mode** . *Automatic* indicates that newly available packages will be automatically installed on sensors as they're released by Defender for IoT. +1. Locate the OT sensors where you want to check the threat intelligence statuses. - *Manual* indicates that you can push newly available packages directly to sensors as needed. +1. Note the values of the following columns for your OT sensors: -1. Review the **Threat Intelligence update status**. The following statuses may be displayed: + |Column name |Description | + ||| + |**Threat Intelligence version** | Version naming is based on the day the package was built by Defender for IoT. | + |**Threat Intelligence mode** | *Automatic* indicates that newly available packages will be automatically installed on sensors as they're released by Defender for IoT. <br><br>*Manual* indicates that you can push newly available packages directly to sensors as needed. | + |**Threat Intelligence update status** | Shows one of the following statuses: <br> - **Failed**<br> - **In Progress**<br> - **Update Available**<br> - **Ok** | - - Failed - - In Progress - - Update Available - - Ok +> [!TIP] +> If a cloud-connected OT sensor shows that a threat intelligence update has failed, we recommend that you check your sensor connection details. On the **Sites and sensors** page, check the **Sensor status** and **Last connected UTC** columns. -If cloud connected threat intelligence updates fail, review connection information in the **Sensor status** and **Last connected UTC** columns in the **Sites and Sensors** page. ## Next steps |
defender-for-iot | Onboard Sensors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/onboard-sensors.md | Onboard an OT sensor by registering it with Microsoft Defender for IoT and downl If you haven't yet upgraded to version 22.x, see [Update Defender for IoT OT monitoring software](update-ot-software.md). - 1. In the **Site** section, select the **Resource name** and enter the **Display name** for your site. Add any tags as needed to help you identify your sensor. + 1. In the **Site** section, select the **Resource name** and enter an extra **Display name** to show for your site in the Azure portal. Add any tags as needed to help you identify your sensor. 1. In the **Zone** field, select a zone from the menu, or select **Create Zone** to create a new one. |
defender-for-iot | Install Software On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-on-premises-management-console.md | For more information, see: Download on-premises management console software from Defender for IoT in the Azure portal. -On the Defender for IoT > **Getting started** page, select the **On-premises management console** or **Updates** tab and locate the software you need. +Select **Getting started** > **On-premises management console** and select the software version you want to download. -If you're updating from a previous version, check the options carefully to ensure that you have the correct update path for your situation. +> [!IMPORTANT] +> If you're updating software from a previous version, alternately use the options from the **Sites and sensors** > **Sensor update (Preview)** menu. Use this option especially when you're updating your on-premises management console together with connected OT sensors. For more information, see [Update Defender for IoT OT monitoring software](../update-ot-software.md). ## Install on-premises management console software |
defender-for-iot | Install Software Ot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-ot-sensor.md | For more information, see: Download the OT sensor software from Defender for IoT in the Azure portal. -On the Defender for IoT > **Getting started** page, select the **Sensor** or **Updates** tab and locate the software you need. +Select **Getting started** > **Sensor** and select the software version you want to download. -If you're updating from a previous version, check the options carefully to ensure that you have the correct update path for your situation. +> [!IMPORTANT] +> If you're updating software from a previous version, use the options from the **Sites and sensors** > **Sensor update** menu. For more information, see [Update Defender for IoT OT monitoring software](../update-ot-software.md). ## Install Defender for IoT software on OT sensors |
defender-for-iot | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md | This version includes the following new updates and fixes: This version includes the following new updates and fixes: - [Define and view OT sensor settings from the Azure portal](configure-sensor-settings-portal.md)-- [Update your sensors from the Azure portal](update-ot-software.md#update-your-sensors)+- [Update your sensors from the Azure portal](update-ot-software.md#update-ot-sensors) - [New naming convention for hardware profiles](ot-appliance-sizing.md) - [PCAP access from the Azure portal](how-to-manage-cloud-alerts.md) - [Bi-directional alert synch between OT sensors and the Azure portal](alerts.md#managing-ot-alerts-in-a-hybrid-environment) |
defender-for-iot | Roles Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md | Roles for management actions are applied to user roles across an entire Azure subscription. | Action and scope|[Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) |[Security Admin](../../role-based-access-control/built-in-roles.md#security-admin) |[Contributor](../../role-based-access-control/built-in-roles.md#contributor) | [Owner](../../role-based-access-control/built-in-roles.md#owner) | ||||||-| **Grant permissions to others**<br>Apply per subscription or site | - | - | - | ✔ | -| **Onboard OT or Enterprise IoT sensors** [*](#enterprise-iot-security) <br>Apply per subscription only | - | ✔ | ✔ | ✔ | -| **Download OT sensor and on-premises management console software**<br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | -| **Download sensor endpoint details** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | -| **Download sensor activation files** <br>Apply per subscription only| - | ✔ | ✔ | ✔ | -| **View values on the Plans and pricing page** [*](#enterprise-iot-security) <br>Apply per subscription only| ✔ | ✔ | ✔ | ✔ | -| **Modify values on the Plans and pricing page** [*](#enterprise-iot-security) <br>Apply per subscription only| - | ✔ | ✔ | ✔ | -| **View values on the Sites and sensors page** [*](#enterprise-iot-security)<br>Apply per subscription only | ✔ | ✔ | ✔ | ✔| -| **Modify values on the Sites and sensors page** [*](#enterprise-iot-security)<br>Apply per subscription only | - | ✔ | ✔ | ✔| -| **Recover on-premises management console passwords** <br>Apply per subscription only | - | ✔ | ✔ | ✔ | -| **Download OT threat intelligence packages** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | -| **Push OT threat intelligence updates** <br>Apply per subscription only | - | ✔ | ✔ | ✔ | -| **Onboard an Enterprise IoT plan from Microsoft 365 Defender** [*](#enterprise-iot-security)<br>Apply per subscription only | - | ✔ | - | - | -| **View Azure alerts** <br>Apply per subscription or site | ✔ | ✔ |✔ | ✔| -| **Modify Azure alerts (write access - change status, learn, download PCAP)** <br>Apply per subscription or site| - | ✔ |✔ | ✔ | -| **View Azure device inventory** <br>Apply per subscription or site | ✔ | ✔ |✔ | ✔| -| **Manage Azure device inventory (write access)** <br>Apply per subscription or site | - | ✔ |✔ | ✔ | -| **View Azure workbooks**<br>Apply per subscription or site | ✔ | ✔ |✔ | ✔ | -| **Manage Azure workbooks (write access)** <br>Apply per subscription or site | - | ✔ |✔ | ✔ | +| **[Grant permissions to others](manage-users-portal.md)**<br>Apply per subscription or site | - | - | - | ✔ | +| **Onboard [OT](onboard-sensors.md) or [Enterprise IoT sensors](eiot-sensor.md)** [*](#enterprise-iot-security) <br>Apply per subscription only | - | ✔ | ✔ | ✔ | +| **[Download OT sensor and on-premises management console software](update-ot-software.md#download-the-update-package-from-the-azure-portal)**<br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | +| **[Download sensor endpoint details](how-to-manage-sensors-on-the-cloud.md#endpoint)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | +| **[Download sensor activation files](how-to-manage-sensors-on-the-cloud.md#reactivate-an-ot-sensor)** <br>Apply per subscription only| - | ✔ | ✔ | ✔ | +| **[View values on the Plans and pricing page](how-to-manage-subscriptions.md)** [*](#enterprise-iot-security) <br>Apply per subscription only| ✔ | ✔ | ✔ | ✔ | +| **[Modify values on the Plans and pricing page](how-to-manage-subscriptions.md#edit-a-plan-for-ot-networks)** [*](#enterprise-iot-security) <br>Apply per subscription only| - | ✔ | ✔ | ✔ | +| **[View values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md)** [*](#enterprise-iot-security)<br>Apply per subscription only | ✔ | ✔ | ✔ | ✔| +| **[Modify values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal)** [*](#enterprise-iot-security), including remote OT sensor updates<br>Apply per subscription only | - | ✔ | ✔ | ✔| +| **[Recover on-premises management console passwords](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ | +| **[Download OT threat intelligence packages](how-to-work-with-threat-intelligence-packages.md#manually-update-locally-managed-sensors)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | +| **[Push OT threat intelligence updates](how-to-work-with-threat-intelligence-packages.md#manually-push-updates-to-cloud-connected-sensors)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ | +| **[Onboard an Enterprise IoT plan from Microsoft 365 Defender](eiot-defender-for-endpoint.md)** [*](#enterprise-iot-security)<br>Apply per subscription only | - | ✔ | - | - | +| **[View Azure alerts](how-to-manage-cloud-alerts.md)** <br>Apply per subscription or site | ✔ | ✔ |✔ | ✔| +| **[Modify Azure alerts](how-to-manage-cloud-alerts.md) (write access - change status, learn, download PCAP)** <br>Apply per subscription or site| - | ✔ |✔ | ✔ | +| **[View Azure device inventory](how-to-manage-device-inventory-for-organizations.md)** <br>Apply per subscription or site | ✔ | ✔ |✔ | ✔| +| **[Manage Azure device inventory](how-to-manage-device-inventory-for-organizations.md) (write access)** <br>Apply per subscription or site | - | ✔ |✔ | ✔ | +| **[View Azure workbooks](workbooks.md)**<br>Apply per subscription or site | ✔ | ✔ |✔ | ✔ | +| **[Manage Azure workbooks](workbooks.md) (write access)** <br>Apply per subscription or site | - | ✔ |✔ | ✔ | | **[View Defender for IoT settings](configure-sensor-settings-portal.md)** <br>Apply per subscription | ✔ | ✔ |✔ | ✔ | | **[Configure Defender for IoT settings](configure-sensor-settings-portal.md)** <br>Apply per subscription | - | ✔ |✔ | ✔ | |
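Most of the write operations in the table above require at least the Security Admin role, so granting that role at subscription scope is a common first setup step. A minimal sketch with the Az PowerShell module; the sign-in name and subscription ID are placeholders:

```powershell
# Sketch: assign the Security Admin role at subscription scope so a user can
# push threat intelligence updates, modify plans, and manage sensors.
# Sign-in name and subscription ID are placeholders.
New-AzRoleAssignment `
    -SignInName "user@contoso.com" `
    -RoleDefinitionName "Security Admin" `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```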
defender-for-iot | Update Ot Software | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md | Title: Update Defender for IoT OT monitoring software versions description: Learn how to update (upgrade) Defender for IoT software on OT sensors and on-premises management servers. Previously updated : 01/10/2023 Last updated : 02/14/2023 -You can purchase preconfigured appliances for your sensors and on-premises management consoles, or install software on your own hardware machines. In either case, you'll need to update software versions to use new features for OT sensors and on-premises management consoles. +You can purchase pre-configured appliances for your sensors and on-premises management consoles, or install software on your own hardware machines. In either case, you'll need to update software versions to use new features for OT sensors and on-premises management consoles. For more information, see [Which appliances do I need?](ot-appliance-sizing.md), [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md), and [OT monitoring software release notes](release-notes.md). -## Legacy version updates vs. recent version updates --When downloading your update files from the Azure portal, you'll see the option to download different files for different types of updates. Update files differ depending on the version you're updating from and updating to. --Make sure to select the file that matches your upgrade scenario. --Updates from legacy versions may require a series of software updates: If you still have a sensor version 3.1.1 installed, you'll need to first upgrade to version 10.5.5, and then to a 22.x version. For example: ---## Verify network requirements +> [!NOTE] +> Update files are available for [currently supported versions](release-notes.md) only. If you have OT network sensors with legacy software versions that are no longer supported, open a support ticket to access the relevant files for your update. -- Make sure that your sensors can reach the Azure data center address ranges and set up any extra resources required for the connectivity method your organization is using. - For more information, see [OT sensor cloud connection methods](architecture-connections.md) and [Connect your OT sensors to the cloud](connect-sensors.md). +## Prerequisites -## Update an on-premises management console +To perform the procedures described in this article, make sure that you have: -This procedure describes how to update Defender for IoT software on an on-premises management console, and is only relevant if your organization is using an on-premises management console to manage multiple sensors simultaneously. +- **A list of the OT sensors you'll want to update**, and the update methods you want to use. Each sensor that you want to update must be both [onboarded](onboard-sensors.md) to Defender for IoT and activated.
-## Update an on-premises management console + |Update scenario |Method details | + ||| + |**On-premises management console** | If the OT sensors you want to update are connected to an on-premises management console, plan to [update your on-premises management console](#update-the-on-premises-management-console) *before* updating your sensors.| + |**Cloud-connected sensors** | Cloud connected sensors can be updated remotely, directly from the Azure portal, or manually using a downloaded update package. <br><br>[Remote updates](#update-ot-sensors) require that your OT sensor have version [22.2.3](release-notes.md#2223) or later already installed. | + |**Locally-managed sensors** | Locally-managed sensors can be updated using a downloaded update package, either via a connected on-premises management console, or directly on an OT sensor console. | -This procedure describes how to update Defender for IoT software on an on-premises management console, and is only relevant if your organization is using an on-premises management console to manage multiple sensors simultaneously. +- **Required access permissions**: -In such cases, make sure to update your on-premises management consoles *before* you update software on your sensors. This process takes about 30 minutes. + - **To download update packages or push updates from the Azure portal**, you'll need access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), and [Owner](../../role-based-access-control/built-in-roles.md#owner) user. -> [!IMPORTANT] -> The software version on your on-premises management console must be equal to that of your most up-to-date sensor version. Each on-premises management console version is backwards compatible to older, supported sensor versions, but cannot connect to newer sensor versions. -> + - **To run updates on an OT sensor or on-premises management console**, you'll need access as an **Admin** user. -**To update on-premises management console software**: + - **To update an OT sensor via CLI**, you'll need access to the sensor as a [privileged user](roles-on-premises.md#default-privileged-on-premises-users). -1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Updates**. + For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). -1. Scroll down to the **On-premises management console** section, and select **Download** for the software update. Save your `management-secured-patcher-<version>.tar` file locally. For example: +## Verify network requirements - :::image type="content" source="media/update-ot-software/on-premises-download.png" alt-text="Screenshot of the Download option for the on-premises management console." lightbox="media/update-ot-software/on-premises-download.png"::: +- Make sure that your sensors can reach the Azure data center address ranges and set up any extra resources required for the connectivity method your organization is using. - Make sure to select the version for the update you're performing. For more information, see [Legacy version updates vs. recent version updates](#legacy-version-updates-vs-recent-version-updates). + For more information, see [OT sensor cloud connection methods](architecture-connections.md) and [Connect your OT sensors to the cloud](connect-sensors.md). 
- [!INCLUDE [root-of-trust](includes/root-of-trust.md)] +For more information, see [OT sensor cloud connection methods](architecture-connections.md) and [Connect your OT sensors to the cloud](connect-sensors.md). -1. On your on-premises management console, select **System Settings** > **Version Update**. +- Make sure that your firewall rules are configured as needed for the new version you're updating to. -1. In the **Upload File** dialog, select **BROWSE FILE** and then browse to and select the update file you'd downloaded from the Azure portal. + For example, the new version may require a new or modified firewall rule to support sensor access to the Azure portal. From the **Sites and sensors** page, select **More actions > Download sensor endpoint details** for the full list of endpoints required to access the Azure portal. - The update process starts, and may take about 30 minutes. During your upgrade, the system is rebooted twice. + For more information, see [Networking requirements](how-to-set-up-your-network.md#networking-requirements) and [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal). - Sign in when prompted and check the version number listed in the bottom-left corner to confirm that the new version is listed. +## Update OT sensors -## Update your sensors +This section describes how to update Defender for IoT OT sensors using any of the supported methods. -You can update software on your sensors individually, directly from each sensor console, or in bulk from the on-premises management console. Select one of the following tabs for the steps required in each method. +**Sending or downloading an update package** and **running the update** are two separate steps. Each step can be done one right after the other or at different times. -> [!NOTE] -> If you are updating from software versions earlier than [22.1.x](whats-new.md#update-to-version-221x), note that [version 22.1.x](release-notes.md#2223) has a large update with more complicated background processes. Expect this update to take more time than earlier updates have required. -> --### Prerequisites --If you're using an on-premises management console to manage your sensors, make sure to update your on-premises management console software *before* you update your sensor software. +For example, you might want to first send the update to your sensor or download an update package, and then have an administrator run the update later on, during a planned maintenance window. -For more information, see [Update an on-premises management console](#update-an-on-premises-management-console). --### Select an update method +If you're using an on-premises management console, make sure that you've [updated the on-premises management console](#update-the-on-premises-management-console) *before* updating any connected sensors. On-premises management software is backwards compatible, and can connect to sensors with earlier versions installed, but not later versions. If you update your sensor software before updating your on-premises management console, the updated sensor will be disconnected from the on-premises management console. -Select one of the following tabs, depending on how you've chosen to update your OT sensor software. --# [From the Azure portal (Public preview)](#tab/portal) +Select the update method you want to use: -# [Azure portal (Public preview)](#tab/portal) This procedure describes how to send a software version update to one or more OT sensors, and then run the updates remotely from the Azure portal. Bulk updates are supported for up to 10 sensors at a time.
-> [!TIP] -> Sending your version update and running the update process are two separate steps, which can be done one right after the other or at different times. +> [!IMPORTANT] +> If you're using an on-premises management console, make sure that you've [updated the on-premises management console](#update-the-on-premises-management-console) *before* updating any connected sensors. >-> For example, you might want to first send the update to your sensor and then an administrator to run the installation during a planned maintenance window. --**Prerequisites**: A cloud-connected sensor with a software version equal to or higher than [22.2.3](release-notes.md#2223), but not yet the latest version available. -**To send the software update to your OT sensor**: +### Send the software update to your OT sensor -1. In the Azure portal, go to **Defender for IoT** > **Sites and sensors** and identify the sensors that have legacy versions installed. +1. In Defender for IoT in the Azure portal, select **Sites and sensors** and then locate the OT sensors with legacy, but [supported versions](#prerequisites) installed. If you know your site and sensor name, you can browse or search for it directly. Alternately, filter the sensors listed to show only cloud-connected, OT sensors that have *Remote updates supported*, and have legacy software version installed. For example: :::image type="content" source="media/update-ot-software/filter-remote-update.png" alt-text="Screenshot of how to filter for OT sensors that are ready for remote update." lightbox="media/update-ot-software/filter-remote-update.png"::: -1. Select one or more sensors to update, and then select **Update (Preview)** > **Send package**. For a specific sensor, you can also access the **Send package** option from the **...** options menu to the right of the sensor row. For example: +1. Select one or more sensors to update, and then select **Sensor update (Preview)** > **Remote update** > **Step one: Send package to sensor**. - :::image type="content" source="media/update-ot-software/send-package.png" alt-text="Screenshot of the Send package option." lightbox="media/update-ot-software/send-package.png"::: + For an individual sensor, the **Step one: Send package to sensor** option is also available from the **...** options menu to the right of the sensor row. For example: -1. In the **Send package** pane that appears on the right, check to make sure that you're sending the correct software to the sensor you want to update. For more information, see [Legacy version updates vs. recent version updates](#legacy-version-updates-vs-recent-version-updates). + :::image type="content" source="media/update-ot-software/remote-update-step-1.png" alt-text="Screenshot of the Send package option." lightbox="media/update-ot-software/remote-update-step-1.png"::: - To jump to the release notes for the new version, select **Learn more** at the top of the pane. +1. In the **Send package** pane that appears on the right, check to make sure that you're sending the correct software to the sensor you want to update. To jump to the release notes for the new version, select **Learn more** at the top of the pane. 1. When you're ready, select **Send package**. The software transfer to your sensor machine is started, and you can see the progress in the **Sensor version** column. This procedure describes how to send a software version update to one or more OT Hover over the **Sensor version** value to see the source and target version for your update. 
-**To run your sensor update from the Azure portal**: +### Run your sensor update from the Azure portal
-When the **Sensor version** column for your sensors reads :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false"::: **Ready to update**, you're ready to run your update. +Run the sensor update only when you see the :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false"::: **Ready to update** icon in the **Sensor version** column.
-1. As in the previous step, either select multiple sensors that are ready to update, or select one sensor at a time. +1. Select one or more sensors to update, and then select **Sensor update (Preview)** > **Remote update** > **Step 2: Update sensor** from the toolbar.
-1. Select either **Update (Preview)** > **Update sensor** from the toolbar, or for an individual sensor, select the **...** options menu > **Update sensor**. For example: + For an individual sensor, the **Step 2: Update sensor** option is also available from the **...** options menu. For example:
- :::image type="content" source="media/update-ot-software/update-sensor.png" alt-text="Screenshot of the Update sensor option." lightbox="media/update-ot-software/update-sensor.png"::: + :::image type="content" source="media/update-ot-software/remote-update-step-2.png" alt-text="Screenshot of the Update sensor option." lightbox="media/update-ot-software/remote-update-step-2.png":::
1. In the **Update sensor (Preview)** pane that appears on the right, verify your update details.
If a sensor fails to update for any reason, the software reverts to the previous version installed, and a sensor health alert is triggered. For more information, see [Understand sensor health](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health) and [Sensor health message reference](sensor-health-messages.md).
-# [From an OT sensor UI](#tab/sensor) +# [OT sensor UI](#tab/sensor)
This procedure describes how to manually download the new sensor software version and then run your update directly on the sensor console's UI.
-**To update sensor software directly from the sensor UI**: +> [!IMPORTANT]
+> If your OT sensor is connected to an on-premises management console, make sure to update the on-premises management console before updating any connected sensors.
++### Download the update package from the Azure portal
++1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
++1. In the **Local update** pane, select the software version that's currently installed on your sensors.
++1. In the **Available versions** area of the **Local update** pane, select the version you want to download for your software update.
-1. In the Azure portal, go to **Defender for IoT** > **Getting started** > **Updates**. + The **Available versions** area lists all update packages available for your specific update scenario. You may have multiple options, but there will always be one specific version marked as **Recommended** for you. For example:
-1. From the **Sensors** section, select **Download** for the sensor update, and save your `<legacy/upstream>-sensor-secured-patcher-<version number>.tar` file locally. For example:
:::image type="content" source="media/update-ot-software/recommended-version.png" alt-text="Screenshot highlighting the recommended update version for the selected update scenario."
lightbox="media/update-ot-software/recommended-version.png":::
- :::image type="content" source="media/how-to-manage-individual-sensors/updates-page.png" alt-text="Screenshot of the Updates page of Defender for IoT." lightbox="media/how-to-manage-individual-sensors/updates-page.png"::: +1. Scroll down further in the **Local update** pane and select **Download** to download the update package.
- Make sure you're downloading the correct file for the update you're performing. For more information, see [Legacy version updates vs. recent version updates](#legacy-version-updates-vs-recent-version-updates). + The update package is downloaded with a file syntax name of `sensor-secured-patcher-<version number>.tar`, where `<version number>` is the version you're updating to. [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-1. On your sensor console, select **System Settings** > **Sensor management** > **Software Update**. +### Update the OT sensor software from the sensor UI
-1. On the **Software Update** pane on the right, select **Upload file**, and then navigate to and select your downloaded `legacy-sensor-secured-patcher-<Version number>.tar` file. For example: +1. Sign into your OT sensor and select **System Settings** > **Sensor management** > **Software Update**.
- :::image type="content" source="media/how-to-manage-individual-sensors/upgrade-pane-v2.png" alt-text="Screenshot of the Software Update pane on the sensor." lightbox="media/how-to-manage-individual-sensors/upgrade-pane-v2.png"::: +1. On the **Software Update** pane on the right, select **Upload file**, and then navigate to and select your downloaded update package.
- The update process starts, and may take about 30 minutes. During your upgrade, the system is rebooted twice. + :::image type="content" source="media/update-ot-software/sensor-upload-file.png" alt-text="Screenshot of the Software update pane on the OT sensor." lightbox="media/update-ot-software/sensor-upload-file.png":::
- Sign in when prompted, and then return to the **System Settings** > **Sensor management** > **Software Update** pane to confirm that the new version is listed. For example: + The update process starts, and may take about 30 minutes and include one or two reboots. If your machine reboots, make sure to sign in again as prompted.
- :::image type="content" source="media/how-to-manage-individual-sensors/defender-for-iot-version.png" alt-text="Screenshot of the upgrade version that appears after you sign in." lightbox="media/how-to-manage-individual-sensors/defender-for-iot-version.png"::: +# [On-premises management console](#tab/onprem)
-# [From an on-premises management console](#tab/onprem)
-This procedure describes how to update several sensors simultaneously from an on-premises management console. +This procedure describes how to update several OT sensors simultaneously from an on-premises management console.
+> [!IMPORTANT]
+> If you're updating multiple, locally-managed OT sensors, make sure to [update the on-premises management console](#update-an-on-premises-management-console) *before* you update any connected sensors.
+>
+>
+The software version on your on-premises management console must be equal to or greater than that of your most up-to-date sensor version. Each on-premises management console version is backwards compatible to older, supported sensor versions, but cannot connect to newer sensor versions.
+>
+### Download the update packages from the Azure portal
-1.
In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**. -If you're upgrading an on-premises management console and managed sensors, [first update the management console](#update-an-on-premises-management-console), and then update the sensors. +1. In the **Local update** pane, select the software version that's currently installed on your sensors. -The sensor update process won't succeed if you don't update the on-premises management console first. +1. Select the **Are you updating through a local manager** option, and then select the software version that's currently installed on your on-premises management console. -**To update several sensors**: +1. In the **Available versions** area of the **Local update** pane, select the version you want to download for your software update. -1. On the Azure portal, go to **Defender for IoT** > **Updates**. Under **Sensors**, select **Download** and save the file. For example: + The **Available versions** area lists all update packages available for your specific update scenario. You may have multiple options, but there will always be one specific version marked as **Recommended** for you. For example: - :::image type="content" source="media/how-to-manage-individual-sensors/updates-page.png" alt-text="Screenshot of the Updates page of Defender for IoT." lightbox="media/how-to-manage-individual-sensors/updates-page.png"::: + :::image type="content" source="media/update-ot-software/recommended-version.png" alt-text="Screenshot highlighting the recommended update version for the selected update scenario." lightbox="media/update-ot-software/recommended-version.png"::: - Make sure you're downloading the correct file for the update you're performing. For more information, see [Legacy version updates vs. recent version updates](#legacy-version-updates-vs-recent-version-updates). +1. Scroll down further in the **Local update** pane and select **Download** to download the software file. - [!INCLUDE [root-of-trust](includes/root-of-trust.md)] + If you'd selected the **Are you updating through a local manager** option, files will be listed for both the on-premises management console and the sensor. For example: ++ :::image type="content" source="media/update-ot-software/download-update-package.png" alt-text="Screenshot of the Local update pane with two download files showing, for an on-premises management console and a sensor." lightbox="media/update-ot-software/download-update-package.png"::: ++ The update packages are downloaded with the following file syntax names: ++ - `sensor-secured-patcher-<Version number>.tar` for the OT sensor update + - `management-secured-patcher-<Version number>.tar` for the on-premises management console update ++ Where `<version number>` is the software version number you're updating to. +++### Update an on-premises management console ++1. Sign into your on-premises management console and select **System Settings** > **Version Update**. -1. On your on-premises management console, select **System Settings**, and identify the sensors that you want to update. +1. In the **Upload File** dialog, select **BROWSE FILE** and then browse to and select the update package you'd downloaded from the Azure portal. ++ The update process starts, and may take about 30 minutes. During your upgrade, the system is rebooted twice. ++ Sign in when prompted and check the version number listed in the bottom-left corner to confirm that the new version is listed. 
++### Update your OT sensors from the on-premises management console
++1. Sign into your on-premises management console, select **System Settings**, and identify the sensors that you want to update.
1. For any sensors you want to update, make sure that the **Automatic Version Updates** option is selected.
:::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png" alt-text="Screenshot of on-premises management console with Automatic Version Updates selected." lightbox="media/how-to-manage-sensors-from-the-on-premises-management-console/automatic-updates.png":::
> [!IMPORTANT]- > If your **Automatic Version Updates** option is red, you have a update conflict. For example, an update conflict might occur if you have multiple sensors marked for automatic updates but the sensors currently have different software versions installed. Select the option to resolve the conflict. + > If your **Automatic Version Updates** option is red, you have an update conflict. An update conflict might occur if you have multiple sensors marked for automatic updates but the sensors currently have different software versions installed. Select the **Automatic Version Updates** option to resolve the conflict.
>
1. Scroll down and on the right, select the **+** in the **Sensor version update** box. Browse to and select the update file you'd downloaded from the Azure portal.
If updates fail, a retry option appears with an option to download the failure log. Retry the update process or open a support ticket with the downloaded log files for assistance.
-# [From an OT sensor via CLI](#tab/cli) +# [OT sensor CLI](#tab/cli)
This procedure describes how to update OT sensor software via the CLI, directly on the OT sensor.
-**To update sensor software directly from the sensor via CLI**: +> [!IMPORTANT]
+> If you're using an on-premises management console, make sure that you've [updated the on-premises management console](#update-the-on-premises-management-console) *before* updating any connected sensors.
+>
++### Download the update package from the Azure portal
++1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
++1. In the **Local update** pane, select the software version that's currently installed on your sensors.
++1. In the **Available versions** area of the **Local update** pane, select the version you want to download for your software update.
++ The **Available versions** area lists all update packages available for your specific update scenario. You may have multiple options, but there will always be one specific version marked as **Recommended** for you. For example:
++ :::image type="content" source="media/update-ot-software/recommended-version.png" alt-text="Screenshot highlighting the recommended update version for the selected update scenario." lightbox="media/update-ot-software/recommended-version.png":::
++1. Scroll down further in the **Local update** pane and select **Download** to download the software file.
++ The update package is downloaded with a file syntax name of `sensor-secured-patcher-<version number>.tar`, where `<version number>` is the version you're updating to.
+++### Update sensor software directly from the sensor via CLI
-1.
Use SFTP or SCP to copy the update package you'd downloaded from the Azure portal to the OT sensor machine.
1. Sign in to the sensor as the `cyberx_host` user and copy the update file to the `/opt/sensor/logs/` directory.
-> [!NOTE]
-> After upgrading to version 22.1.x or higher, the new upgrade log is accessible by the *cyberx_host* user on the sensor at the following path: `/opt/sensor/logs/legacy-upgrade.log`. To access the update log, sign into the sensor via SSH with the *cyberx_host* user.
->
-> For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users). +### Confirm that your update succeeded
-## Download and apply a new activation file +To confirm that the update process completed successfully, check the sensor version in the following locations for the new version number:
-**Relevant only when updating from a legacy version to version 22.x or higher** +- In the Azure portal, on the **Sites and sensors** page, in the **Sensor version** column
-This procedure is relevant only if you're updating sensors from software versions earlier than 22.1.x. Such updates require a new activation file for each sensor, which you'll use to activate the sensor before you [update the software](#update-your-sensors). +- On the OT sensor console:
-**To prepare your sensor for update**: + - In the title bar + - On the **Overview** page > **General Settings** area + - In the **System settings** > **Sensor management** > **Software update** pane
-1. In Defender for IoT on the Azure portal, select **Sites and sensors** on the left. +- On a connected on-premises management console, on the **Site Management** page
-1. Select the site where you want to update your sensor, and then browse to the sensor you want to update. +Upgrade log files are located on the OT sensor machine at `/opt/sensor/logs/legacy-upgrade.log`, and are accessible to the *[cyberx_host](roles-on-premises.md#default-privileged-on-premises-users)* user via SSH.
-1. Expand the row for your sensor, select the options **...** menu on the right of the row, and then select **Prepare to update to 22.x**. For example: +## Update the on-premises management console
- :::image type="content" source="media/how-to-manage-sensors-on-the-cloud/prepare-to-update.png" alt-text="Screenshot of the Prepare to update option." lightbox="media/how-to-manage-sensors-on-the-cloud/prepare-to-update.png"::: +This procedure describes how to update on-premises management console software. You might need these steps before [updating OT sensors remotely from the Azure portal](#update-ot-sensors) or as a standalone update process.
-1. <a name="activation-file"></a>In the **Prepare to update sensor to version 22.X** message, select **Let's go**. +Updating an on-premises management console takes about 30 minutes.
- A new row in the grid is added for sensor you're upgrading. In that added row, select to download the activation file. +> [!IMPORTANT]
+> If you're updating the on-premises management console as part of an OT sensor update process, you must update your on-premises management console *before* updating your OT sensors.
+>
+> The software version on your on-premises management console must be equal to or greater than that of your most up-to-date sensor version. Each on-premises management console version is backwards compatible to older, supported sensor versions, but cannot connect to newer sensor versions.
+>
-
Verify that the status showing in the new sensor row has switched to **Pending activation**.
++### Download the update package from the Azure portal
++This procedure describes how to download an update package for a standalone update. If you're updating your on-premises management console together with connected sensors, we recommend using the **[Update sensors (Preview)](#update-ot-sensors)** menu on the **Sites and sensors** page instead.
+
1. In Defender for IoT on the Azure portal, select **Getting started** > **On-premises management console**.
++1. In the **On-premises management console** area, select the download scenario that best describes your update, and then select **Download**.
++ The update package is downloaded with a file syntax name of `management-secured-patcher-<version number>.tar`, where `<version number>` is the software version number you're updating to. [!INCLUDE [root-of-trust](includes/root-of-trust.md)]
-> [!NOTE]
-> The previous sensor is not automatically deleted after your update. After you've updated the sensor software, make sure to [remove the previous sensor from Defender for IoT](#remove-your-previous-sensor). +### Update the on-premises management console software version
-**To apply your activation file**: +1. Sign into your on-premises management console and select **System Settings** > **Version Update**.
-If you're upgrading from a legacy version to version 22.x or higher, make sure to apply the new activation file to your sensor. +1. In the **Upload File** dialog, select **BROWSE FILE** and then browse to and select the update file you'd downloaded from the Azure portal.
-1. On your sensor, select **System settings > Sensor management > Subscription & Mode Activation**. + The update process starts, and may take about 30 minutes. During your upgrade, the system is rebooted twice.
++1. Sign in when prompted and check the version number listed in the bottom-left corner to confirm that the new version is listed.
++## Update legacy OT sensor software
++Updating to version 22.x from an earlier version essentially onboards a new OT sensor, with all of the details from the legacy sensor.
++After the update, the newly onboarded, updated sensor requires a new activation file. We also recommend that you remove any resources left from your legacy sensor, such as deleting the sensor from Defender for IoT, and any private IoT Hubs that you'd used.
-1. In the **Subscription & Mode Activation** pane that appears on the right, select **Select file**, and then browse to and select the activation file you'd downloaded [earlier](#activation-file). +For more information, see [Versioning and support for on-premises software versions](release-notes.md#versioning-and-support-for-on-premises-software-versions).
-1. In Defender for IoT on the Azure portal, monitor your sensor's activation status. When the sensor is fully activated: +**To update a legacy OT sensor version**
- - The sensor's **Overview** page shows an activation status of **Valid**. - - In the Azure portal, on the **Sites and sensors** page, the sensor is listed as **OT cloud connected** and with the updated sensor version. +1. In Defender for IoT on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update.
-## Remove your previous sensor +1. Select the **Prepare to update to 22.X** option from the toolbar or from the options (**...**) menu for the sensor row.
++1.
<a name="activation-file"></a>In the **Prepare to update sensor to version 22.X** message, select **Let's go**. ++ A new row is added on the **Sites and sensors** page, representing the newly updated OT sensor. In that row, select to download the activation file. ++ [!INCLUDE [root-of-trust](includes/root-of-trust.md)] -Your previous sensors continue to appear in the **Sites and sensors** page until you delete them. After you've applied your new activation file and updated sensor software, make sure to delete any remaining, previous sensors from Defender for IoT. + The status for the new OT sensor switches to **Pending activation**. -Delete a sensor from the **Sites and sensors** page in the Azure portal. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal). +1. Sign into your OT sensor and select **System settings > Sensor management > Subscription & Mode Activation**. -## Remove private IoT Hubs +1. In the **Subscription & Mode Activation** pane, select **Select file**, and then browse to and select the activation file you'd downloaded [earlier](#activation-file). -If you've updated from a version earlier than 22.1.x, you may no longer need the private IoT Hubs you'd previously used to connect sensors to Defender for IoT. + Monitor the activation status on the **Sites and sensors** page. When the OT sensor is fully activated: -In such cases: + - The sensor status and health on the **Sites and sensors** page is updated with the new software version + - On the OT sensor, the **Overview** page shows an activation status of **Valid**. -1. Review your IoT hubs to ensure that it's not being used by other services. +1. After you've applied your new activation file, make sure to [delete the legacy sensor](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal). On the **Sites and sensors** page, select your legacy sensor, and then from the options (**...**) menu for that sensor, select **Delete sensor**. -1. Verify that your sensors are connected successfully. +1. (Optional) After updating from a legacy OT sensor version, you may have leftover IoT Hubs that are no longer in use. In such cases: -1. Delete any private IoT Hubs that are no longer needed. For more information, see the [IoT Hub documentation](../../iot-hub/iot-hub-create-through-portal.md). + 1. Review your IoT hubs to ensure that they're not being used by other services. + 1. Verify that your sensors are connected successfully. + 1. Delete any private IoT Hubs that are no longer needed. + + For more information, see the [IoT Hub documentation](../../iot-hub/iot-hub-create-through-portal.md). 
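As a minimal sketch of that cleanup with Azure CLI, assuming hypothetical hub and resource group names (`ot-legacy-hub` and `defender-iot-rg` aren't part of this procedure):

```bash
# Review the IoT hubs in the subscription before removing anything
az iot hub list --output table

# After confirming that your sensors are connected and the hub is unused,
# delete the leftover private IoT hub (hypothetical names)
az iot hub delete --name ot-legacy-hub --resource-group defender-iot-rg
```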
## Next steps For more information, see: -- [Install OT system software](how-to-install-software.md)-- [Manage individual sensors](how-to-manage-individual-sensors.md)-- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md) - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)+- [Configure OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md) +- [Manage individual sensors from the OT sensor console](how-to-manage-individual-sensors.md) +- [Manage OT sensors from the on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md) - [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md)-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)+- [Troubleshoot the OT sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md) |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Title: What's new in Microsoft Defender for IoT description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 01/03/2023 Last updated : 02/09/2023
# What's new in Microsoft Defender for IoT? Features released earlier than nine months ago are described in the [What's new
|Service area |Updates |
|||-| **OT networks** | **Cloud features**: [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) <br><br> **Sensor version 22.2.3**: [Configure OT sensor settings from the Azure portal (Public preview)](#configure-ot-sensor-settings-from-the-azure-portal-public-preview) | +| **OT networks** | **Cloud features**: <br>- [Download updates from the Sites and sensors page (Public preview)](#download-updates-from-the-sites-and-sensors-page-public-preview) <br>- [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) <br><br> **Sensor version 22.2.3**: [Configure OT sensor settings from the Azure portal (Public preview)](#configure-ot-sensor-settings-from-the-azure-portal-public-preview) |
| **Enterprise IoT networks** | **Cloud features**: [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) |
+### Download updates from the Sites and sensors page (Public preview)
++If you're running a local software update on your OT sensor or on-premises management console, the **Sites and sensors** page now provides a new wizard for downloading your update packages, accessed via the **Sensor update (Preview)** menu.
++For example:
+++- Threat intelligence updates are now available only from the **Sites and sensors** page > **Threat intelligence update (Preview)** option.
++- Update packages for the on-premises management console are also available from the **Getting started** > **On-premises management console** tab.
++For more information, see:
++- [Update Defender for IoT OT monitoring software](update-ot-software.md)
+- [Update threat intelligence packages](how-to-work-with-threat-intelligence-packages.md#update-threat-intelligence-packages)
+- [OT monitoring software versions](release-notes.md)
+
### Configure OT sensor settings from the Azure portal (Public preview) For sensor versions 22.2.3 and higher, you can now configure selected settings for cloud-connected sensors using the new **Sensor settings (Preview)** page, accessed via the Azure portal's **Sites and sensors** page. For example:
:::image type="content" source="media/update-ot-software/send-package.png" alt-text="Screenshot of the Send package option." lightbox="media/update-ot-software/send-package.png":::
-For more information, see [Update your sensors from the Azure portal](update-ot-software.md#update-your-sensors). +For more information, see [Update your sensors from the Azure portal](update-ot-software.md#update-ot-sensors).
### Azure connectivity status shown on OT sensors |
firewall | Forced Tunneling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/forced-tunneling.md | Azure Firewall provides automatic SNAT for all outbound traffic to public IP add > [!IMPORTANT] > If you deploy Azure Firewall inside of a Virtual WAN Hub (Secured Virtual Hub), advertising the default route over Express Route or VPN Gateway is not currently supported. A fix is being investigated. +> [!IMPORTANT] +> DNAT isn't supported with Forced Tunneling enabled. Firewalls deployed with Forced Tunneling enabled can't support inbound access from the Internet because of asymmetric routing. + ## Forced tunneling configuration You can configure Forced Tunneling during Firewall creation by enabling Forced Tunnel mode as shown below. To support forced tunneling, Service Management traffic is separated from customer traffic. An additional dedicated subnet named **AzureFirewallManagementSubnet** (minimum subnet size /26) is required with its own associated public IP address. This public IP address is for management traffic. It is used exclusively by the Azure platform and can't be used for any other purpose. |
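As a minimal sketch of that requirement with Azure CLI, assuming hypothetical resource group, VNet, and address prefix values (only the subnet name and the /26 minimum come from the article):

```bash
# The management subnet must be named exactly AzureFirewallManagementSubnet
# and be at least a /26 (resource group, VNet, and prefix are hypothetical)
az network vnet subnet create \
  --resource-group fw-rg \
  --vnet-name fw-vnet \
  --name AzureFirewallManagementSubnet \
  --address-prefixes 10.0.64.0/26

# Dedicated public IP for management traffic, used exclusively by the Azure platform
az network public-ip create \
  --resource-group fw-rg \
  --name fw-mgmt-pip \
  --sku Standard \
  --allocation-method Static
```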
firewall | Premium Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md | IDPS signature rules have the following properties:
|Column |Description |
|||
|Signature ID |Internal ID for each signature. This ID is also presented in Azure Firewall Network Rules logs.|-|Mode |Indicates if the signature is active or not, and whether firewall will drop or alert upon matched traffic. The below signature mode can override IDPS mode<br>- **Disabled**: The signature isn't enabled on your firewall.<br>- **Alert**: You'll receive alerts when suspicious traffic is detected.<br>- **Alert and Deny**: You'll receive alerts and suspicious traffic will be blocked. Few signature categories are defined as “Alert Only”, therefore by default, traffic matching their signatures won't be blocked even though IDPS mode is set to “Alert and Deny”. Customers may override this by customizing these specific signatures to “Alert and Deny” mode. <br><br> Note: IDPS alerts are available in the portal via network rule log query.| +|Mode |Indicates if the signature is active or not, and whether firewall drops or alerts upon matched traffic. The below signature mode can override IDPS mode<br>- **Disabled**: The signature isn't enabled on your firewall.<br>- **Alert**: You receive alerts when suspicious traffic is detected.<br>- **Alert and Deny**: You receive alerts and suspicious traffic is blocked. Few signature categories are defined as “Alert Only”, therefore by default, traffic matching their signatures isn't blocked even though IDPS mode is set to “Alert and Deny”. Customers may override this by customizing these specific signatures to “Alert and Deny” mode. <br><br> Note: IDPS alerts are available in the portal via network rule log query.|
|Severity |Each signature has an associated severity level and assigned priority that indicates the probability that the signature is an actual attack.<br>- **Low (priority 3)**: An abnormal event is one that doesn't normally occur on a network or Informational events are logged. Probability of attack is low.<br>- **Medium (priority 2)**: The signature indicates an attack of a suspicious nature. The administrator should investigate further.<br>- **High (priority 1)**: The attack signatures indicate that an attack of a severe nature is being launched. There's little probability that the packets have a legitimate purpose.|
|Direction |The traffic direction for which the signature is applied.<br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) to the Internet.<br>- **Bidirectional**: Signature is always applied on any traffic direction.|
|Group |The group name that the signature belongs to.|-|Description |Structured from the following three parts:<br>- **Category name**: The category name that the signature belongs to as described in [Azure Firewall IDPS signature rule categories](idps-signature-categories.md).<br>- High level description of the signature<br>- **CVE-ID** (optional) in the case where the signature is associated with a specific CVE.
The ID is listed here.| +|Description |Structured from the following three parts:<br>- **Category name**: The category name that the signature belongs to as described in [Azure Firewall IDPS signature rule categories](idps-signature-categories.md).<br>- High level description of the signature<br>- **CVE-ID** (optional) in the case where the signature is associated with a specific CVE.|
|Protocol |The protocol associated with this signature.|
|Source/Destination Ports |The ports associated with this signature.|
|Last updated |The last date that this signature was introduced or modified.|
URL Filtering can be applied both on HTTP and HTTPS traffic.
## Web categories
-Web categories lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. Web categories will also be included in Azure Firewall Standard, but it will be more fine-tuned in Azure Firewall Premium. As opposed to the Web categories capability in the Standard SKU that matches the category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic. +Web categories lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. Web categories are also included in Azure Firewall Standard, but it's more fine-tuned in Azure Firewall Premium. As opposed to the Web categories capability in the Standard SKU that matches the category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic.
For example, if Azure Firewall intercepts an HTTPS request for `www.google.com/news`, the following categorization is expected:
-- Firewall Standard – only the FQDN part will be examined, so `www.google.com` will be categorized as *Search Engine*. +- Firewall Standard – only the FQDN part is examined, so `www.google.com` is categorized as *Search Engine*.
-- Firewall Premium – the complete URL will be examined, so `www.google.com/news` will be categorized as *News*.+- Firewall Premium – the complete URL is examined, so `www.google.com/news` is categorized as *News*.
The categories are organized based on severity under **Liability**, **High-Bandwidth**, **Business Use**, **Productivity Loss**, **General Surfing**, and **Uncategorized**. For a detailed description of the web categories, see [Azure Firewall web categories](web-categories.md). Under the **Web Categories** tab in **Firewall Policy Settings**, you can reques
- have a suggested category for an uncategorized FQDN or URL
- Once you submit a category change report, you'll be given a token in the notifications that indicate that we've received the request for processing. You can check whether the request is in progress, denied, or approved by entering the token in the search bar. Be sure to save your token ID to do so. + Once you submit a category change report, you're given a token in the notifications that indicates that we've received the request for processing. You can check whether the request is in progress, denied, or approved by entering the token in the search bar. Be sure to save your token ID to do so.
:::image type="content" source="media/premium-features/firewall-category-change.png" alt-text="Firewall category report dialog"::: |
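The Mode description above notes that IDPS alerts surface through log queries in the portal. As a hedged sketch, if the firewall's resource-specific diagnostic logs are routed to a Log Analytics workspace, recent signature matches could be pulled with Azure CLI (the workspace GUID is a placeholder):

```bash
# Pull recent Azure Firewall IDPS signature matches from Log Analytics
# (placeholder workspace GUID; assumes resource-specific diagnostic logs
# are enabled on the firewall and sent to this workspace)
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "AZFWIdpsSignature | take 20"
```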
frontdoor | End To End Tls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md | For TLS1.2, the following cipher suites are supported:
* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256+* TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
> [!NOTE]-> For Windows 10 and later versions, we recommend enabling one or both of the ECDHE cipher suites for better security. CBC ciphers are enabled to support Windows 8.1, 8, and 7 operating systems. The DHE cipher suites are disabled. Hence connections coming from clients supporting only DHE ciphers will result in SSL Handshake failure. Workaround is to enable TLS 1.0 for the Hosts/Domains where clients connect with DHE ciphers. +> For Windows 10 and later versions, we recommend enabling one or both of the ECDHE_GCM cipher suites for better security. Windows 8.1, 8, and 7 aren't compatible with these ECDHE_GCM cipher suites. The ECDHE_CBC and DHE cipher suites have been provided for compatibility with those operating systems.
Using custom domains with TLS1.0/1.1 enabled, the following cipher suites are supported: |
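One way to check whether a given TLS 1.2 suite from the list above negotiates against an endpoint is an OpenSSL probe; the hostname below is hypothetical, and note that OpenSSL uses its own suite names (TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 appears as ECDHE-RSA-AES128-GCM-SHA256):

```bash
# Probe a single TLS 1.2 cipher suite against an endpoint (hypothetical hostname);
# a successful handshake prints the negotiated cipher, a failure prints an error
openssl s_client -connect contoso.azurefd.net:443 \
  -tls1_2 -cipher ECDHE-RSA-AES128-GCM-SHA256 < /dev/null
```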
hdinsight | Identity Broker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/identity-broker.md | To troubleshoot authentication issues, see [this guide](./domain-joined-authenti ## Clients using OAuth to connect to an HDInsight gateway with HDInsight ID Broker -In the HDInsight ID Broker setup, custom apps and clients that connect to the gateway can be updated to acquire the required OAuth token first. Follow the steps in [this document](../../storage/common/storage-auth-aad-app.md) to acquire the token with the following information: +In the HDInsight ID Broker setup, custom apps and clients that connect to the gateway can be updated to acquire the required OAuth token first. For more information, see [How to authenticate .NET applications with Azure services](/dotnet/azure/sdk/authentication). The key values required for authorizing access to an HDInsight gateway are: -* OAuth resource uri: `https://hib.azurehdinsight.net` -* AppId: 7865c1d2-f040-46cc-875f-831a1ef6a28a -* Permission: (name: Cluster.ReadWrite, id: 8f89faa0-ffef-4007-974d-4989b39ad77d) +* OAuth resource uri: `https://hib.azurehdinsight.net` +* AppId: 7865c1d2-f040-46cc-875f-831a1ef6a28a +* Permission: (name: Cluster.ReadWrite, id: 8f89faa0-ffef-4007-974d-4989b39ad77d) After you acquire the OAuth token, use it in the authorization header of the HTTP request to the cluster gateway (for example, https://\<clustername\>-int.azurehdinsight.net). A sample curl command to Apache Livy API might look like this example: |
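A sketch of the shape such a call might take, assuming a hypothetical cluster name and a token already acquired into `$TOKEN` (the Livy sessions path shown is one of several Livy API endpoints):

```bash
# Call the Livy API through the HDInsight gateway with the OAuth token
# (hypothetical cluster name; $TOKEN holds the token acquired as described above)
TOKEN="<oauth-token-from-previous-step>"
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://mycluster-int.azurehdinsight.net/livy/sessions"
```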
load-balancer | Troubleshoot Rhc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-rhc.md | |
logic-apps | Create Workflow With Trigger Or Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-workflow-with-trigger-or-action.md | + + Title: Create a workflow with a trigger or action +description: Start building your workflow by adding a trigger or an action in Azure Logic Apps. +++ms.suite: integration ++ Last updated : 02/14/2023+# As an Azure Logic Apps developer, I want to create a workflow using trigger and action operations in Azure Logic Apps. +++# Build a workflow with a trigger or action in Azure Logic Apps +++This how-to guide shows how to start your workflow by adding a *trigger* and then continue your workflow by adding an *action*. The trigger is always the first step in any workflow and specifies the condition to meet before your workflow can start to run. Following the trigger, you have to add one or more subsequent actions for your workflow to perform the tasks that you want. The trigger and actions work together to define your workflow's logic and structure. ++This guide shows the steps for Consumption and Standard logic app workflows. ++## Prerequisites ++- An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++- To add a trigger, you have to start with a logic app resource and a blank workflow. ++- To add an action, you have to start with a logic app resource and a workflow that minimally has a trigger. ++The following steps use the Azure portal, but you can also use the following tools to create a logic app and workflow: ++ - Consumption workflows: [Visual Studio](quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md) ++ - Standard workflows: [Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md) ++## Add a trigger to start your workflow ++### [Consumption](#tab/consumption) ++1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and blank workflow in the designer. ++1. On the designer, under the search box, select **All** so that you can search all the connectors and triggers by name. ++ The following example shows the designer for a blank Consumption logic app workflow with the **All** group selected. The **Triggers** list shows the available triggers, which appear in a specific order. For more information about the way that the designer organizes operation collections, connectors, and the triggers list, see [Connectors, triggers, and actions in the designer](create-workflow-with-trigger-or-action.md?tabs=consumption#connectors-triggers-actions-designer). + + :::image type="content" source="media/create-workflow-with-trigger-or-action/designer-overview-all-triggers-consumption.png" alt-text="Screenshot showing Azure portal, designer for Consumption logic app with blank workflow, and built-in triggers gallery."::: ++ To show more connectors with triggers in the gallery, below the connectors row, select the down arrow. ++ :::image type="content" source="media/create-workflow-with-trigger-or-action/show-more-connectors-triggers-consumption.png" alt-text="Screenshot showing Azure portal, designer for Consumption workflow, and down arrow selected to show more connectors with triggers."::: ++ The designer uses the following groups to organize connectors and their operations: ++ | Group | Description | + |-|-| + | **For You** | Any connectors and triggers that you recently used. 
|
+ | **All** | All connectors and triggers in Azure Logic Apps. |
+ | **Built-in** | Connectors and triggers that run directly and natively within the Azure Logic Apps runtime. |
+ | **Standard** and **Enterprise** | Connectors and triggers that are Microsoft-managed, hosted, and run in multi-tenant Azure. |
+ | **Custom** | Any connectors and triggers that you created and installed. |
++1. In the search box, enter the name for the connector or trigger that you want to find.
++1. From the triggers list, select the trigger that you want.
++1. If prompted, provide any necessary connection information, which differs based on the connector. When you're done, select **Create**.
++1. After the trigger information box appears, provide the necessary details for your selected trigger.
++1. When you're done, save your workflow. On the designer toolbar, select **Save**.
++### [Standard](#tab/standard)
++1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and blank workflow in the designer.
++1. On the designer, select **Choose an operation**, if not already selected.
++1. On the **Add a trigger** pane, under the search box, select either **Built-in** or **Azure**, based on the trigger that you want to find.
++ | Group | Description |
+ |-|-|
+ | **Built-in** | Connectors and triggers that run directly and natively within the Azure Logic Apps runtime. |
+ | **Azure** | For stateful workflows only, connectors and triggers that are Microsoft-managed, hosted, and run in multi-tenant Azure. |
++ The following example shows the designer for a blank Standard logic app workflow with the **Built-in** group selected. The **Triggers** list shows the available triggers, which appear in a specific order. For more information about the way that the designer organizes operation collections, connectors, and the triggers list, see [Connectors, triggers, and actions in the designer](create-workflow-with-trigger-or-action.md?tabs=standard#connectors-triggers-actions-designer).
+ :::image type="content" source="media/create-workflow-with-trigger-or-action/designer-overview-built-in-triggers-standard.png" alt-text="Screenshot showing Azure portal, designer for Standard logic app with blank workflow, and built-in triggers gallery.":::
++ To show more connectors with triggers in the gallery, below the connectors row, select the down arrow.
++ :::image type="content" source="media/create-workflow-with-trigger-or-action/show-more-built-in-connectors-triggers-standard.png" alt-text="Screenshot showing Azure portal, designer for Standard workflow, and down arrow selected to show more built-in connectors with triggers.":::
++ For stateful workflows, the following example shows the designer for a blank Standard logic app workflow with the **Azure** group selected. The **Triggers** list shows the available triggers, which appear in a specific order.
++ :::image type="content" source="media/create-workflow-with-trigger-or-action/azure-triggers-standard.png" alt-text="Screenshot showing Azure portal, designer for Standard workflow, and Azure triggers gallery.":::
++1. To filter the list, in the search box, enter the name for the connector or trigger. From the triggers list, select the trigger that you want.
++1. If prompted, provide any necessary connection information, which differs based on the connector. When you're done, select **Create**.
++1. After the trigger information box appears, provide the necessary details for your selected trigger.
++1. When you're done, save your workflow.
On the designer toolbar, select **Save**. ++### [Standard (Preview)](#tab/standard-preview) ++1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and blank workflow in the preview designer. ++1. On the designer, select **Add a trigger**, if not already selected. ++ The **Browse operations** pane opens and shows the available connectors with triggers. ++ :::image type="content" source="media/create-workflow-with-trigger-or-action/designer-overview-triggers-standard-preview.png" alt-text="Screenshot showing Azure portal, the preview designer for Standard logic app with blank workflow, and connectors with triggers gallery."::: ++1. Choose either option: ++ - To filter the connectors or triggers list by name, in the search box, enter the name for the connector or trigger that you want. ++ - To filter the connectors based on the following groups, open the **Filter** list, and select either **In-App** or **Shared**, based on the group that contains the trigger that you want. ++ | Group | Description | + |-|-| + | **In-App** | Connectors and triggers that run directly and natively within the Azure Logic Apps runtime. In the non-preview designer, this group is the same as the **Built-in** group. | + | **Shared** | For stateful workflows only, connectors and triggers that are Microsoft-managed, hosted, and run in multi-tenant Azure. In the non-preview designer, this group is the same as the **Azure** group. | ++ The following example shows the preview designer for a Standard logic app with a blank workflow and shows the **In-App** group selected. The list shows the available operation collections and connectors, which appear in a [specific order](create-workflow-with-trigger-or-action.md?tabs=standard-preview#connectors-triggers-actions-designer). ++ :::image type="content" source="media/create-workflow-with-trigger-or-action/in-app-connectors-triggers-standard-preview.png" alt-text="Screenshot showing Azure portal, the preview designer for Standard logic app with blank workflow, and 'In-App' connectors with triggers gallery."::: ++ The following example shows the preview designer for a Standard logic app with a blank workflow and shows the **Shared** group selected. The list shows the available operation collections and connectors, which appear in a [specific order](create-workflow-with-trigger-or-action.md?tabs=standard-preview#connectors-triggers-actions-designer). ++ :::image type="content" source="media/create-workflow-with-trigger-or-action/shared-connectors-triggers-standard-preview.png" alt-text="Screenshot showing Azure portal, the preview designer for Standard logic app with blank workflow, and 'Shared' connectors with triggers gallery."::: ++1. From the operation collection or connector list, select the collection or connector that you want. After the triggers list appears, select the trigger that you want. ++1. If prompted, provide any necessary connection information, which differs based on the connector. When you're done, select **Create**. ++1. After the trigger information box appears, provide the necessary details for your selected trigger. ++1. When you're done, save your workflow. On the designer toolbar, select **Save**. ++++## Add an action to run a task ++### [Consumption](#tab/consumption) ++1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and workflow in the designer. ++1. On the designer, choose one of the following: ++ * To add the action under the last step in the workflow, select **New step**. 
++ * To add the action between existing steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
++1. Under the **Choose an operation** search box, select **All** so that you can search all the connectors and actions by name.
++ The following example shows the designer for a Consumption logic app workflow with an existing trigger and shows the **All** group selected. The **Actions** list shows the available actions, which appear in a specific order. For more information about the way that the designer organizes operation collections, connectors, and the actions list, see [Connectors, triggers, and actions in the designer](create-workflow-with-trigger-or-action.md?tabs=consumption#connectors-triggers-actions-designer).
++ :::image type="content" source="media/create-workflow-with-trigger-or-action/designer-overview-all-actions-consumption.png" alt-text="Screenshot showing Azure portal, designer for Consumption logic app workflow with existing trigger, and built-in actions gallery.":::
++ To show more connectors with actions in the gallery, below the connectors row, select the down arrow.
++ :::image type="content" source="media/create-workflow-with-trigger-or-action/show-more-connectors-actions-consumption.png" alt-text="Screenshot showing Azure portal, Consumption workflow designer, and down arrow selected to show more connectors with actions.":::
++ The designer uses the following groups to organize connectors and their operations:
++ | Group | Description |
+ |-|-|
+ | **For You** | Any connectors and actions that you recently used. |
+ | **All** | All connectors and actions in Azure Logic Apps. |
+ | **Built-in** | Connectors and actions that run directly and natively within the Azure Logic Apps runtime. |
+ | **Standard** and **Enterprise** | Connectors and actions that are Microsoft-managed, hosted, and run in multi-tenant Azure. |
+ | **Custom** | Any connectors and actions that you created and installed. |
++1. In the search box, enter the name for the connector or action that you want to find.
++1. From the actions list, select the action that you want.
++1. If prompted, provide any necessary connection information, which differs based on the connector. When you're done, select **Create**.
++1. After the action information box appears, provide the necessary details for your selected action.
++1. When you're done, save your workflow. On the designer toolbar, select **Save**.
++### [Standard](#tab/standard)
++1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and workflow in the designer.
++1. On the designer, choose one of the following:
++ * To add the action under the last step in the workflow, select the plus sign (**+**), and then select **Add an action**.
++ * To add the action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
++1. On the **Add an action** pane, under the search box, select either **Built-in** or **Azure**, based on the action that you want to find.
++ | Group | Description |
+ |-|-|
+ | **Built-in** | Connectors and actions that run directly and natively within the Azure Logic Apps runtime. |
+ | **Azure** | For stateful workflows only, connectors and actions that are Microsoft-managed, hosted, and run in multi-tenant Azure. |
++ The following example shows the designer for a Standard logic app workflow with an existing trigger and shows the **Built-in** group selected.
The **Actions** list shows the available actions, which appear in a [specific order](create-workflow-with-trigger-or-action.md?tabs=standard#connectors-triggers-actions-designer).
++ :::image type="content" source="media/create-workflow-with-trigger-or-action/designer-overview-built-in-actions-standard.png" alt-text="Screenshot showing Azure portal, designer for Standard logic app workflow with a trigger, and built-in actions gallery.":::
++ To show more connectors with actions in the gallery, below the connectors row, select the down arrow.
++ :::image type="content" source="media/create-workflow-with-trigger-or-action/show-more-built-in-connectors-actions-standard.png" alt-text="Screenshot showing Azure portal, Standard workflow designer, and down arrow selected to show more built-in connectors with actions.":::
++ For stateful workflows, the following example shows the designer for a Standard logic app workflow with an existing trigger and shows the **Azure** group selected. The **Actions** list shows the available actions, which appear in a [specific order](create-workflow-with-trigger-or-action.md?tabs=standard#connectors-triggers-actions-designer).
++ :::image type="content" source="media/create-workflow-with-trigger-or-action/azure-actions-standard.png" alt-text="Screenshot showing Azure portal, designer for Standard logic app workflow with a trigger, and Azure actions gallery.":::
++1. To filter the list, in the search box, enter the name for the connector or action. From the actions list, select the action that you want.
++1. If prompted, provide any necessary connection information, which differs based on the connector. When you're done, select **Create**.
++1. After the action information box appears, provide the necessary details for your selected action.
++1. When you're done, save your workflow. On the designer toolbar, select **Save**.
++### [Standard (Preview)](#tab/standard-preview)
++1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and workflow in the designer.
++1. On the designer, choose one of the following:
++ * To add the action under the last step in the workflow, select the plus sign (**+**), and then select **Add an action**.
++ * To add the action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
++1. On the designer, select **Add an action**, if not already selected.
++ The **Browse operations** pane opens and shows the available connectors.
++1. Choose either option:
++ - To filter the connectors or actions list by name, in the search box, enter the name for the connector or action that you want.
++ - To browse actions based on the following groups, open the **Filter** list, and select either **In-App** or **Shared**, based on the group that contains the action that you want.
++ | Group | Description |
+ |-|-|
+ | **In-App** | Connectors and actions that run directly and natively within the Azure Logic Apps runtime. In the non-preview designer, this group is the same as the **Built-in** group. |
+ | **Shared** | Connectors and actions that are Microsoft-managed, hosted, and run in multi-tenant Azure. In the non-preview designer, this group is the same as the **Azure** group. |
++ The following example shows the preview designer for a Standard workflow with an existing trigger and shows the **In-App** group selected.
The list shows the available operation collections and connectors, which appear in a [specific order](create-workflow-with-trigger-or-action.md?tabs=standard-preview#connectors-triggers-actions-designer). ++ :::image type="content" source="media/create-workflow-with-trigger-or-action/in-app-connectors-actions-standard-preview.png" alt-text="Screenshot showing Azure portal, the preview designer for Standard logic app workflow with a trigger, and In-App connectors with actions gallery."::: ++ The following example shows the preview designer for a Standard workflow with an existing trigger and shows the **Shared** group selected. The list shows the available operation collections and connectors, which appear in a [specific order](create-workflow-with-trigger-or-action.md?tabs=standard-preview#connectors-triggers-actions-designer). ++ :::image type="content" source="media/create-workflow-with-trigger-or-action/shared-connectors-actions-standard-preview.png" alt-text="Screenshot showing Azure portal, the preview designer for Standard logic app workflow with a trigger, and Shared connectors with actions gallery."::: ++1. From the operation collection or connector list, select the collection or connector that you want. After the actions list appears, select the action that you want. ++1. If prompted, provide any necessary connection information, which differs based on the connector. When you're done, select **Create**. ++1. After the action information box appears, provide the necessary details for your selected action. ++1. When you're done, save your workflow. On the designer toolbar, select **Save**. ++++<a name="connectors-triggers-actions-designer"></a> ++## Connectors, triggers, and actions in the designer ++In the workflow designer, you can select from hundreds of triggers and actions, collectively called *operations*. Azure Logic Apps organizes these operations into either collections, such as **Schedule**, **HTTP**, and **Data Operations**, or as connectors, such as **Azure Service Bus**, **SQL Server**, **Azure Blob Storage**, and **Office 365 Outlook**. These collections can include triggers, actions, or both. ++### [Consumption](#tab/consumption) ++Under the search box, a row shows the available operation collections and connectors organized from left to right, based on global popularity and usage. The individual **Triggers** and **Actions** lists are grouped by collection or connector name. These names appear in ascending order, first numerically if any exist, and then alphabetically. 
++#### Built-in operations ++The following example shows the **Built-in** triggers gallery: +++The following example shows the **Built-in** actions gallery: +++#### Standard operations ++The following example shows the **Standard** triggers gallery: +++The following example shows the **Standard** actions gallery: +++#### Enterprise operations ++The following example shows the **Enterprise** triggers gallery: +++The following example shows the **Enterprise** actions gallery: +++For more information, see the following documentation: ++- [Built-in operations and connectors in Azure Logic Apps](../connectors/built-in.md) +- [Microsoft-managed (Standard and Enterprise) connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) +- [Custom connectors in Azure Logic Apps](custom-connector-overview.md) +- [Billing and pricing for operations in Consumption workflows](logic-apps-pricing.md#consumption-operations) ++### [Standard](#tab/standard) ++In the **Add a trigger** or **Add an action** pane, under the search box, the **Built-in** or **Azure** connectors gallery row shows the available operation collections and connectors organized from left to right in ascending order, first numerically if any exist, and then alphabetically. The individual **Triggers** and **Actions** lists are grouped by collection or connector name and appear in ascending order, first numerically if any exist, and then alphabetically. ++#### Built-in operations ++The following example shows the **Built-in** triggers gallery: +++The following example shows the **Built-in** actions gallery: +++#### Azure operations ++The following example shows the **Azure** triggers gallery: +++The following example shows the **Azure** actions gallery: +++For more information, see the following documentation: ++- [Built-in operations and connectors in Azure Logic Apps](../connectors/built-in.md) +- [Microsoft-managed connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) +- [Built-in custom connectors in Azure Logic Apps](custom-connector-overview.md) +- [Billing and pricing for operations in Standard workflows](logic-apps-pricing.md#standard-operations) ++### [Standard (Preview)](#tab/standard-preview) ++In the **Browse operations** pane, the connectors gallery lists the available operation collections and connectors organized from left to right in ascending order, first numerically if any exist, and then alphabetically. After you select a collection or connector, the triggers or actions appear in ascending alphabetical order. ++#### In-App (built-in) operations ++The following example shows the **In-App** collections and connectors gallery when you add a trigger: +++After you select a collection or connector, the individual triggers are grouped by collection or connector name and appear in ascending order, first numerically if any exist, and then alphabetically.
++The following example selected the **Schedule** operations collection and shows the trigger named **Recurrence**: +++The following example shows the **In-App** collections and connectors gallery when you add an action: +++The following example selected the **Azure Queue Storage** connector and shows the available actions: +++#### Shared (Azure) operations ++The following example shows the **Shared** connectors gallery when you add a trigger: +++After you select a collection or connector, the individual triggers are grouped by collection or connector name and appear in ascending order, first numerically if any exist, and then alphabetically. ++The following example selected the **365 Training** connector and shows the available triggers: +++The following example shows the **Shared** connectors gallery when you add an action: +++The following example selected the **365 Training** connector and shows the available actions: +++For more information, see the following documentation: ++- [Built-in operations and connectors in Azure Logic Apps](../connectors/built-in.md) +- [Microsoft-managed connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) +- [Built-in custom connectors in Azure Logic Apps](custom-connector-overview.md) +- [Billing and pricing for operations in Standard workflows](logic-apps-pricing.md#standard-operations) ++++## Next steps ++[General information about connectors, triggers, and actions](/connectors/connectors) |
logic-apps | Logic Apps Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md | To help you estimate more accurate consumption costs, review these tips: * Consider the possible number of messages or events that might arrive on any given day, rather than base your calculations on only the polling interval. -* When an event or message meets the trigger criteria, many triggers immediately try to read any other waiting events or messages that meet the criteria. This behavior means that even when you select a longer polling interval, the trigger fires based on the number of waiting events or messages that qualify for starting workflows. Triggers that follow this behavior include Azure Service Bus and Azure Event Hub. +* When an event or message meets the trigger criteria, many triggers immediately try to read any other waiting events or messages that meet the criteria. This behavior means that even when you select a longer polling interval, the trigger fires based on the number of waiting events or messages that qualify for starting workflows. Triggers that follow this behavior include Azure Service Bus and Azure Event Hubs. For example, suppose you set up a trigger that checks an endpoint every day. When the trigger checks the endpoint and finds 15 events that meet the criteria, the trigger fires and runs the corresponding workflow 15 times. The Logic Apps service meters all the actions that those 15 workflows perform, including the trigger requests. To help you estimate more accurate consumption costs, review these tips: In single-tenant Azure Logic Apps, a logic app and its workflows follow the [**Standard** plan](https://azure.microsoft.com/pricing/details/logic-apps/) for pricing and billing. You create such logic apps in various ways, for example, when you choose the **Logic App (Standard)** resource type or use the **Azure Logic Apps (Standard)** extension in Visual Studio Code. This pricing model requires that logic apps use a hosting plan and a pricing tier, which differs from the Consumption plan in that you're billed for reserved capacity and dedicated resources whether or not you use them. -When you create or deploy logic apps with the **Logic App (Standard)** resource type, you can use the Workflow Standard hosting plan in all Azure regions. You also have the option to select an existing **App Service Environment v3** resource as your deployment location, but you can only use the App Service plan with this option. If you choose this option, you're charged for the instances used by the App Service plan and for running your logic app workflows. No other charges apply. +When you create or deploy logic apps with the **Logic App (Standard)** resource type, you can use the Workflow Standard hosting plan in all Azure regions. You also have the option to select an existing **App Service Environment v3** resource as your deployment location, but you can only use the [App Service plan](../app-service/overview-hosting-plans.md) with this option. If you choose this option, you're charged for the instances used by the App Service plan and for running your logic app workflows. No other charges apply. > [!IMPORTANT] > The following plans and resources are no longer available or supported with the public release of the **Logic App (Standard)** resource type in Azure regions: |
machine-learning | How To Integrate Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-integrate-azure-policy.md | You can also assign policies by using [Azure PowerShell](../governance/policy/as ## Conditional access policies -To control who can access your Azure Machine Learning workspace, use Azure Active Directory [Conditional Access](../active-directory/conditional-access/overview.md). --> [!IMPORTANT] -> Azure Machine Learning studio cannot be added in cloud apps in Azure AD Conditional Access, as the studio UI is a client application. +You can't use [Azure AD Conditional Access policies](/azure/active-directory/conditional-access/overview) to control access to Azure Machine Learning studio, as it's a client application. Azure Machine Learning does honor conditional access policies you may have created for other cloud apps or services. For example, these policies apply when you attempt to access approved apps from a Jupyter Notebook running on an Azure Machine Learning compute instance. ## Enable self-service using landing zones |
machine-learning | How To Setup Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md | Learn how to set up authentication to your Azure Machine Learning workspace from Regardless of the authentication workflow used, Azure role-based access control (Azure RBAC) is used to scope the level of access (authorization) allowed to the resources. For example, an admin or automation process might have access to create a compute instance, but not use it, while a data scientist could use it, but not delete or create it. For more information, see [Manage access to Azure Machine Learning workspace](how-to-assign-roles.md). -Azure AD Conditional Access can be used to further control or restrict access to the workspace for each authentication workflow. For example, an admin can allow workspace access from managed devices only. - ## Prerequisites * Create an [Azure Machine Learning workspace](how-to-manage-workspace.md). print(ml_client) ## Use Conditional Access -As an administrator, you can enforce [Azure AD Conditional Access policies](../active-directory/conditional-access/overview.md) for users signing in to the workspace. For example, you -can require two-factor authentication, or allow sign in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to Machine Learning Cloud app. --> [!IMPORTANT] -> Azure Machine Learning studio cannot be added in cloud apps in Azure AD Conditional Access, as the studio UI is a client application. +You can't use [Azure AD Conditional Access policies](/azure/active-directory/conditional-access/overview) to control access to Azure Machine Learning studio, as it's a client application. Azure Machine Learning does honor conditional access policies you may have created for other cloud apps or services. For example, these policies apply when you attempt to access approved apps from a Jupyter Notebook running on an Azure Machine Learning compute instance. ## Next steps |
machine-learning | Migrate To V2 Resource Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-compute.md | This article gives a comparison of scenario(s) in SDK v1 and SDK v2. from azureml.core.compute_target import ComputeTargetException # Compute Instances need to have a unique name across the region.- # Here we create a unique name with current datetime + # Here, we create a unique name with current datetime ci_basic_name = "basic-ci" + datetime.datetime.now().strftime("%Y%m%d%H%M") compute_config = ComputeInstance.provisioning_configuration( This article gives a comparison of scenario(s) in SDK v1 and SDK v2. ```python # Compute Instances need to have a unique name across the region.- # Here we create a unique name with current datetime + # Here, we create a unique name with current datetime from azure.ai.ml.entities import ComputeInstance, AmlCompute import datetime |
machine-learning | Quickstart Spark Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-jobs.md | |
machine-learning | Reference Yaml Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-data.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |-| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | | -| `name` | string | **Required.** Name of the data asset. | | | -| `version` | string | Version of the dataset. If omitted, Azure ML will autogenerate a version. | | | -| `description` | string | Description of the data asset. | | | -| `tags` | object | Dictionary of tags for the data asset. | | | +| `$schema` | string | The YAML schema. If you use the Azure Machine Learning Visual Studio Code extension to author the YAML file, you can invoke schema and resource completions if you include `$schema` at the top of your file. | | | +| `name` | string | **Required.** The data asset name. | | | +| `version` | string | The dataset version. If omitted, Azure ML autogenerates a version. | | | +| `description` | string | The data asset description. | | | +| `tags` | object | The data asset tag dictionary. | | | | `type` | string | The data asset type. Specify `uri_file` for data that points to a single file source, or `uri_folder` for data that points to a folder source. | `uri_file`, `uri_folder` | `uri_folder` |-| `path` | string | Either a local path to the data source file or folder, or the URI of a cloud path to the data source file or folder. Please ensure that the source provided here is compatible with the `type` specified. <br><br> Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. | | | +| `path` | string | Either a local path to the data source file or folder, or the URI of a cloud path to the data source file or folder. Ensure that the source provided here is compatible with the `type` specified. <br><br> Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. To use the `azureml://` URI format, see [Core yaml syntax](reference-yaml-core-syntax.md). | | | ## Remarks The `az ml data` commands can be used for managing Azure Machine Learning data a ## Examples -Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/assets/data). Several are shown below. +Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/assets/data). Several are shown here: ## YAML: datastore file |
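To ground the data asset schema described above, here's a minimal YAML sketch. It's illustrative only: the asset name, file name, and tag are hypothetical placeholders, not taken from the article's own examples.

```yaml
# data-asset.yml -- minimal uri_file data asset definition (illustrative names only)
$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
name: sample-file-data        # hypothetical asset name
version: "1"                  # omit to let Azure ML autogenerate a version
type: uri_file                # points to a single file source
description: Sample data asset that points to one local CSV file.
path: ./sample-data.csv       # local path; azureml, https, wasbs, abfss, and adl URIs also work
tags:
  purpose: demo               # hypothetical tag
```

Assuming a file like this, you could register the asset with `az ml data create --file data-asset.yml`.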
machine-learning | Reference Yaml Datastore Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-blob.md | -The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json. --+See the source JSON schema at https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json. [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)] The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |-| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | | -| `type` | string | **Required.** The type of datastore. | `azure_blob` | | -| `name` | string | **Required.** Name of the datastore. | | | -| `description` | string | Description of the datastore. | | | -| `tags` | object | Dictionary of tags for the datastore. | | | -| `account_name` | string | **Required.** Name of the Azure storage account. | | | -| `container_name` | string | **Required.** Name of the container. | | | -| `endpoint` | string | Endpoint suffix of the storage service, which is used for creating the storage account endpoint URL by combining the storage account name and `endpoint`. Example storage account URL: `https://<storage-account-name>.blob.core.windows.net`. | | `core.windows.net` | -| `protocol` | string | Protocol to use to connect to the container. | `https`, `wasbs` | `https` | -| `credentials` | object | Credential-based authentication credentials for connecting to the Azure storage account. You can provide either an account key or a shared access signature (SAS) token. Credential secrets are stored in the workspace key vault. | | | -| `credentials.account_key` | string | The account key for accessing the storage account. **One of `credentials.account_key` or `credentials.sas_token` is required if `credentials` is specified.** | | | +| `$schema` | string | The YAML schema. If you use the Azure Machine Learning Visual Studio Code extension to author the YAML file, you can invoke schema and resource completions if you include `$schema` at the top of your file. | | | +| `type` | string | **Required.** The datastore type. | `azure_blob` | | +| `name` | string | **Required.** The datastore name. | | | +| `description` | string | The datastore description. | | | +| `tags` | object | The datastore tag dictionary. | | | +| `account_name` | string | **Required.** The Azure storage account name. | | | +| `container_name` | string | **Required.** The container name. | | | +| `endpoint` | string | The endpoint suffix of the storage service, used for creation of the storage account endpoint URL. It combines the storage account name and `endpoint`. Example storage account URL: `https://<storage-account-name>.blob.core.windows.net`. | | `core.windows.net` | +| `protocol` | string | Protocol for connection to the container. | `https`, `wasbs` | `https` | +| `credentials` | object | Credential-based authentication credentials for connection to the Azure storage account. An account key or a shared access signature (SAS) token will work. The workspace key vault stores the credential secrets. | | | +| `credentials.account_key` | string | The account key used for storage account access. 
**One of `credentials.account_key` or `credentials.sas_token` is required if `credentials` is specified.** | | | | `credentials.sas_token` | string | The SAS token for accessing the storage account. **One of `credentials.account_key` or `credentials.sas_token` is required if `credentials` is specified.** | | | ## Remarks -The `az ml datastore` command can be used for managing Azure Machine Learning datastores. +You can use the `az ml datastore` command to manage Azure Machine Learning datastores. ## Examples -Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/resources/datastore). Several are shown below. +See examples in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/resources/datastore). Several are shown here: ## YAML: identity-based access |
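As a quick illustration of the `azure_blob` schema above, here's a minimal YAML sketch that uses account-key authentication. The datastore, storage account, and container names are hypothetical, and `<account-key>` is a placeholder:

```yaml
# blob-datastore.yml -- minimal azure_blob datastore (illustrative values only)
$schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
type: azure_blob
name: blob_example                 # hypothetical datastore name
description: Example blob datastore that uses account-key authentication.
account_name: mystorageaccount     # hypothetical storage account
container_name: example-container  # hypothetical container
credentials:
  account_key: <account-key>       # or specify credentials.sas_token instead
```

You could then create the datastore with `az ml datastore create --file blob-datastore.yml`. Omitting the `credentials` block entirely gives identity-based access, as in the article's identity-based access example.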
machine-learning | Reference Yaml Datastore Data Lake Gen1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen1.md | -The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/azureDataLakeGen1.schema.json. --+See the source JSON schema at https://azuremlschemas.azureedge.net/latest/azureDataLakeGen1.schema.json. [!INCLUDE [schema note](../../includes/machine-learning-preview-old-json-schema-note.md)] The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |-| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | | -| `type` | string | **Required.** The type of datastore. | `azure_data_lake_gen1` | | -| `name` | string | **Required.** Name of the datastore. | | | -| `description` | string | Description of the datastore. | | | -| `tags` | object | Dictionary of tags for the datastore. | | | -| `store_name` | string | **Required.** Name of the Azure Data Lake Storage Gen1 account. | | | -| `credentials` | object | Service principal credentials for connecting to the Azure storage account. Credential secrets are stored in the workspace key vault. | | | -| `credentials.tenant_id` | string | The tenant ID of the service principal. **Required if `credentials` is specified.** | | | -| `credentials.client_id` | string | The client ID of the service principal. **Required if `credentials` is specified.** | | | -| `credentials.client_secret` | string | The client secret of the service principal. **Required if `credentials` is specified.** | | | -| `credentials.resource_url` | string | The resource URL that determines what operations will be performed on the Azure Data Lake Storage Gen1 account. | | `https://datalake.azure.net/` | -| `credentials.authority_url` | string | The authority URL used to authenticate the user. | | `https://login.microsoftonline.com` | +| `$schema` | string | The YAML schema. If you use the Azure Machine Learning Visual Studio Code extension to author the YAML file, you can invoke schema and resource completions if you include `$schema` at the top of your file. | | | +| `type` | string | **Required.** The datastore type. | `azure_data_lake_gen1` | | +| `name` | string | **Required.** The datastore name. | | | +| `description` | string | The datastore description. | | | +| `tags` | object | The datastore tag dictionary. | | | +| `store_name` | string | **Required.** The Azure Data Lake Storage Gen1 account name. | | | +| `credentials` | object | Service principal credentials to connect to the Azure storage account. Credential secrets are stored in the workspace key vault. | | | +| `credentials.tenant_id` | string | The service principal tenant ID. **Required if `credentials` is specified.** | | | +| `credentials.client_id` | string | The service principal client ID. **Required if `credentials` is specified.** | | | +| `credentials.client_secret` | string | The service principal client secret. **Required if `credentials` is specified.** | | | +| `credentials.resource_url` | string | The resource URL that determines which operations will be performed on the Azure Data Lake Storage Gen1 account. | | `https://datalake.azure.net/` | +| `credentials.authority_url` | string | The authority URL used for user authentication.
| | `https://login.microsoftonline.com` | ## Remarks -The `az ml datastore` command can be used for managing Azure Machine Learning datastores. +You can use the `az ml datastore` command to manage Azure Machine Learning datastores. ## Examples -Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/resources/datastore). Several are shown below. +See examples in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/resources/datastore). Several are shown here: ## YAML: identity-based access |
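For the `azure_data_lake_gen1` schema above, here's a minimal sketch with service principal credentials. All names and `<...>` values are placeholders:

```yaml
# adls-gen1-datastore.yml -- minimal azure_data_lake_gen1 datastore (illustrative values only)
$schema: https://azuremlschemas.azureedge.net/latest/azureDataLakeGen1.schema.json
type: azure_data_lake_gen1
name: adls_gen1_example       # hypothetical datastore name
description: Example Data Lake Storage Gen1 datastore with service principal credentials.
store_name: mydatalakestore   # hypothetical Gen1 account name
credentials:
  tenant_id: <tenant-id>          # service principal tenant ID
  client_id: <client-id>          # service principal client ID
  client_secret: <client-secret>  # stored in the workspace key vault
```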
machine-learning | Reference Yaml Datastore Data Lake Gen2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen2.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |-| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | | -| `type` | string | **Required.** The type of datastore. | `azure_data_lake_gen2` | | -| `name` | string | **Required.** Name of the datastore. | | | -| `description` | string | Description of the datastore. | | | -| `tags` | object | Dictionary of tags for the datastore. | | | -| `account_name` | string | **Required.** Name of the Azure storage account. | | | -| `filesystem` | string | **Required.** Name of the file system. The parent directory that contains the files and folders. This is equivalent to a container in Azure Blob storage. | | | -| `endpoint` | string | Endpoint suffix of the storage service, which is used for creating the storage account endpoint URL by combining the storage account name and `endpoint`. Example storage account URL: `https://<storage-account-name>.dfs.core.windows.net`. | | `core.windows.net` | -| `protocol` | string | Protocol to use to connect to the file system. | `https`, `abfss` | `https` | +| `$schema` | string | The YAML schema. If you use the Azure Machine Learning Visual Studio Code extension to author the YAML file, you can invoke schema and resource completions if you include `$schema` at the top of your file. | | | +| `type` | string | **Required.** The datastore type. | `azure_data_lake_gen2` | | +| `name` | string | **Required.** The datastore name. | | | +| `description` | string | The datastore description. | | | +| `tags` | object | The datastore tag dictionary. | | | +| `account_name` | string | **Required.** The Azure storage account name. | | | +| `filesystem` | string | **Required.** The file system name. The parent directory containing the files and folders, equivalent to an Azure Blob storage container. | | | +| `endpoint` | string | The endpoint suffix of the storage service, used for creation of the storage account endpoint URL. It combines the storage account name and `endpoint`. Example storage account URL: `https://<storage-account-name>.dfs.core.windows.net`. | | `core.windows.net` | +| `protocol` | string | Protocol for connection to the file system. | `https`, `abfss` | `https` | | `credentials` | object | Service principal credentials for connecting to the Azure storage account. Credential secrets are stored in the workspace key vault. | | |-| `credentials.tenant_id` | string | The tenant ID of the service principal. **Required if `credentials` is specified.** | | | -| `credentials.client_id` | string | The client ID of the service principal. **Required if `credentials` is specified.** | | | -| `credentials.client_secret` | string | The client secret of the service principal. **Required if `credentials` is specified.** | | | -| `credentials.resource_url` | string | The resource URL that determines what operations will be performed on the Azure Data Lake Storage Gen2 account. | | `https://storage.azure.com/` | -| `credentials.authority_url` | string | The authority URL used to authenticate the user.
| | `https://login.microsoftonline.com` | +| `credentials.tenant_id` | string | The service principal tenant ID. **Required if `credentials` is specified.** | | | +| `credentials.client_id` | string | The service principal client ID. **Required if `credentials` is specified.** | | | +| `credentials.client_secret` | string | The service principal client secret. **Required if `credentials` is specified.** | | | +| `credentials.resource_url` | string | The resource URL that specifies the operations that will be performed on the Azure Data Lake Storage Gen2 account. | | `https://storage.azure.com/` | +| `credentials.authority_url` | string | The authority URL used for user authentication. | | `https://login.microsoftonline.com` | ## Remarks The `az ml datastore` command can be used for managing Azure Machine Learning da ## Examples -Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/resources/datastore). Several are shown below. +Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/resources/datastore). Several are shown here: ## YAML: identity-based access |
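Similarly, for the `azure_data_lake_gen2` schema above, a minimal sketch (again, names and `<...>` values are placeholders):

```yaml
# adls-gen2-datastore.yml -- minimal azure_data_lake_gen2 datastore (illustrative values only)
$schema: https://azuremlschemas.azureedge.net/latest/azureDataLakeGen2.schema.json
type: azure_data_lake_gen2
name: adls_gen2_example          # hypothetical datastore name
description: Example Data Lake Storage Gen2 datastore with service principal credentials.
account_name: mystorageaccount   # hypothetical storage account
filesystem: example-filesystem   # hypothetical file system (the container-equivalent)
credentials:
  tenant_id: <tenant-id>          # service principal tenant ID
  client_id: <client-id>          # service principal client ID
  client_secret: <client-secret>  # stored in the workspace key vault
```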
machine-learning | How To Setup Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md | Learn how to set up authentication to your Azure Machine Learning workspace. Aut Regardless of the authentication workflow used, Azure role-based access control (Azure RBAC) is used to scope the level of access (authorization) allowed to the resources. For example, an admin or automation process might have access to create a compute instance, but not use it, while a data scientist could use it, but not delete or create it. For more information, see [Manage access to Azure Machine Learning workspace](../how-to-assign-roles.md). -Azure AD Conditional Access can be used to further control or restrict access to the workspace for each authentication workflow. For example, an admin can allow workspace access from managed devices only. - ## Prerequisites * Create an [Azure Machine Learning workspace](../how-to-manage-workspace.md). ws = Workspace(subscription_id="your-sub-id", ## Use Conditional Access -As an administrator, you can enforce [Azure AD Conditional Access policies](../../active-directory/conditional-access/overview.md) for users signing in to the workspace. For example, you -can require two-factor authentication, or allow sign in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](../../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to Machine Learning Cloud app. --> [!IMPORTANT] -> Azure Machine Learning studio cannot be added in cloud apps in Azure AD Conditional Access, as the studio UI is a client application. +You can't use [Azure AD Conditional Access policies](/azure/active-directory/conditional-access/overview) to control access to Azure Machine Learning studio, as it's a client application. Azure Machine Learning does honor conditional access policies you may have created for other cloud apps or services. For example, these policies apply when you attempt to access approved apps from a Jupyter Notebook running on an Azure Machine Learning compute instance. ## Next steps |
managed-grafana | Known Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md | Title: Azure Managed Grafana limitations description: Learn about current limitations in Azure Managed Grafana. Previously updated : 11/30/2022 Last updated : 02/14/2023 Managed Grafana has the following known limitations: * Private endpoints are currently not available in Grafana. -* Email notifications are currently not supported. - * Reporting is currently not supported. ## Next steps |
network-watcher | Network Watcher Nsg Flow Logging Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md | Title: Introduction to flow logging for NSGs + Title: Introduction to flow logs for NSGs description: This article explains how to use the NSG flow logs feature of Azure Network Watcher. -# Introduction to flow logging for network security groups +# Introduction to flow logs for network security groups -## Introduction +Flow logs are a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a [network security group](../virtual-network/network-security-groups-overview.md#security-rules) (NSG). Flow data is sent to Azure Storage accounts. From there, you can access the data and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice. -[Network security group](../virtual-network/network-security-groups-overview.md#security-rules) (NSG) flow logs is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an NSG. Flow data is sent to Azure Storage accounts from where you can access it as well as export it to any visualization tool, SIEM, or IDS of your choice. + - +This article shows you how to use, manage, and troubleshoot NSG flow logs. -## Why use Flow Logs? +## Why use flow logs? -It is vital to monitor, manage, and know your own network for uncompromised security, compliance, and performance. Knowing your own environment is of paramount importance to protect and optimize it. You often need to know the current state of the network, who is connecting, where they're connecting from, which ports are open to the internet, expected network behavior, irregular network behavior, and sudden rises in traffic. +It's vital to monitor, manage, and know your own network so that you can protect and optimize it. You need to know the current state of the network, who's connecting, and where users are connecting from. You also need to know which ports are open to the internet, what network behavior is expected, what network behavior is irregular, and when sudden rises in traffic happen. -Flow logs are the source of truth for all network activity in your cloud environment. Whether you're an upcoming startup trying to optimize resources or large enterprise trying to detect intrusion, Flow logs are your best bet. You can use it for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. +Flow logs are the source of truth for all network activity in your cloud environment. Whether you're in a startup that's trying to optimize resources or a large enterprise that's trying to detect intrusion, flow logs can help. You can use them for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. ## Common use cases -**Network Monitoring**: Identify unknown or undesired traffic. Monitor traffic levels and bandwidth consumption. Filter flow logs by IP and port to understand application behavior. Export Flow Logs to analytics and visualization tools of your choice to set up monitoring dashboards. +**Network monitoring**: Identify unknown or undesired traffic. Monitor traffic levels and bandwidth consumption. Filter flow logs by IP and port to understand application behavior. 
Export flow logs to analytics and visualization tools of your choice to set up monitoring dashboards. -**Usage monitoring and optimization:** Identify top talkers in your network. Combine with GeoIP data to identify cross-region traffic. Understand traffic growth for capacity forecasting. Use data to remove overtly restrictive traffic rules. +**Usage monitoring and optimization:** Identify top talkers in your network. Combine with GeoIP data to identify cross-region traffic. Understand traffic growth for capacity forecasting. Use data to remove overly restrictive traffic rules. -**Compliance**: Use flow data to verify network isolation and compliance with enterprise access rules +**Compliance**: Use flow data to verify network isolation and compliance with enterprise access rules. -**Network forensics & Security analysis**: Analyze network flows from compromised IPs and network interfaces. Export flow logs to any SIEM or IDS tool of your choice. +**Network forensics and security analysis**: Analyze network flows from compromised IPs and network interfaces. Export flow logs to any SIEM or IDS tool of your choice. -## How logging works +## How flow logs work -**Key Properties** +Key properties of flow logs include: -- Flow logs operate at [Layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer) and record all IP flows going in and out of an NSG-- Logs are collected at **1-min interval** through the Azure platform and do not affect customer resources or network performance in any way.-- Logs are written in the JSON format and show outbound and inbound flows on a per NSG rule basis.-- Each log record contains the network interface (NIC) the flow applies to, 5-tuple information, the traffic decision & (Version 2 only) throughput information. See _Log Format_ below for full details.-- Flow Logs have a retention feature that allows automatically deleting the logs up to a year after their creation. +- Flow logs operate at [Layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer) and record all IP flows going in and out of an NSG. +- Logs are collected at 1-minute intervals through the Azure platform. They don't affect customer resources or network performance in any way. +- Logs are written in JSON format and show outbound and inbound flows per NSG rule. +- Each log record contains the network interface (NIC) that the flow applies to, 5-tuple information, the traffic decision, and (for version 2 only) throughput information. +- Flow logs have a retention feature that allows you to automatically delete the logs up to a year after their creation. > [!NOTE]-> Retention is available only if you use [General purpose v2 Storage accounts (GPv2)](../storage/common/storage-account-overview.md#types-of-storage-accounts). +> Retention is available only if you use [general-purpose v2 storage accounts](../storage/common/storage-account-overview.md#types-of-storage-accounts). -**Core concepts** +Core concepts for flow logs include: -- Software defined networks are organized around Virtual Networks (VNETs) and subnets. The security of these VNets and subnets can be managed using an NSG.-- A Network security group (NSG) contains a list of _security rules_ that allow or deny network traffic in resources it is connected to. NSGs can be associated to each virtual network subnet and network interface in a virtual machine.
For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).-- All traffic flows in your network are evaluated using the rules in the applicable NSG.-- The result of these evaluations is NSG Flow Logs. Flow logs are collected through the Azure platform and don't require any change to the customer resources.-- Note: Rules are of two types - terminating & non-terminating, each with different logging behaviors.- - NSG Deny rules are terminating. The NSG denying the traffic will log it in Flow logs and processing in this case would stop after any NSG denies traffic. - - NSG Allow rules are non-terminating, which means even if one NSG allows it, processing will continue to the next NSG. The last NSG allowing traffic will log the traffic to Flow logs. -- NSG Flow Logs are written to storage accounts from where they can be accessed.-- You can export, process, analyze, and visualize Flow Logs using tools like Traffic Analytics, Splunk, Grafana, Stealthwatch, etc.+- Software-defined networks are organized around virtual networks and subnets. You can manage the security of these virtual networks and subnets by using an NSG. +- An NSG contains a list of _security rules_ that allow or deny network traffic in resources that it's connected to. NSGs can be associated with each virtual network subnet and network interface in a virtual machine (VM). For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). +- All traffic flows in your network are evaluated through the rules in the applicable NSG. The result of these evaluations is NSG flow logs. +- Flow logs are collected through the Azure platform and don't require any change to customer resources. +- Rules are of two types: terminating and non-terminating. Each has different logging behaviors: + - NSG *deny* rules are terminating. The NSG that's denying the traffic will log it in flow logs. Processing in this case stops after any NSG denies traffic. + - NSG *allow* rules are non-terminating. Even if one NSG allows it, processing continues to the next NSG. The last NSG that allows traffic will log the traffic to flow logs. +- NSG flow logs are written to storage accounts. You can export, process, analyze, and visualize flow logs by using tools like Network Watcher traffic analytics, Splunk, Grafana, and Stealthwatch. ## Log format Flow logs include the following properties: -* **time** - Time when the event was logged -* **systemId** - Network Security Group system ID. -* **category** - The category of the event. The category is always **NetworkSecurityGroupFlowEvent** -* **resourceid** - The resource ID of the NSG -* **operationName** - Always NetworkSecurityGroupFlowEvents -* **properties** - A collection of properties of the flow - * **Version** - Version number of the Flow Log event schema - * **flows** - A collection of flows.
This property has multiple entries for different rules - * **rule** - Rule for which the flows are listed - * **flows** - a collection of flows - * **mac** - The MAC address of the NIC for the VM where the flow was collected - * **flowTuples** - A string that contains multiple properties for the flow tuple in comma-separated format - * **Time Stamp** - This value is the time stamp of when the flow occurred in UNIX epoch format - * **Source IP** - The source IP - * **Destination IP** - The destination IP - * **Source Port** - The source port - * **Destination Port** - The destination Port - * **Protocol** - The protocol of the flow. Valid values are **T** for TCP and **U** for UDP - * **Traffic Flow** - The direction of the traffic flow. Valid values are **I** for inbound and **O** for outbound. - * **Traffic Decision** - Whether traffic was allowed or denied. Valid values are **A** for allowed and **D** for denied. - * **Flow State - Version 2 Only** - Captures the state of the flow. Possible states are **B**: Begin, when a flow is created. Statistics aren't provided. **C**: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals. **E**: End, when a flow is terminated. Statistics are provided. - * **Packets - Source to destination - Version 2 Only** The total number of TCP packets sent from source to destination since last update. - * **Bytes sent - Source to destination - Version 2 Only** The total number of TCP packet bytes sent from source to destination since last update. Packet bytes include the packet header and payload. - * **Packets - Destination to source - Version 2 Only** The total number of TCP packets sent from destination to source since last update. - * **Bytes sent - Destination to source - Version 2 Only** The total number of TCP packet bytes sent from destination to source since last update. Packet bytes include packet header and payload. ---**NSG flow logs Version 2 (vs Version 1)** --Version 2 of the logs introduces the concept of flow state. You can configure which version of flow logs you receive. --Flow state _B_ is recorded when a flow is initiated. Flow state _C_ and flow state _E_ are states that mark the continuation of a flow and flow termination, respectively. Both _C_ and _E_ states contain traffic bandwidth information. +* `time`: Time when the event was logged. +* `systemId`: System ID of the NSG. +* `category`: Category of the event. The category is always `NetworkSecurityGroupFlowEvent`. +* `resourceid`: Resource ID of the NSG. +* `operationName`: Always `NetworkSecurityGroupFlowEvents`. +* `properties`: Collection of properties of the flow. + * `Version`: Version number of the flow log's event schema. + * `flows`: Collection of flows. This property has multiple entries for different rules. + * `rule`: Rule for which the flows are listed. + * `flows`: Collection of flows. + * `mac`: MAC address of the NIC for the VM where the flow was collected. + * `flowTuples`: String that contains multiple properties for the flow tuple, in comma-separated format. + * `Time Stamp`: Time stamp of when the flow occurred, in UNIX epoch format. + * `Source IP`: Source IP address. + * `Destination IP`: Destination IP address. + * `Source Port`: Source port. + * `Destination Port`: Destination port. + * `Protocol`: Protocol of the flow. Valid values are `T` for TCP and `U` for UDP. + * `Traffic Flow`: Direction of the traffic flow. Valid values are `I` for inbound and `O` for outbound. + * `Traffic Decision`: Whether traffic was allowed or denied.
Valid values are `A` for allowed and `D` for denied. + * `Flow State - Version 2 Only`: State of the flow. Possible states are: <br><br>`B`: Begin, when a flow is created. Statistics aren't provided. <br>`C`: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals. <br>`E`: End, when a flow is terminated. Statistics are provided. + * `Packets - Source to destination - Version 2 Only`: Total number of TCP packets sent from source to destination since the last update. + * `Bytes sent - Source to destination - Version 2 Only`: Total number of TCP packet bytes sent from source to destination since the last update. Packet bytes include the packet header and payload. + * `Packets - Destination to source - Version 2 Only`: Total number of TCP packets sent from destination to source since the last update. + * `Bytes sent - Destination to source - Version 2 Only`: Total number of TCP packet bytes sent from destination to source since the last update. Packet bytes include packet header and payload. +++Version 2 of NSG flow logs introduces the concept of flow state. You can configure which version of flow logs you receive. ++Flow state `B` is recorded when a flow is initiated. Flow state `C` and flow state `E` are states that mark the continuation of a flow and flow termination, respectively. Both `C` and `E` states contain traffic bandwidth information. ### Sample log records -The text that follows is an example of a flow log. As you can see, there are multiple records that follow the property list described in the preceding section. +In the following examples of a flow log, multiple records follow the property list described earlier. > [!NOTE]-> Values in the *flowTuples* property are a comma-separated list. - -**Version 1 NSG flow log format sample** +> Values in the `flowTuples` property are a comma-separated list. ++Here's an example format of a version 1 NSG flow log: + ```json { "records": [ The text that follows is an example of a flow log. As you can see, there are mul ```-**Version 2 NSG flow log format sample** ++Here's an example format of a version 2 NSG flow log: + ```json { "records": [ The text that follows is an example of a flow log. As you can see, there are mul } ```-**Log tuple explained** - +### Log tuple and bandwidth calculation -**Sample bandwidth calculation** + -Flow tuples from a TCP conversation between 185.170.185.105:35370 and 10.2.0.4:23: +Here's an example bandwidth calculation for flow tuples from a TCP conversation between 185.170.185.105:35370 and 10.2.0.4:23: -"1493763938,185.170.185.105,10.2.0.4,35370,23,T,I,A,B,,,," -"1493695838,185.170.185.105,10.2.0.4,35370,23,T,I,A,C,1021,588096,8005,4610880" -"1493696138,185.170.185.105,10.2.0.4,35370,23,T,I,A,E,52,29952,47,27072" +`1493763938,185.170.185.105,10.2.0.4,35370,23,T,I,A,B,,,,` +`1493695838,185.170.185.105,10.2.0.4,35370,23,T,I,A,C,1021,588096,8005,4610880` +`1493696138,185.170.185.105,10.2.0.4,35370,23,T,I,A,E,52,29952,47,27072` -For continuation _C_ and end _E_ flow states, byte and packet counts are aggregate counts from the time of the previous flow tuple record. Referencing the previous example conversation, the total number of packets transferred is 1021+52+8005+47 = 9125. The total number of bytes transferred is 588096+29952+4610880+27072 = 5256000. +For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. 
In the example conversation, the total number of packets transferred is 1021+52+8005+47 = 9125. The total number of bytes transferred is 588096+29952+4610880+27072 = 5256000. +## Enabling NSG flow logs -## Enabling NSG Flow Logs --Use the relevant link from below for guides on enabling flow logs. +For more information about enabling flow logs, see the following guides: - [Azure portal](./network-watcher-nsg-flow-logging-portal.md) - [PowerShell](./network-watcher-nsg-flow-logging-powershell.md)-- [CLI](./network-watcher-nsg-flow-logging-cli.md)-- [REST](./network-watcher-nsg-flow-logging-rest.md)+- [Azure CLI](./network-watcher-nsg-flow-logging-cli.md) +- [REST API](./network-watcher-nsg-flow-logging-rest.md) - [Azure Resource Manager](./network-watcher-nsg-flow-logging-azure-resource-manager.md) ## Updating parameters -**Azure portal** --On the Azure portal, navigate to the NSG Flow Logs section in Network Watcher. Then click the name of the NSG. This will bring up the settings pane for the Flow log. Change the parameters you want and hit **Save** to deploy the changes. +On the Azure portal: -**PS/CLI/REST/ARM** +1. Go to the **NSG flow logs** section in Network Watcher. +1. Select the name of the NSG. +1. On the settings pane for the flow log, change the parameters that you want. +1. Select **Save** to deploy the changes. -To update parameters via command-line tools, use the same command used to enable Flow Logs (from above) but with updated parameters that you want to change. +To update parameters via command-line tools, use the same command that you used to enable flow logs. -## Working with Flow logs +## Working with flow logs -*Read and Export flow logs* +### Read and export flow logs -- [Download & view Flow Logs from the portal](./network-watcher-nsg-flow-logging-portal.md#download-flow-log)-- [Read Flow logs using PowerShell functions](./network-watcher-read-nsg-flow-logs.md)-- [Export NSG Flow Logs to Splunk](https://www.splunk.com/en_us/blog/platform/splunking-azure-nsg-flow-logs.html)+- [Download and view flow logs from the portal](./network-watcher-nsg-flow-logging-portal.md#download-flow-log) +- [Read flow logs by using PowerShell functions](./network-watcher-read-nsg-flow-logs.md) +- [Export NSG flow logs to Splunk](https://www.splunk.com/en_us/blog/platform/splunking-azure-nsg-flow-logs.html) -While flow logs target NSGs, they are not displayed the same as the other logs. Flow logs are stored only within a storage account and follow the logging path shown in the following example: +Although flow logs target NSGs, they're not displayed the same way as the other logs. 
Flow logs are stored only within a storage account and follow the logging path shown in the following example: ``` https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecuritygroupflowevent/resourceId=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json ``` -*Visualize flow logs* +### Visualize flow logs ++- [Visualize NSG flow logs by using Azure Network Watcher traffic analytics](./traffic-analytics.md) +- [Visualize NSG flow logs by using Power BI](./network-watcher-visualize-nsg-flow-logs-power-bi.md) +- [Visualize NSG flow logs by using Elastic Stack](./network-watcher-visualize-nsg-flow-logs-open-source-tools.md) +- [Manage and analyze NSG flow logs by using Grafana](./network-watcher-nsg-grafana.md) +- [Manage and analyze NSG flow logs by using Graylog](./network-watcher-analyze-nsg-flow-logs-graylog.md) -- [Azure Traffic analytics](./traffic-analytics.md) is an Azure native service to process flow logs, extracts insights and visualize flow logs. -- [[Tutorial] Visualize NSG Flow logs with Power BI](./network-watcher-visualize-nsg-flow-logs-power-bi.md)-- [[Tutorial] Visualize NSG Flow logs with Elastic Stack](./network-watcher-visualize-nsg-flow-logs-open-source-tools.md)-- [[Tutorial] Manage and analyze NSG Flow logs using Grafana](./network-watcher-nsg-grafana.md)-- [[Tutorial] Manage and analyze NSG Flow logs using Graylog](./network-watcher-analyze-nsg-flow-logs-graylog.md)+### Disable flow logs -*Disable flow logs* +When you disable a flow log, you stop the flow logging for the associated NSG. But the flow log continues to exist as a resource, with all its settings and associations. You can enable it anytime to begin flow logging on the configured NSG. -When the flow log is disabled, the flow logging for associated NSG is stopped. But the flow log as a resource continues to exist with all its settings and associations. It can be enabled anytime to begin flow logging on the configured NSG. Steps to disable/enable a flow logs can be found in [this how to guide](./network-watcher-nsg-flow-logging-powershell.md). +For steps to disable and enable flow logs, see [this how-to guide](./network-watcher-nsg-flow-logging-powershell.md). -*Delete flow logs* +### Delete flow logs -When the flow log is deleted, not only the flow logging for the associated NSG is stopped but also the flow log resource is deleted with its settings and associations. To begin flow logging again, a new flow log resource must be created for that NSG. A flow log can be deleted using [PowerShell](/powershell/module/az.network/remove-aznetworkwatcherflowlog), [CLI](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete) or [REST API](/rest/api/network-watcher/flowlogs/delete). The support for deleting flow logs from Azure portal is in pipeline. +When you delete a flow log, you not only stop the flow logging for the associated NSG but also delete the flow log resource (with all its settings and associations). To begin flow logging again, you must create a new flow log resource for that NSG. ++You can delete a flow log by using [PowerShell](/powershell/module/az.network/remove-aznetworkwatcherflowlog), the [Azure CLI](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete), or the [REST API](/rest/api/network-watcher/flowlogs/delete).
At this time, you can't delete flow logs from the Azure portal. ++Also, when you delete an NSG, the associated flow log resource is deleted by default. > [!NOTE]-> To move a NSG to a different resource group or subscription, the associated flow logs must be deleted, just disabling the flow logs won't work. After migration of NSG, the flow logs must be recreated to enable flow logging on it. +> To move an NSG to a different resource group or subscription, you must delete the associated flow logs. Just disabling the flow logs won't work. After you migrate an NSG, you must re-create the flow logs to enable flow logging on it. ++## Considerations for NSG flow logs ++### Storage account ++- **Location**: The storage account used must be in the same region as the NSG. +- **Performance tier**: Currently, only standard-tier storage accounts are supported. +- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, NSG flow logs will stop working. To fix this problem, you must disable and then re-enable NSG flow logs. ++### Cost ++NSG flow logging is billed on the volume of logs produced. High traffic volume can result in large flow-log volume and the associated costs. ++Pricing of NSG flow logs does not include the underlying costs of storage. Using the retention policy feature with NSG flow logs means incurring separate storage costs for extended periods of time. ++If you want to retain data forever and don't want to apply any retention policy, set retention days to 0. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). ++### User-defined inbound TCP rules ++NSGs are implemented as a [stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). But because of current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless way. ++Flows that user-defined inbound rules affect become non-terminating. Additionally, byte and packet counts are not recorded for these flows. Because of those factors, the number of bytes and packets reported in NSG flow logs (and Network Watcher traffic analytics) could be different from actual numbers.
### User-defined inbound TCP rules

NSGs are implemented as a [stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). But because of current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless way.

Flows that user-defined inbound rules affect become non-terminating. Additionally, byte and packet counts aren't recorded for these flows. Because of those factors, the number of bytes and packets reported in NSG flow logs (and Network Watcher traffic analytics) could be different from actual numbers.

You can resolve this difference by setting the [FlowTimeoutInMinutes](/powershell/module/az.network/set-azvirtualnetwork) property on the associated virtual networks to a non-null value. You can achieve default stateful behavior by setting `FlowTimeoutInMinutes` to 4 minutes. For long-running connections where you don't want flows to disconnect from a service or destination, you can set `FlowTimeoutInMinutes` to a value of up to 30 minutes.

```powershell
# Get the virtual network, set the flow timeout, and write the change back.
$virtualNetwork = Get-AzVirtualNetwork -Name VnetName -ResourceGroupName RgName
$virtualNetwork.FlowTimeoutInMinutes = 4
$virtualNetwork | Set-AzVirtualNetwork
```

### Inbound flows logged from internet IPs to VMs without public IPs

VMs that don't have a public IP address associated with the NIC as an instance-level public IP, or that are part of a basic load balancer back-end pool, use [default SNAT](../load-balancer/load-balancer-outbound-connections.md). Azure assigns an IP address to those VMs to facilitate outbound connectivity. As a result, you might see flow log entries for flows from internet IP addresses, if the flow is destined to a port in the range of ports that are assigned for SNAT.

Although Azure won't allow these flows to the VM, the attempt is logged and appears in the Network Watcher NSG flow log by design. We recommend that you explicitly block unwanted inbound internet traffic with an NSG, as the sketch after this paragraph illustrates.
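The following is a minimal PowerShell sketch of adding an explicit inbound deny rule. The rule name, priority, and NSG names are hypothetical, and the exact rule set you need depends on your environment:

```powershell
# Hypothetical names; assumes the Az.Network module and a signed-in session.
$nsg = Get-AzNetworkSecurityGroup -Name "myNsg" -ResourceGroupName "myResourceGroup"

# Deny all inbound traffic from the internet at a low priority (high number),
# so that more specific allow rules with lower numbers still take effect.
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name "DenyInternetInbound" `
    -Priority 4000 `
    -Direction Inbound `
    -Access Deny `
    -Protocol "*" `
    -SourceAddressPrefix Internet `
    -SourcePortRange "*" `
    -DestinationAddressPrefix "*" `
    -DestinationPortRange "*" |
    Set-AzNetworkSecurityGroup
```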
### NSG on an ExpressRoute gateway subnet

We don't recommend that you log flows on an Azure ExpressRoute gateway subnet, because traffic can bypass that type of gateway (for example, with [FastPath](../expressroute/about-fastpath.md)). If an NSG is linked to an ExpressRoute gateway subnet and NSG flow logs are enabled, outbound flows to virtual machines might not be captured. Such flows must be captured at the subnet or NIC of the VM.

### Traffic across a private link

To log traffic while accessing platform as a service (PaaS) resources via a private link, enable NSG flow logs on a subnet NSG that contains the private link. Because of platform limitations, only the traffic at the source VMs can be captured. Traffic at the destination PaaS resource can't be captured.

### Support for the Application Gateway V2 subnet NSG

NSG flow logs on the Azure Application Gateway V2 subnet NSG are currently [not supported](../application-gateway/application-gateway-faq.yml#are-nsg-flow-logs-supported-on-nsgs-associated-to-application-gateway-v2-subnet). NSG flow logs on Application Gateway V1 are supported.

### Incompatible services

Because of current platform limitations, a few Azure services don't support NSG flow logs. The current list of incompatible services is:

- [Azure Container Instances](https://azure.microsoft.com/services/container-instances/)
- [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)
- [Azure Functions](https://azure.microsoft.com/services/functions/)

> [!NOTE]
> App services deployed under an Azure App Service plan don't support NSG flow logs. [Learn more](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
## Best practices

- **Enable flow logs on critical subnets**: Flow logs should be enabled on all critical subnets in your subscription as an auditing and security best practice.

- **Enable flow logs on all NSGs attached to a resource**: Flow logs in Azure are configured on the NSG resource. A flow is associated with only one NSG rule. In scenarios where you use multiple NSGs, we recommend enabling NSG flow logs on all NSGs applied at the resource's subnet or network interface to ensure that all traffic is recorded. For more information, see [How network security groups filter network traffic](../virtual-network/network-security-group-how-it-works.md).

  Here are a few common scenarios:

  - **Multiple NICs at a VM**: If multiple NICs are attached to a virtual machine, you must enable flow logs on all of them.
  - **NSG at both the NIC and subnet levels**: If an NSG is configured at the NIC level and the subnet level, you must enable flow logs at both NSGs. The exact sequence of rule processing by NSGs at the NIC and subnet levels is platform dependent and varies from case to case, and traffic flows are logged against the NSG that's processed last. Because the platform state changes the processing order, you have to check both flow logs.
  - **Azure Kubernetes Service (AKS) cluster subnet**: AKS adds a default NSG at the cluster subnet. You must enable flow logs on this default NSG.

- **Storage provisioning**: Provision storage in line with the expected volume of flow logs.

- **Naming**: The NSG name can be up to 80 characters, and NSG rule names can be up to 65 characters. Names that exceed these limits might be truncated during logging.

## Troubleshooting common problems

### I couldn't enable NSG flow logs

If you get an "AuthorizationFailed" or "GatewayAuthenticationFailed" error, you might not have enabled the **Microsoft.Insights** resource provider on your subscription. [Follow the instructions](./network-watcher-nsg-flow-logging-portal.md#register-insights-provider) to enable it, or register the provider with PowerShell as in the sketch that follows.
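A minimal sketch of registering the provider, assuming the Az.Resources module and a signed-in session:

```powershell
# Register the Microsoft.Insights resource provider on the current subscription.
Register-AzResourceProvider -ProviderNamespace Microsoft.Insights

# Registration is asynchronous; check its state afterward.
Get-AzResourceProvider -ProviderNamespace Microsoft.Insights |
    Select-Object ProviderNamespace, RegistrationState
```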
### I enabled NSG flow logs but don't see data in my storage account

This problem might be related to:

- **Setup time**: NSG flow logs can take up to 5 minutes to appear in your storage account (if they're configured correctly). A *PT1H.json* file will appear. You can access that file as described in [this article](./network-watcher-nsg-flow-logging-portal.md#download-flow-log), or list the most recent log blobs with PowerShell, as in the sketch after this list.

- **Lack of traffic on your NSGs**: Sometimes you won't see logs because your VMs aren't active, or because upstream filters at Application Gateway or other devices are blocking traffic to your NSGs.
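Here's a hedged PowerShell sketch that lists recent flow log blobs. The storage account and resource group names are hypothetical, and it assumes the Az.Storage module in a signed-in session:

```powershell
# Hypothetical names; get a storage context for the flow log account.
$ctx = (Get-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount").Context

# NSG flow logs are written to this container; show the five newest blobs.
Get-AzStorageBlob -Container "insights-logs-networksecuritygroupflowevent" -Context $ctx |
    Sort-Object -Property LastModified -Descending |
    Select-Object -First 5 -Property Name, LastModified
```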
### I want to automate NSG flow logs

Support for automation via Azure Resource Manager templates (ARM templates) is now available for NSG flow logs. For more information, read the [feature announcement](https://azure.microsoft.com/updates/arm-template-support-for-nsg-flow-logs/) and the [quickstart for configuring NSG flow logs by using an ARM template](quickstart-configure-network-security-group-flow-logs-from-arm-template.md).

## FAQ

### What do NSG flow logs do?

You can combine and manage Azure network resources through [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md). NSG flow logs enable you to log 5-tuple flow information about all traffic through your NSGs. The raw flow logs are written to an Azure Storage account. From there, you can further process, analyze, query, or export them as needed.

### Does using flow logs affect my network latency or performance?

Flow log data is collected outside the path of your network traffic, so it doesn't affect network throughput or latency. You can create or delete flow logs without any risk of impact to network performance.

### How do I use NSG flow logs with a storage account behind a firewall?

To use a storage account behind a firewall, you have to provide an exception for trusted Microsoft services to access the storage account:

1. Go to the storage account by typing the account's name in the global search on the portal or from the [Storage accounts page](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Storage%2FStorageAccounts).
1. In the **Networking** section, select **Firewalls and virtual networks** at the top of the page. Then make sure that the following items are configured:

   - For **Public network access**, select **Enabled from selected virtual networks and IP addresses**.
   - For **Firewall**, select **Add your client IP address**.

     > [!Note]
     > A client IP address is provided here by default. Use `ipconfig` to verify that this IP address matches the machine that you're using to access the storage account. If the client IP address doesn't match your machine, you might get an "Unauthorized" error when you're trying to access the storage account to read NSG flow logs.

   - For **Exceptions**, select **Allow Azure services on the trusted services list to access this storage account**.

1. Find your target NSG on the [overview page for NSG flow logs](https://portal.azure.com/#blade/Microsoft_Azure_Network/NetworkWatcherMenuBlade/flowLogs), and then enable NSG flow logs by using the previously configured storage account.

Check the storage logs after a few minutes. You should see an updated time stamp or a new JSON file created.
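If you'd rather script these firewall settings than use the portal, here's a minimal PowerShell sketch, with hypothetical resource names and an assumed Az.Storage module; review the resulting rule set before relying on it, because `-DefaultAction Deny` blocks all other public access:

```powershell
# Hypothetical names; assumes a signed-in Az session.
# Allow services on the trusted services list through the storage firewall
# while denying other public access.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -Bypass AzureServices `
    -DefaultAction Deny

# Add your client IP address (placeholder shown) so you can read the logs yourself.
Add-AzStorageAccountNetworkRule -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -IPAddressOrRange "203.0.113.10"
```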
### How do I use NSG flow logs with a storage account behind a service endpoint?

NSG flow logs are compatible with service endpoints without requiring any extra configuration. For more information, see the [tutorial on enabling service endpoints in your virtual network](../virtual-network/tutorial-restrict-network-access-to-resources.md#enable-a-service-endpoint).

### What's the difference between versions 1 and 2 of flow logs?

Version 2 of flow logs introduces the concept of *flow state* and stores information about transmitted bytes and packets. [Read more](#log-format).

## Pricing

NSG flow logs are charged per gigabyte of logs collected and come with a free tier of 5 GB/month per subscription. For the current pricing in your region, see the [Network Watcher pricing page](https://azure.microsoft.com/pricing/details/network-watcher/).

Storage of logs is charged separately. For relevant prices, see the [pricing page for Azure Storage block blobs](https://azure.microsoft.com/pricing/details/storage/blobs/). |
networking | Nva Accelerated Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/nva-accelerated-connections.md | Title: Accelerated connections network performance optimization and NVAs. Description: Learn how Accelerated Connections improves Network Virtual Appliance (NVA) performance. |
notification-hubs | Export Modify Registrations Bulk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/export-modify-registrations-bulk.md | This section assumes you have the following entities: - A provisioned notification hub. - A |