Updates from: 01/28/2023 02:17:40
Service | Microsoft Docs article | Related commit history on GitHub | Change details
active-directory-b2c Enable Authentication React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app.md
The sample code is made up of the following components. Add these components fro
> [!IMPORTANT]
> If the App component file name is `App.js`, change it to `App.jsx`.
-- [src/pages/Hello.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/pages/Hello.jsx) - Demonstrate how to call a protected resource with OAuth2 bearer token.
+- [src/pages/Hello.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/6-AdvancedScenarios/1-call-api-obo/SPA/src/pages/Hello.jsx) - Demonstrates how to call a protected resource with an OAuth2 bearer token.
- It uses the [useMsal](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/hooks.md) hook that returns the PublicClientApplication instance.
- With the PublicClientApplication instance, it acquires an access token to call the REST API.
- Invokes the [callApiWithToken](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/fetch.js) function to fetch the data from the REST API and renders the result using the **DataDisplay** component.
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md
To get started, you'll need:
- An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
>[!NOTE]
->To integrate Nevis into your sign-up policy flow, configure the Azure AD B2C environment to use custom policies. </br>See, [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](/tutorial-create-user-flows.md?pivots=b2c-custom-policy).
+>To integrate Nevis into your sign-up policy flow, configure the Azure AD B2C environment to use custom policies. <br>See [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](/azure/active-directory-b2c/tutorial-create-user-flows).
## Scenario description
The diagram shows the implementation.
2. In [/samples/Nevis/policy/nevis.html](https://github.com/azure-ad-b2c/partner-integrations/blob/master/samples/Nevis/policy/nevis.html), open the nevis.html file.
3. Replace the **authentication_cloud_url** with the Nevis Admin console URL `https://<instance_id>.mauth.nevis.cloud`.
4. Select **Save**.
-5. [Create an Azure Blob storage account](/customize-ui-with-html.md#2-create-an-azure-blob-storage-account).
+5. [Create an Azure Blob storage account](./customize-ui-with-html.md#2-create-an-azure-blob-storage-account).
6. Upload the nevis.html file to your Azure blob storage.
-7. [Configure CORS](/customize-ui-with-html.md#3-configure-cors).
+7. [Configure CORS](./customize-ui-with-html.md#3-configure-cors).
8. Enable cross-origin resource sharing (CORS) for the file.
9. In the list, select the **nevis.html** file.
10. In the **Overview** tab, next to the **URL**, select the **copy link** icon.
active-directory-b2c Userjourneys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userjourneys.md
Previously updated : 08/31/2021 Last updated : 01/27/2023
Preconditions can check multiple preconditions. The following example checks whe
## Claims provider selection
-Identity provider selection lets users select an action from a list of options. The identity provider selection consists of a pair of two orchestration steps:
+Claims provider selection lets users select an action from a list of options. The claims provider selection consists of a pair of orchestration steps:
1. **Buttons** - It starts with a step of type `ClaimsProviderSelection`, or `CombinedSignInAndSignUp`, that contains a list of options a user can choose from. The order of the options inside the `ClaimsProviderSelections` element controls the order of the buttons presented to the user.
2. **Actions** - Followed by a step of type `ClaimsExchange`. The ClaimsExchange contains a list of actions. The action is a reference to a technical profile, such as [OAuth2](oauth2-technical-profile.md), [OpenID Connect](openid-connect-technical-profile.md), [claims transformation](claims-transformation-technical-profile.md), or [self-asserted](self-asserted-technical-profile.md). When a user clicks on one of the buttons, the corresponding action is executed.
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
Now that you've prepared your environment and installed a connector, you're read
| Field | Description |
| :--- | :--- |
| **Name** | The name of the application that will appear on My Apps and in the Azure portal. |
+ | **Maintenance Mode** | Select if you would like to enable maintenance mode and temporarily disable access for all users to the application. |
| **Internal URL** | The URL for accessing the application from inside your private network. You can provide a specific path on the backend server to publish, while the rest of the server is unpublished. In this way, you can publish different sites on the same server as different apps, and give each one its own name and access rules.<br><br>If you publish a path, make sure that it includes all the necessary images, scripts, and style sheets for your application. For example, if your app is at `https://yourapp/app` and uses images located at `https://yourapp/media`, then you should publish `https://yourapp/` as the path. This internal URL doesn't have to be the landing page your users see. For more information, see [Set a custom home page for published apps](application-proxy-configure-custom-home-page.md). |
| **External URL** | The address for users to access the app from outside your network. If you don't want to use the default Application Proxy domain, read about [custom domains in Azure AD Application Proxy](./application-proxy-configure-custom-domain.md). |
| **Pre Authentication** | How Application Proxy verifies users before giving them access to your application.<br><br>**Azure Active Directory** - Application Proxy redirects users to sign in with Azure AD, which authenticates their permissions for the directory and application. We recommend keeping this option as the default so that you can take advantage of Azure AD security features like Conditional Access and Multi-Factor Authentication. **Azure Active Directory** is required for monitoring the application with Microsoft Defender for Cloud Apps.<br><br>**Passthrough** - Users don't have to authenticate against Azure AD to access the application. You can still set up authentication requirements on the backend. |
Now that you've prepared your environment and installed a connector, you're read
| Field | Description |
| :--- | :--- |
| **Backend Application Timeout** | Set this value to **Long** only if your application is slow to authenticate and connect. At default, the backend application timeout has a length of 85 seconds. When set to long, the backend timeout is increased to 180 seconds. |
- | **Use HTTP-Only Cookie** | Set this value to **Yes** to have Application Proxy cookies include the HTTPOnly flag in the HTTP response header. If using Remote Desktop Services, set this value to **No**. |
- | **Use Secure Cookie**| Set this value to **Yes** to transmit cookies over a secure channel such as an encrypted HTTPS request.
- | **Use Persistent Cookie**| Keep this value set to **No**. Only use this setting for applications that can't share cookies between processes. For more information about cookie settings, see [Cookie settings for accessing on-premises applications in Azure Active Directory](./application-proxy-configure-cookie-settings.md).
- | **Translate URLs in Headers** | Keep this value as **Yes** unless your application required the original host header in the authentication request. |
- | **Translate URLs in Application Body** | Keep this value as **No** unless you have hardcoded HTML links to other on-premises applications and don't use custom domains. For more information, see [Link translation with Application Proxy](./application-proxy-configure-hard-coded-link-translation.md).<br><br>Set this value to **Yes** if you plan to monitor this application with Microsoft Defender for Cloud Apps. For more information, see [Configure real-time application access monitoring with Microsoft Defender for Cloud Apps and Azure Active Directory](./application-proxy-integrate-with-microsoft-cloud-application-security.md). |
+ | **Use HTTP-Only Cookie** | Select to have Application Proxy cookies include the HTTPOnly flag in the HTTP response header. If using Remote Desktop Services, keep this unselected. |
+ | **Use Persistent Cookie** | Keep this unselected. Only use this setting for applications that can't share cookies between processes. For more information about cookie settings, see [Cookie settings for accessing on-premises applications in Azure Active Directory](./application-proxy-configure-cookie-settings.md). |
+ | **Translate URLs in Headers** | Keep this selected unless your application requires the original host header in the authentication request. |
+ | **Translate URLs in Application Body** | Keep this unselected unless you have hardcoded HTML links to other on-premises applications and don't use custom domains. For more information, see [Link translation with Application Proxy](./application-proxy-configure-hard-coded-link-translation.md).<br><br>Select if you plan to monitor this application with Microsoft Defender for Cloud Apps. For more information, see [Configure real-time application access monitoring with Microsoft Defender for Cloud Apps and Azure Active Directory](./application-proxy-integrate-with-microsoft-cloud-application-security.md). |
+ | **Validate Backend SSL Certificate** | Select to enable backend SSL certificate validation for the application. |
7. Select **Add**.
active-directory Concept Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/concept-attributes.md
To view the schema and verify it, follow these steps.
1. Go to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
1. Sign in with your global administrator account.
1. On the left, select **modify permissions** and ensure that **Directory.ReadWrite.All** is *Consented*.
-1. Run the query `https://graph.microsoft.com/beta/serviceprincipals/?$filter=startswith(DisplayName, '{sync config name}')`. This query returns a filtered list of service principals. This can also be acquire via the App Registration node under Azure Active Directory.
+1. Run the query `https://graph.microsoft.com/beta/serviceprincipals/?$filter=startswith(DisplayName, '{sync config name}')`. This query returns a filtered list of service principals. This can also be acquired via the App Registration node under Azure Active Directory.
1. Locate `"appDisplayName": "Active Directory to Azure Active Directory Provisioning"` and note the value for `"id"`. ``` "value": [
To view the schema and verify it, follow these steps.
1. Now run the query `https://graph.microsoft.com/beta/serviceprincipals/{Service Principal Id}/synchronization/jobs/{AD2AAD Provisioning id}/schema`.
- Example: https://graph.microsoft.com/beta/serviceprincipals/653c0018-51f4-4736-a3a3-94da5dcb6862/synchronization/jobs/AD2AADProvisioning.e9287a7367e444c88dc67a531c36d8ec/schema
+ Replace `{Service Principal Id}` and `{AD2AAD Provisioning id}` with your values.
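If you'd rather script these two calls than run them in Graph Explorer, here's a minimal C# sketch. It assumes a valid Graph access token in a `GRAPH_TOKEN` environment variable and uses `contoso` as a stand-in for your sync configuration name; both are illustrative assumptions, not part of the documented steps.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class SchemaQuery
{
    static async Task Main()
    {
        // Assumption: GRAPH_TOKEN holds a Microsoft Graph access token
        // with Directory.ReadWrite.All consented, as in the steps above.
        string accessToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN");

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Step 1: find the provisioning service principal by display name;
        // note the "id" value in the response. 'contoso' is a placeholder.
        string filter = Uri.EscapeDataString("startswith(DisplayName, 'contoso')");
        Console.WriteLine(await http.GetStringAsync(
            $"https://graph.microsoft.com/beta/serviceprincipals/?$filter={filter}"));

        // Step 2: request the synchronization schema, substituting the ids
        // found above for these two placeholders.
        string servicePrincipalId = "{Service Principal Id}";
        string jobId = "{AD2AAD Provisioning id}";
        Console.WriteLine(await http.GetStringAsync(
            $"https://graph.microsoft.com/beta/serviceprincipals/{servicePrincipalId}" +
            $"/synchronization/jobs/{jobId}/schema"));
    }
}
```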
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
The **Configure** toggle when set to **Yes** applies to checked items, when set
- Administrators can apply policy only to supported platforms (such as iOS, Android, and Windows) through the Conditional Access Microsoft Graph API.
- Other clients - This option includes clients that use basic/legacy authentication protocols that don't support modern authentication.
- - Authenticated SMTP - Used by POP and IMAP client's to send email messages.
+ - SMTP - Used by POP and IMAP clients to send email messages.
- Autodiscover - Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.
- Exchange Online PowerShell - Used to connect to Exchange Online with remote PowerShell. If you block Basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell Module to connect. For instructions, see [Connect to Exchange Online PowerShell using multifactor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).
- Exchange Web Services (EWS) - A programming interface that's used by Outlook, Outlook for Mac, and third-party apps.
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Last updated 12/28/2022
# Microsoft identity platform access tokens
active-directory Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/accounts-overview.md
Title: Microsoft identity platform accounts & tenant profiles on Android description: An overview of the Microsoft identity platform accounts for Android
ms.devlang: java Last updated 09/14/2019
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md
Title: Microsoft identity platform certificate credentials description: This article discusses the registration and use of certificate credentials for application authentication.
Last updated 02/09/2022
active-directory Active Directory Saml Protocol Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-protocol-reference.md
Title: How the Microsoft identity platform uses the SAML protocol description: This article provides an overview of the single sign-on and Single Sign-Out SAML profiles in Azure Active Directory.
Last updated 11/4/2022
# How the Microsoft identity platform uses the SAML protocol
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
Title: "How to use Continuous Access Evaluation enabled APIs in your applications" description: How to increase app security and resilience by adding support for Continuous Access Evaluation, enabling long-lived access tokens that can be revoked based on critical events and policy evaluation. -+++ Last updated 07/09/2021---++ # Customer intent: As an application developer, I want to learn how to use Continuous Access Evaluation for building resiliency through long-lived, refreshable tokens that can be revoked based on critical events and policy evaluation. # How to use Continuous Access Evaluation enabled APIs in your applications
active-directory Desktop App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-app-quickstart.md
Previously updated : 01/14/2022 Last updated : 01/27/2023 zone_pivot_groups: desktop-app-quickstart
active-directory Developer Guide Conditional Access Authentication Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md
Title: Developer guidance for Azure AD Conditional Access authentication context
description: Developer guidance and scenarios for Azure AD Conditional Access authentication context
Last updated 11/15/2022
active-directory Howto Add App Roles In Azure Ad Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md
Title: Add app roles and get them from a token description: Learn how to add app roles to an application registered in Azure Active Directory. Assign users and groups to these roles, and receive them in the 'roles' claim in the token.
Last updated 09/27/2022
active-directory Howto Restrict Your App To A Set Of Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
Title: Restrict Azure AD app to a set of users description: Learn how to restrict access to your apps registered in Azure AD to a selected set of users.
Last updated 12/19/2022
#Customer intent: As a tenant administrator, I want to restrict an application that I have registered in Azure AD to a select set of users available in my Azure AD tenant
active-directory Msal Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-configuration.md
Title: Android MSAL configuration file description: An overview of the Android Microsoft Authentication Library (MSAL) configuration file, which represents an application's configuration in Azure Active Directory.
Last updated 09/12/2019
active-directory Msal Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md
Title: Migrate to the Microsoft Authentication Library (MSAL) description: Learn about the differences between the Microsoft Authentication Library (MSAL) and Azure AD Authentication Library (ADAL) and how to migrate to MSAL.
Last updated 12/29/2022
# Customer intent: As an application developer, I want to learn about MSAL so I can migrate my ADAL applications to MSAL.
active-directory Msal Net Acquire Token Silently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-acquire-token-silently.md
# Get a token from the token cache using MSAL.NET
-When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should attempt to fetch it from the cache first.
+When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should first attempt to fetch it from the cache.
-You can monitor the source of the tokens by inspecting the `AuthenticationResult.AuthenticationResultMetadata.TokenSource` property.
+You can monitor the source of the tokens by inspecting the [`AuthenticationResult.AuthenticationResultMetadata.TokenSource`](/dotnet/api/microsoft.identity.client.authenticationresultmetadata.tokensource?view=msal-dotnet-latest&preserve-view=true) property.
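For example, a minimal sketch (assuming `app`, `scopes`, and `account` are already defined):

```csharp
// Check whether the token was served from the cache or freshly
// fetched from Azure AD.
AuthenticationResult result = await app
    .AcquireTokenSilent(scopes, account)
    .ExecuteAsync();

if (result.AuthenticationResultMetadata.TokenSource == TokenSource.Cache)
{
    Console.WriteLine("Token came from the MSAL token cache.");
}
```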
## Websites and web APIs
Web APIs on ASP.NET Core should use Microsoft.Identity.Web. Web APIs on ASP.NET
## Web service / Daemon apps
-Applications that request tokens for an app identity, with no user involved, by calling `AcquiretTokenForClient` can either rely on MSAL's internal caching, define their own memory token caching or distributed token caching. For instructions and more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet).
+Applications that request tokens for an app identity, with no user involved, by calling `AcquireTokenForClient` can rely on MSAL's internal caching, define their own in-memory token caching, or use distributed token caching. For instructions and more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet).
-Since no user is involved, there's no reason to call `AcquireTokenSilent`. `AcquireTokenForClient` will look in the cache on its own as there's no API to clear the cache. Cache size is proportional with the number of tenants and resources you need tokens for. Cache size can be managed by setting eviction policies on the underlying cache store, such as MemoryCache, Redis etc.
+Since no user is involved, there's no reason to call `AcquireTokenSilent`. `AcquireTokenForClient` will look in the cache on its own as there's no API to clear the cache. Cache size is proportional to the number of tenants and resources you need tokens for. Cache size can be managed by setting eviction policies on the underlying cache store, such as MemoryCache, Redis, etc.
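A minimal sketch of a daemon app that relies on MSAL's internal app token cache (the client ID, secret, and tenant values are placeholders):

```csharp
using Microsoft.Identity.Client;

IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
    .Create("{client-id}")                 // placeholder
    .WithClientSecret("{client-secret}")   // placeholder
    .WithAuthority("https://login.microsoftonline.com/{tenant-id}")
    .Build();

// No AcquireTokenSilent needed: AcquireTokenForClient consults the
// app token cache on its own before contacting Azure AD.
AuthenticationResult result = await app
    .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
    .ExecuteAsync();
```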
## Desktop, command-line, and mobile applications
-Desktop, command-line, and mobile applications should first call the AcquireTokenSilent method to verify if an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
+Desktop, command-line, and mobile applications should first call the `AcquireTokenSilent` method to verify if an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
-For authentication flows that require a user interaction, MSAL caches the access, refresh, and ID tokens, and the `IAccount` object, which represents information about a single account. Learn more about [IAccount](/dotnet/api/microsoft.identity.client.iaccount?view=azure-dotnet&preserve-view=true). For application flows, such as [client credentials](msal-authentication-flows.md#client-credentials), only access tokens are cached, because the `IAccount` object and ID token require a user, and the refresh token isn't applicable.
+For authentication flows that require a user interaction, MSAL caches the access, refresh, and ID tokens, and the `IAccount` object, which represents information about a single account. Learn more about [IAccount](/dotnet/api/microsoft.identity.client.iaccount?view=msal-dotnet-latest&preserve-view=true). For application flows, such as [client credentials](msal-authentication-flows.md#client-credentials), only access tokens are cached, because the `IAccount` object and ID token require a user, and the refresh token isn't applicable.
The recommended pattern is to call the `AcquireTokenSilent` method first. If `AcquireTokenSilent` fails, then acquire a token using other methods.
if (result != null)
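Spelled out, the pattern looks roughly like this (a sketch for a public client app; `app` and `scopes` are assumed to exist, and `System.Linq` is in scope):

```csharp
AuthenticationResult result;
var accounts = await app.GetAccountsAsync();

try
{
    // Try the cache (and its refresh token) first.
    result = await app
        .AcquireTokenSilent(scopes, accounts.FirstOrDefault())
        .ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // No usable cached token; fall back to interactive sign-in.
    result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
}
```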
### Clearing the cache
-In public client applications, clearing the cache is achieved by removing the accounts from the cache. This doesn't remove the session cookie, which is in the browser.
+In public client applications, removing accounts from the cache will clear it. However, this doesn't remove the session cookie, which is in the browser.
```csharp
var accounts = (await app.GetAccountsAsync()).ToList();
```
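The removal loop that typically follows is sketched below (same `app` instance; `System.Linq` assumed):

```csharp
// Removing each account clears its tokens from the MSAL cache;
// the browser session cookie survives.
while (accounts.Any())
{
    await app.RemoveAsync(accounts.First());
    accounts = (await app.GetAccountsAsync()).ToList();
}
```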
active-directory Msal Net Client Assertions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-client-assertions.md
Title: Client assertions (MSAL.NET) description: Learn about signed client assertions support for confidential client applications in the Microsoft Authentication Library for .NET (MSAL.NET).
Last updated 03/18/2021
#Customer intent: As an application developer, I want to learn how to use client assertions to prove the identity of my confidential client application
active-directory Msal Net Differences Adal Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-differences-adal-net.md
Title: Differences between ADAL.NET and MSAL.NET apps description: Learn about the differences between the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET).
Last updated 06/09/2021
#Customer intent: As an application developer, I want to learn about the differences between the ADAL.NET and MSAL.NET libraries so I can migrate my applications to MSAL.NET.
active-directory Msal Net Migration Confidential Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-confidential-client.md
Title: Migrate confidential client applications to MSAL.NET description: Learn how to migrate a confidential client application from Azure Active Directory Authentication Library for .NET to Microsoft Authentication Library for .NET.
Last updated 06/08/2021
#Customer intent: As an application developer, I want to migrate my confidential client app from ADAL.NET to MSAL.NET.
active-directory Msal Net Migration Ios Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-ios-broker.md
Title: Migrate Xamarin apps using brokers to MSAL.NET description: Learn how to migrate Xamarin iOS apps that use Microsoft Authenticator from ADAL.NET to MSAL.NET.
Last updated 09/08/2019
#Customer intent: As an application developer, I want to learn how to migrate my iOS applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET.
active-directory Msal Net Migration Public Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration-public-client.md
Title: Migrate public client applications to MSAL.NET description: Learn how to migrate a public client application from Azure Active Directory Authentication Library for .NET to Microsoft Authentication Library for .NET.
Last updated 08/31/2021
#Customer intent: As an application developer, I want to migrate my public client app from ADAL.NET to MSAL.NET.
active-directory Msal Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-migration.md
Title: Migrating to MSAL.NET and Microsoft.Identity.Web description: Learn why and how to migrate from Azure AD Authentication Library for .NET (ADAL.NET) to Microsoft Authentication Library for .NET (MSAL.NET) or Microsoft.Identity.Web
Last updated 11/25/2022
#Customer intent: As an application developer, I want to learn why and how to migrate from ADAL.NET to MSAL.NET or Microsoft.Identity.Web libraries.
active-directory Msal Net Provide Httpclient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-provide-httpclient.md
Title: Provide an HttpClient & proxy (MSAL.NET) description: Learn about providing your own HttpClient and proxy to connect to Azure AD using the Microsoft Authentication Library for .NET (MSAL.NET).
Last updated 04/23/2019
#Customer intent: As an application developer, I want to learn about providing my own HttpClient so I can have fine-grained control of the proxy.
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
Title: Token cache serialization (MSAL.NET) description: Learn about serialization and custom serialization of the token cache using the Microsoft Authentication Library for .NET (MSAL.NET).
Last updated 11/09/2021
#Customer intent: As an application developer, I want to learn about token cache serialization so I can have fine-grained control of the proxy.
active-directory Msal Net Use Brokers With Xamarin Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-use-brokers-with-xamarin-apps.md
Title: Use brokers with Xamarin iOS & Android description: Learn how to set up Xamarin iOS applications that can use the Microsoft Authenticator and the Microsoft Authentication Library for .NET (MSAL.NET). Also learn how to migrate from Azure AD Authentication Library for .NET (ADAL.NET) to the Microsoft Authentication Library for .NET (MSAL.NET).
Last updated 09/08/2019
#Customer intent: As an application developer, I want to learn how to use brokers with my Xamarin iOS or Android application and MSAL.NET.
active-directory Msal Net User Gets Consent For Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-user-gets-consent-for-multiple-resources.md
Last updated 04/30/2019
#Customer intent: As an application developer, I want to learn how to specify additional scopes so I can get pre-consent for several resources.
active-directory Msal Python Adfs Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-python-adfs-support.md
Title: Azure AD FS support (MSAL Python) description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for Python
Last updated 11/23/2019
#Customer intent: As an application developer, I want to learn about AD FS support in MSAL for Python so I can decide if this platform meets my application development needs and requirements.
active-directory Permissions Consent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md
As an application developer, you must identify how your application will access
In this access scenario, a user has signed into a client application. The client application accesses the resource on behalf of the user. Delegated access requires delegated permissions. Both the client and the user must be authorized separately to make the request. For more information about the delegated access scenario, see [delegated access scenario](delegated-access-primer.md).
-For the client app, the correct delegated permissions must be granted. Delegated permissions can also be referred to as scopes. Scopes are permissions for a given resource that represent what a client application can access on behalf of the user.For more information about scopes, see [scopes and permissions](v2-permissions-and-consent.md#scopes-and-permissions).
+For the client app, the correct delegated permissions must be granted. Delegated permissions can also be referred to as scopes. Scopes are permissions for a given resource that represent what a client application can access on behalf of the user. For more information about scopes, see [scopes and permissions](v2-permissions-and-consent.md#scopes-and-permissions).
For the user, the authorization relies on the privileges that the user has been granted for them to access the resource. For example, the user could be authorized to access directory resources by [Azure Active Directory (Azure AD) role-based access control (RBAC)](../roles/custom-overview.md) or to access mail and calendar resources by Exchange Online RBAC. For more information on RBAC for applications, see [RBAC for applications](custom-rbac-for-developers.md).
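As a concrete illustration, a client that requests the Microsoft Graph `User.Read` scope is asking only to read the signed-in user's profile on that user's behalf. A minimal sketch, assuming `app` is an MSAL public client application:

```csharp
// Delegated permission (scope): read the signed-in user's profile,
// and only on that user's behalf.
AuthenticationResult result = await app
    .AcquireTokenInteractive(new[] { "https://graph.microsoft.com/User.Read" })
    .ExecuteAsync();
```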
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
Title: "Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform" description: In this quickstart, you download and modify a code sample that demonstrates how to protect an ASP.NET Core web API by using the Microsoft identity platform for authorization. -+
Last updated 12/09/2022 -++ #Customer intent: As an application developer, I want to know how to write an ASP.NET Core web API that uses the Microsoft identity platform to authorize API requests from clients.
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
Title: "Quickstart: ASP.NET Core web app that signs in users and calls Microsoft Graph" description: In this quickstart, you learn how an app uses Microsoft.Identity.Web to implement Microsoft sign-in in an ASP.NET Core web app using OpenID Connect and calls Microsoft Graph. -+
Last updated 11/22/2021 -++ #Customer intent: As an application developer, I want to download and run a demo ASP.NET Core web app that can sign in users with personal Microsoft accounts (MSA) and work/school accounts from any Azure Active Directory instance, then access their data in Microsoft Graph on their behalf.
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
Title: "Quickstart: Add sign-in with Microsoft Identity to an ASP.NET Core web app" description: In this quickstart, you learn how an app implements Microsoft sign-in on an ASP.NET Core web app by using OpenID Connect -+
Last updated 11/22/2021 -++ #Customer intent: As an application developer, I want to know how to write an ASP.NET Core web app that can sign in personal accounts, as well as work and school accounts, from any Azure Active Directory instance.
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
Title: "Quickstart: ASP.NET web app that signs in users" description: Download and run a code sample that shows how an ASP.NET web app can sign in Azure AD users. -+
Last updated 11/22/2021 -++ #Customer intent: As an application developer, I want to see a sample ASP.NET web app that can sign in Azure AD users.
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
Title: "Quickstart: Call an ASP.NET web API that is protected by the Microsoft identity platform" description: In this quickstart, learn how to call an ASP.NET web API that's protected by the Microsoft identity platform from a Windows Desktop (WPF) application. -+
Last updated 01/11/2022 -++ #Customer intent: As an application developer, I want to know how to set up OpenId Connect authentication in a web application that's built by using Node.js with Express.
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
Title: "Quickstart: Get token & call Microsoft Graph in a console app" description: In this quickstart, you learn how a .NET Core sample app can use the client credentials flow to get a token and call Microsoft Graph. -+
Last updated 01/10/2022 --++ #Customer intent: As an application developer, I want to learn how my .NET Core app can get an access token and call an API that's protected by the Microsoft identity platform by using the client credentials flow.
active-directory Quickstart V2 Nodejs Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-webapp.md
Title: "Quickstart: Add user sign-in to a Node.js web app" description: In this quickstart, you learn how to implement authentication in a Node.js web application using OpenID Connect. -+
Last updated 11/22/2021 -++ #Customer intent: As an application developer, I want to know how to set up OpenID Connect authentication in a web application built using Node.js with Express.
active-directory Quickstart V2 Python Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-daemon.md
Title: "Quickstart: Call Microsoft Graph from a Python daemon" description: In this quickstart, you learn how a Python process can get an access token and call an API protected by Microsoft identity platform, using the app's own identity -+
Last updated 01/10/2022 -++ #Customer intent: As an application developer, I want to learn how my Python app can get an access token and call an API that's protected by the Microsoft identity platform using client credentials flow.
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-uwp.md
Title: "Quickstart: Sign in users and call Microsoft Graph in a Universal Windows Platform app" description: In this quickstart, learn how a Universal Windows Platform (UWP) application can get an access token and call an API protected by Microsoft identity platform. -+
Last updated 01/14/2022 -++ #Customer intent: As an application developer, I want to learn how my Universal Windows Platform (XAML) application can get an access token and call an API that's protected by the Microsoft identity platform.
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-windows-desktop.md
Title: "Quickstart: Sign in users and call Microsoft Graph in a Windows desktop application" description: In this quickstart, learn how a Windows Presentation Foundation (WPF) app can get an access token and call an API protected by the Microsoft identity platform. -+ Last updated 01/14/2022-++ #Customer intent: As an application developer, I want to learn how my Windows Presentation Foundation (WPF) application can get an access token and call an API that's protected by the Microsoft identity platform.
active-directory Reference App Multi Instancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-multi-instancing.md
Last updated 01/06/2023
# Configure SAML app multi-instancing for an application in Azure Active Directory
active-directory Reference Saml Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-saml-tokens.md
Last updated 01/19/2023
# SAML token claims reference
active-directory Reference Third Party Cookies Spas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-third-party-cookies-spas.md
Title: How to handle Intelligent Tracking Protection (ITP) in Safari description: Single-page app (SPA) authentication when third-party cookies are no longer allowed.
Last updated 03/14/2022
active-directory Scenario Daemon Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-acquire-token.md
Title: Acquire tokens to call a web API (daemon app) - The Microsoft identity platform description: Learn how to build a daemon app that calls web APIs (acquiring tokens)
Last updated 05/12/2022
#Customer intent: As an application developer, I want to know how to write a daemon app that can call web APIs by using the Microsoft identity platform.
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-configuration.md
Title: Configure daemon apps that call web APIs description: Learn how to configure the code for your daemon application that calls web APIs (app configuration)
Last updated 09/19/2020
# Customer intent: As an application developer, I want to know how to write a daemon app that can call web APIs by using the Microsoft identity platform.
active-directory Scenario Daemon App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-app-registration.md
Title: Register daemon apps that call web APIs description: Learn how to build a daemon app that calls web APIs - app registration
Last updated 12/01/2021
#Customer intent: As an application developer, I want to know how to write a daemon app that can call web APIs by using the Microsoft identity platform for developers.
active-directory Scenario Daemon Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-call-api.md
Title: Call a web API from a daemon app description: Learn how to build a daemon app that calls a web API.
Last updated 10/30/2019
#Customer intent: As an application developer, I want to know how to write a daemon app that can call web APIs by using the Microsoft identity platform.
active-directory Scenario Daemon Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-overview.md
Title: Build a daemon app that calls web APIs description: Learn how to build a daemon app that calls web APIs
Last updated 12/19/2022
#Customer intent: As an application developer, I want to know how to write a daemon app that can call web APIs by using the Microsoft identity platform.
active-directory Scenario Daemon Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-production.md
Title: Move a daemon app that calls web APIs to production description: Learn how to move a daemon app that calls web APIs to production
Last updated 10/30/2019
#Customer intent: As an application developer, I want to know how to write a daemon app that can call web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-app-configuration.md
Title: Configure desktop apps that call web APIs description: Learn how to configure the code of a desktop app that calls web APIs
Last updated 10/30/2019
#Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-app-registration.md
Title: Register desktop apps that call web APIs description: Learn how to build a desktop app that calls web APIs (app registration)
Last updated 09/09/2019
#Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-call-api.md
Title: Call web APIs from a desktop app description: Learn how to build a desktop app that calls web APIs
Last updated 10/30/2019
#Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-overview.md
Title: Build a desktop app that calls web APIs description: Learn how to build a desktop app that calls web APIs (overview)
Last updated 11/22/2021
#Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Desktop Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-production.md
Title: Move desktop app calling web APIs to production description: Learn how to move a desktop app that calls web APIs to production
Last updated 10/30/2019
#Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Protected Web Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
Title: Configure protected web API apps description: Learn how to build a protected web API and configure your application's code.
Last updated 12/09/2022
#Customer intent: As an application developer, I want to know how to write a protected web API using the Microsoft identity platform for developers.
You need to specify the `TenantId` only if you want to accept access tokens from
{ "AzureAd": { "Instance": "https://login.microsoftonline.com/",
- "ClientId": "Enter_the_Application_(client)_ID_here"
+ "ClientId": "Enter_the_Application_(client)_ID_here",
"TenantId": "common" }, "Logging": {
active-directory Scenario Protected Web Api App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-app-registration.md
Title: Protected web API app registration description: Learn how to build a protected web API and the information you need to register the app.
Last updated 01/27/2022
# Customer intent: As an application developer, I want to know how to write a protected web API using the Microsoft identity platform for developers.
active-directory Scenario Protected Web Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-overview.md
Title: Protected web API - overview description: Learn how to build a protected web API (overview).
Last updated 12/19/2022
#Customer intent: As an application developer, I want to know how to write a protected web API using the Microsoft identity platform for developers.
active-directory Scenario Protected Web Api Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-production.md
Title: Move a protected web API to production description: Learn how to build a protected web API (move to production).
Last updated 07/15/2020
#Customer intent: As an application developer, I want to know how to write a protected web API using the Microsoft identity platform for developers.
active-directory Scenario Protected Web Api Verification Scope App Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles.md
Title: Verify scopes and app roles protected web API description: Verify that the API is only called by applications on behalf of users who have the right scopes and by daemon apps that have the right application roles.
Last updated 05/12/2022
#Customer intent: As an application developer, I want to learn how to write a protected web API using the Microsoft identity platform for developers.
active-directory Scenario Token Exchange Saml Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-token-exchange-saml-oauth.md
Title: Microsoft identity platform token exchange scenario with SAML and OIDC/OAuth in Azure Active Directory description: Learn about common token exchange scenarios when working with SAML and OIDC/OAuth in Azure Active Directory.
Last updated 12/08/2020
active-directory Scenario Web Api Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-acquire-token.md
Title: Get a token for a web API that calls web APIs description: Learn how to build a web API that calls web APIs that require acquiring a token for the app.
Last updated 07/15/2020
#Customer intent: As an application developer, I want to know how to write a web API that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
Title: Configure a web API that calls web APIs description: Learn how to build a web API that calls web APIs (app's code configuration)
Last updated 08/12/2022
#Customer intent: As an application developer, I want to know how to write a web API that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web Api Call Api App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-registration.md
Title: Register a web API that calls web APIs description: Learn how to build a web API that calls downstream web APIs (app registration).
Last updated 05/07/2019
#Customer intent: As an application developer, I want to know how to write a web API that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web Api Call Api Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-call-api.md
Title: Web API that calls web APIs description: Learn how to build a web API that calls web APIs.
Last updated 09/26/2020
#Customer intent: As an application developer, I want to know how to write a web API that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web Api Call Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-overview.md
Title: Build a web API that calls web APIs description: Learn how to build a web API that calls downstream web APIs (overview).
Last updated 11/25/2022
#Customer intent: As an application developer, I want to know how to write a web API that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web Api Call Api Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-production.md
Title: Move web API calling web APIs to production description: Learn how to move a web API that calls web APIs to production.
Last updated 05/07/2019
#Customer intent: As an application developer, I want to know how to write a web API that calls web APIs using the Microsoft identity platform.
active-directory Scenario Web App Call Api Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md
Title: Get a token in a web app that calls web APIs description: Learn how to acquire a token for a web app that calls web APIs
Last updated 05/06/2022
#Customer intent: As an application developer, I want to know how to write a web app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
Title: Configure a web app that calls web APIs description: Learn how to configure the code of a web app that calls web APIs
Last updated 09/25/2020
#Customer intent: As an application developer, I want to know how to write a web app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web App Call Api App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-app-registration.md
Title: Register a web app that calls web APIs description: Learn how to register a web app that calls web APIs
Last updated 05/07/2019
#Customer intent: As an application developer, I want to know how to write a web app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web App Call Api Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-call-api.md
Title: Call a web API from a web app description: Learn how to build a web app that calls web APIs (calling a protected web API)
Last updated 09/25/2020
#Customer intent: As an application developer, I want to know how to write a web app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web App Call Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-overview.md
Title: Build a web app that authenticates users and calls web APIs description: Learn how to build a web app that authenticates users and calls web APIs (overview)
Last updated 11/4/2022
#Customer intent: As an application developer, I want to know how to write a web app that authenticates users and calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web App Call Api Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-production.md
Title: Move to production a web app that calls web APIs description: Learn how to move to production a web app that calls web APIs.
Last updated 05/07/2019
#Customer intent: As an application developer, I want to know how to write a web app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web App Call Api Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-sign-in.md
Title: Remove accounts from the token cache on sign-out description: Learn how to remove an account from the token cache on sign-out
Last updated 07/14/2019
#Customer intent: As an application developer, I want to know how to write a web app that calls web APIs by using the Microsoft identity platform.
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
The initialization code differences are platform dependent. For ASP.NET Core and
# [ASP.NET Core](#tab/aspnetcore)
-In ASP.NET Core web apps (and web APIs), the application is protected because you have a `Authorize` attribute on the controllers or the controller actions. This attribute checks that the user is authenticated. Prior to the release of .NET 6, the code initializaation wis in the *Startup.cs* file. New ASP.NET Core projects with .NET 6 no longer contain a *Startup.cs* file. Taking its place is the *Program.cs* file. The rest of this tutorial pertains to .NET 5 or lower.
+In ASP.NET Core web apps (and web APIs), the application is protected because you have an `Authorize` attribute on the controllers or the controller actions. This attribute checks that the user is authenticated. Prior to the release of .NET 6, the code initialization was in the *Startup.cs* file. New ASP.NET Core projects with .NET 6 no longer contain a *Startup.cs* file. Taking its place is the *Program.cs* file. The rest of this tutorial pertains to .NET 5 or lower.
> [!NOTE] > If you want to start directly with the new ASP.NET Core templates for Microsoft identity platform, that leverage Microsoft.Identity.Web, you can download a preview NuGet package containing project templates for .NET 5.0. Then, once installed, you can directly instantiate ASP.NET Core web applications (MVC or Blazor). See [Microsoft.Identity.Web web app project templates](https://aka.ms/ms-id-web/webapp-project-templates) for details. This is the simplest approach as it will do all the steps below for you.
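For reference, protecting a controller with the attribute looks like this (a minimal sketch):

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize] // every action on this controller requires a signed-in user
public class HomeController : Controller
{
    public IActionResult Index() => View();

    [AllowAnonymous] // opt a single action out of the requirement
    public IActionResult Privacy() => View();
}
```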
active-directory Single Multi Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-multi-account.md
Title: Single and multiple account public client apps description: An overview of single and multiple account public client apps.
Last updated 09/26/2019
active-directory Single Sign On Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-saml-protocol.md
Title: Azure single sign-on SAML protocol
description: This article describes the single sign-on (SSO) SAML protocol in Azure Active Directory documentationcenter: .net
Last updated 08/31/2022
# Single sign-on SAML protocol
active-directory Single Sign Out Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-out-saml-protocol.md
Title: Azure Single Sign Out SAML Protocol description: This article describes the Single Sign-Out SAML Protocol in Azure Active Directory
Last updated 11/25/2022
# Single Sign-Out SAML Protocol
active-directory Supported Accounts Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md
# required metadata Title: Validation differences by supported account types description: Learn about the validation differences of various properties for different supported account types when registering your app with the Microsoft identity platform.
Last updated 09/29/2021
# Validation differences by supported account types (signInAudience)
active-directory Test Automate Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-automate-integration-testing.md
Title: Run automated integration tests description: Learn how to run automated integration tests as a user against APIs protected by the Microsoft identity platform. Use the Resource Owner Password Credential Grant (ROPC) auth flow to sign in as a user instead of automating the interactive sign-in prompt UI. Last updated 11/30/2021 # Customer intent: As a developer, I want to use ROPC in automated integration tests against APIs protected by Microsoft identity platform so I don't have to automate the interactive sign-in prompts.
active-directory Test Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-setup-environment.md
Title: Set up a test environment for your app description: Learn how to set up an Azure Active Directory test environment so you can test your application integrated with Microsoft identity platform. Evaluate whether you need a separate tenant for testing or if you can use your production tenant. Last updated 05/11/2022 # Customer intent: As a developer, I want to set up a test environment so that I can test my app integrated with Microsoft identity platform.
active-directory Userinfo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/userinfo.md
Title: Microsoft identity platform UserInfo endpoint description: Learn about the UserInfo endpoint on the Microsoft identity platform.
Last updated 08/26/2022
active-directory V2 Oauth Ropc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth-ropc.md
Title: Sign in with resource owner password credentials grant description: Support browser-less authentication flows using the resource owner password credential (ROPC) grant.
Last updated 08/26/2022
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Title: Microsoft identity platform and OAuth 2.0 authorization code flow description: Protocol reference for the Microsoft identity platform's implementation of the OAuth 2.0 authorization code grant Last updated 01/05/2023
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
Title: OAuth 2.0 client credentials flow on the Microsoft identity platform description: Build web applications by using the Microsoft identity platform implementation of the OAuth 2.0 authentication protocol.
Last updated 02/09/2022
active-directory V2 Oauth2 Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-device-code.md
Title: OAuth 2.0 device code flow description: Sign in users without a browser. Build embedded and browser-less authentication flows using the device authorization grant.
Last updated 11/15/2022
active-directory V2 Oauth2 Implicit Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md
Title: OAuth 2.0 implicit grant flow - The Microsoft identity platform description: Secure single-page apps using Microsoft identity platform implicit flow. Last updated 08/18/2022
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Title: Microsoft identity platform and OAuth2.0 On-Behalf-Of flow description: This article describes how to use HTTP messages to implement service to service authentication using the OAuth2.0 On-Behalf-Of flow.
Last updated 09/30/2022
active-directory V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-overview.md
Title: Microsoft identity platform overview description: Learn about the components of the Microsoft identity platform and how they can help you build identity and access management (IAM) support into your applications.
Last updated 11/16/2022 # Customer intent: As an application developer, I want a quick introduction to the Microsoft identity platform so I can decide if this platform meets my application development requirements.
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
Title: OpenID Connect (OIDC) on the Microsoft identity platform description: Sign in Azure AD users by using the Microsoft identity platform's implementation of the OpenID Connect extension to OAuth 2.0. Last updated 08/26/2022
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md
To add B2B collaboration users to an application, follow these steps:
## Resend invitations to guest users
-If a guest user has not yet redeemed their invitation, you can resend the invitation email.
+If a guest user hasn't yet redeemed their invitation, you can resend the invitation email.
1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator.
2. Search for and select **Azure Active Directory** from any page.
3. Under **Manage**, select **Users**.
4. In the list, select the user's name to open their user profile.
5. Under **My Feed**, in the **B2B collaboration** tile, select the **Manage (resend invitation / reset status)** link.
-6. If the user has not yet accepted the invitation, Select the **Yes** option to resend.
+6. If the user hasn't yet accepted the invitation, select the **Yes** option to resend.
![Screenshot showing the Resend Invite radio button.](./media/add-users-administrator/resend-invitation.png)
If a guest user has not yet redeemed their invitation, you can resend the invita
- To learn how non-Azure AD admins can add B2B guest users, see [How users in your organization can invite guest users to an app](add-users-information-worker.md)
- For information about the invitation email, see [The elements of the B2B collaboration invitation email](invitation-email-elements.md).
+- To learn about the B2B collaboration user types, see the [B2B collaboration user properties](user-properties.md) article.
active-directory Tutorial Bulk Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md
For example: `Remove-MgUser -UserId "lstokes_fabrikam.com#EXT#@contoso.onmicroso
## Next steps
-In this tutorial, you sent bulk invitations to guest users outside of your organization. Next, learn how the invitation redemption process works, and how to enforce multi-factor authentication for guest users.
--
- [Learn about the Azure AD B2B collaboration invitation redemption process](redemption-experience.md)
- [Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
+- [Billing model for guest user collaboration usage](external-identities-pricing.md#about-monthly-active-users-mau-billing)
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
You can use an allowlist or blocklist to from specific organizations. You can us
Learn more: [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md)

> [!IMPORTANT]
-> These lists don't apply to users in your directory. By default, they don't apply to OneDrive for Business and SharePoint allowlist or blocklists. These lists are separate, but you can enable [SharePoint-OneDrive B2B integration](/sharepoint/sharepoint-azureb2b-integration.md).
+> These lists don't apply to users in your directory. By default, they don't apply to OneDrive for Business and SharePoint allowlist or blocklists. These lists are separate, but you can enable [SharePoint-OneDrive B2B integration](/sharepoint/sharepoint-azureb2b-integration).
Some organizations have a blocklist of bad-actor domains from a managed security provider. For example, if the organization does business with Contoso and uses a .com domain, an unrelated organization can use the .org domain, and attempt a phishing attack.
active-directory Active Directory Data Storage Eu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-eu.md
Title: Customer data storage and processing for European customers in Azure Active Directory description: Learn about where Azure Active Directory stores identity-related data for its European customers.
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
- Deprecated functionality
- Plans for changes
+## July 2022
+
+### Public Preview - ADFS to Azure AD: SAML App Multi-Instancing
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+Users can now configure multiple instances of the same application within an Azure AD tenant. It's now supported for both IdP-initiated and Service Provider (SP)-initiated single sign-on requests. Multiple application accounts can now have a separate service principal to handle instance-specific claims mapping and role assignment. For more information, see:
+
+- [Configure SAML app multi-instancing for an application - Microsoft Entra | Microsoft Docs](../develop/reference-app-multi-instancing.md)
+- [Customize app SAML token claims - Microsoft Entra | Microsoft Docs](../develop/active-directory-saml-claims-customization.md)
+
+### Public Preview - ADFS to Azure AD: Apply RegEx Replace to groups claim content
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+
+
+Until recently, administrators had the capability to transform claims using many transformations; however, the use of regular expressions for claims transformation wasn't exposed to customers. With this public preview release, administrators can now configure and use regular expressions for claims transformation through the portal UX.
+For more information, see: [Customize app SAML token claims - Microsoft Entra | Microsoft Docs](../develop/active-directory-saml-claims-customization.md).
+
+### Public Preview - Azure AD Domain Services - Trusts for User Forests
+
+**Type:** New feature
+**Service category:** Azure AD Domain Services
+**Product capability:** Azure AD Domain Services
+
+
+You can now create trusts on both user and resource forests. On-premises AD DS users can't authenticate to resources in the Azure AD DS resource forest until you create an outbound trust to your on-premises AD DS. An outbound trust requires network connectivity to your on-premises virtual network on which you have installed Azure AD Domain Services. On a user forest, trusts can be created for on-premises AD forests that aren't synchronized to Azure AD DS.
+
+To learn more about trusts and how to deploy your own, visit [How trust relationships work for forests in Active Directory](../../active-directory-domain-services/concepts-forest-trust.md).
+
+### New Federated Apps available in Azure AD Application gallery - July 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+
+In July 2022, we've added the following 28 new applications in our App gallery with Federation support:
+
+[Lunni Ticket Service](https://ticket.lunni.io/login), [TESMA](https://tesma.com/), [Spring Health](https://benefits.springhealth.com/care), [Sorbet](https://lite.sorbetapp.com/login), [Rainmaker UPS](https://upsairlines.rainmaker.aero/rainmaker.security.web/), [Planview ID](../saas-apps/planview-id-tutorial.md), [Karbonalpha](https://saas.karbonalpha.com/settings/api), [Headspace](../saas-apps/headspace-tutorial.md), [SeekOut](../saas-apps/seekout-tutorial.md), [Stackby](../saas-apps/stackby-tutorial.md), [Infrascale Cloud Backup](../saas-apps/infrascale-cloud-backup-tutorial.md), [Keystone](../saas-apps/keystone-tutorial.md), [LMS・教育管理システム Leaf](../saas-apps/lms-and-education-management-system-leaf-tutorial.md), [ZDiscovery](../saas-apps/zdiscovery-tutorial.md), [ラインズeライブラリアドバンス (Lines eLibrary Advance)](../saas-apps/lines-elibrary-advance-tutorial.md), [Rootly](../saas-apps/rootly-tutorial.md), [Articulate 360](../saas-apps/articulate360-tutorial.md), [Rise.com](../saas-apps/risecom-tutorial.md), [SevOne Network Monitoring System (NMS)](../saas-apps/sevone-network-monitoring-system-tutorial.md), [PGM](https://ups-pgm.4gfactor.com/azure/), [TouchRight Software](https://app.touchrightsoftware.com/), [Tendium](../saas-apps/tendium-tutorial.md), [Training Platform](../saas-apps/training-platform-tutorial.md), [Znapio](https://app.znapio.com/), [Preset](../saas-apps/preset-tutorial.md), [itslearning MS Teams sync](https://itslearning.com/global/), [Veza](../saas-apps/veza-tutorial.md), [Trax](https://app.trax.co/authn/login)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial.
+
+To list your application in the Azure AD app gallery, please read the details here: https://aka.ms/AzureADAppRequest.
+
+### General Availability - No more waiting, provision groups on demand into your SaaS applications.
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Identity Lifecycle Management
+
+
+Pick a group of up to five members and provision them into your third-party applications in seconds. Get started testing, troubleshooting, and provisioning to non-Microsoft applications such as ServiceNow, ZScaler, and Adobe. For more information, see: [On-demand provisioning in Azure Active Directory](../app-provisioning/provision-on-demand.md).
+
+### General Availability - Protect against bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD
+
+**Type:** New feature
+**Service category:** MS Graph
+**Product capability:** Identity Security & Protection
+
+
+We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by claiming that multi-factor authentication has already been performed by the identity provider. The protection can be enabled via the new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values&preserve-view=true).
+
+
+We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as the multi-factor authentication method for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
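As a rough, hedged sketch (not part of the release notes): with the Microsoft Graph PowerShell SDK, the setting could be toggled along these lines. The domain name is a placeholder, the endpoint is the beta internalDomainFederation resource linked above, and behavior should be verified against your SDK version before use.

```powershell
# Hedged sketch: enable the protection by setting federatedIdpMfaBehavior on a
# federated domain. "contoso.com" is a placeholder; verify values against the
# internalDomainFederation beta reference before relying on this.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

# Look up the federation configuration id for the domain.
$config = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration"

# Reject MFA claims asserted by the federated IdP so Azure AD MFA always runs.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/$($config.value[0].id)" `
    -Body @{ federatedIdpMfaBehavior = "rejectMfaByFederatedIdp" }
```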
+
+### Public preview - New provisioning connectors in the Azure AD Application Gallery - July 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [Tableau Cloud](../saas-apps/tableau-online-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+
+### General Availability - Tenant-based service outage notifications
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** Platform
+
+
+Azure Service Health supports service outage notifications to Tenant Admins for Azure Active Directory issues. These outages also appear on the Azure AD Admin Portal Overview page with appropriate links to Azure Service Health. Outage events can be seen by built-in Tenant Administrator roles. During the transition, we'll continue to send outage notifications to subscriptions within a tenant. More information is available at: [What are Service Health notifications in Azure Active Directory?](../reports-monitoring/overview-service-health-notifications.md).
+
+### Public Preview - Multiple Passwordless Phone sign-in Accounts for iOS devices
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** User Authentication
+
+
+End users can now enable passwordless phone sign-in for multiple accounts in the Authenticator App on any supported iOS device. Consultants, students, and others with multiple accounts in Azure AD can add each account to Microsoft Authenticator and use passwordless phone sign-in for all of them from the same iOS device. The Azure AD accounts can be in either the same, or different, tenants. Guest accounts aren't supported for multiple account sign-ins from one device.
+
+Note that end users are encouraged to enable the optional telemetry setting in the Authenticator App, if they haven't already done so. For more information, see: [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md)
+
+### Public Preview - Azure AD Domain Services - Fine Grain Permissions
+
+**Type:** Changed feature
+**Service category:** Azure AD Domain Services
+**Product capability:** Azure AD Domain Services
+
+
+
+Previously, to set up and administer your Azure AD DS instance, you needed the top-level permissions of Azure Contributor and Azure AD Global Administrator. Now, for both initial creation and ongoing administration, you can use more fine-grained permissions for enhanced security and control. The prerequisites now minimally require:
+
+- You need the [Application Administrator](../roles/permissions-reference.md#application-administrator) and [Groups Administrator](../roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
+- You need the [Domain Services Contributor](../../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Azure AD DS resources.
+
+
+Check out these resources to learn more:
+
+- [Tutorial - Create an Azure Active Directory Domain Services managed domain | Microsoft Docs](../../active-directory-domain-services/tutorial-create-instance.md#prerequisites)
+- [Least privileged roles by task - Azure Active Directory | Microsoft Docs](../roles/delegate-by-task.md#domain-services)
+- [Azure built-in roles - Azure RBAC | Microsoft Docs](../../role-based-access-control/built-in-roles.md#domain-services-contributor)
+
+### General Availability - Azure AD Connect update release with new functionality and bug fixes
+
+**Type:** Changed feature
+**Service category:** Provisioning
+**Product capability:** Identity Lifecycle Management
+
+
+
+A new Azure AD Connect release fixes several bugs and includes new functionality. This release is also available for auto upgrade for eligible servers. For more information, see: [Azure AD Connect: Version release history](../hybrid/reference-connect-version-history.md#21150).
+
+### General Availability - Cross-tenant access settings for B2B collaboration
+
+**Type:** Changed feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+
+
+Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you'll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. For more information, see: [Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md).
+
+### General Availability - Expression builder with Application Provisioning
+
+**Type:** Changed feature
+**Service category:** Provisioning
+**Product capability:** Outbound to SaaS Applications
+
+
+Accidental deletion of users in your apps or in your on-premises directory could be disastrous. We're excited to announce the general availability of the accidental deletions prevention capability. When a provisioning job would cause a spike in deletions, it will first pause and provide you visibility into the potential deletions. You can then accept or reject the deletions and have time to update the job's scope if necessary. For more information, see [Understand how expression builder in Application Provisioning works](../app-provisioning/expression-builder.md).
+
+### Public Preview - Improved app discovery view for My Apps portal
+
+**Type:** Changed feature
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+
+An improved app discovery view for My Apps is in public preview. The preview shows users more apps in the same space and allows them to scroll between collections. It doesn't currently support drag-and-drop and list view. Users can opt into the preview by selecting Try the preview and opt out by selecting Return to previous view. To learn more about My Apps, see [My Apps portal overview](../manage-apps/myapps-overview.md).
+
+### Public Preview - New Azure AD Portal All Devices list
+
+**Type:** Changed feature
+**Service category:** Device Registration and Management
+**Product capability:** End User Experiences
+
+
+
+We're enhancing the All Devices list in the Azure AD Portal to make it easier to filter and manage your devices. Improvements include:
+
+All Devices List:
+
+- Infinite scrolling
+- More device properties can be filtered on
+- Columns can be reordered via drag and drop
+- Select all devices
+
+For more information, see: [Manage devices in Azure AD using the Azure portal](../devices/device-management-azure-portal.md#view-and-filter-your-devices-preview).
+
+### Public Preview - ADFS to Azure AD: Persistent NameID for IDP-initiated Apps
+
+**Type:** Changed feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+
+Previously, the only way to have a persistent NameID value was to configure the user attribute with an empty value. Admins can now explicitly configure the NameID value to be persistent, along with the corresponding format.
+
+For more information, see: [Customize app SAML token claims - Microsoft identity platform | Microsoft Docs](../develop/active-directory-saml-claims-customization.md#attributes).
+
+### Public Preview - ADFS to Azure Active Directory: Customize attrname-format
+
+**Type:** Changed feature
+**Service category:** Enterprise Apps
+**Product capability:** SSO
+
+
+With this new parity update, customers can now integrate non-gallery applications such as Socure DevHub with Azure AD to have SSO via SAML.
+
+For more information, see [Claims mapping policy - Microsoft Entra | Microsoft Docs](../develop/reference-claims-mapping-policy-type.md#claim-schema-entry-elements).
+
+ ## June 2022
For more information, see [Hide a third-party application from a user's experien
As part of the transition to the new admin console, two new APIs for retrieving Azure AD activity logs are available. The new set of APIs provides richer filtering and sorting functionality in addition to providing richer audit and sign-in activities. The data previously available through the security reports can now be accessed through the Identity Protection Risk Detections API in Microsoft Graph.
-
-## September 2017
-
-### Hotfix for Identity Manager
-
-**Type:** Changed feature
-**Service category:** Identity Manager
-**Product capability:** Identity lifecycle management
-
-A hotfix roll-up package (build 4.4.1642.0) is available as of September 25, 2017, for Identity Manager 2016 Service Pack 1. This roll-up package:
-
-- Resolves issues and adds improvements.
-- Is a cumulative update that replaces all Identity Manager 2016 Service Pack 1 updates up to build 4.4.1459.0 for Identity Manager 2016.
-- Requires you to have Identity Manager 2016 build 4.4.1302.0.
-
-For more information, see [Hotfix rollup package (build 4.4.1642.0) is available for Identity Manager 2016 Service Pack 1](https://support.microsoft.com/help/4021562).
--
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
Lifecycle Workflow's built-in tasks each include an identifier, known as **taskD
Common task parameters are the non-unique parameters contained in every task. When adding tasks to a new workflow, or a workflow template, you can customize and configure these parameters so that they match your requirements. -
-> [!NOTE]
-> The user's employee hire date is used as the start time for the Temporary Access Pass. Please make sure that the TAP lifetime task setting and the [time portion of your user's hire date](how-to-lifecycle-workflow-sync-attributes.md#importance-of-time) are set appropriately so that the TAP is still valid when the user starts their first day.
-
|Parameter |Definition |
|---|---|
|category | A read-only string that identifies the category or categories of the task. Automatically determined when the taskDefinitionID is chosen. |
For Microsoft Graph the parameters for the **Send onboarding reminder email** ta
### Generate Temporary Access Pass and send via email to user's manager
-When a compatible user joins your organization, Lifecycle Workflows allow you to automatically generate a Temporary Access Pass(TAP), and have it sent to the new user's manager.
+When a compatible user joins your organization, Lifecycle Workflows allow you to automatically generate a Temporary Access Pass (TAP), and have it sent to the new user's manager.
+
+> [!NOTE]
+> The user's employee hire date is used as the start time for the Temporary Access Pass. Please make sure that the TAP lifetime task setting and the [time portion of your user's hire date](how-to-lifecycle-workflow-sync-attributes.md#importance-of-time) are set appropriately so that the TAP is still valid when the user starts their first day. If the hire date at the time of workflow execution is already in the past, the current time is used as the start time.
With this task in the Azure portal, you're able to give the task a name and description. You must also set:
-**Activation duration**- How long the password is active.
-**One time use**- If the password is one use only.
+- **Activation duration**- How long the passcode is active.
+- **One time use**- If the passcode can only be used once.
:::image type="content" source="media/lifecycle-workflow-task/tap-task.png" alt-text="Screenshot of Workflows task: TAP task.":::
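For context, these two settings correspond to the lifetime and one-time-use properties of the underlying Temporary Access Pass authentication method in Microsoft Graph. A hedged sketch of creating a TAP directly with Microsoft Graph PowerShell (the user object id is a placeholder, and the values must fall within what the TAP tenant policy allows):

```powershell
# Hedged sketch: create a Temporary Access Pass for a user outside of the
# workflow task. The user object id below is a placeholder.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

$params = @{
    UserId            = "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"  # placeholder
    LifetimeInMinutes = 480    # corresponds to the task's Activation duration
    IsUsableOnce      = $true  # corresponds to the task's One time use setting
}
New-MgUserAuthenticationTemporaryAccessPassMethod @params
```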
The Azure AD prerequisites to run the **Generate Temporary Access Pass and send
- A populated manager attribute for the user.
- A populated manager's mail attribute for the user.
-- An enabled TAP tenant policy. For more information, see [Enable the Temporary Access Pass policy](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy)
-
+- The TAP tenant policy must be enabled and the selected values for activation duration and one time use must be within the allowed range of the policy. For more information, see [Enable the Temporary Access Pass policy](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy)
> [!IMPORTANT]
> A user having this task run for them in a workflow must also not have any other authentication methods, sign-ins, or Azure AD role assignments for this task to work.
For Microsoft Graph the parameters for the **Generate Temporary Access Pass and
```
-> [!NOTE]
-> The employee hire date is the same as the startDateTime used for the tapLifetimeInMinutes parameter.
--
### Add user to groups
Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and privileged access groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
active-directory Concept Adsync Service Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-adsync-service-account.md
na Previously updated : 01/05/2022 Last updated : 01/27/2023 # ADSync service account
-Azure AD Connect installs an on-premises service which orchestrates synchronization between Active Directory and Azure Active Directory. The Microsoft Azure AD Sync synchronization service (ADSync) runs on a server in your on-premises environment. The credentials for the service are set by default in the Express installations but may be customized to meet your organizational security requirements. These credentials are not used to connect to your on-premises forests or Azure Active Directory.
+Azure AD Connect installs an on-premises service which orchestrates synchronization between Active Directory and Azure Active Directory. The Microsoft Azure AD Sync synchronization service (ADSync) runs on a server in your on-premises environment. The credentials for the service are set by default in the Express installations but may be customized to meet your organizational security requirements. These credentials aren't used to connect to your on-premises forests or Azure Active Directory.
Choosing the ADSync service account is an important planning decision to make prior to installing Azure AD Connect. Any attempt to change the credentials after installation will result in the service failing to start, losing access to the synchronization database, and failing to authenticate with your connected directories (Azure and AD DS). No synchronization will occur until the original credentials are restored.
-The sync service can run under different accounts. It can run under a Virtual Service Account (VSA), a Managed Service Account (gMSA/sMSA), or a regular User Account. The supported options were changed with the 2017 April release and 2021 March release of Azure AD Connect when you do a fresh installation. If you upgrade from an earlier release of Azure AD Connect, these additional options are not available.
+The sync service can run under different accounts. It can run under a Virtual Service Account (VSA), a Managed Service Account (gMSA/sMSA), or a regular User Account. The supported options were changed with the 2017 April release and 2021 March release of Azure AD Connect when you do a fresh installation. If you upgrade from an earlier release of Azure AD Connect, these additional options aren't available.
|Type of account|Installation option|Description|
|--|--|--|
-|Virtual Service Account|Express and custom, 2017 April and later| A Virtual Service Account is used for all express installations, except for installations on a Domain Controller. When using custom installation, it is the default option unless another option is used.|
+|Virtual Service Account|Express and custom, 2017 April and later| A Virtual Service Account is used for all express installations, except for installations on a Domain Controller. When using custom installation, it's the default option unless another option is used.|
|Managed Service Account|Custom, 2017 April and later|If you use a remote SQL Server, then we recommend using a group managed service account. |
-|Managed Service Account|Express and custom, 2021 March and later|A standalone Managed Service Account prefixed with ADSyncMSA_ is created during installation for express installations when installed on a Domain Controller. When using custom installation, it is the default option unless another option is used.|
-|User Account|Express and custom, 2017 April to 2021 March|A User Account prefixed with AAD_ is created during installation for express installations when installed on a Domain Controller. When using custom installation, it is the default option unless another option is used.|
+|Managed Service Account|Express and custom, 2021 March and later|A standalone Managed Service Account prefixed with ADSyncMSA_ is created during installation for express installations when installed on a Domain Controller. When using custom installation, it's the default option unless another option is used.|
+|User Account|Express and custom, 2017 April to 2021 March|A User Account prefixed with AAD_ is created during installation for express installations when installed on a Domain Controller. When using custom installation, it's the default option unless another option is used.|
|User Account|Express and custom, 2017 March and earlier|A User Account prefixed with AAD_ is created during installation for express installations. When using custom installation, another account can be specified.|

>[!IMPORTANT]
-> If you use Connect with a build from 2017 March or earlier, then you should not reset the password on the service account since Windows destroys the encryption keys for security reasons. You cannot change the account to any other account without reinstalling Azure AD Connect. If you upgrade to a build from 2017 April or later, then it is supported to change the password on the service account, but you cannot change the account used.
+> If you use Connect with a build from 2017 March or earlier, then you should not reset the password on the service account since Windows destroys the encryption keys for security reasons. You can't change the account to any other account without reinstalling Azure AD Connect. If you upgrade to a build from 2017 April or later, then it's supported to change the password on the service account, but you can't change the account used.
> [!IMPORTANT]
-> You can only set the service account on first installation. It is not supported to change the service account after the installation has been completed. If you need to change the service account password, this is supported and instructions can be found [here](how-to-connect-sync-change-serviceacct-pass.md).
+> You can only set the service account on first installation. It isn't supported to change the service account after the installation has been completed. If you need to change the service account password, this is supported and instructions can be found [here](how-to-connect-sync-change-serviceacct-pass.md).
The following is a table of the default, recommended, and supported options for the sync service account. Legend:

- **Bold** indicates the default option and, in most cases, the recommended option.
-- *Italic* indicates the recommended option when it is not the default option.
+- *Italic* indicates the recommended option when it's not the default option.
- Non-bold - Supported option
- Local account - Local user account on the server
- Domain account - Domain user account
- sMSA - [standalone Managed Service account](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd548356(v=ws.10))
-- gMSA - [group Managed Service account](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11))
+- gMSA - [group managed service account](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11))
|Machine type |**LocalDB</br> Express**|**LocalDB/LocalSQL</br> Custom**|**Remote SQL</br> Custom**|
|--|--|--|--|
Legend:
## Virtual Service Account
-A Virtual Service Account is a special type of managed local account that does not have a password and is automatically managed by Windows.
+A Virtual Service Account is a special type of managed local account that doesn't have a password and is automatically managed by Windows.
![Virtual service account](media/concept-adsync-service-account/account-1.png)

The Virtual Service Account is intended to be used with scenarios where the sync engine and SQL are on the same server. If you use remote SQL, then we recommend using a group managed service account instead.
-The Virtual Service Account cannot be used on a Domain Controller due to [Windows Data Protection API (DPAPI)](/previous-versions/ms995355(v=msdn.10)) issues.
+The Virtual Service Account can't be used on a Domain Controller due to [Windows Data Protection API (DPAPI)](/previous-versions/ms995355(v=msdn.10)) issues.
## Managed Service Account
-If you use a remote SQL Server, then we recommend to using a group managed service account. For more information on how to prepare your Active Directory for group Managed Service account, see [Group Managed Service Accounts Overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)).
+If you use a remote SQL Server, then we recommend using a group managed service account. For more information on how to prepare your Active Directory for a group managed service account, see [Group Managed Service Accounts Overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)).
To use this option, on the [Install required components](how-to-connect-install-custom.md#install-required-components) page, select **Use an existing service account**, and select **Managed Service Account**.

![managed service account](media/concept-adsync-service-account/account-2.png)
-It is also supported to use a standalone managed service account. However, these can only be used on the local machine and there is no benefit to using them over the default Virtual Service Account.
+It is also supported to use a standalone managed service account. However, these can only be used on the local machine and there's no benefit to using them over the default Virtual Service Account.
### Auto-generated standalone Managed Service Account

If you install Azure AD Connect on a Domain Controller, a standalone Managed Service Account is created by the installation wizard (unless you specify the account to use in custom settings). The account is prefixed **ADSyncMSA_** and used for the actual sync service to run as.
-This account is a managed domain account that does not have a password and is automatically managed by Windows.
+This account is a managed domain account that doesn't have a password and is automatically managed by Windows.
This account is intended to be used with scenarios where the sync engine and SQL are on the Domain Controller.
A local service account is created by the installation wizard (unless you specif
![user account](media/concept-adsync-service-account/account-3.png)
-The account is created with a long complex password that does not expire.
+The account is created with a long complex password that doesn't expire.
This account is used to store passwords for the other accounts in a secure way. These other accounts' passwords are stored encrypted in the database. The private keys for the encryption keys are protected with the cryptographic services secret-key encryption using Windows Data Protection API (DPAPI).
-If you use a full SQL Server, then the service account is the DBO of the created database for the sync engine. The service will not function as intended with any other permission. A SQL login is also created.
+If you use a full SQL Server, then the service account is the DBO of the created database for the sync engine. The service won't function as intended with any other permission. A SQL login is also created.
The account is also granted permission to files, registry keys, and other objects related to the Sync Engine.
active-directory Concept Azure Ad Connect Sync User And Contacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-user-and-contacts.md
na Previously updated : 01/05/2022 Last updated : 01/27/2023 # Azure AD Connect sync: Understanding Users, Groups, and Contacts
-There are several different reasons why you would have multiple Active Directory forests and there are several different deployment topologies. Common models include an account-resource deployment and GAL sync'ed forests after a merger & acquisition. But even if there are pure models, hybrid models are common as well. The default configuration in Azure AD Connect sync does not assume any particular model but depending on how user matching was selected in the installation guide, different behaviors can be observed.
+There are several different reasons why you would have multiple Active Directory forests and there are several different deployment topologies. Common models include an account-resource deployment and GAL sync'ed forests after a merger & acquisition. But even if there are pure models, hybrid models are common as well. The default configuration in Azure AD Connect sync doesn't assume any particular model but depending on how user matching was selected in the installation guide, different behaviors can be observed.
-In this topic, we will go through how the default configuration behaves in certain topologies. We will go through the configuration and the Synchronization Rules Editor can be used to look at the configuration.
+In this topic, we'll go through how the default configuration behaves in certain topologies. We'll go through the configuration, and the Synchronization Rules Editor can be used to look at it.
There are a few general rules the configuration assumes: * Regardless of which order we import from the source Active Directories, the end result should always be the same. * An active account will always contribute sign-in information, including **userPrincipalName** and **sourceAnchor**.
-* A disabled account will contribute userPrincipalName and sourceAnchor, unless it is a linked mailbox, if there is no active account to be found.
+* A disabled account will contribute userPrincipalName and sourceAnchor, unless it's a linked mailbox, if there's no active account to be found.
* An account with a linked mailbox will never be used for userPrincipalName and sourceAnchor. It is assumed that an active account will be found later.
* A contact object might be provisioned to Azure AD as a contact or as a user. You don't really know until all source Active Directory forests have been processed.
Important points to be aware of when synchronizing groups from Active Directory
* Azure AD Connect excludes built-in security groups from directory synchronization.
-* Azure AD Connect does not support synchronizing [Primary Group memberships](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc771489(v=ws.11)) to Azure AD.
+* Azure AD Connect doesn't support synchronizing [Primary Group memberships](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc771489(v=ws.11)) to Azure AD.
-* Azure AD Connect does not support synchronizing [Dynamic Distribution Group memberships](/Exchange/recipients/dynamic-distribution-groups/dynamic-distribution-groups) to Azure AD.
+* Azure AD Connect doesn't support synchronizing [Dynamic Distribution Group memberships](/Exchange/recipients/dynamic-distribution-groups/dynamic-distribution-groups) to Azure AD.
* To synchronize an Active Directory group to Azure AD as a mail-enabled group:
Important points to be aware of when synchronizing groups from Active Directory
* If the group's *proxyAddress* attribute is non-empty, it must contain at least one SMTP proxy address value. Here are some examples:
- * An Active Directory group whose proxyAddress attribute has value *{"X500:/0=contoso.com/ou=users/cn=testgroup"}* will not be mail-enabled in Azure AD. It does not have an SMTP address.
+ * An Active Directory group whose proxyAddress attribute has value *{"X500:/0=contoso.com/ou=users/cn=testgroup"}* won't be mail-enabled in Azure AD. It doesn't have an SMTP address.
* An Active Directory group whose proxyAddress attribute has values *{"X500:/0=contoso.com/ou=users/cn=testgroup","SMTP:johndoe\@contoso.com"}* will be mail-enabled in Azure AD.
* An Active Directory group whose proxyAddress attribute has values *{"X500:/0=contoso.com/ou=users/cn=testgroup", "smtp:johndoe\@contoso.com"}* will also be mail-enabled in Azure AD.

## Contacts
-Having contacts representing a user in a different forest is common after a merger & acquisition where a GALSync solution is bridging two or more Exchange forests. The contact object is always joining from the connector space to the metaverse using the mail attribute. If there is already a contact object or user object with the same mail address, the objects are joined together. This is configured in the rule **In from AD – Contact Join**. There is also a rule named **In from AD – Contact Common** with an attribute flow to the metaverse attribute **sourceObjectType** with the constant **Contact**. This rule has very low precedence so if any user object is joined to the same metaverse object, then the rule **In from AD – User Common** will contribute the value User to this attribute. With this rule, this attribute will have the value Contact if no user has been joined and the value User if at least one user has been found.
+Having contacts representing a user in a different forest is common after a merger & acquisition where a GALSync solution is bridging two or more Exchange forests. The contact object is always joining from the connector space to the metaverse using the mail attribute. If there's already a contact object or user object with the same mail address, the objects are joined together. This is configured in the rule **In from AD – Contact Join**. There is also a rule named **In from AD – Contact Common** with an attribute flow to the metaverse attribute **sourceObjectType** with the constant **Contact**. This rule has very low precedence so if any user object is joined to the same metaverse object, then the rule **In from AD – User Common** will contribute the value User to this attribute. With this rule, this attribute will have the value Contact if no user has been joined and the value User if at least one user has been found.
For provisioning an object to Azure AD, the outbound rule **Out to AAD – Contact Join** will create a contact object if the metaverse attribute **sourceObjectType** is set to **Contact**. If this attribute is set to **User**, then the rule **Out to AAD – User Join** will create a user object instead. It is possible that an object is promoted from Contact to User when more source Active Directories are imported and synchronized.
-For example, in a GALSync topology we will find contact objects for everyone in the second forest when we import the first forest. This will stage new contact objects in the AAD Connector. When we later import and synchronize the second forest, we will find the real users and join them to the existing metaverse objects. We will then delete the contact object in AAD and create a new user object instead.
+For example, in a GALSync topology we'll find contact objects for everyone in the second forest when we import the first forest. This will stage new contact objects in the Azure AD Connector. When we later import and synchronize the second forest, we'll find the real users and join them to the existing metaverse objects. We will then delete the contact object in Azure AD and create a new user object instead.
If you have a topology where users are represented as contacts, make sure you select to match users on the mail attribute in the installation guide. If you select another option, then you will have an order-dependent configuration. Contact objects will always join on the mail attribute, but user objects will only join on the mail attribute if this option was selected in the installation guide. You could then end up with two different objects in the metaverse with the same mail attribute if the contact object was imported before the user object. During export to Azure AD, an error will be thrown. This behavior is by design and would indicate bad data or that the topology was not correctly identified during the installation.

## Disabled accounts

Disabled accounts are synchronized as well to Azure AD. Disabled accounts are common to represent resources in Exchange, for example conference rooms. The exception is users with a linked mailbox; as previously mentioned, these will never provision an account to Azure AD.
-The assumption is that if a disabled user account is found, then we will not find another active account later and the object is provisioned to Azure AD with the userPrincipalName and sourceAnchor found. In case another active account will join to the same metaverse object, then its userPrincipalName and sourceAnchor will be used.
+The assumption is that if a disabled user account is found, then we won't find another active account later and the object is provisioned to Azure AD with the userPrincipalName and sourceAnchor found. In case another active account will join to the same metaverse object, then its userPrincipalName and sourceAnchor will be used.
## Changing sourceAnchor
-When an object has been exported to Azure AD then it is not allowed to change the sourceAnchor anymore. When the object has been exported the metaverse attribute **cloudSourceAnchor** is set with the **sourceAnchor** value accepted by Azure AD. If **sourceAnchor** is changed and not match **cloudSourceAnchor**, the rule **Out to AAD – User Join** will throw the error **sourceAnchor attribute has changed**. In this case, the configuration or data must be corrected so the same sourceAnchor is present in the metaverse again before the object can be synchronized again.
+When an object has been exported to Azure AD then it's not allowed to change the sourceAnchor anymore. When the object has been exported the metaverse attribute **cloudSourceAnchor** is set with the **sourceAnchor** value accepted by Azure AD. If **sourceAnchor** is changed and doesn't match **cloudSourceAnchor**, the rule **Out to AAD – User Join** will throw the error **sourceAnchor attribute has changed**. In this case, the configuration or data must be corrected so the same sourceAnchor is present in the metaverse again before the object can be synchronized again.
## Additional Resources

* [Azure AD Connect Sync: Customizing Synchronization options](how-to-connect-sync-whatis.md)
active-directory How To Connect Adconnectivitytools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-adconnectivitytools.md
Previously updated : 01/05/2022 Last updated : 01/27/2023
The ADConnectivity tool is a PowerShell module that is used in one of the following:
-- During installation when a network connectivity problem prevents the successful validation of the Active Directory credentials the user provided in the Wizard.
+- During installation, when a network connectivity problem prevents the successful validation of the Active Directory credentials.
- Post installation by a user who calls the functions from a PowerShell session.

The tool is located in: **C:\Program Files\Microsoft Azure Active Directory Connect\Tools\ADConnectivityTool.psm1**

## ADConnectivityTool during installation
-On the **Connect your directories** page, in the Azure AD Connect Wizard, if a network issue occurs, the ADConnectivityTool will automatically use one of its functions to determine what is going on. Any of the following can be considered network issues:
+On the **Connect your directories** page, in the Azure AD Connect Wizard, if a network issue occurs, the ADConnectivityTool will automatically use one of its functions to determine what is going on. The following items can be considered network issues:
- The name of the Forest the user provided was typed wrongly, or said Forest doesn't exist
- UDP port 389 is closed in the Domain Controllers associated with the Forest the user provided
- The credentials provided in the 'AD forest account' window don't have privileges to retrieve the Domain Controllers associated with the target Forest
- Any of the TCP ports 53, 88 or 389 are closed in the Domain Controllers associated with the Forest the user provided
- Both UDP 389 and a TCP port (or ports) are closed
-- DNS could not be resolved for the provided Forest and\or its associated Domain Controllers
+- DNS couldn't be resolved for the provided Forest and\or its associated Domain Controllers
Whenever any of these issues are found, a related error message is displayed in the AADConnect Wizard:

![Error](media/how-to-connect-adconnectivitytools/error1.png)
-For example, when we are attempting to add a directory on the **Connect your directories** screen, Azure AD Connect needs to verify this and expects to be able to communicate with a domain controller over port 389. If it cannot, we will see the error that is shown in the screenshot above.
+For example, when we're attempting to add a directory on the **Connect your directories** screen, Azure AD Connect needs to verify this and expects to be able to communicate with a domain controller over port 389. If it can't, we'll see the error that is shown in the screenshot.
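Where a closed port is the suspect, a quick manual probe with the built-in `Test-NetConnection` cmdlet can confirm it before rerunning the wizard. A hedged sketch; the domain controller name is a placeholder:

```powershell
# Hedged sketch: probe the TCP ports the wizard depends on against one DC.
# "dc01.contoso.com" is a placeholder for a domain controller in the target forest.
foreach ($port in 53, 88, 389) {
    Test-NetConnection -ComputerName "dc01.contoso.com" -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```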
What is actually happening behind the scenes is that Azure AD Connect is calling the `Start-NetworkConnectivityDiagnosisTools` function. This function is called when the validation of credentials fails due to a network connectivity issue.
You can find reference information on the functions in the [ADConnectivityTools
### Start-ConnectivityValidation
-We are going to call out this function because it can **only** be called manually once the ADConnectivityTool.psm1 has been imported into PowerShell.
+We're going to call out this function because it can **only** be called manually once the ADConnectivityTool.psm1 has been imported into PowerShell.
This function executes the same logic that the Azure AD Connect Wizard runs to validate the provided AD Credentials. However, it provides a much more verbose explanation about the problem and a suggested solution.
The connectivity validation consists of the following steps:
The user will be able to add a Directory if all these actions were executed successfully.
-If the user runs this function after a problem is solved (or if no problem exists at all) the output will indicate for the user to go back to the Azure AD Connect Wizard and try inserting the credentials again.
+If the user runs this function, after a problem is solved (or if no problem exists at all), the output will indicate for the user to go back to the Azure AD Connect Wizard and try inserting the credentials again.
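As a hedged usage sketch (the parameter set follows the ADConnectivityTools reference and may vary between Azure AD Connect builds):

```powershell
# Hedged sketch: import the module shipped with Azure AD Connect and run the
# same validation the wizard performs. Forest and UserName are placeholders.
Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\Tools\ADConnectivityTool.psm1"

Start-ConnectivityValidation -Forest "contoso.com" `
    -AutoCreateConnectorAccount $true `
    -UserName "contoso\admin"
```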
active-directory How To Connect Create Custom Sync Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-create-custom-sync-rule.md
na Previously updated : 01/05/2022 Last updated : 01/27/2023
## **Recommended Steps**
-You can use the synchronization rule editor to edit or create a new synchronization rule. You need to be an advanced user to make changes to synchronization rules. Any wrong changes may result in deletion of objects from your target directory. Please read [Recommended Documents](#recommended-documents) to gain expertise in synchronization rules. To modify a synchronization rule go through following steps:
+You can use the synchronization rule editor to edit or create a new synchronization rule. You need to be an advanced user to make changes to synchronization rules. Any wrong changes may result in deletion of objects from your target directory. Please review the [Recommended Documents](#recommended-documents) section. To modify a synchronization rule, go through the following steps:
* Launch the synchronization editor from the application menu in desktop as shown below:

![Synchronization Rule Editor Menu](media/how-to-connect-create-custom-sync-rule/how-to-connect-create-custom-sync-rule/syncruleeditormenu.png)
-In order to customize a default synchronization rule, clone the existing rule by clicking the "Edit" button on the Synchronization Rules Editor, which will create a copy of the standard default rule and disable it. Save the cloned rule with a precedence less than 100. Precedence determines what rule wins(lower numeric value) a conflict resolution if there is an attribute flow conflict.
+In order to customize a default synchronization rule, clone the existing rule by clicking the "Edit" button on the Synchronization Rules Editor, which will create a copy of the standard default rule and disable it. Save the cloned rule with a precedence less than 100. Precedence determines which rule wins (lower numeric value) conflict resolution if there's an attribute flow conflict.
![Synchronization Rule Editor](media/how-to-connect-create-custom-sync-rule/how-to-connect-create-custom-sync-rule/clonerule.png)
* When modifying a specific attribute, ideally you should keep only the modified attribute in the cloned rule. Then enable the default rule so that the modified attribute comes from the cloned rule and the other attributes are picked up from the default standard rule.
-* Please note that in the case where the calculated value of the modified attribute is NULL in your cloned rule and is not NULL in the default standard rule then, the not NULL value will win and will replace the NULL value. If you don't want a NULL value to be replace with a not NULL value then assign AuthoritativeNull in your cloned rule.
+* If the calculated value of the modified attribute is NULL in your cloned rule and isn't NULL in the default standard rule, the non-NULL value wins and replaces the NULL value. If you don't want a NULL value to be replaced with a non-NULL value, assign AuthoritativeNull in your cloned rule, as illustrated in the expression sketch after this list.
* To modify an **Outbound** rule, change the filter in the Synchronization Rules Editor.
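To illustrate the AuthoritativeNull guidance above, here's a hypothetical attribute flow expression for a cloned rule; `extensionAttribute1` is an example attribute, not one called out in this article:

```
IIF(IsPresent([extensionAttribute1]), [extensionAttribute1], AuthoritativeNull)
```

When the source attribute is empty, the expression flows AuthoritativeNull, which keeps the target attribute NULL even if a lower-precedence default rule would otherwise flow a non-NULL value.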
active-directory How To Connect Device Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-device-writeback.md
na Previously updated : 01/05/2022 Last updated : 01/27/2023
This provides additional security and assurance that access to applications is g
> [!IMPORTANT] > <li>Devices must be located in the same forest as the users. Since devices must be written back to a single forest, this feature does not currently support a deployment with multiple user forests.</li>
-> <li>Only one device registration configuration object can be added to the on-premises Active Directory forest. This feature is not compatible with a topology where the on-premises Active Directory is synchronized to multiple Azure AD directories.</li>
+> <li>Only one device registration configuration object can be added to the on-premises Active Directory forest. This feature isn't compatible with a topology where the on-premises Active Directory is synchronized to multiple Azure AD directories.</li>
## Part 1: Install Azure AD Connect
Install Azure AD Connect using Custom or Express settings. Microsoft recommends starting with all users and groups successfully synchronized before you enable device writeback.
>[!NOTE]
> The new Configure device options is available only in version 1.1.819.0 and newer.
-2. On the device options page, select **Configure device writeback**. Option to **Disable device writeback** will not be available until device writeback is enabled. Click on **Next** to move to the next page in the wizard.
+2. On the device options page, select **Configure device writeback**. The option to **Disable device writeback** won't be available until device writeback is enabled. Click **Next** to move to the next page in the wizard.
![Chose device operation](./media/how-to-connect-device-writeback/configuredevicewriteback1.png)
-3. On the writeback page, you will see the supplied domain as the default Device writeback forest.
+3. On the writeback page, you'll see the supplied domain as the default Device writeback forest.
![Custom Install device writeback target forest](./media/how-to-connect-device-writeback/writebackforest.png)
4. The **Device container** page provides the option of preparing Active Directory by using one of the two available options:
 a. **Provide enterprise administrator credentials**: If the enterprise administrator credentials are provided for the forest where devices need to be written back, Azure AD Connect will prepare the forest automatically during the configuration of device writeback.
- b. **Download PowerShell script**: Azure AD Connect auto-generates a PowerShell script that can prepare the active directory for device writeback. In case the enterprise administrator credentials cannot be provided in Azure AD Connect, it is suggested to download the PowerShell script. Provide the downloaded PowerShell script **CreateDeviceContainer.ps1** to the enterprise administrator of the forest where devices will be written back to.
+ b. **Download PowerShell script**: Azure AD Connect auto-generates a PowerShell script that can prepare Active Directory for device writeback. If the enterprise administrator credentials can't be provided in Azure AD Connect, it's suggested to download the PowerShell script. Provide the downloaded PowerShell script **CreateDeviceContainer.ps1** to the enterprise administrator of the forest where devices will be written back to.
![Prepare active directory forest](./media/how-to-connect-device-writeback/devicecontainercreds.png)
The following operations are performed to prepare the Active Directory forest:
- * If they do not exist already, creates and configures new containers and objects under CN=Device Registration Configuration,CN=Services,CN=Configuration,[forest-dn].
- * If they do not exist already, creates and configures new containers and objects under CN=RegisteredDevices,[domain-dn]. Device objects will be created in this container.
+ * If they don't exist already, creates and configures new containers and objects under CN=Device Registration Configuration,CN=Services,CN=Configuration,[forest-dn].
+ * If they don't exist already, creates and configures new containers and objects under CN=RegisteredDevices,[domain-dn]. Device objects will be created in this container.
* Sets necessary permissions on the Azure AD Connector account to manage devices in your Active Directory.
* Only needs to run on one forest, even if Azure AD Connect is being installed on multiple forests.
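If you'd rather script the preparation from the Azure AD Connect server itself, older Azure AD Connect documentation describes an AdSyncPrep module with an equivalent cmdlet. A sketch under that assumption, with placeholder domain and connector account names; it may not apply to every build:

```powershell
# The path below is the default installation location for the AdSyncPrep
# module; adjust it if Azure AD Connect is installed elsewhere.
Import-Module 'C:\Program Files\Microsoft Azure Active Directory Connect\AdSyncPrep\AdSyncPrep.psm1'

# Create and configure the device registration containers and grant the
# Azure AD Connector account the permissions it needs to manage devices.
Initialize-ADSyncDeviceWriteback -DomainName 'contoso.com' -AdConnectorAccount 'CONTOSO\aadc-connector'
```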
Detailed instructions to enable this scenario are available within [Setting up O
## Troubleshooting
### The writeback checkbox is still disabled
-If the checkbox for device writeback is not enabled even though you have followed the steps above, the following steps will guide you through what the installation wizard is verifying before the box is enabled.
+If the checkbox for device writeback isn't enabled even though you've followed the steps above, the following steps will guide you through what the installation wizard is verifying before the box is enabled.
First things first:
* The forest where the devices are present must have the forest schema upgraded to Windows 2012 R2 level so that the device object and associated attributes are present.
-* If the installation wizard is already running, then any changes will not be detected. In this case, complete the installation wizard and run it again.
+* If the installation wizard is already running, then any changes won't be detected. In this case, complete the installation wizard and run it again.
* Make sure the account you provide in the initialization script is actually the correct user used by the Active Directory Connector. To verify this, follow these steps:
  * From the start menu, open **Synchronization Service**.
  * Open the **Connectors** tab.
Verify configuration in Active Directory:
![Troubleshoot, DeviceRegistrationService in configuration namespace](./media/how-to-connect-device-writeback/troubleshoot1.png)
-* Verify there is only one configuration object by searching the configuration namespace. If there is more than one, delete the duplicate.
+* Verify there's only one configuration object by searching the configuration namespace. If there's more than one, delete the duplicate.
![Troubleshoot, search for the duplicate objects](./media/how-to-connect-device-writeback/troubleshoot2.png)
-* On the Device Registration Service object, make sure the attribute msDS-DeviceLocation is present and has a value. Lookup this location and make sure it is present with the objectType msDS-DeviceContainer.
+* On the Device Registration Service object, make sure the attribute msDS-DeviceLocation is present and has a value. Look up this location and make sure it's present with the objectType msDS-DeviceContainer; see the scripted check below.
![Troubleshoot, msDS-DeviceLocation](./media/how-to-connect-device-writeback/troubleshoot3.png)
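If you'd rather script these checks than browse the configuration partition manually, the following sketch uses the RSAT ActiveDirectory module. The object class and attribute names match the ones called out above, but treat the snippet as illustrative:

```powershell
# Find the device registration service object(s) in the configuration
# naming context; more than one result means a duplicate to clean up.
Import-Module ActiveDirectory
$configNC = (Get-ADRootDSE).configurationNamingContext
$drs = Get-ADObject -LDAPFilter '(objectClass=msDS-DeviceRegistrationService)' `
    -SearchBase "CN=Device Registration Configuration,CN=Services,$configNC" `
    -Properties 'msDS-DeviceLocation'
$drs | Format-List Name, DistinguishedName, 'msDS-DeviceLocation'

# Confirm the location it points at exists and is a device container
# (objectClass should include msDS-DeviceContainer).
Get-ADObject -Identity $drs.'msDS-DeviceLocation' -Properties objectClass
```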
active-directory How To Connect Health Alert Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-alert-catalog.md
na Previously updated : 01/21/2022 Last updated : 01/27/2023
# Azure Active Directory Connect Health Alert Catalog
-Azure AD Connect Health service send alerts indicate that your identity infrastructure is not healthy. This article includes alerts titles, descriptions, and remediation steps for each alert. <br />
+The Azure AD Connect Health service sends alerts that indicate your identity infrastructure isn't healthy. This article includes alert titles, descriptions, and remediation steps for each alert. <br />
Error, Warning, and Prewarning are three stages of alerts that are generated by the Connect Health service. We highly recommend that you take immediate action on triggered alerts. <br />
-Azure AD Connect Health alerts get resolved on a success condition. Azure AD Connect Health Agents detect and report the success conditions to the service periodically. For a few alerts, the suppression is time-based. In other words, if the same error condition is not observed within 72 hours from alert generation, the alert is automatically resolved.
+Azure AD Connect Health alerts get resolved on a success condition. Azure AD Connect Health Agents detect and report the success conditions to the service periodically. For a few alerts, the suppression is time-based. In other words, if the same error condition isn't observed within 72 hours from alert generation, the alert is automatically resolved.
## General Alerts

| Alert Name | Description | Remediation |
| --- | --- | --- |
-| Health service data is not up to date | The Health Agent(s) running on one or more servers is not connected to the Health Service and the Health Service is not receiving the latest data from this server. The last data processed by the Health Service is older than 2 Hours. | Ensure that the health agents have outbound connectivity to the required service end points. [Read More](how-to-connect-health-data-freshness.md) |
+| Health service data isn't up to date | The Health Agent(s) running on one or more servers isn't connected to the Health Service and the Health Service isn't receiving the latest data from this server. The last data processed by the Health Service is older than 2 Hours. | Ensure that the health agents have outbound connectivity to the required service end points. [Read More](how-to-connect-health-data-freshness.md) |
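As a first-pass check of that outbound connectivity, you can probe a couple of the service endpoints over port 443. The endpoint names below are illustrative examples only; consult the linked data-freshness article for the authoritative list for your agent version:

```powershell
# Spot-check outbound HTTPS connectivity from the server running the
# Health Agent. The endpoint list here is illustrative, not exhaustive.
$endpoints = @('management.azure.com', 'login.microsoftonline.com')
foreach ($endpoint in $endpoints) {
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, TcpTestSucceeded
}
```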
## Alerts for Azure AD Connect (Sync)

| Alert Name | Description | Remediation |
| --- | --- | --- |
-| Azure AD Connect Sync Service is not running | Microsoft Azure AD Sync Windows service is not running or could not start. As a result, objects will not synchronize with Azure Active Directory. | Start Microsoft Azure Active Directory Sync Services</b> <ol> <li>Click <b>Start</b>, click <b>Run</b>, type <b>Services.msc</b>, and then click <b>OK</b>.</li> <li>Locate the <b>Microsoft Azure AD Sync service</b>, and then check whether the service is started. If the service isn't started, right-click it, and then click <b>Start</b>. |
+| Azure AD Connect Sync Service isn't running | Microsoft Azure AD Sync Windows service isn't running or couldn't start. As a result, objects won't synchronize with Azure Active Directory. | Start Microsoft Azure Active Directory Sync Services</b> <ol> <li>Click <b>Start</b>, click <b>Run</b>, type <b>Services.msc</b>, and then click <b>OK</b>.</li> <li>Locate the <b>Microsoft Azure AD Sync service</b>, and then check whether the service is started. If the service isn't started, right-click it, and then click <b>Start</b>. |
| Import from Azure Active Directory failed | The import operation from Azure Active Directory Connector has failed. | Investigate the event log errors of import operation for further details. |
-| Connection to Azure Active Directory failed due to authentication failure | Connection to Azure Active Directory failed due to authentication failure. As a result objects will not be synchronized with Azure Active Directory. | Investigate the event log errors for further details. |
+| Connection to Azure Active Directory failed due to authentication failure | Connection to Azure Active Directory failed due to authentication failure. As a result, objects won't be synchronized with Azure Active Directory. | Investigate the event log errors for further details. |
| Export to Active Directory failed | The export operation to Active Directory Connector has failed. | Investigate the event log errors of export operation for further details. |
| Import from Active Directory failed | Import from Active Directory failed. As a result, objects from some domains from this forest may not be imported. | <li>Verify DC connectivity</li> <li>Rerun import manually</li> <li> Investigate event log errors of the import operation for further details. |
| Export to Azure Active Directory failed | The export operation to Azure Active Directory Connector has failed. As a result, some objects may not be exported successfully to Azure Active Directory. | Investigate the event log errors of export operation for further details. |
-| Password Hash Synchronization heartbeat was skipped in last 120 minutes | Password Hash Synchronization has not connected with Azure Active Directory in the last 120 minutes. As a result, passwords will not be synchronized with Azure Active Directory. | Restart Microsoft Azure Active Directory Sync
+| Password Hash Synchronization heartbeat was skipped in last 120 minutes | Password Hash Synchronization has not connected with Azure Active Directory in the last 120 minutes. As a result, passwords won't be synchronized with Azure Active Directory. | Restart Microsoft Azure Active Directory Sync
| High CPU Usage detected | The percentage of CPU consumption crossed the recommended threshold on this server. | <li>This could be a temporary spike in CPU consumption. Check the CPU usage trend from the Monitoring section.</li><li>Inspect the top processes consuming the highest CPU usage on the server.<ol type="a"><li>You may use the Task Manager or execute the following PowerShell Command: <br> <i>get-process \| Sort-Object -Descending CPU \| Select-Object -First 10</i></li><li>If there are unexpected processes consuming high CPU usage, stop the processes using the following PowerShell command: <br> <i>stop-process -ProcessName [name of the process]</i></li></li></ol><li>If the processes seen in the above list are the intended processes running on the server and the CPU consumption is continuously near the threshold please consider re-evaluating the deployment requirements of this server.</li><li>As a fail-safe option you may consider restarting the server. |
| High Memory Consumption Detected | The percentage of memory consumption of the server is beyond the recommended threshold on this server. | Inspect the top processes consuming the highest memory on the server. You may use the Task Manager or execute the following PowerShell Command:<br> <i>get-process \| Sort-Object -Descending WS \| Select-Object -First 10</i> </br> If there are unexpected processes consuming high memory, stop the processes using the following PowerShell command:<br><i>stop-process -ProcessName [name of the process] </i></li><li> If the processes seen in the above list are the intended processes running on the server, please consider re-evaluating the deployment requirements of this server.</li><li>As a failsafe option, you may consider restarting the server. |
-| Password Hash Synchronization has stopped working | Password Hash Synchronization has stopped. As a result passwords will not be synchronized with Azure Active Directory. | Restart Microsoft Azure Active Directory Sync
+| Password Hash Synchronization has stopped working | Password Hash Synchronization has stopped. As a result, passwords won't be synchronized with Azure Active Directory. | Restart Microsoft Azure Active Directory Sync
| Export to Azure Active Directory was Stopped. Accidental delete threshold was reached | The export operation to Azure Active Directory has failed. There were more objects to be deleted than the configured threshold. As a result, no objects were exported. | <li> The number of objects marked for deletion is greater than the set threshold. Ensure this outcome is desired.</li> <li> To allow the export to continue, perform the following steps: <ol type="a"> <li>Disable Threshold by running Disable-ADSyncExportDeletionThreshold</li> <li>Start Synchronization Service Manager</li> <li>Run Export on Connector with type = Azure Active Directory</li> <li>After successfully exporting the objects, enable Threshold by running: Enable-ADSyncExportDeletionThreshold</li> </ol> </li> |

## Alerts for Active Directory Federation Services

| Alert Name | Description | Remediation |
| --- | --- | --- |
-|Test Authentication Request (Synthetic Transaction) failed to obtain a token | The test authentication requests (Synthetic Transactions) initiated from this server has failed to obtain a token after 5 retries. This may be caused due to transient network issues, AD DS Domain Controller availability or a mis-configured AD FS server. As a result, authentication requests processed by the federation service may fail. The agent uses the Local Computer Account context to obtain a token from the Federation Service. | Ensure that the following steps are taken to validate the health of the server.<ol><li>Validate that there are no additional unresolved alerts for this or other AD FS servers in your farm.</li><li>Validate that this condition is not a transient failure by logging on with a test user from the AD FS login page available at https://{your_adfs_server_name}/adfs/ls/idpinitiatedsignon.aspx</li><li>Go to <a href="https://testconnectivity.microsoft.com">https://testconnectivity.microsoft.com</a> and choose the 'Office 365' tab. Perform the 'Office 365 Single Sign-On Test'.</li><li>Verify if your AD FS service name can be resolved from this server by executing the following command from a command prompt on this server. nslookup your_adfs_server_name</li></ol><p>If the service name cannot be resolved, refer to the FAQ section for instructions of adding a HOST file entry of your AD FS service with the IP address of this server. This will allow the synthetic transaction module running on this server to request a token</p> |
-| The proxy server cannot reach the federation server | This AD FS proxy server is unable to contact the AD FS service. As a result, authentication requests processed by this server will fail. | Perform the following steps to validate the connectivity between this server and the AD FS service. <ol><li> Ensure that the firewall between this server and the AD FS service is configured accurately. </li><li> Ensure that DNS resolution for the AD FS service name appropriately points to the AD FS service that resides within the corporate network. This can be achieved through a DNS server that serves this server in the perimeter network or through entries in the HOSTS files for the AD FS service name. </li><li> Validate the network connectivity by opening up the browser on this server and accessing the federation metadata endpoint, which is at `https://<your-adfs-service-name>/federationmetadata/2007-06/federationmetadata.xml` </li> |
-| The SSL Certificate is about to expire | The TLS/SSL certificate used by the Federation servers is about to expire within 90 days. Once expired, any requests that require a valid TLS connection will fail. For example, for Microsoft 365 customers, mail clients will not be able to authenticate. | Update the TLS/SSL certificate on each AD FS server.<ol><li>Obtain the TLS/SSL certificate with the following requirements.<ol type="a"><li>Enhanced Key Usage is at least Server Authentication. </li><li>Certificate Subject or Subject Alternative Name (SAN) contains the DNS name of the Federation Service or appropriate wild card. For example: sso.contoso.com or *.contoso.com</li></ol></li><li>Install the new TLS/SSL certificate on each server in the local machine certificate store.</li><li>Ensure that the AD FS Service Account has read access to the certificate's Private Key</li></ol></p><p><b>For AD FS 2.0 in Windows Server 2008R2:</b><ul><li>Bind the new TLS/SSL certificate to the web site in IIS, which hosts the Federation Service. Note that you must perform this step on each Federation Server and Federation Server proxy.</li></ul></p><p><b>For AD FS in Windows Server 2012 R2 and later versions:</b> <li> Refer to <a href="/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap">Managing SSL Certificates in AD FS and WAP </a> </li> |
-| AD FS service is not running on the server | Active Directory Federation Service (Windows Service) is not running on this server. Any requests targeted to this server will fail. | To start the Active Directory Federation Service (Windows Service):<ol><li>Log on to the server as an administrator.</li><li> Open services.msc</li><li>Find "Active Directory Federation Services". </li><li>Right-click and select "Start". |
-| DNS for the Federation Service may be misconfigured | The DNS server could be configured to use a CNAME record for the AD FS farm name. It is recommended to use A or AAAA record for AD FS in order for the Windows Integrated Authentication to work seamlessly within your corporate network. | Ensure that the DNS record type of the AD FS farm `<Farm Name>` is not CNAME. Configure it to be an A or AAAA record. |
-| AD FS Auditing is disabled | AD FS Auditing is disabled for the server. AD FS Usage section on the portal will not include data from this server. | If AD FS Audits are not enabled follow these instructions:<ol><li>Grant the AD FS service account the "Generate security audits" right on the AD FS server.<li>Open the local security policy on the server gpedit.msc.</li><li>Navigate to "Computer Configuration\Windows Settings\Local Policies\User Rights Assignment" </li><li>Add the AD FS Service Account to have the "Generate security audits" right.</li></li><li>Run the following command from the command prompt:<br><i>auditpol.exe /set /subcategory:"Application Generated" /failure:enable /success:enable </i></li><li>Update Federation Service Properties to include Success and Failure Audits.<li>In the AD FS console, choose "Edit Federation Service Properties".</li><li>From "Federation Service Properties" dialogue box choose the Events tab and select "Success Audits" and "Failure Audits".</li></li></ol></p><p>After following these steps, AD FS Audit Events should be visible from the Event Viewer. To verify:<ol><li>Go to Event Viewer/ Windows Logs /Security.</li><li>Select Filter Current Logs and select AD FS Auditing from the Event sources drop down. For an active AD FS server with AD FS auditing enabled, events should be visible for the above filtering.</li></ol></p><p>If you have followed these instructions before, but still seeing this alert, it is possible that a Group Policy Object is disabling AD FS auditing. The root cause can be one of the following:<ol><li>AD FS service account is being removed from having the right to Generate Security Audits.</li><li>A custom script in Group Policy Object is disabling success and failure audits based on "Application Generated".</li><li>AD FS configuration is not enabled to generate Success/Failure audits. |
+|Test Authentication Request (Synthetic Transaction) failed to obtain a token | The test authentication requests (Synthetic Transactions) initiated from this server have failed to obtain a token after 5 retries. This may be caused by transient network issues, AD DS Domain Controller availability, or a misconfigured AD FS server. As a result, authentication requests processed by the federation service may fail. The agent uses the Local Computer Account context to obtain a token from the Federation Service. | Ensure that the following steps are taken to validate the health of the server.<ol><li>Validate that there are no additional unresolved alerts for this or other AD FS servers in your farm.</li><li>Validate that this condition isn't a transient failure by logging on with a test user from the AD FS login page available at https://{your_adfs_server_name}/adfs/ls/idpinitiatedsignon.aspx</li><li>Go to <a href="https://testconnectivity.microsoft.com">https://testconnectivity.microsoft.com</a> and choose the 'Office 365' tab. Perform the 'Office 365 single sign-on Test'.</li><li>Verify if your AD FS service name can be resolved from this server by executing the following command from a command prompt on this server: `nslookup your_adfs_server_name`</li></ol><p>If the service name can't be resolved, refer to the FAQ section for instructions for adding a HOST file entry of your AD FS service with the IP address of this server. This will allow the synthetic transaction module running on this server to request a token.</p> |
+| The proxy server can't reach the federation server | This AD FS proxy server is unable to contact the AD FS service. As a result, authentication requests processed by this server will fail. | Perform the following steps to validate the connectivity between this server and the AD FS service. <ol><li> Ensure that the firewall between this server and the AD FS service is configured accurately. </li><li> Ensure that DNS resolution for the AD FS service name appropriately points to the AD FS service that resides within the corporate network. This can be achieved through a DNS server that serves this server in the perimeter network or through entries in the HOSTS files for the AD FS service name. </li><li> Validate the network connectivity by opening up the browser on this server and accessing the federation metadata endpoint, which is at `https://<your-adfs-service-name>/federationmetadata/2007-06/federationmetadata.xml` </li> |
+| The SSL Certificate is about to expire | The TLS/SSL certificate used by the Federation servers is about to expire within 90 days. Once expired, any requests that require a valid TLS connection will fail. For example, for Microsoft 365 customers, mail clients won't be able to authenticate. | Update the TLS/SSL certificate on each AD FS server.<ol><li>Obtain the TLS/SSL certificate with the following requirements.<ol type="a"><li>Enhanced Key Usage is at least Server Authentication. </li><li>Certificate Subject or Subject Alternative Name (SAN) contains the DNS name of the Federation Service or appropriate wild card. For example: sso.contoso.com or *.contoso.com</li></ol></li><li>Install the new TLS/SSL certificate on each server in the local machine certificate store.</li><li>Ensure that the AD FS Service Account has read access to the certificate's Private Key</li></ol></p><p><b>For AD FS 2.0 in Windows Server 2008R2:</b><ul><li>Bind the new TLS/SSL certificate to the web site in IIS, which hosts the Federation Service. Note that you must perform this step on each Federation Server and Federation Server proxy.</li></ul></p><p><b>For AD FS in Windows Server 2012 R2 and later versions:</b> <li> Refer to <a href="/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap">Managing SSL Certificates in AD FS and WAP </a> </li> |
+| AD FS service isn't running on the server | Active Directory Federation Service (Windows Service) isn't running on this server. Any requests targeted to this server will fail. | To start the Active Directory Federation Service (Windows Service):<ol><li>Log on to the server as an administrator.</li><li> Open services.msc</li><li>Find "Active Directory Federation Services". </li><li>Right-click and select "Start". |
+| DNS for the Federation Service may be misconfigured | The DNS server could be configured to use a CNAME record for the AD FS farm name. It's recommended to use an A or AAAA record for AD FS so that Windows Integrated Authentication works seamlessly within your corporate network. | Ensure that the DNS record type of the AD FS farm `<Farm Name>` isn't CNAME. Configure it to be an A or AAAA record. |
+| AD FS Auditing is disabled | AD FS Auditing is disabled for the server. AD FS Usage section on the portal won't include data from this server. | If AD FS Audits aren't enabled, follow these instructions:<ol><li>Grant the AD FS service account the "Generate security audits" right on the AD FS server.<li>Open the local security policy on the server gpedit.msc.</li><li>Navigate to "Computer Configuration\Windows Settings\Local Policies\User Rights Assignment" </li><li>Add the AD FS Service Account to have the "Generate security audits" right.</li></li><li>Run the following command from the command prompt:<br><i>auditpol.exe /set /subcategory:"Application Generated" /failure:enable /success:enable </i></li><li>Update Federation Service Properties to include Success and Failure Audits.<li>In the AD FS console, choose "Edit Federation Service Properties".</li><li>From "Federation Service Properties" dialogue box choose the Events tab and select "Success Audits" and "Failure Audits".</li></li></ol></p><p>After following these steps, AD FS Audit Events should be visible from the Event Viewer. To verify:<ol><li>Go to Event Viewer/ Windows Logs /Security.</li><li>Select Filter Current Logs and select AD FS Auditing from the Event sources drop down. For an active AD FS server with AD FS auditing enabled, events should be visible for the above filtering.</li></ol></p><p>If you've followed these instructions before, but are still seeing this alert, it is possible that a Group Policy Object is disabling AD FS auditing. The root cause can be one of the following:<ol><li>AD FS service account is being removed from having the right to Generate Security Audits.</li><li>A custom script in Group Policy Object is disabling success and failure audits based on "Application Generated".</li><li>AD FS configuration isn't enabled to generate Success/Failure audits. |
| AD FS SSL certificate is self-signed | You are currently using a self-signed certificate as the TLS/SSL certificate in your AD FS farm. As a result, mail client authentication for Microsoft 365 will fail | <p> Update the TLS/SSL certificate on each AD FS server. </p> <ol><li>Obtain a publicly trusted TLS/SSL certificate with the following requirements. </li><li>Certificate installation file contains its private key. </li> <li>Enhanced Key Usage is at least Server Authentication. </li> <li>Certificate Subject or Subject Alternative Name (SAN) contains the DNS name of the Federation Service or appropriate wild card. For example: sso.contoso.com or *.contoso.com </li></ol> <p>Install the new TLS/SSL certificate on each server in the local machine certificate store. </p> <ol>Ensure that the AD FS Service Account has read access to the certificate's Private Key. <br /> <b>For AD FS 2.0 in Windows Server 2008R2: </b> <li>Bind the new TLS/SSL certificate to the web site in IIS, which hosts the Federation Service. Note that you must perform this step on each Federation Server and Federation Server proxy. </li> <br /><b>For AD FS in Windows Server 2012 R2 or later versions: </b> <li> Refer to <a href="/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap">Managing SSL Certificates in AD FS and WAP </a> </li> </ol> |
-| The trust between the proxy server and federation server is not valid | The trust between the federation server proxy and the Federation Service could not be established or renewed. | Update the Proxy Trust Certificate on the proxy server. Re-Run the Proxy Configuration Wizard. |
-| Extranet Lockout Protection Disabled for AD FS | The Extranet Lockout Protection feature is DISABLED on your AD FS farm. This feature protects your users from brute force password attacks from the internet and prevents denial of service attacks against your users when AD DS account lockout policies are in effect. With this feature enabled, if the number of failed extranet login attempts for a user (login attempts made via WAP server and AD FS) exceed the 'ExtranetLockoutThreshold' then AD FS servers will stop processing further login attempts for 'ExtranetObservationWindow' We highly recommend you enable this feature on your AD FS servers. | Run the following command to enable AD FS Extranet Lockout Protection with default values.<br><i>Set-AdfsProperties -EnableExtranetLockout $true</i><br><br>If you have AD lockout policies configured for your users, ensure that the <i>'ExtranetLockoutThreshold'</i> property is set to a value below your AD DS lockout threshold. This ensures that requests that have exceeded the threshold for AD FS are dropped and never validated against your AD DS servers. |
-| Invalid Service Principal Name (SPN) for the AD FS service account | The Service Principal Name of the Federation Service account is not registered or is not unique. As a result, Windows Integrated Authentication from domain-joined clients may not be seamless. | Use [<b>SETSPN -L ServiceAccountName</b>] to list the Service Principals.<br>Use [<b>SETSPN -X</b>] to check for duplicate Service Principal Names.</p><p>If SPN is duplicated for the AD FS service account, remove the SPN from the duplicated account using [<b>SETSPN -d service/namehostname</b>]</p><p>If SPN is not set, use [<b>SETSPN -s {Desired-SPN} {domain_name}\{service_account}</b>] to set the desired SPN for the Federation Service Account. |
-| The Primary AD FS Token Decrypting certificate is about to expire | The Primary AD FS Token Decrypting certificate is about to expire in less than 90 days. AD FS cannot decrypt tokens from trusted claims providers. AD FS cannot decrypt encrypted SSO cookies. The end users will not be able to authenticate to access resources. | If Auto-certificate roll-over is enabled, AD FS manages the Token Decrypting Certificate.</p><p>If you manage your certificate manually, please follow the below instructions. <b>Obtain a new Token Decrypting Certificate.</b><ol type="a"><li>Ensure that the Enhanced Key Usage (EKU) includes "Key Encipherment".</li><li>Subject or Subject Alternative Name (SAN) do not have any restrictions.</li><li>Note that your Federation Servers and Claims Provider partners need to be able to chain to a trusted root certification authority when validating your Token-Decrypting certificate.</li></ol><b>Decide how your Claims Provider partners will trust the new Token-Decrypting certificate</b><ol type="a"><li>Ask partners to pull the Federation Metadata after updating the certificate.</li><li>Share the public key of the new certificate. (.cer file) with the partners. On the Claims Provider partner's AD FS server, launch AD FS Management from the Administrative Tools menu. Under Trust Relationships/Relying Party Trusts, select the trust that was created for you. Under Properties/Encryption click "Browse" to select the new Token-Decrypting certificate and click OK.</li></ol><b>Install the certificate in the local certificate store on each of your Federation Server.</b><ul><li>Ensure that the certificate installation file has the Private Key of the certificate on each server.</li></ul><b>Ensure that the federation service account has access to the new certificate's private key.</b> <b>Add the new certificate to AD FS.</b><ol type="a"><li>Launch AD FS Management from the Administrative Tools menu</li><li>Expand Service and select Certificates</li><li>In the Actions pane, click Add Token-Decrypting Certificate</li><li>You will be presented with a list of certificates that are valid for Token-Decrypting. If you find that your new certificate is not being presented in the list, you need to go back and make sure that the certificate is in the local computer personal store with a private key associated and the certificate has the Key Encipherment as Extended Key Usage.</li><li>Select your new Token-Decrypting certificate and click OK.</li></ol><b>Set the new Token-Decrypting Certificate as Primary.</b><ol type="a"><li>With the Certificates node in AD FS Management selected, you should now see two certificates listed under Token-Decrypting: existing and the new certificate.</li><li>Select your new Token-Decrypting certificate, right-click, and select Set as primary.</li><li>Leave the old certificate as secondary for roll-over purposes. You should plan to remove the old certificate once you are confident it is no longer needed for roll-over, or when the certificate has expired. </li></ol> |
-| The Primary AD FS Token Signing certificate is about to expire | The AD FS token signing certificate is about to expire within 90 days. AD FS cannot issue signed tokens when this certificate is not valid. | <b>Obtain a new Token Signing Certificate.</b><ol type="a"><li>Ensure that the Enhanced Key Usage (EKU) includes "Digital Signature". </li><li>Subject or Subject Alternative Name (SAN) does not have any restrictions. </li><li>Note that your Federation Servers, your Resource Partner Federation Servers and Relying Party Application servers need to be able to chain to a trusted root certificate authority when validating your Token-Signing certificate.</li></ol><b>Install the certificate in the local certificate store on each Federation Server.</b> <ul><li>Ensure that the certificate installation file has the Private Key of the certificate on each server.</li></ul></li><b>Ensure that the Federation Service Account has access to the new certificate's private key.</b> <b>Add the new certificate to AD FS.</b><ol type="a"><li>Launch AD FS Management from the Administrative Tools menu.</li><li>Expand Service and select Certificates</li><li>In the Actions pane, click Add Token-Signing Certificate...</li><li>You will be presented with a list of certificates that are valid for Token-Signing. If you find that your new certificate is not being presented in the list, you need to go back and make sure that the certificate is in the local computer Personal store with private key associated and the certificate has the Digital Signature KU.</li><li>Select your new Token-Signing certificate and click OK</li></ol><b>Inform all the Relying Parties about the change in Token Signing Certificate.</b><ol type="a"><li>Relying Parties that consume AD FS federation metadata, must pull the new Federation Metadata to start using the new certificate.</li><li>Relying Parties that do NOT consume AD FS federation metadata must manually update the public key of the new Token Signing Certificate. Share the .cer file with the Relying Parties.</li></a><b>Set the new Token-Signing Certificate as Primary.</b><ol type="a"><li>With the Certificates node in AD FS Management selected, you should now see two certificates listed under Token-Signing: existing and the new certificate.</li><li>Select your new Token-Signing certificate, right-click, and select Set as <b>primary</b></li><li>Leave the old certificate as secondary for rollover purposes. You should plan to remove the old certificate once you are confident it is no longer needed for rollover, or when the certificate has expired. Note that current users' SSO sessions are signed. Current AD FS Proxy Trust relationships utilize tokens that are signed and encrypted using the old certificate. </li></ol> |
-| AD FS SSL certificate is not found in the local certificate store | The certificate with the thumbprint that is configured as the TLS/SSL certificate in the AD FS database was not found in the local certificate store. As a result, any authentication request over the TLS will fail. For example mail client authentication for Microsoft 365 will fail. | Install the certificate with the configured thumbprint in the local certificate store. |
-| The SSL Certificate expired | The TLS/SSL certificate for the AD FS service has expired. As a result, any authentication requests that require a valid TLS connection will fail. For example: mail client authentication will not be able to authenticate for Microsoft 365. | Update the TLS/SSL certificate on each AD FS server.<ol><li>Obtain the TLS/SSL certificate with the following requirements.<li>Enhanced Key Usage is at least Server Authentication. </li><li>Certificate Subject or Subject Alternative Name (SAN) contains the DNS name of the Federation Service or appropriate wild card. For example: sso.contoso.com or *.contoso.com</li></li><li>Install the new TLS/SSL certificate on each server in the local machine certificate store.</li><li>Ensure that the AD FS Service Account has read access to the certificate's Private Key</li></ol></p><p><b>For AD FS 2.0 in Windows Server 2008R2:</b><ul><li>Bind the new TLS/SSL certificate to the web site in IIS, which hosts the Federation Service. Note that you must perform this step on each Federation Server and Federation Server proxy.</li></ul></p><p><b>For AD FS in Windows Server 2012 R2 or later versions:</b> Refer to: <a href="/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap">Managing SSL Certificates in AD FS and WAP </a> </li> |
-| The Required end points for Azure Active Directory (for Microsoft 365) are not enabled | The following set of end points required by the Exchange Online Services, Azure AD, and Microsoft 365 are not enabled for the federation service: <li>/adfs/services/trust/2005/usernamemixed</li><li>/adfs/ls/</li> | Enable the required end points for the Microsoft Cloud Services on your federation service.<br>For AD FS in Windows Server 2012R2 or later versions <li> Refer to: <a href="/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap">Managing SSL Certificates in AD FS and WAP </a> </li></p> |
+| The trust between the proxy server and federation server isn't valid | The trust between the federation server proxy and the Federation Service couldn't be established or renewed. | Update the Proxy Trust Certificate on the proxy server. Re-Run the Proxy Configuration Wizard. |
+| Extranet Lockout Protection Disabled for AD FS | The Extranet Lockout Protection feature is DISABLED on your AD FS farm. This feature protects your users from brute force password attacks from the internet and prevents denial of service attacks against your users when AD DS account lockout policies are in effect. With this feature enabled, if the number of failed extranet login attempts for a user (login attempts made via WAP server and AD FS) exceeds the 'ExtranetLockoutThreshold', then AD FS servers will stop processing further login attempts for the 'ExtranetObservationWindow'. We highly recommend you enable this feature on your AD FS servers. | Run the following command to enable AD FS Extranet Lockout Protection with default values.<br><i>Set-AdfsProperties -EnableExtranetLockout $true</i><br><br>If you have AD lockout policies configured for your users, ensure that the <i>'ExtranetLockoutThreshold'</i> property is set to a value below your AD DS lockout threshold. This ensures that requests that have exceeded the threshold for AD FS are dropped and never validated against your AD DS servers. |
+| Invalid Service Principal Name (SPN) for the AD FS service account | The Service Principal Name of the Federation Service account isn't registered or isn't unique. As a result, Windows Integrated Authentication from domain-joined clients may not be seamless. | Use [<b>SETSPN -L ServiceAccountName</b>] to list the Service Principals.<br>Use [<b>SETSPN -X</b>] to check for duplicate Service Principal Names.</p><p>If SPN is duplicated for the AD FS service account, remove the SPN from the duplicated account using [<b>SETSPN -d service/namehostname</b>]</p><p>If SPN isn't set, use [<b>SETSPN -s {Desired-SPN} {domain_name}\{service_account}</b>] to set the desired SPN for the Federation Service Account. |
+| The Primary AD FS Token Decrypting certificate is about to expire | The Primary AD FS Token Decrypting certificate is about to expire in less than 90 days. AD FS can't decrypt tokens from trusted claims providers. AD FS can't decrypt encrypted SSO cookies. The end users won't be able to authenticate to access resources. | If Auto-certificate roll-over is enabled, AD FS manages the Token Decrypting Certificate.</p><p>If you manage your certificate manually, please follow the below instructions. <b>Obtain a new Token Decrypting Certificate.</b><ol type="a"><li>Ensure that the Enhanced Key Usage (EKU) includes "Key Encipherment".</li><li>Subject or Subject Alternative Name (SAN) do not have any restrictions.</li><li>Note that your Federation Servers and Claims Provider partners need to be able to chain to a trusted root certification authority when validating your Token-Decrypting certificate.</li></ol><b>Decide how your Claims Provider partners will trust the new Token-Decrypting certificate</b><ol type="a"><li>Ask partners to pull the Federation Metadata after updating the certificate.</li><li>Share the public key of the new certificate. (.cer file) with the partners. On the Claims Provider partner's AD FS server, launch AD FS Management from the Administrative Tools menu. Under Trust Relationships/Relying Party Trusts, select the trust that was created for you. Under Properties/Encryption click "Browse" to select the new Token-Decrypting certificate and click OK.</li></ol><b>Install the certificate in the local certificate store on each of your Federation Server.</b><ul><li>Ensure that the certificate installation file has the Private Key of the certificate on each server.</li></ul><b>Ensure that the federation service account has access to the new certificate's private key.</b> <b>Add the new certificate to AD FS.</b><ol type="a"><li>Launch AD FS Management from the Administrative Tools menu</li><li>Expand Service and select Certificates</li><li>In the Actions pane, click Add Token-Decrypting Certificate</li><li>You'll be presented with a list of certificates that are valid for Token-Decrypting. If you find that your new certificate isn't being presented in the list, you need to go back and make sure that the certificate is in the local computer personal store with a private key associated and the certificate has the Key Encipherment as Extended Key Usage.</li><li>Select your new Token-Decrypting certificate and click OK.</li></ol><b>Set the new Token-Decrypting Certificate as Primary.</b><ol type="a"><li>With the Certificates node in AD FS Management selected, you should now see two certificates listed under Token-Decrypting: existing and the new certificate.</li><li>Select your new Token-Decrypting certificate, right-click, and select Set as primary.</li><li>Leave the old certificate as secondary for roll-over purposes. You should plan to remove the old certificate once you're confident it is no longer needed for roll-over, or when the certificate has expired. </li></ol> |
+| The Primary AD FS Token Signing certificate is about to expire | The AD FS token signing certificate is about to expire within 90 days. AD FS can't issue signed tokens when this certificate isn't valid. | <b>Obtain a new Token Signing Certificate.</b><ol type="a"><li>Ensure that the Enhanced Key Usage (EKU) includes "Digital Signature". </li><li>Subject or Subject Alternative Name (SAN) doesn't have any restrictions. </li><li>Note that your Federation Servers, your Resource Partner Federation Servers and Relying Party Application servers need to be able to chain to a trusted root certificate authority when validating your Token-Signing certificate.</li></ol><b>Install the certificate in the local certificate store on each Federation Server.</b> <ul><li>Ensure that the certificate installation file has the Private Key of the certificate on each server.</li></ul></li><b>Ensure that the Federation Service Account has access to the new certificate's private key.</b> <b>Add the new certificate to AD FS.</b><ol type="a"><li>Launch AD FS Management from the Administrative Tools menu.</li><li>Expand Service and select Certificates</li><li>In the Actions pane, click Add Token-Signing Certificate...</li><li>You'll be presented with a list of certificates that are valid for Token-Signing. If you find that your new certificate isn't being presented in the list, you need to go back and make sure that the certificate is in the local computer Personal store with private key associated and the certificate has the Digital Signature KU.</li><li>Select your new Token-Signing certificate and click OK</li></ol><b>Inform all the Relying Parties about the change in Token Signing Certificate.</b><ol type="a"><li>Relying Parties that consume AD FS federation metadata, must pull the new Federation Metadata to start using the new certificate.</li><li>Relying Parties that do NOT consume AD FS federation metadata must manually update the public key of the new Token Signing Certificate. Share the .cer file with the Relying Parties.</li></a><b>Set the new Token-Signing Certificate as Primary.</b><ol type="a"><li>With the Certificates node in AD FS Management selected, you should now see two certificates listed under Token-Signing: existing and the new certificate.</li><li>Select your new Token-Signing certificate, right-click, and select Set as <b>primary</b></li><li>Leave the old certificate as secondary for rollover purposes. You should plan to remove the old certificate once you're confident it is no longer needed for rollover, or when the certificate has expired. Note that current users' SSO sessions are signed. Current AD FS Proxy Trust relationships utilize tokens that are signed and encrypted using the old certificate. </li></ol> |
+| AD FS SSL certificate isn't found in the local certificate store | The certificate with the thumbprint that is configured as the TLS/SSL certificate in the AD FS database was not found in the local certificate store. As a result, any authentication request over the TLS will fail. For example mail client authentication for Microsoft 365 will fail. | Install the certificate with the configured thumbprint in the local certificate store. |
+| The SSL Certificate expired | The TLS/SSL certificate for the AD FS service has expired. As a result, any authentication requests that require a valid TLS connection will fail. For example: mail client authentication won't be able to authenticate for Microsoft 365. | Update the TLS/SSL certificate on each AD FS server.<ol><li>Obtain the TLS/SSL certificate with the following requirements.<li>Enhanced Key Usage is at least Server Authentication. </li><li>Certificate Subject or Subject Alternative Name (SAN) contains the DNS name of the Federation Service or appropriate wild card. For example: sso.contoso.com or *.contoso.com</li></li><li>Install the new TLS/SSL certificate on each server in the local machine certificate store.</li><li>Ensure that the AD FS Service Account has read access to the certificate's Private Key</li></ol></p><p><b>For AD FS 2.0 in Windows Server 2008R2:</b><ul><li>Bind the new TLS/SSL certificate to the web site in IIS, which hosts the Federation Service. Note that you must perform this step on each Federation Server and Federation Server proxy.</li></ul></p><p><b>For AD FS in Windows Server 2012 R2 or later versions:</b> Refer to: <a href="/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap">Managing SSL Certificates in AD FS and WAP </a> </li> |
+| The Required end points for Azure Active Directory (for Microsoft 365) aren't enabled | The following set of end points required by the Exchange Online Services, Azure AD, and Microsoft 365 aren't enabled for the federation service: <li>/adfs/services/trust/2005/usernamemixed</li><li>/adfs/ls/</li> | Enable the required end points for the Microsoft Cloud Services on your federation service.<br>For AD FS in Windows Server 2012R2 or later versions <li> Refer to: <a href="/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap">Managing SSL Certificates in AD FS and WAP </a> </li></p> |
| The Federation server was unable to connect to the AD FS Configuration Database | The AD FS service account is experiencing issues while connecting to the AD FS configuration database. As a result, the AD FS service on this computer may not function as expected. | <li> Ensure that the AD FS service account has access to the configuration database. </li><li>Ensure that the AD FS Configuration Database service is available and reachable. </li> |
-| Required SSL bindings are missing or not configured | The TLS bindings required for this federation server to successfully perform authentication are misconfigured. As a result, AD FS cannot process any incoming requests. | For Windows Server 2012 R2</b><br>Open an elevated admin command prompt and execute the following commands: <ol> <li> To view the current TLS binding:<i> Get-AdfsSslCertificate </i> <li> To add new bindings: <i> netsh http add sslcert hostnameport=\<federation service name>:443 certhash=0102030405060708090A0B0C0D0E0F1011121314 appid={00112233-4455-6677-8899-AABBCCDDEEFF} certstorename=MY </i> |
-| The Primary AD FS Token Signing certificate has expired | The AD FS Token Signing certificate has expired. AD FS cannot issue signed tokens when this certificate is not valid. | If Auto-certificate rollover is enabled, AD FS will manage updating the Token Signing Certificate.</p><p>If you manage your certificate manually, follow the below instructions. <ol><li><b>Obtain a new Token Signing Certificate.</b><ol type="a"><li>Ensure that the Enhanced Key Usage (EKU) includes "Digital Signature". </li><li>Subject or Subject Alternative Name (SAN) does not have any restrictions. </li><li>Remember that your Federation Servers, your Resource Partner Federation Servers and Relying Party Application servers need to be able to chain to a trusted root certificate authority when validating your Token-Signing certificate.</li></ol></li><li><b>Install the certificate in the local certificate store on each Federation Server.</b> <ul><li>Ensure that the certificate installation file has the Private Key of the certificate on each server.</li></ul></li><li><b>Ensure that the Federation Service Account has access to the new certificate's private key.</b></li><li> <b>Add the new certificate to AD FS.</b><ol type="a"><li>Launch AD FS Management from the Administrative Tools menu.</li><li>Expand Service and select Certificates</li><li>In the Actions pane, click Add Token-Signing Certificate...</li><li>You will be presented with a list of certificates that are valid for Token-Signing. If you find that your new certificate is not being presented in the list, you need to go back and make sure that the certificate is in the local computer Personal store with private key associated and the certificate has the Digital Signature KU.</li><li>Select your new Token-Signing certificate and click OK</li></ol></li><li><b>Inform all the Relying Parties about the change in Token Signing Certificate.</b><ol type="a"><li>Relying Parties that consume AD FS federation metadata, must pull the new Federation Metadata to start using the new certificate.</li><li>Relying Parties that do NOT consume AD FS federation metadata must manually update the public key of the new Token Signing Certificate. Share the .cer file with the Relying Parties.</li></ol></li><li><b>Set the new Token-Signing Certificate as Primary.</b><ol type="a"><li>With the Certificates node in AD FS Management selected, you should now see two certificates listed under Token-Signing: existing and the new certificate.</li><li>Select your new Token-Signing certificate, right-click, and select Set as <b>primary</b></li><li>Leave the old certificate as secondary for rollover purposes. You should plan to remove the old certificate once you are confident it is no longer needed for rollover, or when the certificate has expired. Remember that current users' SSO sessions are signed. Current AD FS Proxy Trust relationships utilize tokens that are signed and encrypted using the old certificate. </li></ol></li>|
-| Proxy server is dropping requests for congestion control | This proxy server is currently dropping requests from the extranet due to a higher than normal latency between this proxy server and the federation server. As a result, certain portion of the authentication requests processed by the AD FS Proxy server can fail. | <li>Verify if the network latency between the Federation Proxy Server and the Federation Servers falls within the acceptable range. Refer to the Monitoring Section for trending values of the "Token Request Latency". A latency greater than [1500 ms] should be considered as high latency. If high latency is observed, ensure the network between AD FS and AD FS Proxy servers does not have any connectivity issues.</li><li>Ensure Federation Servers are not overloaded with authentication requests. Monitoring Section provides trending views for Token Requests per second, CPU utilization and Memory consumption.</li><li>If the above items have been verified and this issue is still seen, adjust the congestion avoidance setting on each of the Federation Proxy Servers as per the guidance from the related links. |
-| The AD FS service account is denied access to one of the certificate's private key. | The AD FS service account does not have access to the private key of one of the AD FS certificates on this computer. | Ensure that the AD FS service account is provided access to the TLS, token signing, and token decryption certificates stored in the local computer certificate store.<ol> <li> From Command Line type MMC.</li><li>Go to File->Add/Remove Snap-In</li><li> Select Certificates and click Add. -> Select Computer Account and click Next. -> Select Local Computer and click Finish. Click OK. </li></ol> <br>Open Certificates(Local Computer)/Personal/Certificates.For all the certificates that are used by AD FS:<ol><li>Right-click the certificate.</li><li>Select All Tasks -> Manage Private Keys.</li><li>On the Security Tab under Group or user names ensure that the AD FS service account is present. If not select Add and add the AD FS service account.</li><li>Select the AD FS service account and under "Permissions for \<AD FS Service Account Name>" make sure Read permission is allowed (check mark). |
-| The AD FS SSL certificate does not have a private key | AD FS TLS/SSL certificate was installed without a private key. As a result any authentication request over the SSL will fail. For example, mail client authentication for Microsoft 365 will fail. | Update the TLS/SSL certificate on each AD FS server.<ol><li>Obtain a publicly trusted TLS/SSL certificate with the following requirements.<ol type="a"><li>Certificate installation file contains its private key.</li><li>Enhanced Key Usage is at least Server Authentication. </li><li>Certificate Subject or Subject Alternative Name (SAN) contains the DNS name of the Federation Service or appropriate wild card. For example: sso.contoso.com or *.contoso.com</li></ol></li><li>Install the new TLS/SSL certificate on each server in the local machine certificate store.</li><li>Ensure that the AD FS Service Account has read access to the certificate's Private Key</li></ol></p><p><b>For AD FS 2.0 in Windows Server 2008R2:</b><ul><li>Bind the new TLS/SSL certificate to the web site in IIS which hosts the Federation Service. Note that you must perform this step on each Federation Server and Federation Server proxy.</li></ul></p><p><b>For AD FS in Windows Server 2012 R2 or later versions:</b> <li> Refer to: <a href="/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap">Managing SSL Certificates in AD FS and WAP </a> </li> |
-| The Primary AD FS Token Decrypting certificate has expired | The Primary AD FS Token Decrypting certificate has expired. AD FS cannot decrypt tokens from trusted claims providers. AD FS cannot decrypt encrypted SSO cookies. The end users will not be able to authenticate to access resources. | <p>If Auto-certificate roll-over is enabled, AD FS manages the Token Decrypting Certificate.</p><p>If you manage your certificate manually, follow the below instructions.<ol><li><b>Obtain a new Token Decrypting Certificate.</b><ul><li>Ensure that the Enhanced Key Usage (EKU) includes "Key Encipherment".</li><li>Subject or Subject Alternative Name (SAN) do not have any restrictions.</li><li>Note that your Federation Servers and Claims Provider partners need to be able to chain to a trusted root certification authority when validating your Token-Decrypting certificate.</li></ul></li><li><b>Decide how your Claims Provider partners will trust the new Token-Decrypting certificate</b><ul><li>Ask partners to pull the Federation Metadata after updating the certificate.</li><li>Share the public key of the new certificate. (.cer file) with the partners. On the Claims Provider partner's AD FS server, launch AD FS Management from the Administrative Tools menu. Under Trust Relationships/Relying Party Trusts, select the trust that was created for you. Under Properties/Encryption click "Browse" to select the new Token-Decrypting certificate and click OK.</li></ul></li><li><b>Install the certificate in the local certificate store on each of your Federation Server.</b><ul><li>Ensure that the certificate installation file has the Private Key of the certificate on each server.</li></ul></li><li><b>Ensure that the federation service account has access to the new certificate's private key.</b></li><li><b>Add the new certificate to AD FS.</b><ul><li>Launch AD FS Management from the Administrative Tools menu</li><li>Expand Service and select Certificates</li><li>In the Actions pane, click Add Token-Decrypting Certificate</li><li>You will be presented with a list of certificates that are valid for Token-Decrypting. If you find that your new certificate is not being presented in the list, you need to go back and make sure that the certificate is in the local computer personal store with a private key associated and the certificate has the Key Encipherment as Extended Key Usage.</li><li>Select your new Token-Decrypting certificate and click OK.</li></ul></li><li><b>Set the new Token-Decrypting Certificate as Primary.</b><ul><li>With the Certificates node in AD FS Management selected, you should now see two certificates listed under Token-Decrypting: existing and the new certificate.</li><li>Select your new Token-Decrypting certificate, right-click, and select Set as primary.</li><li>Leave the old certificate as secondary for roll-over purposes. You should plan to remove the old certificate once you are confident it is no longer needed for roll-over, or when the certificate has expired. </li></ul></li> |
+| Required SSL bindings are missing or not configured | The TLS bindings required for this federation server to successfully perform authentication are misconfigured. As a result, AD FS can't process any incoming requests. | <b>For Windows Server 2012 R2</b><br>Open an elevated admin command prompt and execute the following commands: <ol> <li> To view the current TLS binding: <i>Get-AdfsSslCertificate</i> </li> <li> To add new bindings: <i>netsh http add sslcert hostnameport=\<federation service name>:443 certhash=0102030405060708090A0B0C0D0E0F1011121314 appid={00112233-4455-6677-8899-AABBCCDDEEFF} certstorename=MY</i> </li></ol> (See the certificate check sketch after this table.) |
+| The Primary AD FS Token Signing certificate has expired | The AD FS Token Signing certificate has expired. AD FS can't issue signed tokens when this certificate isn't valid. | <p>If Auto-certificate rollover is enabled, AD FS will manage updating the Token Signing Certificate.</p><p>If you manage your certificate manually, follow the instructions below. <ol><li><b>Obtain a new Token Signing Certificate.</b><ol type="a"><li>Ensure that the Enhanced Key Usage (EKU) includes "Digital Signature". </li><li>Subject or Subject Alternative Name (SAN) doesn't have any restrictions. </li><li>Remember that your Federation Servers, your Resource Partner Federation Servers and Relying Party Application servers need to be able to chain to a trusted root certificate authority when validating your Token-Signing certificate.</li></ol></li><li><b>Install the certificate in the local certificate store on each Federation Server.</b> <ul><li>Ensure that the certificate installation file has the Private Key of the certificate on each server.</li></ul></li><li><b>Ensure that the Federation Service Account has access to the new certificate's private key.</b></li><li> <b>Add the new certificate to AD FS.</b><ol type="a"><li>Launch AD FS Management from the Administrative Tools menu.</li><li>Expand Service and select Certificates.</li><li>In the Actions pane, click Add Token-Signing Certificate...</li><li>You'll be presented with a list of certificates that are valid for Token-Signing. If you find that your new certificate isn't being presented in the list, you need to go back and make sure that the certificate is in the local computer Personal store with its private key associated and the certificate has the Digital Signature KU.</li><li>Select your new Token-Signing certificate and click OK.</li></ol></li><li><b>Inform all the Relying Parties about the change in Token Signing Certificate.</b><ol type="a"><li>Relying Parties that consume AD FS federation metadata must pull the new Federation Metadata to start using the new certificate.</li><li>Relying Parties that do NOT consume AD FS federation metadata must manually update the public key of the new Token Signing Certificate. Share the .cer file with the Relying Parties.</li></ol></li><li><b>Set the new Token-Signing Certificate as Primary.</b><ol type="a"><li>With the Certificates node in AD FS Management selected, you should now see two certificates listed under Token-Signing: existing and the new certificate.</li><li>Select your new Token-Signing certificate, right-click, and select <b>Set as primary</b>.</li><li>Leave the old certificate as secondary for rollover purposes. You should plan to remove the old certificate once you're confident it is no longer needed for rollover, or when the certificate has expired. Remember that current users' SSO sessions are signed. Current AD FS Proxy Trust relationships utilize tokens that are signed and encrypted using the old certificate. </li></ol></li></ol> |
+| Proxy server is dropping requests for congestion control | This proxy server is currently dropping requests from the extranet due to a higher than normal latency between this proxy server and the federation server. As a result, a certain portion of the authentication requests processed by the AD FS Proxy server can fail. | <li>Verify that the network latency between the Federation Proxy Server and the Federation Servers falls within the acceptable range. Refer to the Monitoring Section for trending values of the "Token Request Latency". A latency greater than [1500 ms] should be considered high latency. If high latency is observed, ensure the network between AD FS and AD FS Proxy servers doesn't have any connectivity issues.</li><li>Ensure Federation Servers aren't overloaded with authentication requests. The Monitoring Section provides trending views for Token Requests per second, CPU utilization, and memory consumption.</li><li>If the above items have been verified and this issue is still seen, adjust the congestion avoidance setting on each of the Federation Proxy Servers as per the guidance in the related links.</li> |
+| The AD FS service account is denied access to one of the certificates' private keys | The AD FS service account doesn't have access to the private key of one of the AD FS certificates on this computer. | Ensure that the AD FS service account is provided access to the TLS, token signing, and token decryption certificates stored in the local computer certificate store.<ol> <li>From the command line, type MMC.</li><li>Go to File->Add/Remove Snap-In.</li><li>Select Certificates and click Add. -> Select Computer Account and click Next. -> Select Local Computer and click Finish. Click OK. </li></ol> <br>Open Certificates(Local Computer)/Personal/Certificates. For all the certificates that are used by AD FS:<ol><li>Right-click the certificate.</li><li>Select All Tasks -> Manage Private Keys.</li><li>On the Security tab, under Group or user names, ensure that the AD FS service account is present. If not, select Add and add the AD FS service account.</li><li>Select the AD FS service account and under "Permissions for \<AD FS Service Account Name>" make sure Read permission is allowed (check mark).</li></ol> |
+| The AD FS SSL certificate doesn't have a private key | The AD FS TLS/SSL certificate was installed without a private key. As a result, any authentication request over SSL will fail. For example, mail client authentication for Microsoft 365 will fail. | Update the TLS/SSL certificate on each AD FS server.<ol><li>Obtain a publicly trusted TLS/SSL certificate with the following requirements.<ol type="a"><li>Certificate installation file contains its private key.</li><li>Enhanced Key Usage is at least Server Authentication. </li><li>Certificate Subject or Subject Alternative Name (SAN) contains the DNS name of the Federation Service or an appropriate wildcard. For example: sso.contoso.com or *.contoso.com</li></ol></li><li>Install the new TLS/SSL certificate on each server in the local machine certificate store.</li><li>Ensure that the AD FS Service Account has read access to the certificate's Private Key.</li></ol><p><b>For AD FS 2.0 in Windows Server 2008 R2:</b><ul><li>Bind the new TLS/SSL certificate to the website in IIS that hosts the Federation Service. Note that you must perform this step on each Federation Server and Federation Server proxy.</li></ul></p><p><b>For AD FS in Windows Server 2012 R2 or later versions:</b> <ul><li>Refer to: <a href="/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap">Managing SSL Certificates in AD FS and WAP</a></li></ul></p> |
+| The Primary AD FS Token Decrypting certificate has expired | The Primary AD FS Token Decrypting certificate has expired. AD FS can't decrypt tokens from trusted claims providers. AD FS can't decrypt encrypted SSO cookies. The end users won't be able to authenticate to access resources. | <p>If Auto-certificate roll-over is enabled, AD FS manages the Token Decrypting Certificate.</p><p>If you manage your certificate manually, follow the instructions below.<ol><li><b>Obtain a new Token Decrypting Certificate.</b><ul><li>Ensure that the Enhanced Key Usage (EKU) includes "Key Encipherment".</li><li>Subject or Subject Alternative Name (SAN) doesn't have any restrictions.</li><li>Note that your Federation Servers and Claims Provider partners need to be able to chain to a trusted root certification authority when validating your Token-Decrypting certificate.</li></ul></li><li><b>Decide how your Claims Provider partners will trust the new Token-Decrypting certificate.</b><ul><li>Ask partners to pull the Federation Metadata after updating the certificate.</li><li>Share the public key (.cer file) of the new certificate with the partners. On the Claims Provider partner's AD FS server, launch AD FS Management from the Administrative Tools menu. Under Trust Relationships/Relying Party Trusts, select the trust that was created for you. Under Properties/Encryption click "Browse" to select the new Token-Decrypting certificate and click OK.</li></ul></li><li><b>Install the certificate in the local certificate store on each of your Federation Servers.</b><ul><li>Ensure that the certificate installation file has the Private Key of the certificate on each server.</li></ul></li><li><b>Ensure that the federation service account has access to the new certificate's private key.</b></li><li><b>Add the new certificate to AD FS.</b><ul><li>Launch AD FS Management from the Administrative Tools menu.</li><li>Expand Service and select Certificates.</li><li>In the Actions pane, click Add Token-Decrypting Certificate.</li><li>You'll be presented with a list of certificates that are valid for Token-Decrypting. If you find that your new certificate isn't being presented in the list, you need to go back and make sure that the certificate is in the local computer personal store with a private key associated and the certificate has Key Encipherment as the Extended Key Usage.</li><li>Select your new Token-Decrypting certificate and click OK.</li></ul></li><li><b>Set the new Token-Decrypting Certificate as Primary.</b><ul><li>With the Certificates node in AD FS Management selected, you should now see two certificates listed under Token-Decrypting: existing and the new certificate.</li><li>Select your new Token-Decrypting certificate, right-click, and select <b>Set as primary</b>.</li><li>Leave the old certificate as secondary for roll-over purposes. You should plan to remove the old certificate once you're confident it is no longer needed for roll-over, or when the certificate has expired. </li></ul></li></ol> |
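Several of the remediation steps above start the same way: find out which certificates AD FS is using and when they expire. The sketch below shows that first step in PowerShell, assuming it runs on a Windows Server 2012 R2 or later federation server where the ADFS module is available; `Get-AdfsCertificate` and `Get-AdfsSslCertificate` are the cmdlets the table itself references, while the 30-day warning window is an arbitrary choice for the example.

```powershell
# Sketch: list the AD FS token-signing/decrypting certificates and flag expiry.
Import-Module ADFS

$warningWindow = (Get-Date).AddDays(30)   # arbitrary threshold for this example

Get-AdfsCertificate | ForEach-Object {
    $notAfter = $_.Certificate.NotAfter
    $status = if ($notAfter -lt (Get-Date)) { 'EXPIRED' }
              elseif ($notAfter -lt $warningWindow) { 'Expiring soon' }
              else { 'OK' }
    [pscustomobject]@{
        Type       = $_.CertificateType    # Token-Signing, Token-Decrypting, ...
        IsPrimary  = $_.IsPrimary
        Thumbprint = $_.Thumbprint
        NotAfter   = $notAfter
        Status     = $status
    }
}

# The SSL binding check from the first row of this table:
Get-AdfsSslCertificate
```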
## Alerts for Active Directory Domain Services

| Alert Name | Description | Remediation |
| --- | --- | --- |
-| Domain controller is unreachable via LDAP ping | Domain Controller is not reachable via LDAP Ping. This can be caused due to Network issues or machine issues. As a result, LDAP Pings will fail. | <li>Examine alerts list for related alerts, such as: Domain Controller is not advertising. </li><li>Ensure affected Domain Controller has sufficient disk space. Running out of space will stop the DC from advertising itself as an LDAP server. </li><li> Attempt to find the PDC: Run <br> <i>netdom query fsmo </i> </br> on the affected Domain Controller. <li> Ensure physical network is properly configured/connected. </li> |
+| Domain controller is unreachable via LDAP ping | Domain Controller isn't reachable via LDAP Ping. This can be caused by network or machine issues. As a result, LDAP Pings will fail. | <li>Examine the alerts list for related alerts, such as: Domain Controller isn't advertising. </li><li>Ensure the affected Domain Controller has sufficient disk space. Running out of space will stop the DC from advertising itself as an LDAP server. </li><li>Attempt to find the PDC: Run <br> <i>netdom query fsmo</i> <br> on the affected Domain Controller.</li><li>Ensure the physical network is properly configured/connected.</li> |
| Active Directory replication error encountered | This domain controller is experiencing replication issues, which can be found by going to the Replication Status Dashboard. Replication errors may be due to improper configuration or other related issues. Untreated replication errors can lead to data inconsistency. | See additional details for the names of the affected source and destination DCs. Navigate to Replication Status dashboard and look for the active errors on the affected DCs. Click on the error to open a blade with more details on how to remediate that particular error.|
-| Domain controller is unable to find a PDC | A PDC is not reachable through this domain controller. This will lead to impacted user logons, unapplied group policy changes, and system time synchronization failure. | <li>Examine alerts list for related alerts that could be impacting your PDC, such as: Domain Controller is not advertising. </li> <li>Attempt to find the PDC: Run <br> <i>netdom query fsmo </i> </br> on the affected Domain Controller.<li>Ensure network is working properly. </li> |
-| Domain controller is unable to find a Global Catalog server | A global catalog server is not reachable from this domain controller. It will result in failed authentications attempted through this Domain Controller. | Examine the alerts list for any <b>Domain Controller is not advertising</b> alerts where the impacted server might be a GC. If there are no advertising alerts, check the SRV records for the GCs. You can check them by running: <br> <i> nltest \/dnsgetdc: [ForestName] \/gc </i> </br> It should list the DCs advertising as GCs. If the list is empty, check the DNS configuration to ensure that the GC has registered the SRV records. The DC is able to find them in DNS. <br />For troubleshooting Global Catalogs, see <a href="/previous-versions/windows/it-pro/windows-2000-server/cc961811(v=technet.10)#ECAA">Advertising as a Global Catalog Server. </a> |
-| Domain controller unable to reach local SYSVOL share | Sysvol contains important elements from Group Policy Objects and scripts to be distributed within DCs of a domain. The DC will not advertise itself as DC and Group Policies will not be applied. | See <a href="https://support.microsoft.com/kb/2958414">How to troubleshoot missing SYSVOL and Netlogon shares </a> |
+| Domain controller is unable to find a PDC | A PDC isn't reachable through this domain controller. This will lead to impacted user logons, unapplied group policy changes, and system time synchronization failure. | <li>Examine alerts list for related alerts that could be impacting your PDC, such as: Domain Controller isn't advertising. </li> <li>Attempt to find the PDC: Run <br> <i>netdom query fsmo </i> </br> on the affected Domain Controller.<li>Ensure network is working properly. </li> |
+| Domain controller is unable to find a Global Catalog server | A global catalog server isn't reachable from this domain controller. It will result in failed authentications attempted through this Domain Controller. | Examine the alerts list for any <b>Domain Controller isn't advertising</b> alerts where the impacted server might be a GC. If there are no advertising alerts, check the SRV records for the GCs. You can check them by running: <br> <i> nltest \/dnsgetdc: [ForestName] \/gc </i> </br> It should list the DCs advertising as GCs. If the list is empty, check the DNS configuration to ensure that the GC has registered the SRV records. The DC is able to find them in DNS. <br />For troubleshooting Global Catalogs, see <a href="/previous-versions/windows/it-pro/windows-2000-server/cc961811(v=technet.10)#ECAA">Advertising as a Global Catalog Server. </a> |
+| Domain controller unable to reach local sysvol share | Sysvol contains important elements from Group Policy Objects and scripts to be distributed within DCs of a domain. The DC won't advertise itself as DC and Group Policies won't be applied. | See <a href="https://support.microsoft.com/kb/2958414">How to troubleshoot missing sysvol and Netlogon shares </a> |
| Domain Controller time is out of sync | The time on this Domain Controller is outside of the normal Time Skew range. As a result, Kerberos authentications will fail. | <li>Restart Windows Time Service: Run <br><i>net stop w32time</i> </br> then <br><i>net start w32time </i></br> on the affected Domain Controller.</li><li>Resync Time: Run <br><i>w32tm \/resync </i></br> on the affected Domain Controller. |
-| Domain controller is not advertising | This domain controller is not properly advertising the roles it's capable of performing. This can be caused by problems with replication, DNS misconfiguration, critical services not running, or because of the server not being fully initialized. As a result, domain controllers, domain members, and other devices will not be able to locate this domain controller. Additionally, other domain controllers might not be able to replicate from this domain controller. | Examine alerts list for other related alerts such as: Replication is broken. Domain controller time is out of sync. Netlogon service is not running. DFSR and/or NTFRS services are not running. Identify and troubleshoot related DNS problems: Logon to affected Domain controller. Open System Event Log. If events 5774, 5775 or 5781 are present, see <a href="/previous-versions/windows/it-pro/windows-2000-server/bb727055(v=technet.10)#ECAA">Troubleshooting Domain Controller Locator DNS Records Registration Failure</a> Identify and troubleshoot related Windows Time Service Issues: Ensure Windows Time service is running: Run '<b>net start w32time</b>' on the affected Domain Controller. Restart Windows Time Service: Run '<b>net stop w32time</b>' then '<b>net start w32time</b>' on the affected Domain Controller. |
-| GPSVC service is not running | If the service is stopped or disabled, settings configured by the admin will not be applied and applications and components will not be manageable through Group Policy. Any components or applications that depend on the Group Policy component might not be functional if the service is disabled. | Run <br><i>net start gpsvc </i></br> on the affected Domain Controller. |
-| DFSR and/or NTFRS services are not running | If both DFSR and NTFRS services are stopped, Domain Controllers will not be able to replicate SYSVOL data. SYSVOL Data will be out of consistency. | <li>If using DFSR:<ol type="1" > Run '<b>net start dfsr</b>' on the affected Domain Controller. </li><li>If using NTFRS:<ol type="1" >Run '<b>net start ntfrs</b>' on the affected Domain Controller. </li>|
-| Netlogon service is not running | Logon requests, registration, authentication, and locating of domain controllers will be unavailable on this DC. | Run '<b>net start netlogon</b>' on the affected Domain Controller |
-| W32Time service is not running | If Windows Time Service is stopped, date and time synchronization will be unavailable. If this service is disabled, any services that explicitly depend on it will fail to start. | Run '<b>net start win32Time</b>' on the affected Domain Controller |
-| ADWS service is not running | If Active Directory Web Services service is stopped or disabled, client applications, such as Active Directory PowerShell, will not be able to access or manage any directory service instances that are running locally on this server. | Run '<b>net start adws</b>' on the affected Domain Controller |
-| Root PDC is not Syncing from NTP Server | If you do not configure the PDC to synchronize time from an external or internal time source, the PDC emulator uses its internal clock and is itself the reliable time source for the forest. If time is not accurate on the PDC itself, all computers will have incorrect time settings. | On the affected Domain Controller, open a command prompt. Stop the Time service: net stop w32time</li> <li>Configure the external time source: <br> <i>w32tm \/config \/manualpeerlist: time.windows.com \/syncfromflags:manual \/reliable:yes </i></br><br>Note: Replace time.windows.com with the address of your desired external time source. Start the Time service: <br> <i>net start w32time </i></br> |
-| Domain controller is quarantined | This Domain Controller is not connected to any of the other working Domain Controllers. This may be caused due to improper configuration. As a result, this DC is not being used and will not replicate from/to anyone. | Enable inbound and outbound replication: Run '<b>repadmin /options ServerName -DISABLE_INBOUND_REPL</b>' on the affected Domain Controller. Run '<b>repadmin /options ServerName -DISABLE_OUTBOUND_REPL</b>' on the affected Domain Controller. Create a new replication connection to another Domain Controller:<ol type="1"><li>Open Active Directory Sites and
-| Outbound Replication is Disabled | DCs with disabled Outbound Replication, will not be able to distribute any changes originating within itself. | To enable outbound replication on the affected Domain Controller, follow these steps: Click Start, click Run, type cmd and then click OK. Type the following text, and then press ENTER:<br><i>repadmin /options -DISABLE_OUTBOUND_REPL </i> |
-| Inbound Replication is Disabled | DCs with disabled Inbound Replication, will not have the latest information. This condition can lead to logon failures. | To enable inbound replication on the affected Domain Controller, follow these steps: Click Start, click Run, type cmd and then click OK. Type the following text, and then press ENTER:<br><i>repadmin /options -DISABLE_INBOUND_REPL</i> </br> |
-| LanmanServer service is not running | If this service is disabled, any services that explicitly depend on it will fail to start. | Run '<b>net start LanManServer</b>' on the affected Domain Controller. |
-| Kerberos Key Distribution Center service is not running | If KDC Service is stopped, users will not be able to authentication through this DC using the Kerberos v5 authentication protocol. | Run '<b>net start kdc</b>' on the affected Domain Controller. |
-| DNS service is not running | If DNS Service is stopped, computers and users using that server for DNS purposes will fail to find resources. | Run '<b>net start dns</b>' on the affected Domain Controller. |
-| DC had USN Rollback | When USN rollbacks occur, modifications to objects and attributes are not inbound replicated by destination domain controllers that have previously seen the USN. Because these destination domain controllers believe they are up to date, no replication errors are reported in Directory Service event logs or by monitoring and diagnostic tools. USN rollback may affect the replication of any object or attribute in any partition. The most frequently observed side effect is that user accounts and computer accounts that are created on the rollback domain controller do not exist on one or more replication partners. Or, the password updates that originated on the rollback domain controller do not exist on replication partners. | There are two approaches to recover from a USN rollback: <p>Remove the Domain Controller from the domain, following these steps: <ol type="1"><li>Remove Active Directory from the domain controller to force it to be a stand-alone server. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/332199">332199</a> Domain controllers do not demote gracefully when you use the Active Directory Installation Wizard to force demotion in Windows Server 2003 and in Windows 2000 Server. </li> <li>Shut down the demoted server.</li> <li>On a healthy domain controller, clean up the metadata of the demoted domain controller. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/216498">216498</a> How to remove data in Active Directory after an unsuccessful domain controller demotion</li> <li>If the incorrectly restored domain controller hosts operations master roles, transfer these roles to a healthy domain controller. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/255504">255504</a> Using Ntdsutil.exe to transfer or seize FSMO roles to a domain controller</li> <li>Restart the demoted server.</li> <li>If you are required to, install Active Directory on the stand-alone server again.</li> <li>If the domain controller was previously a global catalog, configure the domain controller to be a global catalog. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/313994">313994</a> How to create or move a global catalog in Windows 2000</li> <li>If the domain controller previously hosted operations master roles, transfer the operations master roles back to the domain controller. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/255504">255504</a> Using Ntdsutil.exe to transfer or seize FSMO roles to a domain controller Restore the system state of a good backup.</li></ol></p> <p>Evaluate whether valid system state backups exist for this domain controller. If a valid system state backup was made before the rolled-back domain controller was incorrectly restored, and the backup contains recent changes that were made on the domain controller, restore the system state from the most recent backup.</p> <p>You can also use the snapshot as a source of a backup. 
Or you can set the database to give itself a new invocation ID using the procedure in the section "To restore a previous version of a virtual domain controller VHD without system state data backup" in <a href="/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd363545(v=ws.10)">this article</a></p></p> |
+| Domain controller isn't advertising | This domain controller isn't properly advertising the roles it's capable of performing. This can be caused by problems with replication, DNS misconfiguration, critical services not running, or because the server isn't fully initialized. As a result, domain controllers, domain members, and other devices won't be able to locate this domain controller. Additionally, other domain controllers might not be able to replicate from this domain controller. | <li>Examine the alerts list for other related alerts, such as: Replication is broken, Domain Controller time is out of sync, Netlogon service isn't running, DFSR and/or NTFRS services aren't running.</li><li>Identify and troubleshoot related DNS problems: Log on to the affected Domain Controller and open the System Event Log. If events 5774, 5775 or 5781 are present, see <a href="/previous-versions/windows/it-pro/windows-2000-server/bb727055(v=technet.10)#ECAA">Troubleshooting Domain Controller Locator DNS Records Registration Failure</a>.</li><li>Identify and troubleshoot related Windows Time Service issues: Ensure the Windows Time service is running: Run '<b>net start w32time</b>' on the affected Domain Controller. Restart the Windows Time Service: Run '<b>net stop w32time</b>' then '<b>net start w32time</b>' on the affected Domain Controller.</li> |
+| GPSVC service isn't running | If the service is stopped or disabled, settings configured by the admin won't be applied and applications and components won't be manageable through Group Policy. Any components or applications that depend on the Group Policy component might not be functional if the service is disabled. | Run <br><i>net start gpsvc </i></br> on the affected Domain Controller. |
+| DFSR and/or NTFRS services aren't running | If both DFSR and NTFRS services are stopped, Domain Controllers won't be able to replicate sysvol data, and the sysvol data will become inconsistent. | <ol type="1"><li>If using DFSR, run '<b>net start dfsr</b>' on the affected Domain Controller.</li><li>If using NTFRS, run '<b>net start ntfrs</b>' on the affected Domain Controller.</li></ol> |
+| Netlogon service isn't running | Logon requests, registration, authentication, and locating of domain controllers will be unavailable on this DC. | Run '<b>net start netlogon</b>' on the affected Domain Controller |
+| W32Time service isn't running | If the Windows Time service is stopped, date and time synchronization will be unavailable. If this service is disabled, any services that explicitly depend on it will fail to start. | Run '<b>net start w32time</b>' on the affected Domain Controller |
+| ADWS service isn't running | If Active Directory Web Services service is stopped or disabled, client applications, such as Active Directory PowerShell, won't be able to access or manage any directory service instances that are running locally on this server. | Run '<b>net start adws</b>' on the affected Domain Controller |
+| Root PDC isn't Syncing from NTP Server | If you do not configure the PDC to synchronize time from an external or internal time source, the PDC emulator uses its internal clock and is itself the reliable time source for the forest. If time isn't accurate on the PDC itself, all computers will have incorrect time settings. | On the affected Domain Controller, open a command prompt. <ol><li>Stop the Time service: <br> <i>net stop w32time</i></li> <li>Configure the external time source: <br> <i>w32tm \/config \/manualpeerlist: time.windows.com \/syncfromflags:manual \/reliable:yes</i> <br>Note: Replace time.windows.com with the address of your desired external time source.</li> <li>Start the Time service: <br> <i>net start w32time</i></li></ol> |
+| Domain controller is quarantined | This Domain Controller isn't connected to any of the other working Domain Controllers. This may be caused by improper configuration. As a result, this DC isn't being used and won't replicate from/to anyone. | Enable inbound and outbound replication: Run '<b>repadmin /options ServerName -DISABLE_INBOUND_REPL</b>' on the affected Domain Controller. Run '<b>repadmin /options ServerName -DISABLE_OUTBOUND_REPL</b>' on the affected Domain Controller. Create a new replication connection to another Domain Controller:<ol type="1"><li>Open Active Directory Sites and Services.</li></ol> |
+| Outbound Replication is Disabled | DCs with outbound replication disabled won't be able to distribute changes that originate on them. | To enable outbound replication on the affected Domain Controller, follow these steps: Click Start, click Run, type cmd, and then click OK. Type the following text, and then press ENTER:<br><i>repadmin /options -DISABLE_OUTBOUND_REPL </i> |
+| Inbound Replication is Disabled | DCs with inbound replication disabled won't have the latest information. This condition can lead to logon failures. | To enable inbound replication on the affected Domain Controller, follow these steps: Click Start, click Run, type cmd, and then click OK. Type the following text, and then press ENTER:<br><i>repadmin /options -DISABLE_INBOUND_REPL</i> </br> |
+| LanmanServer service isn't running | If this service is disabled, any services that explicitly depend on it will fail to start. | Run '<b>net start LanManServer</b>' on the affected Domain Controller. |
+| Kerberos Key Distribution Center service isn't running | If the KDC service is stopped, users won't be able to authenticate through this DC using the Kerberos v5 authentication protocol. | Run '<b>net start kdc</b>' on the affected Domain Controller. (A consolidated service check sketch follows this table.) |
+| DNS service isn't running | If DNS Service is stopped, computers and users using that server for DNS purposes will fail to find resources. | Run '<b>net start dns</b>' on the affected Domain Controller. |
+| DC had USN Rollback | When USN rollbacks occur, modifications to objects and attributes aren't inbound replicated by destination domain controllers that have previously seen the USN. Because these destination domain controllers believe they are up to date, no replication errors are reported in Directory Service event logs or by monitoring and diagnostic tools. USN rollback may affect the replication of any object or attribute in any partition. The most frequently observed side effect is that user accounts and computer accounts that are created on the rollback domain controller do not exist on one or more replication partners. Or, the password updates that originated on the rollback domain controller do not exist on replication partners. | There are two approaches to recover from a USN rollback: <p><b>Remove the Domain Controller from the domain</b>, following these steps: <ol type="1"><li>Remove Active Directory from the domain controller to force it to be a stand-alone server. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/332199">332199</a> Domain controllers do not demote gracefully when you use the Active Directory Installation Wizard to force demotion in Windows Server 2003 and in Windows 2000 Server.</li> <li>Shut down the demoted server.</li> <li>On a healthy domain controller, clean up the metadata of the demoted domain controller. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/216498">216498</a> How to remove data in Active Directory after an unsuccessful domain controller demotion</li> <li>If the incorrectly restored domain controller hosts operations master roles, transfer these roles to a healthy domain controller. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/255504">255504</a> Using Ntdsutil.exe to transfer or seize FSMO roles to a domain controller</li> <li>Restart the demoted server.</li> <li>If you're required to, install Active Directory on the stand-alone server again.</li> <li>If the domain controller was previously a global catalog, configure the domain controller to be a global catalog. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/313994">313994</a> How to create or move a global catalog in Windows 2000</li> <li>If the domain controller previously hosted operations master roles, transfer the operations master roles back to the domain controller. For more information, click the following article number to view the article in the Microsoft Knowledge Base: <br><a href="https://support.microsoft.com/kb/255504">255504</a> Using Ntdsutil.exe to transfer or seize FSMO roles to a domain controller</li></ol></p> <p><b>Restore the system state of a good backup.</b> Evaluate whether valid system state backups exist for this domain controller. If a valid system state backup was made before the rolled-back domain controller was incorrectly restored, and the backup contains recent changes that were made on the domain controller, restore the system state from the most recent backup.</p> <p>You can also use a snapshot as the source of a backup. Or you can set the database to give itself a new invocation ID using the procedure in the section "To restore a previous version of a virtual domain controller VHD without system state data backup" in <a href="/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd363545(v=ws.10)">this article</a>.</p> |
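Most of the service-related rows above reduce to "check whether the service is running and start it". Here's a consolidated sketch of those checks, assuming it runs in an elevated prompt on the affected domain controller; the service list simply mirrors the alerts in this table, and NTFRS or DFSR may legitimately be absent depending on the environment.

```powershell
# Sketch: report the state of the DC services named in the alerts above,
# then show this DC's replication options (DISABLE_INBOUND_REPL /
# DISABLE_OUTBOUND_REPL appear in the output if replication is disabled).
$services = 'gpsvc','dfsr','ntfrs','netlogon','w32time',
            'adws','lanmanserver','kdc','dns'

foreach ($name in $services) {
    $svc = Get-Service -Name $name -ErrorAction SilentlyContinue
    if ($null -eq $svc) {
        "{0,-14} not installed on this DC" -f $name
    }
    else {
        "{0,-14} {1}" -f $svc.Name, $svc.Status
    }
}

repadmin /options $env:COMPUTERNAME
```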
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
#### Installation prerequisites

- Azure AD Connect must be installed on a domain-joined Windows Server 2016 or later - **note that Windows Server 2022 is not yet supported**. You can deploy Azure AD Connect on Windows Server 2016, but since Windows Server 2016 is in extended support, you may need [a paid support program](/lifecycle/policies/fixed#extended-support) if you require support for this configuration. We recommend using domain-joined Windows Server 2019.-- The minimum .Net Framework version required is 4.6.2, and newer versions of .Net are also supported.
+- The minimum .NET Framework version required is 4.6.2, and newer versions of .NET are also supported. (A registry check sketch follows this list.)
- Azure AD Connect can't be installed on Small Business Server or Windows Server Essentials before 2019 (Windows Server Essentials 2019 is supported). The server must be using Windows Server Standard or better. - The Azure AD Connect server must have a full GUI installed. Installing Azure AD Connect on Windows Server Core isn't supported. - The Azure AD Connect server must not have PowerShell Transcription Group Policy enabled if you use the Azure AD Connect wizard to manage Active Directory Federation Services (AD FS) configuration. You can enable PowerShell transcription if you use the Azure AD Connect wizard to manage sync configuration.
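The .NET Framework bullet above can be verified from the registry before running the installer. A minimal sketch, relying on the documented `Release` DWORD under the v4 Full key; 394802 is the published minimum value corresponding to .NET Framework 4.6.2:

```powershell
# Sketch: check the installed .NET Framework 4.x version against the
# 4.6.2 minimum required by Azure AD Connect.
$key = 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
$release = (Get-ItemProperty -Path $key -Name Release -ErrorAction Stop).Release

if ($release -ge 394802) {
    "OK: .NET Framework 4.6.2 or later is installed (Release = $release)."
}
else {
    "Upgrade needed: Release = $release is below the 4.6.2 minimum (394802)."
}
```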
To read more about securing your Active Directory environment, see [Best practic
- You must configure TLS/SSL certificates. For more information, see [Managing SSL/TLS protocols and cipher suites for AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-protocols-in-ad-fs) and [Managing SSL certificates in AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap). - You must configure name resolution. - Breaking and inspecting traffic between Azure AD Connect and Azure AD isn't supported and may disrupt the service.-- If your Hybrid Identity Administrators have MFA enabled, the URL https://secure.aadcdn.microsoftonline-p.com *must* be in the trusted sites list. You're prompted to add this site to the trusted sites list when you're prompted for an MFA challenge and it hasn't been added before. You can use Internet Explorer to add it to your trusted sites.
+- If your Hybrid Identity Administrators have MFA enabled, the URL `https://secure.aadcdn.microsoftonline-p.com` *must* be in the trusted sites list. You're prompted to add this site to the trusted sites list when you're prompted for an MFA challenge and it hasn't been added before. You can use Internet Explorer to add it to your trusted sites.
- If you plan to use Azure AD Connect Health for syncing, ensure that the prerequisites for Azure AD Connect Health are also met. For more information, see [Azure AD Connect Health agent installation](how-to-connect-health-agent-install.md). ### Harden your Azure AD Connect server
We recommend that you harden your Azure AD Connect server to decrease the securi
- Implement dedicated [privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) for all personnel with privileged access to your organization's information systems. - Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) guidance to set up alerts that monitor changes to the trust established between your IdP and Azure AD. -- Enable Multi Factor Authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using AADConnect is that if an attacker can get control over the Azure AD Connect server they can manipulate users in Azure AD. To prevent a attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to e.g. reset a user's password using Azure AD Connect they still cannot bypass the second factor.-- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfering source of autority for existing cloud managed objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch).
+- Enable Multi Factor Authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using Azure AD Connect is that if an attacker can get control over the Azure AD Connect server, they can manipulate users in Azure AD. To prevent an attacker from using these capabilities to take over Azure AD accounts, MFA offers protection: even if an attacker manages to, for example, reset a user's password using Azure AD Connect, they still can't bypass the second factor.
+- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud managed objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch).
- Disable Hard Match Takeover. Hard match takeover allows Azure AD Connect to take control of a cloud managed object and change the source of authority for the object to Active Directory. Once the source of authority of an object is taken over by Azure AD Connect, changes made to the Active Directory object that is linked to the Azure AD object will overwrite the original Azure AD data - including the password hash, if Password Hash Sync is enabled. An attacker could use this capability to take over control of cloud managed objects. To mitigate this risk, [disable hard match takeover](/powershell/module/msonline/set-msoldirsyncfeature?view=azureadps-1.0&preserve-view=true#example-3-block-cloud-object-takeover-through-hard-matching-for-the-tenant). A tenant-level sketch covering this bullet and the previous one follows.
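Both of the preceding bullets can be applied at the tenant level with the MSOnline PowerShell module, which is what the linked references describe. A minimal sketch, assuming the (legacy) MSOnline module is installed; treat the exact feature names as assumptions to verify against the linked Set-MsolDirSyncFeature reference:

```powershell
# Sketch: block soft matching and hard-match takeover for the tenant.
# Feature names are assumed from the linked Set-MsolDirSyncFeature article.
Import-Module MSOnline
Connect-MsolService   # sign in with a sufficiently privileged account

Set-MsolDirSyncFeature -Feature BlockSoftMatch -Enable $true
Set-MsolDirSyncFeature -Feature BlockCloudObjectTakeoverThroughHardMatch -Enable $true

# Verify that both features are now enabled:
Get-MsolDirSyncFeatures | Where-Object { $_.DirSyncFeature -like 'Block*' }
```

### SQL Server used by Azure AD Connect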
active-directory How To Connect Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-preview.md
na Previously updated : 01/21/2022 Last updated : 01/27/2023
This topic describes how to use features currently in preview.
## Azure AD Connect sync V2 endpoint API
-We have deployed a new endpoint (API) for Azure AD Connect that improves the performance of the synchronization service operations to Azure Active Directory. By utilizing the new V2 endpoint, you will experience noticeable performance gains on export and import to Azure AD. This new endpoint also supports syncing groups with up to 250k members. Using this endpoint also allows you to write back Microsoft 365 unified groups, with no maximum membership limit, to your on-premises Active Directory, when group writeback is enabled. For more information see [Azure AD Connect sync V2 endpoint API](how-to-connect-sync-endpoint-api-v2.md).
+We've deployed a new endpoint (API) for Azure AD Connect that improves the performance of the synchronization service operations to Azure Active Directory. By utilizing the new V2 endpoint, you'll experience noticeable performance gains on export and import to Azure AD. This new endpoint also supports syncing groups with up to 250k members. Using this endpoint also allows you to write back Microsoft 365 unified groups, with no maximum membership limit, to your on-premises Active Directory, when group writeback is enabled. For more information, see [Azure AD Connect sync V2 endpoint API](how-to-connect-sync-endpoint-api-v2.md).
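On builds where the V2 endpoint isn't yet the default, the linked article switches the connector over with a short PowerShell sequence. A sketch of that switch, with the module path and cmdlet names taken from the linked V2 endpoint article; verify them against your installed version, since newer releases use V2 by default:

```powershell
# Sketch: switch Azure AD Connect sync import/export to the V2 endpoint,
# per the linked how-to-connect-sync-endpoint-api-v2 article.
# Run on the Azure AD Connect server; pause the sync scheduler first.
Set-ADSyncScheduler -SyncCycleEnabled $false

Import-Module 'C:\Program Files\Microsoft Azure AD Sync\Extensions\AADConnector.psm1'
Set-ADSyncAADConnectorExportApiVersion 2
Set-ADSyncAADConnectorImportApiVersion 2

Set-ADSyncScheduler -SyncCycleEnabled $true
```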
## User writeback

> [!IMPORTANT]
active-directory How To Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso.md
Title: 'Azure AD Connect: Seamless Single Sign-On | Microsoft Docs'
-description: This topic describes Azure Active Directory (Azure AD) Seamless Single Sign-On and how it allows you to provide true single sign-on for corporate desktop users inside your corporate network.
+ Title: 'Azure AD Connect: Seamless single sign-on | Microsoft Docs'
+description: This topic describes Azure Active Directory (Azure AD) Seamless single sign-on and how it allows you to provide true single sign-on for corporate desktop users inside your corporate network.
keywords: what is Azure AD Connect, install Active Directory, required components for Azure AD, SSO, Single Sign-on documentationcenter: ''
na Previously updated : 01/21/2022 Last updated : 01/27/2023
-# Azure Active Directory Seamless Single Sign-On
+# Azure Active Directory Seamless single sign-on
-## What is Azure Active Directory Seamless Single Sign-On?
+## What is Azure Active Directory Seamless single sign-on?
-Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO) automatically signs users in when they are on their corporate devices connected to your corporate network. When enabled, users don't need to type in their passwords to sign in to Azure AD, and usually, even type in their usernames. This feature provides your users easy access to your cloud-based applications without needing any additional on-premises components.
+Azure Active Directory Seamless single sign-on (Azure AD Seamless SSO) automatically signs users in when they are on their corporate devices connected to your corporate network. When enabled, users don't need to type in their passwords to sign in to Azure AD, and usually don't even need to type in their usernames. This feature provides your users easy access to your cloud-based applications without needing any additional on-premises components.
>[!VIDEO https://www.youtube.com/embed/PyeAC85Gm7w]

Seamless SSO can be combined with either the [Password Hash Synchronization](how-to-connect-password-hash-synchronization.md) or [Pass-through Authentication](how-to-connect-pta.md) sign-in methods. Seamless SSO is _not_ applicable to Active Directory Federation Services (AD FS).
-![Seamless Single Sign-On](./media/how-to-connect-sso/sso1.png)
+![Seamless single sign-on](./media/how-to-connect-sso/sso1.png)
## SSO via primary refresh token vs. Seamless SSO

For Windows 10, Windows Server 2016 and later versions, it's recommended to use SSO via primary refresh token (PRT). For Windows 7 and Windows 8.1, it's recommended to use Seamless SSO.
-Seamless SSO needs the user's device to be domain-joined, but it is not used on Windows 10 [Azure AD joined devices](../devices/concept-azure-ad-join.md) or [hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md). SSO on Azure AD joined, Hybrid Azure AD joined, and Azure AD registered devices works based on the [Primary Refresh Token (PRT)](../devices/concept-primary-refresh-token.md)
+Seamless SSO needs the user's device to be domain-joined, but it isn't used on Windows 10 [Azure AD joined devices](../devices/concept-azure-ad-join.md) or [hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md). SSO on Azure AD joined, Hybrid Azure AD joined, and Azure AD registered devices works based on the [Primary Refresh Token (PRT)](../devices/concept-primary-refresh-token.md)
SSO via PRT works once devices are registered with Azure AD, whether hybrid Azure AD joined, Azure AD joined, or personally registered via Add Work or School Account. For more information on how SSO works with Windows 10 using PRT, see [Primary Refresh Token (PRT) and Azure AD](../devices/concept-primary-refresh-token.md).
For more information on how SSO works with Windows 10 using PRT, see: [Primary R
- If an application (for example, `https://myapps.microsoft.com/contoso.com`) forwards a `domain_hint` (OpenID Connect) or `whr` (SAML) parameter - identifying your tenant, or `login_hint` parameter - identifying the user, in its Azure AD sign-in request, users are automatically signed in without entering usernames or passwords. (A URL sketch follows the browser table below.) - Users also get a silent sign-on experience if an application (for example, `https://contoso.sharepoint.com`) sends sign-in requests to Azure AD's endpoints set up as tenants - that is, `https://login.microsoftonline.com/contoso.com/<..>` or `https://login.microsoftonline.com/<tenant_ID>/<..>` - instead of Azure AD's common endpoint - that is, `https://login.microsoftonline.com/common/<...>`. - Sign out is supported. This allows users to choose another Azure AD account to sign in with, instead of being signed in automatically using Seamless SSO.-- Microsoft 365 Win32 clients (Outlook, Word, Excel, and others) with versions 16.0.8730.xxxx and above are supported using a non-interactive flow. For OneDrive, you will have to activate the [OneDrive silent config feature](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/Previews-for-Silent-Sync-Account-Configuration-and-Bandwidth/ba-p/120894) for a silent sign-on experience.
+- Microsoft 365 Win32 clients (Outlook, Word, Excel, and others) with versions 16.0.8730.xxxx and above are supported using a non-interactive flow. For OneDrive, you'll have to activate the [OneDrive silent config feature](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/Previews-for-Silent-Sync-Account-Configuration-and-Bandwidth/ba-p/120894) for a silent sign-on experience.
- It can be enabled via Azure AD Connect.-- It is a free feature, and you don't need any paid editions of Azure AD to use it.-- It is supported on web browser-based clients and Office clients that support [modern authentication](/office365/enterprise/modern-auth-for-office-2013-and-2016) on platforms and browsers capable of Kerberos authentication:
+- It's a free feature, and you don't need any paid editions of Azure AD to use it.
+- It's supported on web browser-based clients and Office clients that support [modern authentication](/office365/enterprise/modern-auth-for-office-2013-and-2016) on platforms and browsers capable of Kerberos authentication:
| OS\Browser | Internet Explorer | Microsoft Edge\*\*\*\* | Google Chrome | Mozilla Firefox | Safari |
| --- | --- | --- | --- | --- | --- |
For more information on how SSO works with Windows 10 using PRT, see: [Primary R
>Microsoft Edge legacy is no longer supported
-\*Requires Internet Explorer version 11 or later. ([Beginning August 17, 2021, Microsoft 365 apps and services will not support IE 11](https://techcommunity.microsoft.com/t5/microsoft-365-blog/microsoft-365-apps-say-farewell-to-internet-explorer-11-and/ba-p/1591666).)
+\*Requires Internet Explorer version 11 or later. ([Beginning August 17, 2021, Microsoft 365 apps and services won't support IE 11](https://techcommunity.microsoft.com/t5/microsoft-365-blog/microsoft-365-apps-say-farewell-to-internet-explorer-11-and/ba-p/1591666).)
\*\*Requires Internet Explorer version 11 or later. Disable Enhanced Protected Mode.
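As a concrete illustration of the `domain_hint`/`login_hint` behavior in the feature list above, here's a sketch that builds an Azure AD authorize URL carrying both hints; the tenant, client ID, redirect URI, and user below are placeholders invented for the example:

```powershell
# Sketch: build an Azure AD sign-in URL with the hints that let Seamless SSO
# sign the user in silently. All identifiers below are placeholders.
$tenant   = 'contoso.com'
$clientId = '00000000-0000-0000-0000-000000000000'

$params = [ordered]@{
    client_id     = $clientId
    response_type = 'id_token'
    redirect_uri  = 'https://localhost/signin'
    scope         = 'openid'
    nonce         = [guid]::NewGuid().ToString()
    domain_hint   = $tenant               # identifies the tenant
    login_hint    = 'user@contoso.com'    # identifies the user
}

$query = ($params.GetEnumerator() | ForEach-Object {
    '{0}={1}' -f $_.Key, [uri]::EscapeDataString([string]$_.Value)
}) -join '&'

# Tenant-specific endpoint (not /common/), as the second bullet requires:
"https://login.microsoftonline.com/$tenant/oauth2/v2.0/authorize?$query"
```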
active-directory How To Connect Sync Technical Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-technical-concepts.md
na Previously updated : 01/15/2018 Last updated : 01/27/2023
In the picture above, the connector is synonymous with the connector space but e
The connector is responsible for all import and export functionality to the system and frees developers from needing to understand how to connect to each system natively when using declarative provisioning to customize data transformations.
-Imports and exports only occur when scheduled, allowing for further insulation from changes occurring within the system, since changes do not automatically propagate to the connected data source. In addition, developers may also create their own connectors for connecting to virtually any data source.
+Imports and exports only occur when scheduled, allowing for further insulation from changes occurring within the system, since changes don't automatically propagate to the connected data source. In addition, developers may also create their own connectors for connecting to virtually any data source.
## Attribute flow

The metaverse is the consolidated view of all joined identities from neighboring connector spaces. In the figure above, attribute flow is depicted by lines with arrowheads for both inbound and outbound flow. Attribute flow is the process of copying or transforming data from one system to another and all attribute flows (inbound or outbound).
Attribute flow occurs between the connector space and the metaverse bi-direction
Attribute flow only occurs when these synchronizations are run. Attribute flows are defined in Synchronization Rules. These can be inbound (ISR in the picture above) or outbound (OSR in the picture above). ## Connected system
-Connected system (aka connected directory) is referring to the remote system Azure AD Connect sync has connected to and reading and writing identity data to and from.
+Connected system refers to the remote system that Azure AD Connect sync has connected to, reading and writing identity data to and from it.
## Connector space Each connected data source is represented as a filtered subset of the objects and attributes in the connector space.
As identities are linked together and authority is assigned for various attribut
Objects are created when an authoritative system projects them into the metaverse. As soon as all connections are removed, the metaverse object is deleted.
-Objects in the metaverse cannot be edited directly. All data in the object must be contributed through attribute flow. The metaverse maintains persistent connectors with each connector space. These connectors do not require reevaluation for each synchronization run. This means that Azure AD Connect sync does not have to locate the matching remote object each time. This avoids the need for costly agents to prevent changes to attributes that would normally be responsible for correlating the objects.
+Objects in the metaverse can't be edited directly. All data in the object must be contributed through attribute flow. The metaverse maintains persistent connectors with each connector space. These connectors don't require reevaluation for each synchronization run. This means that Azure AD Connect sync doesn't have to locate the matching remote object each time. This avoids the need for costly agents to prevent changes to attributes that would normally be responsible for correlating the objects.
When discovering new data sources that may have preexisting objects that need to be managed, Azure AD Connect sync uses a process called a join rule to evaluate potential candidates with which to establish a link.
-Once the link is established, this evaluation does not reoccur and normal attribute flow can occur between the remote connected data source and the metaverse.
+Once the link is established, this evaluation doesn't reoccur and normal attribute flow can occur between the remote connected data source and the metaverse.
## Provisioning When an authoritative source projects a new object into the metaverse, a new connector space object can be created in another connector representing a downstream connected data source. This inherently establishes a link, and attribute flow can proceed bi-directionally.
-Whenever a rule determines that a new connector space object needs to be created, it is called provisioning. However, because this operation only takes place within the connector space, it does not carry over into the connected data source until an export is performed.
+Whenever a rule determines that a new connector space object needs to be created, it's called provisioning. However, because this operation only takes place within the connector space, it doesn't carry over into the connected data source until an export is performed.
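As a rough illustration of the join and provisioning behavior described above, the following model evaluates a join rule once and only provisions a new connector space object when no candidate matches. It isn't the sync engine's actual API; the `employeeID` join attribute and the dictionary shapes are assumptions made for the sketch.

```python
# Illustrative model (not the real sync engine API) of join-then-provision:
# a join rule is evaluated once to find an existing candidate; if none matches,
# provisioning stages a new connector space object for the next export.
def sync_to_connector_space(mv_object, connector_space, join_attr="employeeID"):
    if mv_object.get("link"):                    # persistent connector exists;
        return mv_object["link"]                 # no reevaluation per run
    for cs_object in connector_space:
        if cs_object.get(join_attr) == mv_object.get(join_attr):
            mv_object["link"] = cs_object        # join: link established once
            return cs_object
    new_cs_object = {join_attr: mv_object.get(join_attr), "pending_export": True}
    connector_space.append(new_cs_object)        # provisioning: staged only;
    mv_object["link"] = new_cs_object            # it reaches the connected data
    return new_cs_object                         # source at the next export
```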
## Additional Resources * [Azure AD Connect Sync: Customizing Synchronization options](how-to-connect-sync-whatis.md)
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Learn more: [How it works: Azure AD Multi-Factor Authentication](../authenticati
When you change a user's UPN, the old UPN appears on the user account and notifications might not be received. Use verification codes.
-Learn more: [Common questions about the Microsoft Authenticator app](/account-billing/common-problems-with-the-microsoft-authenticator-app-12d283d1-bcef-4875-9ae5-ac360e2945dd)
+Learn more: [Common questions about the Microsoft Authenticator app](https://prod.support.services.microsoft.com/account-billing/common-questions-about-the-microsoft-authenticator-app-12d283d1-bcef-4875-9ae5-ac360e2945dd)
**Workaround**
active-directory Plan Connect Design Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-design-concepts.md
na Previously updated : 08/10/2018 Last updated : 01/27/2023
The purpose of this document is to describe areas that must be considered while
## sourceAnchor The sourceAnchor attribute is defined as *an attribute immutable during the lifetime of an object*. It uniquely identifies an object as being the same object on-premises and in Azure AD. The attribute is also called **immutableId** and the two names are used interchangeably.
-The word immutable, that is "cannot be changed", is important to this document. Since this attribute's value cannot be changed after it has been set, it is important to pick a design that supports your scenario.
+The word immutable, that is "can't be changed", is important to this document. Since this attribute's value can't be changed after it has been set, it's important to pick a design that supports your scenario.
The attribute is used for the following scenarios:
The attribute is used for the following scenarios:
* If you move from a cloud-only identity to a synchronized identity model, then this attribute allows objects to "hard match" existing objects in Azure AD with on-premises objects. * If you use federation, then this attribute together with the **userPrincipalName** is used in the claim to uniquely identify a user.
-This topic only talks about sourceAnchor as it relates to users. The same rules apply to all object types, but it is only for users this problem usually is a concern.
+This topic only talks about sourceAnchor as it relates to users. The same rules apply to all object types, but it's usually only for users that this problem is a concern.
### Selecting a good sourceAnchor attribute The attribute value must follow the following rules:
The attribute value must follow the following rules:
* Not contain a special character: &#92; ! # $ % & * + / = ? ^ &#96; { } | ~ < > ( ) ' ; : , [ ] " \@ _ * Must be globally unique * Must be either a string, integer, or binary
-* Should not be based on user's name because these can change
-* Should not be case-sensitive and avoid values that may vary by case
+* Shouldn't be based on user's name because these can change
+* Shouldn't be case-sensitive and avoid values that may vary by case
* Should be assigned when the object is created
-If the selected sourceAnchor is not of type string, then Azure AD Connect Base64Encode the attribute value to ensure no special characters appear. If you use another federation server than ADFS, make sure your server can also Base64Encode the attribute.
+If the selected sourceAnchor isn't of type string, then Azure AD Connect Base64-encodes the attribute value to ensure no special characters appear. If you use a federation server other than AD FS, make sure your server can also Base64-encode the attribute.
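As a hedged illustration of that encoding step, the snippet below base64-encodes a GUID's bytes the way an ImmutableID is commonly derived. The little-endian ordering (`bytes_le`) mirrors .NET's `Guid.ToByteArray()`; treat the exact byte order as an assumption to verify against your own federation configuration.

```python
# Sketch: deriving a base64-encoded sourceAnchor (ImmutableID) from an objectGUID.
import base64
import uuid

object_guid = uuid.UUID("c2b478a4-3d69-4c41-9a6f-0123456789ab")  # hypothetical
immutable_id = base64.b64encode(object_guid.bytes_le).decode("ascii")
print(immutable_id)  # pHi0wmk9QUyabwEjRWeJqw==
```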
-The sourceAnchor attribute is case-sensitive. A value of "JohnDoe" is not the same as "johndoe". But you should not have two different objects with only a difference in case.
+The sourceAnchor attribute is case-sensitive. A value of "JohnDoe" isn't the same as "johndoe". But you shouldn't have two different objects with only a difference in case.
-If you have a single forest on-premises, then the attribute you should use is **objectGUID**. This is also the attribute used when you use express settings in Azure AD Connect and also the attribute used by DirSync.
+If you have a single forest on-premises, then the attribute you should use is **objectGUID**. This is also the attribute used when you use express settings in Azure AD Connect and also the attribute used by DirSync.
-If you have multiple forests and do not move users between forests and domains, then **objectGUID** is a good attribute to use even in this case.
+If you have multiple forests and don't move users between forests and domains, then **objectGUID** is a good attribute to use even in this case.
-If you move users between forests and domains, then you must find an attribute that does not change or can be moved with the users during the move. A recommended approach is to introduce a synthetic attribute. An attribute that could hold something that looks like a GUID would be suitable. During object creation, a new GUID is created and stamped on the user. A custom sync rule can be created in the sync engine server to create this value based on the **objectGUID** and update the selected attribute in ADDS. When you move the object, make sure to also copy the content of this value.
+If you move users between forests and domains, then you must find an attribute that doesn't change or can be moved with the users during the move. A recommended approach is to introduce a synthetic attribute. An attribute that could hold something that looks like a GUID would be suitable. During object creation, a new GUID is created and stamped on the user. A custom sync rule can be created in the sync engine server to create this value based on the **objectGUID** and update the selected attribute in AD DS. When you move the object, make sure to also copy the content of this value.
-Another solution is to pick an existing attribute you know does not change. Commonly used attributes include **employeeID**. If you consider an attribute that contains letters, make sure there is no chance the case (upper case vs. lower case) can change for the attribute's value. Bad attributes that should not be used include those attributes with the name of the user. In a marriage or divorce, the name is expected to change, which is not allowed for this attribute. This is also one reason why attributes such as **userPrincipalName**, **mail**, and **targetAddress** are not even possible to select in the Azure AD Connect installation wizard. Those attributes also contain the "\@" character, which is not allowed in the sourceAnchor.
+Another solution is to pick an existing attribute you know doesn't change. Commonly used attributes include **employeeID**. If you consider an attribute that contains letters, make sure there's no chance the case (upper case vs. lower case) can change for the attribute's value. Bad attributes that shouldn't be used include those attributes with the name of the user. In a marriage or divorce, the name is expected to change, which isn't allowed for this attribute. This is also one reason why attributes such as **userPrincipalName**, **mail**, and **targetAddress** aren't even possible to select in the Azure AD Connect installation wizard. Those attributes also contain the "\@" character, which isn't allowed in the sourceAnchor.
### Changing the sourceAnchor attribute
-The sourceAnchor attribute value cannot be changed after the object has been created in Azure AD and the identity is synchronized.
+The sourceAnchor attribute value can't be changed after the object has been created in Azure AD and the identity is synchronized.
For this reason, the following restrictions apply to Azure AD Connect: * The sourceAnchor attribute can only be set during initial installation. If you rerun the installation wizard, this option is read-only. If you need to change this setting, then you must uninstall and reinstall.
-* If you install another Azure AD Connect server, then you must select the same sourceAnchor attribute as previously used. If you have earlier been using DirSync and move to Azure AD Connect, then you must use **objectGUID** since that is the attribute used by DirSync.
-* If the value for sourceAnchor is changed after the object has been exported to Azure AD, then Azure AD Connect sync throws an error and does not allow any more changes on that object before the issue has been fixed and the sourceAnchor is changed back in the source directory.
+* If you install another Azure AD Connect server, then you must select the same sourceAnchor attribute as previously used. If you've previously been using DirSync and are moving to Azure AD Connect, then you must use **objectGUID** since that is the attribute used by DirSync.
+* If the value for sourceAnchor is changed after the object has been exported to Azure AD, then Azure AD Connect sync throws an error and doesn't allow any more changes on that object before the issue has been fixed and the sourceAnchor is changed back in the source directory.
## Using ms-DS-ConsistencyGuid as sourceAnchor
-By default, Azure AD Connect (version 1.1.486.0 and older) uses objectGUID as the sourceAnchor attribute. ObjectGUID is system-generated. You cannot specify its value when creating on-premises AD objects. As explained in section [sourceAnchor](#sourceanchor), there are scenarios where you need to specify the sourceAnchor value. If the scenarios are applicable to you, you must use a configurable AD attribute (for example, ms-DS-ConsistencyGuid) as the sourceAnchor attribute.
+By default, Azure AD Connect (version 1.1.486.0 and older) uses objectGUID as the sourceAnchor attribute. ObjectGUID is system-generated. You can't specify its value when creating on-premises AD objects. As explained in section [sourceAnchor](#sourceanchor), there are scenarios where you need to specify the sourceAnchor value. If the scenarios are applicable to you, you must use a configurable AD attribute (for example, ms-DS-ConsistencyGuid) as the sourceAnchor attribute.
Azure AD Connect (version 1.1.524.0 and after) now facilitates the use of ms-DS-ConsistencyGuid as sourceAnchor attribute. When using this feature, Azure AD Connect automatically configures the synchronization rules to:
Azure AD Connect (version 1.1.524.0 and after) now facilitates the use of ms-DS-
2. For any given on-premises AD User object whose ms-DS-ConsistencyGuid attribute isn't populated, Azure AD Connect writes its objectGUID value back to the ms-DS-ConsistencyGuid attribute in on-premises Active Directory. After the ms-DS-ConsistencyGuid attribute is populated, Azure AD Connect then exports the object to Azure AD. >[!NOTE]
-> Once an on-premises AD object is imported into Azure AD Connect (that is, imported into the AD Connector Space and projected into the Metaverse), you cannot change its sourceAnchor value anymore. To specify the sourceAnchor value for a given on-premises AD object, configure its ms-DS-ConsistencyGuid attribute before it is imported into Azure AD Connect.
+> Once an on-premises AD object is imported into Azure AD Connect (that is, imported into the AD Connector Space and projected into the Metaverse), you can't change its sourceAnchor value anymore. To specify the sourceAnchor value for a given on-premises AD object, configure its ms-DS-ConsistencyGuid attribute before it's imported into Azure AD Connect.
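If you want to pre-populate ms-DS-ConsistencyGuid yourself before the first import, a script along the following lines could copy objectGUID into the attribute. This is a hedged sketch using the open-source `ldap3` library; the domain controller, credentials, and DN are placeholders, and any such change should be tested in a lab first.

```python
# Sketch: copying objectGUID into ms-DS-ConsistencyGuid for one user *before*
# Azure AD Connect first imports the object. Server, account, and DN are
# placeholders, not values from this article.
from ldap3 import Server, Connection, MODIFY_REPLACE, NTLM

server = Server("dc01.contoso.local")                      # placeholder DC
conn = Connection(server, user="CONTOSO\\admin",           # placeholder account
                  password="<placeholder>", authentication=NTLM, auto_bind=True)

user_dn = "CN=John Doe,OU=Users,DC=contoso,DC=local"       # placeholder DN
conn.search(user_dn, "(objectClass=user)", attributes=["objectGUID"])
guid_bytes = conn.entries[0]["objectGUID"].raw_values[0]   # raw 16-byte GUID

# Fix the sourceAnchor before the object is projected into the metaverse.
conn.modify(user_dn, {"mS-DS-ConsistencyGuid": [(MODIFY_REPLACE, [guid_bytes])]})
```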
### Permission required For this feature to work, the AD DS account used to synchronize with on-premises Active Directory must be granted write permission to the ms-DS-ConsistencyGuid attribute in on-premises Active Directory.
When installing Azure AD Connect with Express mode, the Azure AD Connect wizard
* First, the Azure AD Connect wizard queries your Azure AD tenant to retrieve the AD attribute used as the sourceAnchor attribute in the previous Azure AD Connect installation (if any). If this information is available, Azure AD Connect uses the same AD attribute. >[!NOTE]
- > Only newer versions of Azure AD Connect (1.1.524.0 and after) store information in your Azure AD tenant about the sourceAnchor attribute used during installation. Older versions of Azure AD Connect do not.
+ > Only newer versions of Azure AD Connect (1.1.524.0 and after) store information in your Azure AD tenant about the sourceAnchor attribute used during installation. Older versions of Azure AD Connect don't.
-* If information about the sourceAnchor attribute used isn't available, the wizard checks the state of the ms-DS-ConsistencyGuid attribute in your on-premises Active Directory. If the attribute isn't configured on any object in the directory, the wizard uses the ms-DS-ConsistencyGuid as the sourceAnchor attribute. If the attribute is configured on one or more objects in the directory, the wizard concludes the attribute is being used by other applications and is not suitable as sourceAnchor attribute...
+* If information about the sourceAnchor attribute used isn't available, the wizard checks the state of the ms-DS-ConsistencyGuid attribute in your on-premises Active Directory. If the attribute isn't configured on any object in the directory, the wizard uses the ms-DS-ConsistencyGuid as the sourceAnchor attribute. If the attribute is configured on one or more objects in the directory, the wizard concludes the attribute is being used by other applications and isn't suitable as sourceAnchor attribute...
* In which case, the wizard falls back to using objectGUID as the sourceAnchor attribute.
When installing Azure AD Connect with Custom mode, the Azure AD Connect wizard p
| A specific attribute | Select this option if you wish to specify an existing AD attribute as the sourceAnchor attribute. | ### How to enable the ConsistencyGuid feature - Existing deployment
-If you have an existing Azure AD Connect deployment which is using objectGUID as the Source Anchor attribute, you can switch it to using ConsistencyGuid instead.
+If you have an existing Azure AD Connect deployment that is using objectGUID as the Source Anchor attribute, you can switch it to using ConsistencyGuid instead.
>[!NOTE] > Only newer versions of Azure AD Connect (1.1.552.0 and after) support switching from ObjectGuid to ConsistencyGuid as the Source Anchor attribute.
To switch from objectGUID to ConsistencyGuid as the Source Anchor attribute:
![Enable ConsistencyGuid for existing deployment - step 6](./media/plan-connect-design-concepts/consistencyguidexistingdeployment04.png)
-During the analysis (step 4), if the attribute is configured on one or more objects in the directory, the wizard concludes the attribute is being used by another application and returns an error as illustrated in the diagram below. This error can also occur if you have previously enabled the ConsistencyGuid feature on your primary Azure AD Connect server and you are trying to do the same on your staging server.
+During the analysis (step 4), if the attribute is configured on one or more objects in the directory, the wizard concludes the attribute is being used by another application and returns an error as illustrated in the diagram below. This error can also occur if you've previously enabled the ConsistencyGuid feature on your primary Azure AD Connect server and you're trying to do the same on your staging server.
![Enable ConsistencyGuid for existing deployment - error](./media/plan-connect-design-concepts/consistencyguidexistingdeploymenterror.png)
- If you are certain that the attribute isn't used by other existing applications, you can suppress the error by restarting the Azure AD Connect wizard with the **/SkipLdapSearch** switch specified. To do so, run the following command in command prompt:
+ If you're certain that the attribute isn't used by other existing applications, you can suppress the error by restarting the Azure AD Connect wizard with the **/SkipLdapSearch** switch specified. To do so, run the following command at a command prompt:
``` "c:\Program Files\Microsoft Azure Active Directory Connect\AzureADConnect.exe" /SkipLdapSearch ``` ### Impact on AD FS or third-party federation configuration
-If you are using Azure AD Connect to manage on-premises AD FS deployment, the Azure AD Connect automatically updates the claim rules to use the same AD attribute as sourceAnchor. This ensures that the ImmutableID claim generated by ADFS is consistent with the sourceAnchor values exported to Azure AD.
+If you're using Azure AD Connect to manage your on-premises AD FS deployment, Azure AD Connect automatically updates the claim rules to use the same AD attribute as sourceAnchor. This ensures that the ImmutableID claim generated by AD FS is consistent with the sourceAnchor values exported to Azure AD.
-If you are managing AD FS outside of Azure AD Connect or you are using third-party federation servers for authentication, you must manually update the claim rules for ImmutableID claim to be consistent with the sourceAnchor values exported to Azure AD as described in article section [Modify AD FS claim rules](./how-to-connect-fed-management.md#modclaims). The wizard returns the following warning after installation completes:
+If you're managing AD FS outside of Azure AD Connect or you're using third-party federation servers for authentication, you must manually update the claim rules for ImmutableID claim to be consistent with the sourceAnchor values exported to Azure AD as described in article section [Modify AD FS claim rules](./how-to-connect-fed-management.md#modclaims). The wizard returns the following warning after installation completes:
![Third-party federation configuration](./media/plan-connect-design-concepts/consistencyGuid-03.png) ### Adding new directories to existing deployment
-Suppose you have deployed Azure AD Connect with the ConsistencyGuid feature enabled, and now you would like to add another directory to the deployment. When you try to add the directory, Azure AD Connect wizard checks the state of the ms-DS-ConsistencyGuid attribute in the directory. If the attribute is configured on one or more objects in the directory, the wizard concludes the attribute is being used by other applications and returns an error as illustrated in the diagram below. If you are certain that the attribute isn't used by existing applications, you can suppress the error by restarting the Azure AD Connect wizard with the **/SkipLdapSearch** switch specified as described above or you need to contact Support for more information.
+Suppose you've deployed Azure AD Connect with the ConsistencyGuid feature enabled, and now you would like to add another directory to the deployment. When you try to add the directory, the Azure AD Connect wizard checks the state of the ms-DS-ConsistencyGuid attribute in the directory. If the attribute is configured on one or more objects in the directory, the wizard concludes the attribute is being used by other applications and returns an error as illustrated in the diagram below. If you're certain that the attribute isn't used by existing applications, you can suppress the error by restarting the Azure AD Connect wizard with the **/SkipLdapSearch** switch specified as described above, or contact Support for more information.
![Adding new directories to existing deployment](./media/plan-connect-design-concepts/consistencyGuid-04.png) ## Azure AD sign-in
-While integrating your on-premises directory with Azure AD, it is important to understand how the synchronization settings can affect the way user authenticates. Azure AD uses userPrincipalName (UPN) to authenticate the user. However, when you synchronize your users, you must choose the attribute to be used for value of userPrincipalName carefully.
+While integrating your on-premises directory with Azure AD, it's important to understand how the synchronization settings can affect the way users authenticate. Azure AD uses userPrincipalName (UPN) to authenticate the user. However, when you synchronize your users, you must choose the attribute to be used for the value of userPrincipalName carefully.
### Choosing the attribute for userPrincipalName
-When you are selecting the attribute for providing the value of UPN to be used in Azure one should ensure
+When you're selecting the attribute that provides the value of UPN to be used in Azure, ensure that:
-* The attribute values conform to the UPN syntax (RFC 822), that is it should be of the format username\@domain
+* The attribute values conform to the UPN syntax (RFC 822), that is, they should be in the format username\@domain
* The suffix in the values matches one of the verified custom domains in Azure AD
-In express settings, the assumed choice for the attribute is userPrincipalName. If the userPrincipalName attribute does not contain the value you want your users to sign in to Azure, then you must choose **Custom Installation**.
+In express settings, the assumed choice for the attribute is userPrincipalName. If the userPrincipalName attribute doesn't contain the value you want your users to use to sign in to Azure, then you must choose **Custom Installation**.
>[!NOTE] >It's recommended as a best practice that the UPN prefix contains more than one character. ### Custom domain state and UPN
-It is important to ensure that there is a verified domain for the UPN suffix.
+It is important to ensure that there's a verified domain for the UPN suffix.
-John is a user in contoso.com. You want John to use the on-premises UPN john\@contoso.com to sign in to Azure after you have synced users to your Azure AD directory contoso.onmicrosoft.com. To do so, you need to add and verify contoso.com as a custom domain in Azure AD before you can start syncing the users. If the UPN suffix of John, for example contoso.com, does not match a verified domain in Azure AD, then Azure AD replaces the UPN suffix with contoso.onmicrosoft.com.
+John is a user in contoso.com. You want John to use the on-premises UPN john\@contoso.com to sign in to Azure after you've synced users to your Azure AD directory contoso.onmicrosoft.com. To do so, you need to add and verify contoso.com as a custom domain in Azure AD before you can start syncing the users. If the UPN suffix of John, for example contoso.com, doesn't match a verified domain in Azure AD, then Azure AD replaces the UPN suffix with contoso.onmicrosoft.com.
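John's example can be expressed as a short sketch of the suffix handling: a UPN must parse as username\@domain, and a suffix that isn't a verified custom domain falls back to the tenant's default onmicrosoft.com domain. The verified domain set and default domain below are assumptions matching the example tenant.

```python
# Sketch of the UPN suffix handling described above.
import re

VERIFIED_DOMAINS = {"contoso.com"}          # assumed verified custom domains
DEFAULT_DOMAIN = "contoso.onmicrosoft.com"  # tenant's default domain

def azure_ad_upn(on_prem_upn: str) -> str:
    match = re.fullmatch(r"([^@\s]+)@([^@\s]+)", on_prem_upn)
    if not match:
        raise ValueError("value doesn't conform to the username@domain syntax")
    prefix, suffix = match.groups()
    if suffix.lower() in VERIFIED_DOMAINS:
        return on_prem_upn
    return f"{prefix}@{DEFAULT_DOMAIN}"     # unverified suffix: replaced

print(azure_ad_upn("john@contoso.com"))    # john@contoso.com
print(azure_ad_upn("john@contoso.local"))  # john@contoso.onmicrosoft.com
```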
### Non-routable on-premises domains and UPN for Azure AD
-Some organizations have non-routable domains, like contoso.local, or simple single label domains like contoso. You are not able to verify a non-routable domain in Azure AD. Azure AD Connect can sync to only a verified domain in Azure AD. When you create an Azure AD directory, it creates a routable domain that becomes default domain for your Azure AD for example, contoso.onmicrosoft.com. Therefore, it becomes necessary to verify any other routable domain in such a scenario in case you don't want to sync to the default onmicrosoft.com domain.
+Some organizations have non-routable domains, like contoso.local, or simple single-label domains like contoso. You aren't able to verify a non-routable domain in Azure AD. Azure AD Connect can sync to only a verified domain in Azure AD. When you create an Azure AD directory, it creates a routable domain that becomes the default domain for your Azure AD, for example contoso.onmicrosoft.com. Therefore, it becomes necessary to verify any other routable domain in such a scenario in case you don't want to sync to the default onmicrosoft.com domain.
Read [Add your custom domain name to Azure Active Directory](../fundamentals/add-custom-domain.md) for more info on adding and verifying domains.
-Azure AD Connect detects if you are running in a non-routable domain environment and would appropriately warn you from going ahead with express settings. If you are operating in a non-routable domain, then it is likely that the UPN, of the users, have non-routable suffixes too. For example, if you are running under contoso.local, Azure AD Connect suggests you to use custom settings rather than using express settings. Using custom settings, you are able to specify the attribute that should be used as UPN to sign in to Azure after the users are synced to Azure AD.
+Azure AD Connect detects if you're running in a non-routable domain environment and appropriately warns you against going ahead with express settings. If you're operating in a non-routable domain, then it's likely that the UPNs of the users have non-routable suffixes too. For example, if you're running under contoso.local, Azure AD Connect suggests that you use custom settings rather than express settings. Using custom settings, you're able to specify the attribute that should be used as the UPN to sign in to Azure after the users are synced to Azure AD.
## Next steps Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-topologies.md
We recommend having a single tenant in Azure AD for an organization. Before you
This topology implements the following use cases:
-* AADConnect can synchronize the users, groups, and contacts from a single Active Directory to multiple Azure AD tenants. These tenants can be in different Azure environments, such as the Azure China environment or the Azure Government environment, but they could also be in the same Azure environment, such as two tenants that are both in Azure Commercial. For more details on options, see https://docs.microsoft.com/azure/azure-government/documentation-government-plan-identity.
+* AADConnect can synchronize the users, groups, and contacts from a single Active Directory to multiple Azure AD tenants. These tenants can be in different Azure environments, such as the Azure China environment or the Azure Government environment, but they could also be in the same Azure environment, such as two tenants that are both in Azure Commercial. For more details on options, see [Planning identity for Azure Government applications](/azure/azure-government/documentation-government-plan-identity).
* The same Source Anchor can be used for a single object in separate tenants (but not for multiple objects in the same tenant). (The verified domain can't be the same in two tenants. More details are needed to enable the same object to have two UPNs.) * You will need to deploy an AADConnect server for every Azure AD tenant you want to synchronize to - one AADConnect server cannot synchronize to more than one Azure AD tenant. * It is supported to have different sync scopes and different sync rules for different tenants.
active-directory Plan Hybrid Identity Design Considerations Identity Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-identity-adoption-strategy.md
na Previously updated : 04/29/2019 Last updated : 01/27/2023
In this task, you define the hybrid identity adoption strategy for your hybrid i
* [Determine multi-factor authentication requirements](plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md) ## Define business needs strategy
-The first task addresses determining the organizations business needs. This task can be broad and scope creep can occur if you are not careful. In the beginning, keep it simple but always remember to plan for a design that will accommodate and facilitate change in the future. Regardless of whether it is a simple design or a complex one, Azure Active Directory is the Microsoft Identity platform that supports Microsoft 365, Microsoft Online Services, and cloud aware applications.
+The first task addresses determining the organization's business needs. This task can be broad and scope creep can occur if you aren't careful. In the beginning, keep it simple but always remember to plan for a design that will accommodate and facilitate change in the future. Regardless of whether it's a simple design or a complex one, Azure Active Directory is the Microsoft Identity platform that supports Microsoft 365, Microsoft Online Services, and cloud-aware applications.
## Define an integration strategy
-Microsoft has three main integration scenarios: cloud identities, synchronized identities, and federated identities. You should plan on adopting one of these integration strategies. The strategy you choose can vary. Decisions in choosing one may include, what type of user experience you want to provide, do you have an existing infrastructure, and what is the most cost effective.
+Microsoft has three main integration scenarios: cloud identities, synchronized identities, and federated identities. You should plan on adopting one of these integration strategies. The strategy you choose can vary. Decisions in choosing one may include what type of user experience you want to provide, whether you have an existing infrastructure, and which is the most cost-effective.
![integration scenarios](./media/plan-hybrid-identity-design-considerations/integration-scenarios.png) The scenarios defined in the above figure are:
-* **Cloud identities**: identities that exist solely in the cloud. In the case of Azure AD, they would reside specifically in your Azure AD directory.
+* **Cloud identities**: identities that exist solely in the cloud. For Azure AD, they would reside specifically in your Azure AD directory.
* **Synchronized**: identities that exist on-premises and in the cloud. Using Azure AD Connect, users are either created or joined with existing Azure AD accounts. The user's password hash is synchronized from the on-premises environment to the cloud in what is called password hash synchronization. Remember that if a user is disabled in the on-premises environment, it can take up to three hours for that account status to show up in Azure AD. This behavior is due to the synchronization time interval. * **Federated**: identities exist both on-premises and in the cloud. Using Azure AD Connect, users are either created or joined with existing Azure AD accounts.
The following table helps in determining the advantages and disadvantages of eac
| | | | | **Cloud identities** |Easier to manage for a small organization. <br> Nothing to install on-premises. No extra hardware needed<br>Easily disabled if the user leaves the company |Users will need to sign in when accessing workloads in the cloud <br> Passwords may or may not be the same for cloud and on-premises identities | | **Synchronized** |On-premises password authenticates both on-premises and cloud directories <br>Easier to manage for small, medium, or large organizations <br>Users can have single sign-on (SSO) for some resources <br> Microsoft preferred method for synchronization <br> Easier to manage |Some customers may be reluctant to synchronize their directories with the cloud due to specific company policies |
-| **Federated** |Users can have single sign-on (SSO) <br>If a user is terminated or leaves, the account can be immediately disabled and access revoked,<br> Supports advanced scenarios that cannot be accomplished with synchronized |More steps to set up and configure <br> Higher maintenance <br> May require extra hardware for the STS infrastructure <br> May require extra hardware to install the federation server. Other software is required if AD FS is used <br> Require extensive setup for SSO <br> Critical point of failure if the federation server is down, users won't be able to authenticate |
+| **Federated** |Users can have single sign-on (SSO) <br>If a user is terminated or leaves, the account can be immediately disabled and access revoked,<br> Supports advanced scenarios that can't be accomplished with synchronized |More steps to set up and configure <br> Higher maintenance <br> May require extra hardware for the STS infrastructure <br> May require extra hardware to install the federation server. Other software is required if AD FS is used <br> Require extensive setup for SSO <br> Critical point of failure if the federation server is down, users won't be able to authenticate |
### Client experience The strategy that you use will dictate the user sign-in experience. The following tables provide you with information on what the users should expect their sign-in experience to be. Not all federated identity providers support SSO in all scenarios.
The strategy that you use will dictate the user sign-in experience. The followi
| Exchange ActiveSync |Prompt for credentials |single sign-on for Lync, prompted credentials for Exchange | | Mobile apps |Prompt for credentials |Prompt for credentials |
-If you have a third-party IdP or are going to use one to provide federation with Azure AD, you need to be aware of the following supported capabilities:
+If you have a third-party IdP or are going to use one to provide federation with Azure AD, you need to be aware of the following supported capabilities:
* Any SAML 2.0 provider that is compliant for the SP-Lite profile can support authentication to Azure AD and associated applications * Supports passive authentication, which facilitates authentication to OWA, SPO, etc. * Exchange Online clients can be supported via the SAML 2.0 Enhanced Client Profile (ECP)
-You must also be aware of what capabilities will not be available:
+You must also be aware of what capabilities won't be available:
* Without WS-Trust/Federation support, all other active clients break * That means no Lync client, OneDrive client, Office Subscription, Office Mobile prior to Office 2016
You must also be aware of what capabilities will not be available:
> ## Define synchronization strategy
-This task defines the tools that will be used to synchronize the organization's on-premises data to the cloud and what topology you should use. Because, most organizations use Active Directory, information on using Azure AD Connect to address the questions above is provided in some detail. For environments that do not have Active Directory, there is information about using FIM 2010 R2 or MIM 2016 to help plan this strategy. However, future releases of Azure AD Connect will support LDAP directories, so depending on your timeline, this information may be able to assist.
+This task defines the tools that will be used to synchronize the organization's on-premises data to the cloud and what topology you should use. Because most organizations use Active Directory, information on using Azure AD Connect to address the questions above is provided in some detail. For environments that don't have Active Directory, there's information about using FIM 2010 R2 or MIM 2016 to help plan this strategy. However, future releases of Azure AD Connect will support LDAP directories, so depending on your timeline, this information may be able to assist.
### Synchronization tools
-Over the years, several synchronization tools have existed and used for various scenarios. Currently Azure AD Connect is the go to tool of choice for all supported scenarios. AAD Sync and DirSync are also still around and may even be present in your environment now.
+Over the years, several synchronization tools have existed and been used for various scenarios. Currently, Azure AD Connect is the go-to tool for all supported scenarios. Azure AD Sync and DirSync are also still around and may even be present in your environment now.
> [!NOTE] > For the latest information regarding the supported capabilities of each tool, read [Directory integration tools comparison](plan-hybrid-identity-design-considerations-tools-comparison.md) article.
Over the years, several synchronization tools have existed and used for various
### Supported topologies When defining a synchronization strategy, the topology that is used must be determined. Depending on the information that was determined in step 2 you can determine which topology is the proper one to use.
-The single forest, single Azure AD topology is the most common and consists of a single Active Directory forest and a single instance of Azure AD. This topology is going to be used in a most scenarios and is the expected topology when using Azure AD Connect Express installation as shown in the figure below.
+The single forest, single Azure AD topology is the most common and consists of a single Active Directory forest and a single instance of Azure AD. This topology is going to be used in most scenarios and is the expected topology when using Azure AD Connect Express installation as shown in the figure below.
![Supported topologies](./media/plan-hybrid-identity-design-considerations/single-forest.png) Single Forest Scenario
-It is common for large and even small organizations to have multiple forests, as shown in Figure 5.
+It's common for large and even small organizations to have multiple forests, as shown in Figure 5.
> [!NOTE] > For more information about the different on-premises and Azure AD topologies with Azure AD Connect sync read the article [Topologies for Azure AD Connect](plan-connect-topologies.md).
The multi-forest single Azure AD topology should be considered if the following
* Users have only one identity across all forests – the uniquely identifying users section below describes this scenario in more detail. * The user authenticates to the forest in which their identity is located
-* UPN and Source Anchor (immutable id) will come from this forest
+* UPN and Source Anchor (immutable ID) will come from this forest
* All forests are accessible by Azure AD Connect – meaning it does not need to be domain joined and can be placed in a DMZ. * Users have only one mailbox * The forest that hosts a user's mailbox has the best data quality for attributes visible in the Exchange Global Address List (GAL)
-* If there is no mailbox on the user, then any forest may be used to contribute values
-* If you have a linked mailbox, then there is also another account in a different forest used to sign in.
+* If there's no mailbox on the user, then any forest may be used to contribute values
+* If you've a linked mailbox, then there's also another account in a different forest used to sign in.
> [!NOTE] > Objects that exist in both on-premises and in the cloud are "connected" via a unique identifier. In the context of Directory Synchronization, this unique identifier is referred to as the SourceAnchor. In the context of Single Sign-On, this identifier is referred to as the ImmutableId. See [Design concepts for Azure AD Connect](plan-connect-design-concepts.md#sourceanchor) for more considerations regarding the use of SourceAnchor. > >
-If the above are not true and you have more than one active account or more than one mailbox, Azure AD Connect will pick one and ignore the other. If you have linked mailboxes but no other account, accounts will not be exported to Azure AD and that user will not be a member of any groups. This behavior is different from how it was in the past with DirSync and is intentional to better support multi-forest scenarios. A multi-forest scenario is shown in the figure below.
+If the above aren't true and you have more than one active account or more than one mailbox, Azure AD Connect will pick one and ignore the other. If you have linked mailboxes but no other account, accounts won't be exported to Azure AD and that user won't be a member of any groups. This behavior is different from how it was in the past with DirSync and is intentional to better support multi-forest scenarios. A multi-forest scenario is shown in the figure below.
![multiple Azure AD tenants](./media/plan-hybrid-identity-design-considerations/multiforest-multipleAzureAD.png) **Multi-forest multiple Azure AD scenario**
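The mailbox-based precedence rules above can be summarized in a rough model. This is illustrative only, not Azure AD Connect's actual selection logic, and the dictionary shape is an assumption made for the sketch.

```python
# Illustrative model of which forest contributes attribute values for a user
# who exists in several forests, per the rules described above.
def pick_contributing_forest(accounts):
    """accounts: dicts like {'forest': 'emea', 'mailbox': True, 'linked': False}"""
    sign_in = [a for a in accounts if not a.get("linked")]
    if not sign_in:
        return None                    # linked mailbox only: not exported at all
    with_mailbox = [a for a in sign_in if a.get("mailbox")]
    if with_mailbox:
        return with_mailbox[0]         # mailbox forest has the best GAL data
    return sign_in[0]                  # no mailbox: any forest may contribute
```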
-It is recommended to have just a single directory in Azure AD for an organization. However, it is supported if a 1:1 relationship is kept between an Azure AD Connect sync server and an Azure AD directory. For each instance of Azure AD, you need an installation of Azure AD Connect. Also, Azure AD, by design is isolated and users in one instance of Azure AD, will not be able to see users in another instance.
+It's recommended to have just a single directory in Azure AD for an organization. However, it's supported if a 1:1 relationship is kept between an Azure AD Connect sync server and an Azure AD directory. For each instance of Azure AD, you need an installation of Azure AD Connect. Also, Azure AD is isolated by design, and users in one instance of Azure AD won't be able to see users in another instance.
-It is possible and supported to connect one on-premises instance of Active Directory to multiple Azure AD directories as shown in the figure below:
+It's possible and supported to connect one on-premises instance of Active Directory to multiple Azure AD directories as shown in the figure below:
![single forest filtering](./media/plan-hybrid-identity-design-considerations/single-forest-flitering.png)
The following statements must be true:
* Azure AD Connect sync servers must be configured for filtering so they each have a mutually exclusive set of objects. This is done, for example, by scoping each server to a particular domain or OU. * A DNS domain can only be registered in a single Azure AD directory so the UPNs of the users in the on-premises AD must use separate namespaces
-* Users in one instance of Azure AD will only be able to see users from their instance. They will not be able to see users in the other instances
+* Users in one instance of Azure AD will only be able to see users from their instance. They won't be able to see users in the other instances
* Only one of the Azure AD directories can enable Exchange hybrid with the on-premises AD
-* Mutual exclusivity also applies to write-back. Thus, some write-back features are not supported with this topology since it is assumed to be a single on-premises configuration.
+* Mutual exclusivity also applies to write-back. Thus, some write-back features aren't supported with this topology since it's assumed to be a single on-premises configuration.
* Group write-back with default configuration * Device write-back
-The following items are not supported and should not be chosen as an implementation:
+The following items aren't supported and should not be chosen as an implementation:
-* It is not supported to have multiple Azure AD Connect sync servers connecting to the same Azure AD directory even if they are configured to synchronize mutually exclusive set of object
-* It is unsupported to sync the same user to multiple Azure AD directories.
-* It is also unsupported to make a configuration change to make users in one Azure AD to appear as contacts in another Azure AD directory.
-* It is also unsupported to modify Azure AD Connect sync to connect to multiple Azure AD directories.
-* Azure AD directories are by design isolated. It is unsupported to change the configuration of Azure AD Connect sync to read data from another Azure AD directory in an attempt to build a common and unified GAL between the directories. It is also unsupported to export users as contacts to another on-premises AD using Azure AD Connect sync.
+* It isn't supported to have multiple Azure AD Connect sync servers connecting to the same Azure AD directory even if they are configured to synchronize mutually exclusive sets of objects
+* It's unsupported to sync the same user to multiple Azure AD directories.
+* It's also unsupported to make a configuration change to make users in one Azure AD directory appear as contacts in another Azure AD directory.
+* It's also unsupported to modify Azure AD Connect sync to connect to multiple Azure AD directories.
+* Azure AD directories are by design isolated. It's unsupported to change the configuration of Azure AD Connect sync to read data from another Azure AD directory in an attempt to build a common and unified GAL between the directories. It's also unsupported to export users as contacts to another on-premises AD using Azure AD Connect sync.
> [!NOTE] > If your organization restricts computers on your network from connecting to the Internet, this article lists the endpoints (FQDNs, IPv4, and IPv6 address ranges) that you should include in your outbound allow lists and Internet Explorer Trusted Sites Zone of client computers to ensure your computers can successfully use Microsoft 365. For more information read [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2?ui=en-US&rs=en-US&ad=US).
The following items are not supported and should not be chosen as an implementat
> ## Define multi-factor authentication strategy
-In this task, you will define the multi-factor authentication strategy to use. Azure AD Multi-Factor Authentication comes in two different versions. One is a cloud-based and the other is on-premises based using the Azure MFA Server. Based on the evaluation you did above you can determine which solution is the correct one for your strategy. Use the table below to determine which design option best fulfills your companyΓÇÖs security requirement:
+In this task, you'll define the multi-factor authentication strategy to use. Azure AD Multi-Factor Authentication comes in two different versions. One is cloud-based and the other is on-premises based, using the Azure MFA Server. Based on the evaluation you did above, you can determine which solution is the correct one for your strategy. Use the table below to determine which design option best fulfills your company's security requirements:
Multi-factor design options:
active-directory Plan Hybrid Identity Design Considerations Incident Response Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-incident-response-requirements.md
Title: Hybrid identity design - incident response requirements Azure | Microsoft Docs
-description: Determine monitoring and reporting capabilities for the hybrid identity solution that can be leveraged by IT to take actions to identify and mitigate a potential threats
+description: Determine monitoring and reporting capabilities for the hybrid identity solution that can be leveraged by IT to take actions to identify and mitigate a potential threat.
documentationcenter: ''
na Previously updated : 07/18/2017 Last updated : 01/27/2023 # Determine incident response requirements for your hybrid identity solution
-Large or medium organizations most likely will have a [security incident response](/previous-versions/tn-archive/cc700825(v=technet.10)) in place to help IT take actions accordingly to the level of incident. The identity management system is an important component in the incident response process because it can be used to help identifying who performed a specific action against the target. The hybrid identity solution must be able to provide monitoring and reporting capabilities that can be leveraged by IT to take actions to identify and mitigate a potential threat. In a typical incident response plan you will have the following phases as part of the plan:
+Large or medium organizations will most likely have a [security incident response](/previous-versions/tn-archive/cc700825(v=technet.10)) in place to help IT take actions according to the level of incident. The identity management system is an important component in the incident response process because it can be used to help identify who performed a specific action against the target. The hybrid identity solution must be able to provide monitoring and reporting capabilities that can be leveraged by IT to take actions to identify and mitigate a potential threat. In a typical incident response plan you'll have the following phases as part of the plan:
1. Initial assessment. 2. Incident communication.
Many times the identity system can also help in initial assessment phase mainly
The identity management system should assist IT admins in identifying and reporting suspicious activities. Usually these technical requirements can be fulfilled by monitoring all systems and having a reporting capability that can highlight potential threats. Use the questions below to help you design your hybrid identity solution while taking into consideration incident response requirements:
-* Does your company has a security incident response in place?
+* Does your company have a security incident response in place?
* If yes, is the current identity management system used as part of the process? * Does your company need to identify suspicious sign-on attempts from users across different devices? * Does your company need to detect potential compromised userΓÇÖs credentials?
The identity management system should assist IT admins to identify and report th
* Does your company need to know when a user resets their password? ## Policy enforcement
-During damage control and risk reduction-phase, it is important to quickly reduce the actual and potential effects of an attack. That action that you will take at this point can make the difference between a minor and a major one. The exact response will depend on your organization and the nature of the attack that you face. If the initial assessment concluded that an account was compromised, you will need to enforce policy to block this account. That's just one example where the identity management system will be leveraged. Use the questions below to help you design your hybrid identity solution while taking into consideration how policies will be enforced to react to an ongoing incident:
+During the damage control and risk reduction phase, it's important to quickly reduce the actual and potential effects of an attack. The action that you take at this point can make the difference between a minor incident and a major one. The exact response will depend on your organization and the nature of the attack that you face. If the initial assessment concluded that an account was compromised, you'll need to enforce policy to block this account. That's just one example where the identity management system will be leveraged. Use the questions below to help you design your hybrid identity solution while taking into consideration how policies will be enforced to react to an ongoing incident:
* Does your company have policies in place to block users from access the network if necessary?
- * If yes, does the current solution integrate with the hybrid identity management system that you are going to adopt?
+ * If yes, does the current solution integrate with the hybrid identity management system that you're going to adopt?
* Does your company need to enforce Conditional Access for users that are in quarantine? > [!NOTE]
-> Make sure to take notes of each answer and understand the rationale behind the answer. [Define data protection strategy](plan-hybrid-identity-design-considerations-data-protection-strategy.md) will go over the options available and advantages/disadvantages of each option. By having answered those questions you will select which option best suits your business needs.
+> Make sure to take notes of each answer and understand the rationale behind the answer. [Define data protection strategy](plan-hybrid-identity-design-considerations-data-protection-strategy.md) will go over the options available and the advantages/disadvantages of each option. By answering those questions, you'll select which option best suits your business needs.
> >
active-directory Reference Connect Dirsync Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-dirsync-deprecated.md
na Previously updated : 07/13/2017 Last updated : 01/27/2023
# Upgrade Windows Azure Active Directory Sync and Azure Active Directory Sync
-Azure AD Connect is the best way to connect your on-premises directory with Azure AD and Microsoft 365. This is a great time to upgrade to Azure AD Connect from Windows Azure Active Directory Sync (DirSync) or Azure AD Sync (AADSync) as these tools are now deprecated and do not work anymore.
+Azure AD Connect is the best way to connect your on-premises directory with Azure AD and Microsoft 365. This is a great time to upgrade to Azure AD Connect from Windows Azure Active Directory Sync (DirSync) or Azure AD Sync (AADSync) as these tools are now deprecated and don't work anymore.
The two identity synchronization tools that are deprecated were offered for single forest customers (DirSync) and for multi-forest and other advanced customers (Azure AD Sync). These older tools have been replaced with a single solution that is available for all scenarios: Azure AD Connect. It offers new functionality, feature enhancements, and support for new scenarios. To be able to continue to synchronize your on-premises identity data to Azure AD and Microsoft 365, you must upgrade to Azure AD Connect.
Azure AD Connect is the successor to DirSync and Azure AD Sync. It combines all
|April 1st, 2021| Windows Azure Active Directory Sync ("DirSync") and Microsoft Azure Active Directory Sync ("Azure AD Sync") no longer work | ## How to transition to Azure AD Connect
-If you are running DirSync, there are two ways you can upgrade: In-place upgrade and parallel deployment. An in-place upgrade is recommended for most customers and if you have a recent operating system and less than 50,000 objects. In other cases, it is recommended to do a parallel deployment where your DirSync configuration is moved to a new server running Azure AD Connect.
+If you're running DirSync, there are two ways you can upgrade: In-place upgrade and parallel deployment. An in-place upgrade is recommended for most customers and if you have a recent operating system and less than 50,000 objects. In other cases, it's recommended to do a parallel deployment where your DirSync configuration is moved to a new server running Azure AD Connect.
| Solution | Scenario | | | | | [Upgrade from DirSync](how-to-dirsync-upgrade-get-started.md) |<li>If you have an existing DirSync server already running.</li> |
-| [Upgrade from Azure AD Sync](how-to-upgrade-previous-version.md) |<li>If you are moving from Azure AD Sync.</li> |
+| [Upgrade from Azure AD Sync](how-to-upgrade-previous-version.md) |<li>If you're moving from Azure AD Sync.</li> |
## FAQ
The notification was also sent to customers using Azure AD Connect with a build
DirSync/Azure AD Sync will continue to work on April 13, 2017. However, Azure AD may no longer accept communications from DirSync/Azure AD Sync after December 31, 2017. DirSync and Azure AD Sync will no longer work after April 1st, 2021. **Q: Which DirSync versions can I upgrade from?**
-It is supported to upgrade from any DirSync release currently being used.
+It's supported to upgrade from any DirSync release currently being used.
**Q: What about the Azure AD Connector for FIM/MIM?**
-The Azure AD Connector for FIM/MIM has **not** been announced as deprecated. It is at **feature freeze**; no new functionality is added and it receives no bug fixes. Microsoft recommends customers using it to plan to move from it to Azure AD Connect. It is strongly recommended to not start any new deployments using it. This Connector will be announced deprecated in the future.
+The Azure AD Connector for FIM/MIM has **not** been announced as deprecated. It's at **feature freeze**; no new functionality is added and it receives no bug fixes. Microsoft recommends customers using it to plan to move from it to Azure AD Connect. It's strongly recommended to not start any new deployments using it. This Connector will be announced deprecated in the future.
## Additional Resources * [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
active-directory Reference Connect Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-instances.md
Azure AD Connect is most commonly used with the world-wide instance of Azure AD and Microsoft 365. But there are also other instances and these have different requirements for URLs and other special considerations. ## Microsoft Cloud Germany
-The [Microsoft Cloud Germany](https://www.microsoft.de/cloud-deutschland) is a sovereign cloud operated by a German data trustee.
+The [Microsoft Cloud Germany](https://www.microsoft.com/de-de/microsoft-cloud) is a sovereign cloud operated by a German data trustee.
| URLs to open in proxy server | | |
Features currently not present in the Microsoft Cloud Germany:
## Microsoft Azure Government The [Microsoft Azure Government cloud](https://azure.microsoft.com/features/gov/) is a cloud for US government.
-This cloud has been supported by earlier releases of DirSync. From build 1.1.180 of Azure AD Connect, the next generation of the cloud is supported. This generation is using US-only based endpoints and have a different list of URLs to open in your proxy server.
+This cloud has been supported by earlier releases of DirSync. From build 1.1.180 of Azure AD Connect, the next generation of the cloud is supported. This generation uses US-only endpoints and has a different list of URLs to open in your proxy server.
| URLs to open in proxy server | | |
active-directory Reference Connect Msexchuserholdpolicies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-msexchuserholdpolicies.md
na Previously updated : 09/15/2020 Last updated : 01/27/2023
The following reference document describes these attributes used by Exchange and
## What are msExchUserHoldPolicies and cloudMsExchUserHoldPolicies? There are two types of [holds](/Exchange/policy-and-compliance/holds/holds) available for an Exchange Server: Litigation Hold and In-Place Hold. When Litigation Hold is enabled, all mailbox items are placed on hold. An In-Place Hold is used to preserve only those items that meet the criteria of a search query that you defined by using the In-Place eDiscovery tool.
-The MsExchUserHoldPolcies and cloudMsExchUserHoldPolicies attributes allow on-premises AD and Azure AD to determine which users are under a hold depending on whether they are using on-premises Exchange or Exchange on-line.
+The MsExchUserHoldPolicies and cloudMsExchUserHoldPolicies attributes allow on-premises AD and Azure AD to determine which users are under a hold, depending on whether they're using on-premises Exchange or Exchange Online.
## msExchUserHoldPolicies synchronization flow By default, MsExchUserHoldPolicies is synchronized by Azure AD Connect directly to the msExchUserHoldPolicies attribute in the metaverse and then to the msExchUserHoldPolicies attribute in Azure AD
Outbound to Azure AD:
|Azure Active Directory|msExchUserHoldPolicies|Direct|msExchUserHoldPolicies|Out to AAD – UserExchangeOnline| ## cloudMsExchUserHoldPolicies synchronization flow
-By default cloudMsExchUserHoldPolicies are synchronized by Azure AD Connect directly to the cloudMsExchUserHoldPolicies attribute in the metaverse. Then, if msExchUserHoldPolicies is not null in the metaverse, the attribute in flowed out to Active Directory.
+By default, cloudMsExchUserHoldPolicies is synchronized by Azure AD Connect directly to the cloudMsExchUserHoldPolicies attribute in the metaverse. Then, if msExchUserHoldPolicies isn't null in the metaverse, the attribute is flowed out to Active Directory.
The following tables describe the flow:
Outbound to on-premises Active Directory:
|Azure Active Directory|cloudMsExchUserHoldPolicies|IF(NOT NULL)|msExchUserHoldPolicies|Out to AD – UserExchangeOnline| ## Information on the attribute behavior
-The msExchangeUserHoldPolicies are a single authority attribute. A single authority attribute can be set on an object (in this case, user object) in the on-premises directory or in the cloud directory. The Start of Authority rules dictate, that if the attribute is synchronized from on-premises, then Azure AD will not be allowed to update this attribute.
+The msExchangeUserHoldPolicies attribute is a single authority attribute. A single authority attribute can be set on an object (in this case, a user object) in the on-premises directory or in the cloud directory. The Start of Authority rules dictate that if the attribute is synchronized from on-premises, then Azure AD won't be allowed to update this attribute.
-To allow users to set a hold policy on a user object in the cloud, the cloudMSExchangeUserHoldPolicies attribute is used. This attribute is used because Azure AD cannot set msExchangeUserHoldPolicies directly based on the rules explained above. This attribute will then synchronize back to the on-premises directory if, the msExchangeUserHoldPolicies is not null and replace the current value of msExchangeUserHoldPolicies.
+To allow users to set a hold policy on a user object in the cloud, the cloudMSExchangeUserHoldPolicies attribute is used. This attribute is used because Azure AD can't set msExchangeUserHoldPolicies directly, based on the rules explained above. This attribute will then synchronize back to the on-premises directory if msExchangeUserHoldPolicies isn't null, and it replaces the current value of msExchangeUserHoldPolicies.
Under certain circumstances, for instance, if both values are changed on-premises and in Azure at the same time, this behavior can cause conflicts.
active-directory Reference Connect Version History Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history-archive.md
Released: February 2016
* [Automatic upgrade](how-to-connect-install-automatic-upgrade.md) feature for Express settings customers. * Support for the Hybrid Identity Administrator by using Azure AD Multi-Factor Authentication and Privileged Identity Management in the installation wizard.
- * You need to allow your proxy to also allow traffic to https://secure.aadcdn.microsoftonline-p.com if you use Multi-Factor Authentication.
+ * You need to allow your proxy to also allow traffic to `https://secure.aadcdn.microsoftonline-p.com` if you use Multi-Factor Authentication.
* You need to add `https://secure.aadcdn.microsoftonline-p.com` to your trusted sites list for Multi-Factor Authentication to properly work. * Allow changing the user's sign-in method after initial installation. * Allow [Domain and OU filtering](how-to-connect-install-custom.md#domain-and-ou-filtering) in the installation wizard. This also allows connecting to forests where not all domains are available.
active-directory Tshoot Connect Attribute Not Syncing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-attribute-not-syncing.md
na Previously updated : 01/31/2019 Last updated : 01/27/2023
Before investigating attribute syncing issues, let's understand the **Azure AD
* **CS:** Connector Space, a table in database. * **MV:** Metaverse, a table in database. * **AD:** Active Directory
-* **AAD:** Azure Active Directory
+* **Azure AD:** Azure Active Directory
### **Synchronization Steps** * Import from AD: Active Directory objects are brought into AD CS.
-* Import from AAD: Azure Active Directory objects are brought into AAD CS.
+* Import from Azure AD: Azure Active Directory objects are brought into Azure AD CS.
* Synchronization: **Inbound Synchronization Rules** and **Outbound Synchronization Rules** are run in the order of precedence number from lower to higher. To view the Synchronization Rules, you can go to **Synchronization Rules Editor** from the desktop applications. The **Inbound Synchronization Rules** brings in data from CS to MV. The **Outbound Synchronization Rules** moves data from MV to CS. * Export to AD: After running Synchronization, objects are exported from AD CS to **Active Directory**.
-* Export to AAD: After running Synchronization, objects are exported from AAD CS to **Azure Active Directory**.
+* Export to Azure AD: After running Synchronization, objects are exported from Azure AD CS to **Azure Active Directory**.
### **Step by Step Investigation**
-* We will start our search from the **Metaverse** and look at the attribute mapping from source to target.
+* We'll start our search from the **Metaverse** and look at the attribute mapping from source to target.
* Launch **Synchronization Service Manager** from the desktop applications, as shown below:
Before investigating attribute syncing issues, let's understand the **Azure AD
![Screenshot that shows the Connector Space Object Properties screen with the Preview button highlighted.](media/tshoot-connect-attribute-not-syncing/tshoot-connect-attribute-not-syncing/csattributes.png)
-* Now click on the **Import Attribute Flow**, this shows flow of attributes from **Active Directory Connector Space** to the **Metaverse**. **Sync Rule** column shows which **Synchronization Rule** contributed to that attribute. **Data Source** column shows you the attributes from the **Connector Space**. **Metaverse Attribute** column shows you the attributes in the **Metaverse**. You can look for the attribute not syncing here. If you don't find the attribute here, then this is not mapped and you have to create new custom **Synchronization Rule** to map the attribute.
+* Now click on the **Import Attribute Flow**; this shows the flow of attributes from the **Active Directory Connector Space** to the **Metaverse**. The **Sync Rule** column shows which **Synchronization Rule** contributed to that attribute. The **Data Source** column shows you the attributes from the **Connector Space**. The **Metaverse Attribute** column shows you the attributes in the **Metaverse**. You can look for the attribute not syncing here. If you don't find the attribute here, then it isn't mapped and you have to create a new custom **Synchronization Rule** to map the attribute.
![Connector Space Attributes](media/tshoot-connect-attribute-not-syncing/tshoot-connect-attribute-not-syncing/cstomvattributeflow.png)
Before investigating attribute syncing issues, let's understand the **Azure AD
![Screenshot that shows the attribute flow from Metaverse back to Active Directory Connector Space using Outbound Synchronization Rules.](media/tshoot-connect-attribute-not-syncing/tshoot-connect-attribute-not-syncing/mvtocsattributeflow.png)
-* Similarly, you can view the **Azure Active Directory Connector Space** object and can generate the **Preview** to view attribute flow from **Metaverse** to the **Connector Space** and vice versa, this way you can investigate why an attribute is not syncing.
+* Similarly, you can view the **Azure Active Directory Connector Space** object and can generate the **Preview** to view the attribute flow from the **Metaverse** to the **Connector Space** and vice versa. This way, you can investigate why an attribute isn't syncing.
## **Recommended Documents** * [Azure AD Connect sync: Technical Concepts](./how-to-connect-sync-technical-concepts.md)
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-connectivity.md
If you use a **Microsoft account** rather than a **school or organization** acco
![A Microsoft Account is used](./media/tshoot-connect-connectivity/unknownerror.png) ### The MFA endpoint cannot be reached
-This error appears if the endpoint **https://secure.aadcdn.microsoftonline-p.com** cannot be reached and your Hybrid Identity Administrator has MFA enabled.
+This error appears if the endpoint `https://secure.aadcdn.microsoftonline-p.com` cannot be reached and your Hybrid Identity Administrator has MFA enabled.
![nomachineconfig](./media/tshoot-connect-connectivity/nomicrosoftonlinep.png) * If you see this error, verify that the endpoint **secure.aadcdn.microsoftonline-p.com** has been added to the proxy.
The multi-factor authentication (MFA) challenge was canceled.
<div id="connect-msolservice-failed"> <!--
- Empty div just to act as an alias for the "Connect To MS Online Failed" header
+ Empty div just to act as an alias for the "Connect To MSOnline Failed" header
because we used the mentioned id in the code to jump to this section. --> </div>
-### Connect To MS Online Failed
+### Connect To MSOnline Failed
Authentication was successful, but Azure AD PowerShell has an authentication problem. <div id="get-msoluserrole-failed">
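
To narrow this down, you can reproduce the same sign-in outside the wizard. The following is a minimal sketch only, assuming the MSOnline PowerShell module is available:

```powershell
# Install the MSOnline module if it isn't already present.
Install-Module MSOnline -Scope CurrentUser

# Sign in with the same Hybrid Identity Administrator account the wizard uses.
Connect-MsolService

# A simple read call; it succeeds only if authentication and authorization both work.
Get-MsolCompanyInformation
```

If `Connect-MsolService` fails here as well, the problem lies with the account or the network path rather than with Azure AD Connect itself.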
active-directory Tshoot Connect Source Anchor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-source-anchor.md
Previously updated : 04/19/2019 Last updated : 01/27/2023
active-directory Tutorial Passthrough Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-passthrough-authentication.md
Previously updated : 05/31/2019 Last updated : 01/27/2023
The following tutorial will walk you through creating a hybrid identity environm
## Prerequisites The following are prerequisites required for completing this tutorial-- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It is suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
+- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It's suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
- An [Azure subscription](https://azure.microsoft.com/free) - A copy of Windows Server 2016
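
The lab virtual machine can be created in Hyper-V Manager; as a rough PowerShell equivalent (a sketch only — the VM name, paths, sizes, and switch name below are placeholder assumptions), the creation step might look like this:

```powershell
# A sketch only: create the lab VM in Hyper-V. All names, paths, and sizes are placeholders.
New-VM -Name "HybridLab-DC1" -MemoryStartupBytes 4GB -Generation 2 `
    -NewVHDPath "C:\VMs\HybridLab-DC1.vhdx" -NewVHDSizeBytes 60GB -SwitchName "ExternalSwitch"

# Attach the Windows Server 2016 installation media and boot from it.
Add-VMDvdDrive -VMName "HybridLab-DC1" -Path "C:\ISO\WindowsServer2016.iso"
Set-VMFirmware -VMName "HybridLab-DC1" -FirstBootDevice (Get-VMDvdDrive -VMName "HybridLab-DC1")

Start-VM -Name "HybridLab-DC1"
```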
In order to finish building the virtual machine, you need to finish the operatin
1. Hyper-V Manager, double-click on the virtual machine 2. Click on the Start button.
-3. You will be prompted to ΓÇÿPress any key to boot from CD or DVDΓÇÖ. Go ahead and do so.
+3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
4. On the Windows Server start up screen select your language and click **Next**. 5. Click **Install Now**. 6. Enter your license key and click **Next**.
Now we need to create an Azure AD tenant so that we can synchronize our users to
6. Once this has completed, click the **here** link, to manage the directory. ## Create a Hybrid Identity Administrator in Azure AD
-Now that we have an Azure AD tenant, we will create a Hybrid Identity Administratoristrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account do the following.
+Now that we have an Azure AD tenant, we'll create a Hybrid Identity Administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account, do the following.
1. Under **Manage**, select **Users**.</br> ![Screenshot that shows the User option selected in the Manage section where you create a Hybrid Identity Administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin1.png)</br> 2. Select **All users** and then select **+ New user**.
-3. Provide a name and username for this user. This will be your Global Admin for the tenant. You will also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you are done, select **Create**.</br>
+3. Provide a name and username for this user. This will be your Global Admin for the tenant. You'll also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you're done, select **Create**.</br>
![Screenshot that shows the Create button you select when you create a Hybrid Identity Administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin2.png)</br> 4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new Hybrid Identity Administrator account and the temporary password. 5. Change the password for the Hybrid Identity Administrator to something that you will remember.
Now that we have a tenant and a Hybrid Identity Administrator, we need to add ou
4. On **Custom domain names**, enter the name of your custom domain in the box, and click **Add Domain**. 5. On the custom domain name screen you will be supplied with either TXT or MX information. This information must be added to the DNS information of the domain registrar under your domain. So you need to go to your domain registrar, enter either the TXT or MX information in the DNS settings for your domain. This will allow Azure to verify your domain. This may take up to 24 hours for Azure to verify it. For more information, see the [add a custom domain](../../active-directory/fundamentals/add-custom-domain.md) documentation.</br> ![Screenshot that shows where you add the TXT or MX information.](media/tutorial-federation/custom2.png)</br>
-6. To ensure that it is verified, click the Verify button.</br>
+6. To ensure that it's verified, click the Verify button.</br>
![Screenshot that shows a successful verification message after you select Verify.](media/tutorial-federation/custom3.png)</br> ## Download and install Azure AD Connect
-Now it is time to download and install Azure AD Connect. Once it has been installed we will run through the express installation. Do the following:
+Now it's time to download and install Azure AD Connect. Once it has been installed, we'll run through the express installation. Do the following:
1. Download [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) 2. Navigate to and double-click **AzureADConnect.msi**.
We will now verify that the users that we had in our on-premises directory have
## Test signing in with one of our users 1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
-2. Sign-in with a user account that was created in our new tenant. You will need to sign-in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign-in on-premises.
+2. Sign in with a user account that was created in our new tenant. You'll need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.
![Verify](media/tutorial-password-hash-sync/verify1.png) You have now successfully set up a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
active-directory Tutorial Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-password-hash-sync.md
na Previously updated : 05/31/2019 Last updated : 01/27/2023
The following tutorial will walk you through creating a hybrid identity environm
## Prerequisites The following are prerequisites required for completing this tutorial-- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It is suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
+- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It's suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
- An [external network adapter](/virtualization/hyper-v-on-windows/quick-start/connect-to-network) to allow the virtual machine to communicate with the internet. - An [Azure subscription](https://azure.microsoft.com/free) - A copy of Windows Server 2016
In order to finish building the virtual machine, you need to finish the operatin
1. Hyper-V Manager, double-click on the virtual machine 2. Click on the Start button.
-3. You will be prompted to ΓÇÿPress any key to boot from CD or DVDΓÇÖ. Go ahead and do so.
+3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
4. On the Windows Server start up screen select your language and click **Next**. 5. Click **Install Now**. 6. Enter your license key and click **Next**.
Now we need to create an Azure AD tenant so that we can synchronize our users to
6. Once this has completed, click the **here** link, to manage the directory. ## Create a Hybrid Identity Administrator in Azure AD
-Now that we have an Azure AD tenant, we will create a Hybrid Identity Administratoristrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account do the following.
+Now that we have an Azure AD tenant, we'll create a Hybrid Identity Administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account, do the following.
1. Under **Manage**, select **Users**.</br> ![Screenshot that shows the User option selected in the Manage section where you create a Hybrid Identity Administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin1.png)</br> 2. Select **All users** and then select **+ New user**.
-3. Provide a name and username for this user. This will be your Hybrid Identity Administrator for the tenant. You will also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you are done, select **Create**.</br>
+3. Provide a name and username for this user. This will be your Hybrid Identity Administrator for the tenant. You'll also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you're done, select **Create**.</br>
![Screenshot that shows the Create button you select when you create a Hybrid Identity Administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin2.png)</br> 4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new Hybrid Identity Administrator account and the temporary password.
-5. Change the password for the Hybrid Identity Administrator to something that you will remember.
+5. Change the password for the Hybrid Identity Administrator to something that you'll remember.
## Download and install Azure AD Connect
-Now it is time to download and install Azure AD Connect. Once it has been installed we will run through the express installation. Do the following:
+Now it's time to download and install Azure AD Connect. Once it has been installed, we'll run through the express installation. Do the following:
1. Download [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) 2. Navigate to and double-click **AzureADConnect.msi**.
Now it is time to download and install Azure AD Connect. Once it has been insta
## Verify users are created and synchronization is occurring
-We will now verify that the users that we had in our on-premises directory have been synchronized and now exist in out Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized do the following.
+We'll now verify that the users that we had in our on-premises directory have been synchronized and now exist in our Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized, do the following.
1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
We will now verify that the users that we had in our on-premises directory have
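
If you prefer a command-line spot check instead of the portal, the following sketch (assuming the MSOnline module, as in the troubleshooting article above) lists synchronized users and when they last synced from on-premises:

```powershell
# Optional command-line verification; a sketch assuming the MSOnline module.
Connect-MsolService
Get-MsolUser -All | Select-Object UserPrincipalName, LastDirSyncTime
```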
## Test signing in with one of our users 1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
-2. Sign-in with a user account that was created in our new tenant. You will need to sign-in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign-in on-premises.</br>
+2. Sign in with a user account that was created in our new tenant. You'll need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.</br>
![Verify](media/tutorial-password-hash-sync/verify1.png)</br> You have now successfully set up a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
active-directory Datawiza Azure Ad Sso Mfa Oracle Ebs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-mfa-oracle-ebs.md
+
+ Title: Configure Datawiza for Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle EBS
+description: Learn to enable Azure AD MFA and SSO for an Oracle E-Business Suite application via Datawiza
+++++++ Last updated : 01/26/2023++++
+# Configure Datawiza for Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle EBS
+
+In this tutorial, learn how to enable Azure Active Directory Multi-Factor Authentication (MFA) and single sign-on (SSO) for an Oracle E-Business Suite (Oracle EBS) application via Datawiza.
+
+The benefits of integrating applications with Azure Active Directory (Azure AD) via Datawiza:
+
+* [Embrace proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) - a security model that adapts to modern environments and embraces the hybrid workplace, while it protects people, devices, apps, and data
+* [Azure Active Directory single sign-on](https://azure.microsoft.com/solutions/active-directory-sso/#overview) - secure and seamless access for users and apps, from any location, using a device
+* [How it works: Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) - users are prompted during sign-in for forms of identification, such as a code on their cellphone or a fingerprint scan
+* [What is Conditional Access?](../conditional-access/overview.md) - policies are if-then statements, if a user wants to access a resource, then they must complete an action
+* [Easy authentication and authorization in Azure AD with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/) - use web applications such as Oracle JDE, Oracle E-Business Suite, Oracle Siebel, and home-grown apps
+* Use the [Datawiza Cloud Management Console](https://console.datawiza.com) (DCMC) - manage access to applications in public clouds and on-premises
+
+## Scenario description
+
+This document focuses on modern identity providers (IdPs) integrating with the legacy Oracle EBS application. Oracle EBS requires a set of Oracle EBS service account credentials and an Oracle EBS database container (DBC) file.
+
+## Architecture
+
+The solution contains the following components:
+
+* **Azure AD**: Microsoft's cloud-based identity and access management service, which helps users sign in and access external and internal resources.
+* **Oracle EBS**: the legacy application to be protected by Azure AD.
+* **Datawiza Access Proxy (DAP)**: A lightweight container-based reverse proxy that implements OIDC/OAuth or SAML for the user sign-on flow and transparently passes identity to applications through HTTP headers.
+* **Datawiza Cloud Management Console (DCMC)**: A centralized management console that manages DAP. DCMC provides UI and RESTful APIs for administrators to manage the configurations of DAP and its granular access control policies.
+
+### Prerequisites
+
+Ensure the following prerequisites are met.
+
+* An Azure subscription.
+ * If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+* An Azure AD tenant linked to the Azure subscription
+* An account with Azure AD Application Admin permissions
+ * See, [Azure AD built-in roles](../roles/permissions-reference.md)
+* Docker and Docker Compose are required to run DAP
+ * See, [Get Docker](https://docs.docker.com/get-docker/) and [Overview, Docker Compose](https://docs.docker.com/compose/install/)
+* User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to your on-premises directory
+ * See, [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md)
+
+* An Oracle EBS environment
+
+## Configure the Oracle EBS environment for SSO and create the DBC file
+
+To enable SSO in the Oracle EBS environment:
+
+1. Sign in to the Oracle EBS Management console as an Administrator.
+2. Scroll down the Navigator panel and expand **User Management**.
+
+ [ ![Screenshot of the User Management dialog.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/navigator-user-management.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/navigator-user-management.png#lightbox)
+
+3. Add a user account.
+
+ [ ![Screenshot of the User Account option.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/user-account.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/user-account.png#lightbox)
+
+4. For **User Name**, enter **DWSSOUSER**.
+5. For **Password**, enter a password.
+6. For **Description**, enter **DW User account for SSO**.
+7. For **Password Expiration**, select **None**.
+8. Assign the **Apps Schema Connect** role to the user.
+
+ [ ![Screenshot of the assigned Apps Schema Connect role under Results.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/assign-role.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/assign-role.png#lightbox)
+
+## Register DAP with Oracle EBS
+
+In the Oracle EBS Linux environment, generate a new DBC file for DAP. You need the apps user credentials, and the default DBC file (under $FND_SECURE) used by the Apps Tier.
+
+1. Configure the environment for Oracle EBS using a command similar to: `. /u01/install/APPS/EBSapps.env run` (note the leading dot, which sources the environment file).
+2. Use the AdminDesktop utility to generate the new DBC file. Specify the name of a new Desktop Node for this DBC file:
+
+   `java oracle.apps.fnd.security.AdminDesktop apps/apps CREATE NODE_NAME=<ebs domain name> DBC=/u01/install/APPS/fs1/inst/apps/EBSDB_apps/appl/fnd/12.0.0/secure/EBSDB.dbc`
+
+3. This action generates a file called `ebsdb_<ebs domain name>.dbc` in the location where you ran the previous command.
+4. Copy the DBC file content to a text editor. You'll use the content later.
+
+## Enable Oracle EBS for SSO
+
+1. To integrate Oracle EBS with Azure AD, sign in to [Datawiza Cloud Management Console (DCMC)](https://console.datawiza.com/).
+2. The Welcome page appears.
+3. Select the orange **Getting started** button.
+
+ ![Screenshot of the Getting Started button.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/getting-started.png#lightbox)
+
+4. Enter a **Name**.
+5. Enter a **Description**.
+6. Select **Next**.
+
+ [ ![Screenshot of the name entry under Deployment Name.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/deployment-name.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/deployment-name.png#lightbox)
+
+7. On **Add Application**, for **Platform** select **Oracle E-Business Suite**.
+8. For **App Name**, enter the app name.
+9. For **Public Domain**, enter the external-facing URL of the application, for example `https://ebs-external.example.com`. You can use localhost DNS for testing.
+10. For **Listen Port**, select the port that DAP listens on. You can use the port in Public Domain if you aren't deploying the DAP behind a load balancer.
+11. For **Upstream Servers**, enter the URL and port combination of the Oracle EBS implementation being protected.
+12. For **EBS Service Account**, enter the service account username (**DWSSOUSER**).
+13. For **EBS Account Password**, enter the password for the service account.
+14. For **EBS User Mapping**, the product determines which attribute is mapped to the Oracle EBS username for authentication.
+15. For **EBS DBC Content**, paste the DBC file content you copied earlier.
+16. Select **Next**.
+
+ [ ![Screenshot of Add Application entries and selections.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/add-application.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/add-application.png#lightbox)
+
+### IdP configuration
+
+Use the DCMC one-click integration to help you complete the Azure AD configuration. With this feature, you can reduce management costs, and configuration errors are less likely.
+
+ [ ![Screenshot of the Configure IDP dialog with entries, selections, and the Create button.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/configure-idp.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/configure-idp.png#lightbox)
+
+### Docker Compose file
+
+Configuration on the management console is complete. You're prompted to deploy Datawiza Access Proxy (DAP) with your application. Make a note of the deployment Docker Compose file. The file includes the image of the DAP, PROVISIONING_KEY, and PROVISIONING_SECRET. DAP uses this information to pull the latest configuration and policies from DCMC.
+
+ ![Screenshot of Docker information.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/docker-information.png)
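
The exact contents come from DCMC. As an illustrative sketch only — the image reference, port, key, and secret below are placeholders, not real values — a generated Compose file typically has this shape:

```yaml
# Illustrative sketch of a DCMC-generated Compose file; the image reference,
# port, key, and secret are placeholders, not real values.
version: '3'

services:
  datawiza-access-proxy:
    image: <dap-image-from-dcmc>      # DCMC supplies the actual image reference
    container_name: datawiza-access-proxy
    restart: always
    ports:
      - "9772:9772"                   # the Listen Port chosen in DCMC
    environment:
      PROVISIONING_KEY: "<key-from-dcmc>"
      PROVISIONING_SECRET: "<secret-from-dcmc>"
```

Running `docker compose up -d` in the directory containing this file starts DAP; on startup it uses the provisioning key and secret to pull its configuration and policies from DCMC.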
+
+### SSL configuration
+
+1. For certificate configuration, select the **Advanced** tab on your application page.
+
+ [ ![Screenshot of the Advanced tab.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/advanced-tab.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/advanced-tab.png#lightbox)
+
+2. Enable SSL.
+3. Select a **Cert Type**.
+
+ [ ![Screenshot of Enable SSL and Cert Type options.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/cert-type.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/cert-type.png#lightbox)
+
+4. There's a self-signed certificate for localhost, which you can use for testing.
+
+ [ ![Screenshot of the Self Signed option.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/self-signed-cert-type.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/self-signed-cert-type.png#lightbox)
+
+5. (Optional) You can upload a certificate from a file.
+
+ [ ![Screenshot of the File Based option.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/file-based-cert-option.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/file-based-cert-option.png#lightbox)
+
+6. Select **Save**.
+
+### Optional: Enable MFA on Azure AD
+
+To provide more security for sign-ins, you can enforce MFA for user sign-in by enabling MFA in the Azure portal.
+
+1. Sign in to the Azure portal as a Global Administrator.
+2. Select **Azure Active Directory** > **Manage** > **Properties**.
+3. Under **Properties**, select **Manage security defaults**.
+
+ [ ![Screenshot of Manage Properties function and the Manage Security Defaults option.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/manage-security-defaults.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/manage-security-defaults.png#lightbox)
+
+4. Under **Enable security defaults**, select **Yes**.
+5. Select **Save**.
+
+ [ ![Screenshot of the Manage security defaults and Enable security defaults options.](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/enable-security-defaults.png) ](./media/datawiza-azure-ad-sso-mfa-oracle-ebs/enable-security-defaults.png#lightbox)
+
+## Next steps
+
+- Video: [Enable SSO and MFA for Oracle JD Edwards with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90)
+- [Tutorial: Configure Secure Hybrid Access with Azure AD and Datawiza](./datawiza-with-azure-ad.md)
+- [Tutorial: Configure Azure AD B2C with Datawiza to provide secure hybrid access](../../active-directory-b2c/partner-datawiza.md)
+- Go to docs.datawiza.com for Datawiza [User Guides](https://docs.datawiza.com/)
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
ms.devlang: Previously updated : 10/30/2022 Last updated : 01/23/2023
There are two types of managed identities:
- A service principal of a special type is created in Azure AD for the identity. The service principal is tied to the lifecycle of that Azure resource. When the Azure resource is deleted, Azure automatically deletes the service principal for you. - By design, only that Azure resource can use this identity to request tokens from Azure AD. - You authorize the managed identity to have access to one or more services.
- - The name of the system-assigned service principal is always the same as the name of the Azure resource it is created for. For a deployment slot, the name of its system-assigned identity is <app-name>/slots/<slot-name>.
+ - The name of the system-assigned service principal is always the same as the name of the Azure resource it is created for. For a deployment slot, the name of its system-assigned identity is `<app-name>/slots/<slot-name>`.
- **User-assigned**. You may also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](how-to-manage-ua-identity-portal.md) and assign it to one or more Azure Resources. When you enable a user-assigned managed identity: - A service principal of a special type is created in Azure AD for the identity. The service principal is managed separately from the resources that use it.
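
As a minimal sketch of the user-assigned flow (assuming the Az.ManagedServiceIdentity and Az.Compute PowerShell modules; all resource names and the region are placeholders):

```powershell
# A sketch only: create a standalone user-assigned managed identity.
$identity = New-AzUserAssignedIdentity -ResourceGroupName "myResourceGroup" `
    -Name "myUserAssignedIdentity" -Location "eastus"

# Attach it to an existing VM; several resources could share this same identity.
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
Update-AzVM -ResourceGroupName "myResourceGroup" -VM $vm `
    -IdentityType UserAssigned -IdentityId $identity.Id
```

Because the identity is a standalone resource, deleting the VM later leaves the identity (and any role assignments granted to it) intact.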
The following table shows the differences between the two types of managed ident
| Creation | Created as part of an Azure resource (for example, Azure Virtual Machines or Azure App Service). | Created as a stand-alone Azure resource. | | Life cycle | Shared life cycle with the Azure resource that the managed identity is created with. <br/> When the parent resource is deleted, the managed identity is deleted as well. | Independent life cycle. <br/> Must be explicitly deleted. | | Sharing across Azure resources | CanΓÇÖt be shared. <br/> It can only be associated with a single Azure resource. | Can be shared. <br/> The same user-assigned managed identity can be associated with more than one Azure resource. |
-| Common use cases | Workloads that are contained within a single Azure resource. <br/> Workloads for which you need independent identities. <br/> For example, an application that runs on a single virtual machine. | Workloads that run on multiple resources and can share a single identity. <br/> Workloads that need pre-authorization to a secure resource, as part of a provisioning flow. <br/> Workloads where resources are recycled frequently, but permissions should stay consistent. <br/> For example, a workload where multiple virtual machines need to access the same resource. |
+| Common use cases | Workloads contained within a single Azure resource. <br/> Workloads needing independent identities. <br/> For example, an application that runs on a single virtual machine. | Workloads that run on multiple resources and can share a single identity. <br/> Workloads needing pre-authorization to a secure resource, as part of a provisioning flow. <br/> Workloads where resources are recycled frequently, but permissions should stay consistent. <br/> For example, a workload where multiple virtual machines need to access the same resource. |
## How can I use managed identities for Azure resources?
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
na Previously updated : 01/11/2023 Last updated : 01/25/2023
# Privileged Identity Management (PIM) for Groups (preview)
-With Azure Active Directory (Azure AD), part of Microsoft Entra, you can provide users just-in-time membership in the group and just-in-time ownership of the group using the Azure AD Privileged Identity Management for Groups feature. These groups can be used to govern access to a variety of scenarios that include Azure AD roles, Azure roles, as well as Azure SQL, Azure Key Vault, Intune, other application roles, and 3rd party applications.
+With Azure Active Directory (Azure AD), part of Microsoft Entra, you can provide users just-in-time membership in the group and just-in-time ownership of the group using the Azure AD Privileged Identity Management for Groups feature. These groups can be used to govern access to various scenarios that include Azure AD roles, Azure roles, as well as Azure SQL, Azure Key Vault, Intune, other application roles, and third party applications.
## What is PIM for Groups?
-PIM for Groups is part of Azure AD Privileged Identity Management ΓÇô alongside with PIM for Azure AD Roles and PIM for Azure Resources, PIM for Groups enables users to activate the ownership or membership of an Azure AD security group or Microsoft 365 group. Groups can be used to govern access to a variety of scenarios that include Azure AD roles, Azure roles, as well as Azure SQL, Azure Key Vault, Intune, other application roles, and 3rd party applications.
+PIM for Groups is part of Azure AD Privileged Identity Management. Alongside PIM for Azure AD Roles and PIM for Azure Resources, PIM for Groups enables users to activate the ownership or membership of an Azure AD security group or Microsoft 365 group. Groups can be used to govern access to various scenarios that include Azure AD roles, Azure roles, as well as Azure SQL, Azure Key Vault, Intune, other application roles, and third party applications.
With PIM for Groups you can use policies similar to ones you use in PIM for Azure AD Roles and PIM for Azure Resources: you can require approval for membership or ownership activation, enforce multi-factor authentication (MFA), require justification, limit maximum activation time, and more. Each group in PIM for Groups has two policies: one for activation of membership and another for activation of ownership in the group. Up until January 2023, the PIM for Groups feature was called "Privileged Access Groups".
There are two ways to make a group of users eligible for Azure AD role:
To provide a group of users with just-in-time access to Azure AD directory roles with permissions in SharePoint, Exchange, or Security & Microsoft Purview compliance portal (for example, Exchange Administrator role), be sure to make active assignments of users to the group, and then assign the group to a role as eligible for activation (Option #1 above). If you choose to make active assignment of a group to a role and assign users to be eligible to group membership instead, it may take significant time to have all permissions of the role activated and ready to use.
+## Privileged Identity Management and group nesting
+
+In Azure AD, role-assignable groups can't have other groups nested inside them. To learn more, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md). This applies to active membership: one group can't be an active member of another group that is role-assignable.
+
+One group can be an eligible member of another group, even if one of those groups is role-assignable.
+
+If a user is an active member of Group A, and Group A is an eligible member of Group B, the user can activate their membership in Group B. The activation applies only to the user who requested it; it doesn't mean that the entire Group A becomes an active member of Group B.
+ ## Next steps - [Bring groups into Privileged Identity Management (preview)](groups-discover-groups.md)
active-directory Asana Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asana-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Asana | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Asana'
description: Learn how to configure single sign-on between Azure Active Directory and Asana.
Previously updated : 11/21/2022 Last updated : 01/27/2023
-# Tutorial: Azure Active Directory integration with Asana
+# Tutorial: Azure AD SSO integration with Asana
In this tutorial, you'll learn how to integrate Asana with Azure Active Directory (Azure AD). When you integrate Asana with Azure AD, you can:
To configure the integration of Asana into Azure AD, you need to add Asana from
1. In the **Add from the gallery** section, type **Asana** in the search box. 1. Select **Asana** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
## Configure and test Azure AD SSO for Asana
To configure and test Azure AD SSO with Asana, perform the following steps:
1. **[Create Asana test user](#create-asana-test-user)** - to have a counterpart of B.Simon in Asana that is linked to the Azure AD representation of user. 1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Select a Single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-4. On the **Basic SAML Configuration** section, perform the following steps:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- ![Asana Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type the URL:
+ a. In the **Identifier (Entity ID)** text box, type the URL:
`https://app.asana.com/`
- b. In the **Identifier (Entity ID)** text box, type the URL:
- `https://app.asana.com/`
+ > [!NOTE]
+ > If you require a different value for the Identifier (Entity ID), please [get in touch with us](https://form-beta.asana.com/?k=BT9rHN4rEoRKARjEYg6neA&d=15793206719).
- c. In the **Reply URL (Assertion Consumer Service URL)** text box, type the URL:
+ b. In the **Reply URL (Assertion Consumer Service URL)** text box, type the URL:
`https://app.asana.com/-/saml/consume`
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+ c. In the **Sign on URL** text box, type the URL:
+ `https://app.asana.com/`
+
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
-6. On the **Set up Asana** section, copy the appropriate URL(s) as per your requirement.
+1. On the **Set up Asana** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Configure Asana SSO
-1. In a different browser window, sign-on to your Asana application. To configure SSO in Asana, access the workspace settings by clicking the workspace name on the top right corner of the screen. Then, click on **\<your workspace name\> Settings**.
+1. In a different browser window, sign on to your Asana application. To configure SSO in Asana, access the workspace settings by clicking the workspace name on the top right corner of the screen. Then, click on **\<your workspace name\> Settings**.
![Asana sso settings](./media/asana-tutorial/settings.png)
active-directory Compliance Genie Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/compliance-genie-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Compliance Genie
+description: Learn how to configure single sign-on between Azure Active Directory and Compliance Genie.
++++++++ Last updated : 01/27/2023++++
+# Azure Active Directory SSO integration with Compliance Genie
+
+In this article, you'll learn how to integrate Compliance Genie with Azure Active Directory (Azure AD). Compliance Genie is an all-in-one health and safety app that lets you manage and track health and safety across your company, covering risk assessments, incident management, audits, and documentation. When you integrate Compliance Genie with Azure AD, you can:
+
+* Control in Azure AD who has access to Compliance Genie.
+* Enable your users to be automatically signed-in to Compliance Genie with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Compliance Genie in a test environment. Compliance Genie supports **SP**-initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Compliance Genie, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Compliance Genie single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Compliance Genie application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Compliance Genie from the Azure AD gallery
+
+Add Compliance Genie from the Azure AD application gallery to configure single sign-on with Compliance Genie. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Compliance Genie** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://login.microsoftonline.com/<TenantID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://login.be-safetech.com/Login/AzureAssertionConsumerService/<COMPANYID>`
+
+ c. In the **Sign on URL** textbox, type the URL:
+ `https://login.be-safetech.com/Login/Azure`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Compliance Genie Client support team](mailto:admin@be-safetech.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Compliance Genie SSO
+
+To configure single sign-on on the **Compliance Genie** side, you need to send the **App Federation Metadata Url** to the [Compliance Genie support team](mailto:admin@be-safetech.com). They use it to ensure the SAML SSO connection is set properly on both sides.
+
+### Create Compliance Genie test user
+
+In this section, a user called B.Simon is created in Compliance Genie. Compliance Genie supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Compliance Genie, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This redirects to the Compliance Genie sign-on URL, where you can initiate the login flow.
+
+* Go to the Compliance Genie sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Compliance Genie tile in My Apps, you're redirected to the Compliance Genie sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Compliance Genie, you can enforce session control, which helps protect against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Respondent Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/respondent-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Respondent
+description: Learn how to configure single sign-on between Azure Active Directory and Respondent.
+ Last updated : 01/27/2023
+# Azure Active Directory SSO integration with Respondent
+
+In this article, you'll learn how to integrate Respondent with Azure Active Directory (Azure AD). Respondent is a global marketplace that connects business professionals and consumers with researchers. Manage recruitment, scheduling, and the payment of your research participants on Respondent. When you integrate Respondent with Azure AD, you can:
+
+* Control in Azure AD who has access to Respondent.
+* Enable your users to be automatically signed in to Respondent with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Respondent in a test environment. Respondent supports **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Respondent, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Respondent single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Respondent application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Respondent from the Azure AD gallery
+
+Add Respondent from the Azure AD application gallery to configure single sign-on with Respondent. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account called B.Simon in the Azure portal.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Respondent** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://app.respondent.io/auth/saml/sp/<ID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://app.respondent.io/auth/saml/sp/<ID>`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type the URL:
+ `https://app.respondent.io/auth/saml/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Respondent Client support team](mailto:enterprisesupport@respondent.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Respondent application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the preceding attributes, the Respondent application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are prepopulated, but you can review and adjust them to meet your requirements.
+
+ | Name | Source Attribute|
+ | --- | --- |
+ | userId | user.objectid |
+ | email | user.mail |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Respondent** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Respondent SSO
+
+To configure single sign-on on the **Respondent** side, send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Respondent support team](mailto:enterprisesupport@respondent.io). The support team uses them to configure the SAML SSO connection properly on both sides.
+
+### Create Respondent test user
+
+In this section, you create a user called Britta Simon in Respondent. Work with the [Respondent support team](mailto:enterprisesupport@respondent.io) to add the users to the Respondent platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the Respondent sign-on URL, where you can initiate the login flow.
+
+* Go to the Respondent sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Respondent instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Respondent tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you should be automatically signed in to the Respondent instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Respondent, you can enforce session control, which helps protect against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Talon Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/talon-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Talon
+description: Learn how to configure single sign-on between Azure Active Directory and Talon.
+ Last updated : 01/27/2023
+# Azure Active Directory SSO integration with Talon
+
+In this article, you'll learn how to integrate Talon with Azure Active Directory (Azure AD). Talon, a Chromium-based browser, isolates endpoint web traffic, providing a responsive, native user experience. Talon integrates with Azure AD to streamline onboarding and policy enforcement. When you integrate Talon with Azure AD, you can:
+
+* Control in Azure AD who has access to Talon.
+* Enable your users to be automatically signed in to Talon with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Talon in a test environment. Talon supports **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Talon, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Talon single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Talon application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Talon from the Azure AD gallery
+
+Add Talon from the Azure AD application gallery to configure single sign-on with Talon. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account called B.Simon in the Azure portal.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Talon** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. The Talon application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the preceding attributes, the Talon application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are prepopulated, but you can review and adjust them to meet your requirements.
+
+ | Name | Source Attribute|
+ | --- | --- |
+ | groups | user.groups |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Talon** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Talon SSO
+
+To configure single sign-on on the **Talon** side, send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Talon support team](mailto:support@talon-sec.com). The support team uses them to configure the SAML SSO connection properly on both sides.
+
+### Create Talon test user
+
+In this section, you create a user called Britta Simon in Talon. Work with the [Talon support team](mailto:support@talon-sec.com) to add the users to the Talon platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Talon instance for which you set up SSO.
+
+* You can use Microsoft My Apps. When you click the Talon tile in My Apps, you should be automatically signed in to the Talon instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Talon, you can enforce session control, which helps protect against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Webtma Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webtma-tutorial.md
+
+ Title: Azure Active Directory SSO integration with WebTMA
+description: Learn how to configure single sign-on between Azure Active Directory and WebTMA.
+ Last updated : 01/27/2023
+# Azure Active Directory SSO integration with WebTMA
+
+In this article, you'll learn how to integrate WebTMA with Azure Active Directory (Azure AD). WebTMA is a computerized maintenance management system (CMMS) for asset, space, parts, and work order management. When you integrate WebTMA with Azure AD, you can:
+
+* Control in Azure AD who has access to WebTMA.
+* Enable your users to be automatically signed in to WebTMA with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for WebTMA in a test environment. WebTMA supports both **SP** and **IDP** initiated single sign-on and also supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with WebTMA, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* WebTMA single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the WebTMA application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add WebTMA from the Azure AD gallery
+
+Add WebTMA from the Azure AD application gallery to configure single sign-on with WebTMA. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account called B.Simon in the Azure portal.
+
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **WebTMA** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `http://www.webtma.net`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<hostName>/<loginApplicationPath>/SAMLService.aspx?c=<clientName>`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<hostName>/<loginApplicationPath>/SAMLLogin.aspx?c=<clientName>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [WebTMA Client support team](mailto:support@tmasystems.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The WebTMA application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the preceding attributes, the WebTMA application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are prepopulated, but you can review and adjust them to meet your requirements.
+
+ | Name | Source Attribute|
+ | --- | --- |
+ | email | user.mail |
+ | FirstName | user.givenname |
+ | LastName | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up WebTMA** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure WebTMA SSO
+
+To configure single sign-on on the **WebTMA** side, send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [WebTMA support team](mailto:support@tmasystems.com). The support team uses them to configure the SAML SSO connection properly on both sides.
+
+### Create WebTMA test user
+
+In this section, a user called B.Simon is created in WebTMA. WebTMA supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in WebTMA, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the WebTMA sign-on URL, where you can initiate the login flow.
+
+* Go to the WebTMA sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the WebTMA instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the WebTMA tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you should be automatically signed in to the WebTMA instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure WebTMA, you can enforce session control, which helps protect against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
editor:
Previously updated : 08/20/2022 Last updated : 01/26/2023
But identity data has too often been exposed in security breaches. These breache
## Why we need Decentralized Identity
-Today we use our digital identity at work, at home, and across every app, service, and device we use. It's made up of everything we say, do, and experience in our lives: purchasing tickets for an event, checking into a hotel, or even ordering lunch. Currently, our identity and all our digital interactions are owned and controlled by other parties, some of whom we aren't even aware of.
+Today we use our digital identity at work, at home, and across every app, service, and device we use. It's made up of everything we say, do, and experience in our lives: purchasing tickets for an event, checking into a hotel, or even ordering lunch. Currently, our identity and all our digital interactions are owned and controlled by other parties, in some cases, even without our knowledge.
-Generally, users grant consent to several apps and devices. This approach requires a high degree of vigilance on the user's part to track who has access to what information. On the enterprise front, collaboration with consumers and partners requires high-touch orchestration to securely exchange data in a way that maintains privacy and security for all involved.
+Every day users grant apps and devices access to their data. A great deal of effort would be required for them to keep track of who has access to which pieces of information. On the enterprise front, collaboration with consumers and partners requires high-touch orchestration to securely exchange data in a way that maintains privacy and security for all involved.
We believe a standards-based Decentralized Identity system can unlock a new set of experiences that give users and organizations greater control over their data, and deliver a higher degree of trust and security for apps, devices, and service providers.
Microsoft is actively collaborating with members of the Decentralized Identity F
Before we can understand DIDs, it helps to compare them with current identity systems. Email addresses and social network IDs are human-friendly aliases for collaboration but are now overloaded to serve as the control points for data access across many scenarios beyond collaboration. This creates a potential problem, because access to these IDs can be removed at any time by external parties.
-Decentralized Identifiers (DIDs) are different. DIDs are user-generated, self-owned, globally unique identifiers rooted in decentralized systems like ION. They possess unique characteristics, like greater assurance of immutability, censorship resistance, and tamper evasiveness. These attributes are critical for any ID system that is intended to provide self-ownership and user control.
+Decentralized Identifiers (DIDs) are different. DIDs are user-generated, self-owned, globally unique identifiers rooted in decentralized systems like ION. They possess unique characteristics, like greater assurance of immutability, censorship resistance, and tamper evasiveness. These attributes are critical for any ID system intended to provide self-ownership and user control.
Microsoft's verifiable credential solution uses decentralized identifiers (DIDs) to cryptographically sign as proof that a relying party (verifier) is attesting to information proving they are the owners of a verifiable credential. A basic understanding of DIDs is recommended for anyone creating a verifiable credential solution based on the Microsoft offering.
The scenario we use to explain how VCs work involves:
-Today, Alice provides a username and password to log onto Woodgrove's networked environment. Woodgrove is deploying a verifiable credential solution to provide a more manageable way for Alice to prove that she is an employee of Woodgrove. Proseware accepts verifiable credentials issued by Woodgrove as proof of employment to offer corporate discounts as part of their corporate discount program.
+Today, Alice provides a username and password to sign in to Woodgrove's networked environment. Woodgrove is deploying a verifiable credential solution to provide a more manageable way for Alice to prove that she is an employee of Woodgrove. Proseware accepts verifiable credentials issued by Woodgrove as proof of employment that can give access to corporate discounts as part of their corporate discount program.
-Alice requests Woodgrove Inc for a proof of employment verifiable credential. Woodgrove Inc attests Alice's identity and issues a signed verifiable credential that Alice can accept and store in her digital wallet application. Alice can now present this verifiable credential as a proof of employment on the Proseware site. After a successful presentation of the credential, Proseware offers discount to Alice and the transaction is logged in Alice's wallet application so that she can track where and to whom she has presented her proof of employment verifiable credential.
+Alice requests a proof of employment verifiable credential from Woodgrove Inc. Woodgrove Inc attests Alice's identity and issues a signed verifiable credential that Alice can accept and store in her digital wallet application. Alice can now present this verifiable credential as proof of employment on the Proseware site. After a successful presentation of the credential, Proseware offers a discount to Alice, and the transaction is logged in Alice's wallet application so that she can track where and to whom she presented her proof of employment verifiable credential.
![microsoft-did-overview](media/decentralized-identifier-overview/did-overview.png)
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
Previously updated : 04/01/2021 Last updated : 01/26/2023 # Customer intent: As a developer, I want to learn how to create a developer Azure Active Directory account so I can participate in the preview with a P2 license.
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
Title: Configure LexisNexis Risk Solutions as an identity verification partner u
description: This article shows you the steps you need to follow to configure LexisNexis as your identity verification partner -+ Previously updated : 09/1/2022 Last updated : 01/26/2023 # Customer intent: As a developer, I'm looking for information about the open standards that are supported by Microsoft Entra Verified ID.
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
Previously updated : 08/11/2022 Last updated : 01/26/2023 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
Last updated 08/11/2022
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-Microsoft Entra Verified ID is a decentralized identity solution that helps you safeguard your organization. The service allows you to issue and verify credentials. Issuers can use the Verified ID service to issue their own customized verifiable credentials. Verifiers can use the service's free REST API to easily request and accept verifiable credentials in their apps and services. In both cases, you will have to configure your Azure AD tenant so that you can use it to either issue your own verifiable credentials, or verify the presentation of a user's verifiable credentials that were issued by another organization. In case you are both an issuer and a verifier, you can use a single Azure AD tenant to both issue your own verifiable credentials as well as verify those of others.
+Microsoft Entra Verified ID is a decentralized identity solution that helps you safeguard your organization. The service allows you to issue and verify credentials. Issuers can use the Verified ID service to issue their own customized verifiable credentials. Verifiers can use the service's free REST API to easily request and accept verifiable credentials in apps and services. In both cases, your Azure AD tenant needs to be configured to either issue your own verifiable credentials, or verify the presentation of a user's verifiable credentials issued by a third party. In the event that you are both an issuer and a verifier, you can use a single Azure AD tenant to both issue your own verifiable credentials and verify those of others.
In this tutorial, you learn how to configure your Azure AD tenant to use the verifiable credentials service.
The following diagram illustrates the Verified ID architecture and the component
## Prerequisites

-- You need an Azure tenant with an active subscription. If you don't have Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Ensure that you have the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or the [authentication policy administrator](../../active-directory/roles/permissions-reference.md#authentication-policy-administrator) permission for the directory you want to configure. If you're not the global administrator, you will need permission [application administrator](../../active-directory/roles/permissions-reference.md#application-administrator) to complete the app registration including granting admin consent.
-- Ensure that you have the [contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the Azure subscription or the resource group that you will deploy Azure Key Vault in.
+- You need an Azure tenant with an active subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Ensure that you have the [global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) or the [authentication policy administrator](../../active-directory/roles/permissions-reference.md#authentication-policy-administrator) permission for the directory you want to configure. If you're not the global administrator, you need the [application administrator](../../active-directory/roles/permissions-reference.md#application-administrator) permission to complete the app registration including granting admin consent.
+- Ensure that you have the [contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the Azure subscription or the resource group where you are deploying Azure Key Vault.
## Create a key vault
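A minimal sketch of this step with the Azure CLI, assuming placeholder names and your preferred region (the portal flow described in this tutorial accomplishes the same thing):

```sh
# Create a resource group and a key vault to hold the Verified ID signing keys.
# The vault name must be globally unique; all names here are placeholders.
az group create --name verifiable-credentials-rg --location westus
az keyvault create --name <your-unique-vault-name> --resource-group verifiable-credentials-rg --location westus
```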
You can choose to grant issuance and presentation permissions separately if you
- `https://contoso.com/.well-known/did-configuration.json`

Once you have successfully completed the verification steps, you are ready to continue to the next tutorial.
-If you have selected ION as the trust system, you will not see the DID registration section as it is not applicable for ION and you only have to distribute the did-configuration.json file.
+If you selected ION as the trust system, you will not see the DID registration section because it doesn't apply to ION. When using ION, you only have to distribute the did-configuration.json file.
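A quick way to confirm that the file is being served from the expected well-known path, using the example domain from this article:

```sh
# Verify the DID configuration document is publicly reachable (sketch).
curl https://contoso.com/.well-known/did-configuration.json
```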
## Next steps
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
In addition to the original in-tree driver features, Azure Files CSI driver supp
## Use a persistent volume with Azure Files
-A [persistent volume (PV)][persistent-volume] represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or [NFS protocol][nfs-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share][azure-files-storage-provision.md#statically-provision-a-volume].
+A [persistent volume (PV)][persistent-volume] represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or [NFS protocol][nfs-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share][statically-provision-a-volume].
With Azure Files shares, there is no limit to how many can be mounted on a node.
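As a rough sketch of the dynamic-provisioning flow described above, the following creates a persistent volume claim against the built-in `azurefile-csi` storage class; the claim name and size are illustrative, and `ReadWriteMany` is what lets multiple pods mount the share concurrently:

```sh
# Dynamically provision an SMB Azure Files share that many pods can mount.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 100Gi
EOF
```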
The output of the commands resembles the following example:
- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage].

<!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
[nfs-overview]: /windows-server/storage/nfs/nfs-overview
[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
[csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md
[data-plane-api]: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azcore/internal/shared/shared.go
-[vhd-disk-feature]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/disk
<!-- LINKS - internal -->
[csi-drivers-overview]: csi-storage-drivers.md
[azure-disk-csi]: azure-disk-csi.md
[azure-blob-csi]: azure-blob-csi.md
[persistent-volume-claim-overview]: concepts-storage.md#persistent-volume-claims
-[access-tier-file-share]: ../storage/files/storage-files-planning#storage-tiers.md
-[access-tier-storage-account]: ../storage/blobs/access-tiers-overview.md
-[azure-tags]: ../azure-resource-manager/management/tag-resources.md
-[azure-disk-volume]: azure-disk-volume.md
-[azure-files-pvc]: azure-files-dynamic-pv.md
-[azure-files-pvc-manual]: azure-files-volume.md
-[premium-storage]: ../virtual-machines/disks-types.md
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
[operator-best-practices-storage]: operator-best-practices-storage.md
[concepts-storage]: concepts-storage.md
-[storage-class-concepts]: concepts-storage.md#storage-classes
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
[storage-skus]: ../storage/common/storage-redundancy.md
[storage-tiers]: ../storage/files/storage-files-planning.md#storage-tiers
-[use-tags]: use-tags.md
[private-endpoint-overview]: ../private-link/private-endpoint-overview.md
[persistent-volume]: concepts-storage.md#persistent-volumes
[share-snapshots-overview]: ../storage/files/storage-snapshots-files.md
-[zrs-account-type]: ../storage/common/storage-redundancy.md#zone-redundant-storage
[access-tiers-overview]: ../storage/blobs/access-tiers-overview.md
[tag-resources]: ../azure-resource-manager/management/tag-resources.md
+[statically-provision-a-volume]: azure-csi-files-storage-provision.md#statically-provision-a-volume
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Sustainable software engineering is a shift in priorities and focus. In many cas
* Applying sustainable software engineering principles can give you faster performance or lower latency, such as by lowering total network traversal.
* Reducing carbon emissions may cause slower performance or increased latency, such as delaying low-priority workloads.
-The guidance found in this article is focused on Azure Kubernetes Services you're building or operating on Azure and includes design and configuration checklists, recommended design, and configuration options. Before applying sustainable software engineering principles to your application, review the priorities, needs, and trade-offs of your application.
+The guidance found in this article is focused on Azure Kubernetes Services you are building or operating on Azure and includes design and configuration checklists, recommended design practices, and configuration options. Before applying sustainable software engineering principles to your application, review the priorities, needs, and trade-offs of your application.
## Prerequisites
The guidance found in this article is focused on Azure Kubernetes Services you'r
## Understanding the shared responsibility model
-Sustainability, just like security, is a shared responsibility between the cloud provider and the customer or partner designing and deploying AKS clusters on the platform. Deploying AKS does not automatically make it sustainable, even if the [data centers are optimized for sustainability](https://infrastructuremap.microsoft.com/fact-sheets). Applications that aren't optimized may still emit more carbon than necessary.
+Sustainability, just like security, is a shared responsibility between the cloud provider and the customer or partner designing and deploying AKS clusters on the platform. Deploying AKS does not automatically make it sustainable, even if the [data centers are optimized for sustainability](https://infrastructuremap.microsoft.com/fact-sheets). Applications that are not properly optimized may still emit more carbon than necessary.
Learn more about the [shared responsibility model for sustainability](/azure/architecture/framework/sustainability/sustainability-design-methodology#a-shared-responsibility).
Transport Layer Security (TLS) ensures that all data passed between the web serv
### Use cloud native network security tools and controls
-Azure Front Door and Application Gateway help manage traffic from web applications while Azure Web Application Firewall provides protection against OWASP top 10 attacks and load shedding bad bots. Using these capabilities helps remove unnecessary data transmission and reduces the burden on the cloud infrastructure, with lower bandwidth and less infrastructure requirements.
+Azure Front Door and Application Gateway help manage traffic from web applications, while Azure Web Application Firewall provides protection against OWASP top 10 attacks and load shedding of bad bots at the network edge. Using these capabilities helps remove unnecessary data transmission and reduces the burden on the cloud infrastructure, with lower bandwidth and fewer infrastructure requirements.
* Use [Application Gateway Ingress Controller (AGIC) in AKS](/azure/architecture/example-scenario/aks-agic/aks-agic) to filter and offload traffic at the network edge from reaching your origin to reduce energy consumption and carbon emissions.
Many attacks on cloud infrastructure seek to misuse deployed resources for the a
## Next steps

> [!div class="nextstepaction"]
-> [Azure Well-Architected Framework review of AKS](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service)
+> [Azure Well-Architected Framework review of AKS](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service)
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
This article assumes a basic understanding of Kubernetes concepts. For more info
- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account](/cli/azure/account) command.
+- Verify _Microsoft.OperationsManagement_ and _Microsoft.OperationalInsights_ providers are registered on your subscription. These are Azure resource providers required to support Container insights. To check the registration status, run the following commands:
+
+ ```sh
+ az provider show -n Microsoft.OperationsManagement -o table
+ az provider show -n Microsoft.OperationalInsights -o table
+ ```
+
+ If they are not registered, register _Microsoft.OperationsManagement_ and _Microsoft.OperationalInsights_ using the following commands:
+
+ ```sh
+ az provider register --namespace Microsoft.OperationsManagement
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
+
+ > [!NOTE]
+ > Run the commands with administrative privileges if you plan to run the commands in this quickstart locally instead of in Azure Cloud Shell.
+ ### Limitations

The following limitations apply when you create and manage AKS clusters that support multiple node pools:
To learn more about AKS, and walk through a complete code to deployment example,
[aks-faq]: faq.md
[az-extension-add]: /cli/azure/extension#az-extension-add
[az-extension-update]: /cli/azure/extension#az-extension-update
+[azure-monitor-containers]: ../../azure-monitor/containers/container-insights-overview.md
[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference
[win-faq-change-admin-creds]: ../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster
analysis-services Analysis Services Addservprinc Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-addservprinc-admins.md
Title: Add service principal to Azure Analysis Services admin role | Microsoft Docs
+ Title: Learn how to add a service principal to Azure Analysis Services admin role | Microsoft Docs
description: Learn how to add an automation service principal to the Azure Analysis Services server admin role Previously updated : 05/14/2021 Last updated : 01/24/2023
# Add a service principal to the server administrator role
- To automate unattended PowerShell tasks, a service principal must have **server administrator** privileges on the Analysis Services server being managed. This article describes how to add a service principal to the server administrators role on an Azure AS server. You can do this using SQL Server Management Studio or a Resource Manager template.
+ To automate unattended PowerShell tasks, a service principal must have **server administrator** privileges on the Analysis Services server being managed. This article describes how to add a service principal to the server administrators role on an Analysis Services server. You can do this using SQL Server Management Studio or a Resource Manager template.
> [!NOTE]
> Service principals must be added directly to the server administrator role. Adding a service principal to a security group, and then adding that security group to the server administrator role is not supported.
Before completing this task, you must have a service principal registered in Azu
## Using SQL Server Management Studio
-You can configure server administrators using SQL Server Management Studio (SSMS). To complete this task, you must have [server administrator](analysis-services-server-admins.md) permissions on the Azure AS server.
+You can configure server administrators using SQL Server Management Studio (SSMS). To complete this task, you must have [server administrator](analysis-services-server-admins.md) permissions on the Analysis Services server.
-1. In SSMS, connect to your Azure AS server.
+1. In SSMS, connect to your Analysis Services server.
2. In **Server Properties** > **Security**, click **Add**. 3. In **Select a User or Group**, search for your registered app by name, select, and then click **Add**.
- ![Search for service principal account](./media/analysis-services-addservprinc-admins/aas-add-sp-ssms-picker.png)
+ ![Screenshot that shows Search for service principal account.](./media/analysis-services-addservprinc-admins/aas-add-sp-ssms-picker.png)
4. Verify the service principal account ID, and then click **OK**.
The following Resource Manager template deploys an Analysis Services server with
## Using managed identities
-A managed identity can also be added to the Analysis Services Admins list. For example, you might have a [Logic App with a system-assigned managed identity](../logic-apps/create-managed-service-identity.md), and want to grant it the ability to administer your Analysis Services server.
+A managed identity can also be added to the Analysis Services Admins list. For example, you might have a [Logic App with a system-assigned managed identity](../logic-apps/create-managed-service-identity.md), and want to grant it the ability to administer your server.
In most parts of the Azure portal and APIs, managed identities are identified using their service principal object ID. However, Analysis Services requires that they be identified using their client ID. To obtain the client ID for a service principal, you can use the Azure CLI:
analysis-services Analysis Services Async Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-async-refresh.md
Title: Asynchronous refresh for Azure Analysis Services models | Microsoft Docs
+ Title: Learn about asynchronous refresh for Azure Analysis Services models | Microsoft Docs
description: Describes how to use the Azure Analysis Services REST API to code asynchronous refresh of model data.
https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/
By using the base URL, resources and operations can be appended based on the following parameters:
-![Async refresh](./media/analysis-services-async-refresh/aas-async-refresh-flow.png)
+![Diagram that shows asynchronous refresh logic.](./media/analysis-services-async-refresh/aas-async-refresh-flow.png)
- Anything that ends in **s** is a collection. - Anything that ends with **()** is a function.
analysis-services Analysis Services Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-backup.md
Title: Azure Analysis Services database backup and restore | Microsoft Docs
+ Title: Learn about Azure Analysis Services database backup and restore | Microsoft Docs
description: This article describes how to backup and restore model metadata and data from an Azure Analysis Services database. Previously updated : 03/29/2021 Last updated : 01/24/2023
Before backing up, you need to configure storage settings for your server.
### To configure storage settings

1. In Azure portal > **Settings**, click **Backup**.
- ![Backups in Settings](./media/analysis-services-backup/aas-backup-backups.png)
+ ![Screenshot that shows Backups in Settings.](./media/analysis-services-backup/aas-backup-backups.png)
2. Click **Enabled**, then click **Storage Settings**.
- ![Enable](./media/analysis-services-backup/aas-backup-enable.png)
+ ![Screenshot that shows Enabled button.](./media/analysis-services-backup/aas-backup-enable.png)
3. Select your storage account or create a new one. 4. Select a container or create a new one.
- ![Select container](./media/analysis-services-backup/aas-backup-container.png)
+ ![Screenshot that shows selecting a container.](./media/analysis-services-backup/aas-backup-container.png)
5. Save your backup settings.
- ![Save backup settings](./media/analysis-services-backup/aas-backup-save.png)
+ ![Screenshot that shows Save backup settings.](./media/analysis-services-backup/aas-backup-save.png)
## Backup
-### To backup by using SSMS
+### To backup by using SQL Server Management Studio
-1. In SSMS, right-click a database > **Back Up**.
+1. In SQL Server Management Studio (SSMS), right-click a database > **Back Up**.
2. In **Backup Database** > **Backup file**, click **Browse**.
analysis-services Analysis Services Bcdr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-bcdr.md
Title: Azure Analysis Services high availability | Microsoft Docs
+ Title: Learn about Azure Analysis Services high availability | Microsoft Docs
description: This article describes how Azure Analysis Services provides high availability during service disruption. Previously updated : 02/02/2022 Last updated : 01/24/2023
# Analysis Services high availability
-This article describes assuring high availability for Azure Analysis Services servers.
+This article describes assuring high availability for Analysis Services servers in Azure.
## Assuring high availability during a service disruption
analysis-services Analysis Services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-capacity-limits.md
Title: Azure Analysis Services resource and object limits | Microsoft Docs
+ Title: Learn about Azure Analysis Services resource and object limits | Microsoft Docs
description: This article describes resource and object limits for an Azure Analysis Services server. Previously updated : 03/29/2021 Last updated : 01/24/2023
This article describes resource and model object limits.
## Tier limits
-For QPU and Memory limits for developer, basic, and standard tiers, see the [Azure Analysis Services pricing page](https://azure.microsoft.com/pricing/details/analysis-services/).
+For Query Processing Units (QPU) and Memory limits for developer, basic, and standard tiers, see the [Azure Analysis Services pricing page](https://azure.microsoft.com/pricing/details/analysis-services/).
## Object limits
analysis-services Analysis Services Connect Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-excel.md
Title: Connect to Azure Analysis Services with Excel | Microsoft Docs
+ Title: Learn how to connect to Azure Analysis Services with Excel | Microsoft Docs
description: Learn how to connect to an Azure Analysis Services server by using Excel. Once connected, users can create PivotTables to explore data. Previously updated : 05/16/2022 Last updated : 01/24/2023 # Connect with Excel
-Once you've created a server and deployed a tabular model to it, clients can connect and begin exploring data. This article describes connecting to an Azure Analysis Services resource by using the Excel desktop app. Connecting to an Azure Analysis Services resource is not supported in Excel for the web or Excel for Mac.
+ This article describes connecting to an Azure Analysis Services resource by using the Excel desktop app. Connecting to an Azure Analysis Services resource is not supported in Excel for the web or Excel for Mac.
## Before you begin
Connecting to a server in Excel is supported by using Get Data in Excel 2016 and
> [!IMPORTANT]
> If you sign in with a Microsoft Account, Live ID, Yahoo, Gmail, etc., or you are required to sign in with multi-factor authentication, leave the password field blank. You are prompted for a password after clicking Next.
- ![Connect from Excel logon](./media/analysis-services-connect-excel/aas-connect-excel-logon.png)
+ ![Screenshot that shows Connect to Database Server screen in Data Connection Wizard.](./media/analysis-services-connect-excel/aas-connect-excel-logon.png)
3. In **Select Database and Table**, select the database and model or perspective, and then click **Finish**.
- ![Connect from Excel select model](./media/analysis-services-connect-excel/aas-connect-excel-select.png)
+ ![Screenshot that shows selecting a model in Data Connection Wizard.](./media/analysis-services-connect-excel/aas-connect-excel-select.png)
## See also
analysis-services Analysis Services Connect Pbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-pbi.md
Title: Connect to Azure Analysis Services with Power BI | Microsoft Docs
+ Title: Learn how to connect to Azure Analysis Services with Power BI | Microsoft Docs
description: Learn how to connect to an Azure Analysis Services server by using Power BI. Once connected, users can explore model data. Previously updated : 06/30/2021 Last updated : 01/24/2023 # Connect with Power BI
-Once you've created a server in Azure, and deployed a tabular model to it, users in your organization are ready to connect and begin exploring data.
+After you've created a server in Azure and deployed a tabular model to it, users in your organization are ready to connect and begin exploring data.
> [!NOTE] > If publishing a Power BI Desktop model to the Power BI service, on the Azure Analysis Services server, ensure the Case-Sensitive collation server property is not selected (default). The Case-Sensitive server property can be set by using SQL Server Management Studio.
Once you've created a server in Azure, and deployed a tabular model to it, users
5. When prompted to enter your credentials, select **Microsoft account**, and then click **Sign in**.
- :::image type="content" source="media/analysis-services-connect-pbi/aas-sign-in.png" alt-text="Sign in to Azure AS":::
+ :::image type="content" source="media/analysis-services-connect-pbi/aas-sign-in.png" alt-text="Screenshot showing Sign in to Azure Analysis Services.":::
> [!NOTE] > Windows and Basic authentication are not supported.
analysis-services Analysis Services Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect.md
Title: Connecting to Azure Analysis Services servers| Microsoft Docs
+ Title: Learn about connecting to Azure Analysis Services servers| Microsoft Docs
description: Learn how to connect to and get data from an Analysis Services server in Azure. Previously updated : 02/02/2022 Last updated : 01/24/2023
analysis-services Analysis Services Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-powershell.md
description: This quickstart describes how to create an Azure Analysis Services
Previously updated : 10/12/2021 Last updated : 01/26/2023
analysis-services Analysis Services Create Sample Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-sample-model.md
description: In this tutorial, learn how to add a sample model in Azure Analysis
Previously updated : 10/12/2021 Last updated : 01/26/2023 #Customer intent: As a BI developer, from the portal, I want to add a basic sample model database to my server for testing tool and client connections and queries.
Sign in to the [portal](https://portal.azure.com/).
1. In server **Overview**, click **New model**.
- ![Create a sample model](./media/analysis-services-create-sample-model/aas-create-sample-new-model.png)
+ ![Screen showing New model button.](./media/analysis-services-create-sample-model/aas-create-sample-new-model.png)
2. In **New model** > **Choose a data source**, verify **Sample data** is selected, and then click **Add**.
- ![Select New model](./media/analysis-services-create-sample-model/aas-create-sample-data.png)
+ ![Screen showing New model dialog.](./media/analysis-services-create-sample-model/aas-create-sample-data.png)
3. In **Overview**, verify the `adventureworks` sample model is added.
- ![Select sample data](./media/analysis-services-create-sample-model/aas-create-sample-verify.png)
+ ![Screen showing new model on the server.](./media/analysis-services-create-sample-model/aas-create-sample-verify.png)
## Clean up resources
-Your sample model is using cache memory resources. If you are not using your sample model for testing, you should remove it from your server.
+Your sample model is using cache memory resources. If you're not using your sample model for testing, you should remove it from your server.
These steps describe how to delete a model from a server by using SSMS.
analysis-services Analysis Services Create Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-server.md
description: This quickstart describes how to create an Azure Analysis Services
Previously updated : 10/12/2021 Last updated : 01/26/2023
analysis-services Analysis Services Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-template.md
Title: Quickstart - Create an Azure Analysis Services server resource by using A
description: Quickstart showing how to create an Azure Analysis Services server resource by using an Azure Resource Manager template. Previously updated : 10/12/2021 Last updated : 01/26/2023 tags: azure-resource-manager
A single [Microsoft.AnalysisServices/servers](/azure/templates/microsoft.analysi
1. Select the following Deploy to Azure link to sign in to Azure and open a template. The template is used to create an Analysis Services server resource and specify required and optional properties.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.analysisservices%2Fanalysis-services-create%2Fazuredeploy.json)
+ [![Deploy to Azure button](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.analysisservices%2Fanalysis-services-create%2Fazuredeploy.json)
2. Select or enter the following values.
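If you prefer to script the same deployment instead of using the portal button, a minimal Azure PowerShell sketch might look like the following. The resource names and the template's parameter name are assumptions for illustration, not taken from the article.

```powershell
# Create a resource group, then deploy the quickstart template from its raw GitHub URL.
New-AzResourceGroup -Name "analysis-rg" -Location "westus"

New-AzResourceGroupDeployment `
    -ResourceGroupName "analysis-rg" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.analysisservices/analysis-services-create/azuredeploy.json" `
    -serverName "myanalysisserver"   # assumed template parameter name
```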
analysis-services Analysis Services Database Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-database-users.md
Title: Manage database roles and users in Azure Analysis Services | Microsoft Docs
+ Title: Learn how to manage database roles and users in Azure Analysis Services | Microsoft Docs
description: Learn how to manage database roles and users on an Analysis Services server in Azure. Previously updated : 02/02/2022 Last updated : 01/27/2023
When adding a **service principal**, use `app:appid@tenantid`.
8. In **Add External Member**, enter users or groups in your Azure AD tenant by email address. After you click OK and close Role Manager, roles and role members appear in Tabular Model Explorer.
- ![Roles and users in Tabular Model Explorer](./media/analysis-services-database-users/aas-roles-tmexplorer.png)
+ ![Screen showing roles and users in Tabular Model Explorer.](./media/analysis-services-database-users/aas-roles-tmexplorer.png)
9. Deploy to your Azure Analysis Services server.
To add roles and users to a deployed model database, you must be connected to th
4. Click **Membership**, then enter a user or group in your Azure AD tenant by email address.
- ![Add user](./media/analysis-services-database-users/aas-roles-adduser-ssms.png)
+ ![Screen showing Add user.](./media/analysis-services-database-users/aas-roles-adduser-ssms.png)
5. If the role you are creating has Read permission, you can add row filters by using a DAX formula. Click **Row Filters**, select a table, and then type a DAX formula in the **DAX Filter** field.
analysis-services Analysis Services Datasource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-datasource.md
Title: Data sources supported in Azure Analysis Services | Microsoft Docs
+ Title: Learn about data sources supported in Azure Analysis Services | Microsoft Docs
description: Describes data sources and connectors supported for tabular 1200 and higher data models in Azure Analysis Services. Previously updated : 02/02/2022 Last updated : 01/27/2023
Connecting to on-premises data sources from an Azure Analysis Services server re
## Understanding providers
-When creating tabular 1400 and higher model projects in Visual Studio, by default you do not specify a data provider when connecting to a data source by using **Get Data**. Tabular 1400 and higher models use [Power Query](/power-query/power-query-what-is-power-query) connectors to manage connections, data queries, and mashups between the data source and Analysis Services. These are sometimes referred to as *structured* data source connections in that connection property settings are set for you. You can, however, enable legacy data sources for a model project in Visual Studio. When enabled, you can use **Table Import Wizard** to connect to certain data sources traditionally supported in tabular 1200 and lower models as *legacy*, or *provider* data sources. When specified as a provider data source, you can specify a particular data provider and other advanced connection properties. For example, you can connect to a SQL Server Data Warehouse instance or even an Azure SQL Database as a legacy data source. You can then select the OLE DB Driver for SQL Server MSOLEDBSQL data provider. In this case, selecting an OLE DB data provider may provide improved performance over the Power Query connector.
+When creating tabular 1400 and higher model projects in Visual Studio, by default you don't specify a data provider when connecting to a data source by using Get Data. Tabular 1400 and higher models use [Power Query](/power-query/power-query-what-is-power-query) connectors to manage connections, data queries, and mashups between the data source and Analysis Services. These are sometimes referred to as *structured* data source connections in that connection property settings are set for you. You can, however, enable legacy data sources for a model project in Visual Studio. When enabled, you can use Table Import Wizard to connect to certain data sources traditionally supported in tabular 1200 and lower models as *legacy*, or *provider* data sources. When specified as a provider data source, you can specify a particular data provider and other advanced connection properties. For example, you can connect to a SQL Server Data Warehouse instance or even an Azure SQL Database as a legacy data source. You can then select the OLE DB Driver for SQL Server MSOLEDBSQL data provider. In this case, selecting an OLE DB data provider may provide improved performance over the Power Query connector.
-When using the Table Import Wizard in Visual Studio, connections to any data source require a data provider. A default data provider is selected for you. You can change the data provider if needed. The type of provider you choose can depend on performance, whether or not the model is using in-memory storage or DirectQuery, and which Analysis Services platform you deploy your model to.
+When using the Table Import Wizard in Visual Studio, connections to any data source require a data provider. A default data provider is selected for you. You can change the data provider if needed. The type of provider you choose might depend on performance, whether or not the model is using in-memory storage or DirectQuery, and which Analysis Services platform you deploy your model to.
### Specify provider data sources in tabular 1400 and higher model projects

To enable provider data sources, in Visual Studio, click **Tools** > **Options** > **Analysis Services Tabular** > **Data Import**, and then select **Enable legacy data sources**.
-![Enable legacy data sources](media/analysis-services-datasource/aas-enable-legacy-datasources.png)
+![Screenshot of Enable legacy data sources.](media/analysis-services-datasource/aas-enable-legacy-datasources.png)
With legacy data sources enabled, in **Tabular Model Explorer**, right-click **Data Sources** > **Import From Data Source (Legacy)**.
-![Legacy data sources in Tabular Model Explorer](media/analysis-services-datasource/aas-import-legacy-datasources.png)
+![Screenshot of Legacy data sources in Tabular Model Explorer.](media/analysis-services-datasource/aas-import-legacy-datasources.png)
Just like with tabular 1200 model projects, use **Table Import Wizard** to connect to a data source. On the connect page, click **Advanced**. Specify data provider and other connection settings in **Set Advanced Properties**.
-![Legacy data sources Advanced properties](media/analysis-services-datasource/aas-import-legacy-advanced.png)
+![Screenshot of Legacy data sources Advanced properties.](media/analysis-services-datasource/aas-import-legacy-advanced.png)
## Impersonation

In some cases, it may be necessary to specify a different impersonation account. The impersonation account can be specified in Visual Studio or SQL Server Management Studio (SSMS).
DirectQuery mode is not supported with OAuth credentials.
## Enable Oracle managed provider
-In some cases, DAX queries to an Oracle data source may return unexpected results. This can be due to the provider being used for the data source connection.
+In some cases, DAX queries to an Oracle data source may return unexpected results. This might be due to the provider being used for the data source connection.
As described in the [Understanding providers](#understanding-providers) section, tabular models connect to data sources as either a *structured* data source or a *provider* data source. For models with an Oracle data source specified as a provider data source, ensure the specified provider is Oracle Data Provider for .NET (Oracle.DataAccess.Client).
analysis-services Analysis Services Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-deploy.md
Title: Deploy a model to Azure Analysis Services by using Visual Studio | Microsoft Docs
+ Title: Learn how to deploy a model to Azure Analysis Services by using Visual Studio | Microsoft Docs
description: Learn how to deploy a tabular model to an Azure Analysis Services server by using Visual Studio. Previously updated : 12/01/2020 Last updated : 01/27/2023
To get started, you need:
* **On-premises gateway** - If one or more data sources are on-premises in your organization's network, you need to install an [On-premises data gateway](analysis-services-gateway.md). The gateway is necessary for your server in the cloud to connect to your on-premises data sources to process and refresh data in the model. > [!TIP]
-> Before you deploy, make sure you can process the data in your tables. In Visual Studio, click **Model** > **Process** > **Process All**. If processing fails, you cannot successfully deploy.
+> Before you deploy, make sure you can process the data in your tables. In Visual Studio, click **Model** > **Process** > **Process All**. If processing fails, you can't successfully deploy.
> >
To get started, you need:
In **Azure portal** > server > **Overview** > **Server name**, copy the server name.
-![Get server name in Azure](./media/analysis-services-deploy/aas-deploy-get-server-name.png)
+![Screenshot showing how to get server name in Azure.](./media/analysis-services-deploy/aas-deploy-get-server-name.png)
## To deploy from Visual Studio 1. In Visual Studio > **Solution Explorer**, right-click the project > **Properties**. Then in **Deployment** > **Server** paste the server name.
- ![Paste server name into deployment server property](./media/analysis-services-deploy/aas-deploy-deployment-server-property.png)
+ ![Screenshot showing how to paste server name into deployment server property.](./media/analysis-services-deploy/aas-deploy-deployment-server-property.png)
2. In **Solution Explorer**, right-click **Properties**, then click **Deploy**. You may be prompted to sign in to Azure.
- ![Deploy to server](./media/analysis-services-deploy/aas-deploy-deploy.png)
+ ![Screenshot showing Deploy to server.](./media/analysis-services-deploy/aas-deploy-deploy.png)
Deployment status appears in both the Output window and in Deploy.
- ![Deployment status](./media/analysis-services-deploy/aas-deploy-status.png)
+ ![Screenshot showing deployment status.](./media/analysis-services-deploy/aas-deploy-status.png)
That's all there is to it!
analysis-services Analysis Services Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway-install.md
Title: Install On-premises data gateway for Azure Analysis Services | Microsoft Docs
+ Title: Learn how to install On-premises data gateway for Azure Analysis Services | Microsoft Docs
description: Learn how to install and configure an On-premises data gateway to connect to on-premises data sources from an Azure Analysis Services server. Previously updated : 01/31/2022 Last updated : 01/27/2023
To learn more about how Azure Analysis Services works with the gateway, see [Con
2. Select **On-premises data gateway**.
- ![Select](media/analysis-services-gateway-install/aas-gateway-installer-select.png)
+ ![Screenshot showing gateway selection.](media/analysis-services-gateway-install/aas-gateway-installer-select.png)
2. Select a location, accept the terms, and then click **Install**.
- ![Install location and license terms](media/analysis-services-gateway-install/aas-gateway-installer-accept.png)
+ ![Screenshot showing install location and license terms.](media/analysis-services-gateway-install/aas-gateway-installer-accept.png)
3. Sign in to Azure. The account must be in your tenant's Azure Active Directory. This account is used for the gateway administrator. Azure B2B (guest) accounts are not supported when installing and registering the gateway.
- ![Sign in to Azure](media/analysis-services-gateway-install/aas-gateway-installer-account.png)
+ ![Screenshot showing sign in to Azure.](media/analysis-services-gateway-install/aas-gateway-installer-account.png)
> [!NOTE] > If you sign in with a domain account, it's mapped to your organizational account in Azure AD. Your organizational account is used as the gateway administrator.
In order to create a gateway resource in Azure, you must register the local inst
> [!IMPORTANT] > Save your recovery key in a safe place. The recovery key is required in order to take over, migrate, or restore a gateway.
- ![Register](media/analysis-services-gateway-install/aas-gateway-register-name.png)
+ ![Screenshot showing Register.](media/analysis-services-gateway-install/aas-gateway-register-name.png)
## Create an Azure gateway resource
After you've installed and registered your gateway, you need to create a gateway
1. In Azure portal, click **Create a resource**, then search for **On-premises data gateway**, and then click **Create**.
- ![Create a gateway resource](media/analysis-services-gateway-install/aas-gateway-new-azure-resource.png)
+ ![Screenshot showing create a gateway resource.](media/analysis-services-gateway-install/aas-gateway-new-azure-resource.png)
2. In **Create connection gateway**, enter these settings:
After you've installed and registered your gateway, you need to create a gateway
1. In your Azure Analysis Services server overview, click **On-Premises Data Gateway**.
- ![Connect server to gateway](media/analysis-services-gateway-install/aas-gateway-connect-server.png)
+ ![Screenshot showing On-Premises Data Gateway in Settings.](media/analysis-services-gateway-install/aas-gateway-connect-server.png)
2. In **Pick an On-Premises Data Gateway to connect**, select your gateway resource, and then click **Connect selected gateway**.
- ![Connect server to gateway resource](media/analysis-services-gateway-install/aas-gateway-connect-resource.png)
+ ![Screenshot showing Connect server to gateway resource.](media/analysis-services-gateway-install/aas-gateway-connect-resource.png)
> [!NOTE] > If your gateway does not appear in the list, your server is likely not in the same region as the region you specified when registering the gateway.
After you've installed and registered your gateway, you need to create a gateway
When the connection between your server and gateway resource is successful, the status shows **Connected**.
- ![Connect server to gateway resource success](media/analysis-services-gateway-install/aas-gateway-connect-success.png)
+ ![Screenshot showing connect server to gateway resource success.](media/analysis-services-gateway-install/aas-gateway-connect-success.png)
# [PowerShell](#tab/azure-powershell)
-Use [Get-AzResource](/powershell/module/az.resources/get-azresource) to get the the gateway ResourceID. Then connect the gateway resource to an existing or new server by specifying **-GatewayResourceID** in [Set-AzAnalysisServicesServer](/powershell/module/az.analysisservices/set-azanalysisservicesserver) or [New-AzAnalysisServicesServer](/powershell/module/az.analysisservices/new-azanalysisservicesserver).
+Use [Get-AzResource](/powershell/module/az.resources/get-azresource) to get the gateway ResourceID. Then connect the gateway resource to an existing or new server by specifying **-GatewayResourceID** in [Set-AzAnalysisServicesServer](/powershell/module/az.analysisservices/set-azanalysisservicesserver) or [New-AzAnalysisServicesServer](/powershell/module/az.analysisservices/new-azanalysisservicesserver).
To get the gateway resource ID:
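The article's snippet isn't reproduced here; as a rough sketch of the flow (resource names and groups are placeholders):

```powershell
# Look up the gateway resource, then hand its ID to the Analysis Services server.
$gateway = Get-AzResource -ResourceGroupName "gateway-rg" -Name "mygateway"

Set-AzAnalysisServicesServer -ResourceGroupName "aas-rg" -Name "myserver" `
    -GatewayResourceId $gateway.ResourceId
```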
analysis-services Analysis Services Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway.md
Title: On-premises data gateway for Azure Analysis Services | Microsoft Docs
+ Title: Learn about the On-premises data gateway for Azure Analysis Services | Microsoft Docs
description: An On-premises gateway is necessary if your Analysis Services server in Azure will connect to on-premises data sources. Previously updated : 02/02/2022 Last updated : 01/27/2023
When installing for an Azure Analysis Services environment, it's important you f
## Connecting to a gateway resource in a different subscription
-It's recommended you create your Azure gateway resource in the same subscription as your server. However, you can configure your servers to connect to a gateway resource in another subscription. Connecting to a gateway resource in another subscription is not supported when configuring existing server settings or creating a new server in the portal, but can be configured by using PowerShell. To learn more, see [Connect gateway resource to server](analysis-services-gateway-install.md#connect-gateway-resource-to-server).
+It's recommended you create your Azure gateway resource in the same subscription as your server. However, you can configure servers to connect to a gateway resource in another subscription. Connecting to a gateway resource in another subscription isn't supported when configuring existing server settings or creating a new server in the portal, but can be configured by using PowerShell. To learn more, see [Connect gateway resource to server](analysis-services-gateway-install.md#connect-gateway-resource-to-server).
## Ports and communication settings
-The gateway creates an outbound connection to Azure Service Bus. It communicates on outbound ports: TCP 443 (default), 5671, 5672, 9350 through 9354. The gateway does not require inbound ports.
+The gateway creates an outbound connection to Azure Service Bus. It communicates on outbound ports: TCP 443 (default), 5671, 5672, 9350 through 9354. The gateway doesn't require inbound ports.
-You may need to include IP addresses for your data region in your firewall. You can download the [Microsoft Azure Datacenter IP list](https://www.microsoft.com/download/details.aspx?id=56519). This list is updated weekly. The IP Addresses listed in the Azure Datacenter IP list are in CIDR notation. To learn more, see [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
+You may need to include IP addresses for your data region in your firewall. Download the [Microsoft Azure Datacenter IP list](https://www.microsoft.com/download/details.aspx?id=56519). This list is updated weekly. The IP Addresses listed in the Azure Datacenter IP list are in CIDR notation. To learn more, see [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
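To spot-check outbound connectivity from the gateway machine, a sketch like the following can help. The host shown is illustrative only; substitute an endpoint from the FQDN list that follows.

```powershell
# Probe each outbound port the gateway uses; TcpTestSucceeded should be True.
$ports = 443, 5671, 5672, 9350, 9351, 9352, 9353, 9354
foreach ($port in $ports) {
    Test-NetConnection -ComputerName "mynamespace.servicebus.windows.net" -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```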
The following are fully qualified domain names used by the gateway.
analysis-services Analysis Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md
Title: Diagnostic logging for Azure Analysis Services | Microsoft Docs
+ Title: Learn about diagnostic logging for Azure Analysis Services | Microsoft Docs
description: Describes how to set up logging to monitor your Azure Analysis Services server. Previously updated : 04/27/2021 Last updated : 01/27/2023
The Metrics category logs the same [Server metrics](analysis-services-monitor.md
1. In [Azure portal](https://portal.azure.com) > server, click **Diagnostic settings** in the left navigation, and then click **Turn on diagnostics**.
- ![Turn on resource logging for Azure Cosmos DB in the Azure portal](./media/analysis-services-logging/aas-logging-turn-on-diagnostics.png)
+ ![Screenshot showing Turn on diagnostics in the Azure portal.](./media/analysis-services-logging/aas-logging-turn-on-diagnostics.png)
2. In **Diagnostic settings**, specify the following options:
Metrics and server events are integrated with xEvents in your Log Analytics work
To view your diagnostic data, in Log Analytics workspace, open **Logs** from the left menu.
-![Log Search options in the Azure portal](./media/analysis-services-logging/aas-logging-open-log-search.png)
+![Screenshot showing log Search options in the Azure portal.](./media/analysis-services-logging/aas-logging-open-log-search.png)
In the query builder, expand **LogManagement** > **AzureDiagnostics**. AzureDiagnostics includes Engine and Service events. Notice a query is created on-the-fly. The EventClass\_s field contains xEvent names, which may look familiar if you've used xEvents for on-premises logging. Click **EventClass\_s** or one of the event names and Log Analytics workspace continues constructing a query. Be sure to save your queries to reuse later.
There are hundreds of queries you can use. To learn more about queries, see [Get
In this quick tutorial, you create a storage account in the same subscription and resource group as your Analysis Services server. You then use Set-AzDiagnosticSetting to turn on diagnostics logging, sending output to the new storage account.

### Prerequisites

To complete this tutorial, you must have the following resources:

* An existing Azure Analysis Services server. For instructions on creating a server resource, see [Create a server in Azure portal](analysis-services-create-server.md), or [Create an Azure Analysis Services server by using PowerShell](analysis-services-create-powershell.md).
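A condensed sketch of the tutorial's flow, assuming the Engine and Service log categories and placeholder names throughout:

```powershell
# Create a storage account, then point the server's diagnostic logs at it.
$sa = New-AzStorageAccount -ResourceGroupName "aas-rg" -Name "aaslogsstore" `
    -Location "westus" -SkuName Standard_LRS

$server = Get-AzAnalysisServicesServer -ResourceGroupName "aas-rg" -Name "myserver"

Set-AzDiagnosticSetting -ResourceId $server.Id -StorageAccountId $sa.Id `
    -Enabled $true -Category "Engine", "Service"
```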
analysis-services Analysis Services Long Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-long-operations.md
Title: Best practices for long running operations in Azure Analysis Services | Microsoft Docs
+ Title: Learn about best practices for long running operations in Azure Analysis Services | Microsoft Docs
description: This article describes best practices for long running operations. Previously updated : 04/27/2021 Last updated : 01/27/2023
There are many reasons why long running operations can be disrupted. For example
- Azure Analysis Services service updates - Service Fabric updates. Service Fabric is a platform component used by a number of Microsoft cloud services, including Azure Analysis Services.
-Besides updates that occur in the service, there is a natural movement of services across nodes due to load balancing. Node movements are an expected part of a cloud service. Azure Analysis Services tries to minimize impacts from node movements, but it's impossible to eliminate them entirely.
+Besides updates that occur in the service, there's a natural movement of services across nodes due to load balancing. Node movements are an expected part of a cloud service. Azure Analysis Services tries to minimize impacts from node movements, but it's impossible to eliminate them entirely.
-In addition to node movements, there are other failures that can occur. For example, a data source database system might be offline or network connectivity is lost. If during refresh, a partition has 10 million rows and a failure occurs at the 9 millionth row, there is no way to restart refresh at the point of failure. The service has to start again from the beginning.
+In addition to node movements, other failures can occur. For example, a data source database system might be offline or network connectivity is lost. If during refresh, a partition has 10 million rows and a failure occurs at the 9 millionth row, there's no way to restart refresh at the point of failure. The service must be started again from the beginning.
## Refresh REST API Service interruptions can be challenging for long running operations like data refresh. Azure Analysis Services includes a REST API to help mitigate negative impacts from service interruptions. To learn more, see [Asynchronous refresh with the REST API](analysis-services-async-refresh.md).
-Besides the REST API, there are other approaches you can use to minimize potential issues during long running refresh operations. The main goal is to avoid having to restart the refresh operation from the beginning, and instead perform refreshes in smaller batches that can be committed in stages.
+Besides the REST API, there are other approaches you can use to minimize potential issues during long running refresh operations. The goal is to avoid having to restart the refresh operation from the beginning, and instead perform refreshes in smaller batches that can be committed in stages.
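For reference, a hedged sketch of starting one such batch through the refresh REST API. The endpoint shape follows the asynchronous refresh article; `$accessToken` is assumed to hold a valid Azure AD access token, and the server, region, and model names are placeholders.

```powershell
# POST a refresh for one model; per-batch refreshes can then be committed in stages.
$uri = "https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/refreshes"
$body = @{ Type = "Full"; CommitMode = "transactional"; MaxParallelism = 2 } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $uri `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" -Body $body
```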
The REST API allows for such a restart, but it doesn't allow for full flexibility of partition creation and deletion. If a scenario requires complex data management operations, your solution should include some form of batching in its logic. For example, using transactions to process data in multiple, separate batches means a failure doesn't require a restart from the beginning, but only from an intermediate checkpoint.

## Scale-out query replicas
-Whether using REST or custom logic, client application queries can still return inconsistent or intermediate results while batches are being processed. If consistent data returned by client application queries is required while processing is happening, and model data is in an intermediate state, you can use [scale-out](analysis-services-scale-out.md) with read-only query replicas.
+Whether using REST or custom logic, client application queries can still return inconsistent or intermediate results while batches are being processed. If consistent data returned by client application queries is required while processing is happening, and model data is in an intermediate state, use [scale-out](analysis-services-scale-out.md) with read-only query replicas.
By using read-only query replicas, while refreshes are being performed in batches, client application users can continue to query the old snapshot of data on the read-only replicas. Once refreshes are finished, a Synch operation can be performed to bring the read-only replicas up to date.
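With Azure PowerShell, the sync step can be a single call (server and database names are placeholders):

```powershell
# Sign in first with Add-AzAnalysisServicesAccount, then sync the read-only replicas.
Sync-AzAnalysisServicesInstance -Instance "asazure://westus.asazure.windows.net/myserver" `
    -Database "AdventureWorks"
```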
api-management Protect With Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-ddos-protection.md
Previously updated : 10/24/2022 Last updated : 01/24/2023 # Defend your Azure API Management instance against DDoS attacks
This article shows how to defend your Azure API Management instance against dist
## Supported configurations
-Enabling Azure DDoS Protection for API Management is currently available only for instances deployed (injected) in a VNet in [external mode](api-management-using-with-vnet.md).
+Enabling Azure DDoS Protection for API Management is supported only for instances **deployed (injected) in a VNet** in [external mode](api-management-using-with-vnet.md) or [internal mode](api-management-using-with-internal-vnet.md).
-Currently, Azure DDoS Protection can't be enabled for the following API Management configurations:
+* External mode - All API Management endpoints are protected
+* Internal mode - Only the management endpoint accessible on port 3443 is protected
+
+### Unsupported configurations
* Instances that aren't VNet-injected
-* Instances deployed in a VNet in [internal mode](api-management-using-with-internal-vnet.md)
* Instances configured with a [private endpoint](private-endpoint.md)

## Prerequisites

* An API Management instance
- * The instance must be deployed in an Azure VNet in [external mode](api-management-using-with-vnet.md)
- * The instance to be configured with an Azure public IP address resource, which is supported only on the API Management `stv2` [compute platform](compute-infrastructure.md).
- * If the instance is hosted on the `stv1` platform, you must [migrate](compute-infrastructure.md#how-do-i-migrate-to-the-stv2-platform) to the `stv2` platform.
+ * The instance must be deployed in an Azure VNet in [external mode](api-management-using-with-vnet.md) or [internal mode](api-management-using-with-internal-vnet.md).
+ * The instance must be configured with an Azure public IP address resource, which is supported only on the API Management `stv2` [compute platform](compute-infrastructure.md).
+ > [!NOTE]
+ > If the instance is hosted on the `stv1` platform, you must [migrate](compute-infrastructure.md#how-do-i-migrate-to-the-stv2-platform) to the `stv2` platform.
* An Azure DDoS Protection [plan](../ddos-protection/manage-ddos-protection.md)
    * The plan you select can be in the same, or a different, subscription than the virtual network and the API Management instance. If the subscriptions differ, they must be associated to the same Azure Active Directory tenant.
    * You may use a plan created using either the Network DDoS Protection SKU or IP DDoS Protection SKU (preview). See [Azure DDoS Protection SKU Comparison](../ddos-protection/ddos-protection-sku-comparison.md).
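As a sketch, attaching an existing plan to the VNet with Azure PowerShell might look like this (all names are placeholders):

```powershell
# Attach the DDoS protection plan to the VNet that hosts the API Management instance.
$plan = Get-AzDdosProtectionPlan -ResourceGroupName "ddos-rg" -Name "myDdosPlan"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "apim-rg" -Name "apim-vnet"

$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork
```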
app-service Deploy Run Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-run-package.md
The command also restarts the app. Because `WEBSITE_RUN_FROM_PACKAGE` is set, Ap
## Run from external URL instead
-You can also run a package from an external URL, such as Azure Blob Storage. You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to your Blob storage account. You should use a private storage container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the App Service runtime to access the package securely.
+You can also run a package from an external URL, such as Azure Blob Storage. You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to your Blob storage account. You should use a private storage container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#access-a-package-in-azure-blob-storage-using-a-managed-identity) to enable the App Service runtime to access the package securely.
Once you upload your file to Blob storage and have an SAS URL for the file, set the `WEBSITE_RUN_FROM_PACKAGE` app setting to the URL. The following example does it by using Azure CLI:
az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings WEBSITE_RUN_FROM_PACKAGE="<external-package-url>"
If you publish an updated package with the same name to Blob storage, you need to restart your app so that the updated package is loaded into App Service.
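If you manage the app with Azure PowerShell instead, the restart is one call:

```powershell
# Restart so App Service picks up the updated package from Blob storage.
Restart-AzWebApp -ResourceGroupName "<resource-group-name>" -Name "<app-name>"
```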
-### Fetch a package from Azure Blob Storage using a managed identity
+### Access a package in Azure Blob Storage using a managed identity
[!INCLUDE [Run from package via Identity](../../includes/app-service-run-from-package-via-identity.md)]
+## Deploy WebJob files when running from package
+
+There are two ways to deploy [WebJob](webjobs-create.md) files when you [enable running an app from package](#enable-running-from-package):
++
+- Deploy in the same ZIP package as your app: include them as you normally would in `<project-root>\app_data\jobs\...` (which maps to the deployment path `\site\wwwroot\app_data\jobs\...` as specified in the [WebJobs quickstart](webjobs-create.md#webjob-types)).
+- Deploy separately from the ZIP package of your app: Since the usual deployment path `\site\wwwroot\app_data\jobs\...` is now read-only, you can't deploy WebJob files there. Instead, deploy WebJob files to `\site\jobs\...`, which isn't read-only (see the sketch after the note below). WebJobs deployed to `\site\wwwroot\app_data\jobs\...` and `\site\jobs\...` both run.
+
+> [!NOTE]
+> When `\site\wwwroot` becomes read-only, operations like the creation of the *disable.job* will fail.
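As a purely hypothetical sketch, one way to push a triggered WebJob script into the writable `\site\jobs\...` path is the Kudu VFS API. The job name, script file, and use of basic-auth publishing credentials are all assumptions for illustration.

```powershell
# PUT the script under \site\jobs\triggered\<job-name>\ using the site's publishing credentials.
$user = "<deployment-username>"
$pass = "<deployment-password>"
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($user):$($pass)"))

Invoke-RestMethod -Method Put `
    -Uri "https://<app-name>.scm.azurewebsites.net/api/vfs/site/jobs/triggered/myjob/run.ps1" `
    -Headers @{ Authorization = "Basic $auth"; "If-Match" = "*" } `
    -InFile ".\run.ps1"
```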
+ ## Troubleshooting - Running directly from a package makes `wwwroot` read-only. Your app will receive an error if it tries to write files to this directory.
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
The following environment variables are related to the app environment in genera
| `WEBSITE_PLATFORM_VERSION` | Read-only. App Service platform version. || | `HOME` | Read-only. Path to the home directory (for example, `D:\home` for Windows). || | `SERVER_PORT` | Read-only. The port the app should listen to. | |
-| `WEBSITE_WARMUP_PATH` | A relative path to ping to warm up the app, beginning with a slash. The default is `/`, which pings the root path. The specific path can be pinged by an unauthenticated client, such as Azure Traffic Manager, even if [App Service authentication](overview-authentication-authorization.md) is set to reject unauthenticated clients. (NOTE: This app setting does not change the path used by AlwaysOn.) ||
+| `WEBSITE_WARMUP_PATH` | A relative path to ping to warm up the app, beginning with a slash. The default is `/`, which pings the root path. The specific path can be pinged by an unauthenticated client, such as Azure Traffic Manager, even if [App Service authentication](overview-authentication-authorization.md) is set to reject unauthenticated clients. (NOTE: This app setting doesn't change the path used by AlwaysOn.) ||
| `WEBSITE_COMPUTE_MODE` | Read-only. Specifies whether app runs on dedicated (`Dedicated`) or shared (`Shared`) VM/s. || | `WEBSITE_SKU` | Read-only. SKU of the app. Possible values are `Free`, `Shared`, `Basic`, and `Standard`. || | `SITE_BITNESS` | Read-only. Shows whether the app is 32-bit (`x86`) or 64-bit (`AMD64`). ||
-| `WEBSITE_HOSTNAME` | Read-only. Primary hostname for the app. Custom hostnames are not accounted for here. ||
+| `WEBSITE_HOSTNAME` | Read-only. Primary hostname for the app. Custom hostnames aren't accounted for here. ||
| `WEBSITE_VOLUME_TYPE` | Read-only. Shows the storage volume type currently in use. || | `WEBSITE_NPM_DEFAULT_VERSION` | Default npm version the app is using. || | `WEBSOCKET_CONCURRENT_REQUEST_LIMIT` | Read-only. Limit for websocket's concurrent requests. For **Standard** tier and above, the value is `-1`, but there's still a per VM limit based on your VM size (see [Cross VM Numerical Limits](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#cross-vm-numerical-limits)). || | `WEBSITE_PRIVATE_EXTENSIONS` | Set to `0` to disable the use of private site extensions. || | `WEBSITE_TIME_ZONE` | By default, the time zone for the app is always UTC. You can change it to any of the valid values that are listed in [TimeZone](/previous-versions/windows/it-pro/windows-vista/cc749073(v=ws.10)). If the specified value isn't recognized, UTC is used. | `Atlantic Standard Time` |
-| `WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG` | After slot swaps, the app may experience unexpected restarts. This is because after a swap, the hostname binding configuration goes out of sync, which by itself doesn't cause restarts. However, certain underlying storage events (such as storage volume failovers) may detect these discrepancies and force all worker processes to restart. To minimize these types of restarts, set the app setting value to `1`on all slots (default is`0`). However, do not set this value if you are running a Windows Communication Foundation (WCF) application. For more information, see [Troubleshoot swaps](deploy-staging-slots.md#troubleshoot-swaps)||
+| `WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG` | After slot swaps, the app may experience unexpected restarts. This is because after a swap, the hostname binding configuration goes out of sync, which by itself doesn't cause restarts. However, certain underlying storage events (such as storage volume failovers) may detect these discrepancies and force all worker processes to restart. To minimize these types of restarts, set the app setting value to `1` on all slots (default is `0`). However, don't set this value if you're running a Windows Communication Foundation (WCF) application. For more information, see [Troubleshoot swaps](deploy-staging-slots.md#troubleshoot-swaps). ||
| `WEBSITE_PROACTIVE_AUTOHEAL_ENABLED` | By default, a VM instance is proactively "autohealed" when it's using more than 90% of allocated memory for more than 30 seconds, or when 80% of the total requests in the last two minutes take longer than 200 seconds. If a VM instance has triggered one of these rules, the recovery process is an overlapping restart of the instance. Set to `false` to disable this recovery behavior. The default is `true`. For more information, see [Proactive Auto Heal](https://azure.github.io/AppService/2017/08/17/Introducing-Proactive-Auto-Heal.html). || | `WEBSITE_PROACTIVE_CRASHMONITORING_ENABLED` | Whenever the w3wp.exe process on a VM instance of your app crashes due to an unhandled exception more than three times in 24 hours, a debugger process is attached to the main worker process on that instance, and collects a memory dump when the worker process crashes again. This memory dump is then analyzed and the call stack of the thread that caused the crash is logged in your App Service's logs. Set to `false` to disable this automatic monitoring behavior. The default is `true`. For more information, see [Proactive Crash Monitoring](https://azure.github.io/AppService/2021/03/01/Proactive-Crash-Monitoring-in-Azure-App-Service.html). || | `WEBSITE_DAAS_STORAGE_SASURI` | During crash monitoring (proactive or manual), the memory dumps are deleted by default. To save the memory dumps to a storage blob container, specify the SAS URI. ||
WEBSITE_CLASSIC_MODE
## Variable prefixes
-The following table shows environment variables prefixes that App Service uses for various purposes.
+The following table shows environment variable prefixes that App Service uses for various purposes.
| Setting name | Description | |-|-|
The following environment variables are related to app deployment. For variables
| Setting name| Description | |-|-|
-| `DEPLOYMENT_BRANCH`| For [local Git](deploy-local-git.md) or [cloud Git](deploy-continuous-deployment.md) deployment (such as GitHub), set to the branch in Azure you want to deploy to. By default, it is `master`. |
+| `DEPLOYMENT_BRANCH`| For [local Git](deploy-local-git.md) or [cloud Git](deploy-continuous-deployment.md) deployment (such as GitHub), set to the branch in Azure you want to deploy to. By default, it's `master`. |
| `WEBSITE_RUN_FROM_PACKAGE`| Set to `1` to run the app from a local ZIP package, or set to the URL of an external package to run the app from a remote ZIP package. For more information, see [Run your app in Azure App Service directly from a ZIP package](deploy-run-package.md). | | `WEBSITE_USE_ZIP` | Deprecated. Use `WEBSITE_RUN_FROM_PACKAGE`. | | `WEBSITE_RUN_FROM_ZIP` | Deprecated. Use `WEBSITE_RUN_FROM_PACKAGE`. | | `WEBSITE_WEBDEPLOY_USE_SCM` | Set to `false` for WebDeploy to stop using the Kudu deployment engine. The default is `true`. To deploy to Linux apps using Visual Studio (WebDeploy/MSDeploy), set it to `false`. |
-| `MSDEPLOY_RENAME_LOCKED_FILES` | Set to `1` to attempt to rename DLLs if they can't be copied during a WebDeploy deployment. This setting is not applicable if `WEBSITE_WEBDEPLOY_USE_SCM` is set to `false`. |
+| `MSDEPLOY_RENAME_LOCKED_FILES` | Set to `1` to attempt to rename DLLs if they can't be copied during a WebDeploy deployment. This setting isn't applicable if `WEBSITE_WEBDEPLOY_USE_SCM` is set to `false`. |
| `WEBSITE_DISABLE_SCM_SEPARATION` | By default, the main app and the Kudu app run in different sandboxes. When you stop the app, the Kudu app is still running, and you can continue to use Git deploy and MSDeploy. Each app has its own local files. Turning off this separation (setting to `true`) is a legacy mode that's no longer fully supported. | | `WEBSITE_ENABLE_SYNC_UPDATE_SITE` | Set to `1` to ensure that REST API calls to update `site` and `siteconfig` are completely applied to all instances before returning. The default is `1` if deploying with an ARM template, to avoid race conditions with subsequent ARM calls. | | `WEBSITE_START_SCM_ON_SITE_CREATION` | In an ARM template deployment, set to `1` in the ARM template to pre-start the Kudu app as part of app creation. |
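For illustration, here's one way to set a value from this table with Azure PowerShell. Note that `Set-AzWebApp -AppSettings` replaces the entire collection, so merge with the existing settings first; the names are placeholders.

```powershell
# Read current settings, add or overwrite one, and write the merged set back.
$app = Get-AzWebApp -ResourceGroupName "<resource-group-name>" -Name "<app-name>"
$settings = @{}
foreach ($s in $app.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }
$settings["WEBSITE_RUN_FROM_PACKAGE"] = "1"

Set-AzWebApp -ResourceGroupName "<resource-group-name>" -Name "<app-name>" -AppSettings $settings
```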
Kudu build configuration applies to native Windows apps and is used to control t
| `SCM_BUILD_ARGS` | Add things at the end of the msbuild command line, such that it overrides any previous parts of the default command line. | To do a clean build: `-t:Clean;Compile`| | `SCM_SCRIPT_GENERATOR_ARGS` | Kudu uses the `azure site deploymentscript` command described [here](http://blog.amitapple.com/post/38418009331/azurewebsitecustomdeploymentpart2) to generate a deployment script. It automatically detects the language framework type and determines the parameters to pass to the command. This setting overrides the automatically generated parameters. | To treat your repository as plain content files: `--basic -p <folder-to-deploy>` | | `SCM_TRACE_LEVEL` | Build trace level. The default is `1`. Set to higher values, up to 4, for more tracing. | `4` |
-| `SCM_COMMAND_IDLE_TIMEOUT` | Time-out in seconds for each command that the build process launches to wait before without producing any output. After that, the command is considered idle and killed. The default is `60` (one minute). In Azure, there's also a general idle request timeout that disconnects clients after 230 seconds. However, the command will still continue running server-side after that. | |
+| `SCM_COMMAND_IDLE_TIMEOUT` | Time out in seconds that each command launched by the build process can wait without producing any output. After that, the command is considered idle and killed. The default is `60` (one minute). In Azure, there's also a general idle request timeout that disconnects clients after 230 seconds. However, the command will still continue running server-side after that. | |
| `SCM_LOGSTREAM_TIMEOUT` | Time-out of inactivity in seconds before stopping log streaming. The default is `1800` (30 minutes).| | | `SCM_SITEEXTENSIONS_FEED_URL` | URL of the site extensions gallery. The default is `https://www.nuget.org/api/v2/`. The URL of the old feed is `http://www.siteextensions.net/api/v2/`. | | | `SCM_USE_LIBGIT2SHARP_REPOSITORY` | Set to `0` to use git.exe instead of libgit2sharp for git operations. | | | `WEBSITE_LOAD_USER_PROFILE` | In case of the error `The specified user does not have a valid profile.` during ASP.NET build automation (such as during Git deployment), set this variable to `1` to load a full user profile in the build environment. This setting is only applicable when `WEBSITE_COMPUTE_MODE` is `Dedicated`. | |
-| `WEBSITE_SCM_IDLE_TIMEOUT_IN_MINUTES` | Time-out in minutes for the SCM (Kudu) site. The default is `20`. | |
+| `WEBSITE_SCM_IDLE_TIMEOUT_IN_MINUTES` | Time out in minutes for the SCM (Kudu) site. The default is `20`. | |
| `SCM_DO_BUILD_DURING_DEPLOYMENT` | With [ZIP deploy](deploy-zip.md), the deployment engine assumes that a ZIP file is ready to run as-is and doesn't run any build automation. To enable the same build automation as in [Git deploy](deploy-local-git.md), set to `true`. | <!--
This section shows the configurable runtime settings for each supported language
|-|-|-| | `JAVA_HOME` | Path of the Java installation directory || | `JAVA_OPTS` | For Java SE apps, environment variables to pass into the `java` command. Can contain system variables. For Tomcat, use `CATALINA_OPTS`. | `-Dmysysproperty=%DRIVEPATH%` |
-| `AZURE_JAVA_APP_PATH` | Environment variable can be used by custom scripts So they have a location for app.jar after it's copied to local | |
+| `AZURE_JAVA_APP_PATH` | Environment variable that custom scripts can use to locate app.jar after it's copied locally. | |
| `SKIP_JAVA_KEYSTORE_LOAD` | Set to `1` to prevent App Service from loading the certificates into the key store automatically. ||
-| `WEBSITE_JAVA_JAR_FILE_NAME` | The .jar file to use. Appends .jar if the string does not end in .jar. Defaults to app.jar ||
-| `WEBSITE_JAVA_WAR_FILE_NAME` | The .war file to use. Appends .war if the string does not end in .war. Defaults to app.war ||
+| `WEBSITE_JAVA_JAR_FILE_NAME` | The .jar file to use. Appends .jar if the string doesn't end in .jar. Defaults to app.jar ||
+| `WEBSITE_JAVA_WAR_FILE_NAME` | The .war file to use. Appends .war if the string doesn't end in .war. Defaults to app.war ||
| `JAVA_ARGS` | Java options required by different Java web servers. Defaults to `-noverify -Djava.net.preferIPv4Stack=true` || | `JAVA_WEBSERVER_PORT_ENVIRONMENT_VARIABLES` | Environment variables used by popular Java web frameworks for the server port. Some frameworks included are: Spring, Micronaut, Grails, MicroProfile Thorntail, Helidon, Ratpack, and Quarkus || | `JAVA_TMP_DIR` | Added to Java args as `-Dsite.tempdir`. Defaults to `TEMP`. ||
For more information on deployment slots, see [Set up staging environments in Az
|`WEBSITE_SWAP_WARMUP_PING_STATUSES`| Valid HTTP response codes for the warm-up operation during a swap. If the returned status code isn't in the list, the warmup and swap operations are stopped. By default, all response codes are valid. | `200,202` | | `WEBSITE_SLOT_NUMBER_OF_TIMEOUTS_BEFORE_RESTART` | During a slot swap, maximum number of timeouts after which we force restart the site on a specific VM instance. The default is `3`. || | `WEBSITE_SLOT_MAX_NUMBER_OF_TIMEOUTS` | During a slot swap, maximum number of timeout requests for a single URL to make before giving up. The default is `5`. ||
-| `WEBSITE_SKIP_ALL_BINDINGS_IN_APPHOST_CONFIG` | Set to `true` or `1` to skip all bindings in `applicationHost.config`. The default is `false`. If your app triggers a restart because `applicationHost.config` is updated with the swapped hostnames of th slots, set this variable to `true` to avoid a restart of this kind. If you are running a Windows Communication Foundation (WCF) app, do not set this variable. ||
+| `WEBSITE_SKIP_ALL_BINDINGS_IN_APPHOST_CONFIG` | Set to `true` or `1` to skip all bindings in `applicationHost.config`. The default is `false`. If your app triggers a restart because `applicationHost.config` is updated with the swapped hostnames of the slots, set this variable to `true` to avoid a restart of this kind. If you're running a Windows Communication Foundation (WCF) app, don't set this variable. ||
<!-- |`WEBSITE_SWAP_SLOTNAME`|||
For more information on custom containers, see [Run a custom container in Azure]
|-|-|-| | `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | Set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `true` for custom containers. || | `WEBSITES_CONTAINER_START_TIME_LIMIT` | Amount of time in seconds to wait for the container to complete start-up before restarting the container. Default is `230`. You can increase it up to the maximum of `1800`. ||
-| `DOCKER_REGISTRY_SERVER_URL` | URL of the registry server, when running a custom container in App Service. For security, this variable is not passed on to the container. | `https://<server-name>.azurecr.io` |
-| `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. ||
-| `DOCKER_REGISTRY_SERVER_PASSWORD` | Password to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. ||
+| `DOCKER_REGISTRY_SERVER_URL` | URL of the registry server, when running a custom container in App Service. For security, this variable isn't passed on to the container. | `https://<server-name>.azurecr.io` |
+| `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable isn't passed on to the container. ||
+| `DOCKER_REGISTRY_SERVER_PASSWORD` | Password to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable isn't passed on to the container. ||
| `DOCKER_ENABLE_CI` | Set to `true` to enable the continuous deployment for custom containers. The default is `false` for custom containers. || | `WEBSITE_PULL_IMAGE_OVER_VNET` | Connect and pull from a registry inside a Virtual Network or on-premises. Your app will need to be connected to a Virtual Network using VNet integration feature. This setting is also needed for Azure Container Registry with Private Endpoint. || | `WEBSITES_WEB_CONTAINER_NAME` | In a Docker Compose app, only one of the containers can be internet accessible. Set to the name of the container defined in the configuration file to override the default container selection. By default, the internet accessible container is the first container to define port 80 or 8080, or, when no such container is found, the first container defined in the configuration file. | |
-| `WEBSITES_PORT` | For a custom container, the custom port number on the container for App Service to route requests to. By default, App Service attempts automatic port detection of ports 80 and 8080. This setting is *not* injected into the container as an environment variable. ||
+| `WEBSITES_PORT` | For a custom container, the custom port number on the container for App Service to route requests to. By default, App Service attempts automatic port detection of ports 80 and 8080. This setting isn't injected into the container as an environment variable. ||
| `WEBSITE_CPU_CORES_LIMIT` | By default, a Windows container runs with all available cores for your chosen pricing tier. To reduce the number of cores, set to the number of desired cores limit. For more information, see [Customize the number of compute cores](configure-custom-container.md?pivots=container-windows#customize-the-number-of-compute-cores).|| | `WEBSITE_MEMORY_LIMIT_MB` | By default all Windows Containers deployed in Azure App Service are limited to 1 GB RAM. Set to the desired memory limit in MB. The cumulative total of this setting across apps in the same plan must not exceed the amount allowed by the chosen pricing tier. For more information, see [Customize container memory](configure-custom-container.md?pivots=container-windows#customize-container-memory). || | `CONTAINER_WINRM_ENABLED` | For a Windows containerized app, set to `1` to enable Windows Remote Management (WIN-RM). ||
WEBSITE_DISABLE_PRELOAD_HANG_MITIGATION
| `WEBSITE_INSTANCE_ID` | Read-only. Unique ID of the current VM instance, when the app is scaled out to multiple instances. | | `WEBSITE_IIS_SITE_NAME` | Deprecated. Use `WEBSITE_INSTANCE_ID`. | | `WEBSITE_DISABLE_OVERLAPPED_RECYCLING` | Overlapped recycling makes it so that before the current VM instance of an app is shut down, a new VM instance starts. In some cases, it can cause file locking issues. You can try turning it off by setting to `1`. |
-| `WEBSITE_DISABLE_CROSS_STAMP_SCALE` | By default, apps are allowed to scale across stamps if they use Azure Files or a Docker container. Set to `1` or `true` to disable cross-stamp scaling within the app's region. The default is `0`. Custom Docker containers that set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `true` or `1` cannot scale cross-stamps because their content is not completely encapsulated in the Docker container. |
+| `WEBSITE_DISABLE_CROSS_STAMP_SCALE` | By default, apps are allowed to scale across stamps if they use Azure Files or a Docker container. Set to `1` or `true` to disable cross-stamp scaling within the app's region. The default is `0`. Custom Docker containers that set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `true` or `1` can't scale cross-stamps because their content isn't completely encapsulated in the Docker container. |
## Logging
WEBSITE_DISABLE_PRELOAD_HANG_MITIGATION
| `DIAGNOSTICS_TEXTTRACEMAXLOGFILESIZEBYTES` | Maximum size of the log file in bytes. The default is `131072` (128 KB). || | `DIAGNOSTICS_TEXTTRACEMAXLOGFOLDERSIZEBYTES` | Maximum size of the log folder in bytes. The default is `1048576` (1 MB). || | `DIAGNOSTICS_TEXTTRACEMAXNUMLOGFILES` | Maximum number of log files to keep. The default is `20`. | |
-| `DIAGNOSTICS_TEXTTRACETURNOFFPERIOD` | Time-out in milliseconds to keep application logging enabled. The default is `43200000` (12 hours). ||
+| `DIAGNOSTICS_TEXTTRACETURNOFFPERIOD` | Timeout in milliseconds to keep application logging enabled. The default is `43200000` (12 hours). ||
| `WEBSITE_LOG_BUFFERING` | By default, log buffering is enabled. Set to `0` to disable it. ||
| `WEBSITE_ENABLE_PERF_MODE` | For native Windows apps, set to `TRUE` to turn off IIS log entries for successful requests returned within 10 seconds. This is a quick way to do performance benchmarking by removing extended logging. ||
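These diagnostics values are ordinary app settings, so a hedged sketch of adjusting one with the Azure CLI (placeholder resource group and app names) looks like this:

```azurecli
# Keep application logging enabled for 24 hours instead of the 12-hour default
az webapp config appsettings set --resource-group myGroup --name myApp \
    --settings DIAGNOSTICS_TEXTTRACETURNOFFPERIOD=86400000
```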
The following environment variables are related to [key vault references](app-se
| Setting name | Description |
|-|-|
| `WEBSITE_KEYVAULT_REFERENCES` | Read-only. Contains information (including statuses) for all Key Vault references that are currently configured in the app. |
-| `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` | If you set the shared storage connection of your app (using `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`) to a Key Vault reference, the app cannot resolve the key vault reference at app creation or update if one of the following conditions is true: <br/>- The app accesses the key vault with a system-assigned identity.<br/>- The app accesses the key vault with a user-assigned identity, and the key vault is [locked with a VNet](../key-vault/general/overview-vnet-service-endpoints.md).<br/>To avoid errors at create or update time, set this variable to `1`. |
-| `WEBSITE_DELAY_CERT_DELETION` | This env var can be set to 1 by users in order to ensure that a certificate that a worker process is dependent upon is not deleted until it exits. |
+| `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` | If you set the shared storage connection of your app (using `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`) to a Key Vault reference, the app can't resolve the key vault reference at app creation or update if one of the following conditions is true: <br/>- The app accesses the key vault with a system-assigned identity.<br/>- The app accesses the key vault with a user-assigned identity, and the key vault is [locked with a VNet](../key-vault/general/overview-vnet-service-endpoints.md).<br/>To avoid errors at create or update time, set this variable to `1`. |
+| `WEBSITE_DELAY_CERT_DELETION` | Set this environment variable to `1` to ensure that a certificate that a worker process depends on isn't deleted until the process exits. |
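For context, a Key Vault reference is just an app setting whose value uses the documented `@Microsoft.KeyVault(...)` syntax. A minimal sketch, assuming a hypothetical vault and secret, and that the app's managed identity can read the secret:

```azurecli
# Point an app setting at a secret stored in Key Vault
az webapp config appsettings set --resource-group myGroup --name myApp \
    --settings "MySecret=@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)"
```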
<!-- | `WEBSITE_ALLOW_DOUBLE_ESCAPING_URL` | TODO | -->

## CORS
The following environment variables are related to [App Service authentication](
| Setting name| Description|
|-|-|
-| `WEBSITE_AUTH_DISABLE_IDENTITY_FLOW` | When set to `true`, disables assigning the thread principal identity in ASP.NET-based web applications (including v1 Function Apps). This is designed to allow developers to protect access to their site with auth, but still have it use a separate login mechanism within their app logic. The default is `false`. |
-| `WEBSITE_AUTH_HIDE_DEPRECATED_SID` | `true` or `false`. The default value is `false`. This is a setting for the legacy Azure Mobile Apps integration for Azure App Service. Setting this to `true` resolves an issue where the SID (security ID) generated for authenticated users might change if the user changes their profile information. Changing this value may result in existing Azure Mobile Apps user IDs changing. Most apps do not need to use this setting. |
-| `WEBSITE_AUTH_NONCE_DURATION`| A _timespan_ value in the form `_hours_:_minutes_:_seconds_`. The default value is `00:05:00`, or 5 minutes. This setting controls the lifetime of the [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) generated for all browser-driven logins. If a login fails to complete in the specified time, the login flow will be retried automatically. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.nonceExpirationInterval` configuration value. |
-| `WEBSITE_AUTH_PRESERVE_URL_FRAGMENT` | When set to `true` and users click on app links that contain URL fragments, the login process will ensure that the URL fragment part of your URL does not get lost in the login redirect process. For more information, see [Customize sign-in and sign-out in Azure App Service authentication](configure-authentication-customize-sign-in-out.md#preserve-url-fragments). |
+| `WEBSITE_AUTH_DISABLE_IDENTITY_FLOW` | When set to `true`, disables assigning the thread principal identity in ASP.NET-based web applications (including v1 Function Apps). This is designed to allow developers to protect access to their site with auth, but still have it use a separate sign-in mechanism within their app logic. The default is `false`. |
+| `WEBSITE_AUTH_HIDE_DEPRECATED_SID` | `true` or `false`. The default value is `false`. This is a setting for the legacy Azure Mobile Apps integration for Azure App Service. Setting this to `true` resolves an issue where the SID (security ID) generated for authenticated users might change if the user changes their profile information. Changing this value may result in existing Azure Mobile Apps user IDs changing. Most apps don't need to use this setting. |
+| `WEBSITE_AUTH_NONCE_DURATION`| A _timespan_ value in the form `_hours_:_minutes_:_seconds_`. The default value is `00:05:00`, or 5 minutes. This setting controls the lifetime of the [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) generated for all browser-driven logins. If a sign-in fails to complete in the specified time, the sign-in flow will be retried automatically. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.nonceExpirationInterval` configuration value. |
+| `WEBSITE_AUTH_PRESERVE_URL_FRAGMENT` | When set to `true` and users select app links that contain URL fragments, the sign-in process will ensure that the URL fragment part of your URL doesn't get lost in the sign-in redirect process. For more information, see [Customize sign-in and sign-out in Azure App Service authentication](configure-authentication-customize-sign-in-out.md#preserve-url-fragments). |
| `WEBSITE_AUTH_USE_LEGACY_CLAIMS` | To maintain backward compatibility across upgrades, the authentication module uses the legacy claims mapping of short to long names in the `/.auth/me` API, so certain mappings are excluded (for example, "roles"). To get the more modern version of the claims mappings, set this variable to `False`. In the "roles" example, it would be mapped to the long claim name "http://schemas.microsoft.com/ws/2008/06/identity/claims/role". |
| `WEBSITE_AUTH_DISABLE_WWWAUTHENTICATE` | `true` or `false`. The default value is `false`. When set to `true`, removes the [`WWW-Authenticate`](https://developer.mozilla.org/docs/Web/HTTP/Headers/WWW-Authenticate) HTTP response header from module-generated HTTP 401 responses. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `identityProviders.azureActiveDirectory.login.disableWwwAuthenticate` configuration value. |
| `WEBSITE_AUTH_STATE_DIRECTORY` | A local file system directory path where tokens are stored when the file-based token store is enabled. The default value is `%HOME%\Data\.auth`. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.tokenStore.fileSystem.directory` configuration value. |
The following environment variables are related to [App Service authentication](
| `WEBSITE_AUTH_VALIDATE_NONCE`| `true` or `false`. The default value is `true`. This value should never be set to `false` except when temporarily debugging [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) validation failures that occur during interactive logins. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.validateNonce` configuration value. |
| `WEBSITE_AUTH_V2_CONFIG_JSON` | This environment variable is populated automatically by the Azure App Service platform and is used to configure the integrated authentication module. The value of this environment variable corresponds to the V2 (non-classic) authentication configuration for the current app in Azure Resource Manager. It's not intended to be configured explicitly. |
| `WEBSITE_AUTH_ENABLED` | Read-only. Injected into a Windows or Linux app to indicate whether App Service authentication is enabled. |
-| `WEBSITE_AUTH_ENCRYPTION_KEY` | By default, the automatically generated key is used as the encryption key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. If specified, it supercedes the `MACHINEKEY_DecryptionKey` setting. |
-| `WEBSITE_AUTH_SIGNING_KEY` | By default, the automatically generated key is used as the signing key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. If specified, it supercedes the `MACHINEKEY_ValidationKey` setting. |
+| `WEBSITE_AUTH_ENCRYPTION_KEY` | By default, the automatically generated key is used as the encryption key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. If specified, it supersedes the `MACHINEKEY_DecryptionKey` setting. |
+| `WEBSITE_AUTH_SIGNING_KEY` | By default, the automatically generated key is used as the signing key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. If specified, it supersedes the `MACHINEKEY_ValidationKey` setting. |
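To share tokens or sessions across two apps, one hedged approach is to generate a key once and apply the same value to both; the app and group names below are hypothetical:

```azurecli
# Generate one random key and set it on both apps so they can read
# each other's tokens and sessions
KEY=$(openssl rand -hex 32)
az webapp config appsettings set --resource-group myGroup --name app-one \
    --settings WEBSITE_AUTH_ENCRYPTION_KEY=$KEY
az webapp config appsettings set --resource-group myGroup --name app-two \
    --settings WEBSITE_AUTH_ENCRYPTION_KEY=$KEY
```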
<!-- System settings WEBSITE_AUTH_RUNTIME_VERSION
The following environment variables are related to [health checks](monitor-insta
| Setting name | Description |
|-|-|
-| `WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | The maximum number of failed pings before removing the instance. Set to a value between `2` and `100`. When you are scaling up or out, App Service pings the Health check path to ensure new instances are ready. For more information, see [Health check](monitor-instances-health-check.md).|
+| `WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | The maximum number of failed pings before removing the instance. Set to a value between `2` and `100`. When you're scaling up or out, App Service pings the Health check path to ensure new instances are ready. For more information, see [Health check](monitor-instances-health-check.md).|
| `WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | To avoid overwhelming healthy instances, no more than half of the instances will be excluded. For example, if an App Service Plan is scaled to four instances and three are unhealthy, at most two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. To override this behavior, set to a value between `0` and `100`. A higher value means more unhealthy instances will be removed. The default is `50` (50%). |

## Push notifications
The following environment variables are related to [WebJobs](webjobs-create.md).
|-|-|
| `WEBSITE_FUNCTIONS_ARMCACHE_ENABLED` | Set to `0` to disable the functions cache. |
| `WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+|`AzureWebJobsSecretStorageType` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
| `FUNCTIONS_EXTENSION_VERSION` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
-`AzureWebJobsSecretStorageType` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
| `FUNCTIONS_WORKER_RUNTIME` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
| `AzureWebJobsStorage` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
| `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
| `WEBSITE_CONTENTSHARE` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
| `WEBSITE_CONTENTOVERVNET` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
| `WEBSITE_ENABLE_BROTLI_ENCODING` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
-| `WEBSITE_USE_PLACEHOLDER` | Set to `0` to disable the placeholder functions optimization on the consumption plan. The placeholder is an optimization that [improves the cold start](../azure-functions/functions-scale.md#cold-start-behavior). |
+| `WEBSITE_USE_PLACEHOLDER` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
| `WEBSITE_PLACEHOLDER_MODE` | Read-only. Shows whether the function app is running on a placeholder host (`generalized`) or its own host (`specialized`). |
| `WEBSITE_DISABLE_ZIP_CACHE` | When your app runs from a [ZIP package](deploy-run-package.md) (`WEBSITE_RUN_FROM_PACKAGE=1`), the five most recently deployed ZIP packages are cached in the app's file system (D:\home\data\SitePackages). Set this variable to `1` to disable this cache. For Linux consumption apps, the ZIP package cache is disabled by default. |

<!--
application-gateway Self Signed Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/self-signed-certificates.md
Previously updated : 11/28/2022 Last updated : 01/27/2023
The following configuration is an example [NGINX server block](https://nginx.org
![Trusted root certificates](media/self-signed-certificates/trusted-root-cert.png)

> [!NOTE]
- > It's assumed that DNS has been configured to point the web server name (in this example, www.fabrikam.com) to your web server's IP address. If not, you can edit the [hosts file](https://answers.microsoft.com/en-us/windows/forum/all/how-to-edit-host-file-in-windows-10/7696f204-2aaf-4111-913b-09d6917f7f3d) to resolve the name.
+ > It's assumed that DNS has been configured to point the web server name (in this example, `www.fabrikam.com`) to your web server's IP address. If not, you can edit the [hosts file](https://answers.microsoft.com/en-us/windows/forum/all/how-to-edit-host-file-in-windows-10/7696f204-2aaf-4111-913b-09d6917f7f3d) to resolve the name.
1. Browse to your website, and select the lock icon on your browser's address box to verify the site and certificate information.

## Verify the configuration with OpenSSL
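A minimal sketch of such a check, assuming the server name configured earlier; `openssl s_client` prints the certificate chain and the verification result:

```bash
# Connect to the site and inspect the presented certificate;
# -servername sends the SNI value so the right certificate is returned
openssl s_client -connect www.fabrikam.com:443 -servername www.fabrikam.com
```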
application-gateway Tutorial Multiple Sites Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-powershell.md
# Create an application gateway that hosts multiple web sites using Azure PowerShell
-You can use Azure PowerShell to [configure the hosting of multiple web sites](multiple-site-overview.md) when you create an [application gateway](overview.md). In this article, you define backend address pools using virtual machines scale sets. You then configure listeners and rules based on domains that you own to make sure web traffic arrives at the appropriate servers in the pools. This article assumes that you own multiple domains and uses examples of *www.contoso.com* and *www.fabrikam.com*.
+You can use Azure PowerShell to [configure the hosting of multiple web sites](multiple-site-overview.md) when you create an [application gateway](overview.md). In this article, you define backend address pools using virtual machines scale sets. You then configure listeners and rules based on domains that you own to make sure web traffic arrives at the appropriate servers in the pools. This article assumes that you own multiple domains and uses examples of `www.contoso.com` and `www.fabrikam.com`.
In this article, you learn how to:
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
Title: Azure Automation Change Tracking and Inventory overview
description: This article describes the Change Tracking and Inventory feature, which helps you identify software and Microsoft service changes in your environment. Previously updated : 06/18/2021 Last updated : 01/18/2023
You can enable Change Tracking and Inventory in the following ways:
## Tracking file changes
-For tracking changes in files on both Windows and Linux, Change Tracking and Inventory uses MD5 hashes of the files. The feature uses the hashes to detect if changes have been made since the last inventory.
+For tracking changes in files on both Windows and Linux, Change Tracking and Inventory uses MD5 hashes of the files. The feature uses the hashes to detect if changes have been made since the last inventory. To track files on Linux, ensure that the OMS agent user has read access to the files you want to track.
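A hedged way to confirm that access on a Linux machine, assuming the agent runs as the `omsagent` user; the file path here is illustrative:

```bash
# Check whether the omsagent user can read a tracked file
sudo -u omsagent test -r /etc/myapp/app.conf && echo "readable" || echo "not readable"
```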
## Tracking file content changes
The default collection frequency for Windows services is 30 minutes. You can con
To optimize performance, the Log Analytics agent only tracks changes. Setting a high threshold might miss changes if the service returns to its original state. Setting the frequency to a smaller value allows you to catch changes that might be missed otherwise.
+For critical services, we recommend setting the **Startup** state to **Automatic (Delayed Start)** so that, after the VM reboots, service data collection starts after the MMA agent starts, rather than as soon as the VM is up.
+
> [!NOTE]
> While the agent can track changes down to a 10-second interval, the data still takes a few minutes to display in the Azure portal. Changes that occur during this delay are still tracked and logged.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
# What is Azure Arc resource bridge (preview)?
-Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/index.yml) preview), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](../system-center-virtual-machine-manager/index.yml) preview).
+Azure Arc resource bridge (preview) is a Microsoft-managed product that is part of the core Azure Arc platform. It's designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/index.yml) preview), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](../system-center-virtual-machine-manager/index.yml) preview).
-Arc resource bridge is a packaged virtual machine that hosts a *management* Kubernetes cluster and requires no user management. The virtual machine is deployed on the on-premises infrastructure, and an ARM resource of Arc resource bridge is created in Azure. The two resources are then connected, allowing VM self-service and management from Azure. The on-premises resource bridge uses guest management to tag local resources, making them available in Azure.
+Arc resource bridge is a packaged virtual machine that hosts a *management* Kubernetes cluster and requires minimal user management. The virtual machine is deployed on the on-premises infrastructure, and an ARM resource of Arc resource bridge is created in Azure. The two resources are then connected, allowing VM self-service and management from Azure. The on-premises resource bridge uses guest management to tag local resources, making them available in Azure.
Arc resource bridge delivers the following benefits:
Arc resource bridge delivers the following benefits:
* Designed to recover from software failures. * Supports deployment to any private cloud hosted on Hyper-V or VMware from the Azure portal or using the Azure Command-Line Interface (CLI).
-All management operations are performed from Azure, so no local configuration is required on the appliance.
## Overview
Custom locations and cluster extension are both Azure resources, which are linke
Some resources are unique to the infrastructure. For example, vCenter has a resource pool, network, and template resources. During VM creation, these resources need to be specified. With Azure Stack HCI, you just need to select the custom location, network and template to create a VM.
-To summarize, the Azure resources are projections of the resources running in your on-premises private cloud. If the on-premises resource is not healthy, it can impact the health of the related resources. For example, if the Arc resource bridge (preview) has been deleted by accident, all the resources hosted in the Arc resource bridge (preview) are impacted. That is, the custom locations and cluster extensions are deleted as a result. The actual VMs are not impacted, as they are running on vCenter, but the management path to those VMs is interrupted, and you won't be able to start or stop the VM from Azure. It is not recommended to manage or modify the Arc resource bridge (preview) using any on-premises applications directly.
+To summarize, the Azure resources are projections of the resources running in your on-premises private cloud. If the on-premises resource is not healthy, it can impact the health of the related resources that are projected in Azure. For example, if the resource bridge is deleted by accident, all the resources projected in Azure by the resource bridge are impacted. The on-premises VMs in your on-premises private cloud are not impacted, as they are running on vCenter, but you won't be able to start or stop the VMs from Azure. It is not recommended to directly manage or modify the resource bridge using any on-premises applications.
## Benefits of Azure Arc resource bridge (preview)
If you are deploying on Azure Stack HCI, the x32 Azure CLI installer can be used
### Supported regions
-Azure Arc resource bridge currently supports the following Azure regions:
+Arc resource bridge currently supports the following Azure regions:
* East US
* West Europe
While Azure has a number of redundancy features at every level of failure, if a
### Private cloud environments
-The following private cloud environments and their versions are officially supported for the Azure Arc resource bridge:
+The following private cloud environments and their versions are officially supported for Arc resource bridge:
-* VMware vSphere version 6.7
+* VMware vSphere versions 6.7 and 7.0
* Azure Stack HCI
+* SCVMM
### Required Azure permissions
-* To onboard the Arc resource bridge, you must have the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group.
-* To read, modify, and delete the Arc resource bridge, you must have the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group.
+* To onboard Arc resource bridge, you must have the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group.
+* To read, modify, and delete Arc resource bridge, you must have the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group.
### Networking
-The Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol.
+Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol.
You may need to allow specific URLs to [ensure outbound connectivity is not blocked](troubleshoot-resource-bridge.md#restricted-outbound-connectivity) by your firewall or proxy server.

## Next steps

* Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md).
-* Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+* Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 12/06/2022 Last updated : 01/26/2023
For example, if you specified the wrong location, or subscription during deploym
To resolve this issue, delete the appliance and update the appliance YAML file. Then redeploy and create the resource bridge.
-### Failure due to previous failed deployments
+### Failure due to previous deployments
-If an Arc resource bridge deployment fails, subsequent deployments may fail due to residual cached folders remaining on the machine.
+If Arc resource bridge is deployed multiple times, an old token or expired credentials left on the management machine may cause future deployments to fail.
-To prevent this from happening, be sure to run the `az arcappliance delete` command after any failed deployment. This command must be run with the latest `arcappliance` Azure CLI extension. To ensure that you have the latest version installed on your machine, run the following command:
+To prevent this from happening, be sure to run the `az arcappliance delete` command after any failed deployment, or to delete the current bridge before attempting another deployment. The delete command must be run with the latest `arcappliance` Azure CLI extension. To ensure that you have the latest version installed on your machine, run the following command:
```azurecli
az extension update --name arcappliance
```
-If the failed deployment is not successfully removed, residual cached folders may cause future Arc resource bridge deployments to fail. This may cause the error message `Unavailable desc = connection closed before server preface received` to surface when various `az arcappliance` commands are run, including `prepare` and `delete`.
+If all components of Arc resource bridge are not completely deleted, the residual token or expired credentials may cause future deployments to fail. When this is the case, the error message `Unavailable desc = connection closed before server preface received` may surface when various `az arcappliance` commands are run, including `prepare` and `delete`.
-To resolve this error, the .wssd\python and .wssd\kva folders in the user profile directory need to be deleted on the machine where the Arc resource bridge CLI commands are being run. You can delete these manually by navigating to the user profile directory (typically C:\Users\<username>), then deleting the .wssd\python and/or .wssd\kva folders. After they are deleted, try the command again.
+To resolve this error, the .wssd\python and .wssd\kva folders in the user profile directory need to be deleted from the management machine. You can delete these manually by navigating to the user profile directory (typically `C:\Users\<username>`), then deleting the `.wssd\python` and/or `.wssd\kva` folders. After they are deleted, retry the command that failed.
### Token refresh error
When using the `az arcappliance createConfig` or `az arcappliance run` command,
When the appliance is deployed to a host resource pool, there is no high availability if the host hardware fails. Because of this, we recommend that you don't try to deploy the appliance in a host resource pool.
+### Resource bridge status "Offline" and `provisioningState` "Failed"
+
+When deploying Arc resource bridge, the bridge may appear to be successfully deployed, because no errors were encountered when running `az arcappliance deploy` or `az arcappliance create`. However, when viewing the bridge in the Azure portal, you may see the status shown as **Offline**, and `az arcappliance show` may show the `provisioningState` as **Failed**. This happens when required providers aren't registered before the bridge is deployed.
+
+To resolve this problem, delete the resource bridge, register the providers, then redeploy the resource bridge.
+
+1. Delete the resource bridge:
+
+ ```azurecli
+ az arcappliance delete <fabric> --config-file <path to appliance.yaml>
+ ```
+
+1. Register the providers:
+
+ ```azurecli
+ az provider register --namespace Microsoft.ExtendedLocation --wait
+ az provider register --namespace Microsoft.ResourceConnector --wait
+ ```
+
+1. Redeploy the resource bridge.
+
+> [!NOTE]
+> Partner products (such as Arc-enabled VMware vSphere) may have their own required providers to register. To see additional providers that must be registered, see the product's documentation.
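To confirm the registrations completed before redeploying, a small sketch that queries their state:

```azurecli
# Both commands should print "Registered" before you redeploy the bridge
az provider show --namespace Microsoft.ExtendedLocation --query registrationState -o tsv
az provider show --namespace Microsoft.ResourceConnector --query registrationState -o tsv
```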
+
## Networking issues

### Restricted outbound connectivity
azure-arc Create Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/create-virtual-machine.md
Title: Create a virtual machine on System Center Virtual Machine Manager using Azure Arc (preview) description: This article helps you create a virtual machine using Azure portal (preview). Previously updated : 05/25/2022 Last updated : 01/27/2023 ms.+ keywords: "VMM, Arc, Azure"
azure-arc Enable Scvmm Inventory Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-scvmm-inventory-resources.md
Title: Enable SCVMM inventory resources in Azure Arc center (preview) description: This article helps you enable SCVMM inventory resources from Azure portal (preview) + Previously updated : 05/25/2022 Last updated : 01/27/2023 keywords: "VMM, Arc, Azure"
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Title: Overview of the Azure Connected System Center Virtual Machine Manager (preview) description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager (preview). Previously updated : 12/07/2022 Last updated : 01/27/2023 ms.+ keywords: "VMM, Arc, Azure"
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
description: In this QuickStart, you will learn how to use the helper script to
Previously updated : 12/07/2022
+ms.
+ Last updated : 01/27/2023
azure-functions Event Driven Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-driven-scaling.md
# Event-driven scaling in Azure Functions
-In the Consumption and Premium plans, Azure Functions scales CPU and memory resources by adding additional instances of the Functions host. The number of instances is determined on the number of events that trigger a function.
+In the Consumption and Premium plans, Azure Functions scales CPU and memory resources by adding more instances of the Functions host. The number of instances is determined on the number of events that trigger a function.
Each instance of the Functions host in the Consumption plan is limited to 1.5 GB of memory and one CPU. An instance of the host is the entire function app, meaning all functions within a function app share resource within an instance and scale at the same time. Function apps that share the same Consumption plan scale independently. In the Premium plan, the plan size determines the available memory and CPU for all apps in that plan on that instance.
-Function code files are stored on Azure Files shares on the function's main storage account. When you delete the main storage account of the function app, the function code files are deleted and cannot be recovered.
+Function code files are stored on Azure Files shares on the function's main storage account. When you delete the main storage account of the function app, the function code files are deleted and can't be recovered.
## Runtime scaling

Azure Functions uses a component called the *scale controller* to monitor the rate of events and determine whether to scale out or scale in. The scale controller uses heuristics for each trigger type. For example, when you're using an Azure Queue storage trigger, it scales based on the queue length and the age of the oldest queue message.
-The unit of scale for Azure Functions is the function app. When the function app is scaled out, additional resources are allocated to run multiple instances of the Azure Functions host. Conversely, as compute demand is reduced, the scale controller removes function host instances. The number of instances is eventually "scaled in" to zero when no functions are running within a function app.
+The unit of scale for Azure Functions is the function app. When the function app is scaled out, more resources are allocated to run multiple instances of the Azure Functions host. Conversely, as compute demand is reduced, the scale controller removes function host instances. The number of instances is eventually "scaled in" to zero when no functions are running within a function app.
![Scale controller monitoring events and creating instances](./media/functions-scale/central-listener.png)

## Cold Start
-After your function app has been idle for a number of minutes, the platform may scale the number of instances on which your app runs down to zero. The next request has the added latency of scaling from zero to one. This latency is referred to as a _cold start_. The number of dependencies required by your function app can impact the cold start time. Cold start is more of an issue for synchronous operations, such as HTTP triggers that must return a response. If cold starts are impacting your functions, consider running in a Premium plan or in a Dedicated plan with the **Always on** setting enabled.
+After your function app has been idle for a number of minutes, the platform may scale the number of instances on which your app runs down to zero. The next request has the added latency of scaling from zero to one. This latency is referred to as a _cold start_. The number of dependencies required by your function app can affect the cold start time. Cold start is more of an issue for synchronous operations, such as HTTP triggers that must return a response. If cold starts are impacting your functions, consider running in a Premium plan or in a Dedicated plan with the **Always on** setting enabled.
## Understanding scaling behaviors
-Scaling can vary on a number of factors, and scale differently based on the trigger and language selected. There are a few intricacies of scaling behaviors to be aware of:
+Scaling can vary based on several factors, and apps scale differently based on the trigger and language selected. There are a few intricacies of scaling behaviors to be aware of:
* **Maximum instances:** A single function app only scales out to a maximum of 200 instances. A single instance may process more than one message or request at a time though, so there isn't a set limit on the number of concurrent executions. You can [specify a lower maximum](#limit-scale-out) to throttle scale as required.
* **New instance rate:** For HTTP triggers, new instances are allocated, at most, once per second. For non-HTTP triggers, new instances are allocated, at most, once every 30 seconds. Scaling is faster when running in a [Premium plan](functions-premium-plan.md).
-* **Scale efficiency:** For Service Bus triggers, use _Manage_ rights on resources for the most efficient scaling. With _Listen_ rights, scaling isn't as accurate because the queue length can't be used to inform scaling decisions. To learn more about setting rights in Service Bus access policies, see [Shared Access Authorization Policy](../service-bus-messaging/service-bus-sas.md#shared-access-authorization-policies). For Event Hub triggers, see the [this scaling guidance](#event-hubs-trigger).
+* **Scale efficiency:** For Service Bus triggers, use _Manage_ rights on resources for the most efficient scaling. With _Listen_ rights, scaling isn't as accurate because the queue length can't be used to inform scaling decisions. To learn more about setting rights in Service Bus access policies, see [Shared Access Authorization Policy](../service-bus-messaging/service-bus-sas.md#shared-access-authorization-policies). For Event Hubs triggers, see [this scaling guidance](#event-hubs-triggers).
-## Limit scale out
+## Limit scale-out
You may wish to restrict the maximum number of instances an app uses to scale out. This is most common for cases where a downstream component like a database has limited throughput. By default, Consumption plan functions scale out to as many as 200 instances, and Premium plan functions scale out to as many as 100 instances. You can specify a lower maximum for a specific app by modifying the `functionAppScaleLimit` value, as shown in the sketch below. The `functionAppScaleLimit` can be set to `0` or `null` for unrestricted, or a valid value between `1` and the app maximum.
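The scale limit is a site property rather than an app setting; one way to set it, sketched here with placeholder resource names, is a generic resource update:

```azurecli
# Cap the function app at five scaled-out instances
az resource update --resource-type Microsoft.Web/sites \
    --resource-group myGroup --name myApp/config/web \
    --set properties.functionAppScaleLimit=5
```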
The following considerations apply for scale-in behaviors:
* For Consumption plan function apps running on Windows, only apps created after May 2021 have drain mode behaviors enabled by default.
* To enable graceful shutdown for functions using the Service Bus trigger, use version 4.2.0 or a later version of the [Service Bus Extension](functions-bindings-service-bus.md).
-## Event Hubs trigger
+## Event Hubs triggers
This section describes how scaling behaves when your function uses an [Event Hubs trigger](functions-bindings-event-hubs-trigger.md) or an [IoT Hub trigger](functions-bindings-event-iot-trigger.md). In these cases, each instance of an event triggered function is backed by a single [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance. The trigger (powered by Event Hubs) ensures that only one [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instance can get a lease on a given partition.
-For example, consider an Event Hub as follows:
+For example, consider an event hub as follows:
* 10 partitions
* 1,000 events distributed evenly across all partitions, with 100 messages in each partition
-When your function is first enabled, there is only one instance of the function. Let's call the first function instance `Function_0`. The `Function_0` function has a single instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) that holds a lease on all ten partitions. This instance is reading events from partitions 0-9. From this point forward, one of the following happens:
+When your function is first enabled, there's only one instance of the function. Let's call the first function instance `Function_0`. The `Function_0` function has a single instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) that holds a lease on all 10 partitions. This instance is reading events from partitions 0-9. From this point forward, one of the following happens:
* **New function instances are not needed**: `Function_0` is able to process all 1,000 events before the Functions scaling logic takes effect. In this case, all 1,000 messages are processed by `Function_0`.
-* **An additional function instance is added**: If the Functions scaling logic determines that `Function_0` has more messages than it can process, a new function app instance (`Function_1`) is created. This new function also has an associated instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor). As the underlying Event Hubs detect that a new host instance is trying read messages, it load balances the partitions across the host instances. For example, partitions 0-4 may be assigned to `Function_0` and partitions 5-9 to `Function_1`.
+* **An additional function instance is added**: If the Functions scaling logic determines that `Function_0` has more messages than it can process, a new function app instance (`Function_1`) is created. This new function also has an associated instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor). As the underlying event hub detects that a new host instance is trying to read messages, it load balances the partitions across the host instances. For example, partitions 0-4 may be assigned to `Function_0` and partitions 5-9 to `Function_1`.
* **N more function instances are added**: If the Functions scaling logic determines that both `Function_0` and `Function_1` have more messages than they can process, new `Functions_N` function app instances are created. Apps are created to the point where `N` is greater than the number of event hub partitions. In our example, Event Hubs again load balances the partitions, in this case across the instances `Function_0`...`Functions_9`.
-As scaling occurs, `N` instances is a number greater than the number of event hub partitions. This pattern is used to ensure [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instances are available to obtain locks on partitions as they become available from other instances. You are only charged for the resources used when the function instance executes. In other words, you are not charged for this over-provisioning.
+As scaling occurs, `N` can grow to a number greater than the number of event hub partitions. This pattern is used to ensure [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor) instances are available to obtain locks on partitions as they become available from other instances. You're only charged for the resources used when the function instance executes. In other words, you aren't charged for this over-provisioning.
When all function execution completes (with or without errors), checkpoints are added to the associated storage account. When check-pointing succeeds, all 1,000 messages are never retrieved again.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Title: App settings reference for Azure Functions
-description: Reference documentation for the Azure Functions app settings or environment variables.
+description: Reference documentation for the Azure Functions app settings or environment variables used to configure functions apps.
Previously updated : 10/04/2022 Last updated : 12/15/2022 # App settings reference for Azure Functions
-App settings in a function app contain configuration options that affect all functions for that function app. When you run locally, these settings are accessed as local [environment variables](functions-develop-local.md#local-settings-file). This article lists the app settings that are available in function apps.
+Application settings in a function app contain configuration options that affect all functions for that function app. These settings are accessed as environment variables. This article lists the app settings that are available in function apps.
[!INCLUDE [Function app settings](../../includes/functions-app-settings.md)]
-There are other function app configuration options in the [host.json](functions-host-json.md) file and in the [local.settings.json](functions-develop-local.md#local-settings-file) file.
-Example connection string values are truncated for readability.
+In this article, example connection string values are truncated for readability.
-> [!NOTE]
-> You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values). Changes to function app settings require your function app to be restarted.
+## App setting considerations
+
+When using app settings, you should be aware of the following considerations:
+
++ Changes to function app settings require your function app to be restarted.
+
++ In setting names, double-underscore (`__`) and semicolon (`:`) are considered reserved values. Double-underscores are interpreted as hierarchical delimiters on both Windows and Linux, and colons are interpreted in the same way only on Linux. For example, the setting `AzureFunctionsWebHost__hostid=somehost_123456` would be interpreted as the following JSON object:
+
+ ```json
+ "AzureFunctionsWebHost": {
+ "hostid": "somehost_123456"
+ }
+ ```
+
+ In this article, only double-underscores are used, since they're supported on both operating systems.
+
++ When Functions runs locally, app settings are specified in the `Values` collection in the [local.settings.json](functions-develop-local.md#local-settings-file).
+
++ There are other function app configuration options in the [host.json](functions-host-json.md) file and in the [local.settings.json](functions-develop-local.md#local-settings-file) file.
+
++ You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values).
+
++ This article documents the settings that are most relevant to your function apps. Because Azure Functions runs on App Service, other application settings may also be supported. For more information, see [Environment variables and app settings in Azure App Service](../app-service/reference-app-settings.md).
+
++ Some scenarios also require you to work with settings documented in [App Service site settings](#app-service-site-settings).
+
++ Changing any _read-only_ [App Service application settings](../app-service/reference-app-settings.md#app-environment) can put your function app into an unresponsive state.
+
++ Take care when updating application settings by using REST APIs, including ARM templates. Because these APIs replace the existing application settings, you must include all existing settings when adding or modifying settings using REST APIs or ARM templates. When possible, use Azure CLI or Azure PowerShell to programmatically work with application settings. For more information, see [Work with application settings](./functions-how-to-use-azure-function-app-settings.md#settings).

## APPINSIGHTS_INSTRUMENTATIONKEY
The instrumentation key for Application Insights. Don't use both `APPINSIGHTS_IN
|||
|APPINSIGHTS_INSTRUMENTATIONKEY|`55555555-af77-484b-9032-64f83bb83bb`|
+Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. Use of `APPLICATIONINSIGHTS_CONNECTION_STRING` is recommended.
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## APPLICATIONINSIGHTS_CONNECTION_STRING
-The connection string for Application Insights. When possible, use `APPLICATIONINSIGHTS_CONNECTION_STRING` instead of `APPINSIGHTS_INSTRUMENTATIONKEY`. Using `APPLICATIONINSIGHTS_CONNECTION_STRING` is required in the following cases:
+The connection string for Application Insights. Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. While the use of `APPLICATIONINSIGHTS_CONNECTION_STRING` is recommended in all cases, it's required in the following cases:
-+ When your function app requires the added customizations supported by using the connection string.
-+ When your Application Insights instance runs in a sovereign cloud, which requires a custom endpoint.
++ When your function app requires the added customizations supported by using the connection string.
++ When your Application Insights instance runs in a sovereign cloud, which requires a custom endpoint.

For more information, see [Connection strings](../azure-monitor/app/sdk-connection-string.md).
By default, [Functions proxies](functions-proxies.md) use a shortcut to send API
|Key|Value|Description|
|-|-|-|
|AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL|`true`|Calls with a backend URL pointing to a function in the local function app won't be sent directly to the function. Instead, the requests are directed back to the HTTP frontend for the function app.|
-|AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL|`false`|Calls with a backend URL pointing to a function in the local function app are forwarded directly to the function. This is the default value. |
+|AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL|`false`|Calls with a backend URL pointing to a function in the local function app are forwarded directly to the function. `false` is the default value. |
## AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES
-This setting controls whether the characters `%2F` are decoded as slashes in route parameters when they are inserted into the backend URL.
+This setting controls whether the characters `%2F` are decoded as slashes in route parameters when they're inserted into the backend URL.
|Key|Value|Description|
|-|-|-|
When `AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES` is set to `true`, the URL
## AZURE_FUNCTIONS_ENVIRONMENT
-In version 2.x and later versions of the Functions runtime, configures app behavior based on the runtime environment. This value is read during initialization, and can be set to any value. Only the values of `Development`, `Staging`, and `Production` are honored by the runtime. When this application setting isn't present when running in Azure, the environment is assumed to be `Production`. Use this setting instead of `ASPNETCORE_ENVIRONMENT` if you need to change the runtime environment in Azure to something other than `Production`. The Azure Functions Core Tools set `AZURE_FUNCTIONS_ENVIRONMENT` to `Development` when running on a local computer, and this can't be overridden in the local.settings.json file. To learn more, see [Environment-based Startup class and methods](/aspnet/core/fundamentals/environments#environment-based-startup-class-and-methods).
+In version 2.x and later versions of the Functions runtime, configures app behavior based on the runtime environment. This value is read during initialization, and can be set to any value. Only the values of `Development`, `Staging`, and `Production` are honored by the runtime. When this application setting isn't present when running in Azure, the environment is assumed to be `Production`. Use this setting instead of `ASPNETCORE_ENVIRONMENT` if you need to change the runtime environment in Azure to something other than `Production`. The Azure Functions Core Tools set `AZURE_FUNCTIONS_ENVIRONMENT` to `Development` when running on a local computer, and this setting can't be overridden in the local.settings.json file. To learn more, see [Environment-based Startup class and methods](/aspnet/core/fundamentals/environments#environment-based-startup-class-and-methods).
## AzureFunctionsJobHost__\*
In version 2.x and later versions of the Functions runtime, application settings
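As a sketch of this override pattern, a host.json path maps to a setting name by joining segments with double underscores; the log-level value and resource names here are illustrative:

```azurecli
# Override host.json logging.logLevel.default without editing host.json
az functionapp config appsettings set --resource-group myGroup --name myFuncApp \
    --settings "AzureFunctionsJobHost__logging__logLevel__default=Information"
```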
Sets the host ID for a given function app, which should be a unique ID. This setting overrides the automatically generated host ID value for your app. Use this setting only when you need to prevent host ID collisions between function apps that share the same storage account.
-A host ID must be between 1 and 32 characters, contain only lowercase letters, numbers, and dashes, not start or end with a dash, and not contain consecutive dashes. An easy way to generate an ID is to take a GUID, remove the dashes, and make it lower case, such as by converting the GUID `1835D7B5-5C98-4790-815D-072CC94C6F71` to the value `1835d7b55c984790815d072cc94c6f71`.
+A host ID must meet the following requirements:
+
++ Be between 1 and 32 characters
++ Contain only lowercase letters, numbers, and dashes
++ Not start or end with a dash
++ Not contain consecutive dashes
+
+An easy way to generate an ID is to take a GUID, remove the dashes, and make it lower case, such as by converting the GUID `1835D7B5-5C98-4790-815D-072CC94C6F71` to the value `1835d7b55c984790815d072cc94c6f71`.
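For instance, a quick shell sketch of that conversion:

```bash
# Produce a 32-character, lowercase, dash-free ID suitable for a host ID
uuidgen | tr -d '-' | tr '[:upper:]' '[:lower:]'
```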
|Key|Sample value|
|||
Optional storage account connection string for storing logs and displaying them
## AzureWebJobsDisableHomepage
-`true` means disable the default landing page that is shown for the root URL of a function app. Default is `false`.
+A value of `true` disables the default landing page that is shown for the root URL of a function app. The default value is `false`.
|Key|Sample value|
|||
When this app setting is omitted or set to `false`, a page similar to the follow
## AzureWebJobsFeatureFlags
-A comma-delimited list of beta features to enable. Beta features enabled by these flags are not production ready, but can be enabled for experimental use before they go live.
+A comma-delimited list of beta features to enable. Beta features enabled by these flags aren't production ready, but can be enabled for experimental use before they go live.
|Key|Sample value|
|||
Add `EnableProxies` to this list to re-enable proxies on version 4.x of the Func
## AzureWebJobsKubernetesSecretName
-Indicates the Kubernetes Secrets resource used for storing keys. Supported only when running in Kubernetes. Requires that `AzureWebJobsSecretStorageType` be set to `kubernetes`. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.
+Indicates the Kubernetes Secrets resource used for storing keys. Supported only when running in Kubernetes. This setting requires you to set `AzureWebJobsSecretStorageType` to `kubernetes`. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.
|Key|Sample value|
|||
To learn more, see [Secret repositories](security-concepts.md#secret-repositorie
## AzureWebJobsSecretStorageKeyVaultClientId
-The client ID of the user-assigned managed identity or the app registration used to access the vault where keys are stored. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`. Supported in version 4.x and later versions of the Functions runtime.
+The client ID of the user-assigned managed identity or the app registration used to access the vault where keys are stored. This setting requires you to set `AzureWebJobsSecretStorageType` to `keyvault`. Supported in version 4.x and later versions of the Functions runtime.
|Key|Sample value|
|||
To learn more, see [Secret repositories](security-concepts.md#secret-repositorie
## AzureWebJobsSecretStorageKeyVaultClientSecret
-The secret for client ID of the user-assigned managed identity or the app registration used to access the vault where keys are stored. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`. Supported in version 4.x and later versions of the Functions runtime.
+The secret for client ID of the user-assigned managed identity or the app registration used to access the vault where keys are stored. This setting requires you to set `AzureWebJobsSecretStorageType` to `keyvault`. Supported in version 4.x and later versions of the Functions runtime.
|Key|Sample value|
|||
To learn more, see [Secret repositories](security-concepts.md#secret-repositorie
## AzureWebJobsSecretStorageKeyVaultName
-The name of a key vault instance used to store keys. This setting is only supported for version 3.x of the Functions runtime. For version 4.x, instead use `AzureWebJobsSecretStorageKeyVaultUri`. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`.
+The name of a key vault instance used to store keys. This setting is only supported for version 3.x of the Functions runtime. For version 4.x, instead use `AzureWebJobsSecretStorageKeyVaultUri`. This setting requires you to set `AzureWebJobsSecretStorageType` to `keyvault`.
-The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`,`Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file).
+The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`, `Set`, `List`, and `Delete`. <br/>When your functions run locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file).
|Key|Sample value|
|||
To learn more, see [Secret repositories](security-concepts.md#secret-repositorie
## AzureWebJobsSecretStorageKeyVaultTenantId
-The tenant ID of the app registration used to access the vault where keys are stored. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`. Supported in version 4.x and later versions of the Functions runtime. To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
+The tenant ID of the app registration used to access the vault where keys are stored. This setting requires you to set `AzureWebJobsSecretStorageType` to `keyvault`. Supported in version 4.x and later versions of the Functions runtime. To learn more, see [Secret repositories](security-concepts.md#secret-repositories).
|Key|Sample value|
|||
The tenant ID of the app registration used to access the vault where keys are st
## AzureWebJobsSecretStorageKeyVaultUri
-The URI of a key vault instance used to store keys. Supported in version 4.x and later versions of the Functions runtime. This is the recommended setting for using a key vault instance for key storage. Requires that `AzureWebJobsSecretStorageType` be set to `keyvault`.
+The URI of a key vault instance used to store keys. Supported in version 4.x and later versions of the Functions runtime. This is the recommended setting for using a key vault instance for key storage. This setting requires you to set `AzureWebJobsSecretStorageType` to `keyvault`.
The `AzureWebJobsSecretStorageKeyVaultUri` value should be the full value of **Vault URI** displayed in the **Key Vault overview** tab, including `https://`.
-The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`,`Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file).
+The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`, `Set`, `List`, and `Delete`. <br/>When your functions run locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-develop-local.md#local-settings-file).
|Key|Sample value|
|||
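As a hedged sketch (the `<APP_NAME>`, `<RESOURCE_GROUP>`, and `<VAULT_NAME>` placeholders are assumptions you replace), switching an app to Key Vault-backed key storage might look like this:

```bash
# Point key storage at a Key Vault instance; both settings are required together
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings AzureWebJobsSecretStorageType=keyvault \
             "AzureWebJobsSecretStorageKeyVaultUri=https://<VAULT_NAME>.vault.azure.net/"
```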
Specifies the repository or provider to use for key storage. Keys are always enc
|Key| Value| Description|
||||
-|AzureWebJobsSecretStorageType|`blob`|Keys are stored in a Blob storage container in the account provided by the `AzureWebJobsStorage` setting. This is the default behavior when `AzureWebJobsSecretStorageType` isn't set.<br/>To specify a different storage account, use the `AzureWebJobsSecretStorageSas` setting to indicate the SAS URL of a second storage account. |
-|AzureWebJobsSecretStorageType | `files` | Keys are persisted on the file system. This is the default for Functions v1.x.|
+|AzureWebJobsSecretStorageType|`blob`|Keys are stored in a Blob storage container in the account provided by the `AzureWebJobsStorage` setting. Blob storage is the default behavior when `AzureWebJobsSecretStorageType` isn't set.<br/>To specify a different storage account, use the `AzureWebJobsSecretStorageSas` setting to indicate the SAS URL of a second storage account. |
+|AzureWebJobsSecretStorageType | `files` | Keys are persisted on the file system. This is the default behavior for Functions v1.x.|
|AzureWebJobsSecretStorageType |`keyvault` | Keys are stored in a key vault instance set by `AzureWebJobsSecretStorageKeyVaultName`. |
|AzureWebJobsSecretStorageType | `kubernetes` | Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The [Azure Functions Core Tools](functions-run-local.md) generates the values automatically when deploying to Kubernetes.|
Sets the shared memory size (in bytes) when the Python worker is using shared me
The value above sets a shared memory size of ~256 MB.
-Requires that [FUNCTIONS\_WORKER\_SHARED\_MEMORY\_DATA\_TRANSFER\_ENABLED](#functions_worker_shared_memory_data_transfer_enabled) be set to `1`.
+Requires that [`FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED`](#functions_worker_shared_memory_data_transfer_enabled) be set to `1`.
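As a sketch (the app and resource group names are placeholders you replace), setting ~256 MB of shared memory together with the required flag might look like this:

```bash
# 268435456 bytes = 256 MB; the shared-memory feature flag must also be enabled
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED=1 \
             DOCKER_SHM_SIZE=268435456
```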
+
+## ENABLE\_ORYX\_BUILD
+
+Indicates whether the [Oryx build system](https://github.com/microsoft/Oryx) is used during deployment. `ENABLE_ORYX_BUILD` must be set to `true` when doing remote build deployments to Linux. For more information, see [Remote build on Linux](functions-deployment-technologies.md#remote-build-on-linux).
+
+|Key|Sample value|
+|||
+|ENABLE_ORYX_BUILD|`true`|
## FUNCTION\_APP\_EDIT\_MODE
-Dictates whether editing in the Azure portal is enabled. Valid values are "readwrite" and "readonly".
+Dictates whether editing in the Azure portal is enabled. Valid values are `readwrite` and `readonly`.
|Key|Sample value|
|||
Dictates whether editing in the Azure portal is enabled. Valid values are "readw
## FUNCTIONS\_EXTENSION\_VERSION
-The version of the Functions runtime that hosts your function app. A tilde (`~`) with major version means use the latest version of that major version (for example, "~3"). When new versions for the same major version are available, they are automatically installed in the function app. To pin the app to a specific version, use the full version number (for example, "3.0.12345"). Default is "~3". A value of `~1` pins your app to version 1.x of the runtime. For more information, see [Azure Functions runtime versions overview](functions-versions.md). A value of `~4` means that your app runs on version 4.x of the runtime, which supports .NET 6.0.
+The version of the Functions runtime that hosts your function app. A tilde (`~`) with major version means use the latest version of that major version (for example, `~3`). When new versions for the same major version are available, they're automatically installed in the function app. To pin the app to a specific version, use the full version number (for example, `3.0.12345`). Default is `~3`. A value of `~1` pins your app to version 1.x of the runtime. For more information, see [Azure Functions runtime versions overview](functions-versions.md). A value of `~4` means that your app runs on version 4.x of the runtime, which supports .NET 6.0.
|Key|Sample value|
|||
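For example, a minimal Azure CLI sketch (placeholder names are assumptions) that pins an app to the 4.x runtime:

```bash
# Use ~4 to track the latest 4.x release, or a full version number to pin exactly
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings "FUNCTIONS_EXTENSION_VERSION=~4"
```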
This setting enables your function app to run in a version 2.x compatible mode o
>[!IMPORTANT]
> This setting is intended only as a short-term workaround while you update your app to run correctly on version 3.x. This setting is supported as long as the [2.x runtime is supported](functions-versions.md). If you encounter issues that prevent your app from running on version 3.x without using this setting, please [report your issue](https://github.com/Azure/azure-functions-host/issues/new?template=Bug_report.md).
-Requires that [FUNCTIONS\_EXTENSION\_VERSION](#functions_extension_version) be set to `~3`.
+You must also set [FUNCTIONS\_EXTENSION\_VERSION](functions-app-settings.md#functions_extension_version) to `~3`.
|Key|Sample value|
|||
|FUNCTIONS\_V2\_COMPATIBILITY\_MODE|`true`|
+## FUNCTIONS\_REQUEST\_BODY\_SIZE\_LIMIT
+
+Overrides the default limit on the body size of requests sent to HTTP endpoints. The value is given in bytes, with a default maximum request size of 104857600 bytes.
+
+|Key|Sample value|
+|||
+|FUNCTIONS\_REQUEST\_BODY\_SIZE\_LIMIT |`250000000`|
+
## FUNCTIONS\_WORKER\_PROCESS\_COUNT
-Specifies the maximum number of language worker processes, with a default value of `1`. The maximum value allowed is `10`. Function invocations are evenly distributed among language worker processes. Language worker processes are spawned every 10 seconds until the count set by FUNCTIONS\_WORKER\_PROCESS\_COUNT is reached. Using multiple language worker processes is not the same as [scaling](functions-scale.md). Consider using this setting when your workload has a mix of CPU-bound and I/O-bound invocations. This setting applies to all language runtimes, except for .NET running in process (`dotnet`).
+Specifies the maximum number of language worker processes, with a default value of `1`. The maximum value allowed is `10`. Function invocations are evenly distributed among language worker processes. Language worker processes are spawned every 10 seconds until the count set by FUNCTIONS\_WORKER\_PROCESS\_COUNT is reached. Using multiple language worker processes isn't the same as [scaling](functions-scale.md). Consider using this setting when your workload has a mix of CPU-bound and I/O-bound invocations. This setting applies to all language runtimes, except for .NET running in process (`dotnet`).
|Key|Sample value|
|||
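A hedged sketch (placeholder names are assumptions) that raises the worker process count:

```bash
# Spawn up to 4 language worker processes per host instance
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings FUNCTIONS_WORKER_PROCESS_COUNT=4
```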
This setting enables the Python worker to use shared memory to improve throughpu
With this setting enabled, you can use the [DOCKER_SHM_SIZE](#docker_shm_size) setting to set the shared memory size. To learn more, see [Shared memory](functions-reference-python.md#shared-memory).
+## JAVA_OPTS
+
+Used to customize the Java virtual machine (JVM) used to run your Java functions when running on a [Premium plan](./functions-premium-plan.md) or [Dedicated plan](./dedicated-plan.md). When running on a Consumption plan, instead use `languageWorkers__java__arguments`. For more information, see [Customize JVM](functions-reference-java.md#customize-jvm).
+
+## languageWorkers__java__arguments
+
+Used to customize the Java virtual machine (JVM) used to run your Java functions when running on a [Consumption plan](./consumption-plan.md). This setting increases the cold start times for Java functions running in a Consumption plan. For a Premium or Dedicated plan, instead use `JAVA_OPTS`. For more information, see [Customize JVM](functions-reference-java.md#customize-jvm).
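For example, a sketch for a Premium or Dedicated plan app (the names are placeholders, and the JVM flags shown are illustrative only):

```bash
# Pass custom JVM arguments to the Java worker
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings "JAVA_OPTS=-Xms512m -Xmx1024m"
```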
+
## MDMaxBackgroundUpgradePeriod

Controls the managed dependencies background update period for PowerShell function apps, with a default value of `7.00:00:00` (weekly).
To learn more, see [Dependency management](functions-reference-powershell.md#dep
## PIP\_INDEX\_URL
-This setting lets you override the base URL of the Python Package Index, which by default is `https://pypi.org/simple`. Use this setting when you need to run a remote build using custom dependencies that are found in a package index repository compliant with PEP 503 (the simple repository API) or in a local directory that follows the same format.
+This setting lets you override the base URL of the Python Package Index, which by default is `https://pypi.org/simple`. Use this setting when you need to run a remote build using custom dependencies. These custom dependencies can be in a package index repository compliant with PEP 503 (the simple repository API) or in a local directory that follows the same format.
|Key|Sample value|
|||
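As an illustration, a sketch that points remote builds at a private PEP 503 index (the index host and other names are hypothetical placeholders):

```bash
# Replace the default PyPI index with a private package index for remote builds
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings PIP_INDEX_URL=https://<PRIVATE_INDEX_HOST>/simple
```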
To learn more, see [`pip` documentation for `--index-url`](https://pip.pypa.io/e
## PIP\_EXTRA\_INDEX\_URL
-The value for this setting indicates an extra index URL for custom packages for Python apps, to use in addition to the `--index-url`. Use this setting when you need to run a remote build using custom dependencies that are found in an extra package index. Should follow the same rules as --index-url.
+The value for this setting indicates an extra index URL for custom packages for Python apps, to use in addition to the `--index-url`. Use this setting when you need to run a remote build using custom dependencies that are found in an extra package index. The value should follow the same rules as `--index-url`.
|Key|Sample value|
|||
The value for this setting indicates an extra index URL for custom packages for
To learn more, see [`pip` documentation for `--extra-index-url`](https://pip.pypa.io/en/stable/cli/pip_wheel/?highlight=index%20url#cmdoption-extra-index-url) and [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
-## PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES (Preview)
+## PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES
-The configuration is specific to Python function apps. It defines the prioritization of module loading order. When your Python function apps face issues related to module collision (e.g. when you're using protobuf, tensorflow, or grpcio in your project), configuring this app setting to `1` should resolve your issue. By default, this value is set to `0`. This flag is currently in Preview.
+The configuration is specific to Python function apps. It defines the prioritization of module loading order. By default, this value is set to `0`.
|Key|Value|Description|
||--|--|
-|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`0`| Prioritize loading the Python libraries from internal Python worker's dependencies. Third-party libraries defined in requirements.txt may be shadowed. |
+|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`0`| Prioritize loading the Python libraries from internal Python worker's dependencies, which is the default behavior. Third-party libraries defined in requirements.txt may be shadowed. |
|PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES|`1`| Prioritize loading the Python libraries from application's package defined in requirements.txt. This prevents your libraries from colliding with internal Python worker's libraries. |

## PYTHON_ENABLE_DEBUG_LOGGING
When debugging Python functions, make sure to also set a debug or trace [logging
## PYTHON\_ENABLE\_WORKER\_EXTENSIONS
-The configuration is specific to Python function apps. Setting this to `1` allows the worker to load in [Python worker extensions](functions-reference-python.md#python-worker-extensions) defined in requirements.txt. It enables your function app to access new features provided by third-party packages. It may also change the behavior of function load and invocation in your app. Please ensure the extension you choose is trustworthy as you bear the risk of using it. Azure Functions gives no express warranties to any extensions. For how to use an extension, please visit the extension's manual page or readme doc. By default, this value sets to `0`.
+The configuration is specific to Python function apps. Setting this to `1` allows the worker to load in [Python worker extensions](functions-reference-python.md#python-worker-extensions) defined in requirements.txt. It enables your function app to access new features provided by third-party packages. It may also change the behavior of function load and invocation in your app. Ensure the extension you choose is trustworthy as you bear the risk of using it. Azure Functions gives no express warranties to any extensions. For how to use an extension, visit the extension's manual page or readme doc. By default, this value sets to `0`.
|Key|Value|Description|
||--|--|
The configuration is specific to Python function apps. Setting this to `1` allow
## PYTHON\_THREADPOOL\_THREAD\_COUNT
-Specifies the maximum number of threads that a Python language worker would use to execute function invocations, with a default value of `1` for Python version `3.8` and below. For Python version `3.9` and above, the value is set to `None`. Note that this setting does not guarantee the number of threads that would be set during executions. The setting allows Python to expand the number of threads to the specified value. The setting only applies to Python functions apps. Additionally, the setting applies to synchronous functions invocation and not for coroutines.
+Specifies the maximum number of threads that a Python language worker would use to execute function invocations, with a default value of `1` for Python version `3.8` and below. For Python version `3.9` and above, the value is set to `None`. This setting doesn't guarantee the number of threads that would be set during executions. The setting allows Python to expand the number of threads to the specified value. The setting only applies to Python function apps. Additionally, the setting applies to synchronous function invocations, not to coroutines.
|Key|Sample value|Max value|
||||
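A minimal sketch (placeholder names are assumptions) that allows up to four threads per Python worker:

```bash
# Let the Python worker expand to at most 4 threads for synchronous invocations
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings PYTHON_THREADPOOL_THREAD_COUNT=4
```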
The value for this key is supplied in the format `<DESTINATION>:<VERBOSITY>`, wh
[!INCLUDE [functions-scale-controller-logging](../../includes/functions-scale-controller-logging.md)]
+## SCM\_DO\_BUILD\_DURING\_DEPLOYMENT
+
+Controls remote build behavior during deployment. When `SCM_DO_BUILD_DURING_DEPLOYMENT` is set to `true`, the project is built remotely during deployment.
+
+|Key|Sample value|
+|-|-|
+|SCM_DO_BUILD_DURING_DEPLOYMENT|`true`|
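For example, a sketch (placeholder names are assumptions) that enables a remote build for a Linux app; `ENABLE_ORYX_BUILD`, described earlier, is typically set alongside it:

```bash
# Build the project remotely during deployment on Linux
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true ENABLE_ORYX_BUILD=true
```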
+
## SCM\_LOGSTREAM\_TIMEOUT

Controls the timeout, in seconds, when connected to streaming logs. The default value is 7200 (2 hours).
Changing or removing this setting may cause your function app to not start. To l
The following considerations apply when using an Azure Resource Manager (ARM) template to create a function app during deployment:
-+ When you don't set a `WEBSITE_CONTENTSHARE` value for the main function app or any apps in slots, unique share values are generated for you. This is the recommended approach for an ARM template deployment.
++ When you don't set a `WEBSITE_CONTENTSHARE` value for the main function app or any apps in slots, unique share values are generated for you. Not setting `WEBSITE_CONTENTSHARE` is the recommended approach for an ARM template deployment.
+ There are scenarios where you must set the `WEBSITE_CONTENTSHARE` value to a predefined share, such as when you [use a secured storage account in a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). In this case, you must set a unique share name for the main function app and the app for each deployment slot.
+ Don't make `WEBSITE_CONTENTSHARE` a slot setting.
+ When you specify `WEBSITE_CONTENTSHARE`, the value must follow [this guidance for share names](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#share-names).
Sets the DNS server used by an app when resolving IP addresses. This setting is
Controls whether Brotli encoding is used for compression instead of the default gzip compression. When `WEBSITE_ENABLE_BROTLI_ENCODING` is set to `1`, Brotli encoding is used; otherwise gzip encoding is used.
+
+## WEBSITE_FUNCTIONS_ARMCACHE_ENABLED
+<!-- verify this info-->
+
+A value of `0` disables caching when you deploy function apps by using Azure Resource Manager (ARM) templates.
+
+|Key|Sample value|
+|||
| WEBSITE_FUNCTIONS_ARMCACHE_ENABLED| `0` |
+
+
## WEBSITE\_MAX\_DYNAMIC\_APPLICATION\_SCALE\_OUT

The maximum number of instances that the app can scale out to. Default is no limit.
Sets the version of Node.js to use when running your function app on Windows. Yo
|||
|WEBSITE\_NODE\_DEFAULT_VERSION|`~10`|
+## WEBSITE\_OVERRIDE\_STICKY\_DIAGNOSTICS\_SETTINGS
+
+When you perform [a slot swap](functions-deployment-slots.md#swap-slots) on a function app in a Premium plan, the swap can fail if the storage account associated with the function app is network restricted. This failure is caused by a legacy [application logging feature](../app-service/troubleshoot-diagnostic-logs.md#enable-application-logging-windows) that Functions and App Service share. This setting overrides that legacy logging feature and allows the swap to occur. Set it to `0` in the production slot and mark it as a deployment slot setting (also known as sticky), or add it to all slots to make sure that all version settings are also swapped.
+
+|Key|Sample value|
+|||
+|WEBSITE\_OVERRIDE\_STICKY\_DIAGNOSTICS\_SETTINGS|`0`|
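A hedged sketch (placeholder names are assumptions) that sets the value in the production slot and marks it sticky by using `--slot-settings`:

```bash
# --slot-settings marks the value as a deployment slot (sticky) setting
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --slot-settings WEBSITE_OVERRIDE_STICKY_DIAGNOSTICS_SETTINGS=0
```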
+
## WEBSITE\_OVERRIDE\_STICKY\_EXTENSION\_VERSIONS

By default, the version settings for function apps are specific to each slot. This setting is used when upgrading functions by using [deployment slots](functions-deployment-slots.md). This prevents unanticipated behavior due to changing versions after a swap. Set to `0` in production and in the slot to make sure that all version settings are also swapped. For more information, see [Upgrade using slots](migrate-version-3-version-4.md#upgrade-using-slots).
Enables your function app to run from a mounted package file.
|||
|WEBSITE\_RUN\_FROM\_PACKAGE|`1`|
-Valid values are either a URL that resolves to the location of a deployment package file, or `1`. When set to `1`, the package must be in the `d:\home\data\SitePackages` folder. When using zip deployment with this setting, the package is automatically uploaded to this location. In preview, this setting was named `WEBSITE_RUN_FROM_ZIP`. For more information, see [Run your functions from a package file](run-functions-from-deployment-package.md).
+Valid values are either a URL that resolves to the location of a deployment package file, or `1`. When set to `1`, the package must be in the `d:\home\data\SitePackages` folder. When you use zip deployment with `WEBSITE_RUN_FROM_PACKAGE` enabled, the package is automatically uploaded to this location. In preview, this setting was named `WEBSITE_RUN_FROM_ZIP`. For more information, see [Run your functions from a package file](run-functions-from-deployment-package.md).
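For example, a minimal sketch (placeholder names are assumptions) that enables running from the uploaded package file:

```bash
# A value of 1 runs the app from the package in d:\home\data\SitePackages
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings WEBSITE_RUN_FROM_PACKAGE=1
```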
## WEBSITE\_SKIP\_CONTENTSHARE\_VALIDATION
-The [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](#website_contentazurefileconnectionstring) and [WEBSITE_CONTENTSHARE](#website_contentshare) settings have additional validation checks to ensure that the app can be properly started. Creation of application settings will fail if the Function App cannot properly call out to the downstream Storage Account or Key Vault due to networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to `1`, the validation check is skipped; otherwise the value defaults to `0` and the validation will take place.
+The [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](#website_contentazurefileconnectionstring) and [WEBSITE_CONTENTSHARE](#website_contentshare) settings have extra validation checks to ensure that the app can be properly started. Creation of application settings will fail if the function app can't properly call out to the downstream Storage Account or Key Vault due to networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to `1`, the validation check is skipped; otherwise the value defaults to `0` and the validation will take place.
|Key|Sample value|
|||
|WEBSITE_SKIP_CONTENTSHARE_VALIDATION|`1`|
-If validation is skipped and either the connection string or content share are not valid, the app will be unable to start properly and will only serve HTTP 500 errors.
+If validation is skipped and either the connection string or content share isn't valid, the app won't be able to start properly. In this case, functions return HTTP 500 errors. For more information, see [Troubleshoot error: "Azure Functions Runtime is unreachable"](functions-recover-storage-account.md).
## WEBSITE\_SLOT\_NAME
Allows you to set the timezone for your function app.
[!INCLUDE [functions-timezone](../../includes/functions-timezone.md)]
+## WEBSITE\_USE\_PLACEHOLDER
+
+Indicates whether to use a specific [cold start](event-driven-scaling.md#cold-start) optimization when running on the [Consumption plan](consumption-plan.md). Set to `0` to disable the cold-start optimization on the Consumption plan.
+
+|Key|Sample value|
+|||
+|WEBSITE_USE_PLACEHOLDER|`1`|
+
## WEBSITE\_VNET\_ROUTE\_ALL

> [!IMPORTANT]
-> WEBSITE_VNET_ROUTE_ALL is a legacy app setting that has been replaced by the [vnetRouteAllEnabled configuration setting](../app-service/configure-vnet-integration-routing.md).
+> WEBSITE_VNET_ROUTE_ALL is a legacy app setting that has been replaced by the [vnetRouteAllEnabled](#vnetrouteallenabled) site setting.
Indicates whether all outbound traffic from the app is routed through the virtual network. A setting value of `1` indicates that all traffic is routed through the virtual network. You need this setting when using features of [Regional virtual network integration](functions-networking-options.md#regional-virtual-network-integration). It's also used when a [virtual network NAT gateway is used to define a static outbound IP address](functions-how-to-use-nat-gateway.md).
Indicates whether all outbound traffic from the app is routed through the virtua
## App Service site settings
-Some configurations must be maintained at the App Service level as site settings, such as language versions. These settings are usually set in the portal, by using REST APIs, or by using Azure CLI or Azure PowerShell. The following are site settings that could be required, depending on your runtime language, OS, and versions:
+Some configurations must be maintained at the App Service level as site settings, such as language versions. These settings are managed in the portal, by using REST APIs, or by using Azure CLI or Azure PowerShell. The following are site settings that could be required, depending on your runtime language, OS, and versions:
### linuxFxVersion
Sets the specific version of PowerShell on which your functions run. For more in
When running locally, you instead use the [`FUNCTIONS_WORKER_RUNTIME_VERSION`](functions-reference-powershell.md#running-local-on-a-specific-version) setting in the local.settings.json file.
+### vnetrouteallenabled
+
+Indicates whether all outbound traffic from the app is routed through the virtual network. A setting value of `1` indicates that all traffic is routed through the virtual network. You need this setting when using features of [Regional virtual network integration](functions-networking-options.md#regional-virtual-network-integration). It's also used when a [virtual network NAT gateway is used to define a static outbound IP address](functions-how-to-use-nat-gateway.md). For more information, see [Configure application routing](../app-service/configure-vnet-integration-routing.md#configure-application-routing).
+
+This site setting replaces the legacy [WEBSITE\_VNET\_ROUTE\_ALL](#website_vnet_route_all) setting.
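Because function apps are App Service sites, recent versions of the Azure CLI expose this site setting through the `az webapp config set` command; a hedged sketch (placeholder names are assumptions):

```bash
# Route all outbound traffic through the integrated virtual network
az webapp config set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --vnet-route-all-enabled true
```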
+
## Next steps

[Learn how to update app settings](functions-how-to-use-azure-function-app-settings.md#settings)
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Support for the SQL bindings extension is available in the 1.11.3b1 version of t
azure-functions==1.11.3b1
```
-Following setting the library version, update your application settings to [isolate the dependencies](./functions-app-settings.md#python_isolate_worker_dependencies-preview) by adding `PYTHON_ISOLATE_WORKER_DEPENDENCIES` with the value `1` to your application settings. Locally, this is set in the `local.settings.json` file as seen below:
+Following setting the library version, update your application settings to [isolate the dependencies](./functions-app-settings.md#python_isolate_worker_dependencies) by adding `PYTHON_ISOLATE_WORKER_DEPENDENCIES` with the value `1` to your application settings. Locally, this is set in the `local.settings.json` file as seen below:
```json
"PYTHON_ISOLATE_WORKER_DEPENDENCIES": "1"
azure-functions Functions Create First Quarkus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-quarkus.md
Title: Deploy Serverless Java Apps with Quarkus on Azure Functions
-description: Deploy Serverless Java Apps with Quarkus on Azure Functions
+ Title: Deploy serverless Java apps with Quarkus on Azure Functions
+description: Learn how to develop, build, and deploy a serverless Java app by using Quarkus on Azure Functions.
ms.devlang: java
-# Deploy Serverless Java Apps with Quarkus on Azure Functions
+# Deploy serverless Java apps with Quarkus on Azure Functions
-In this article, you'll develop, build, and deploy a serverless Java app with Quarkus on Azure Functions. This article uses Quarkus Funqy and its built-in support for Azure Functions HTTP trigger for Java. Using Quarkus with Azure Functions gives you the power of the Quarkus programming model with the scale and flexibility of Azure Functions. When you're finished, you'll run serverless [Quarkus](https://quarkus.io) applications on Azure Functions and continuing to monitor the application on Azure.
+In this article, you'll develop, build, and deploy a serverless Java app to Azure Functions by using [Quarkus](https://quarkus.io). This article uses Quarkus Funqy and its built-in support for the Azure Functions HTTP trigger for Java. Using Quarkus with Azure Functions gives you the power of the Quarkus programming model with the scale and flexibility of Azure Functions. When you finish, you'll run serverless Quarkus applications on Azure Functions and continue to monitor your app on Azure.
## Prerequisites
-* [Azure CLI](/cli/azure/overview), installed on your own computer.
-* [An Azure Account](https://azure.microsoft.com/)
-* [Java JDK 17](/azure/developer/java/fundamentals/java-support-on-azure) with JAVA_HOME configured appropriately. This article was written with Java 17 in mind, but Azure functions and Quarkus support older versions of Java as well.
-* [Apache Maven 3.8.1+](https://maven.apache.org)
+* The [Azure CLI](/cli/azure/overview) installed on your own computer.
+* An [Azure account](https://azure.microsoft.com/). [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* [Java JDK 17](/azure/developer/java/fundamentals/java-support-on-azure) with `JAVA_HOME` configured appropriately. This article was written with Java 17 in mind, but Azure Functions and Quarkus also support older versions of Java.
+* [Apache Maven 3.8.1+](https://maven.apache.org).
-## A first look at the sample application
-Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/quarkus-azure).
+## Create the app project
+
+Use the following command to clone the sample Java project for this article. The sample is on [GitHub](https://github.com/Azure-Samples/quarkus-azure).
```bash
git clone https://github.com/Azure-Samples/quarkus-azure
```
-Explore the sample function. Open the file *functions-quarkus/src/main/java/io/quarkus/GreetingFunction.java*. The `@Funq` annotation makes your method (e.g. `funqyHello`) a serverless function. Azure Functions Java has its own set of Azure-specific annotations, but these annotations are not necessary when using Quarkus on Azure Functions in a simple capacity as we're doing here. For more information on the Azure Functions Java annotations, see [Azure Functions Java developer guide](/azure/azure-functions/functions-reference-java).
+Explore the sample function. Open the *functions-quarkus/src/main/java/io/quarkus/GreetingFunction.java* file.
+
+In the following code, the `@Funq` annotation makes your method (in this case, `funqyHello`) a serverless function.
```java
@Funq
public String funqyHello() {
}
```
-Unless you specify otherwise, the function's name is taken to be same as the method name. You can also define the function name with a parameter to the annotation, as shown here.
+Azure Functions Java has its own set of Azure-specific annotations, but these annotations aren't necessary when you're using Quarkus on Azure Functions in a simple capacity as we're doing here. For more information about Azure Functions Java annotations, see the [Azure Functions Java developer guide](/azure/azure-functions/functions-reference-java).
+
+Unless you specify otherwise, the function's name is the same as the method name. You can also define the function name with a parameter to the annotation, as in the following code:
```java
@Funq("alternateName")
public String funqyHello() {
}
```
-The name is important: the name becomes a part of the REST URI to invoke the function, as shown later in the article.
+The name is important. It becomes a part of the REST URI to invoke the function, as shown later in the article.
-## Test the Serverless Function locally
+## Test the function locally
-Use `mvn` to run `Quarkus Dev mode` on your local terminal. Running Quarkus in this way enables live reload with background compilation. When you modify your Java files and/or your resource files and refresh your browser, these changes will automatically take effect.
+Use `mvn` to run Quarkus dev mode on your local terminal. Running Quarkus in this way enables live reload with background compilation. When you modify your Java files and/or your resource files and refresh your browser, these changes will automatically take effect.
-A browser refresh triggers a scan of the workspace. If any changes are detected, the Java files are recompiled and the application is redeployed. Your redeployed application services the request. If there are any issues with compilation or deployment an error page will let you know.
+A browser refresh triggers a scan of the workspace. If the scan detects any changes, the Java files are recompiled and the application is redeployed. Your redeployed application services the request. If there are any problems with compilation or deployment, an error page will let you know.
-Replace `yourResourceGroupName` with a resource group name. Function app names must be globally unique across all of Azure. Resource group names must be globally unique within a subscription. This article achieves the necessary uniqueness by prepending the resource group name to the function name. For this reason, consider prepending some unique identifier to any names you create that must be unique. A useful technique is to use your initials followed by today's date in `mmdd` format. The resourceGroup is not necessary for this part of the instructions, but it's required later. For simplicity, the maven project requires the property be defined.
+In the following procedure, replace `yourResourceGroupName` with a resource group name. Function app names must be globally unique across all of Azure. Resource group names must be globally unique within a subscription. This article achieves the necessary uniqueness by prepending the resource group name to the function name. Consider prepending a unique identifier to any names you create that must be unique. A useful technique is to use your initials followed by today's date in `mmdd` format.
-1. Invoke Quarkus dev mode.
+The resource group is not necessary for this part of the instructions, but it's required later. For simplicity, the Maven project requires you to define the property.
+
+1. Invoke Quarkus dev mode:
```bash
cd functions-azure
mvn -DskipTests -DresourceGroup=<yourResourceGroupName> quarkus:dev
```
- The output should look like this.
+ The output should look like this:
```output
...
Replace `yourResourceGroupName` with a resource group name. Function app names m
Press [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>
```
-1. Access the function using the `CURL` command on your local terminal.
+1. Access the function by using the `CURL` command on your local terminal:
```bash
curl localhost:8080/api/funqyHello
```
- The output should look like this.
+ The output should look like this:
```output
"hello funqy"
```
-### Add Dependency injection to function
+## Add dependency injection to the function
-Dependency injection in Quarkus is provided by the open standard technology Jakarta EE Contexts and Dependency Injection (CDI). For a high level overview on injection in general, and CDI in specific, see the [Jakarta EE tutorial](https://eclipse-ee4j.github.io/jakartaee-tutorial/#injection).
+The open-standard technology Jakarta EE Contexts and Dependency Injection (CDI) provides dependency injection in Quarkus. For a high-level overview of injection in general, and CDI specifically, see the [Jakarta EE tutorial](https://eclipse-ee4j.github.io/jakartaee-tutorial/#injection).
-1. Add a new function that uses dependency injection
+1. Add a new function that uses dependency injection.
- Create a *GreetingService.java* file in the *functions-quarkus/src/main/java/io/quarkus* directory. Make the source code of the file be the following.
+ Create a *GreetingService.java* file in the *functions-quarkus/src/main/java/io/quarkus* directory. Use the following code as the source code of the file:
```java
package io.quarkus;
Dependency injection in Quarkus is provided by the open standard technology Jaka
Save the file.
- `GreetingService` is an injectable bean that implements a `greeting()` method returning a string `Welcome...` message with a parameter `name`.
+ `GreetingService` is an injectable bean that implements a `greeting()` method. The method returns a `Welcome...` string message with a `name` parameter.
-1. Open the existing the *functions-quarkus/src/main/java/io/quarkus/GreetingFunction.java* file. Replace the class with the below code to add a new field `gService` and method `greeting`.
+1. Open the existing *functions-quarkus/src/main/java/io/quarkus/GreetingFunction.java* file. Replace the class with the following code to add a new `gService` field and the `greeting` method:
```java
package io.quarkus;
Dependency injection in Quarkus is provided by the open standard technology Jaka
Save the file.
-1. Access the new function `greeting` using the `CURL` command on your local terminal.
+1. Access the new `greeting` function by using the `curl` command on your local terminal:
```bash
curl -d '"Dan"' -X POST localhost:8080/api/greeting
```
- The output should look like this.
+ The output should look like this:
```output
"Welcome to build Serverless Java with Quarkus on Azure Functions, Dan"
```

> [!IMPORTANT]
- > `Live Coding` (also referred to as dev mode) allows you to run the app and make changes on the fly. Quarkus will automatically re-compile and reload the app when changes are made. This is a powerful and efficient style of developing that you'll use throughout the tutorial.
+ > Live Coding (also called dev mode) allows you to run the app and make changes on the fly. Quarkus will automatically recompile and reload the app when changes are made. This is a powerful and efficient style of developing that you'll use throughout this article.
- Before moving forward to the next step, stop Quarkus Dev Mode by pressing `CTRL-C`.
+ Before you move forward to the next step, stop Quarkus dev mode by selecting Ctrl+C.
-## Deploy the Serverless App to Azure Functions
+## Deploy the app to Azure
-1. If you haven't already, sign in to your Azure subscription by using the [az login](/cli/azure/reference-index) command and follow the on-screen directions.
+1. If you haven't already, sign in to your Azure subscription by using the following [az login](/cli/azure/reference-index) command and follow the on-screen directions:
```azurecli
az login
```

> [!NOTE]
- > If you've multiple Azure tenants associated with your Azure credentials, you must specify which tenant you want to sign in to. You can do this with the `--tenant` option. For example, `az login --tenant contoso.onmicrosoft.com`.
+ > If multiple Azure tenants are associated with your Azure credentials, you must specify which tenant you want to sign in to. You can do this by using the `--tenant` option. For example: `az login --tenant contoso.onmicrosoft.com`.
+ >
> Continue the process in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`.
- Once you've signed in successfully, the output on your local terminal should look similar to the following.
+ After you sign in successfully, the output on your local terminal should look similar to the following:
```output
xxxxxxx-xxxxx-xxxx-xxxxx-xxxxxxxxx 'Microsoft'
Dependency injection in Quarkus is provided by the open standard technology Jaka
]
```
-1. Build and deploy the functions to Azure
+1. Build and deploy the functions to Azure.
- The *pom.xml* you generated in the previous step uses the `azure-functions-maven-plugin`. Running `mvn install` generates config files and a staging directory required by the `azure-functions-maven-plugin`. For `yourResourceGroupName`, use the value you used previously.
+ The *pom.xml* file that you generated in the previous step uses `azure-functions-maven-plugin`. Running `mvn install` generates configuration files and a staging directory that `azure-functions-maven-plugin` requires. For `yourResourceGroupName`, use the value that you used previously.
```bash
mvn clean install -DskipTests -DtenantId=<your tenantId from shown previously> -DresourceGroup=<yourResourceGroupName> azure-functions:deploy
```
-1. During deployment, sign in to Azure. The `azure-functions-maven-plugin` is configured to prompt for Azure sign in each time the project is deployed. Examine the build output. During the build, you'll see output similar to the following.
+1. During deployment, sign in to Azure. The `azure-functions-maven-plugin` plug-in is configured to prompt for Azure sign-in each time the project is deployed. During the build, output similar to the following appears:
```output
[INFO] Auth type: DEVICE_CODE
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AXCWTLGMP to authenticate.
```
- Do as the output says and authenticate to Azure using the browser and provided device code. Many other authentication and configuration options are available. The complete reference documentation for `azure-functions-maven-plugin` is available at [Azure Functions: Configuration Details](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details).
+ Do as the output says and authenticate to Azure by using the browser and the provided device code. Many other authentication and configuration options are available. The complete reference documentation for `azure-functions-maven-plugin` is available at [Azure Functions: Configuration Details](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details).
-1. After authenticating, the build should continue and complete. The output should include `BUILD SUCCESS` near the end.
+1. After authentication, the build should continue and finish. Make sure that output includes `BUILD SUCCESS` near the end.
```output
Successfully deployed the artifact to https://quarkus-demo-123451234.azurewebsites.net
```
- You can also find the `URL` to trigger your function on Azure in the output log.
+ You can also find the URL to trigger your function on Azure in the output log:
```output
[INFO] HTTP Trigger Urls:
[INFO] quarkus : https://quarkus-azure-functions-http-archetype-20220629204040017.azurewebsites.net/api/{*path}
```
- It will take a while for the deployment to complete. In the meantime, let's explore Azure Functions in the portal.
-
-## Access and Monitor the Serverless Function on Azure
+ It will take a while for the deployment to finish. In the meantime, let's explore Azure Functions in the Azure portal.
-Sign in to the Portal and ensure you've selected the same tenant and subscription used in the Azure CLI. You can visit the portal at [https://aka.ms/publicportal](https://aka.ms/publicportal).
+## Access and monitor the serverless function on Azure
-1. Type `Function App` in the search bar at the top of the Azure portal and press Enter. Your function should be deployed and show up with the name `<yourResourceGroupName>-function-quarkus`.
+Sign in to [the portal](https://aka.ms/publicportal) and ensure that you've selected the same tenant and subscription that you used in the Azure CLI.
- :::image type="content" source="media/functions-create-first-quarkus/azure-function-app.png" alt-text="The function app in the portal":::
+1. Type **function app** on the search bar at the top of the Azure portal and select the Enter key. Your function app should be deployed and show up with the name `<yourResourceGroupName>-function-quarkus`.
- Select the `function name`. you'll see the function app's detail information such as **Location**, **Subscription**, **URL**, **Metrics**, and **App Service Plan**.
+ :::image type="content" source="media/functions-create-first-quarkus/azure-function-app.png" alt-text="Screenshot that shows the function app in the portal.":::
-1. In the detail page, select the `URL`.
+1. Select the function app to show detailed information, such as **Location**, **Subscription**, **URL**, **Metrics**, and **App Service Plan**. Then, select the **URL** value.
- :::image type="content" source="media/functions-create-first-quarkus/azure-function-app-detail.png" alt-text="The function app detail page in the portal":::
+ :::image type="content" source="media/functions-create-first-quarkus/azure-function-app-detail.png" alt-text="Screenshot that shows a URL and other function app details.":::
- Then, you'll see if your function is "up and running" now.
+1. Confirm that the welcome page says your function app is "up and running."
- :::image type="content" source="media/functions-create-first-quarkus/azure-function-app-ready.png" alt-text="The function welcome page":::
+ :::image type="content" source="media/functions-create-first-quarkus/azure-function-app-ready.png" alt-text="Screenshot that shows the welcome page for a function app.":::
-1. Invoke the `greeting` function using `CURL` command on your local terminal.
+1. Invoke the `greeting` function by using the following `curl` command on your local terminal.
> [!IMPORTANT]
- > Replace `YOUR_HTTP_TRIGGER_URL` with your own function URL that you find in Azure portal or output.
+ > Replace `YOUR_HTTP_TRIGGER_URL` with your own function URL that you find in the Azure portal or output.
```bash
curl -d '"Dan on Azure"' -X POST https://YOUR_HTTP_TRIGGER_URL/api/greeting
```
- The output should look similar to the following.
+ The output should look similar to the following:
```output
"Welcome to build Serverless Java with Quarkus on Azure Functions, Dan on Azure"
```
- You can also access the other function (`funqyHello`).
+ You can also access the other function (`funqyHello`) by using the following `curl` command:
```bash
curl https://YOUR_HTTP_TRIGGER_URL/api/funqyHello
```
- The output should be the same as you observed above.
+ The output should be the same as what you observed earlier:
```output
"hello funqy"
```
- If you want to exercise the basic metrics capability in the Azure portal, try invoking the function within a shell for loop, as shown here.
+ If you want to exercise the basic metrics capability in the Azure portal, try invoking the function within a shell `for` loop:
```bash
for i in {1..100}; do curl -d '"Dan on Azure"' -X POST https://YOUR_HTTP_TRIGGER_URL/api/greeting; done
```
- After a while, you'll see some metrics data in the portal, as shown next.
+ After a while, you'll see some metrics data in the portal.
- :::image type="content" source="media/functions-create-first-quarkus/portal-metrics.png" alt-text="Function metrics in the portal":::
+ :::image type="content" source="media/functions-create-first-quarkus/portal-metrics.png" alt-text="Screenshot that shows function metrics in the portal.":::
- Now that you've opened your Azure function in the portal, here are some more features accessible from the portal.
+Now that you've opened your Azure function in the portal, here are more features that you can access from the portal:
- * Monitor the performance of your Azure function. For more information, see [Monitoring Azure Functions](/azure/azure-functions/monitor-functions).
- * Explore telemetry. For more information, see [Analyze Azure Functions telemetry in Application Insights](/azure/azure-functions/analyze-telemetry-data).
- * Set up logging. For more information, see [Enable streaming execution logs in Azure Functions](/azure/azure-functions/streaming-logs).
+* Monitor the performance of your Azure function. For more information, see [Monitoring Azure Functions](/azure/azure-functions/monitor-functions).
+* Explore telemetry. For more information, see [Analyze Azure Functions telemetry in Application Insights](/azure/azure-functions/analyze-telemetry-data).
+* Set up logging. For more information, see [Enable streaming execution logs in Azure Functions](/azure/azure-functions/streaming-logs).
## Clean up resources
-If you don't need these resources, you can delete them by running the following command in the Cloud Shell or on your local terminal:
+If you don't need these resources, you can delete them by running the following command in Azure Cloud Shell or on your local terminal:
```azurecli
az group delete --name <yourResourceGroupName> --yes
az group delete --name <yourResourceGroupName> --yes
## Next steps
-In this guide, you learned how to:
+In this article, you learned how to:
> [!div class="checklist"] >
-> * Run Quarkus dev mode
-> * Deploy a Funqy app to Azure functions using the `azure-functions-maven-plugin`
-> * Examine the performance of the function in the portal
+> * Run Quarkus dev mode.
+> * Deploy a Funqy app to Azure functions by using `azure-functions-maven-plugin`.
+> * Examine the performance of the function in the portal.
-To learn more about Azure Functions and Quarkus, see the following articles and references.
+To learn more about Azure Functions and Quarkus, see the following articles and references:
* [Azure Functions Java developer guide](/azure/azure-functions/functions-reference-java)
* [Quickstart: Create a Java function in Azure using Visual Studio Code](/azure/azure-functions/create-first-function-vs-code-java)
azure-functions Functions How To Use Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-nat-gateway.md
Last updated 2/26/2021
# Tutorial: Control Azure Functions outbound IP with an Azure virtual network NAT gateway
-Virtual network address translation (NAT) simplifies outbound-only internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. An NAT can be useful for Azure Functions or Web Apps that need to consume a third-party service that uses an allowlist of IP address as a security measure. To learn more, see [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
+Virtual network address translation (NAT) simplifies outbound-only internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. A NAT can be useful for apps that need to consume a third-party service that uses an allowlist of IP addresses as a security measure. To learn more, see [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
This tutorial shows you how to use virtual network NATs to route outbound traffic from an HTTP triggered function. This function lets you check its own outbound IP address. During this tutorial, you'll:
Next, you create a function app in the [Premium plan](functions-premium-plan.md)
This tutorial shows you how to create your function app in a [Premium plan](functions-premium-plan.md). The same functionality is also available when using a [Dedicated (App Service) plan](dedicated-plan.md).

> [!NOTE]
-> For the best experience in this tutorial, choose .NET for runtime stack and choose Windows for operating system. Also, create you function app in the same region as your virtual network.
+> For the best experience in this tutorial, choose .NET for runtime stack and choose Windows for operating system. Also, create your function app in the same region as your virtual network.
[!INCLUDE [functions-premium-create](../../includes/functions-premium-create.md)]
You can now connect your function app to the virtual network.
1. Select **OK** to add the subnet. Close the **VNet Integration** and **Network Feature Status** pages to return to your function app page.
-The function app can now access the virtual network. Next, you'll add an HTTP-triggered function to the function app.
+The function app can now access the virtual network. When connectivity is enabled, the [`vnetrouteallenabled`](functions-app-settings.md#vnetrouteallenabled) site setting is set to `1`. You must have either this site setting or the legacy [`WEBSITE_VNET_ROUTE_ALL`](functions-app-settings.md#website_vnet_route_all) application setting set to `1`.
+
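If you need to set the legacy application setting yourself, a hedged Azure CLI sketch follows (the `<APP_NAME>` and `<RESOURCE_GROUP>` placeholders are assumptions you replace with your own values):

```bash
# Force outbound traffic through the integrated virtual network and NAT gateway
az functionapp config appsettings set \
  --name <APP_NAME> --resource-group <RESOURCE_GROUP> \
  --settings WEBSITE_VNET_ROUTE_ALL=1
```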
+Next, you'll add an HTTP-triggered function to the function app.
## <a name="create-function"></a>Create an HTTP trigger function
Now, let's create the NAT gateway. When you start with the [previous virtual net
Once the deployment completes, the NAT gateway is ready to route traffic from your function app subnet to the Internet.
-## Update function configuration
-
-Now, you must add an application setting `WEBSITE_VNET_ROUTE_ALL` set to a value of `1`. This setting forces outbound traffic through the virtual network and associated NAT gateway. Without this setting, internet traffic isn't routed through the integrated virtual network, and you'll see the same outbound IPs.
-
-1. Navigate to your function app in the Azure portal and select **Configuration** from the left-hand menu.
-
-1. Under **Application settings**, select **+ New application setting** and complete use the following values to fill out the fields:
-
- |Field Name |Value |
- |||
- |**Name** |WEBSITE_VNET_ROUTE_ALL|
- |**Value** |1|
-
-1. Select **OK** to close the new application setting dialog.
-
-1. Select **Save** and then **Continue** to save the settings.
-
-The function app's now configured to route traffic through its associated virtual network.
## Verify new outbound IPs

Repeat [the steps earlier](#verify-current-outbound-ips) to run the function again. You should now see the outbound IP address that you configured in the NAT shown in the function output.
You created resources to complete this tutorial. You'll be billed for these reso
## Next steps

> [!div class="nextstepaction"]
-> [Azure Functions networking options](functions-networking-options.md)
+> [Azure Functions networking options](functions-networking-options.md)
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
A function app has many child resources that you can use in your deployment, including app settings and source control options. You also might choose to remove the **sourcecontrols** child resource, and use a different [deployment option](functions-continuous-deployment.md) instead.
-> [!IMPORTANT]
-> To successfully deploy your application by using Azure Resource Manager, it's important to understand how resources are deployed in Azure. In the following example, top-level configurations are applied by using `siteConfig`. It's important to set these configurations at a top level, because they convey information to the Functions runtime and deployment engine. Top-level information is required before the child **sourcecontrols/web** resource is applied. Although it's possible to configure these settings in the child-level **config/appSettings** resource, in some cases your function app must be deployed *before* **config/appSettings** is applied. For example, when you're using functions with [Logic Apps](../logic-apps/index.yml), your functions are a dependency of another resource.
-
-# [Bicep](#tab/bicep)
-
-```bicep
-resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
- name: functionAppName
- location: location
- kind: 'functionapp'
- properties: {
- serverFarmId: hostingPlan.id
- siteConfig: {
- alwaysOn: true
- appSettings: [
- {
- name: 'FUNCTIONS_EXTENSION_VERSION'
- value: '~3'
- }
- {
- name: 'Project'
- value: 'src'
+Considerations for custom deployments:
++ To successfully deploy your application by using Azure Resource Manager, it's important to understand how resources are deployed in Azure. In the following example, top-level configurations are applied by using `siteConfig`. It's important to set these configurations at a top level, because they convey information to the Functions runtime and deployment engine. Top-level information is required before the child **sourcecontrols/web** resource is applied. Although it's possible to configure these settings in the child-level **config/appSettings** resource, in some cases your function app must be deployed *before* **config/appSettings** is applied. For example, when you're using functions with [Logic Apps](../logic-apps/index.yml), your functions are a dependency of another resource.
+ # [Bicep](#tab/bicep)
+
+ ```bicep
+ resource functionApp 'Microsoft.Web/sites@2022-03-01' = {
+ name: functionAppName
+ location: location
+ kind: 'functionapp'
+ properties: {
+ serverFarmId: hostingPlan.id
+ siteConfig: {
+ alwaysOn: true
+ appSettings: [
+ {
+ name: 'FUNCTIONS_EXTENSION_VERSION'
+ value: '~3'
+ }
+ {
+ name: 'Project'
+ value: 'src'
+ }
+ ]
}
+ }
+ dependsOn: [
+ storageAccount
] }
- }
- dependsOn: [
- storageAccount
- ]
-}
-
-resource config 'Microsoft.Web/sites/config@2022-03-01' = {
- parent: functionApp
- name: 'appsettings'
- properties: {
- AzureWebJobsStorage: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
- AzureWebJobsDashboard: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
- FUNCTIONS_EXTENSION_VERSION: '~3'
- FUNCTIONS_WORKER_RUNTIME: 'dotnet'
- Project: 'src'
- }
- dependsOn: [
- sourcecontrol
- storageAccount
- ]
-}
-
-resource sourcecontrol 'Microsoft.Web/sites/sourcecontrols@2022-03-01' = {
- parent: functionApp
- name: 'web'
- properties: {
- repoUrl: repoUrl
- branch: branch
- isManualIntegration: true
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-"resources": [
- {
- "type": "Microsoft.Web/sites",
- "apiVersion": "2022-03-01",
- "name": "[variables('functionAppName')]",
- "location": "[parameters('location')]",
- "kind": "functionapp",
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "alwaysOn": true,
- "appSettings": [
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
- {
- "name": "Project",
- "value": "src"
+
+ resource config 'Microsoft.Web/sites/config@2022-03-01' = {
+ parent: functionApp
+ name: 'appsettings'
+ properties: {
+ AzureWebJobsStorage: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ AzureWebJobsDashboard: 'DefaultEndpointsProtocol=https;AccountName=${storageAccountName};AccountKey=${storageAccount.listKeys().keys[0].value}'
+ FUNCTIONS_EXTENSION_VERSION: '~3'
+ FUNCTIONS_WORKER_RUNTIME: 'dotnet'
+ Project: 'src'
+ }
+ dependsOn: [
+ sourcecontrol
+ storageAccount
+ ]
+ }
+
+ resource sourcecontrol 'Microsoft.Web/sites/sourcecontrols@2022-03-01' = {
+ parent: functionApp
+ name: 'web'
+ properties: {
+ repoUrl: repoUrl
+ branch: branch
+ isManualIntegration: true
+ }
+ }
+ ```
+
+ # [JSON](#tab/json)
+
+ ```json
+ "resources": [
+ {
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2022-03-01",
+ "name": "[variables('functionAppName')]",
+ "location": "[parameters('location')]",
+ "kind": "functionapp",
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "siteConfig": {
+ "alwaysOn": true,
+ "appSettings": [
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "Project",
+ "value": "src"
+ }
+ ]
}
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Web/sites/config",
+ "apiVersion": "2022-03-01",
+ "name": "[format('{0}/{1}', variables('functionAppName'), 'appsettings')]",
+ "properties": {
+ "AzureWebJobsStorage": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]",
+ "AzureWebJobsDashboard": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]",
+ "FUNCTIONS_EXTENSION_VERSION": "~3",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+ "Project": "src"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
+ "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('functionAppName'), 'web')]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Web/sites/sourcecontrols",
+ "apiVersion": "2022-03-01",
+ "name": "[format('{0}/{1}', variables('functionAppName'), 'web')]",
+ "properties": {
+ "repoUrl": "[parameters('repoURL')]",
+ "branch": "[parameters('branch')]",
+ "isManualIntegration": true
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
] }
- },
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
]
- },
- {
- "type": "Microsoft.Web/sites/config",
- "apiVersion": "2022-03-01",
- "name": "[format('{0}/{1}', variables('functionAppName'), 'appsettings')]",
- "properties": {
- "AzureWebJobsStorage": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]",
- "AzureWebJobsDashboard": "[format('DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}', variables('storageAccountName'), listKeys(variables('storageAccountName'), '2021-09-01').keys[0].value)]",
- "FUNCTIONS_EXTENSION_VERSION": "~3",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
- "Project": "src"
- },
- "dependsOn": [
- "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]",
- "[resourceId('Microsoft.Web/sites/sourcecontrols', variables('functionAppName'), 'web')]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ]
- },
- {
- "type": "Microsoft.Web/sites/sourcecontrols",
- "apiVersion": "2022-03-01",
- "name": "[format('{0}/{1}', variables('functionAppName'), 'web')]",
- "properties": {
- "repoUrl": "[parameters('repoURL')]",
- "branch": "[parameters('branch')]",
- "isManualIntegration": true
- },
- "dependsOn": [
- "[resourceId('Microsoft.Web/sites', variables('functionAppName'))]"
- ]
- }
-]
-```
+ ```
+
+
++ The previous Bicep file and ARM template use the [Project](https://github.com/projectkudu/kudu/wiki/Customizing-deployments#using-app-settings-instead-of-a-deployment-file) application settings value, which sets the base directory in which the Functions deployment engine (Kudu) looks for deployable code. In our repository, our functions are in a subfolder of the **src** folder. So, in the preceding example, we set the app settings value to `src`. If your functions are in the root of your repository, or if you're not deploying from source control, you can remove this app settings value.
-> [!TIP]
-> This Bicep/ARM template uses the [Project](https://github.com/projectkudu/kudu/wiki/Customizing-deployments#using-app-settings-instead-of-a-deployment-file) app settings value, which sets the base directory in which the Functions deployment engine (Kudu) looks for deployable code. In our repository, our functions are in a subfolder of the **src** folder. So, in the preceding example, we set the app settings value to `src`. If your functions are in the root of your repository, or if you're not deploying from source control, you can remove this app settings value.
++ When updating application settings by using Bicep or an ARM template, make sure that you include all existing settings, because the underlying REST API calls replace the entire collection of application settings. A sketch of this read-merge-write pattern follows.
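The following is a hedged sketch of that read-merge-write habit with the Az.Functions PowerShell module; the app name, resource group, and the new setting are placeholders rather than values from the preceding templates.

```powershell
# Read the complete set of existing application settings (returned as a hashtable),
# merge in the change, and write the full collection back in one update.
$existing = Get-AzFunctionAppSetting -Name myfunctionapp -ResourceGroupName myResourceGroup
$existing["MY_NEW_SETTING"] = "value"   # hypothetical new setting
Update-AzFunctionAppSetting -Name myfunctionapp -ResourceGroupName myResourceGroup -AppSetting $existing
```

## Deploy your template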
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md
Title: 'Troubleshoot error: Azure Functions Runtime is unreachable' description: Learn how to troubleshoot an invalid storage account. Previously updated : 12/13/2022 Last updated : 12/15/2022 # Troubleshoot error: "Azure Functions Runtime is unreachable"
By default, the container in which your function app runs uses port `:80`. When
Starting with version 3.x of the Functions runtime, [host ID collisions](storage-considerations.md#host-id-considerations) are detected and logged as a warning. In version 4.x, an error is logged and the host is stopped. If the runtime can't start for your function app, [review the logs](analyze-telemetry-data.md). If there's a warning or an error about host ID collisions, follow the mitigation steps in [Host ID considerations](storage-considerations.md#host-id-considerations).
+## Read-only app settings
+
+Changing any _read-only_ [App Service application settings](../app-service/reference-app-settings.md#app-environment) can put your function app into an unreachable state.
+ ## Next steps Learn about monitoring your function apps:
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
The following table shows the PowerShell versions available to each major versio
| Functions version | PowerShell version | .NET version | |-|--||
-| 4.x (recommended) | PowerShell 7.2 (recommended) | .NET 6 |
+| 4.x | PowerShell 7.2 | .NET 6 |
You can see the current version by printing `$PSVersionTable` from any function.
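For example, a minimal run.ps1 sketch that logs the version table (the parameter names assume the default HTTP trigger template):

```powershell
# run.ps1 - log the PowerShell version that the worker is running on.
param($Request, $TriggerMetadata)

Write-Host ($PSVersionTable | Out-String)
```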
To learn more about Azure Functions runtime support policy, please refer to this
### Running local on a specific version
-Support for PowerShell 7.0 in Azure Functions ended on 3 December 2022. To use PowerShell 7.2 when running locally, you need to add the setting `"FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.2"` to the `Values` array in the local.setting.json file in the project root. When running locally on PowerShell 7.2, your local.settings.json file looks like the following example:
+Support for PowerShell 7.0 in Azure Functions ended on 3 December 2022. To use PowerShell 7.2 when running locally, you need to add the setting `"FUNCTIONS_WORKER_RUNTIME_VERSION" : "7.2"` to the `Values` array in the local.settings.json file in the project root. When running locally on PowerShell 7.2, your local.settings.json file looks like the following example:
```json {
Support for PowerShell 7.0 in Azure Functions ended on 3 December 2022. To use P
} ```
+> [!NOTE]
+> In PowerShell function apps, the value "~7" for `FUNCTIONS_WORKER_RUNTIME_VERSION` refers to "7.0.x". PowerShell function apps that have "~7" aren't automatically upgraded to "7.2". Going forward, PowerShell function apps must specify both the major and minor version that they want to target. So, to target "7.2.x", you must specify "7.2".
+ ### Changing the PowerShell version
-Support for PowerShell 7.0 in Azure Functions ended on 3 December 2022. Your function app must be running on version 4.x to be able to upgrade to PowerShell 7.2. To learn how to do this, see [View and update the current runtime version](set-runtime-version.md#view-and-update-the-current-runtime-version).
+Support for PowerShell 7.0 in Azure Functions ended on 3 December 2022. To upgrade your function app to PowerShell 7.2, ensure that the value of `FUNCTIONS_EXTENSION_VERSION` is set to `~4`. To learn how to do this, see [View and update the current runtime version](set-runtime-version.md#view-and-update-the-current-runtime-version).
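For example, a hedged sketch that applies both settings with the Az.Functions PowerShell module (the app and resource group names are placeholders):

```powershell
# Pin the Functions runtime to 4.x and explicitly target PowerShell 7.2.
Update-AzFunctionAppSetting -Name myfunctionapp -ResourceGroupName myResourceGroup -AppSetting @{
    "FUNCTIONS_EXTENSION_VERSION"      = "~4"
    "FUNCTIONS_WORKER_RUNTIME_VERSION" = "7.2"
}
```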
Use the following steps to change the PowerShell version used by your function app. You can do this either in the Azure portal or by using PowerShell.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
The Azure Functions Python worker requires a specific set of libraries. You can
> If your function app's *requirements.txt* file contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by the Azure Functions platform, and we regularly update it with new features and bug fixes. Manually installing an old version of worker in the *requirements.txt* file might cause unexpected issues. > [!NOTE]
-> If your package contains certain libraries that might collide with worker's dependencies (for example, protobuf, tensorflow, or grpcio), configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring to worker's dependencies. This feature is in preview.
+> If your package contains certain libraries that might collide with worker's dependencies (for example, protobuf, tensorflow, or grpcio), configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies) to `1` in app settings to prevent your application from referring to worker's dependencies. This feature is in preview.
### The Azure Functions Python library
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md
To identify the actual cause of your issue, you need to get the Python project f
* If the function app has a `WEBSITE_RUN_FROM_PACKAGE` app setting and its value is a URL, download the file by copying and pasting the URL into your browser. * If the function app has `WEBSITE_RUN_FROM_PACKAGE` and it's set to `1`, go to `https://<app-name>.scm.azurewebsites.net/api/vfs/data/SitePackages` and download the file from the latest `href` URL. * If the function app doesn't have either of the preceding app settings, go to `https://<app-name>.scm.azurewebsites.net/api/settings` and find the URL under `SCM_RUN_FROM_PACKAGE`. Download the file by copying and pasting the URL into your browser.
-* If none of these suggestions resolves the issue, go to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and view the content under `/home/site/wwwroot`.
+* If none of these suggestions resolves the issue, go to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and view the content under `/home/site/wwwroot`.
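If you need to script the package download, the following is a hedged sketch against the Kudu VFS API; it assumes basic-auth deployment credentials and that the directory listing exposes `href` and `mtime` fields.

```powershell
# List the site packages over the Kudu VFS API and download the most recently modified entry.
$cred = Get-Credential   # the app's deployment (basic auth) credentials
$listing = Invoke-RestMethod -Uri "https://<app-name>.scm.azurewebsites.net/api/vfs/data/SitePackages/" -Credential $cred
$latest = ($listing | Sort-Object mtime -Descending | Select-Object -First 1).href
Invoke-WebRequest -Uri $latest -Credential $cred -OutFile (Split-Path $latest -Leaf)
```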
The rest of this article helps you troubleshoot potential causes of this error by inspecting your function app's content, identifying the root cause, and resolving the specific issue.
Example error logs:
You can mitigate this issue in either of two ways:
-* Set the application setting [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies-preview) to a value of `1`.
+* Set the application setting [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies) to a value of `1`.
* Pin Protobuf to a non-4.x.x version, as in the following example:
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
You can use the following strategies to avoid host ID collisions:
You can explicitly set a specific host ID for your function app in the application settings by using the `AzureFunctionsWebHost__hostid` setting. For more information, see [AzureFunctionsWebHost__hostid](functions-app-settings.md#azurefunctionswebhost__hostid).
-When the collision occurs between slots, you may need to mark this setting as a slot setting. To learn how to create app settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+When the collision occurs between slots, you must set a specific host ID for each slot, including the production slot. You must also mark these settings as [deployment settings](functions-deployment-slots.md#create-a-deployment-setting) so they don't get swapped. To learn how to create app settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
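The following is a hedged sketch of that mitigation with Az PowerShell; the host ID values, app name, slot name, and resource group are placeholders.

```powershell
# Give production its own explicit host ID.
Update-AzFunctionAppSetting -Name myfunctionapp -ResourceGroupName myResourceGroup -AppSetting @{ "AzureFunctionsWebHost__hostid" = "myapp-prod" }

# Read the slot's current settings, merge in a distinct host ID, and write them back
# (the -AppSettings parameter replaces the slot's whole settings collection).
$slot = Get-AzWebAppSlot -ResourceGroupName myResourceGroup -Name myfunctionapp -Slot staging
$settings = @{}
foreach ($s in $slot.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }
$settings["AzureFunctionsWebHost__hostid"] = "myapp-staging"
Set-AzWebAppSlot -ResourceGroupName myResourceGroup -Name myfunctionapp -Slot staging -AppSettings $settings

# Pin the setting to its slot so a swap doesn't move it.
Set-AzWebAppSlotConfigName -ResourceGroupName myResourceGroup -Name myfunctionapp -AppSettingNames "AzureFunctionsWebHost__hostid"
```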
## Azure Arc-enabled clusters
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
Heartbeat
``` ### Verify that IIS logs are being created
-Look at the timestamps of the log files and open the latest to see that latest timestamps are present in the log files. The default location for IIS log files is C:\\inetpub\\LogFiles\\W3SVC1.
+Look at the timestamps of the log files, and open the latest one to verify that the most recent timestamps are present. The default location for IIS log files is C:\\inetpub\\logs\\LogFiles\\W3SVC1.
:::image type="content" source="media/data-collection-text-log/iis-log-timestamp.png" lightbox="media/data-collection-text-log/iis-log-timestamp.png" alt-text="Screenshot of an IIS log, showing the timestamp.":::
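A quick PowerShell sketch to surface the newest log file (assuming the default path for the first site):

```powershell
# Show the most recently written IIS log file and its timestamp.
Get-ChildItem 'C:\inetpub\logs\LogFiles\W3SVC1' -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 Name, LastWriteTime
```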
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
These handpicked alerts come from the Prometheus community. Source code for thes
### Recommended alert rules The following table lists the recommended alert rules that you can enable for either Prometheus metrics or custom metrics.
+Source code for the recommended alerts can be found on [GitHub](https://github.com/Azure/prometheus-collector/blob/68ab5b195a77d72b0b8e36e5565b645c3d1e2d5d/mixins/kubernetes/rules/recording_and_alerting_rules/templates/ci_recommended_alerts.json):
| Prometheus alert name | Custom metric alert name | Description | Default threshold | |:|:|:|:|
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
After a few moments, the new setting appears in your list of settings for this r
# [PowerShell](#tab/powershell)
-Use the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) cmdlet to create a diagnostic setting with [Azure PowerShell](../powershell-samples.md). See the documentation for this cmdlet for descriptions of its parameters.
+Use the [New-AzDiagnosticSetting](/powershell/module/az.monitor/new-azdiagnosticsetting?view=azps-9.1.0&preserve-view=true) cmdlet to create a diagnostic setting with [Azure PowerShell](../powershell-samples.md). See the documentation for this cmdlet for descriptions of its parameters.
> [!IMPORTANT] > You can't use this method for an activity log. Instead, use [Create diagnostic setting in Azure Monitor by using an Azure Resource Manager template](./resource-manager-diagnostic-settings.md) to create a Resource Manager template and deploy it with PowerShell.
-The following example PowerShell cmdlet creates a diagnostic setting by using all three destinations.
+The following example PowerShell commands create a diagnostic setting that sends all logs and metrics for a key vault to a Log Analytics workspace.
```powershell
-Set-AzDiagnosticSetting -Name KeyVault-Diagnostics -ResourceId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault -Category AuditEvent -MetricCategory AllMetrics -Enabled $true -StorageAccountId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount -WorkspaceId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/oi-default-east-us/providers/microsoft.operationalinsights/workspaces/myworkspace -EventHubAuthorizationRuleId /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey
+$KV= Get-AzKeyVault -ResourceGroupName <resource group name> -VaultName <key vault name>
+$Law= Get-AzOperationalInsightsWorkspace -ResourceGroupName <resource group name> -Name <workspace name> #LAW name is case sensitive
+
+$metric = @()
+$log = @()
+$metric += New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category AllMetrics -RetentionPolicyDay 30 -RetentionPolicyEnabled $true
+$log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup allLogs -RetentionPolicyDay 30 -RetentionPolicyEnabled $true
+$log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup audit -RetentionPolicyDay 30 -RetentionPolicyEnabled $true
+New-AzDiagnosticSetting -Name 'KeyVault-Diagnostics' -ResourceId $KV.ResourceId -WorkspaceId $Law.ResourceId -Log $log -Metric $metric -Verbose
``` # [CLI](#tab/cli)
Every effort is made to ensure all log data is sent correctly to your destinatio
## Next step
-[Read more about Azure platform logs](./platform-logs-overview.md)
+[Read more about Azure platform logs](./platform-logs-overview.md)
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
na Previously updated : 01/17/2023 Last updated : 01/27/2023
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
| Azure NetApp Files features | Azure public cloud availability | Azure Government availability | |: |: |: |
-| Azure NetApp Files cross-region replication | Generally available (GA) | [Limited](cross-region-replication-introduction.md#supported-region-pairs) |
| Azure NetApp Files backup | Public preview | No | | Cross-zone replication | Public preview | No | | Standard network features | Generally available (GA) | No |
See [Connect to Azure Government with PowerShell](../azure-government/documentat
* [What's new in Azure NetApp Files](whats-new.md) * [Compare Azure Government and global Azure](../azure-government/compare-azure-government-global-azure.md) * [Azure NetApp Files REST API](azure-netapp-files-develop-with-rest-api.md)
-* [Azure NetApp Files REST API using PowerShell](develop-rest-api-powershell.md)
+* [Azure NetApp Files REST API using PowerShell](develop-rest-api-powershell.md)
azure-netapp-files Azure Netapp Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md
Title: What is Azure NetApp Files | Microsoft Docs
-description: Learn about Azure NetApp Files, an enterprise-class, high-performance, metered file storage service that supports any workload type and is highly available.
+description: Learn about Azure NetApp Files, an Azure native, first-party, enterprise-class, high-performance file storage service.
documentationcenter: ''
na Previously updated : 01/24/2023 Last updated : 01/26/2023 # What is Azure NetApp Files
-Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. You can select service and performance levels, create NetApp accounts, capacity pools, volumes, and manage data protection.
+Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides NAS volumes as a service, in which you create NetApp accounts and capacity pools, select service and performance levels, create volumes, and manage data protection. You can create and manage high-performance, highly available, and scalable file shares by using the same protocols and tools that you're familiar with and that enterprise applications rely on on-premises. Azure NetApp Files supports the SMB and NFS protocols and suits use cases such as file sharing, home directories, databases, and high-performance computing. It also provides built-in availability, data protection, and disaster recovery capabilities.
-The Azure NetApp Files documentation provides instructions on creating and managing volumes by using Azure NetApp Files.
+## High performance
+
+Azure NetApp Files is designed to provide high-performance file storage for enterprise workloads. Key features that contribute to the high performance include:
+
+* High throughput:
+ Azure NetApp Files supports high throughput for large file transfers and can handle many random read and write operations with high concurrency, over the Azure high-speed network. This functionality helps to ensure that your workloads aren't bottlenecked by VM disk storage performance. Azure NetApp Files supports multiple service levels, such that you can choose the optimal mix of capacity, performance and cost.
+* Low latency:
+ Azure NetApp Files is built on top of an all-flash bare-metal fleet, which is optimized for low latency, high throughput, and random IO. This functionality helps to ensure that your workloads experience optimal (low) storage latency.
+* Protocols:
+  Azure NetApp Files supports SMB, NFSv3/NFSv4.1, and dual-protocol volumes, which are the most common protocols used in enterprise environments. This support allows you to use the same protocols and tools that you use on-premises, which helps to ensure compatibility and ease of use. Azure NetApp Files also supports NFS `nconnect` and SMB Multichannel for increased network performance.
+* Scale:
+ Azure NetApp Files can scale up or down to meet the performance and capacity needs of your workloads. You can increase or decrease the size of your volumes as needed, and the service automatically provisions the necessary throughput.
+* Changing of service levels:
+  With Azure NetApp Files, you can dynamically change your volumes' service levels online to tune your capacity and performance needs whenever you need to. These changes can even be fully automated through APIs (a sketch follows at the end of this section).
+* Optimized for workloads:
+  Azure NetApp Files is optimized for demanding workloads such as high-performance computing (HPC), IO-intensive applications, and databases. It provides high performance, high availability, and scalability for these workloads.
+
+All these features work together to provide a high-performance file storage solution that can handle the demands of enterprise workloads. They help to ensure that your workloads experience optimal (low) storage latency.
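For example, the following is a hedged sketch of an online service level change with the Az.NetAppFiles PowerShell module, which moves a volume to a capacity pool at the target service level; all resource names are placeholders.

```powershell
# Look up the destination pool (created at the Premium service level), then move the volume to it.
$premiumPool = Get-AzNetAppFilesPool -ResourceGroupName myRG -AccountName myaccount -PoolName premiumpool
Set-AzNetAppFilesVolumePool -ResourceGroupName myRG -AccountName myaccount -PoolName standardpool -Name myvolume -NewPoolResourceId $premiumPool.Id
```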
+
+## High availability
+
+Azure NetApp Files is designed to provide high availability for your file storage needs. Key features that contribute to the high availability include:
+
+* Automatic failover:
+  Azure NetApp Files supports automatic failover within the bare-metal fleet if there's a disruption or maintenance event. This functionality helps to ensure that your data is always available, even during a failure.
+* Multi-protocol access:
+ Azure NetApp Files supports both SMB and NFS protocols, helping to ensure that your applications can access your data, regardless of the protocol they use.
+* Self-healing:
+ Azure NetApp Files is built on top of a self-healing storage infrastructure, which helps to ensure that your data is always available and recoverable.
+* Support for Availability Zones:
+  Volumes can be deployed in an Availability Zone of choice, enabling you to build highly available application architectures for increased application availability.
+* Data replication:
+ Azure NetApp Files supports data replication between different Azure regions and Availability Zones, which helps to ensure that your data is always available, even in an outage.
+* Azure NetApp Files provides a high [availability SLA](https://azure.microsoft.com/support/legal/sla/netapp/v1_1/).
+
+All these features work together to provide a high-availability file storage solution to ensure that your data is always available, recoverable, and accessible to your applications, even in an outage.
+
+## Data protection
+
+Azure NetApp Files provides built-in data protection to help ensure the safe storage, availability, and recoverability of your data. Key features include:
+
+* Snapshot copies:
+ Azure NetApp Files allows you to create point-in-time snapshots of your volumes, which can be restored or reverted to a previous state. The snapshots are incremental. That is, they only capture the changes made since the last snapshot, at the block level, which helps to drastically reduce storage consumption.
+* Backup and restore:
+ Azure NetApp Files provides integrated backup, which allows you to create backups of your volume snapshots to lower-cost Azure storage and restore them if data loss happens.
+* Data replication:
+ Azure NetApp Files supports data replication between different Azure regions and Availability Zones, which helps to ensure high availability and disaster recovery. Replication can be done asynchronously, and the service can fail over to a secondary region or zone in an outage.
+* Security:
+ Azure NetApp Files provides built-in security features such as RBAC/IAM, Active Directory Domain Services (AD DS), Azure Active Directory Domain Services (AADDS) and LDAP integration, and Azure Policy. This functionality helps to protect data from unauthorized access, breaches, and misconfigurations.
+
+All these features work together to provide a comprehensive data protection solution that helps to ensure that your data is always available, recoverable, and secure.
## Next steps
azure-netapp-files Configure Kerberos Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-kerberos-encryption.md
The following requirements apply to NFSv4.1 client encryption:
1. Follow the instructions in [Create an Active Directory connection](create-active-directory-connections.md).
- Kerberos requires that you create at least one machine account in Active Directory. The account information you provide is used for creating the accounts for both SMB *and* NFSv4.1 Kerberos volumes. This machine is account is created automatically during volume creation.
    Kerberos requires that you create at least one computer account in Active Directory. The account information you provide is used for creating the accounts for both SMB *and* NFSv4.1 Kerberos volumes. This computer account is created automatically during volume creation.
2. Under **Kerberos Realm**, enter the **AD Server Name** and the **KDC IP** address.
- AD Server and KDC IP can be the same server. This information is used to create the SPN machine account used by Azure NetApp Files. After the machine account is created, Azure NetApp Files will use DNS Server records to locate additional KDC servers as needed.
+ AD Server and KDC IP can be the same server. This information is used to create the SPN computer account used by Azure NetApp Files. After the computer account is created, Azure NetApp Files will use DNS Server records to locate additional KDC servers as needed.
![Kerberos Realm](../media/azure-netapp-files/kerberos-realm.png)
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
The following information is passed to the server in the query:
1. LDAP volumes require an Active Directory configuration for LDAP server settings. Follow instructions in [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections) and [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection) to configure Active Directory connections on the Azure portal. > [!NOTE]
- > Ensure that you have configured the Active Directory connection settings. A machine account will be created in the organizational unit (OU) that is specified in the Active Directory connection settings. The settings are used by the LDAP client to authenticate with your Active Directory.
+ > Ensure that you have configured the Active Directory connection settings. A computer account will be created in the organizational unit (OU) that is specified in the Active Directory connection settings. The settings are used by the LDAP client to authenticate with your Active Directory.
2. Ensure that the Active Directory LDAP server is up and running on the Active Directory.
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
The AD connection is visible only through the NetApp account it's created in. However, you can enable the Shared AD feature to allow NetApp accounts that are under the same subscription and same region to use the same AD connection. See [Map multiple NetApp accounts in the same subscription and region to an AD connection](#shared_ad). * The Azure NetApp Files AD connection admin account must have the following properties:
- * It must be an AD DS domain user account in the same domain where the Azure NetApp Files machine accounts are created.
- * It must have the permission to create machine accounts (for example, AD domain join) in the AD DS organizational unit path specified in the **Organizational unit path option** of the AD connection.
+ * It must be an AD DS domain user account in the same domain where the Azure NetApp Files computer accounts are created.
+ * It must have the permission to create computer accounts (for example, AD domain join) in the AD DS organizational unit path specified in the **Organizational unit path option** of the AD connection.
* It cannot be a [Group Managed Service Account](/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview).
-* The AD connection admin account supports Kerberos AES-128 and Kerberos AES-256 encryption types for authentication with AD DS for Azure NetApp Files machine account creation (for example, AD domain join operations).
+* The AD connection admin account supports Kerberos AES-128 and Kerberos AES-256 encryption types for authentication with AD DS for Azure NetApp Files computer account creation (for example, AD domain join operations).
* To enable the AES encryption on the Azure NetApp Files AD connection admin account, you must use an AD domain user account that is a member of one of the following AD DS groups:
Several features of Azure NetApp Files require that you have an Active Directory
> See [Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files](understand-guidelines-active-directory-domain-service-site.md). Ensure that your AD DS site design and configuration meets the requirements for Azure NetApp Files. Otherwise, Azure NetApp Files service operations, SMB authentication, Kerberos, or LDAP operations might fail. * **SMB server (computer account) prefix (required)**
- This is the naming prefix for new machine accounts created in AD DS for Azure NetApp Files SMB, dual protocol, and NFSv4.1 Kerberos volumes.
+ This is the naming prefix for new computer accounts created in AD DS for Azure NetApp Files SMB, dual protocol, and NFSv4.1 Kerberos volumes.
For example, if the naming standard that your organization uses for file services is `NAS-01`, `NAS-02`, and so on, then you would use `NAS` for the prefix.
- Azure NetApp Files will create additional machine accounts in AD DS as needed.
+ Azure NetApp Files will create additional computer accounts in AD DS as needed.
>[!IMPORTANT] >Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You will need to re-mount existing SMB shares after renaming the SMB server prefix. * **Organizational unit path**
- This is the LDAP path for the organizational unit (OU) where SMB server machine accounts will be created. That is, `OU=second level, OU=first level`. For example, if you want to use an OU called `ANF` created at the root of the domain, the value would be `OU=ANF`.
+ This is the LDAP path for the organizational unit (OU) where SMB server computer accounts will be created. That is, `OU=second level, OU=first level`. For example, if you want to use an OU called `ANF` created at the root of the domain, the value would be `OU=ANF`.
If no value is provided, Azure NetApp Files will use the `CN=Computers` container.
Several features of Azure NetApp Files require that you have an Active Directory
Azure NetApp Files supports LDAP Channel Binding if both LDAP Signing and LDAP over TLS settings options are enabled in the Active Directory Connection. For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023). >[!NOTE]
- >DNS PTR records for the AD DS machine account(s) must be created in the AD DS **Organizational Unit** specified in the Azure NetApp Files AD connection for LDAP Signing to work.
+ >DNS PTR records for the AD DS computer account(s) must be created in the AD DS **Organizational Unit** specified in the Azure NetApp Files AD connection for LDAP Signing to work.
![Screenshot of the LDAP signing checkbox.](../media/azure-netapp-files/active-directory-ldap-signing.png)
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Azure NetApp Files also supports [`LOCK` response](/openspecs/windows_protocols/
NTLMv2 and Kerberos network authentication methods are supported with SMB volumes in Azure NetApp Files. NTLMv1 and LanManager are disabled and are not supported.
-## What is the password rotation policy for the Active Directory machine account for SMB volumes?
+## What is the password rotation policy for the Active Directory computer account for SMB volumes?
-The Azure NetApp Files service has a policy that automatically updates the password on the Active Directory machine account that is created for SMB volumes. This policy has the following properties:
+The Azure NetApp Files service has a policy that automatically updates the password on the Active Directory computer account that is created for SMB volumes. This policy has the following properties:
* Schedule interval: 4 weeks * Schedule randomization period: 120 minutes * Schedule: Sunday `@0100`
-To see when the password was last updated on the Azure NetApp Files SMB machine account, check the `pwdLastSet` property on the computer account using the [Attribute Editor](create-volumes-dual-protocol.md#access-active-directory-attribute-editor) in the **Active Directory Users and Computers** utility:
+To see when the password was last updated on the Azure NetApp Files SMB computer account, check the `pwdLastSet` property on the computer account using the [Attribute Editor](create-volumes-dual-protocol.md#access-active-directory-attribute-editor) in the **Active Directory Users and Computers** utility:
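Alternatively, a minimal PowerShell sketch that reads the same attribute with the ActiveDirectory RSAT module (the computer account name is a placeholder):

```powershell
# pwdLastSet is stored as a Windows FILETIME value; convert it to a readable date.
Get-ADComputer -Identity NAS-01 -Properties pwdLastSet |
    Select-Object Name, @{ n = 'PwdLastSet'; e = { [DateTime]::FromFileTime($_.pwdLastSet) } }
```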
![Screenshot that shows the Active Directory Users and Computers utility](../media/azure-netapp-files/active-directory-users-computers-utility.png ) >[!NOTE] > Due to an interoperability issue with the [April 2022 Monthly Windows Update](
-https://support.microsoft.com/topic/april-12-2022-kb5012670-monthly-rollup-cae43d16-5b5d-43ea-9c52-9174177c6277), the policy that automatically updates the Active Directory machine account password for SMB volumes has been suspended until a fix is deployed.
+https://support.microsoft.com/topic/april-12-2022-kb5012670-monthly-rollup-cae43d16-5b5d-43ea-9c52-9174177c6277), the policy that automatically updates the Active Directory computer account password for SMB volumes has been suspended until a fix is deployed.
## Does Azure NetApp Files support Alternate Data Streams (ADS)?
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
Once you have [created an Active Directory connection](create-active-directory-c
| Secondary DNS | Secondary DNS server IP addresses for the Active Directory domain. | Yes | None* | New DNS IP will be used for DNS resolution in case primary DNS fails. | | AD DNS Domain Name | The domain name of your Active Directory Domain Services that you want to join.ΓÇ»| No | None | N/A | | AD Site Name | The site to which the domain controller discovery is limited. | Yes | This should match the site name in Active Directory Sites and Services. See footnote.* | Domain discovery will be limited to the new site name. If not specified, "Default-First-Site-Name" will be used. |
-| SMB Server (Computer Account) Prefix | Naming prefix for the machine account in Active Directory that Azure NetApp Files will use for the creation of new accounts. See footnote.* | Yes | Existing volumes need to be mounted again as the mount is changed for SMB shares and NFS Kerberos volumes.* | Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You'll need to remount existing SMB shares and NFS Kerberos volumes after renaming the SMB server prefix as the mount path will change. |
-| Organizational Unit Path | The LDAP path for the organizational unit (OU) where SMB server machine accounts will be created. `OU=second level`, `OU=first level`| No | If you are using Azure NetApp Files with Azure Active Directory Domain Services (AADDS), the organizational path is `OU=AADDC Computers` when you configure Active Directory for your NetApp Account. | Machine accounts will be placed under the OU specified. If not specified, the default of `OU=Computers` is used by default. |
+| SMB Server (Computer Account) Prefix | Naming prefix for the computer account in Active Directory that Azure NetApp Files will use for the creation of new accounts. See footnote.* | Yes | Existing volumes need to be mounted again as the mount is changed for SMB shares and NFS Kerberos volumes.* | Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You'll need to remount existing SMB shares and NFS Kerberos volumes after renaming the SMB server prefix as the mount path will change. |
+| Organizational Unit Path | The LDAP path for the organizational unit (OU) where SMB server computer accounts will be created. `OU=second level`, `OU=first level`| No | If you are using Azure NetApp Files with Azure Active Directory Domain Services (AADDS), the organizational path is `OU=AADDC Computers` when you configure Active Directory for your NetApp Account. | Computer accounts will be placed under the OU specified. If not specified, `OU=Computers` is used by default. |
| AES Encryption | To take advantage of the strongest security with Kerberos-based communication, you can enable AES-256 and AES-128 encryption on the SMB server. | Yes | If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled, matching the capabilities enabled for your Active Directory. For example, if your Active Directory has only AES-128 enabled, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory does not have any Kerberos encryption capability, Azure NetApp Files uses DES by default.* | Enable AES encryption for Active Directory Authentication | | LDAP Signing | This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified Active Directory Domain Services domain controller. | Yes | LDAP signing to Require Signing in group policy* | This provides ways to increase the security for communication between LDAP clients and Active Directory domain controllers. | | Allow local NFS users with LDAP | If enabled, this option will manage access for local users and LDAP users. | Yes | This option will allow access to local users. It is not recommended and, if enabled, should only be used for a limited time and later disabled. | If enabled, this option will allow access to local users and LDAP users. If access is needed for only LDAP users, this option must be disabled. |
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md
This article describes error messages and resolutions that can help you troubles
| Error conditions | Resolutions | |--|-| | The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if AD DS and the volume are being deployed in same region.</li> <li>Check if AD DS and the volume are using the same VNet. If they're using different VNETs, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Azure AD DS. Azure AD DS should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine accounts. </li> <li> If you use Azure AD DS, make sure that the user is part of the Azure AD group `Azure AD DC Administrators`. </li></ul> |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine (computer) accounts. </li> <li> If you use Azure AD DS, make sure that the user is part of the Azure AD group `Azure AD DC Administrators`. </li></ul> |
| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-A452\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\n [ 567] Loaded the preliminary configuration.\n [ 671] Successfully connected to ip 10.x.x.x, port 88 using TCP\n**[ 1099] FAILURE: Could not authenticate as\n** 'user@contoso.com': CIFS server account password does\n** not match password stored in Active Directory\n** (KRB5KDC_ERR_PREAUTH_FAILED)\n. "}]}` | Make sure that the password entered for joining the AD connection is correct. | | The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Azure AD DS, make sure that the organizational unit path is `OU=AADDC Computers`. | | The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL. Reason: LDAP Error: Local error occurred Details: Error: Machine account creation procedure failed. [nnn] Loaded the preliminary configuration. [nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn] Successfully connected to ip 10.x.x.x, port 389 using [nnn] Entry for host-address: 10.x.x.x not found in the current source: FILES. Ignoring and trying next available source [nnn] Source: DNS unavailable. Entry for host-address:10.x.x.x found in any of the available sources\n*[nnn] FAILURE: Unable to SASL bind to LDAP server using GSSAPI: local error [nnn] Additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) [nnn] Unable to connect to LDAP (Active Directory) service on contoso.com (Error: Local error) [nnn] Unable to make a connection (LDAP (Active Directory):contosa.com, result: 7643. ` | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. |
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
Ensure that you meet the following requirements about the DNS configurations:
* Ensure that [the SRV records registered by the AD DS Net Logon service](https://social.technet.microsoft.com/wiki/contents/articles/7608.srv-records-registered-by-net-logon.aspx) have been created on the DNS servers. * Ensure that the PTR records for the AD DS domain controllers used by Azure NetApp Files have been created on the DNS servers. * Azure NetApp Files supports standard and secure dynamic DNS updates. If you require secure dynamic DNS updates, ensure that secure updates are configured on the DNS servers.
-* If dynamic DNS updates are not used, you need to manually create an A record and a PTR record for the AD DS machine account(s) created in the AD DS **Organizational Unit** (specified in the Azure NetApp Files AD connection) to support Azure NetApp FIles LDAP Signing, LDAP over TLS, SMB, dual-protocol, or Kerberos NFSv4.1 volumes.
+* If dynamic DNS updates are not used, you need to manually create an A record and a PTR record for the AD DS computer account(s) created in the AD DS **Organizational Unit** (specified in the Azure NetApp Files AD connection) to support Azure NetApp Files LDAP Signing, LDAP over TLS, SMB, dual-protocol, or Kerberos NFSv4.1 volumes (see the sketch after this list).
* For complex or large AD DS topologies, [DNS Policies or DNS subnet prioritization may be required to support LDAP enabled NFS volumes](#ad-ds-ldap-discover).
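If you manage DNS on Windows Server, the following is a hedged sketch with the DnsServer module; the zone, record name, and IP address are placeholders, and `-CreatePtr` assumes the matching reverse lookup zone already exists.

```powershell
# Create the A record for the Azure NetApp Files computer account and its PTR record in one call.
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "NAS-01" -IPv4Address "10.0.0.40" -CreatePtr
```

### Time source requirements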
azure-resource-manager Microsoft Common Dropdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-dropdown.md
Title: DropDown UI element
-description: Describes the Microsoft.Common.DropDown UI element for Azure portal. Use to select from available options when deploying a managed application.
+description: Describes the Microsoft.Common.DropDown UI element for Azure portal. The element is used to select from the available options when deploying a managed application.
Previously updated : 07/14/2020 Last updated : 01/27/2023
A selection control with a dropdown list. You can allow selection of only a sing
## UI sample
-The DropDown element has different options which determine its appearance in the portal.
+The DropDown element has different options that determine its appearance in the portal.
When only a single item is allowed for selection, the control appears as: When descriptions are included, the control appears as: When multi-select is enabled, the control adds a **Select all** option and checkboxes for selecting more than one item: Descriptions can be included with multi-select enabled. When filtering is enabled, the control includes a text box for adding the filtering value. ## Schema ```json {
- "name": "element1",
- "type": "Microsoft.Common.DropDown",
- "label": "Example drop down",
- "placeholder": "",
- "defaultValue": ["Value two"],
- "toolTip": "",
- "multiselect": true,
- "selectAll": true,
- "filter": true,
- "filterPlaceholder": "Filter items ...",
- "multiLine": true,
- "defaultDescription": "A value for selection",
- "constraints": {
- "allowedValues": [
- {
- "label": "Value one",
- "description": "The value to select for option 1.",
- "value": "one"
- },
- {
- "label": "Value two",
- "description": "The value to select for option 2.",
- "value": "two"
- }
- ],
- "required": true
- },
- "visible": true
+ "name": "element1",
+ "type": "Microsoft.Common.DropDown",
+ "label": "Example drop down",
+ "placeholder": "",
+ "defaultValue": ["Value two"],
+ "toolTip": "",
+ "multiselect": true,
+ "selectAll": true,
+ "filter": true,
+ "filterPlaceholder": "Filter items ...",
+ "multiLine": true,
+ "defaultDescription": "A value for selection",
+ "constraints": {
+ "allowedValues": [
+ {
+ "label": "Value one",
+ "description": "The value to select for option 1.",
+ "value": "one"
+ },
+ {
+ "label": "Value two",
+ "description": "The value to select for option 2.",
+ "value": "two"
+ }
+ ],
+ "required": true
+ },
+ "visible": true
} ```
When filtering is enabled, the control includes a text box for adding the filter
- The `defaultDescription` property is used for items that don't have a description. - The `placeholder` property is help text that disappears when the user begins editing. If the `placeholder` and `defaultValue` are both defined, the `defaultValue` takes precedence and is shown.
+## Example
+
+In the following example, the `defaultValue` is defined using the values of the `allowedValues` instead of the labels. The default value can contain multiple values when `multiselect` is enabled.
++
+```json
+{
+ "name": "element1",
+ "type": "Microsoft.Common.DropDown",
+ "label": "Example drop down",
+ "placeholder": "",
+ "defaultValue": [{"value": "one"}, {"value": "two"}],
+ "toolTip": "Multiple values can be selected",
+ "multiselect": true,
+ "selectAll": true,
+ "filter": true,
+ "filterPlaceholder": "Filter items ...",
+ "multiLine": true,
+ "defaultDescription": "A value for selection",
+ "constraints": {
+ "allowedValues": [
+ {
+ "label": "Value one",
+ "description": "The value to select for option 1.",
+ "value": "one"
+ },
+ {
+ "label": "Value two",
+ "description": "The value to select for option 2.",
+ "value": "two"
+ }
+ ],
+ "required": true
+ },
+ "visible": true
+}
+```
+ ## Next steps
-* For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).
-* For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).
+- For an introduction to creating UI definitions, see [Getting started with CreateUiDefinition](create-uidefinition-overview.md).
+- For a description of common properties in UI elements, see [CreateUiDefinition elements](create-uidefinition-elements.md).
azure-video-analyzer Export Portion Of Video As Mp4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/cloud/export-portion-of-video-as-mp4.md
Complete the following prerequisites to run the [C# SDK sample code](https://git
1. Get your Azure Active Directory [Tenant ID](../../../active-directory/fundamentals/active-directory-how-to-find-tenant.md). 1. Register an application with Microsoft identity platform to get app registration [Client ID](../../../active-directory/develop/quickstart-register-app.md#register-an-application) and [Client secret](../../../active-directory/develop/quickstart-register-app.md#add-a-client-secret). 1. [Visual Studio Code](https://code.visualstudio.com/) on your development machine with following extensions -
- * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
+ * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
* [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp). 1. [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1) on your development machine. 1. A recorded video in the Video Analyzer account, or an [RTSP camera](../quotas-limitations.md#supported-cameras-1) accessible over the internet. Alternatively, you can deploy an [RTSP camera simulator](get-started-livepipelines-portal.md#deploy-rtsp-camera-simulator).
azure-video-analyzer Analyze Ai Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/analyze-ai-composition.md
After completing the steps in this guide, you'll be able to run a simulated live
> [!NOTE] > You will need an Azure subscription with permissions for creating service principals (owner role provides this). If you do not have the right permissions, please reach out to your account administrator to grant you the right permissions.
-* [Visual Studio Code](https://code.visualstudio.com/) on your development machine. Make sure you have the [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) on your development machine. Make sure you have the [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
* Make sure the network that your development machine is connected to permits Advanced Message Queueing Protocol (AMQP) over port 5671 for outbound traffic. This setup enables Azure IoT Tools to communicate with Azure IoT Hub. * Complete [Quickstart: Analyze a live video feed from a (simulated) IP camera using your own gRPC model](analyze-live-video-use-your-model-grpc.md). Do not skip this step, as it's a strict requirement for the how-to guide.
azure-video-analyzer Deploy Iot Edge Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/deploy-iot-edge-linux-on-windows.md
In this article, you'll learn how to deploy Azure Video Analyzer on an edge devi
* An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
-* [Visual Studio Code](https://code.visualstudio.com/) on your development machine. Make sure you have the [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools).
+* [Visual Studio Code](https://code.visualstudio.com/) on your development machine. Make sure you have the [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit).
* Read [What is EFLOW](../../../iot-edge/iot-edge-for-linux-on-windows.md). ## Deployment steps
azure-video-analyzer Deploy On Stack Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/deploy-on-stack-edge.md
In the article, we'll deploy Video Analyzer by using Azure IoT Hub, but the Azur
We recommend that you use a [general-purpose v2 storage account](../../../storage/common/storage-account-upgrade.md?tabs=azure-portal). * [Visual Studio Code](https://code.visualstudio.com/), installed on your development machine
-* The [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools), installed in Visual Studio Code
+* The [Azure IoT Tools extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit), installed in Visual Studio Code
* Make sure the network that your development machine is connected to permits Advanced Message Queueing Protocol over port 5671. This setup enables Azure IoT Tools to communicate with your Azure IoT hub. ## Configure Azure Stack Edge to use Video Analyzer
azure-video-analyzer Detect Motion Record Video Clips Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/detect-motion-record-video-clips-cloud.md
This article walks you through the steps to use Azure Video Analyzer edge module
[!INCLUDE [azure-subscription-permissions](./includes/common-includes/azure-subscription-permissions.md)] * [Visual Studio Code](https://code.visualstudio.com/), with the following extensions:
- * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
+ * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
### Set up Azure resources
azure-video-analyzer Get Started Detect Motion Emit Events Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/get-started-detect-motion-emit-events-portal.md
After you complete the setup steps, you'll be able to run the simulated live vid
- An IoT Edge device on which you have admin privileges: - [Deploy to an IoT Edge device](deploy-iot-edge-device.md) - [Deploy to an IoT Edge for Linux on Windows](deploy-iot-edge-linux-on-windows.md)-- [Visual Studio Code](https://code.visualstudio.com/), with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) extension.
+- [Visual Studio Code](https://code.visualstudio.com/), with the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension.
[!INCLUDE [install-docker-prompt](./includes/common-includes/install-docker-prompt.md)]
azure-video-analyzer Get Started Detect Motion Emit Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/get-started-detect-motion-emit-events.md
After completing the setup steps, you'll be able to run the simulated live video
[!INCLUDE [azure-subscription-permissions](./includes/common-includes/azure-subscription-permissions.md)] * [Visual Studio Code](https://code.visualstudio.com/), with the following extensions:
- * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
+ * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
[!INCLUDE [install-docker-prompt](./includes/common-includes/install-docker-prompt.md)]
azure-video-analyzer Record Event Based Live Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/record-event-based-live-video.md
Prerequisites for this tutorial are:
[!INCLUDE [azure-subscription-permissions](./includes/common-includes/azure-subscription-permissions.md)] * [Install Docker](https://docs.docker.com/desktop/#download-and-install) on your machine. * [Visual Studio Code](https://code.visualstudio.com/), with the following extensions:
- * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools)
+ * [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
* [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) * [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1).
azure-vmware Concepts Network Design Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md
Use either 2-byte or 4-byte public ASN numbers, and make sure that they're compa
## Management VMs and default routes from on-premises > [!IMPORTANT]
-> Azure VMware Solution management virtual machines (VMs) won't honor a default route from on-premises.
+> Azure VMware Solution management virtual machines (VMs) won't honor a default route from on-premises for RFC1918 destinations.
-If you're routing back to your on-premises networks by using only a default route advertised toward Azure, vCenter Server and NSX-T Manager VMs won't be compatible with that route.
+If you're routing back to your on-premises networks by using only a default route advertised toward Azure, traffic from vCenter Server and NSX-T Manager VMs towards on-premises destinations with private IP addresses won't follow that route.
-To reach vCenter Server and NSX-T Manager, provide specific routes from on-premises to allow traffic to have a return path to those networks.
+To reach vCenter Server and NSX-T Manager from on-premises, provide specific routes to allow traffic to have a return path to those networks. For example, advertise the RFC1918 summaries (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16).
## Default route to Azure VMware Solution for internet traffic inspection
azure-vmware Protect Azure Vmware Solution With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
This procedure shows you how to define backend address pools using VMs running o
6. Configure the corresponding backend pool and HTTP settings. Select **Add**.
-7. Test the connection. Open your preferred browser and navigate to the different websites hosted on your Azure VMware Solution environment, for example, http://www.fabrikam.com.
+7. Test the connection. Open your preferred browser and navigate to the different websites hosted on your Azure VMware Solution environment.
:::image type="content" source="media/application-gateway/app-gateway-multi-backend-pool-07.png" alt-text="Screenshot of browser page showing successful test the connection.":::
backup Blob Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-restore.md
Block blobs in storage accounts with operational backup configured can be restor
- Blobs will be restored to the same storage account. So blobs that have undergone changes since the time to which you're restoring will be overwritten. - Only block blobs in a standard general-purpose v2 storage account can be restored as part of a restore operation. Append blobs, page blobs, and premium block blobs aren't restored. - When you perform a restore operation, Azure Storage blocks data operations on the blobs in the ranges being restored for the duration of the operation.-- A blob with an active lease cannot be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation.
+- If a blob with an active lease is included in the range to restore, and if the current version of the leased blob is different from the previous version at the timestamp provided for PITR, the restore operation will fail atomically. We recommend breaking any active leases before initiating the restore operation (a sketch follows this list).
- Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state. - If you delete a container from the storage account by calling the **Delete Container** operation, that container cannot be restored with a restore operation. Rather than deleting an entire container, delete individual blobs if you may want to restore them later. Also, Microsoft recommends enabling soft delete for containers in addition to operational backup to protect against accidental deletion of containers. - Refer to the [support matrix](blob-backup-support-matrix.md) for all limitations and supported scenarios.
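A minimal sketch of breaking a lease before restoring, using the `@azure/storage-blob` client library; the connection string, container name, and blob name are placeholders:

```javascript
import { BlobServiceClient } from "@azure/storage-blob";

// Placeholders: substitute your own connection string, container, and blob names.
const service = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING);
const blobClient = service.getContainerClient("my-container").getBlobClient("my-blob.txt");

// Break the active lease immediately (break period of 0 seconds) so the restore can proceed.
const leaseClient = blobClient.getBlobLeaseClient();
await leaseClient.breakLease(0);
```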
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
For example, our most powerful GPT-3 model is called `text-davinci-003`, while o
## Finding what models are available
-You can easily see the models you have available for both inference and fine-tuning in your resource by using the [Models API](/rest/api/cognitiveservices/azureopenai/models/list).
+You can easily see the models you have available for both inference and fine-tuning in your resource by using the [Models API](/rest/api/cognitiveservices/azureopenaistable/models/list).
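As a rough sketch of calling that API (the `api-version` value and the response shape below are assumptions; check the linked reference for current details):

```javascript
// Minimal sketch: list the models available in an Azure OpenAI resource.
// Assumptions: the resource-name placeholder and the api-version value.
const endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com";

const response = await fetch(`${endpoint}/openai/models?api-version=2022-12-01`, {
  headers: { "api-key": process.env.AZURE_OPENAI_API_KEY }
});
const body = await response.json();

// Each entry describes a model and what it can be used for.
for (const model of body.data) {
  console.log(model.id, model.capabilities);
}
```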
## Finding the right model
communication-services Email Authentication Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-authentication-best-practice.md
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-This article provides the best practices on how to use the sender authentication methods that help prevent attackers from sending messages that look like they come from your domain.
+This article provides best practices for email sending, covering DNS records and the sender authentication methods that help prevent attackers from sending messages that look like they come from your domain.
-## Email authentication
+## Email authentication and DNS setup
Sending an email involves several steps, including verifying that the sender actually owns the domain, checking the domain's reputation, virus scanning, and filtering for spam, phishing attempts, and malware. Configuring proper email authentication is a foundational principle for establishing trust in email and protecting your domain's reputation. If an email passes authentication checks, the receiving domain can apply policy to that email in keeping with the reputation already established for the identities associated with those authentication checks, and the recipient can be assured that those identities are valid.
+### MX (Mail Exchange) record
+The MX (Mail Exchange) record routes email to the correct server. It specifies the mail server responsible for accepting email messages on behalf of your domain. Keep the MX records for your email domain up to date in DNS; stale records cause delivery failures.
+ ### SPF (Sender Policy Framework)
-SPF [RFC 7208](https://tools.ietf.org/html/rfc7208) is a mechanism that allows domain owners to publish and maintain, via a standard DNS TXT record, a list of systems authorized to send email on their behalf.
+SPF [RFC 7208](https://tools.ietf.org/html/rfc7208) is a mechanism that allows domain owners to publish and maintain, via a standard DNS TXT record, a list of systems authorized to send email on their behalf. Publishing an SPF record helps prevent email spoofing and increases email deliverability.
### DKIM (Domain Keys Identified Mail)
-DKIM [RFC 6376](https://tools.ietf.org/html/rfc6376) allows an organization to claim responsibility for transmitting a message in a way that can be validated by the recipient
+DKIM [RFC 6376](https://tools.ietf.org/html/rfc6376) allows an organization to claim responsibility for transmitting a message in a way that can be validated by the recipient. The DKIM record authenticates the domain the email is sent from, which helps prevent email spoofing and increases email deliverability.
### DMARC (Domain-based Message Authentication, Reporting, and Conformance)
-DMARC [RFC 7489](https://tools.ietf.org/html/rfc7489) is a scalable mechanism by which a mail-originating organization can express domain-level policies and preferences for message validation, disposition, and reporting that a mail-receiving organization can use to improve mail handling.
+DMARC [RFC 7489](https://tools.ietf.org/html/rfc7489) is a scalable mechanism by which a mail-originating organization can express domain-level policies and preferences for message validation, disposition, and reporting that a mail-receiving organization can use to improve mail handling. It is also used to specify how email receivers should handle messages that fail SPF and DKIM checks. This improves email deliverability and helps to prevent email spoofing.
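For reference, a minimal sketch of how these records might look in a DNS zone; the `contoso.com` domain, selector name, include host, and policy values are illustrative only and must match your email provider's instructions:

```
; SPF: authorize the systems allowed to send mail as contoso.com
contoso.com.                      IN TXT   "v=spf1 include:spf.protection.outlook.com -all"

; DKIM: publish the provider-supplied selector (here via CNAME to the provider's key record)
selector1._domainkey.contoso.com. IN CNAME selector1-contoso-com._domainkey.example.onmicrosoft.com.

; DMARC: tell receivers how to handle mail that fails SPF/DKIM checks and where to send reports
_dmarc.contoso.com.               IN TXT   "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@contoso.com"
```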
### ARC (Authenticated Received Chain) The ARC protocol [RFC 8617](https://tools.ietf.org/html/rfc8617) provides an authenticated chain of custody for a message, allowing each entity that handles the message to identify what entities handled it previously as well as the message's authentication assessment at each hop. ARC is not yet an internet standard, but adoption is increasing. + ### How Email authentication works Email authentication verifies that email messages from a sender (for example, notification@contoso.com) are legitimate and come from expected sources for that email domain (for example, contoso.com.) An email message may contain multiple originator or sender addresses. These addresses are used for different purposes. For example, consider these addresses:
The following documents may be interesting to you:
- Familiarize yourself with the [Email client library](../email/sdk-features.md) - How to send emails with custom verified domains? [Add custom domains](../../quickstarts/email/add-custom-verified-domains.md)-- How to send emails with Azure Managed Domains? [Add Azure Managed domains](../../quickstarts/email/add-azure-managed-domains.md)
+- How to send emails with Azure Managed Domains? [Add Azure Managed domains](../../quickstarts/email/add-azure-managed-domains.md)
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | File sharing | ❌ | | | Reply to specific chat message | ❌ | | | React to chat message | ❌ |
-| | [Data Loss Prevention (DLP)](/microsoft-365/compliance/dlp-microsoft-teams) | ✔️* |
+| | [Data Loss Prevention (DLP)](/microsoft-365/compliance/dlp-microsoft-teams) | ✔️*|
+| | [Customer Managed Keys (CMK)](/microsoft-365/compliance/customer-key-overview) | ✔️ |
| Mid call control | Turn your video on/off | ✔️ | | | Mute/Unmute mic | ✔️ | | | Switch between cameras | ✔️ |
communication-services Get Started Video Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-video-effects.md
+
+ Title: Quickstart - Add video effects to your video calls
+
+description: Learn how to add video effects in your video calls using Azure Communication Services.
++++ Last updated : 01/09/2023+++++++
+# QuickStart: Add video effects to your video calls
++
+## Next steps
+For more information, see the following articles:
+
+- Check out our [calling hero sample](../../samples/calling-hero-sample.md)
+- Get started with the [UI Library](https://aka.ms/acsstorybook)
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Started Volume Indicator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-volume-indicator.md
As a developer you can have control over checking microphone volume in JavaScrip
> The quick start examples here are available starting on the public preview version [1.9.1-beta.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.9.1-beta.1) of the calling Web SDK. Make sure to use that SDK version or newer when trying this quickstart. ## Checking the audio stream volume
-As a developer it can be nice to have the ability to check and display to end users the current microphone volume. ACS calling API exposes this information using `getVolume`. The `getVolume` value is a number ranging from 0 to 100 (with 0 noting zero audio detected, 100 as the max level detectable). This value iss sampled every 200 ms to get near real time value of volume.
+As a developer, you can check and display to end users the current local microphone volume or the incoming microphone level. The ACS calling API exposes this information using `getVolume`. The `getVolume` value is a number ranging from 0 to 100 (with 0 noting zero audio detected, 100 as the max level detectable). This value is sampled every 200 ms to give a near real time volume level.
### Example usage
-Sample code to get volume of selected microphone. This example shows how to generate the volume level by accessing `getVolume`.
+This example shows how to generate the volume level by accessing `getVolume` of the local audio stream and of the remote incoming audio stream.
```javascript
-//Get the vaolume of the local audio source
+//Get the volume of the local audio source
const volumeIndicator = await new SDK.LocalAudioStream(deviceManager.selectedMicrophone).getVolume();
volumeIndicator.on('levelChanged', ()=>{
    console.log(`Volume is ${volumeIndicator.level}`)
});

//Get the volume of the remote incoming audio stream
const remoteVolumeIndicator = await remoteAudioStream.getVolume();
remoteVolumeIndicator.on('levelChanged', ()=>{
    console.log(`Volume is ${remoteVolumeIndicator.level}`)
})
```
+For a more detailed code sample that shows how to build a UI display of the local and incoming audio levels, see [this sample](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/blob/2a3548dd4446fa2e06f5f5b2c2096174500397c9/Project/src/MakeCall/VolumeVisualizer.js).
+
communication-services Get Started Webview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-webview.md
++
+ Title: Azure Communication Calling Web SDK in WebView environment
+
+description: In this quickstart, you'll learn how to integrate Azure Communication Calling WebJS SDK in a WebView environment
++ Last updated : 01/18/2022+++
+zone_pivot_groups: acs-plat-ios-android
+++
+# QuickStart: Add video calling to your WebView client app
+++
+## Next steps
+
+For more information, see the following articles:
+
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
container-registry Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/zone-redundancy.md
Zone redundancy is a feature of the Premium container registry service tier. Fo
|Americas |Europe |Africa |Asia Pacific | |||||
- |Brazil South<br/>Canada Central<br/>Central US<br/>East US<br/>East US 2<br/>South Central US<br/>US Government Virginia<br/>West US 2<br/>West US 3 |France Central<br/>Germany West Central<br/>North Europe<br/>Norway East<br/>West Europe<br/>UK South |South Africa North<br/> |Australia East<br/>Central India<br/>Japan East<br/>Korea Central<br/>Southeast Asia<br/>East Asia<br/> |
+ |Brazil South<br/>Canada Central<br/>Central US<br/>East US<br/>East US 2<br/>East US 2 EUAP<br/>South Central US<br/>US Government Virginia<br/>West US 2<br/>West US 3 |France Central<br/>Germany West Central<br/>North Europe<br/>Norway East<br/>Sweden Central<br/>Switzerland North<br/>UK South<br/>West Europe |South Africa North<br/> |Australia East<br/>Central India<br/>China North 3<br/>East Asia<br/>Japan East<br/>Korea Central<br/>Qatar Central<br/>Southeast Asia<br/>UAE North |
-* Region conversions to availability zones aren't currently supported. To enable availability zone support in a region, the registry must either be created in the desired region, with availability zone support enabled, or a replicated region must be added with availability zone support enabled.
+* Region conversions to availability zones aren't currently supported.
+* To enable availability zone support in a region, create the registry in the desired region with availability zone support enabled, or add a replicated region with availability zone support enabled.
* A registry with an AZ-enabled stamp creates a home region replication with an AZ-enabled stamp by default. The AZ stamp can't be disabled once it's enabled. * The home region replication represents the home region registry. It helps to view and manage the availability zone properties and can't be deleted.
-* The availability zone is per region, once the replications are created, their states cannot be changed, except by deleting and re-creating the replications.
+* The availability zone is per region, once the replications are created, their states can't be changed, except by deleting and re-creating the replications.
* Zone redundancy can't be disabled in a region. * [ACR Tasks](container-registry-tasks-overview.md) doesn't yet support availability zones.
In the command output, note the `zoneRedundancy` property for the replica. When
1. In **Location**, select a region that supports zone redundancy for Azure Container Registry, such as *East US*. 1. In **SKU**, select **Premium**. 1. In **Availability zones**, select **Enabled**.
-1. Optionally, configure additional registry settings, and then select **Review + create**.
+1. Optionally, configure more registry settings, and then select **Review + create**.
1. Select **Create** to deploy the registry instance. :::image type="content" source="media/zone-redundancy/enable-availability-zones-portal.png" alt-text="Enable zone redundancy in Azure portal":::
cosmos-db Automated Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/automated-recommendations.md
In this category, the advisor detects the indexing mode, indexing policy, indexe
|Name |Description | ||| | Lazy indexing | Detects usage of lazy indexing mode and recommends using consistent indexing mode instead. The purpose of Azure Cosmos DBΓÇÖs lazy indexing mode is limited and can impact the freshness of query results in some situations so consistent indexing mode is recommended. |
-| Composite indexing| Detects the accounts where queries could benefit from composite indexes and recommend using them. Composite indexes can dramatically improve the performance and throughput consumption of some queries.|
| Default indexing policy with many indexed paths | Detects containers running on default indexing with many indexed paths and recommends customizing the indexing policy.|
-| ORDER BY queries with high RU/s charge| Detects containers issuing ORDER BY queries with high RU/s charge and recommends exploring composite indexes.|
+| ORDER BY queries with high RU/s charge| Detects containers issuing ORDER BY queries with high RU/s charge and recommends exploring composite indexes for one container per account that issues the highest number of these queries in a 24-hour period.|
| MongoDB 3.6 accounts with no index and high RU/s consumption| Detects Azure Cosmos DBΓÇÖs API for MongoDB with 3.6 version of containers issuing queries with high RU/s charge and recommends adding indexes.| ## Cost optimization recommendations
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
ms.devlang: python Last updated 1/17/2023-+ # Quickstart: Azure Cosmos DB for NoSQL client library for Python
cosmos-db Concepts Burstable Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-burstable-compute.md
+
+ Title: Burstable compute - Azure Cosmos DB for PostgreSQL
+description: Definition and workloads of burstable compute.
+++++ Last updated : 01/18/2023++
+# Burstable compute
++
+Burstable compute on a single node cluster is ideal for workloads like
+development environments or small databases that don't need the full
+performance of the node's CPU continuously. These workloads typically have
+burstable performance requirements.
+
+The Azure burstable compute options allow you to configure a single node with
+baseline performance that can build up credits when it's using less than its
+baseline. When the node has accumulated credits, the node can burst above the
+baseline when your workload requires higher CPU performance. You can use the
+**CPU credits remaining** and **CPU credits consumed** metrics to track
+accumulated and used credits respectively.
+
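A purely illustrative worked example (the actual baseline for each burstable size is defined by the service and isn't stated here): if a node's baseline were 40% of a vCore, running at 10% utilization would accumulate credits corresponding to the unused 30% of the baseline, and those banked credits could later fund bursting above the baseline, up to 100% utilization, until they're spent.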
+> [!IMPORTANT]
+>
+> Scaling for burstable compute is available, but only in certain combinations:
+>
+> * You can change 1 vCore burstable to 2 vCore burstable, and the other way
+> around.
+> * You can convert a burstable (1 or 2 vCores) single node configuration to
+> either
+> 1. a single node cluster with regular (non-burstable) compute; or
+> 2. a multi-node cluster.
+>
+> You can't convert a regular (non-burstable) single node back to
+> burstable compute.
+
+**Next steps**
+
+* See the [limits and limitations](reference-limits.md#burstable-compute) of
+ burstable compute.
+* Review available [cluster metrics](concepts-monitoring.md#metrics).
cosmos-db Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-monitoring.md
Previously updated : 09/27/2022 Last updated : 01/18/2023 # Monitor and tune Azure Cosmos DB for PostgreSQL
These metrics are available for nodes:
||||| |active_connections|Active Connections|Count|The number of active connections to the server.| |apps_reserved_memory_percent|Reserved Memory Percent|Percent|Calculated from the ratio of Committed_AS/CommitLimit as shown in /proc/meminfo.|
+|cpu_credits_consumed|CPU credits consumed|Credits|Total number of credits consumed by the node. (Only available when burstable compute is provisioned on the node.)|
+|cpu_credits_remaining|CPU credits remaining|Credits|Total number of credits available to burst. (Only available when burstable compute is provisioned on the node.)|
|cpu_percent|CPU percent|Percent|The percentage of CPU in use.| |iops|IOPS|Count|See the [IOPS definition](../../virtual-machines/premium-storage-performance.md#iops) and [Azure Cosmos DB for PostgreSQL throughput](resources-compute.md)| |memory_percent|Memory percent|Percent|The percentage of memory in use.|
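A hedged sketch of reading the new credit metrics with the `@azure/monitor-query` client library; the resource ID below is a placeholder, and the metric names come from the table above:

```javascript
import { DefaultAzureCredential } from "@azure/identity";
import { MetricsQueryClient } from "@azure/monitor-query";

// Placeholder: the full ARM resource ID of your cluster.
const resourceId =
  "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/serverGroupsv2/<cluster>";

const client = new MetricsQueryClient(new DefaultAzureCredential());
const result = await client.queryResource(resourceId, [
  "cpu_credits_remaining",
  "cpu_credits_consumed",
]);

// Print the most recent data point for each credit metric.
for (const metric of result.metrics) {
  const series = metric.timeseries[0];
  const latest = series?.data?.[series.data.length - 1];
  console.log(metric.name, latest);
}
```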
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 01/11/2023 Last updated : 01/23/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that don't directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters.
+### January 2023
+
+* General availability: 1- and 2-vCore [burstable compute](concepts-burstable-compute.md) options for single-node clusters (for dev/test and smaller workloads).
+ * Available in all supported regions. See compute and storage details [here](resources-compute.md#single-node-cluster).
+ ### December 2022 * General availability: Azure Cosmos DB for PostgreSQL is now available in the Sweden Central and Switzerland West regions.
cosmos-db Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md
to keep nodes healthy:
* 300 for 0-3 vCores * 500 for 4-15 vCores * 1000 for 16+ vCores
+* Maximum connections per node with burstable compute
+ * 20 for 1 vCore burstable
+ * 40 for 2 vCores burstable
The connection limits above are for *user* connections (`max_connections` minus `superuser_reserved_connections`). We reserve extra connections for
Azure Cosmos DB for PostgreSQL. If you do need more vCores for a region in your
subscription, see how to [adjust compute quotas](howto-compute-quota.md).
+### Burstable compute
+
+In Azure Cosmos DB for PostgreSQL clusters with [burstable
+compute](concepts-burstable-compute.md) enabled, the following features are
+currently **not supported**:
+
+* Accelerated networking
+* Local caching
+* PostgreSQL and Citus version upgrades
+* PostgreSQL 11 support
+* Read replicas
+* High availability
+* The [azure_storage](howto-ingest-azure-blob-storage.md) extension
+ ## PostgreSQL ### Database creation
cosmos-db Resources Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-compute.md
Previously updated : 07/08/2022 Last updated : 01/27/2023 # Azure Cosmos DB for PostgreSQL compute and storage [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)]+
+Compute resources are provided as vCores, which represent the logical CPU of
+the underlying hardware. The storage size for provisioning refers to the
+capacity available to the coordinator and worker nodes in your cluster. The
+storage includes database files, temporary files, transaction logs, and the
+Postgres server logs.
+
+## Multi-node cluster
-You can select the compute and storage settings independently for
-worker nodes and the coordinator node in a cluster.
-Compute resources are provided as vCores, which represent
-the logical CPU of the underlying hardware. The storage size for
-provisioning refers to the capacity available to the coordinator
-and worker nodes in your cluster. The storage
-includes database files, temporary files, transaction logs, and
-the Postgres server logs.
+You can select the compute and storage settings independently for worker nodes
+and the coordinator node in a multi-node cluster.
| Resource | Worker node | Coordinator node | |--|--|--|
the Postgres server logs.
| Memory per vCore, GiB | 8 | 4 | | Storage size, TiB | 0.5, 1, 2 | 0.5, 1, 2 | | Storage type | General purpose (SSD) | General purpose (SSD) |
-| IOPS | Up to 3 IOPS/GiB | Up to 3 IOPS/GiB |
The total amount of RAM in a single node is based on the selected number of vCores.
available to each worker and coordinator node.
| Storage size, TiB | Maximum IOPS | |-|--|
-| 0.5 | 1,536 |
-| 1 | 3,072 |
-| 2 | 6,148 |
+| 0.5 | 2,300 |
+| 1 | 5,000 |
+| 2 | 7,500 |
For the entire cluster, the aggregated IOPS work out to the following values: | Worker nodes | 0.5 TiB, total IOPS | 1 TiB, total IOPS | 2 TiB, total IOPS | |--||-|-|
-| 2 | 3,072 | 6,144 | 12,296 |
-| 3 | 4,608 | 9,216 | 18,444 |
-| 4 | 6,144 | 12,288 | 24,592 |
-| 5 | 7,680 | 15,360 | 30,740 |
-| 6 | 9,216 | 18,432 | 36,888 |
-| 7 | 10,752 | 21,504 | 43,036 |
-| 8 | 12,288 | 24,576 | 49,184 |
-| 9 | 13,824 | 27,648 | 55,332 |
-| 10 | 15,360 | 30,720 | 61,480 |
-| 11 | 16,896 | 33,792 | 67,628 |
-| 12 | 18,432 | 36,864 | 73,776 |
-| 13 | 19,968 | 39,936 | 79,924 |
-| 14 | 21,504 | 43,008 | 86,072 |
-| 15 | 23,040 | 46,080 | 92,220 |
-| 16 | 24,576 | 49,152 | 98,368 |
-| 17 | 26,112 | 52,224 | 104,516 |
-| 18 | 27,648 | 55,296 | 110,664 |
-| 19 | 29,184 | 58,368 | 116,812 |
-| 20 | 30,720 | 61,440 | 122,960 |
-
-**Next steps**
+| 2 | 4,600 | 10,000 | 15,000 |
+| 3 | 6,900 | 15,000 | 22,500 |
+| 4 | 9,200 | 20,000 | 30,000 |
+| 5 | 11,500 | 25,000 | 37,500 |
+| 6 | 13,800 | 30,000 | 45,000 |
+| 7 | 16,100 | 35,000 | 52,500 |
+| 8 | 18,400 | 40,000 | 60,000 |
+| 9 | 20,700 | 45,000 | 67,500 |
+| 10 | 23,000 | 50,000 | 75,000 |
+| 11 | 25,300 | 55,000 | 82,500 |
+| 12 | 27,600 | 60,000 | 90,000 |
+| 13 | 29,900 | 65,000 | 97,500 |
+| 14 | 32,200 | 70,000 | 105,000 |
+| 15 | 34,500 | 75,000 | 112,500 |
+| 16 | 36,800 | 80,000 | 120,000 |
+| 17 | 39,100 | 85,000 | 127,500 |
+| 18 | 41,400 | 90,000 | 135,000 |
+| 19 | 43,700 | 95,000 | 142,500 |
+| 20 | 46,000 | 100,000 | 150,000 |
+
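The aggregated values follow directly from the per-node maximums above, multiplied by the number of worker nodes. For example, four worker nodes with 1 TiB of storage each yield 4 × 5,000 = 20,000 total IOPS.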
+## Single node cluster
+
+Single-node cluster resource options differ between [burstable
+compute](concepts-burstable-compute.md) and regular compute.
+
+**Burstable compute**
+
+| Resource | Resource value |
+|-|-|
+| Burstable compute, vCores | 1, 2 |
+| Burstable compute memory per vCore, GiB | 4 |
+| Storage size, GiB | 32, 64, 128 |
+| Storage IOPS | Up to 500 |
+| Storage type | General purpose (SSD) |
+
+**Regular compute**
+
+| Resource | Resource value |
+|-|-|
+| Compute, vCores | 2, 4, 8, 16, 32, 64 |
+| Compute memory per vCore, GiB | 4 |
+| Storage size, GiB (IOPS, up to) | 128 (500), 256 (1,100), 512 (2,300), 1024† (5,000), 2048† (7,500) |
+| Storage type | General purpose (SSD) |
+
+† 1024 GiB and 2048 GiB are supported with 8 vCores or greater.
+
+## Next steps
* Learn how to [create a cluster in the portal](quickstart-create-portal.md) * Change [compute quotas](howto-compute-quota.md) for a subscription and region
cost-management-billing Allocate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/allocate-costs.md
Title: Allocate Azure costs
description: This article explains how create cost allocation rules to distribute costs of subscriptions, resource groups, or tags to others. Previously updated : 04/08/2022 Last updated : 01/27/2023
The following items are currently unsupported by the cost allocation public prev
Cost allocation data exposed by the [Usage Details](/rest/api/consumption/usagedetails/list) API is supported by the 2021-10-01 version or later. However, cost allocation data results might be empty if you're using an unsupported API or if you don't have any cost allocation rules.
+If you have cost allocation rules enabled, the `UnitPrice` field in your usage details file will be 0. We recommend that you use price sheet data to get unit price information until it's available in the usage details file.
+ ## Next steps - Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions and answers about cost allocation.
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
Previously updated : 12/06/2022 Last updated : 01/27/2023
However, you can't exchange dissimilar reservations. For example, you can't exch
You can also exchange a reservation to purchase another reservation of a similar type in a different region. For example, you can exchange a reservation that's in West US 2 region for one that's in West Europe region. > [!NOTE]
-> Exchanges will be unavailable for Azure reserved instances for compute services purchased on or after **January 1, 2024**. Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. For a limited time you may [trade-in](../savings-plan/reservation-trade-in.md) your Azure reserved instances for compute for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you’ll need and want additional savings. Learn more about [Azure savings plan for compute](../savings-plan/index.yml) and how it works with reservations.
+> Exchanges will be unavailable for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations **purchased on or after January 1, 2024**. Microsoft launched Azure savings plan for compute and it's designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for these Azure reserved instances. Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations purchased **prior to January 1, 2024** will reserve the right to **exchange one more time** after the policy change goes into effect.
+>
+> For a limited time you may [trade-in](../savings-plan/reservation-trade-in.md) your Azure reserved instances for compute for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you'll need and want additional savings. Learn more about [Azure savings plan for compute](../savings-plan/index.yml) and how it works with reservations.
When you exchange a reservation, you can change your term from one-year to three-year.
data-lake-analytics Data Lake Analytics Account Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-account-policies.md
Title: Manage Azure Data Lake Analytics Account Policies description: Learn how to use account policies to control usage of a Data Lake Analytics account, such as maximum AUs and maximum jobs. -+ Previously updated : 04/30/2018 Last updated : 01/27/2023 # Manage Azure Data Lake Analytics using Account Policies
These policies apply to all jobs in a Data Lake Analytics account.
### Maximum number of AUs in a Data Lake Analytics account
-A policy controls the total number of Analytics Units (AUs) your Data Lake Analytics account can use. By default, the value is set to 250. For example, if this value is set to 250 AUs, you can have one job running with 250 AUs assigned to it, or 10 jobs running with 25 AUs each. Additional jobs that are submitted are queued until the running jobs are finished. When running jobs are finished, AUs are freed up for the queued jobs to run.
+A policy controls the total number of Analytics Units (AUs) your Data Lake Analytics account can use. By default, the value is set to 250. For example, if this value is set to 250 AUs, you can have one job running with 250 AUs assigned to it, or 10 jobs running with 25 AUs each. Other jobs that are submitted are queued until the running jobs are finished. When running jobs are finished, AUs are freed up for the queued jobs to run.
To change the number of AUs for your Data Lake Analytics account: 1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **Limits and policies**.
+2. Select **Limits and policies**.
3. Under **Maximum AUs**, move the slider to select a value, or enter the value in the text box.
-4. Click **Save**.
+4. Select **Save**.
> [!NOTE] > If you need more than the default (250) AUs, in the portal, click **Help+Support** to submit a support request. The number of AUs available in your Data Lake Analytics account can be increased.
This policy limits how many jobs can run simultaneously. By default, this value
To change the number of jobs that can run simultaneously: 1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **Limits and policies**.
+2. Select **Limits and policies**.
3. Under **Maximum Number of Running Jobs**, move the slider to select a value, or enter the value in the text box.
-4. Click **Save**.
+4. Select **Save**.
> [!NOTE] > If you need to run more than the default (20) number of jobs, in the portal, click **Help+Support** to submit a support request. The number of jobs that can run simultaneously in your Data Lake Analytics account can be increased. ### How long to keep job metadata and resources
-When your users run U-SQL jobs, the Data Lake Analytics service keeps all related files. These files include the U-SQL script, the DLL files referenced in the U-SQL script, compiled resources, and statistics. The files are in the /system/ folder of the default Azure Data Lake Storage account. This policy controls how long these resources are stored before they are automatically deleted (the default is 30 days). You can use these files for debugging, and for performance-tuning of jobs that you'll rerun in the future.
+When your users run U-SQL jobs, the Data Lake Analytics service keeps all related files. These files include the U-SQL script, the DLL files referenced in the U-SQL script, compiled resources, and statistics. The files are in the /system/ folder of the default Azure Data Lake Storage account. This policy controls how long these resources are stored before they're automatically deleted (the default is 30 days). You can use these files for debugging, and for performance-tuning of jobs that you'll rerun in the future.
To change how long to keep job metadata and resources: 1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **Limits and policies**.
+2. Select **Limits and policies**.
3. Under **Days to Retain Job Queries**, move the slider to select a value, or enter the value in the text box.
-4. Click **Save**.
+4. Select **Save**.
## Job-level policies
Data Lake Analytics has two policies that you can set at the job level:
- **Priority**: Users can only submit jobs that have a priority lower than or equal to this value. A higher number indicates a lower priority. By default, this limit is set to 1, which is the highest possible priority.
-There is a default policy set on every account. The default policy applies to all users of the account. You can create additional policies for specific users and groups.
+There's a default policy set on every account. The default policy applies to all users of the account. You can create more policies for specific users and groups.
> [!NOTE] > Account-level policies and job-level policies apply simultaneously.
There is a default policy set on every account. The default policy applies to al
1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **Limits and policies**.
+2. Select **Limits and policies**.
-3. Under **Job Submission Limits**, click the **Add Policy** button. Then, select or enter the following settings:
+3. Under **Job Submission Limits**, select the **Add Policy** button. Then, select or enter the following settings:
1. **Compute Policy Name**: Enter a policy name, to remind you of the purpose of the policy.
There is a default policy set on every account. The default policy applies to al
4. **Set the Priority Limit**: Set the priority limit that applies to the selected user or group.
-4. Click **Ok**.
+4. Select **Ok**.
5. The new policy is listed in the **Default** policy table, under **Job Submission Limits**.
There is a default policy set on every account. The default policy applies to al
1. In the Azure portal, go to your Data Lake Analytics account.
-2. Click **Limits and policies**.
+2. Select **Limits and policies**.
3. Under **Job Submission Limits**, find the policy you want to edit.
-4. To see the **Delete** and **Edit** options, in the rightmost column of the table, click `...`.## Additional resources for job policies
+4. To see the **Delete** and **Edit** options, in the rightmost column of the table, select `...`.
+
+## More resources for job policies
- [Policy overview blog post](/archive/blogs/azuredatalake/managing-your-azure-data-lake-analytics-compute-resources-overview) - [Account-level policies blog post](/archive/blogs/azuredatalake/managing-your-azure-data-lake-analytics-compute-resources-account-level-policy)
data-lake-analytics Data Lake Analytics Cicd Manage Assemblies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-manage-assemblies.md
Title: Manage U-SQL assemblies in a CI/CD pipeline - Azure Data Lake
description: 'Learn the best practices for managing U-SQL C# assemblies in a CI/CD pipeline with Azure DevOps.' Previously updated : 10/30/2018 Last updated : 01/27/2023 # Best practices for managing U-SQL assemblies in a CI/CD pipeline
You can deploy a U-SQL database by using a U-SQL database project or a `.usqldbp
### Deploy a U-SQL database in Azure DevOps
-`PackageDeploymentTool.exe` provides the programming and command-line interfaces that help to deploy U-SQL databases. The SDK is included in the [U-SQL SDK Nuget package](https://www.nuget.org/packages/Microsoft.Azure.DataLake.USQL.SDK/), located at `build/runtime/PackageDeploymentTool.exe`.
+`PackageDeploymentTool.exe` provides the programming and command-line interfaces that help to deploy U-SQL databases. The SDK is included in the [U-SQL SDK NuGet package](https://www.nuget.org/packages/Microsoft.Azure.DataLake.USQL.SDK/), located at `build/runtime/PackageDeploymentTool.exe`.
In Azure DevOps, you can use a command-line task and this SDK to set up an automation pipeline for the U-SQL database refresh. [Learn more about the SDK and how to set up a CI/CD pipeline for U-SQL database deployment](data-lake-analytics-cicd-overview.md#deploy-u-sql-database-through-azure-pipelines).
data-lake-analytics Data Lake Analytics Data Lake Tools Debug Recurring Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-debug-recurring-job.md
Title: Debug recurring jobs in Azure Data Lake Analytics description: Learn how to use Azure Data Lake Tools for Visual Studio to debug an abnormal recurring job.-+ Previously updated : 05/20/2018 Last updated : 01/27/2023 # Troubleshoot an abnormal recurring job
You can find all submitted recurring jobs through the job list at the bottom of
![Shortcut menu for comparing jobs](./media/data-lake-analytics-data-lake-tools-debug-recurring-job/compare-job.png)
-Pay attention to the big differences between these two jobs. Those differences are probably causing the performance problems. To check further, use the steps in the following diagram:
+Pay attention to the differences between these two jobs. Those differences are probably causing the performance problems. To check further, use the steps in the following diagram:
![Process diagram for checking differences between jobs](./media/data-lake-analytics-data-lake-tools-debug-recurring-job/recurring-job-diff-debugging-flow.png)
data-lake-analytics Data Lake Analytics Data Lake Tools Export Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-export-database.md
Title: Export U-SQL database- Azure Data Lake Tools for Visual Studio description: Learn how to use Azure Data Lake Tools for Visual Studio to export a U-SQL database and automatically import it to a local account.-+ Previously updated : 11/27/2017 Last updated : 01/27/2023 # Export a U-SQL database
The export action is completed by running a U-SQL job. Therefore, exporting from
### Step 3: Check the objects list and other configurations
-In this step, you can verify the selected objects in the **Export object list** box. If there are any errors, select **Previous** to go back and correctly configure the objects that you want to export.
+In this step, you can verify the selected objects in the **Export object list** box. If there are any errors, select **Previous** to go back, and correctly configure the objects that you want to export.
You can also configure other settings for the export target. Configuration descriptions are listed in the following table:
data-lake-analytics Data Lake Analytics Data Lake Tools Local Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-local-debug.md
Title: Debug Azure Data Lake Analytics code locally description: Learn how to use Azure Data Lake Tools for Visual Studio to debug U-SQL jobs on your local workstation.-+ Previously updated : 07/03/2018 Last updated : 01/27/2023 # Debug Azure Data Lake Analytics code locally
data-lake-analytics Data Lake Analytics Data Lake Tools Use Vertex Execution View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-use-vertex-execution-view.md
Title: Vertex Execution View in Data Lake Tools for Visual Studio description: This article describes how to use the Vertex Execution View to exam Data Lake Analytics jobs. -- Previously updated : 10/13/2016 Last updated : 01/27/2023 # Use the Vertex Execution View in Data Lake Tools for Visual Studio+ Learn how to use the Vertex Execution View to exam Data Lake Analytics jobs. [!INCLUDE [retirement-flag](includes/retirement-flag.md)] ## Open the Vertex Execution View
-Open a U-SQL job in Data Lake Tools for Visual Studio. Click **Vertex Execution View** in the bottom left corner. You may be prompted to load profiles first and it can take some time depending on your network connectivity.
+
+Open a U-SQL job in Data Lake Tools for Visual Studio. Select **Vertex Execution View** in the bottom left corner. You may be prompted to load profiles first, which can take some time depending on your network connectivity.
![Screenshot that shows the Data Lake Analytics Tools Vertex Execution View](./media/data-lake-analytics-data-lake-tools-use-vertex-execution-view/data-lake-tools-open-vertex-execution-view.png) ## Understand Vertex Execution View+ The Vertex Execution View has three parts: ![Screenshot that shows the Vertex Execution View with the "Vertex selector" and center-top and center-bottom panes highlighted.](./media/data-lake-analytics-data-lake-tools-use-vertex-execution-view/data-lake-tools-vertex-execution-view.png)
-The **Vertex selector** on the left lets you select vertices by features (such as top 10 data read, or choose by stage). One of the most commonly-used filters is to see the **vertices on critical path**. The **Critical path** is the longest chain of vertices of a U-SQL job. Understanding the critical path is useful for optimizing your jobs by checking which vertex takes the longest time.
+The **Vertex selector** on the left lets you select vertices by features (such as top 10 by data read, or by stage). One of the most commonly used filters is to see the **vertices on critical path**. The **Critical path** is the longest chain of vertices of a U-SQL job. Understanding the critical path is useful for optimizing your jobs by checking which vertex takes the longest time.
![Screenshot that shows the Vertex Execution View top-center pane that displays the "running status of all the vertices".](./media/data-lake-analytics-data-lake-tools-use-vertex-execution-view/data-lake-tools-vertex-execution-view-pane2.png)
The top center pane shows the **running status of all the vertices**.
![Screenshot that shows the Vertex Execution View bottom-center pane that displays information about each vertex.](./media/data-lake-analytics-data-lake-tools-use-vertex-execution-view/data-lake-tools-vertex-execution-view-pane3.png) The bottom center pane shows information about each vertex:
-* Process Name: The name of the vertex instance. It is composed of different parts in StageName|VertexName|VertexRunInstance. For example, the SV7_Split[62].v1 vertex stands for the second running instance (.v1, index starting from 0) of Vertex number 62 in Stage SV7_Split.
+
+* Process Name: The name of the vertex instance. It's composed of different parts in StageName|VertexName|VertexRunInstance. For example, the SV7_Split[62].v1 vertex stands for the second running instance (.v1, index starting from 0) of Vertex number 62 in Stage SV7_Split.
* Total Data Read/Written: The data was read/written by this vertex. * State/Exit Status: The final status when the vertex is ended. * Exit Code/Failure Type: The error when the vertex failed.
The bottom center pane shows information about each vertex:
* Process Create Start Time/Process Queued Time/Process Start Time/Process Complete Time: when the vertex process starts creation; when the vertex process starts to queue; when the specific vertex process starts; when the specific vertex is completed. ## Next steps+ * To log diagnostics information, see [Accessing diagnostics logs for Azure Data Lake Analytics](data-lake-analytics-diagnostic-logs.md) * To see a more complex query, see [Analyze Website logs using Azure Data Lake Analytics](data-lake-analytics-analyze-weblogs.md). * To view job details, see [Use Job Browser and Job View for Azure Data lake Analytics jobs](data-lake-analytics-data-lake-tools-view-jobs.md)
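To make the Process Name format concrete, here's a small illustrative sketch (not part of the Data Lake tooling; the helper and its regular expression are hypothetical) that splits a vertex instance name such as `SV7_Split[62].v1` into its stage, vertex number, and run-instance parts:

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical helper: interprets "StageName[VertexNumber].vRunInstance".
class Program
{
    static void Main()
    {
        var match = Regex.Match("SV7_Split[62].v1", @"^(?<stage>.+)\[(?<vertex>\d+)\]\.v(?<run>\d+)$");
        // Stage: SV7_Split, Vertex: 62, Run instance: 1 (the second attempt, since indexing starts at 0).
        Console.WriteLine("Stage: {0}", match.Groups["stage"].Value);
        Console.WriteLine("Vertex: {0}", match.Groups["vertex"].Value);
        Console.WriteLine("Run instance: {0}", match.Groups["run"].Value);
    }
}
```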
data-lake-analytics Data Lake Analytics Manage Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-cli.md
Title: Manage Azure Data Lake Analytics using Azure CLI
description: This article describes how to use the Azure CLI to manage Data Lake Analytics jobs, data sources, & users. Previously updated : 01/29/2018 Last updated : 01/27/2023 # Manage Azure Data Lake Analytics using the Azure CLI
[!INCLUDE [retirement-flag](includes/retirement-flag.md)]
-Learn how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs using the Azure CLI. To see management topics using other tools, click the tab select above.
+Learn how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs using the Azure CLI. To see management topics using other tools, select the tab above.
## Prerequisites
Before you begin this tutorial, you must have the following resources:
## Manage accounts
-Before running any Data Lake Analytics jobs, you must have a Data Lake Analytics account. Unlike Azure HDInsight, you don't pay for an Analytics account when it is not running a job. You only pay for the time when it is running a job. For more information, see [Azure Data Lake Analytics Overview](data-lake-analytics-overview.md).
+Before running any Data Lake Analytics jobs, you must have a Data Lake Analytics account. Unlike Azure HDInsight, you don't pay for an Analytics account when it isn't running a job. You only pay for the time when it's running a job. For more information, see [Azure Data Lake Analytics Overview](data-lake-analytics-overview.md).
### Create accounts
Data Lake Analytics currently supports the following two data sources:
- [Azure Storage](../storage/common/storage-introduction.md) When you create an Analytics account, you must designate an Azure Data Lake Storage account to be the default
-storage account. The default Data Lake storage account is used to store job metadata and job audit logs. After you have created an Analytics account, you can add additional Data Lake Storage accounts and/or Azure Storage account.
+storage account. The default Data Lake storage account is used to store job metadata and job audit logs. After you've created an Analytics account, you can add other Data Lake Storage accounts and/or Azure Storage accounts.
### Find the default Data Lake Store account
You can view the default Data Lake Store account used by running the `az dla acc
az dla account show --account "<Data Lake Analytics account name>" ```
-### Add additional Blob storage accounts
+### Add other Blob storage accounts
```azurecli az dla account blob-storage add --access-key "<Azure Storage Account Key>" --account "<Data Lake Analytics account name>" --storage-account-name "<Storage account name>"
You can view the default Data Lake Store account used by running the `az dla acc
> Only Blob storage short names are supported. Don't use FQDN, for example "myblob.blob.core.windows.net". >
-### Add additional Data Lake Store accounts
+### Add other Data Lake Store accounts
-The following command updates the specified Data Lake Analytics account with an additional Data Lake Store account:
+The following command updates the specified Data Lake Analytics account with another Data Lake Store account:
```azurecli az dla account data-lake-store add --account "<Data Lake Analytics account name>" --data-lake-store-account-name "<Data Lake Store account name>"
data-lake-analytics Data Lake Analytics Manage Use Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-java-sdk.md
Title: Manage Azure Data Lake Analytics using Azure Java SDK description: This article describes how to use the Azure Java SDK to write apps that manage Data Lake Analytics jobs, data sources, & users. -+ Previously updated : 08/20/2019 Last updated : 01/27/2023 # Manage Azure Data Lake Analytics using a Java app+ [!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)] [!INCLUDE [retirement-flag](includes/retirement-flag.md)]
This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs using an app written using the Azure Java SDK. ## Prerequisites+ * **Java Development Kit (JDK) 8** (using Java version 1.8). * **IntelliJ** or another suitable Java development environment. The instructions in this document use IntelliJ.
-* Create an Azure Active Directory (AAD) application and retrieve its **Client ID**, **Tenant ID**, and **Key**. For more information about AAD applications and instructions on how to get a client ID, see [Create Active Directory application and service principal using portal](../active-directory/develop/howto-create-service-principal-portal.md). The Reply URI and Key is available from the portal once you have the application created and key generated.
+* Create an Azure Active Directory (Azure AD) application and retrieve its **Client ID**, **Tenant ID**, and **Key**. For more information about Azure AD applications and instructions on how to get a client ID, see [Create Active Directory application and service principal using portal](../active-directory/develop/howto-create-service-principal-portal.md). The Reply URI and Key are available from the portal once you've created the application and generated the key.
## Authenticating using Azure Active Directory The following snippet provides code for **non-interactive** authentication, where the application provides its own credentials. ## Create a Java application+ 1. Open IntelliJ and create a Java project using the **Command-Line App** template.
-2. Right-click on the project on the left-hand side of your screen and click **Add Framework Support**. Choose **Maven** and click **OK**.
+2. Right-click on the project on the left-hand side of your screen and select **Add Framework Support**. Choose **Maven** and select **OK**.
3. Open the newly created **"pom.xml"** file and add the following snippet of text between the **\</version>** tag and the **\</project>** tag: ```xml
data-lake-analytics Data Lake Analytics Manage Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-nodejs.md
Title: Manage Azure Data Lake Analytics using Azure SDK for Node.js description: This article describes how to use the Azure SDK for Node.js to manage Data Lake Analytics accounts, data sources, jobs & users. -+ Previously updated : 06/28/2022 Last updated : 01/27/2023 # Manage Azure Data Lake Analytics using Azure SDK for Node.js+ [!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)] [!INCLUDE [retirement-flag](includes/retirement-flag.md)]
This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs using an app written using the Azure SDK for Node.js. The following versions are supported:+ * **Node.js version: 0.10.0 or higher** * **REST API version for Account: 2015-10-01-preview** ## Features+ * Account management: create, get, list, update, and delete. ## How to install+ ```bash npm install @azure/arm-datalake-analytics ``` ## Authenticate using Azure Active Directory+ ```javascript const { DefaultAzureCredential } = require("@azure/identity"); //service principal authentication
npm install @azure/arm-datalake-analytics
``` ## Create the Data Lake Analytics client+ ```javascript const { DataLakeAnalyticsAccountManagementClient } = require("@azure/arm-datalake-analytics"); var accountClient = new DataLakeAnalyticsAccountManagementClient(credentials, 'your-subscription-id'); ``` ## Create a Data Lake Analytics account+ ```javascript var util = require('util'); var resourceGroupName = 'testrg';
client.accounts.beginCreateAndWait(resourceGroupName, accountName, accountToCrea
}) ``` - ## See also+ * [Microsoft Azure SDK for Node.js](https://github.com/Azure/azure-sdk-for-js)
data-lake-analytics Data Lake Analytics Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-secure.md
Title: Secure Azure Data Lake Analytics for multiple users description: Learn how to configure multiple users to run jobs in Azure Data Lake Analytics. -+ Previously updated : 05/30/2018 Last updated : 01/27/2023
-# Configure user access to job information to job information in Azure Data Lake Analytics
+# Configure user access to job information in Azure Data Lake Analytics
[!INCLUDE [retirement-flag](includes/retirement-flag.md)]
In Azure Data Lake Analytics, you can use multiple user accounts or service prin
In order for those same users to see the detailed job information, they need to be able to read the contents of the job folders. The job folders are located in the `/system/` directory.
-If the necessary permissions are not configured, the user may see an error: `Graph data not available - You don't have permissions to access the graph data.`
+If the necessary permissions aren't configured, the user may see an error: `Graph data not available - You don't have permissions to access the graph data.`
## Configure user access to job information
data-lake-analytics Data Lake Analytics U Sql Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-catalog.md
Title: Use the U-SQL catalog in Azure Data Lake Analytics description: Learn how to use the U-SQL catalog to share code and data. Create table-valued functions, create views, create tables, and query them. -+ Previously updated : 05/09/2017 Last updated : 01/27/2023 # Get started with the U-SQL Catalog in Azure Data Lake Analytics
data-lake-analytics Data Lake Analytics U Sql Cognitive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-cognitive.md
Title: U-SQL Cognitive capabilities in Azure Data Lake Analytics
-description: Learn how to use the intelligence of Cognitive capabilities in U-SQL. This code samples help you get started.
-
+description: Learn how to use the intelligence of Cognitive capabilities in U-SQL. These code samples help you get started.
+ Previously updated : 06/05/2018 Last updated : 01/27/2023 # Get started with the Cognitive capabilities of U-SQL ## Overview+ Cognitive capabilities for U-SQL enable developers to use put intelligence in their big data programs. The following samples using cognitive capabilities are available:+ * Imaging: [Detect faces](https://github.com/Azure-Samples/usql-cognitive-imaging-ocr-hello-world) * Imaging: [Detect emotion](https://github.com/Azure-Samples/usql-cognitive-imaging-emotion-detection-hello-world) * Imaging: [Detect objects (tagging)](https://github.com/Azure-Samples/usql-cognitive-imaging-object-tagging-hello-world)
The following samples using cognitive capabilities are available:
* Text: [Key Phrase Extraction & Sentiment Analysis](https://github.com/Azure-Samples/usql-cognitive-text-hello-world) ## Registering Cognitive Extensions in U-SQL+ Before you begin, follow the steps in this article to register Cognitive Extensions in U-SQL: [Registering Cognitive Extensions in U-SQL](/u-sql/objects-and-extensions/cognitive-capabilities-in#registeringExtensions). ## Next steps+ * [U-SQL/Cognitive Samples](https://github.com/Azure-Samples?utf8=✓&q=usql%20cognitive) * [Develop U-SQL scripts using Data Lake Tools for Visual Studio](data-lake-analytics-data-lake-tools-get-started.md) * [Using U-SQL window functions for Azure Data Lake Analytics jobs](./data-lake-analytics-u-sql-get-started.md)
data-lake-analytics Data Lake Analytics U Sql Develop User Defined Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-develop-user-defined-operators.md
Title: Develop U-SQL user-defined operators - Azure Data Lake Analytics description: Learn how to develop user-defined operators to be used and reused in Azure Data Lake Analytics jobs. -+ Previously updated : 12/05/2016 Last updated : 01/27/2023 # Develop U-SQL user-defined operators (UDOs)+ This article describes how to develop user-defined operators to process data in a U-SQL job. ## Define and use a user-defined operator in U-SQL ### To create and submit a U-SQL job
-1. From the Visual Studio select **File > New > Project > U-SQL Project**.
-2. Click **OK**. Visual Studio creates a solution with a Script.usql file.
+1. From the Visual Studio menu, select **File > New > Project > U-SQL Project**.
+2. Select **OK**. Visual Studio creates a solution with a Script.usql file.
3. From **Solution Explorer**, expand Script.usql, and then double-click **Script.usql.cs**. 4. Paste the following code into the file:
This article describes how to develop user-defined operators to process data in
``` 6. Specify the Data Lake Analytics account, Database, and Schema.
-7. From **Solution Explorer**, right-click **Script.usql**, and then click **Build Script**.
-8. From **Solution Explorer**, right-click **Script.usql**, and then click **Submit Script**.
-9. If you haven't connected to your Azure subscription, you will be prompted to enter your Azure account credentials.
-10. Click **Submit**. Submission results and job link are available in the Results window when the submission is completed.
-11. Click the **Refresh** button to see the latest job status and refresh the screen.
+7. From **Solution Explorer**, right-click **Script.usql**, and then select **Build Script**.
+8. From **Solution Explorer**, right-click **Script.usql**, and then select **Submit Script**.
+9. If you haven't connected to your Azure subscription, you'll be prompted to enter your Azure account credentials.
+10. Select **Submit**. Submission results and job link are available in the Results window when the submission is completed.
+11. Select the **Refresh** button to see the latest job status and refresh the screen.
### To see the output
-1. From **Server Explorer**, expand **Azure**, expand **Data Lake Analytics**, expand your Data Lake Analytics account, expand **Storage Accounts**, right-click the Default Storage, and then click **Explorer**.
+1. From **Server Explorer**, expand **Azure**, expand **Data Lake Analytics**, expand your Data Lake Analytics account, expand **Storage Accounts**, right-click the Default Storage, and then select **Explorer**.
2. Expand Samples, expand Outputs, and then double-click **Drivers.csv**.
data-lake-analytics Data Lake Analytics U Sql Develop With Python R Csharp In Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-develop-with-python-r-csharp-in-vscode.md
Title: Run U-SQL jobs in Python, R, and C# - Azure Data Lake Analytics description: Learn how to use code behind with Python, R and C# to submit job in Azure Data Lake. -+ Previously updated : 11/22/2017 Last updated : 01/27/2023 # Develop U-SQL with Python, R, and C# for Azure Data Lake Analytics in Visual Studio Code
data-lake-analytics Data Lake Analytics U Sql Programmability Guide UDO https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-UDO.md
Title: U-SQL UDO programmability guide for Azure Data Lake
-description: Learn about the U-SQL UDO programmability Azure Data Lake Analytics to enable you create good USQL script.
+description: Learn about U-SQL UDO programmability in Azure Data Lake Analytics to enable you to create good U-SQL scripts.
-+ Previously updated : 06/30/2017 Last updated : 01/27/2023 # U-SQL user-defined objects overview ## U-SQL: user-defined objects: UDO+ U-SQL enables you to define custom programmability objects, which are called user-defined objects, or UDOs. The following is a list of UDOs in U-SQL:
UDO is typically called explicitly in U-SQL script as part of the following U-SQ
> UDOs are limited to consuming 0.5 GB of memory. This memory limitation does not apply to local executions. ## Next steps+ * [U-SQL programmability guide - overview](data-lake-analytics-u-sql-programmability-guide.md) * [U-SQL programmability guide - UDT and UDAGG](data-lake-analytics-u-sql-programmability-guide-UDT-AGG.md)
data-lake-analytics Data Lake Analytics U Sql Programmability Guide UDT AGG https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-UDT-AGG.md
Title: U-SQL UDT and UDAGG programmability guide for Azure Data Lake
-description: Learn about the U-SQL UDT and UDAGG programmability in Azure Data Lake Analytics to enable you create good USQL script.
+description: Learn about the U-SQL UDT and UDAGG programmability in Azure Data Lake Analytics to enable you to create good U-SQL scripts.
-+ Previously updated : 06/30/2017 Last updated : 01/27/2023 # U-SQL programmability guide - UDT and UDAGG ## Use user-defined types: UDT+ User-defined types, or UDTs, are another programmability feature of U-SQL. A U-SQL UDT acts like a regular C# user-defined type. C# is a strongly typed language that allows the use of built-in and custom user-defined types.
-U-SQL cannot implicitly serialize or de-serialize arbitrary UDTs when the UDT is passed between vertices in rowsets. This means that the user has to provide an explicit formatter by using the IFormatter interface. This provides U-SQL with the serialize and de-serialize methods for the UDT.
+U-SQL can't implicitly serialize or de-serialize arbitrary UDTs when the UDT is passed between vertices in rowsets. This means that the user has to provide an explicit formatter by using the IFormatter interface. This provides U-SQL with the serialize and de-serialize methods for the UDT.
> [!NOTE] > U-SQL's built-in extractors and outputters currently cannot serialize or de-serialize UDT data to or from files even with the IFormatter set. So when you're writing UDT data to a file with the OUTPUT statement, or reading it with an extractor, you have to pass it as a string or byte array. Then you call the serialization and deserialization code (that is, the UDT's ToString() method) explicitly. User-defined extractors and outputters, on the other hand, can read and write UDTs.
USQL-Programmability\Types.usql 52 1 USQL-Programmability
To work with UDT in an outputter, we either have to serialize it to a string with the ToString() method or create a custom outputter.
-UDTs currently cannot be used in GROUP BY. If UDT is used in GROUP BY, the following error is thrown:
+UDTs currently can't be used in GROUP BY. If UDT is used in GROUP BY, the following error is thrown:
```output Error 1 E_CSC_USER_INVALIDTYPEINCLAUSE: GROUP BY doesn't support type MyNameSpace.Myfunction_Returning_UDT
C:\Users\sergeypu\Documents\Visual Studio 2013\Projects\USQL-Programmability\USQ
62 5 USQL-Programmability ```
-To define a UDT, we have to:
+To define a UDT, we must:
1. Add the following namespaces:
using System.IO;
3. Define a user-defined type with the SqlUserDefinedType attribute.
-**SqlUserDefinedType** is used to mark a type definition in an assembly as a user-defined type (UDT) in U-SQL. The properties on the attribute reflect the physical characteristics of the UDT. This class cannot be inherited.
+**SqlUserDefinedType** is used to mark a type definition in an assembly as a user-defined type (UDT) in U-SQL. The properties on the attribute reflect the physical characteristics of the UDT. This class can't be inherited.
SqlUserDefinedType is a required attribute for UDT definition.
The `IFormatter` interface serializes and de-serializes an object graph with the
`IColumnWriter` writer / `IColumnReader` reader: The underlying column stream. `ISerializationContext` context: Enum that defines a set of flags that specifies the source or destination context for the stream during serialization.
-* **Intermediate**: Specifies that the source or destination context is not a persisted store.
+* **Intermediate**: Specifies that the source or destination context isn't a persisted store.
* **Persistence**: Specifies that the source or destination context is a persisted store.
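Putting those pieces together, here's a condensed sketch of a UDT with its formatter, modeled on the FiscalPeriod example this section goes on to develop (the short Quarter/Month fields are inferred from the `ReadInt16` calls shown below; treat the exact shape as illustrative, not as the article's full sample):

```csharp
using System.IO;
using Microsoft.Analytics.Interfaces;

[SqlUserDefinedType(typeof(FiscalPeriodFormatter))]
public struct FiscalPeriod
{
    public short Quarter { get; private set; }
    public short Month { get; private set; }

    public FiscalPeriod(short quarter, short month) : this()
    {
        Quarter = quarter;
        Month = month;
    }

    // Renders the Qn:Pn format (for example, Q1:P10) used earlier in this guide.
    public override string ToString()
    {
        return string.Format("Q{0}:P{1}", Quarter, Month);
    }
}

public class FiscalPeriodFormatter : IFormatter<FiscalPeriod>
{
    public void Serialize(FiscalPeriod instance, IColumnWriter writer, ISerializationContext context)
    {
        // Write the component values to the underlying column stream.
        using (var binaryWriter = new BinaryWriter(writer.BaseStream))
        {
            binaryWriter.Write(instance.Quarter);
            binaryWriter.Write(instance.Month);
            binaryWriter.Flush();
        }
    }

    public FiscalPeriod Deserialize(IColumnReader reader, ISerializationContext context)
    {
        // Read the values back in the same order they were written.
        using (var binaryReader = new BinaryReader(reader.BaseStream))
        {
            return new FiscalPeriod(binaryReader.ReadInt16(), binaryReader.ReadInt16());
        }
    }
}
```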
-As a regular C# type, a U-SQL UDT definition can include overrides for operators such as +/==/!=. It can also include static methods. For example, if we are going to use this UDT as a parameter to a U-SQL MIN aggregate function, we have to define < operator override.
+As a regular C# type, a U-SQL UDT definition can include overrides for operators such as +/==/!=. It can also include static methods. For example, if we're going to use this UDT as a parameter to a U-SQL MIN aggregate function, we have to define < operator override.
Earlier in this guide, we demonstrated an example for fiscal period identification from the specific date in the format `Qn:Pn (Q1:P10)`. The following example shows how to define a custom type for fiscal period values.
var result = new FiscalPeriod(binaryReader.ReadInt16(), binaryReader.ReadInt16()
The defined type includes two numbers: quarter and month. Operators `==/!=/>/<` and static method `ToString()` are defined here.
-As mentioned earlier, UDT can be used in SELECT expressions, but cannot be used in OUTPUTTER/EXTRACTOR without custom serialization. It either has to be serialized as a string with `ToString()` or used with a custom OUTPUTTER/EXTRACTOR.
+As mentioned earlier, UDT can be used in SELECT expressions, but can't be used in OUTPUTTER/EXTRACTOR without custom serialization. It either has to be serialized as a string with `ToString()` or used with a custom OUTPUTTER/EXTRACTOR.
Now let's discuss usage of UDT. In a code-behind section, we changed our GetFiscalPeriod function to the following:
var result = new FiscalPeriod(binaryReader.ReadInt16(), binaryReader.ReadInt16()
``` ## Use user-defined aggregates: UDAGG
-User-defined aggregates are any aggregation-related functions that are not shipped out-of-the-box with U-SQL. The example can be an aggregate to perform custom math calculations, string concatenations, manipulations with strings, and so on.
+User-defined aggregates are any aggregation-related functions that aren't shipped out-of-the-box with U-SQL. The example can be an aggregate to perform custom math calculations, string concatenations, manipulations with strings, and so on.
The user-defined aggregate base class definition is as follows:
The user-defined aggregate base class definition is as follows:
} ```
-**SqlUserDefinedAggregate** indicates that the type should be registered as a user-defined aggregate. This class cannot be inherited.
+**SqlUserDefinedAggregate** indicates that the type should be registered as a user-defined aggregate. This class can't be inherited.
The SqlUserDefinedAggregate attribute is **optional** for UDAGG definition.
Then use the following syntax:
AGG<UDAGG_functionname>(param1,param2) ```
-Here is an example of UDAGG:
+Here's an example of UDAGG:
```csharp public class GuidAggregate : IAggregate<string, string, string>
data-lake-analytics Data Lake Analytics U Sql Programmability Guide Applier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-applier.md
Title: U-SQL user defined applier programmability guide for Azure Data Lake description: Learn about the U-SQL UDO programmability guide - user defined applier. -+ Previously updated : 06/30/2017 Last updated : 01/27/2023 # Use user-defined applier
public class ParserApplier : IApplier
* Apply is called for each row of the outer table. It returns the `IUpdatableRow` output rowset. * The Constructor class is used to pass parameters to the user-defined applier.
-**SqlUserDefinedApplier** indicates that the type should be registered as a user-defined applier. This class cannot be inherited.
+**SqlUserDefinedApplier** indicates that the type should be registered as a user-defined applier. This class can't be inherited.
**SqlUserDefinedApplier** is **optional** for a user-defined applier definition.
The output values must be set with `IUpdatableRow` output:
output.Set<int>("mycolumn", mycolumn) ```
-It is important to understand that custom appliers only output columns and values that are defined with `output.Set` method call.
+It's important to understand that custom appliers only output columns and values that are defined with `output.Set` method call.
The actual output is triggered by calling `yield return output.AsReadOnly();`.
The user-defined applier parameters can be passed to the constructor. Applier ca
new USQL_Programmability.ParserApplier ("all") AS properties(make string, model string, year string, type string, millage int); ```
-Here is the user-defined applier example:
+Here's the user-defined applier example:
```csharp [SqlUserDefinedApplier]
In this use case scenario, user-defined applier acts as a comma-delimited value
210 X5AB2CD45XY458893 Nissan,Altima,2011,4Dr,74000 ```
-It is a typical tab-delimited TSV file with a properties column that contains car properties such as make and model. Those properties must be parsed to the table columns. The applier that's provided also enables you to generate a dynamic number of properties in the result rowset, based on the parameter that's passed. You can generate either all properties or a specific set of properties only.
+It's a typical tab-delimited TSV file with a properties column that contains car properties such as make and model. Those properties must be parsed to the table columns. The applier that's provided also enables you to generate a dynamic number of properties in the result rowset, based on the parameter that's passed. You can generate either all properties or a specific set of properties only.
```text ...USQL_Programmability.ParserApplier ("all")
data-lake-analytics Data Lake Analytics U Sql Programmability Guide Combiner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-combiner.md
Title: U-SQL user defined combiner programmability guide for Azure Data Lake description: Learn about the U-SQL UDO programmability guide - user defined combiner. -+ Previously updated : 06/30/2017 Last updated : 01/27/2023 # Use user-defined combiner ## U-SQL UDO: user-defined combiner+ User-defined combiner, or UDC, enables you to combine rows from left and right rowsets, based on custom logic. User-defined combiner is used with COMBINE expression. ## How to define and use user-defined combiner
public override IEnumerable<IRow> Combine(IRowset left, IRowset right,
} ```
-The **SqlUserDefinedCombiner** attribute indicates that the type should be registered as a user-defined combiner. This class cannot be inherited.
+The **SqlUserDefinedCombiner** attribute indicates that the type should be registered as a user-defined combiner. This class can't be inherited.
-**SqlUserDefinedCombiner** is used to define the Combiner mode property. It is an optional attribute for a user-defined combiner definition.
+**SqlUserDefinedCombiner** is used to define the Combiner mode property. It's an optional attribute for a user-defined combiner definition.
CombinerMode Mode
var myRowset =
}).ToList(); ```
-After enumerating both rowsets, we are going to loop through all rows. For each row in the left rowset, we are going to find all rows that satisfy the condition of our combiner.
+After enumerating both rowsets, we're going to loop through all rows. For each row in the left rowset, we're going to find all rows that satisfy the condition of our combiner.
The output values must be set with `IUpdatableRow` output.
public override IEnumerable<IRow> Combine(IRowset left, IRowset right,
} ```
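To illustrate the enumerate-then-match pattern described above, here's a minimal sketch of a combiner (the column names and matching condition are hypothetical; the article's own example uses a retail schema):

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Analytics.Interfaces;

[SqlUserDefinedCombiner]
public class MatchCombiner : ICombiner
{
    public override IEnumerable<IRow> Combine(IRowset left, IRowset right, IUpdatableRow output)
    {
        // Materialize the left rowset once, keyed by the join column.
        var leftRows = (from row in left.Rows
                        select new
                        {
                            ProductId = row.Get<string>("ProductId"),
                            Quantity = row.Get<int>("Quantity")
                        }).ToList();

        // Stream the right rowset and emit one output row per match.
        foreach (var row in right.Rows)
        {
            var productId = row.Get<string>("ProductId");
            foreach (var match in leftRows.Where(l => l.ProductId == productId))
            {
                output.Set<string>("ProductId", productId);
                output.Set<int>("TotalQuantity", match.Quantity + row.Get<int>("Quantity"));
                yield return output.AsReadOnly();
            }
        }
    }
}
```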
-In this use-case scenario, we are building an analytics report for the retailer. The goal is to find all products that cost more than $20,000 and that sell through the website faster than through the regular retailer within a certain time frame.
+In this use-case scenario, we're building an analytics report for the retailer. The goal is to find all products that cost more than $20,000 and that sell through the website faster than through the regular retailer within a certain time frame.
-Here is the base U-SQL script. You can compare the logic between a regular JOIN and a combiner:
+Here's the base U-SQL script. You can compare the logic between a regular JOIN and a combiner:
```sql DECLARE @LocalURI string = @"\usql-programmability\";
data-lake-analytics Data Lake Analytics U Sql Programmability Guide Extractor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-extractor.md
Title: U-SQL user defined extractor programmability guide for Azure Data Lake description: Learn about the U-SQL UDO programmability guide - user defined extractor. -+ Previously updated : 06/30/2017 Last updated : 01/27/2023 # Use user-defined extractor ## U-SQL UDO: user-defined extractor+ U-SQL allows you to import external data by using an EXTRACT statement. An EXTRACT statement can use built-in UDO extractors: * *Extractors.Text()*: Provides extraction from delimited text files of different encodings.
It can be useful to develop a custom extractor. This can be helpful during data
* Parse data in unsupported encoding. ## How to define and use user-defined extractor+ To define a user-defined extractor, or UDE, we need to create an `IExtractor` interface. All input parameters to the extractor, such as column/row delimiters, and encoding, need to be defined in the constructor of the class. The `IExtractor` interface should also contain a definition for the `IEnumerable<IRow>` override as follows: ```csharp
public class SampleExtractor : IExtractor
} ```
-The **SqlUserDefinedExtractor** attribute indicates that the type should be registered as a user-defined extractor. This class cannot be inherited.
+The **SqlUserDefinedExtractor** attribute indicates that the type should be registered as a user-defined extractor. This class can't be inherited.
SqlUserDefinedExtractor is an optional attribute for UDE definition. It's used to define the AtomicFileProcessing property for the UDE object.
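Building on the skeleton above, here's a minimal sketch of a tab-delimited extractor (the class name, two-column schema, and column names are hypothetical; a real extractor must match the schema of the EXTRACT statement that invokes it):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.Analytics.Interfaces;

[SqlUserDefinedExtractor(AtomicFileProcessing = false)]
public class TsvExtractor : IExtractor
{
    private readonly Encoding _encoding = Encoding.UTF8;

    public override IEnumerable<IRow> Extract(IUnstructuredReader input, IUpdatableRow output)
    {
        using (var reader = new StreamReader(input.BaseStream, _encoding))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                var columns = line.Split('\t');
                // Populate the output row, then emit it.
                output.Set<string>("id", columns[0]);
                output.Set<string>("value", columns[1]);
                yield return output.AsReadOnly();
            }
        }
    }
}
```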
data-lake-analytics Data Lake Analytics U Sql Programmability Guide Outputter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-outputter.md
Title: U-SQL user defined outputter programmability guide for Azure Data Lake description: Learn about the U-SQL UDO programmability guide user defined outputter. -+ Previously updated : 06/30/2017 Last updated : 01/27/2023 # Use user-defined outputter ## U-SQL UDO: user-defined outputter+ User-defined outputter is another U-SQL UDO that allows you to extend built-in U-SQL functionality. Similar to the extractor, there are several built-in outputters. * *Outputters.Text()*: Writes data to delimited text files of different encodings.
Custom outputter allows you to write data in a custom defined format. This can b
* Modifying output data or adding custom attributes. ## How to define and use user-defined outputter+ To define user-defined outputter, we need to create the `IOutputter` interface. Following is the base `IOutputter` class implementation:
public class MyOutputter : IOutputter
* The Constructor class is used to pass parameters to the user-defined outputter. * `Close` is used to optionally override to release expensive state or determine when the last row was written.
-**SqlUserDefinedOutputter** attribute indicates that the type should be registered as a user-defined outputter. This class cannot be inherited.
+**SqlUserDefinedOutputter** attribute indicates that the type should be registered as a user-defined outputter. This class can't be inherited.
SqlUserDefinedOutputter is an optional attribute for a user-defined outputter definition. It's used to define the AtomicFileProcessing property.
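As an illustration, here's a minimal schema-driven outputter sketch that iterates the row's schema rather than hard-coding columns (the class name and delimiter are hypothetical; it follows the per-row StreamWriter-plus-flush pattern this section describes next):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.Analytics.Interfaces;

[SqlUserDefinedOutputter(AtomicFileProcessing = true)]
public class TsvOutputter : IOutputter
{
    private readonly Encoding _encoding = Encoding.UTF8;

    public override void Output(IRow row, IUnstructuredWriter output)
    {
        var values = new List<string>();
        // Walk the schema so the outputter works for any column set.
        for (int i = 0; i < row.Schema.Count; i++)
        {
            var value = row.Get<object>(row.Schema[i].Name);
            values.Add(value == null ? string.Empty : value.ToString());
        }

        using (var streamWriter = new StreamWriter(output.BaseStream, _encoding))
        {
            streamWriter.WriteLine(string.Join("\t", values));
            // Flush after each row, as recommended below.
            streamWriter.Flush();
        }
    }
}
```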
This approach enables you to build a flexible outputter for any metadata schema.
The output data is written to file by using `System.IO.StreamWriter`. The stream parameter is set to `output.BaseStream` as part of `IUnstructuredWriter output`.
-Note that it's important to flush the data buffer to the file after each row iteration. In addition, the `StreamWriter` object must be used with the Disposable attribute enabled (default) and with the **using** keyword:
+It's important to flush the data buffer to the file after each row iteration. In addition, the `StreamWriter` object must be used with the Disposable attribute enabled (default) and with the **using** keyword:
```csharp using (StreamWriter streamWriter = new StreamWriter(output.BaseStream, this._encoding))
USING USQL_Programmability.Factory.HTMLOutputter(isHeader: true);
``` ## Next steps+ * [U-SQL programmability guide - overview](data-lake-analytics-u-sql-programmability-guide.md) * [U-SQL programmability guide - UDT and UDAGG](data-lake-analytics-u-sql-programmability-guide-UDT-AGG.md)
data-lake-analytics Data Lake Analytics U Sql Programmability Guide Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-processor.md
Title: U-SQL user defined processor programmability guide for Azure Data Lake description: Learn about the U-SQL UDO programmability guide - user defined processor. -+ Previously updated : 06/30/2017 Last updated : 01/27/2023 # Use user-defined processor ## U-SQL UDO: user-defined processor+ User-defined processor, or UDP, is a type of U-SQL UDO that enables you to process the incoming rows by applying programmability features. UDP enables you to combine columns, modify values, and add new columns if necessary. Basically, it helps to process a rowset to produce required data elements. ## How to define and use user-defined processor+ To define a UDP, we need to create an `IProcessor` interface with the `SqlUserDefinedProcessor` attribute, which is optional for UDP. This interface should contain the definition for the `IRow` interface rowset override, as shown in the following example:
public override IRow Process(IRow input, IUpdatableRow output)
} ```
-**SqlUserDefinedProcessor** indicates that the type should be registered as a user-defined processor. This class cannot be inherited.
+**SqlUserDefinedProcessor** indicates that the type should be registered as a user-defined processor. This class can't be inherited.
The SqlUserDefinedProcessor attribute is **optional** for UDP definition.
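For illustration, here's a minimal processor sketch that fills in the override shown above (the column names are hypothetical; a real processor must match the columns of the rowset it processes):

```csharp
using Microsoft.Analytics.Interfaces;

[SqlUserDefinedProcessor]
public class FullNameProcessor : IProcessor
{
    public override IRow Process(IRow input, IUpdatableRow output)
    {
        // Combine two input columns into a single output column.
        string firstName = input.Get<string>("first_name");
        string lastName = input.Get<string>("last_name");
        output.Set<string>("full_name", firstName + " " + lastName);
        return output.AsReadOnly();
    }
}
```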
data-lake-analytics Data Lake Analytics U Sql Programmability Guide Reducer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-reducer.md
Title: U-SQL user defined reducer programmability guide for Azure Data Lake description: Learn about the U-SQL UDO programmability guide - user defined reducer. -+ Previously updated : 06/30/2017 Last updated : 01/27/2023 # Use user-defined reducer
U-SQL enables you to write custom rowset reducers in C# by using the user-define
User-defined reducer, or UDR, can be used to eliminate unnecessary rows during data extraction (import). It also can be used to manipulate and evaluate rows and columns. Based on programmability logic, it can also define which rows need to be extracted. ## How to define and use user-defined reducer+ To define a UDR class, we need to create an `IReducer` interface with an optional `SqlUserDefinedReducer` attribute. This class interface should contain a definition for the `IEnumerable` interface rowset override.
public class EmptyUserReducer : IReducer
} ```
-The **SqlUserDefinedReducer** attribute indicates that the type should be registered as a user-defined reducer. This class cannot be inherited.
+The **SqlUserDefinedReducer** attribute indicates that the type should be registered as a user-defined reducer. This class can't be inherited.
**SqlUserDefinedReducer** is an optional attribute for a user-defined reducer definition. It's used to define IsRecursive property. * bool IsRecursive
The parameter for the `Row.Get` method is a column that's passed as part of the
For output, use the `output.Set` method.
-It is important to understand that custom reducer only outputs values that are defined with the `output.Set` method call.
+It's important to understand that custom reducer only outputs values that are defined with the `output.Set` method call.
```csharp output.Set<string>("mycolumn", guid);
data-lake-analytics Data Lake Analytics U Sql R Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-r-extensions.md
Title: Extend U-SQL scripts with R in Azure Data Lake Analytics description: Learn how to run R code in U-SQL scripts using Azure Data Lake Analytics. Embed R code inline or reference from files. -+ Previously updated : 06/20/2017 Last updated : 01/27/2023 # Extend U-SQL scripts with R code in Azure Data Lake Analytics
The following example illustrates the basic steps for deploying R code:
* Use the `REFERENCE ASSEMBLY` statement to enable R extensions for the U-SQL Script. * Use the `REDUCE` operation to partition the input data on a key. * The R extensions for U-SQL include a built-in reducer (`Extension.R.Reducer`) that runs R code on each vertex assigned to the reducer.
-* Usage of dedicated named data frames called `inputFromUSQL` and `outputToUSQL` respectively to pass data between U-SQL and R. Input and output DataFrame identifier names are fixed (that is, users cannot change these predefined names of input and output DataFrame identifiers).
+* Usage of dedicated named data frames called `inputFromUSQL` and `outputToUSQL` respectively to pass data between U-SQL and R. Input and output DataFrame identifier names are fixed (that is, users can't change these predefined names of input and output DataFrame identifiers).
## Embedding R code in the U-SQL script
DECLARE @PartitionCount int = 10;
### Datatypes * String and numeric columns from U-SQL are converted as-is between R DataFrame and U-SQL [supported types: `double`, `string`, `bool`, `integer`, `byte`].
-* The `Factor` datatype is not supported in U-SQL.
+* The `Factor` datatype isn't supported in U-SQL.
* `byte[]` must be serialized as a base64-encoded `string`. * U-SQL strings can be converted to factors in R code, either when U-SQL creates the R input dataframe or by setting the reducer parameter `stringsAsFactors: true`. ### Schemas
-* U-SQL datasets cannot have duplicate column names.
+* U-SQL datasets can't have duplicate column names.
* U-SQL datasets column names must be strings. * Column names must be the same in U-SQL and R scripts.
-* Readonly column cannot be part of the output dataframe. Because readonly columns are automatically injected back in the U-SQL table if it is a part of output schema of UDO.
+* Readonly columns can't be part of the output dataframe, because readonly columns are automatically injected back into the U-SQL table if they're part of the output schema of the UDO.
### Functional limitations * The R Engine can't be instantiated twice in the same process.
-* Currently, U-SQL does not support Combiner UDOs for prediction using partitioned models generated using Reducer UDOs. Users can declare the partitioned models as resource and use them in their R Script (see sample code `ExtR_PredictUsingLMRawStringReducer.usql`)
+* Currently, U-SQL doesn't support Combiner UDOs for prediction using partitioned models generated using Reducer UDOs. Users can declare the partitioned models as resource and use them in their R Script (see sample code `ExtR_PredictUsingLMRawStringReducer.usql`)
### R Versions
XML
### Input and Output size limitations
-Every vertex has a limited amount of memory assigned to it. Because the input and output DataFrames must exist in memory in the R code, the total size for the input and output cannot exceed 500 MB.
+Every vertex has a limited amount of memory assigned to it. Because the input and output DataFrames must exist in memory in the R code, the total size for the input and output can't exceed 500 MB.
### Sample code
More sample code is available in your Data Lake Store account after you install
## Deploying Custom R modules with U-SQL
-First, create an R custom module and zip it and then upload the zipped R custom module file to your ADL store. In the example, we will upload magittr_1.5.zip to the root of the default ADLS account for the ADLA account we are using. Once you upload the module to ADL store, declare it as use DEPLOY RESOURCE to make it available in your U-SQL script and call `install.packages` to install it.
+First, create an R custom module, zip it, and then upload the zipped R custom module file to your ADL store. In the example, we'll upload magrittr_1.5.zip to the root of the default ADLS account for the ADLA account we're using. Once you upload the module to the ADL store, use DEPLOY RESOURCE to declare it and make it available in your U-SQL script, and call `install.packages` to install it.
```usql REFERENCE ASSEMBLY [ExtR];
data-lake-analytics Dotnet Upgrade Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/dotnet-upgrade-troubleshoot.md
Title: How to troubleshoot the Azure Data Lake Analytics U-SQL job failures because of .NET Framework 4.7.2 upgrade description: 'Troubleshoot U-SQL job failures because of the upgrade to .NET Framework 4.7.2.'-+ Previously updated : 10/11/2019 Last updated : 01/27/2023 # Azure Data Lake Analytics is upgrading to the .NET Framework v4.7.2
Check for the potential of backwards-compatibility breaking issues by running th
1. Perform a runtime check. The runtime deployment isn't done side-by-side in ADLA. You can perform a runtime check before the upgrade, using Visual Studio's local run with a local .NET Framework 4.7.2 against a representative data set. 3. If you are indeed impacted by a backwards-incompatibility, take the necessary steps to fix it (such as fixing your data or code logic).
-In most cases, you should not be impacted by backwards-incompatibility.
+In most cases, you shouldn't be impacted by backwards-incompatibility.
## Timeline
You can submit your job against the old runtime version (which is built targetin
### What are the most common backwards-compatibility issues you may encounter
-The most common backwards-incompatibilities that the checker is likely to identify are (we generated this list by running the checker on our own internal ADLA jobs), which libraries are impacted (note: that you may call the libraries only indirectly, thus it is important to take required action #1 to check if your jobs are impacted), and possible actions to remedy. Note: In almost all cases for our own jobs, the warnings turned out to be false positives due to the narrow natures of most breaking changes.
+The following list shows the most common backwards-incompatibilities that the checker is likely to identify (we generated this list by running the checker on our own internal ADLA jobs), the libraries that are impacted (note that you may call the libraries only indirectly, so it's important to take required action #1 to check whether your jobs are impacted), and possible actions to remedy them. Note: In almost all cases for our own jobs, the warnings turned out to be false positives due to the narrow nature of most breaking changes.
- IAsyncResult.CompletedSynchronously property must be correct for the resulting task to complete
- - When calling TaskFactory.FromAsync, the implementation of the IAsyncResult.CompletedSynchronously property must be correct for the resulting task to complete. That is, the property must return true if, and only if, the implementation completed synchronously. Previously, the property was not checked.
+ - When calling TaskFactory.FromAsync, the implementation of the IAsyncResult.CompletedSynchronously property must be correct for the resulting task to complete. That is, the property must return true if, and only if, the implementation completed synchronously. Previously, the property wasn't checked.
- Impacted Libraries: mscorlib, System.Threading.Tasks - Suggested Action: Ensure TaskFactory.FromAsync returns true correctly
The most common backwards-incompatibilities that the checker is likely to identi
- Suggested Action: Ensure data retrieved is the format you want - XmlWriter throws on invalid surrogate pairs
- - For apps that target the .NET Framework 4.5.2 or previous versions, writing an invalid surrogate pair using exception fallback handling does not always throw an exception. For apps that target the .NET Framework 4.6, attempting to write an invalid surrogate pair throws an `ArgumentException`.
+ - For apps that target the .NET Framework 4.5.2 or previous versions, writing an invalid surrogate pair using exception fallback handling doesn't always throw an exception. For apps that target the .NET Framework 4.6, attempting to write an invalid surrogate pair throws an `ArgumentException`.
- Impacted Libraries: System.Xml, System.Xml.ReaderWriter
- - Suggested Action: Ensure you are not writing an invalid surrogate pair that will cause argument exception
+ - Suggested Action: Ensure you aren't writing an invalid surrogate pair that will cause argument exception
-- HtmlTextWriter does not render `<br/>` element correctly
+- HtmlTextWriter doesn't render `<br/>` element correctly
- Beginning in the .NET Framework 4.6, calling `HtmlTextWriter.RenderBeginTag()` and `HtmlTextWriter.RenderEndTag()` with a `<BR />` element will correctly insert only one `<BR />` (instead of two) - Impacted Libraries: System.Web
- - Suggested Action: Ensure you are inserting the amount of `<BR />` you expect to see so no random behavior is seen in production job
+ - Suggested Action: Ensure you're inserting the amount of `<BR />` you expect to see so no random behavior is seen in production job
- Calling CreateDefaultAuthorizationContext with a null argument has changed - The implementation of the AuthorizationContext returned by a call to the `CreateDefaultAuthorizationContext(IList<IAuthorizationPolicy>)` with a null authorizationPolicies argument has changed its implementation in the .NET Framework 4.6. - Impacted Libraries: System.IdentityModel
- - Suggested Action: Ensure you are handling the new expected behavior when there is null authorization policy
+ - Suggested Action: Ensure you're handling the new expected behavior when there's null authorization policy
- RSACng now correctly loads RSA keys of non-standard key size - In .NET Framework versions prior to 4.6.2, customers with non-standard key sizes for RSA certificates are unable to access those keys via the `GetRSAPublicKey()` and `GetRSAPrivateKey()` extension methods. A `CryptographicException` with the message "The requested key size is not supported" is thrown. With the .NET Framework 4.6.2 this issue has been fixed. Similarly, `RSA.ImportParameters()` and `RSACng.ImportParameters()` now work with non-standard key sizes without throwing `CryptographicException`'s.
The most common backwards-incompatibilities that the checker is likely to identi
- Suggested Action: Ensure RSA keys are working as expected - Path colon checks are stricter
- - In .NET Framework 4.6.2, a number of changes were made to support previously unsupported paths (both in length and format). Checks for proper drive separator (colon) syntax were made more correct, which had the side effect of blocking some URI paths in a few select Path APIs where they used to be tolerated.
+ - In .NET Framework 4.6.2, many changes were made to support previously unsupported paths (both in length and format). Checks for proper drive separator (colon) syntax were made more correct, which had the side effect of blocking some URI paths in a few select Path APIs where they used to be tolerated.
- Impacted Libraries: mscorlib, System.Runtime.Extensions - Suggested Action: - Calls to ClaimsIdentity constructors
- - Starting with the .NET Framework 4.6.2, there is a change in how `T:System.Security.Claims.ClaimsIdentity` constructors with an `T:System.Security.Principal.IIdentity` parameter set the `P:System.Security.Claims.ClaimsIdentify.Actor` property. If the `T:System.Security.Principal.IIdentity` argument is a `T:System.Security.Claims.ClaimsIdentity` object, and the `P:System.Security.Claims.ClaimsIdentify.Actor` property of that `T:System.Security.Claims.ClaimsIdentity` object is not `null`, the `P:System.Security.Claims.ClaimsIdentify.Actor` property is attached by using the `M:System.Security.Claims.ClaimsIdentity.Clone` method. In the Framework 4.6.1 and earlier versions, the `P:System.Security.Claims.ClaimsIdentify.Actor` property is attached as an existing reference. Because of this change, starting with the .NET Framework 4.6.2, the `P:System.Security.Claims.ClaimsIdentify.Actor` property of the new `T:System.Security.Claims.ClaimsIdentity` object is not equal to the `P:System.Security.Claims.ClaimsIdentify.Actor` property of the constructor's `T:System.Security.Principal.IIdentity` argument. In the .NET Framework 4.6.1 and earlier versions, it is equal.
+ - Starting with the .NET Framework 4.6.2, there's a change in how `T:System.Security.Claims.ClaimsIdentity` constructors with an `T:System.Security.Principal.IIdentity` parameter set the `P:System.Security.Claims.ClaimsIdentify.Actor` property. If the `T:System.Security.Principal.IIdentity` argument is a `T:System.Security.Claims.ClaimsIdentity` object, and the `P:System.Security.Claims.ClaimsIdentify.Actor` property of that `T:System.Security.Claims.ClaimsIdentity` object is not `null`, the `P:System.Security.Claims.ClaimsIdentify.Actor` property is attached by using the `M:System.Security.Claims.ClaimsIdentity.Clone` method. In the Framework 4.6.1 and earlier versions, the `P:System.Security.Claims.ClaimsIdentify.Actor` property is attached as an existing reference. Because of this change, starting with the .NET Framework 4.6.2, the `P:System.Security.Claims.ClaimsIdentify.Actor` property of the new `T:System.Security.Claims.ClaimsIdentity` object is not equal to the `P:System.Security.Claims.ClaimsIdentify.Actor` property of the constructor's `T:System.Security.Principal.IIdentity` argument. In the .NET Framework 4.6.1 and earlier versions, it's equal.
- Impacted Libraries: mscorlib - Suggested Action: Ensure ClaimsIdentity is working as expected on new runtime - Serialization of control characters with DataContractJsonSerializer is now compatible with ECMAScript V6 and V8
- - In the .NET framework 4.6.2 and earlier versions, the DataContractJsonSerializer did not serialize some special control characters, such as \b, \f, and \t, in a way that was compatible with the ECMAScript V6 and V8 standards. Starting with the .NET Framework 4.7, serialization of these control characters is compatible with ECMAScript V6 and V8.
+ - In the .NET framework 4.6.2 and earlier versions, the DataContractJsonSerializer didn't serialize some special control characters, such as \b, \f, and \t, in a way that was compatible with the ECMAScript V6 and V8 standards. Starting with the .NET Framework 4.7, serialization of these control characters is compatible with ECMAScript V6 and V8.
- Impacted Libraries: System.Runtime.Serialization.Json - Suggested Action: Ensure same behavior with DataContractJsonSerializer
data-lake-analytics Runtime Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/runtime-troubleshoot.md
Title: How to troubleshoot the Azure Data Lake Analytics U-SQL runtime failures description: 'Learn how to troubleshoot U-SQL runtime failures.'-+ Previously updated : 10/10/2019 Last updated : 01/27/2023 # Learn how to troubleshoot U-SQL runtime failures due to runtime changes
The Azure Data Lake U-SQL runtime, including the compiler, optimizer, and job ma
## Choosing your U-SQL runtime version
-When you submit U-SQL jobs from either Visual Studio, the ADL SDK or the Azure Data Lake Analytics portal, your job will use the currently available default runtime. New versions of the U-SQL runtime are released on a regular basis and include both minor updates and security fixes.
+When you submit U-SQL jobs from Visual Studio, the ADL SDK, or the Azure Data Lake Analytics portal, your job will use the currently available default runtime. New versions of the U-SQL runtime are released regularly and include both minor updates and security fixes.
-You can also choose a custom runtime version; either because you want to try out a new update, need to stay on an older version of a runtime, or were provided with a hotfix for a reported problem where you cannot wait for the regular new update.
+You can also choose a custom runtime version, either because you want to try out a new update, need to stay on an older version of a runtime, or were provided with a hotfix for a reported problem where you can't wait for the regular new update.
> [!CAUTION] > Choosing a runtime that is different from the default has the potential to break your U-SQL jobs. Use these other versions for testing only.
-In rare cases, Microsoft Support may pin a different version of a runtime as the default for your account. Please ensure that you revert this pin as soon as possible. If you remain pinned to that version, it will expire at some later date.
+In rare cases, Microsoft Support may pin a different version of a runtime as the default for your account. Ensure that you revert this pin as soon as possible. If you remain pinned to that version, it will expire at some later date.
### Monitoring your jobs U-SQL runtime version
You can see the history of which runtime version your past jobs have used in you
1. In the Azure portal, go to your Data Lake Analytics account. 2. Select **View All Jobs**. A list of all the active and recently finished jobs in the account appears.
-3. Optionally, click **Filter** to help you find the jobs by **Time Range**, **Job Name**, and **Author** values.
+3. Optionally, select **Filter** to help you find the jobs by **Time Range**, **Job Name**, and **Author** values.
4. You can see the runtime used in the completed jobs. ![Displaying the runtime version of a past job](./media/runtime-troubleshoot/prior-job-usql-runtime-version-.png)
-The available runtime versions change over time. The default runtime is always called "default" and we keep at least the previous runtime available for some time as well as make special runtimes available for a variety of reasons. Explicitly named runtimes generally follow the following format (italics are used for variable parts and [] indicates optional parts):
+The available runtime versions change over time. The default runtime is always called "default". We keep at least the previous runtime available for some time, and we make special runtimes available for various reasons. Explicitly named runtimes generally use the following format (italics are used for variable parts and [] indicates optional parts):
release_YYYYMMDD_adl_buildno[_modifier]
For example, release_20190318_adl_3394512_2 means the second version of the buil
There are two possible runtime version issues that you may encounter:
-1. A script or some user-code is changing behavior from one release to the next. Such breaking changes are normally communicated ahead of time with the publication of release notes. If you encounter such a breaking change, contact Microsoft Support to report this breaking behavior (in case it has not been documented yet) and submit your jobs against the older runtime version.
+1. A script or some user-code is changing behavior from one release to the next. Such breaking changes are normally communicated ahead of time with the publication of release notes. If you encounter such a breaking change, contact Microsoft Support to report this breaking behavior (in case it hasn't been documented yet) and submit your jobs against the older runtime version.
-2. You have been using a non-default runtime either explicitly or implicitly when it has been pinned to your account, and that runtime has been removed after some time. If you encounter missing runtimes, upgrade your scripts to run with the current default runtime. If you need additional time, contact Microsoft Support
+2. You have been using a non-default runtime, either explicitly or implicitly when it has been pinned to your account, and that runtime has been removed after some time. If you encounter missing runtimes, upgrade your scripts to run with the current default runtime. If you need more time, contact Microsoft Support.
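When Support asks you to submit jobs against an older runtime, you can pass the runtime name at submission time. A minimal sketch using `az dla job submit`; the `--runtime-version` flag and the `--account-name` spelling are assumptions to verify against your CLI version, and `<runtime-name>` is a version string copied from a past job, such as release_20190318_adl_3394512_2:

```azurecli
# Submit a U-SQL job against an explicitly named runtime instead of "default".
az dla job submit --account-name <adla-account> --job-name <job-name> \
    --script "<U-SQL script text>" --runtime-version <runtime-name>
```

As the caution above notes, treat any non-default runtime as a testing-only measure.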
## Known issues
There are two possible runtime version issues that you may encounter:
`at Roslyn.Compilers.MetadataReader.PEFile.CustomAttributeTableReader.get_Item(UInt32 rowId)` `...`
- **Solution**: Please use Newtonsoft.Json file v12.0.2 or lower.
-2. Customers might see temporary files and folders on their store. Those are produced as part of the normal job execution, but are usually deleted before the customers see them. Under certain circumstances, which are rare and random, they might remain visible for a period of time. They are eventually deleted, and are never counted as part of user storage, or generate any form of charges whatsoever. Depending on the customers' job logic they might cause issues. For instance, if the job enumerates all files in the folder and then compares file lists, it might fail because of the unexpected temporary files being present. Similarly, if a downstream job enumerates all files from a given folder for further processing, it might also enumerate the temp files.
+ **Solution**: Use Newtonsoft.Json file v12.0.2 or lower.
+2. Customers might see temporary files and folders on their store. These are produced as part of normal job execution, but are usually deleted before customers see them. Under certain rare and random circumstances, they might remain visible. They're eventually deleted, are never counted as part of user storage, and never generate any form of charges. Depending on the customer's job logic, they might cause issues. For instance, if the job enumerates all files in the folder and then compares file lists, it might fail because of the unexpected temporary files being present. Similarly, if a downstream job enumerates all files from a given folder for further processing, it might also enumerate the temp files.
- **Solution**: A fix is identified in the runtime where the temp files will be stored in account level temp folder than the current output folder. The temp files will be written in this new temp folder and will be deleted at the end the job execution.
- Since this fix is handling the customer data, it is extremely important to have this fix well validated within MSFT before it is released. It is expected to have this fix available as beta runtime in the middle of year 2021 and as default runtime in the second half of year 2021.
+ **Solution**: A fix has been identified in the runtime so that temp files are stored in an account-level temp folder rather than the current output folder. The temp files will be written to this new temp folder and deleted at the end of the job execution.
+ Because this fix handles customer data, it's important that the fix is well validated within Microsoft before it's released. The fix is expected to be available as a beta runtime in mid-2021 and as the default runtime in the second half of 2021.
## See also
deployment-environments Configure Catalog Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-catalog-item.md
In Azure Deployment Environments Preview, you can use a [catalog](concept-enviro
A catalog item is composed of at least two files: - An [Azure Resource Manager template (ARM template)](../azure-resource-manager/templates/overview.md) in JSON file format. For example, *azuredeploy.json*.-- A manifest YAML file (*manifest.yml*).
+- A manifest YAML file (*manifest.yaml*).
>[!NOTE] > Azure Deployment Environments Preview currently supports only ARM templates.
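Once both files are in a repository, that repository is attached to a dev center as a catalog. A hedged sketch using the devcenter CLI extension; the `az devcenter admin catalog create` command shape and its `--git-hub` key=value arguments are based on the preview extension and may differ in your version:

```azurecli
# Attach a GitHub repository containing catalog items to a dev center.
# secret-identifier is the URL of a Key Vault secret that holds a repo PAT.
az devcenter admin catalog create -g <resource-group-name> \
    --dev-center-name <devcenter-name> -n <catalog-name> \
    --git-hub uri=<repo-clone-url> branch=<branch> path=<folder-path> \
    secret-identifier=<key-vault-secret-url>
```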
deployment-environments How To Configure Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-use-cli.md
az devcenter admin project delete -g <resource-group-name> --name <project-name>
**Create an environment** ```azurecli
-az devcenter dev environment create -g <resource-group-name> --dev-center-name <devcenter-name> \
+az devcenter dev environment create --dev-center-name <devcenter-name> \
--project-name <project-name> -n <name> --environment-type <environment-type-name> \ --catalog-item-name <catalog-item-name> --catalog-name <catalog-name> \ --parameters <deployment-parameters-json-string>
dev-box Overview What Is Microsoft Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md
Microsoft Dev Box bridges the gap between development teams and IT, bringing con
Start using Microsoft Dev Box: - [Quickstart: Configure the Microsoft Dev Box Preview service](./quickstart-configure-dev-box-service.md)-- [Quickstart: Configure a Microsoft Dev Box Preview project](./quickstart-configure-dev-box-project.md) - [Quickstart: Create a Dev Box](./quickstart-create-dev-box.md)
dev-box Quickstart Configure Dev Box Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-project.md
- Title: Configure a Microsoft Dev Box Preview project-
-description: 'This quickstart shows you how to configure a Microsoft Dev Box Preview project, create a dev box pool and provide access to dev boxes for your users.'
----- Previously updated : 10/12/2022-
-<!--
- Customer intent:
- As a Dev Box Project Admin I want to configure projects so that I can provide Dev Boxes for my users.
- -->
-
-# Quickstart: Configure a Microsoft Dev Box Preview project
-To enable developers to self-serve dev boxes in projects, you must configure dev box pools that specify the dev box definitions and network connections used when dev boxes are created. Dev box users create dev boxes using the dev box pool.
-
-In this quickstart, you'll perform the following tasks:
-
-* [Create a dev box pool](#create-a-dev-box-pool)
-* [Provide access to a dev box project](#provide-access-to-a-dev-box-project)
-
-## Create a dev box pool
-A dev box pool is a collection of dev boxes that you manage together. You must have a pool before users can create a dev box, and all dev boxes created in the pool will be in the same region.
-
-The following steps show you how to create a dev box pool associated with a project. You'll use an existing dev box definition and network connection in the dev center to configure a dev box pool.
-
-If you don't have an available dev center with an existing dev box definition and network connection, follow the steps in [Quickstart: Configure the Microsoft Dev Box Preview service](quickstart-configure-dev-box-service.md) to create them.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. In the search box, type *Projects* and then select **Projects** from the list.
-
- <!-- :::image type="content" source="./media/quickstart-configure-dev-box-projects/discovery-via-azure-portal.png" alt-text="Screenshot showing the Azure portal with the search box highlighted."::: -->
-
-3. Open the project in which you want to create the dev box pool.
-
- :::image type="content" source="./media/quickstart-configure-dev-box-projects/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
-
-4. Select **Dev box pools** and then select **+ Create**.
-
- :::image type="content" source="./media/quickstart-configure-dev-box-projects/dev-box-pool-grid-empty.png" alt-text="Screenshot of the list of dev box pools within a project. The list is empty.":::
-
-1. On the **Create a dev box pool** page, enter the following values:
-
- |Name|Value|
- |-|-|
- |**Name**|Enter a name for the pool. The pool name is visible to developers to select when they're creating dev boxes, and must be unique within a project.|
- |**Dev box definition**|Select an existing dev box definition. The definition determines the base image and size for the dev boxes created within this pool.|
- |**Network connection**|Select an existing network connection. The network connection determines the region of the dev boxes created within this pool.|
- |**Dev Box Creator Privileges**|Select Local Administrator or Standard User.|
- |**Enable Auto-stop**|Yes is the default. Select No to disable an Auto-stop schedule. You can configure an Auto-stop schedule after the pool has been created.|
- |**Stop time**| Select a time to shutdown all the dev boxes in the pool. All Dev Boxes in this pool will be shut down at this time, everyday.|
- |**Time zone**| Select the time zone that the stop time is in.|
- |**Licensing**| Select this check box to confirm that your organization has Azure Hybrid Benefit licenses that you want to apply to the dev boxes in this pool. |
--
- :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-pool-create.png" alt-text="Screenshot of the Create dev box pool dialog.":::
-
-6. Select **Add**.
-
-7. Verify that the new dev box pool appears in the list. You may need to refresh the screen.
-
-The dev box pool will be deployed and health checks will be run to ensure the image and network pass the validation criteria to be used for dev boxes. The screenshot below shows four dev box pools, each with a different status.
-
- :::image type="content" source="./media/quickstart-configure-dev-box-projects/dev-box-pool-grid-populated.png" alt-text="Screenshot showing a list of existing pools.":::
-
-## Provide access to a dev box project
-Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage and delete their own dev boxes. You must have sufficient permissions to a project before you can add users to it.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box, type *Projects* and then select **Projects** from the list.
-
-1. Select the project you want to provide your team members access to.
-
- :::image type="content" source="./media/quickstart-configure-dev-box-projects/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
-
-1. Select **Access Control (IAM)** from the left menu.
-
- :::image type="content" source="./media/quickstart-configure-dev-box-projects/access-control-tab.png" alt-text="Screenshot showing the Project Access control page with the Access Control link highlighted.":::
-
-1. Select **Add** > **Add role assignment**.
-
- :::image type="content" source="./media/quickstart-configure-dev-box-projects/add-role-assignment.png" alt-text="Screenshot showing the Add menu with Add role assignment highlighted.":::
-
-1. On the Add role assignment page, search for *devcenter dev box user*, select the **DevCenter Dev Box User** built-in role, and then select **Next**.
-
- :::image type="content" source="./media/quickstart-configure-dev-box-projects/dev-box-user-role.png" alt-text="Screenshot showing the Add role assignment search box highlighted.":::
-
-1. On the Members page, select **+ Select Members**.
-
- :::image type="content" source="./media/quickstart-configure-dev-box-projects/dev-box-user-select-members.png" alt-text="Screenshot showing the Members tab with Select members highlighted.":::
-
-1. On the **Select members** pane, select the Active Directory Users or Groups you want to add, and then select **Select**.
-
- :::image type="content" source="./media/quickstart-configure-dev-box-projects/select-members-search.png" alt-text="Screenshot showing the Select members pane with a user account highlighted.":::
-
-1. On the Add role assignment page, select **Review + assign**.
-
-The user will now be able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal).
--
-## Project admins
-
-The Microsoft Dev Box service makes it possible for you to delegate administration of projects to a member of the project team. Project administrators can assist with the day-to-day management of projects for their team, like creating and managing dev box pools. To provide users permissions to manage projects, add them to the DevCenter Project Admin role. The tasks in this quickstart can be performed by project admins. To learn how to add a user to the Project Admin role, see [Provide access to projects for project admins](how-to-project-admin.md).
-
-## Next steps
-
-In this quickstart, you created a dev box pool within an existing project and assigned a user permission to create dev boxes based on the new pool.
-
-To learn about how to create to your dev box and connect to it, advance to the next quickstart:
-
-> [!div class="nextstepaction"]
-> [Create a dev box](./quickstart-create-dev-box.md)
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
Title: Configure the Microsoft Dev Box Preview service
-description: 'This quickstart shows you how to configure the Microsoft Dev Box Preview service to provide dev boxes for your users. You will create a dev center, add a network connection, and then create a dev box definition, and a project.'
+description: "This quickstart shows you how to configure the Microsoft Dev Box Preview service to provide dev boxes for your users. You'll create a dev center, add a network connection, and then create a dev box definition, and a project."
Previously updated : 12/16/2022 Last updated : 01/24/2023 <!--
# Quickstart: Configure the Microsoft Dev Box Preview service
-This quickstart describes how to configure the Microsoft Dev Box service by using the Azure portal to enable development teams to self-serve dev boxes.
+This quickstart describes how to configure the Microsoft Dev Box service by using the Azure portal to enable development teams to self-serve dev boxes.
+
+This quickstart will take you through the process of setting up your Dev Box environment. You'll create a dev center to organize your dev box resources, configure network components to enable dev boxes to connect to your organizational resources, and create a dev box definition that will form the basis of your dev boxes. You'll then create a project and a dev box pool, which work together to help you give access to users who will manage or use the dev boxes.
+
+After you've completed this quickstart, you'll have a Dev Box configuration ready for users to create and connect to dev boxes.
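Each portal step below also has a CLI counterpart. As one example, a hedged sketch of the dev center step using the devcenter CLI extension; the command shape is an assumption from the preview extension and may differ by version:

```azurecli
# Create a dev center to hold dev box definitions, network connections, and projects.
az devcenter admin devcenter create --resource-group <resource-group-name> \
    --name <devcenter-name> --location <region>
```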
## Prerequisites
To complete this quickstart, make sure that you have:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). - Owner or Contributor permissions on an Azure Subscription or a specific resource group. - Network Contributor permissions on an existing virtual network (owner or contributor) or permission to create a new virtual network and subnet.-- User licenses. To use Windows 365 Enterprise, each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Endpoint Manager, and Azure Active Directory P1.
+- User licenses. To use Dev Box, each user must be licensed for Windows 11 Enterprise or Windows 10 Enterprise, Microsoft Intune, and Azure Active Directory P1.
- These licenses are available independently and also included in the following subscriptions: - Microsoft 365 F3 - Microsoft 365 E3, Microsoft 365 E5
To complete this quickstart, make sure that you have:
## Create a dev center
-The following steps show you how to create and configure a dev center.
+To begin the configuration, you'll create a dev center to enable you to manage your dev box resources. The following steps show you how to create and configure a dev center.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box, type *Dev centers* and then select **Dev centers** from the list.
+1. In the search box, type *Dev centers* and then select **Dev centers** in the search results.
- <!-- :::image type="content" source="./media/quickstart-configure-dev-box-service/discovery-via-azure-portal.png" alt-text="Screenshot showing the Azure portal with the search box highlighted."::: -->
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/discover-dev-centers.png" alt-text="Screenshot showing the Azure portal with the search box and dev centers result highlighted.":::
1. On the dev centers page, select **+Create**. + :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center.png" alt-text="Screenshot showing the Azure portal Dev center with create highlighted."::: 1. On the **Create a dev center** page, on the **Basics** tab, enter the following values:
The following steps show you how to create and configure a dev center.
|**Name**|Enter a name for your dev center.| |**Location**|Select the location/region you want the dev center to be created in.|
- :::image type="content" source="./media/quickstart-configure-dev-box-service/create-devcenter-basics.png" alt-text="Screenshot showing the Create dev center Basics tab.":::
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center-basics.png" alt-text="Screenshot showing the Create dev center Basics tab.":::
The currently supported Azure locations with capacity are listed here: [Microsoft Dev Box Preview](https://aka.ms/devbox_acom). 1. [Optional] On the **Tags** tab, enter a name and value pair that you want to assign.
- :::image type="content" source="./media/quickstart-configure-dev-box-service/create-devcenter-tags.png" alt-text="Screenshot showing the Create dev center Tags tab.":::
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center-tags.png" alt-text="Screenshot showing the Create dev center Tags tab.":::
1. Select **Review + Create**. 1. On the **Review** tab, select **Create**. 1. You can check on the progress of the dev center creation from any page in the Azure portal by opening the notifications pane.
- :::image type="content" source="./media/quickstart-configure-dev-box-service/azure-notifications.png" alt-text="Screenshot showing Azure portal notifications pane.":::
+
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/notifications-pane.png" alt-text="Screenshot showing Azure portal notifications pane.":::
1. When the deployment is complete, select **Go to resource**. You'll see the dev center page. ## Create a network connection+ Network connections determine the region into which dev boxes are deployed and allow them to be connected to your existing virtual networks. The following steps show you how to create and configure a network connection in Microsoft Dev Box. To create a network connection, you must have: -- An existing virtual network (vnet) and subnet. If you don't have a vnet and subnet available, follow the instructions here: [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md) to create them.
+- An existing virtual network (vnet) and subnet. If you don't have a vnet and subnet available, follow the instructions here: [Create a virtual network and subnet](#create-a-virtual-network-and-subnet) to create them.
- A configured and working Hybrid AD join or Azure AD join.
- - **Hybrid AD join:** To learn how to join your AD DS domain-joined computers to Azure AD from an on-premises Active Directory Domain Services (AD DS) environment, see [Plan your hybrid Azure Active Directory join deployment](../active-directory/devices/hybrid-azuread-join-plan.md).
- **Azure AD join:** To learn how to join devices directly to Azure Active Directory (Azure AD), see [Plan your Azure Active Directory join deployment](../active-directory/devices/azureadjoin-plan.md).
+ - **Hybrid AD join:** To learn how to join your AD DS domain-joined computers to Azure AD from an on-premises Active Directory Domain Services (AD DS) environment, see [Plan your hybrid Azure Active Directory join deployment](../active-directory/devices/hybrid-azuread-join-plan.md).
- If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).
-Follow these steps to create a network connection:
+### Create a virtual network and subnet
-1. Sign in to the [Azure portal](https://portal.azure.com).
+You must have a vnet and subnet available for your network connection; create them using these steps:
+
+1. In the search box, type *Virtual Network* and then select **Virtual Network** in the search results.
+
+1. On the **Virtual Network** page, select **Create**.
+
+1. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/vnet-basics-tab.png" alt-text="Screenshot of creating a virtual network in Azure portal.":::
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select an existing resource group or select **Create new**, and enter a name for the resource group. |
+ | **Instance details** | |
+ | Name | Enter a name for your vnet. |
+ | Region | Enter the location/region you want the vnet to be created in. |
+
+1. On the **IP Addresses** tab, note the default IP address assignment and subnet. You can accept the defaults unless they conflict with your existing configuration.
+
+1. Select the **Review + create** tab. Review the vnet and subnet configuration.
-1. In the search box, type *Network connections* and then select **Network connections** from the list.
+1. Select **Create**.
+
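If you'd rather script the vnet step, the portal flow above maps to a single Azure CLI command; a minimal sketch with placeholder names and the portal's default address space:

```azurecli
# Create a vnet with one subnet for the network connection to use.
az network vnet create --resource-group <resource-group-name> \
    --name <vnet-name> --location <region> \
    --address-prefixes 10.0.0.0/16 \
    --subnet-name default --subnet-prefixes 10.0.0.0/24
```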
+### Create a network connection
+
+Now that you have an available vnet and subnet, you need a network connection to associate the vnet and subnet with the dev center. Follow these steps to create a network connection:
+
+1. In the search box, type *Network connections* and then select **Network connections** in the search results.
1. On the **Network Connections** page, select **+Create**.
- :::image type="content" source="./media/quickstart-configure-dev-box-service/network-connections-empty.png" alt-text="Screenshot showing the Network Connections page with Create highlighted.":::
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/create-network-connection.png" alt-text="Screenshot showing the Network Connections page with Create highlighted.":::
1. Follow the steps on the appropriate tab to create your network connection. #### [Azure AD join](#tab/AzureADJoin/)
Follow these steps to create a network connection:
|**Virtual network**|Select the virtual network you want the network connection to use.| |**Subnet**|Select the subnet you want the network connection to use.|
- :::image type="content" source="./media/quickstart-configure-dev-box-service/create-native-network-connection-full-blank.png" alt-text="Screenshot showing the create network connection basics tab with Azure Active Directory join highlighted.":::
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/create-nc-native-join.png" alt-text="Screenshot showing the create network connection basics tab with Azure Active Directory join highlighted.":::
#### [Hybrid Azure AD join](#tab/HybridAzureADJoin/)
Follow these steps to create a network connection:
|**AD username UPN**| The username, in user principal name (UPN) format, that you want to use for connecting the Cloud PCs to your Active Directory domain. For example, svcDomainJoin@corp.contoso.com. This service account must have permission to join computers to the domain and, if set, the target OU. | |**AD domain password**| The password for the user. |
- :::image type="content" source="./media/quickstart-configure-dev-box-service/create-hybrid-network-connect