Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | About Microsoft Identity Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/about-microsoft-identity-platform.md | - Title: Evolution of Microsoft identity platform -description: Learn about Microsoft identity platform, an evolution of the Azure Active Directory (Azure AD) identity service and developer platform. -------- Previously updated : 09/27/2021-------# Evolution of Microsoft identity platform --The [Microsoft identity platform](../develop/index.yml) is an evolution of the Azure Active Directory (Azure AD) developer platform. It allows developers to build applications that sign in users, get tokens to call APIs, such as Microsoft Graph, or APIs that developers have built. It consists of an authentication service, open-source libraries, application registration, and configuration (through a developer portal and application API), full developer documentation, quickstart samples, code samples, tutorials, how-to guides, and other developer content. The Microsoft identity platform supports industry standard protocols such as OAuth 2.0 and OpenID Connect. --Many developers have previously worked with the Azure AD v1.0 platform to authenticate Microsoft work and school accounts by requesting tokens from the Azure AD v1.0 endpoint, using Azure AD Authentication Library (ADAL), Azure portal for application registration and configuration, and the Microsoft Graph API for programmatic application configuration. --With the unified Microsoft identity platform (v2.0), you can write code once and authenticate any Microsoft identity into your application. For several platforms, the fully supported open-source Microsoft Authentication Library (MSAL) is recommended for use against the identity platform endpoints. 
MSAL is simple to use, provides great single sign-on (SSO) experiences for your users, helps you achieve high reliability and performance, and is developed using Microsoft Secure Development Lifecycle (SDL). When calling APIs, you can configure your application to take advantage of incremental consent, which allows you to delay the request for consent for more invasive scopes until the application's usage warrants this at runtime. MSAL also supports Azure Active Directory B2C, so your customers use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs. --With Microsoft identity platform, expand your reach to these kinds of users: --- Work and school accounts (Microsoft Entra ID provisioned accounts)-- Personal accounts (such as Outlook.com or Hotmail.com)-- Your customers who bring their own email or social identity (such as LinkedIn, Facebook, Google) via MSAL and Azure AD B2C--You can use the Azure portal to register and configure your application, and use the Microsoft Graph API for programmatic application configuration. --Update your application at your own pace. Applications built with ADAL libraries continue to be supported. Mixed application portfolios, that consist of applications built with ADAL and applications built with MSAL libraries, are also supported. This means that applications using the latest ADAL and the latest MSAL will deliver SSO across the portfolio, provided by the shared token cache between these libraries. Applications updated from ADAL to MSAL will maintain user sign-in state upon upgrade. --## Microsoft identity platform experience --The following diagram shows the Microsoft identity experience at a high level, including the app registration experience, SDKs, endpoints, and supported identities. 
--![Microsoft identity platform today](./media/about-microsoft-identity-platform/about-microsoft-identity-platform.svg) --### App registration experience --The Azure portal **[App registrations](https://go.microsoft.com/fwlink/?linkid=2083908)** experience is the one portal experience for managing all applications you've integrated with Microsoft identity platform. If you have been using the Application Registration Portal, start using the Azure portal app registration experience instead. --For integration with Azure AD B2C (when authenticating social or local identities), you'll need to register your application in an Azure AD B2C tenant. This experience is also part of the Azure portal. --Use the [Application API](/graph/api/resources/application) to programmatically configure your applications integrated with Microsoft identity platform for authenticating any Microsoft identity. --### MSAL libraries --You can use the MSAL library to build applications that authenticate all Microsoft identities. The MSAL libraries in .NET and JavaScript are generally available. MSAL libraries for iOS and Android are in preview and suitable for use in a production environment. We provide the same production level support for MSAL libraries in preview as we do for versions of MSAL and ADAL that are generally available. --You can also use the MSAL libraries to integrate your application with Azure AD B2C. --### Microsoft identity platform endpoint --Microsoft identity platform (v2.0) endpoint is OIDC certified. It works with the Microsoft Authentication Libraries (MSAL) or any other standards-compliant library. It implements human readable scopes, in accordance with industry standards. --## Next steps --Learn more in the [Microsoft identity platform documentation](../develop/index.yml). |
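The v2.0 endpoint section above notes that the Microsoft identity platform issues standards-based tokens with human-readable scopes. As a minimal illustration (not part of the article, and using a fabricated unsigned token constructed in the snippet itself rather than a real one), a JWT's claims segment is just base64url-encoded JSON and can be inspected with the Python standard library. This is for debugging only; real validation must verify the token signature, which libraries such as MSAL handle for you.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Base64url-decode the middle (claims) segment of a JWT.

    For inspection/debugging only -- production code must validate the
    signature, which MSAL does on your behalf.
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a fabricated, unsigned sample token just to exercise the helper;
# the claim names (aud, scp) mirror what the v2.0 endpoint emits.
claims = {"aud": "https://graph.microsoft.com", "scp": "User.Read Mail.Read"}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
sample = f"header.{segment}.signature"

print(decode_jwt_payload(sample)["scp"])  # → User.Read Mail.Read
```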
active-directory | Active Directory Acs Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-acs-migration.md | - Title: Migrate from the Azure Access Control Service -description: Learn about the options for moving apps and services from the Azure Access Control Service (ACS). --------- Previously updated : 10/03/2018------# How to: Migrate from the Azure Access Control Service ---Microsoft Azure Access Control Service (ACS), a service of Azure Active Directory (Azure AD), will be retired on November 7, 2018. Applications and services that currently use Access Control must be fully migrated to a different authentication mechanism by then. This article describes recommendations for current customers, as you plan to deprecate your use of Access Control. If you don't currently use Access Control, you don't need to take any action. --## Overview --Access Control is a cloud authentication service that offers an easy way to authenticate and authorize users for access to your web applications and services. It allows many features of authentication and authorization to be factored out of your code. Access Control is primarily used by developers and architects of Microsoft .NET clients, ASP.NET web applications, and Windows Communication Foundation (WCF) web services. --Use cases for Access Control can be broken down into three main categories: --- Authenticating to certain Microsoft cloud services, including Azure Service Bus and Dynamics CRM. Client applications obtain tokens from Access Control to authenticate to these services to perform various actions.-- Adding authentication to web applications, both custom and prepackaged (like SharePoint). 
By using Access Control "passive" authentication, web applications can support sign-in with a Microsoft account (formerly Live ID), and with accounts from Google, Facebook, Yahoo, Azure AD, and Active Directory Federation Services (AD FS).-- Securing custom web services with tokens issued by Access Control. By using "active" authentication, web services can ensure that they allow access only to known clients that have authenticated with Access Control.--Each of these use cases and their recommended migration strategies are discussed in the following sections. --> [!WARNING] -> In most cases, significant code changes are required to migrate existing apps and services to newer technologies. We recommend that you immediately begin planning and executing your migration to avoid any potential outages or downtime. --Access Control has the following components: --- A secure token service (STS), which receives authentication requests and issues security tokens in return.-- The Azure classic portal, where you create, delete, and enable and disable Access Control namespaces.-- A separate Access Control management portal, where you customize and configure Access Control namespaces.-- A management service, which you can use to automate the functions of the portals.-- A token transformation rule engine, which you can use to define complex logic to manipulate the output format of tokens that Access Control issues.--To use these components, you must create one or more Access Control namespaces. A *namespace* is a dedicated instance of Access Control that your applications and services communicate with. A namespace is isolated from all other Access Control customers. Other Access Control customers use their own namespaces. A namespace in Access Control has a dedicated URL that looks like this: --```HTTP -https://<mynamespace>.accesscontrol.windows.net -``` --All communication with the STS and management operations are done at this URL. 
You use different paths for different purposes. To determine whether your applications or services use Access Control, monitor for any traffic to `https://<namespace>.accesscontrol.windows.net`. Any traffic to this URL is handled by Access Control, and needs to be discontinued. --The exception to this is any traffic to `https://accounts.accesscontrol.windows.net`. Traffic to this URL is already handled by a different service and **is not** affected by the Access Control deprecation. --For more information about Access Control, see [Access Control Service 2.0 (archived)](/previous-versions/azure/azure-services/hh147631(v=azure.100)). --## Find out which of your apps will be impacted --Follow the steps in this section to find out which of your apps will be impacted by ACS retirement. --### Download and install ACS PowerShell --1. Go to the PowerShell Gallery and download [Acs.Namespaces](https://www.powershellgallery.com/packages/Acs.Namespaces/1.0.2). -2. Install the module by running -- ```powershell - Install-Module -Name Acs.Namespaces - ``` --3. Get a list of all possible commands by running -- ```powershell - Get-Command -Module Acs.Namespaces - ``` -- To get help on a specific command, run: -- ```powershell - Get-Help [Command-Name] -Full - ``` - - where `[Command-Name]` is the name of the ACS command. --### List your ACS namespaces --1. Connect to ACS using the **Connect-AcsAccount** cmdlet. - - You may need to run `Set-ExecutionPolicy -ExecutionPolicy Bypass` before you can execute commands, and you must be an administrator of those subscriptions to execute them. --2. List your available Azure subscriptions using the **Get-AcsSubscription** cmdlet. -3. List your ACS namespaces using the **Get-AcsNamespace** cmdlet. --### Check which applications will be impacted --1.
Use the namespace from the previous step and go to `https://<namespace>.accesscontrol.windows.net` -- For example, if one of the namespaces is contoso-test, go to `https://contoso-test.accesscontrol.windows.net` --2. Under **Trust relationships**, select **Relying party applications** to see the list of apps that will be impacted by ACS retirement. -3. Repeat steps 1-2 for any other ACS namespace(s) that you have. --## Retirement schedule --As of November 2017, all Access Control components are fully supported and operational. The only restriction is that you [can't create new Access Control namespaces via the Azure classic portal](https://azure.microsoft.com/blog/acs-access-control-service-namespace-creation-restriction/). --Here's the schedule for deprecating Access Control components: --- **November 2017**: The Azure AD admin experience in the Azure classic portal [is retired](https://blogs.technet.microsoft.com/enterprisemobility/2017/09/18/marching-into-the-future-of-the-azure-ad-admin-experience-retiring-the-azure-classic-portal/). At this point, namespace management for Access Control is available at a new, dedicated URL: `https://manage.windowsazure.com?restoreClassic=true`. Use this URL to view your existing namespaces, enable and disable namespaces, and to delete namespaces, if you choose to.-- **April 2, 2018**: The Azure classic portal is completely retired, meaning Access Control namespace management is no longer available via any URL. At this point, you can't disable or enable, delete, or enumerate your Access Control namespaces. However, the Access Control management portal will be fully functional and located at `https://<namespace>.accesscontrol.windows.net`. All other components of Access Control continue to operate normally.-- **November 7, 2018**: All Access Control components are permanently shut down. This includes the Access Control management portal, the management service, STS, and the token transformation rule engine. 
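The traffic check described earlier ("monitor for any traffic to `https://<namespace>.accesscontrol.windows.net`") can be sketched as a small filter. This is a hypothetical illustration, assuming your outbound requests are available as log lines; it honors the exemption for `accounts.accesscontrol.windows.net`, which is unaffected by the retirement.

```python
import re

# Matches any *.accesscontrol.windows.net host except the exempt
# accounts.accesscontrol.windows.net, which is handled by a different service.
ACS_HOST = re.compile(r"https?://(?!accounts\.)[\w-]+\.accesscontrol\.windows\.net")

def acs_requests(log_lines):
    """Return the log lines that indicate traffic still handled by ACS."""
    return [line for line in log_lines if ACS_HOST.search(line)]

# Hypothetical outbound-request log entries for illustration:
log = [
    "GET https://contoso-test.accesscontrol.windows.net/v2/wstrust/13",
    "GET https://accounts.accesscontrol.windows.net/metadata",   # exempt
    "GET https://login.microsoftonline.com/common/oauth2/v2.0/token",
]
print(acs_requests(log))  # only the contoso-test line is flagged
```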
At this point, any requests sent to Access Control (located at `<namespace>.accesscontrol.windows.net`) fail. You should have migrated all existing apps and services to other technologies well before this time.--> [!NOTE] -> A policy disables namespaces that have not requested a token for a period of time. As of early September 2018, this period is 14 days of inactivity, but it will be shortened to 7 days of inactivity in the coming weeks. If you have Access Control namespaces that are currently disabled, you can [download and install ACS PowerShell](#download-and-install-acs-powershell) to re-enable the namespace(s). --## Migration strategies --The following sections describe high-level recommendations for migrating from Access Control to other Microsoft technologies. --### Clients of Microsoft cloud services --Each Microsoft cloud service that accepts tokens that are issued by Access Control now supports at least one alternate form of authentication. The correct authentication mechanism varies for each service. We recommend that you refer to the specific documentation for each service for official guidance.
For convenience, each set of documentation is provided here: --| Service | Guidance | -| - | -- | -| Azure Service Bus | [Migrate to shared access signatures](/azure/service-bus-messaging/service-bus-sas) | -| Azure Service Bus Relay | [Migrate to shared access signatures](/azure/azure-relay/relay-migrate-acs-sas) | -| Azure Managed Cache | [Migrate to Azure Cache for Redis](/azure/azure-cache-for-redis/cache-faq) | -| Azure DataMarket | [Migrate to the Azure AI services APIs](https://azure.microsoft.com/services/cognitive-services/) | -| BizTalk Services | [Migrate to the Logic Apps feature of Azure App Service](https://azure.microsoft.com/services/logic-apps/) | -| Azure Media Services | [Migrate to Azure AD authentication](https://azure.microsoft.com/blog/azure-media-service-aad-auth-and-acs-deprecation/) | -| Azure Backup | [Upgrade the Azure Backup agent](/azure/backup/backup-azure-file-folder-backup-faq) | --<!-- Dynamics CRM: Migrate to new SDK, Dynamics team handling privately --> -<!-- Azure RemoteApp deprecated in favor of Citrix: https://www.zdnet.com/article/microsoft-to-drop-azure-remoteapp-in-favor-of-citrix-remoting-technologies/ --> -<!-- Exchange push notifications are moving, customers don't need to move --> -<!-- Retail federation services are moving, customers don't need to move --> -<!-- Azure StorSimple: TODO --> -<!-- Azure SiteRecovery: TODO --> --### SharePoint customers --SharePoint 2013, 2016, and SharePoint Online customers have long used ACS for authentication purposes in cloud, on-premises, and hybrid scenarios. Some SharePoint features and use cases will be affected by ACS retirement, while others will not.
The following table summarizes migration guidance for some of the most popular SharePoint features that leverage ACS: --| Feature | Guidance | -| - | -- | -| Authenticating users from Azure AD | Previously, Azure AD did not support SAML 1.1 tokens required by SharePoint for authentication, and ACS was used as an intermediary that made SharePoint compatible with Azure AD token formats. Now, you can [connect SharePoint directly to Azure AD using Azure AD App Gallery SharePoint on premises app](../saas-apps/sharepoint-on-premises-tutorial.md). | -| [App authentication & server-to-server authentication in SharePoint on premises](/SharePoint/security-for-sharepoint-server/authentication-overview) | Not affected by ACS retirement; no changes necessary. | -| [Low trust authorization for SharePoint add-ins (provider hosted and SharePoint hosted)](/sharepoint/dev/sp-add-ins/three-authorization-systems-for-sharepoint-add-ins) | Not affected by ACS retirement; no changes necessary. | -| [SharePoint cloud hybrid search](/archive/blogs/spses/cloud-hybrid-search-service-application) | Not affected by ACS retirement; no changes necessary. | --### Web applications that use passive authentication --For web applications that use Access Control for user authentication, Access Control provides the following features and capabilities to web application developers and architects: --- Deep integration with Windows Identity Foundation (WIF).-- Federation with Google, Facebook, Yahoo, Azure Active Directory, and AD FS accounts, and Microsoft accounts.-- Support for the following authentication protocols: OAuth 2.0 Draft 13, WS-Trust, and Web Services Federation (WS-Federation).-- Support for the following token formats: JSON Web Token (JWT), SAML 1.1, SAML 2.0, and Simple Web Token (SWT).-- A home realm discovery experience, integrated into WIF, that allows users to pick the type of account they use to sign in.
This experience is hosted by the web application and is fully customizable.-- Token transformation that allows rich customization of the claims received by the web application from Access Control, including:- - Pass through claims from identity providers. - - Adding additional custom claims. - - Simple if-then logic to issue claims under certain conditions. --Unfortunately, there isn't one service that offers all of these equivalent capabilities. You should evaluate which capabilities of Access Control you need, and then choose between using [Microsoft Entra ID](https://azure.microsoft.com/develop/identity/signin/), [Azure Active Directory B2C](https://azure.microsoft.com/services/active-directory-b2c/) (Azure AD B2C), or another cloud authentication service. --<a name='migrate-to-azure-active-directory'></a> --#### Migrate to Microsoft Entra ID --A path to consider is integrating your apps and services directly with Microsoft Entra ID. Microsoft Entra ID is the cloud-based identity provider for Microsoft work or school accounts. Microsoft Entra ID is the identity provider for Microsoft 365, Azure, and much more. It provides similar federated authentication capabilities to Access Control, but doesn't support all Access Control features. --The primary example is federation with social identity providers, such as Facebook, Google, and Yahoo. If your users sign in with these types of credentials, Microsoft Entra ID is not the solution for you. --Microsoft Entra ID also doesn't necessarily support the exact same authentication protocols as Access Control. For example, although both Access Control and Microsoft Entra ID support OAuth, there are subtle differences between each implementation. Different implementations require you to modify code as part of a migration. --However, Microsoft Entra ID does provide several potential advantages to Access Control customers. 
It natively supports Microsoft work or school accounts hosted in the cloud, which are commonly used by Access Control customers. --A Microsoft Entra tenant can also be federated to one or more instances of on-premises Active Directory via AD FS. This way, your app can authenticate cloud-based users and users that are hosted on-premises. It also supports the WS-Federation protocol, which makes it relatively straightforward to integrate with a web application by using WIF. --The following table compares the features of Access Control that are relevant to web applications with those features that are available in Microsoft Entra ID. --At a high level, *Microsoft Entra ID is probably the best choice for your migration if you let users sign in only with their Microsoft work or school accounts*. --| Capability | Access Control support | Microsoft Entra ID support | -| - | -- | - | -| **Types of accounts** | | | -| Microsoft work or school accounts | Supported | Supported | -| Accounts from Windows Server Active Directory and AD FS |- Supported via federation with a Microsoft Entra tenant <br />- Supported via direct federation with AD FS | Only supported via federation with a Microsoft Entra tenant | -| Accounts from other enterprise identity management systems |- Possible via federation with a Microsoft Entra tenant <br />- Supported via direct federation | Possible via federation with a Microsoft Entra tenant | -| Microsoft accounts for personal use | Supported | Supported via the Microsoft Entra v2.0 OAuth protocol, but not over any other protocols | -| Facebook, Google, Yahoo accounts | Supported | Not supported | -| **Protocols and SDK compatibility** | | | -| WIF | Supported | Supported, but limited instructions are available | -| WS-Federation | Supported | Supported | -| OAuth 2.0 | Support for Draft 13 | Support for RFC 6749, the most modern specification | -| WS-Trust | Supported | Not supported | -| **Token formats** | | | -| JWT | Supported in beta
| Supported | -| SAML 1.1 | Supported | Preview | -| SAML 2.0 | Supported | Supported | -| SWT | Supported | Not supported | -| **Customizations** | | | -| Customizable home realm discovery/account-picking UI | Downloadable code that can be incorporated into apps | Not supported | -| Upload custom token-signing certificates | Supported | Supported | -| Customize claims in tokens |- Pass through input claims from identity providers<br />- Get access token from identity provider as a claim<br />- Issue output claims based on values of input claims<br />- Issue output claims with constant values |- Cannot pass through claims from federated identity providers<br />- Cannot get access token from identity provider as a claim<br />- Cannot issue output claims based on values of input claims<br />- Can issue output claims with constant values<br />- Can issue output claims based on properties of users synced to Microsoft Entra ID | -| **Automation** | | | -| Automate configuration and management tasks | Supported via Access Control Management Service | Supported using the Microsoft Graph API | --If you decide that Microsoft Entra ID is the best migration path for your applications and services, you should be aware of two ways to integrate your app with Microsoft Entra ID. --To use WS-Federation or WIF to integrate with Microsoft Entra ID, we recommend following the approach described in [Configure federated single sign-on for a non-gallery application](../develop/single-sign-on-saml-protocol.md). The article refers to configuring Microsoft Entra ID for SAML-based single sign-on, but also works for configuring WS-Federation. Following this approach requires a Microsoft Entra ID P1 or P2 license. This approach has two advantages: --- You get the full flexibility of Microsoft Entra token customization. You can customize the claims that are issued by Microsoft Entra ID to match the claims that are issued by Access Control. 
This especially includes the user ID or Name Identifier claim. To continue to receive consistent user identifiers for your users after you change technologies, ensure that the user IDs issued by Microsoft Entra ID match those issued by Access Control.-- You can configure a token-signing certificate that is specific to your application, and with a lifetime that you control. --> [!NOTE] -> This approach requires a Microsoft Entra ID P1 or P2 license. If you are an Access Control customer and you require a premium license for setting up single sign-on for an application, contact us. We'll be happy to provide developer licenses for you to use. --An alternative approach is to follow [this code sample](https://github.com/Azure-Samples/active-directory-dotnet-webapp-wsfederation), which gives slightly different instructions for setting up WS-Federation. This code sample does not use WIF, but rather, the ASP.NET 4.5 OWIN middleware. However, the instructions for app registration are valid for apps using WIF, and don't require a Microsoft Entra ID P1 or P2 license. --If you choose this approach, you need to understand [signing key rollover in Microsoft Entra ID](../develop/signing-key-rollover.md). This approach uses the Microsoft Entra global signing key to issue tokens. By default, WIF does not automatically refresh signing keys. When Microsoft Entra ID rotates its global signing keys, your WIF implementation needs to be prepared to accept the changes. For more information, see [Important information about signing key rollover in Microsoft Entra ID](/previous-versions/azure/dn641920(v=azure.100)). --If you can integrate with Microsoft Entra ID via the OpenID Connect or OAuth protocols, we recommend doing so. We have extensive documentation and guidance about how to integrate Microsoft Entra ID into your web application available in our [Microsoft Entra developer guide](../develop/index.yml).
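To make the recommended OpenID Connect integration concrete, the first leg of the authorization code flow is just a redirect to the v2.0 `/authorize` endpoint. A minimal sketch follows, with placeholder tenant, client ID, and redirect URI (substitute your own app registration values); a real app would also generate the `state` value randomly.

```python
from urllib.parse import urlencode

def build_authorize_url(tenant: str, client_id: str, redirect_uri: str) -> str:
    """Build a Microsoft identity platform (v2.0) authorization-code
    request URL -- the OpenID Connect flow recommended over WS-Federation."""
    params = urlencode({
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "response_mode": "query",
        "scope": "openid profile User.Read",
        "state": "12345",  # anti-CSRF value; generate randomly in real apps
    })
    return f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?{params}"

# Placeholder values for illustration only:
url = build_authorize_url("common", "my-client-id", "https://localhost/callback")
print(url)
```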
--#### Migrate to Azure Active Directory B2C --The other migration path to consider is Azure AD B2C. Azure AD B2C is a cloud authentication service that, like Access Control, allows developers to outsource their authentication and identity management logic to a cloud service. It's a paid service (with free and premium tiers) that is designed for consumer-facing applications that might have up to millions of active users. --Like Access Control, one of the most attractive features of Azure AD B2C is that it supports many different types of accounts. With Azure AD B2C, you can sign in users by using their Microsoft account, or Facebook, Google, LinkedIn, GitHub, or Yahoo accounts, and more. Azure AD B2C also supports "local accounts," or usernames and passwords that users create specifically for your application. Azure AD B2C also provides rich extensibility that you can use to customize your sign-in flows. --However, Azure AD B2C doesn't support the breadth of authentication protocols and token formats that Access Control customers might require. You also can't use Azure AD B2C to get tokens and query for additional information about the user from the identity provider, Microsoft or otherwise. --The following table compares the features of Access Control that are relevant to web applications with those that are available in Azure AD B2C.
At a high level, *Azure AD B2C is probably the right choice for your migration if your application is consumer facing, or if it supports many different types of accounts.* --| Capability | Access Control support | Azure AD B2C support | -| - | -- | - | -| **Types of accounts** | | | -| Microsoft work or school accounts | Supported | Supported via custom policies | -| Accounts from Windows Server Active Directory and AD FS | Supported via direct federation with AD FS | Supported via SAML federation by using custom policies | -| Accounts from other enterprise identity management systems | Supported via direct federation through WS-Federation | Supported via SAML federation by using custom policies | -| Microsoft accounts for personal use | Supported | Supported | -| Facebook, Google, Yahoo accounts | Supported | Facebook and Google supported natively, Yahoo supported via OpenID Connect federation by using custom policies | -| **Protocols and SDK compatibility** | | | -| Windows Identity Foundation (WIF) | Supported | Not supported | -| WS-Federation | Supported | Not supported | -| OAuth 2.0 | Support for Draft 13 | Support for RFC 6749, the most modern specification | -| WS-Trust | Supported | Not supported | -| **Token formats** | | | -| JWT | Supported In Beta | Supported | -| SAML 1.1 | Supported | Not supported | -| SAML 2.0 | Supported | Not supported | -| SWT | Supported | Not supported | -| **Customizations** | | | -| Customizable home realm discovery/account-picking UI | Downloadable code that can be incorporated into apps | Fully customizable UI via custom CSS | -| Upload custom token-signing certificates | Supported | Custom signing keys, not certificates, supported via custom policies | -| Customize claims in tokens |- Pass through input claims from identity providers<br />- Get access token from identity provider as a claim<br />- Issue output claims based on values of input claims<br />- Issue output claims with constant values |- Can pass through 
claims from identity providers; custom policies required for some claims<br />- Cannot get access token from identity provider as a claim<br />- Can issue output claims based on values of input claims via custom policies<br />- Can issue output claims with constant values via custom policies | -| **Automation** | | | -| Automate configuration and management tasks | Supported via Access Control Management Service |- Creation of users allowed using the Microsoft Graph API<br />- Cannot create B2C tenants, applications, or policies programmatically | --If you decide that Azure AD B2C is the best migration path for your applications and services, begin with the following resources: --- [Azure AD B2C documentation](/azure/active-directory-b2c/overview)-- [Azure AD B2C custom policies](/azure/active-directory-b2c/custom-policy-overview)-- [Azure AD B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/)--#### Migrate to Ping Identity or Auth0 --In some cases, you might find that Microsoft Entra ID and Azure AD B2C aren't sufficient to replace Access Control in your web applications without making major code changes. Some common examples might include: --- Web applications that use WIF or WS-Federation for sign-in with social identity providers such as Google or Facebook.-- Web applications that perform direct federation to an enterprise identity provider over the WS-Federation protocol.-- Web applications that require the access token issued by a social identity provider (such as Google or Facebook) as a claim in the tokens issued by Access Control.-- Web applications with complex token transformation rules that Microsoft Entra ID or Azure AD B2C can't reproduce.-- Multi-tenant web applications that use ACS to centrally manage federation to many different identity providers--In these cases, you might want to consider migrating your web application to another cloud authentication service. We recommend exploring the following options. 
Each of the following options offers capabilities similar to Access Control: --![This image shows the Auth0 logo](./media/active-directory-acs-migration/rsz-auth0.png) --[Auth0](https://auth0.com/access-management) is a flexible cloud identity service that has created [high-level migration guidance for customers of Access Control](https://auth0.com/access-management), and supports nearly every feature that ACS does. --![This image shows the Ping Identity logo](./media/active-directory-acs-migration/rsz-ping.png) --[Ping Identity](https://www.pingidentity.com) offers two solutions similar to ACS. PingOne is a cloud identity service that supports many of the same features as ACS, and PingFederate is a similar on-premises identity product that offers more flexibility. Refer to Ping's ACS retirement guidance for more details on using these products. --Our aim in working with Ping Identity and Auth0 is to ensure that all Access Control customers have a migration path for their apps and services that minimizes the amount of work required to move from Access Control. --<!-- --## SharePoint 2010, 2013, 2016 --TODO: Azure AD only, use Azure AD SAML 1.1 tokens, when we bring it back online. -Other IDPs: use Auth0? https://auth0.com/docs/integrations/sharepoint. ->--### Web services that use active authentication --For web services that are secured with tokens issued by Access Control, Access Control offers the following features and capabilities: --- Ability to register one or more *service identities* in your Access Control namespace.
Service identities can be used to request tokens.-- Support for the OAuth WRAP and OAuth 2.0 Draft 13 protocols for requesting tokens, by using the following types of credentials:- - A simple password that's created for the service identity - - An SWT signed by using a symmetric key or X.509 certificate - - A SAML token issued by a trusted identity provider (typically, an AD FS instance) -- Support for the following token formats: JWT, SAML 1.1, SAML 2.0, and SWT.-- Simple token transformation rules.--Service identities in Access Control are typically used to implement server-to-server authentication. --<a name='migrate-to-azure-active-directory'></a> --#### Migrate to Microsoft Entra ID --Our recommendation for this type of authentication flow is to migrate to [Microsoft Entra ID](https://azure.microsoft.com/develop/identity/signin/). Microsoft Entra ID is the cloud-based identity provider for Microsoft work or school accounts, and is the identity provider for Microsoft 365, Azure, and much more. --You can also use Microsoft Entra ID for server-to-server authentication by using the Microsoft Entra implementation of the OAuth client credentials grant. The following table compares the capabilities of Access Control in server-to-server authentication with those that are available in Microsoft Entra ID.
--| Capability | Access Control support | Microsoft Entra ID support | -| - | -- | - | -| How to register a web service | Create a relying party in the Access Control management portal | Create a Microsoft Entra web application in the Azure portal | -| How to register a client | Create a service identity in Access Control management portal | Create another Microsoft Entra web application in the Azure portal | -| Protocol used |- OAuth WRAP protocol<br />- OAuth 2.0 Draft 13 client credentials grant | OAuth 2.0 client credentials grant | -| Client authentication methods |- Simple password<br />- Signed SWT<br />- SAML token from a federated identity provider |- Simple password<br />- Signed JWT | -| Token formats |- JWT<br />- SAML 1.1<br />- SAML 2.0<br />- SWT<br /> | JWT only | -| Token transformation |- Add custom claims<br />- Simple if-then claim issuance logic | Add custom claims | -| Automate configuration and management tasks | Supported via Access Control Management Service | Supported using the Microsoft Graph API | --For guidance about implementing server-to-server scenarios, see the following resources: --- Service-to-Service section of the [Microsoft Entra developer guide](../develop/index.yml)-- [Daemon code sample by using simple password client credentials](https://github.com/Azure-Samples/active-directory-dotnet-daemon)-- [Daemon code sample by using certificate client credentials](https://github.com/Azure-Samples/active-directory-dotnet-daemon-certificate-credential)--#### Migrate to Ping Identity or Auth0 --In some cases, you might find that the Microsoft Entra client credentials and the OAuth grant implementation aren't sufficient to replace Access Control in your architecture without major code changes. 
Some common examples might include: --- Server-to-server authentication using token formats other than JWTs.-- Server-to-server authentication using an input token provided by an external identity provider.-- Server-to-server authentication with token transformation rules that Microsoft Entra ID cannot reproduce.--In these cases, you might consider migrating your web application to another cloud authentication service. We recommend exploring the following options. Each of the following options offers capabilities similar to Access Control: --![This image shows the Auth0 logo](./media/active-directory-acs-migration/rsz-auth0.png) --[Auth0](https://auth0.com/access-management) is a flexible cloud identity service that has created [high-level migration guidance for customers of Access Control](https://auth0.com/access-management), and supports nearly every feature that ACS does. --![This image shows the Ping Identity logo](./media/active-directory-acs-migration/rsz-ping.png) -[Ping Identity](https://www.pingidentity.com) offers two solutions similar to ACS. PingOne is a cloud identity service that supports many of the same features as ACS, and PingFederate is a similar on-premises identity product that offers more flexibility. Refer to Ping's ACS retirement guidance for more details on using these products. --Our aim in working with Ping Identity and Auth0 is to ensure that all Access Control customers have a migration path for their apps and services that minimizes the amount of work required to move from Access Control. --#### Passthrough authentication --For passthrough authentication with arbitrary token transformation, there is no equivalent Microsoft technology to ACS. If that is what your customers need, Auth0 might provide the closest approximation. --## Questions, concerns, and feedback --We understand that many Access Control customers won't find a clear migration path after reading this article.
You might need some assistance or guidance in determining the right plan. If you would like to discuss your migration scenarios and questions, please leave a comment on this page. |
active-directory | Active Directory Authentication Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-authentication-libraries.md | - Title: Azure Active Directory Authentication Library -description: The Azure AD Authentication Library (ADAL) allows client application developers to easily authenticate users to cloud or on-premises Active Directory (AD) and then obtain access tokens for securing API calls. ------- Previously updated : 12/29/2022-------# Azure Active Directory Authentication Library ---The Azure Active Directory Authentication Library (ADAL) v1.0 enables application developers to authenticate users to cloud or on-premises Active Directory (AD), and obtain tokens for securing API calls. ADAL makes authentication easier for developers through features such as: --- Configurable token cache that stores access tokens and refresh tokens-- Automatic token refresh when an access token expires and a refresh token is available-- Support for asynchronous method calls--> [!NOTE] -> Looking for the Azure AD v2.0 libraries? Check out the [MSAL library guide](../develop/reference-v2-libraries.md). ---> [!WARNING] -> Azure Active Directory Authentication Library (ADAL) has been deprecated. Please use the [Microsoft Authentication Library (MSAL)](/entr).
--## Microsoft-supported Client Libraries --| Platform | Library | Download | Source Code | Sample | Reference -| | | | | | | -| .NET Client, Windows Store, UWP, Xamarin iOS and Android |ADAL .NET v3 |[NuGet](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet) | [Desktop app](../develop/quickstart-v2-windows-desktop.md) | | -| JavaScript |ADAL.js |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-js) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-js) |[Single-page app](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-dotnet-webapi) | | -| iOS, macOS |ADAL |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-objc/releases) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-objc) |[iOS app](../develop/quickstart-v2-ios.md) | | -| Android |ADAL |[Maven](https://search.maven.org/search?q=g:com.microsoft.aad+AND+a:adal&core=gav) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-android) |[Android app](../develop/quickstart-v2-android.md) | [JavaDocs](https://javadoc.io/doc/com.microsoft.aad/adal/)| -| Node.js |ADAL |[npm](https://www.npmjs.com/package/adal-node) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-nodejs) | [Node.js web app](https://github.com/Azure-Samples/active-directory-node-webapp-openidconnect)|[Reference](/javascript/api/overview/azure/active-directory) | -| Java |ADAL4J |[Maven](https://search.maven.org/#search%7Cga%7C1%7Ca%3Aadal4j%20g%3Acom.microsoft.azure) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-java) |[Java web app](https://github.com/Azure-Samples/active-directory-java-webapp-openidconnect) |[Reference](https://javadoc.io/doc/com.microsoft.azure/adal4j) | -| Python |ADAL |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-python) 
|[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-python) |[Python web app](https://github.com/Azure-Samples/active-directory-python-webapp-graphapi) |[Reference](https://adal-python.readthedocs.io/) | --## Microsoft-supported Server Libraries --| Platform | Library | Download | Source Code | Sample | Reference -| | | | | | | -| .NET |OWIN for AzureAD|[NuGet](https://www.nuget.org/packages/Microsoft.Owin.Security.ActiveDirectory/) |[GitHub](https://github.com/aspnet/AspNetKatana) | | -| .NET |OWIN for OpenIDConnect |[NuGet](https://www.nuget.org/packages/Microsoft.Owin.Security.OpenIdConnect) |[GitHub](https://github.com/aspnet/AspNetKatana/tree/main/src/Microsoft.Owin.Security.OpenIdConnect) |[Web App](https://github.com/AzureADSamples/WebApp-OpenIDConnect-DotNet) | | -| .NET |OWIN for WS-Federation |[NuGet](https://www.nuget.org/packages/Microsoft.Owin.Security.WsFederation) |[GitHub](https://github.com/aspnet/AspNetKatana/tree/main/src/Microsoft.Owin.Security.WsFederation) |[MVC Web App](https://github.com/AzureADSamples/WebApp-WSFederation-DotNet) | | -| .NET |Identity Protocol Extensions for .NET 4.5 |[NuGet](https://www.nuget.org/packages/Microsoft.IdentityModel.Protocol.Extensions) |[GitHub](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet) | | | -| .NET |JWT Handler for .NET 4.5 |[NuGet](https://www.nuget.org/packages/System.IdentityModel.Tokens.Jwt) |[GitHub](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet) | | | -| Node.js |Azure AD Passport |[npm](https://www.npmjs.com/package/passport-azure-ad) |[GitHub](https://github.com/AzureAD/passport-azure-ad) | [Web API](../develop/authentication-flows-app-scenarios.md)| | --## Scenarios --Here are three common scenarios for using ADAL in a client that accesses a remote resource: --### Authenticating users of a native client application running on a device --In this scenario, a developer has a mobile client or desktop
application that needs to access a remote resource, such as a web API. The web API does not allow anonymous calls and must be called in the context of an authenticated user. The web API is pre-configured to trust access tokens issued by a specific Azure AD tenant. Azure AD is pre-configured to issue access tokens for that resource. To invoke the web API from the client, the developer uses ADAL to facilitate authentication with Azure AD. The most secure way to use ADAL is to have it render the user interface for collecting user credentials (rendered as a browser window). --ADAL makes it easy to authenticate the user, obtain an access token and refresh token from Azure AD, and then call the web API using the access token. --For a code sample that demonstrates this scenario using authentication to Azure AD, see [Native Client WPF Application to Web API](https://github.com/azureadsamples/nativeclient-dotnet). --### Authenticating a confidential client application running on a web server --In this scenario, a developer has an application running on a server that needs to access a remote resource, such as a web API. The web API does not allow anonymous calls, so it must be called from an authorized service. The web API is pre-configured to trust access tokens issued by a specific Azure AD tenant. Azure AD is pre-configured to issue access tokens for that resource to a service with client credentials (client ID and secret). ADAL facilitates authentication of the service with Azure AD, returning an access token that can be used to call the web API. ADAL also handles managing the lifetime of the access token by caching it and renewing it as necessary. For a code sample that demonstrates this scenario, see [Daemon console Application to Web API](https://github.com/AzureADSamples/Daemon-DotNet).
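The confidential client scenario above relies on the OAuth 2.0 client credentials grant. At the wire level, that grant is a single form-encoded POST to the tenant's token endpoint. The sketch below is illustrative only (it is not ADAL's API); it assumes the Azure AD v1.0 token endpoint shape, and the tenant, client ID, secret, and resource values are placeholders:

```javascript
// Illustrative sketch of the client credentials token request that a
// library such as ADAL sends on the service's behalf. Not ADAL's API;
// all parameter values are placeholders.
function buildClientCredentialsRequest(tenant, clientId, clientSecret, resource) {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
    resource: resource, // the v1.0 endpoint identifies the API via `resource`
  });
  return {
    url: `https://login.microsoftonline.com/${tenant}/oauth2/token`,
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: body.toString(),
  };
}
```

The response to such a request carries the access token that the service then presents to the web API; caching and renewal, as noted above, is what the library layers on top of this exchange.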
--### Authenticating a confidential client application running on a server, on behalf of a user --In this scenario, a developer has a web application running on a server that needs to access a remote resource, such as a web API. The web API does not allow anonymous calls, so it must be called from an authorized service on behalf of an authenticated user. The web API is pre-configured to trust access tokens issued by a specific Microsoft Entra tenant, and Microsoft Entra ID is pre-configured to issue access tokens for that resource to a service with client credentials. Once the user is authenticated in the web application, the application can get an authorization code for the user from Microsoft Entra ID. The web application can then use ADAL to obtain an access token and refresh token on behalf of a user using the authorization code and client credentials associated with the application from Microsoft Entra ID. Once the web application is in possession of the access token, it can call the web API until the token expires. When the token expires, the web application can use ADAL to get a new access token by using the refresh token that was previously received. For a code sample that demonstrates this scenario, see [Native client to Web API to Web API](https://github.com/Azure-Samples/active-directory-dotnet-webapi-onbehalfof). --## See Also --- [The Azure Active Directory developer's guide](v1-overview.md)-- [Authentication scenarios for Azure Active Directory](v1-authentication-scenarios.md)-- [Azure Active Directory code samples](sample-v1-code.md) |
active-directory | Active Directory Devhowto Adal Error Handling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-devhowto-adal-error-handling.md | - Title: ADAL client app error handling best practices -description: Provides error handling guidance and best practices for ADAL client applications. --------- Previously updated : 02/27/2017----# Error handling best practices for Azure Active Directory Authentication Library (ADAL) clients ---This article provides guidance on the types of errors that developers may encounter when using ADAL to authenticate users. When using ADAL, there are several cases where a developer may need to step in and handle errors. Proper error handling ensures a great end-user experience, and limits the number of times the end user needs to sign in. --In this article, we explore the specific cases for each platform supported by ADAL, and how your application can handle each case properly. The error guidance is split into two broader categories, based on the token acquisition patterns provided by ADAL APIs: --- **AcquireTokenSilent**: Client attempts to get a token silently (no UI); the request fails if a token can't be acquired without user interaction. -- **AcquireToken**: Client can attempt silent acquisition, but can also perform interactive requests that require sign-in.--> [!TIP] -> It's a good idea to log all errors and exceptions when using ADAL. Logs are not only helpful for understanding the overall health of your application, but are also important when debugging broader problems. While your application may recover from certain errors, they may hint at broader design problems that require code changes in order to resolve. -> -> When implementing the error conditions covered in this document, you should log the error code and description for the reasons discussed earlier. See the [Error and logging reference](#error-and-logging-reference) for examples of logging code.
-> --## AcquireTokenSilent --AcquireTokenSilent attempts to get a token with the guarantee that the end user does not see a User Interface (UI). There are several cases where silent acquisition may fail, and needs to be handled through interactive requests or by a default handler. We dive into the specifics of when and how to employ each case in the sections that follow. --There is a set of errors generated by the operating system, which may require error handling specific to the application. For more information, see "Operating System" errors section in [Error and logging reference](#error-and-logging-reference). --### Application scenarios --- [Native client](../develop/developer-glossary.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json#native-client) applications (iOS, Android, .NET Desktop, or Xamarin)-- [Web client](../develop/developer-glossary.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json#web-client) applications calling a [resource](../develop/developer-glossary.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json#resource-server) (.NET)--### Error cases and actionable steps --Fundamentally, there are two cases of AcquireTokenSilent errors: --| Case | Description | -||-| -| **Case 1**: Error is resolvable with an interactive sign-in | For errors caused by a lack of valid tokens, an interactive request is necessary. Specifically, cache lookup and an invalid/expired refresh token require an AcquireToken call to resolve.<br><br>In these cases, the end user needs to be prompted to sign in. The application can choose to do an interactive request immediately, after end-user interaction (such as hitting a sign-in button), or later. 
The choice depends on the desired behavior of the application.<br><br>See the code in the following section for this specific case and the errors that diagnose it.| -| **Case 2**: Error is not resolvable with an interactive sign-in | For network and transient/temporary errors, or other failures, performing an interactive AcquireToken request does not resolve the issue. Unnecessary interactive sign-in prompts can also frustrate end users. ADAL automatically attempts a single retry for most errors on AcquireTokenSilent failures.<br><br>The client application can also attempt a retry at some later point, but when and how is dependent on the application behavior and desired end-user experience. For example, the application can do an AcquireTokenSilent retry after a few minutes, or in response to some end-user action. An immediate retry will result in the application being throttled, and should not be attempted.<br><br>A subsequent retry failing with the same error does not mean the client should do an interactive request using AcquireToken, as it does not resolve the error.<br><br>See the code in the following section for this specific case and the errors that diagnose it. | --### .NET --The following guidance provides examples for error handling in conjunction with ADAL methods: --- acquireTokenSilentAsync(…)-- acquireTokenSilentSync(…) -- [deprecated] acquireTokenSilent(…)-- [deprecated] acquireTokenByRefreshToken(…) --Your code would be implemented as follows: --```csharp -try{ - AcquireTokenSilentAsync(…); -} --catch (AdalSilentTokenAcquisitionException e) { - // Exception: AdalSilentTokenAcquisitionException - // Caused when there are no tokens in the cache or a required refresh failed. -- // Action: Case 1, resolvable with an interactive request. -} --catch(AdalServiceException e) { - // Exception: AdalServiceException - // Represents an error produced by the STS. - // e.ErrorCode contains the error code and description, which can be used for debugging. 
- // NOTE: Do not code a dependency on the contents of the error description, as it can change over time. -- // Action: Case 2, not resolvable with an interactive request. - // Attempt retry after a timed interval or user action. -} - -catch (AdalException e) { - // Exception: AdalException - // Represents a library exception generated by ADAL .NET. - // e.ErrorCode contains the error code. -- // Action: Case 2, not resolvable with an interactive request. - // Attempt retry after a timed interval or user action. - // Example Error: network_not_available, default case. -} -``` --### Android --The following guidance provides examples for error handling in conjunction with ADAL methods: --- acquireTokenSilentSync(…)-- acquireTokenSilentAsync(...)-- [deprecated] acquireTokenSilent(…)--Your code would be implemented as follows: --```java -// *Inside callback* -public void onError(Exception e) { -- if (e instanceof AuthenticationException) { - // Exception: AdalException - // Represents a library exception generated by ADAL Android. - // Error Code: e.getCode(). -- // Errors: ADALError.ERROR_SILENT_REQUEST, - // ADALError.AUTH_REFRESH_FAILED_PROMPT_NOT_ALLOWED, - // ADALError.INVALID_TOKEN_CACHE_ITEM - // Description: Request failed due to no tokens in - // cache or failed a required refresh. -- // Action: Case 1, resolvable with an interactive request. -- // Action: Case 2, not resolvable with an interactive request. - // Attempt retry after a timed interval or user action. 
- // Example Errors: default case, - // DEVICE_CONNECTION_IS_NOT_AVAILABLE, - // BROKER_AUTHENTICATOR_ERROR_GETAUTHTOKEN, - } -} -``` --### iOS --The following guidance provides examples for error handling in conjunction with ADAL methods: --- acquireTokenSilentWithResource(…)--Your code would be implemented as follows: --```objc -[context acquireTokenSilentWithResource:[ARGS], completionBlock:^(ADAuthenticationResult *result) { - if (result.status == AD_FAILED) { - if ([error.domain isEqualToString:ADAuthenticationErrorDomain]){ - // Exception: AD_FAILED - // Represents a library error generated by ADAL Objective-C. - // Error Code: result.error.code -- // Errors: AD_ERROR_SERVER_REFRESH_TOKEN_REJECTED, AD_ERROR_CACHE_NO_REFRESH_TOKEN - // Description: No tokens in cache, or a required token refresh failed. - // Action: Case 1, resolvable with an interactive request. -- // Error: AD_ERROR_CACHE_MULTIPLE_USERS - // Description: There was ambiguity in the silent request resulting in multiple cache items. - // Action: Special Case, application should perform another silent request and specify the user using ADUserIdentifier. - // Can be caused in cases of a multi-user application. -- // Action: Case 2, not resolvable with an interactive request. - // Attempt retry after some time or user action. - // Example Errors: default case, - // AD_ERROR_CACHE_BAD_FORMAT - } - } -}] -``` --## AcquireToken --AcquireToken is the default ADAL method used to get tokens. In cases where user identity is required, AcquireToken attempts to get a token silently first, then displays UI if necessary (unless PromptBehavior.Never is passed). In cases where application identity is required, AcquireToken attempts to get a token, but doesn't show UI as there is no end user. --When handling AcquireToken errors, error handling is dependent on the platform and scenario the application is trying to achieve.
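The silent-first, UI-if-necessary behavior described above reduces to a small pattern. The following sketch is illustrative only and is not ADAL's actual API: `silent` and `interactive` are hypothetical caller-supplied token functions, and the `interactionRequired` flag is a stand-in for each platform's "resolvable with an interactive request" error:

```javascript
// Illustrative sketch (not ADAL's API) of the silent-then-interactive
// acquisition pattern. `silent` and `interactive` are hypothetical
// async functions supplied by the caller that each return a token.
async function acquireTokenWithFallback(silent, interactive) {
  try {
    // Happy path: a cached or refreshable token is available, no UI shown.
    return await silent();
  } catch (e) {
    if (e && e.interactionRequired) {
      // Case 1: resolvable by prompting the end user to sign in.
      return await interactive();
    }
    // Case 2: network/transient failure -- an interactive prompt won't
    // help, so surface the error and retry later (not immediately).
    throw e;
  }
}
```

The same shape underlies the per-platform examples in this article; only the error types used to distinguish Case 1 from Case 2 differ.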
--The operating system can also generate a set of errors, which require error handling dependent on the specific application. For more information, see "Operating System errors" in [Error and logging reference](#error-and-logging-reference). --### Application scenarios --- Native client applications (iOS, Android, .NET Desktop, or Xamarin)-- Web applications that call a resource API (.NET)-- Single-page applications (JavaScript)-- Service-to-Service applications (.NET, Java)- - All scenarios, including on-behalf-of - - On-Behalf-of specific scenarios --### Error cases and actionable steps: Native client applications --If you're building a native client application, there are a few error handling cases to consider that relate to network issues, transient failures, and other platform-specific errors. In most cases, an application shouldn't perform immediate retries, but rather wait for end-user interaction that prompts a sign-in. --There are a few special cases in which a single retry may resolve the issue. For example, when a user needs to enable data on a device, or has completed the Azure AD broker download after the initial failure. --In cases of failure, an application can present UI to allow the end user to perform some interaction that prompts a retry. For instance, if the request failed because the device is offline, a "Try to Sign in again" button can prompt an AcquireToken retry rather than immediately retrying the failure. --Error handling in native applications can be defined by two cases: --| Case | Description | -||-| -| **Case 1**:<br>Non-Retryable Error (most cases) | 1. Do not attempt immediate retry. Present the end-user UI based on the specific error that invokes a retry (for example, "Try to Sign in again" or "Download Azure AD broker application"). | -| **Case 2**:<br>Retryable Error | 1. Perform a single retry as the end user may have entered a state that results in a success.<br><br>2.
If retry fails, present the end-user UI based on the specific error that invokes a retry ("Try to Sign in again", "Download Azure AD broker app", etc.). | --> [!IMPORTANT] -> If a user account is passed to ADAL in a silent call and the call fails, the subsequent interactive request allows the end user to sign in using a different account. After a successful AcquireToken using a user account, the application must verify the signed-in user matches the application's local user object. A mismatch does not generate an exception (except in Objective-C), but should be considered in cases where a user is known locally before the authentication requests (like a failed silent call). -> --#### .NET --The following guidance provides examples for error handling in conjunction with all non-silent AcquireToken(…) ADAL methods, *except*: --- AcquireTokenAsync(…, IClientAssertionCertificate, …)-- AcquireTokenAsync(…, ClientCredential, …)-- AcquireTokenAsync(…, ClientAssertion, …)-- AcquireTokenAsync(…, UserAssertion,…) --Your code would be implemented as follows: --```csharp -try { - AcquireTokenAsync(…); -} - -catch(AdalServiceException e) { - // Exception: AdalServiceException - // Represents an error produced by the STS. - // e.ErrorCode contains the error code and description, which can be used for debugging. - // NOTE: Do not code a dependency on the contents of the error description, as it can change over time. - - // Design time consideration: Certain errors may be caused at development and exposed through this exception. - // Looking inside the description will give more guidance on resolving the specific issue. -- // Action: Case 1: Non-Retryable - // Do not perform an immediate retry. Only retry after user action. - // Example Errors: default case -- } --catch (AdalException e) { - // Exception: AdalException - // Represents a library exception generated by ADAL .NET.
- // e.ErrorCode contains the error code -- // Action: Case 1, Non-Retryable - // Do not perform an immediate retry. Only retry after user action. - // Example Errors: network_not_available, default case -} -``` --> [!NOTE] -> ADAL .NET has an extra consideration as it supports PromptBehavior.Never, which has behavior like AcquireTokenSilent. -> --The following guidance provides examples for error handling in conjunction with ADAL methods: --- acquireToken(…, PromptBehavior.Never)--Your code would be implemented as follows: --```csharp -try { - acquireToken(…, PromptBehavior.Never); -} --catch(AdalServiceException e) { - // Exception: AdalServiceException - // Represents an error produced by the STS. - // e.ErrorCode contains the error code and description, which can be used for debugging. - // NOTE: Do not code a dependency on the contents of the error description, as it can change over time. -- // Action: Case 1: Non-Retryable - // Do not perform an immediate retry. Only retry after user action. - // Example Errors: default case --} catch (AdalException e) { - // Error Code: e.ErrorCode == "user_interaction_required" - // Description: user_interaction_required indicates the silent request failed - // in a way that's resolvable with an interactive request. - // Action: Resolvable with an interactive request. -- // Action: Case 1, Non-Retryable - // Do not perform an immediate retry. Only retry after user action. - // Example Errors: network_not_available, default case -} -``` --#### Android --The following guidance provides examples for error handling in conjunction with all non-silent AcquireToken(…) ADAL methods. --Your code would be implemented as follows: --```java -AcquireTokenAsync(…); --// *Inside callback* -public void onError(Exception e) { - if (e instanceof AuthenticationException) { - // Exception: AdalException - // Represents a library exception generated by ADAL Android.
- // Error Code: e.getCode(); -- // Error: ADALError.BROKER_APP_INSTALLATION_STARTED - // Description: Broker app not installed, user will be prompted to download the app. -- // Action: Case 2, Retriable Error - // Perform a single retry. If that fails, only try again after user action. -- // Action: Case 1, Non-Retriable - // Do not perform an immediate retry. Only retry after user action. - // Example Errors: default case, DEVICE_CONNECTION_IS_NOT_AVAILABLE - } -} -``` --#### iOS --The following guidance provides examples for error handling in conjunction with all non-silent AcquireToken(…) ADAL methods. --Your code would be implemented as follows: --```objc -[context acquireTokenWithResource:[ARGS], completionBlock:^(ADAuthenticationResult *result) { - if (result.status == AD_FAILED) { - if ([error.domain isEqualToString:ADAuthenticationErrorDomain]){ - // Exception: AD_FAILED - // Represents a library error generated by ADAL ObjC. - // Error Code: result.error.code -- // Error: AD_ERROR_SERVER_WRONG_USER - // Description: App passed a user into ADAL and the end user signed in with a different account. - // Action: Case 1, Non-retriable (as is); it's up to the application how to handle this case. - // It can attempt a new request without specifying the user, or use UI to clarify the user account to sign in. -- // Action: Case 1, Non-Retriable - // Do not perform an immediate retry. Only retry after user action. - // Example Errors: default case - } - } -}] -``` --### Error cases and actionable steps: Web applications that call a resource API (.NET) --If you're building a .NET web app that gets a token using an authorization code for a resource, the only code required is a default handler for the generic case.
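For context, redeeming an authorization code for tokens (what a call such as AcquireTokenByAuthorizationCodeAsync performs under the hood) is itself a form-encoded POST to the token endpoint. The sketch below is illustrative only, not ADAL's API; it assumes the Azure AD v1.0 token endpoint shape, and every value is a placeholder:

```javascript
// Illustrative sketch of the authorization-code redemption request
// (Azure AD v1.0 endpoint shape assumed). Not ADAL's API; all values
// are placeholders.
function buildAuthCodeRequest(tenant, code, redirectUri, clientId, clientSecret, resource) {
  const body = new URLSearchParams({
    grant_type: "authorization_code",
    code, // the one-time code returned to the app's redirect URI
    redirect_uri: redirectUri,
    client_id: clientId,
    client_secret: clientSecret,
    resource, // the API the resulting access token is for
  });
  return {
    url: `https://login.microsoftonline.com/${tenant}/oauth2/token`,
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: body.toString(),
  };
}
```

Because the library performs this exchange for you, the application code only needs the default error handler shown next.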
--The following guidance provides examples for error handling in conjunction with ADAL methods: --- AcquireTokenByAuthorizationCodeAsync(…)--Your code would be implemented as follows: --```csharp -try { - AcquireTokenByAuthorizationCodeAsync(…); -} --catch (AdalException e) { - // Exception: AdalException - // Represents a library exception generated by ADAL .NET. - // Error Code: e.ErrorCode -- // Action: Do not perform an immediate retry. Only try again after user action or wait until much later. - // Example Errors: default case -} -``` --### Error cases and actionable steps: Single-page applications (adal.js) --If you're building a single-page application using adal.js with AcquireToken, the error handling code is similar to that of a typical silent call. Specifically in adal.js, AcquireToken never shows a UI. --A failed AcquireToken has the following cases: --| Case | Description | -||-| -| **Case 1**:<br>Resolvable with an interactive request | 1. If login() fails, do not perform immediate retry. Only retry after user action prompts a retry.| -| **Case 2**:<br>Not Resolvable with an interactive request. Error is retryable. | 1. Perform a single retry as the end user may have entered a state that results in a success.<br><br>2. If retry fails, present the end user with an action based on the specific error that can invoke a retry ("Try to Sign in again"). | -| **Case 3**:<br>Not Resolvable with an interactive request. Error is not retryable. | 1. Do not attempt immediate retry. Present the end user with an action based on the specific error that can invoke a retry ("Try to Sign in again"). | --Your code would be implemented as follows: --```javascript -AuthContext.acquireToken(…, function(error, errorDesc, token) { - if (error || errorDesc) { - // Represents any token acquisition failure that occurred.
- // Error Code: error.indexOf("<ERROR_STRING>") -- // Errors: if (error.indexOf("interaction_required") !== -1) - // if (error.indexOf("login required") !== -1) - // Description: ADAL wasn't able to silently acquire a token because of an expired or new session. - // Action: Case 1, Resolvable with an interactive login() request. -- // Error: if (error.indexOf("Token Renewal Failed") !== -1) - // Description: Timeout when refreshing the token. - // Action: Case 2, Not resolvable interactively, error is retriable. - // Perform a single retry. If that fails, only try again after user action. -- // Action: Case 3, Not resolvable interactively, error is not retriable. - // Do not perform an immediate retry. Only retry after user action. - // Example Errors: default case - } -} -``` --### Error cases and actionable steps: service-to-service applications (.NET only) --If you're building a service-to-service application that uses AcquireToken, there are a few key errors your code must handle. The only recourse to failure is to return the error back to the calling app (for on-behalf-of cases) or apply a retry strategy. --#### All scenarios --For *all* service-to-service application scenarios, including on-behalf-of: --- Do not attempt an immediate retry. ADAL attempts a single retry for certain failed requests. -- Only continue retrying after a user or app action prompts a retry. For example, a daemon application that does work on some set interval should wait until the next interval to retry.--The following guidance provides examples for error handling in conjunction with ADAL methods: --- AcquireTokenAsync(…, IClientAssertionCertificate, …)-- AcquireTokenAsync(…,ClientCredential, …)-- AcquireTokenAsync(…,ClientAssertion, …)-- AcquireTokenAsync(…,UserAssertion, …)--Your code would be implemented as follows: --```csharp -try { - AcquireTokenAsync(…); -} --catch (AdalException e) { - // Exception: AdalException - // Represents a library exception generated by ADAL .NET.
- // Error Code: e.ErrorCode -- // Action: Do not perform an immediate retry. Only try again after user action (if applicable) or wait until much later. - // Example Errors: default case -} -``` --#### On-behalf-of scenarios --For *on-behalf-of* service-to-service application scenarios: --The following guidance provides examples for error handling in conjunction with ADAL methods: --- AcquireTokenAsync(…, UserAssertion, …)--Your code would be implemented as follows: --```csharp -try { - AcquireTokenAsync(…); -} --catch (AdalServiceException e) { - // Exception: AdalServiceException - // Represents an error produced by the STS. - // e.ErrorCode contains the error code and description, which can be used for debugging. - // NOTE: Do not code a dependency on the contents of the error description, as it can change over time. -- // Error: On-Behalf-Of Error Handler - if (e.ErrorCode == "interaction_required") { - // Description: The client needs to perform some action due to a configuration set by the admin. - // Action: Capture the `claims` parameter inside e.InnerException.InnerException. - // Generate HTTP error 403 with claims, throw back HTTP error to client. - // Wait for client to retry. - } -} - -catch (AdalException e) { - // Exception: AdalException - // Represents a library exception generated by ADAL .NET. - // Error Code: e.ErrorCode -- // Action: Do not perform an immediate retry. Only try again after user action (if applicable) or wait until much later. - // Example Error: default case -} -``` --We've built a [complete sample](https://github.com/Azure-Samples/active-directory-dotnet-webapi-onbehalfof-ca) that demonstrates this scenario. --## Error and logging reference --### Logging personally identifiable information & organizational identifiable information -By default, ADAL logging does not capture or log any personally identifiable information or organizational identifiable information. The library allows app developers to turn this on through a setter in the Logger class.
By logging personally identifiable information or organizational identifiable information, the app takes responsibility for safely handling highly sensitive data and complying with any regulatory requirements. --### .NET --#### ADAL library errors --To explore specific ADAL errors, the source code in the [`azure-activedirectory-library-for-dotnet` repository](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/blob/8f6d560fbede2247ec0e217a21f6929d4375dcaa/src/ADAL.PCL/Utilities/Constants.cs#L58) is the best error reference. --#### Guidance for error logging code --ADAL .NET logging changes depending on the platform being worked on. Refer to the [Logging wiki](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Logging-in-ADAL.Net) for code on how to enable logging. --### Android --#### ADAL library errors --To explore specific ADAL errors, the source code in the [`azure-activedirectory-library-for-android` repository](https://github.com/AzureAD/azure-activedirectory-library-for-android/blob/dev/adal/src/main/java/com/microsoft/aad/adal/ADALError.java#L33) is the best error reference. --#### Operating System errors --Android OS errors are exposed through AuthenticationException in ADAL, are identifiable as "SERVER_INVALID_REQUEST", and can be further distinguished through their error descriptions. --For a full list of common errors and what steps to take when your app or end users encounter them, refer to the [ADAL Android Wiki](https://github.com/AzureAD/azure-activedirectory-library-for-android/wiki). --#### Guidance for error logging code --```java -// 1. Configure Logger -Logger.getInstance().setExternalLogger(new ILogger() { - @Override - public void Log(String tag, String message, String additionalMessage, LogLevel level, ADALError errorCode) { - // … - // You can write this to a log file depending on the level or error code. - writeToLogFile(getApplicationContext(), tag +":" + message + "-" + additionalMessage); - } -}); --// 2.
Set the log level -Logger.getInstance().setLogLevel(Logger.LogLevel.Verbose); --// By default, the `Logger` does not capture any PII or OII --// To log PII or OII, use the following setter -Logger.getInstance().setEnablePII(true); --// To STOP logging PII or OII, use the following setter -Logger.getInstance().setEnablePII(false); ---// 3. Send logs to logcat from a shell: -// adb logcat > "C:\logmsg\logfile.txt" -``` --### iOS --#### ADAL library errors --To explore specific ADAL errors, the source code in the [`azure-activedirectory-library-for-objc` repository](https://github.com/AzureAD/azure-activedirectory-library-for-objc/blob/dev/ADAL/src/ADAuthenticationError.m#L295) is the best error reference. --#### Operating system errors --iOS errors may arise during sign-in because of the use of web views and the nature of authentication, and can be caused by conditions such as TLS errors, timeouts, or network errors: --- For Entitlement Sharing, logins are not persistent and the cache appears empty. You can resolve this by adding the following line of code, which configures the shared keychain group:- `[[ADAuthenticationSettings sharedInstance] setSharedCacheKeychainGroup:nil];` -- For the NSURLErrorDomain set of errors, the action changes depending on the app logic. See the [NSURLErrorDomain reference documentation](https://developer.apple.com/documentation/foundation/nsurlerrordomain#declarations) for specific instances that can be handled.-- See [ADAL Obj-C Common Issues](https://github.com/AzureAD/azure-activedirectory-library-for-objc#adauthenticationerror) for the list of common errors maintained by the ADAL Objective-C team.--#### Guidance for error logging code --```objc -// 1. Enable NSLogging -[ADLogger setNSLogging:YES]; --// 2. Set the log level (if you want verbose) -[ADLogger setLevel:ADAL_LOG_LEVEL_VERBOSE]; --// 3.
Set up a callback block that simply prints the log -[ADLogger setLogCallBack:^(ADAL_LOG_LEVEL logLevel, NSString *message, NSString *additionalInformation, NSInteger errorCode, NSDictionary *userInfo) { - NSString* log = [NSString stringWithFormat:@"%@ %@", message, additionalInformation]; - NSLog(@"%@", log); -}]; -``` --### Guidance for error logging code - JavaScript --```javascript -// Log levels: 0 = Error, 1 = Warning, 2 = Info, 3 = Verbose -window.Logging = { - level: 3, - log: function (message) { - console.log(message); - } -}; -``` --## Related content --* [Azure AD Authentication Library][Auth-Libraries] -* [Authentication scenarios][Auth-Scenarios] -* [Register an application with the Microsoft identity platform][Integrating-Apps] --Use the comments section that follows to provide feedback and help us refine and shape our content. --[![Shows the "Sign in with Microsoft" button][Sign-In]][Sign-In] --<!--Reference style links --> --[Auth-Libraries]: ./active-directory-authentication-libraries.md -[Auth-Scenarios]:v1-authentication-scenarios.md -[Integrating-Apps]:../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json --<!--Image references--> -[Sign-In]:./media/active-directory-devhowto-multi-tenant-overview/sign-in-with-microsoft-light.png |
active-directory | App Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/app-types.md | - Title: Application types in v1.0 -description: Describes the types of apps and scenarios supported by the Azure Active Directory v2.0 endpoint. -------- Previously updated : 09/24/2018-------# Application types in v1.0 ---Azure Active Directory (Azure AD) supports authentication for a variety of modern app architectures, all of them based on industry-standard protocols OAuth 2.0 or OpenID Connect. --The following diagram illustrates the scenarios and application types, and how different components can be added: --![Application Types and scenarios](./media/authentication-scenarios/application-types-scenarios.png) --These are the five primary application scenarios supported by Azure AD: --- **[Single-page application (SPA)](single-page-application.md)**: A user needs to sign in to a single-page application that is secured by Azure AD.-- **[Web browser to web application](web-app.md)**: A user needs to sign in to a web application that is secured by Azure AD.-- **[Native application to web API](native-app.md)**: A native application that runs on a phone, tablet, or PC needs to authenticate a user to get resources from a web API that is secured by Azure AD.-- **[Web application to web API](web-api.md)**: A web application needs to get resources from a web API secured by Azure AD.-- **[Daemon or server application to web API](service-to-service.md)**: A daemon application or a server application with no web user interface needs to get resources from a web API secured by Azure AD.--Follow the links to learn more about each type of app and understand the high-level scenarios before you start working with the code. You can also learn about the differences you need to know when writing a particular app that works with the v1.0 endpoint or v2.0 endpoint. 
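As a rough summary (this pairing is illustrative and a common convention, not an official mapping from this article), each of the five scenarios above typically relies on a particular OAuth 2.0 or OpenID Connect grant:

```javascript
// Illustrative (not exhaustive) pairing of the five Azure AD v1.0 application
// scenarios with the OAuth 2.0 / OpenID Connect flow each one usually relies on.
const scenarioFlows = {
  "single-page application": "OAuth 2.0 implicit grant",
  "web browser to web application": "OpenID Connect sign-in",
  "native application to web API": "OAuth 2.0 authorization code grant",
  "web application to web API": "OAuth 2.0 authorization code grant (confidential client)",
  "daemon or server application to web API": "OAuth 2.0 client credentials grant",
};
```

The scenario pages linked above describe the exact protocol flow each case uses; treat this table only as an orientation aid.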
--> [!NOTE] -> The v2.0 endpoint doesn't support all Azure AD scenarios and features. To determine whether you should use the v2.0 endpoint, read about [v2.0 limitations](./azure-ad-endpoint-comparison.md?bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json&toc=/azure/active-directory/azuread-dev/toc.json). --You can develop any of the apps and scenarios described here using various languages and platforms. They are all backed by complete code samples available in the code samples guide: [v1.0 code samples by scenario](sample-v1-code.md) and [v2.0 code samples by scenario](../develop/sample-v2-code.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). You can also download the code samples directly from the corresponding [GitHub sample repositories](https://github.com/Azure-Samples?q=active-directory). --In addition, if your application needs a specific piece or segment of an end-to-end scenario, in most cases that functionality can be added independently. For example, if you have a native application that calls a web API, you can easily add a web application that also calls the web API. --## App registration --### Registering an app that uses the Azure AD v1.0 endpoint --Any application that outsources authentication to Azure AD must be registered in a directory. This step involves telling Azure AD about your application, including the URL where it's located, the URL to send replies after authentication, the URI to identify your application, and more. This information is required for a few key reasons: --* Azure AD needs to communicate with the application when handling sign-on or exchanging tokens. The information passed between Azure AD and the application includes the following: - - * **Application ID URI** - The identifier for an application. This value is sent to Azure AD during authentication to indicate which application the caller wants a token for. 
Additionally, this value is included in the token so that the application knows it was the intended target. - * **Reply URL** and **Redirect URI** - For a web API or web application, the Reply URL is the location where Azure AD will send the authentication response, including a token if authentication was successful. For a native application, the Redirect URI is a unique identifier to which Azure AD will redirect the user-agent in an OAuth 2.0 request. - * **Application ID** - The ID for an application, which is generated by Azure AD when the application is registered. When requesting an authorization code or token, the Application ID and Key are sent to Azure AD during authentication. - * **Key** - The key that is sent along with an Application ID when authenticating to Azure AD to call a web API. -* Azure AD needs to ensure the application has the required permissions to access your directory data, other applications in your organization, and so on. --For details, learn how to [register an app](../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --## Single-tenant and multi-tenant apps --Provisioning becomes clearer when you understand that there are two categories of applications that can be developed and integrated with Azure AD: --* **Single tenant application** - A single tenant application is intended for use in one organization. These are typically line-of-business (LoB) applications written by an enterprise developer. A single tenant application only needs to be accessed by users in one directory, and as a result, it only needs to be provisioned in one directory. These applications are typically registered by a developer in the organization. -* **Multi-tenant application** - A multi-tenant application is intended for use in many organizations, not just one organization. 
These are typically software-as-a-service (SaaS) applications written by an independent software vendor (ISV). Multi-tenant applications need to be provisioned in each directory where they will be used, which requires user or administrator consent to register them. This consent process starts when an application has been registered in the directory and is given access to the Graph API or perhaps another web API. When a user or administrator from a different organization signs up to use the application, they are presented with a dialog that displays the permissions the application requires. The user or administrator can then consent to the application, which gives the application access to the stated data, and finally registers the application in their directory. For more information, see [Overview of the Consent Framework](../develop/application-consent-experience.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --### Additional considerations when developing single tenant or multi-tenant apps --Some additional considerations arise when developing a multi-tenant application instead of a single tenant application. For example, if you are making your application available to users in multiple directories, you need a mechanism to determine which tenant they're in. A single tenant application only needs to look in its own directory for a user, while a multi-tenant application needs to identify a specific user from all the directories in Azure AD. To accomplish this task, Azure AD provides a common authentication endpoint where any multi-tenant application can direct sign-in requests, instead of a tenant-specific endpoint. This endpoint is `https://login.microsoftonline.com/common` for all directories in Azure AD, whereas a tenant-specific endpoint might be `https://login.microsoftonline.com/contoso.onmicrosoft.com`. 
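The two endpoint styles differ only in the tenant segment of the URL. As an illustrative sketch (the `buildAuthorizeUrl` helper is hypothetical, not part of any Microsoft library):

```javascript
// Hypothetical helper: build an Azure AD authorize URL for either the common
// endpoint (multi-tenant apps) or a tenant-specific endpoint (single-tenant apps).
function buildAuthorizeUrl({ tenant = "common", clientId, redirectUri }) {
  const params = new URLSearchParams({
    client_id: clientId,
    response_type: "code",
    redirect_uri: redirectUri,
  });
  return `https://login.microsoftonline.com/${tenant}/oauth2/authorize?${params}`;
}

// A multi-tenant app directs sign-in requests to the common endpoint:
buildAuthorizeUrl({
  clientId: "2d4d11a2-f814-46a7-890a-274a72a7309e",
  redirectUri: "https://localhost/auth",
});
// → https://login.microsoftonline.com/common/oauth2/authorize?...

// A single-tenant app uses its own directory:
buildAuthorizeUrl({
  tenant: "contoso.onmicrosoft.com",
  clientId: "2d4d11a2-f814-46a7-890a-274a72a7309e",
  redirectUri: "https://localhost/auth",
});
// → https://login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/authorize?...
```

Note that choosing the common endpoint only changes where the request is sent; the app itself still has to validate which tenant the signed-in user came from when it processes the resulting tokens.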
The common endpoint is especially important to consider when developing your application because you'll need the necessary logic to handle multiple tenants during sign-in, sign-out, and token validation. --If you are currently developing a single tenant application but want to make it available to many organizations, you can easily make changes to the application and its configuration in Azure AD to make it multi-tenant capable. In addition, Azure AD uses the same signing key for all tokens in all directories, whether you are providing authentication in a single tenant or multi-tenant application. --Each scenario listed in this document includes a subsection that describes its provisioning requirements. For more in-depth information about provisioning an application in Azure AD and the differences between single and multi-tenant applications, see [Integrating applications with Azure Active Directory](../develop/single-and-multi-tenant-apps.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). Continue reading to understand the common application scenarios in Azure AD. --## Next steps --- Learn more about other Azure AD [authentication basics](v1-authentication-scenarios.md) |
active-directory | Azure Ad Endpoint Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/azure-ad-endpoint-comparison.md | - Title: Why update to Microsoft identity platform (v2.0) -description: Know the differences between the Microsoft identity platform (v2.0) endpoint and the Azure Active Directory (Azure AD) v1.0 endpoint, and learn the benefits of updating to v2.0. -------- Previously updated : 11/09/2022-------# Why update to Microsoft identity platform (v2.0)? --When developing a new application, it's important to know the differences between the Microsoft identity platform (v2.0) and Azure Active Directory (v1.0) endpoints. This article covers the main differences between the endpoints and some existing limitations for Microsoft identity platform. --## Who can sign in --![Who can sign in with v1.0 and v2.0 endpoints](media/azure-ad-endpoint-comparison/who-can-signin.svg) --* The v1.0 endpoint allows only work and school accounts to sign in to your application (Azure AD) -* The Microsoft identity platform endpoint allows work and school accounts from Microsoft Entra ID and personal Microsoft accounts (MSA), such as hotmail.com, outlook.com, and msn.com, to sign in. -* Both endpoints also accept sign-ins of *[guest users](../external-identities/what-is-b2b.md)* of a Microsoft Entra directory for applications configured as *[single-tenant](../develop/single-and-multi-tenant-apps.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json)* or for *multi-tenant* applications configured to point to the tenant-specific endpoint (`https://login.microsoftonline.com/{TenantId_or_Name}`). --The Microsoft identity platform endpoint allows you to write apps that accept sign-ins from personal Microsoft accounts, and work and school accounts. This gives you the ability to write your app completely account-agnostic. 
For example, if your app calls the [Microsoft Graph](https://graph.microsoft.io), some additional functionality and data will be available to work accounts, such as their SharePoint sites or directory data. But for many actions, such as [Reading a user's mail](/graph/api/user-list-messages), the same code can access the email for both personal and work and school accounts. --For the Microsoft identity platform endpoint, you can use the Microsoft Authentication Library (MSAL) to gain access to the consumer, educational, and enterprise worlds. The Azure AD v1.0 endpoint accepts sign-ins from work and school accounts only. --## Incremental and dynamic consent --Apps using the Azure AD v1.0 endpoint are required to specify their required OAuth 2.0 permissions in advance, for example: --![Example showing the Permissions Registration UI](./media/azure-ad-endpoint-comparison/app-reg-permissions.png) --The permissions set directly on the application registration are **static**. While static permissions of the app defined in the Azure portal keep the code nice and simple, they present some possible issues for developers: --* The app needs to request all the permissions it would ever need upon the user's first sign-in. This can lead to a long list of permissions that discourages end users from approving the app's access on initial sign-in. --* The app needs to know all of the resources it would ever access ahead of time. This makes it difficult to create apps that can access an arbitrary number of resources. --With the Microsoft identity platform endpoint, you can ignore the static permissions defined in the app registration information in the Azure portal and request permissions incrementally instead, which means asking for a bare minimum set of permissions upfront and growing more over time as the customer uses additional app features.
To do so, you can specify the scopes your app needs at any time by including the new scopes in the `scope` parameter when requesting an access token - without the need to pre-define them in the application registration information. If the user hasn't yet consented to new scopes added to the request, they'll be prompted to consent only to the new permissions. To learn more, see [permissions, consent, and scopes](../develop/permissions-consent-overview.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --Allowing an app to request permissions dynamically through the `scope` parameter gives developers full control over the user's experience. You can also front-load your consent experience and ask for all permissions in one initial authorization request. If your app requires a large number of permissions, you can gather those permissions from the user incrementally as they try to use certain features of the app over time. --Admin consent done on behalf of an organization still requires the static permissions registered for the app, so you should set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization. This reduces the cycles required by the organization admin to set up the application. --## Scopes, not resources --For apps using the v1.0 endpoint, an app can behave as a **resource**, or a recipient of tokens. A resource can define a number of **scopes** or **oAuth2Permissions** that it understands, allowing client apps to request tokens from that resource for a certain set of scopes. Consider the Microsoft Graph API as an example of a resource: --* Resource identifier, or `AppID URI`: `https://graph.microsoft.com/` -* Scopes, or `oAuth2Permissions`: `Directory.Read`, `Directory.Write`, and so on. --This holds true for the Microsoft identity platform endpoint.
An app can still behave as a resource, define scopes, and be identified by a URI. Client apps can still request access to those scopes. However, the way that a client requests those permissions has changed. --For the v1.0 endpoint, an OAuth 2.0 authorize request to Azure AD might have looked like: --```text -GET https://login.microsoftonline.com/common/oauth2/authorize? -client_id=2d4d11a2-f814-46a7-890a-274a72a7309e -&resource=https://graph.microsoft.com/ -... -``` --Here, the **resource** parameter indicated the resource for which the client app was requesting authorization. Azure AD computed the permissions required by the app based on static configuration in the Azure portal, and issued tokens accordingly. --For applications using the Microsoft identity platform endpoint, the same OAuth 2.0 authorize request looks like: --```text -GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize? -client_id=2d4d11a2-f814-46a7-890a-274a72a7309e -&scope=https://graph.microsoft.com/directory.read%20https://graph.microsoft.com/directory.write -... -``` --Here, the **scope** parameter indicates the resource and permissions for which the app is requesting authorization. The desired resource is still present in the request - it's encompassed in each of the values of the scope parameter. Using the scope parameter in this manner allows the Microsoft identity platform endpoint to be more compliant with the OAuth 2.0 specification, and aligns more closely with common industry practices. It also enables apps to do [incremental consent](#incremental-and-dynamic-consent) - only requesting permissions when the application requires them as opposed to up front. --## Well-known scopes --### Offline access --Apps using the Microsoft identity platform endpoint may require the use of a new well-known permission for apps - the `offline_access` scope.
All apps will need to request this permission if they need to access resources on behalf of a user for a prolonged period of time, even when the user may not be actively using the app. The `offline_access` scope will appear to the user in consent dialogs as **Access your data anytime**, which the user must agree to. Requesting the `offline_access` permission will enable your web app to receive OAuth 2.0 refresh_tokens from the Microsoft identity platform endpoint. Refresh tokens are long-lived, and can be exchanged for new OAuth 2.0 access tokens for extended periods of access. --If your app doesn't request the `offline_access` scope, it won't receive refresh tokens. This means that when you redeem an authorization code in the OAuth 2.0 authorization code flow, you'll only receive back an access token from the `/token` endpoint. That access token remains valid for a short period of time (typically one hour), but will eventually expire. At that point in time, your app will need to redirect the user back to the `/authorize` endpoint to retrieve a new authorization code. During this redirect, the user may or may not need to enter their credentials again or reconsent to permissions, depending on the type of app. --To learn more about OAuth 2.0, `refresh_tokens`, and `access_tokens`, check out the [Microsoft identity platform protocol reference](../develop/v2-protocols.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --### OpenID, profile, and email --Historically, the most basic OpenID Connect sign-in flow with Microsoft identity platform would provide a lot of information about the user in the resulting *id_token*. The claims in an id_token can include the user's name, preferred username, email address, object ID, and more. --The information that the `openid` scope affords your app access to is now restricted.
The `openid` scope will only allow your app to sign in the user and receive an app-specific identifier for the user. If you want to get personal data about the user in your app, your app needs to request additional permissions from the user. Two new scopes, `email` and `profile`, will allow you to request additional permissions. --* The `email` scope allows your app access to the user's primary email address through the `email` claim in the id_token, assuming the user has an addressable email address. -* The `profile` scope affords your app access to all other basic information about the user, such as their name, preferred username, object ID, and so on, in the id_token. --These scopes allow you to code your app in a minimal-disclosure fashion so you can only ask the user for the set of information that your app needs to do its job. For more information on these scopes, see [the Microsoft identity platform scope reference](../develop/permissions-consent-overview.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --## Token claims --The Microsoft identity platform endpoint issues a smaller set of claims in its tokens by default to keep payloads small. If you have apps and services that have a dependency on a particular claim in a v1.0 token that is no longer provided by default in a Microsoft identity platform token, consider using the [optional claims](../develop/optional-claims.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) feature to include that claim. --> [!IMPORTANT] -> v1.0 and v2.0 tokens can be issued by both the v1.0 and v2.0 endpoints! id_tokens *always* match the endpoint they're requested from, and access tokens *always* match the format expected by the Web API your client will call using that token. 
So if your app uses the v2.0 endpoint to get a token to call Microsoft Graph, which expects v1.0 format access tokens, your app will receive a token in the v1.0 format. --## Limitations --There are a few restrictions to be aware of when using Microsoft identity platform. --When you build applications that integrate with the Microsoft identity platform, you need to decide whether the Microsoft identity platform endpoint and authentication protocols meet your needs. The v1.0 endpoint and platform are still fully supported and, in some respects, more feature rich than Microsoft identity platform. However, Microsoft identity platform [introduces significant benefits](azure-ad-endpoint-comparison.md) for developers. --Here's a simplified recommendation for developers now: --* If you want or need to support personal Microsoft accounts in your application, or you're writing a new application, use Microsoft identity platform. But before you do, make sure you understand the limitations discussed in this article. -* If you're migrating or updating an application that relies on SAML, you can't use Microsoft identity platform. Instead, refer to the [Azure AD v1.0 guide](v1-overview.md). --The Microsoft identity platform endpoint will evolve to eliminate the restrictions listed here, so that you'll only ever need to use the Microsoft identity platform endpoint. In the meantime, use this article to determine whether the Microsoft identity platform endpoint is right for you. We'll continue to update this article to reflect the current state of the Microsoft identity platform endpoint. Check back to reevaluate your requirements against Microsoft identity platform capabilities. --### Restrictions on app registrations --For each app that you want to integrate with the Microsoft identity platform endpoint, you can create an app registration in the new [**App registrations** experience](https://aka.ms/appregistrations) in the Azure portal.
Existing Microsoft account apps aren't compatible with the portal, but all Microsoft Entra apps are, regardless of where or when they were registered. --App registrations that support work and school accounts and personal accounts have the following caveats: --* Only two app secrets are allowed per application ID. -* An application that wasn't registered in a tenant can only be managed by the account that registered it. It can't be shared with other developers. This is the case for most apps that were registered using a personal Microsoft account in the App Registration Portal. If you'd like to share your app registration with multiple developers, register the application in a tenant using the new **App registrations** section of the Azure portal. -* There are several restrictions on the format of allowed redirect URLs. For more information about redirect URLs, see the next section. --### Restrictions on redirect URLs --For the most up-to-date information about restrictions on redirect URLs for apps that are registered for Microsoft identity platform, see [Redirect URI/reply URL restrictions and limitations](../develop/reply-url.md) in the Microsoft identity platform documentation. --To learn how to register an app for use with Microsoft identity platform, see [Register an app using the new App registrations experience](../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --### Restrictions on libraries and SDKs --Currently, library support for the Microsoft identity platform endpoint is limited. If you want to use the Microsoft identity platform endpoint in a production application, you have these options: --* If you're building a web application, you can safely use the generally available server-side middleware to do sign-in and token validation. These include the OWIN OpenID Connect middleware for ASP.NET and the Node.js Passport plug-in.
For code samples that use Microsoft middleware, see the [Microsoft identity platform getting started](../develop/v2-overview.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json#getting-started) section. -* If you're building a desktop or mobile application, you can use one of the Microsoft Authentication Libraries (MSAL). These libraries are generally available or in a production-supported preview, so it is safe to use them in production applications. You can read more about the terms of the preview and the available libraries in [authentication libraries reference](../develop/reference-v2-libraries.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). -* For platforms not covered by Microsoft libraries, you can integrate with the Microsoft identity platform endpoint by directly sending and receiving protocol messages in your application code. The OpenID Connect and OAuth protocols [are explicitly documented](../develop/v2-protocols.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) to help you do such an integration. -* Finally, you can use open-source OpenID Connect and OAuth libraries to integrate with the Microsoft identity platform endpoint. The Microsoft identity platform endpoint should be compatible with many open-source protocol libraries without changes. The availability of these kinds of libraries varies by language and platform. The [OpenID Connect](https://openid.net/connect/) and [OAuth 2.0](https://oauth.net/2/) websites maintain a list of popular implementations. 
For more information, see [Microsoft identity platform and authentication libraries](../develop/reference-v2-libraries.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json), and the list of open-source client libraries and samples that have been tested with the Microsoft identity platform endpoint. -* For reference, the `.well-known` endpoint for the Microsoft identity platform common endpoint is `https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration`. Replace `common` with your tenant ID to get data specific to your tenant. --### Protocol changes --The Microsoft identity platform endpoint does not support SAML or WS-Federation; it only supports OpenID Connect and OAuth 2.0. The notable changes to the OAuth 2.0 protocols from the v1.0 endpoint are: --* The `email` claim is returned if an optional claim is configured **or** `scope=email` was specified in the request. -* The `scope` parameter is now supported in place of the `resource` parameter. -* Many responses have been modified to make them more compliant with the OAuth 2.0 specification, for example, correctly returning `expires_in` as an integer instead of a string. --To better understand the scope of protocol functionality supported in the Microsoft identity platform endpoint, see [OpenID Connect and OAuth 2.0 protocol reference](../develop/v2-protocols.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --#### SAML usage --If you've used Active Directory Authentication Library (ADAL) in Windows applications, you might have taken advantage of Windows Integrated authentication, which uses the Security Assertion Markup Language (SAML) assertion grant. With this grant, users of federated Microsoft Entra tenants can silently authenticate with their on-premises Active Directory instance without entering credentials. 
While [SAML is still a supported protocol](../develop/saml-protocol-reference.md) for use with enterprise users, the v2.0 endpoint is only for use with OAuth 2.0 applications. --## Next steps --Learn more in the [Microsoft identity platform documentation](../develop/index.yml). |
active-directory | Azure Ad Federation Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/azure-ad-federation-metadata.md | - Title: Azure AD Federation Metadata -description: This article describes the federation metadata document that Azure Active Directory publishes for services that accept Azure Active Directory tokens. ------- Previously updated : 01/07/2017-------# Federation metadata ---Azure Active Directory (Azure AD) publishes a federation metadata document for services that are configured to accept the security tokens that Azure AD issues. The federation metadata document format is described in the [Web Services Federation Language (WS-Federation) Version 1.2](https://docs.oasis-open.org/wsfed/federation/v1.2/os/ws-federation-1.2-spec-os.html), which extends [Metadata for the OASIS Security Assertion Markup Language (SAML) v2.0](https://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf). --## Tenant-specific and Tenant-independent metadata endpoints -Azure AD publishes tenant-specific and tenant-independent endpoints. --Tenant-specific endpoints are designed for a particular tenant. The tenant-specific federation metadata includes information about the tenant, including tenant-specific issuer and endpoint information. Applications that restrict access to a single tenant use tenant-specific endpoints. --Tenant-independent endpoints provide information that is common to all Azure AD tenants. This information applies to tenants hosted at *login.microsoftonline.com* and is shared across tenants. Tenant-independent endpoints are recommended for multi-tenant applications, since they are not associated with any particular tenant. --## Federation metadata endpoints -Azure AD publishes federation metadata at `https://login.microsoftonline.com/<TenantDomainName>/FederationMetadata/2007-06/FederationMetadata.xml`. 
--For **tenant-specific endpoints**, the `TenantDomainName` can be one of the following types: --* A registered domain name of an Azure AD tenant, such as: `contoso.onmicrosoft.com`. -* The immutable tenant ID of the domain, such as `72f988bf-86f1-41af-91ab-2d7cd011db45`. --For **tenant-independent endpoints**, the `TenantDomainName` is `common`. This document lists only the Federation Metadata elements that are common to all Azure AD tenants that are hosted at login.microsoftonline.com. --For example, a tenant-specific endpoint might be `https://login.microsoftonline.com/contoso.onmicrosoft.com/FederationMetadata/2007-06/FederationMetadata.xml`. The tenant-independent endpoint is [https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml](https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml). You can view the federation metadata document by typing this URL in a browser. --## Contents of federation metadata -The following section provides information needed by services that consume the tokens issued by Azure AD. --### Entity ID -The `EntityDescriptor` element contains an `EntityID` attribute. The value of the `EntityID` attribute represents the issuer, that is, the security token service (STS) that issued the token. It is important to validate the issuer when you receive a token. --The following metadata shows a sample tenant-specific `EntityDescriptor` element with an `EntityID` attribute. --``` -<EntityDescriptor -xmlns="urn:oasis:names:tc:SAML:2.0:metadata" -ID="_b827a749-cfcb-46b3-ab8b-9f6d14a1294b" -entityID="https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db45/"> -``` -You can replace the `{tenant}` value in the tenant-independent `EntityID` with your tenant ID to create a tenant-specific `EntityID` value. The resulting value will be the same as the token issuer. This strategy allows a multi-tenant application to validate the issuer for a given tenant. 
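As a sketch of that issuer-validation strategy, the following Python snippet builds the expected tenant-specific issuer from the tenant-independent `EntityID` value and compares it with the issuer found in a received token. The helper names are illustrative, and the tenant ID is the sample value used in this article.

```python
# Sketch: validate a token's issuer in a multi-tenant application by
# substituting the tenant ID into the tenant-independent EntityID value.
# Helper names are illustrative; the tenant ID is the sample from this article.
ENTITY_ID_TEMPLATE = "https://sts.windows.net/{tenant}/"

def expected_issuer(tenant_id: str) -> str:
    # "{tenant}" is literal text in the metadata, so replace it directly.
    return ENTITY_ID_TEMPLATE.replace("{tenant}", tenant_id)

def issuer_is_valid(token_issuer: str, tenant_id: str) -> bool:
    # The issuer claim of a valid token must match the expected value exactly.
    return token_issuer == expected_issuer(tenant_id)

tenant = "72f988bf-86f1-41af-91ab-2d7cd011db45"
print(expected_issuer(tenant))
# https://sts.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db45/
```

A real service would read the issuer from the token's issuer claim before comparing; this sketch only shows the string substitution and comparison.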
--The following metadata shows a sample tenant-independent `EntityID` element. Note that `{tenant}` is a literal value, not a placeholder. --``` -<EntityDescriptor -xmlns="urn:oasis:names:tc:SAML:2.0:metadata" -ID="_0e5bd9d0-49ef-4258-bc15-21ce143b61bd" -entityID="https://sts.windows.net/{tenant}/"> -``` --### Token signing certificates -When a service receives a token that is issued by an Azure AD tenant, the signature of the token must be validated with a signing key that is published in the federation metadata document. The federation metadata includes the public portion of the certificates that the tenants use for token signing. The certificate raw bytes appear in the `KeyDescriptor` element. The token signing certificate is valid for signing only when the value of the `use` attribute is `signing`. --A federation metadata document published by Azure AD can have multiple signing keys, such as when Azure AD is preparing to update the signing certificate. When a federation metadata document includes more than one certificate, a service that is validating the tokens should support all certificates in the document. --The following metadata shows a sample `KeyDescriptor` element with a signing key. 
--``` -<KeyDescriptor use="signing"> -<KeyInfo xmlns="https://www.w3.org/2000/09/xmldsig#"> -<X509Data> -<X509Certificate> -MIIDPjCCAiqgAwIBAgIQVWmXY/+9RqFTeGY1D711EORX/lVXpr+ecGgqfUWF8MPB07XkYuJ54DAuYT318+2XrzMjOtqkT94VkXmxv6dFGhG8YZ8vNMPd4tdj9c0lpvWQdqXtL1TlFRpD/P6UMEigfN0c9oWDg9U7Ilymgei0UXtf1gtcQbc5sSQU0S4vr9YJp2gLFIGK11Iqg4XSGdcI0QWLLkkC6cBukhVnd6BCYbLjTYy3fNs4DzNdemJlxGl8sLexFytBF6YApvSdus3nFXaMCtBGx16HzkK9ne3lobAwL2o79bP4imEGqg+ibvyNmbrwFGnQrBc1jTF9LyQX9q+louxVfHs6ZiVwIDAQABo2IwYDBeBgNVHQEEVzBVgBCxDDsLd8xkfOLKm4Q/SzjtoS8wLTErMCkGA1UEAxMiYWNjb3VudHMuYWNjZXNzY29udHJvbC53aW5kb3dzLm5ldIIQVWmXY/+9RqFA/OG9kFulHDAJBgUrDgMCHQUAA4IBAQAkJtxxm/ErgySlNk69+1odTMP8Oy6L0H17z7XGG3w4TqvTUSWaxD4hSFJ0e7mHLQLQD7oV/erACXwSZn2pMoZ89MBDjOMQA+e6QzGB7jmSzPTNmQgMLA8fWCfqPrz6zgH+1F1gNp8hJY57kfeVPBiyjuBmlTEBsBlzolY9dd/55qqfQk6cgSeCbHCy/RU/iep0+UsRMlSgPNNmqhj5gmN2AFVCN96zF694LwuPae5CeR2ZcVknexOWHYjFM0MgUSw0ubnGl0h9AJgGyhvNGcjQqu9vd1xkupFgaN+f7P3p3EVN5csBg5H94jEcQZT7EKeTiZ6bTrpDAnrr8tDCy8ng -</X509Certificate> -</X509Data> -</KeyInfo> -</KeyDescriptor> - ``` --The `KeyDescriptor` element appears in two places in the federation metadata document: in the WS-Federation-specific section and in the SAML-specific section. The certificates published in both sections will be the same. --In the WS-Federation-specific section, a WS-Federation metadata reader would read the certificates from a `RoleDescriptor` element with the `SecurityTokenServiceType` type. --The following metadata shows a sample `RoleDescriptor` element. --``` -<RoleDescriptor xmlns:xsi="https://www.w3.org/2001/XMLSchema-instance" xmlns:fed="https://docs.oasis-open.org/wsfed/federation/200706" xsi:type="fed:SecurityTokenServiceType" protocolSupportEnumeration="https://docs.oasis-open.org/wsfed/federation/200706"> -``` --In the SAML-specific section, a SAML metadata reader would read the certificates from an `IDPSSODescriptor` element. --The following metadata shows a sample `IDPSSODescriptor` element. 
--``` -<IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> -``` -There are no differences in the format of tenant-specific and tenant-independent certificates. --### WS-Federation endpoint URL -The federation metadata includes the URL that Azure AD uses for single sign-in and single sign-out in the WS-Federation protocol. This endpoint appears in the `PassiveRequestorEndpoint` element. --The following metadata shows a sample `PassiveRequestorEndpoint` element for a tenant-specific endpoint. --``` -<fed:PassiveRequestorEndpoint> -<EndpointReference xmlns="https://www.w3.org/2005/08/addressing"> -<Address> -https://login.microsoftonline.com/72f988bf-86f1-41af-91ab-2d7cd011db45/wsfed -</Address> -</EndpointReference> -</fed:PassiveRequestorEndpoint> -``` -For the tenant-independent endpoint, the WS-Federation URL appears in the same element, as shown in the following sample. --``` -<fed:PassiveRequestorEndpoint> -<EndpointReference xmlns="https://www.w3.org/2005/08/addressing"> -<Address> -https://login.microsoftonline.com/common/wsfed -</Address> -</EndpointReference> -</fed:PassiveRequestorEndpoint> -``` --### SAML protocol endpoint URL -The federation metadata includes the URL that Azure AD uses for single sign-in and single sign-out in the SAML 2.0 protocol. These endpoints appear in the `IDPSSODescriptor` element. --The sign-in and sign-out URLs appear in the `SingleSignOnService` and `SingleLogoutService` elements. --The following metadata shows a sample `IDPSSODescriptor` element for a tenant-specific endpoint. 
--``` -<IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> -… - <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/contoso.onmicrosoft.com/saml2" /> - <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/contoso.onmicrosoft.com/saml2" /> - </IDPSSODescriptor> -``` --Similarly, the common SAML 2.0 protocol endpoints are published in the tenant-independent federation metadata, as shown in the following sample. --``` -<IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> -… - <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/common/saml2" /> - <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://login.microsoftonline.com/common/saml2" /> - </IDPSSODescriptor> -``` |
active-directory | Conditional Access Dev Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/conditional-access-dev-guide.md | - Title: Azure AD Conditional Access developer guidance -description: Developer guidance and scenarios for Azure AD Conditional Access ------ Previously updated : 02/28/2019---------# Developer guidance for the Azure AD Conditional Access feature ---> [!NOTE] -> For the Microsoft identity platform version of this article, see [Developer guidance for Microsoft Entra Conditional Access](../develop/v2-conditional-access-dev-guide.md). --The Conditional Access feature in Microsoft Entra ID is one of several ways you can secure your app and protect a service. Conditional Access enables developers and enterprise customers to protect services in a multitude of ways, including: --* Multi-factor authentication -* Allowing only Intune enrolled devices to access specific services -* Restricting user locations and IP ranges --For more information on the full capabilities of Conditional Access, see [What is Conditional Access](../conditional-access/overview.md). --For developers building apps for Azure AD, this article shows how you can use Conditional Access. You'll also learn about the impact of accessing resources that you don't control, which may have Conditional Access policies applied. The article also explores the implications of Conditional Access in the on-behalf-of flow, web apps, accessing Microsoft Graph, and calling APIs. --Knowledge of [single and multi-tenant](../develop/howto-convert-app-to-be-multi-tenant.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) apps and [common authentication patterns](v1-authentication-scenarios.md) is assumed. --## How does Conditional Access impact an app? 
--### App types impacted --In the most common cases, Conditional Access does not change an app's behavior or require any changes from the developer. Only when an app indirectly or silently requests a token for a service does it require code changes to handle Conditional Access "challenges". Handling a challenge may be as simple as performing an interactive sign-in request. --Specifically, the following scenarios require code to handle Conditional Access "challenges": --* Apps performing the on-behalf-of flow -* Apps accessing multiple services/resources -* Single-page apps using ADAL.js -* Web Apps calling a resource --Conditional Access policies can be applied to the app itself, and also to a web API your app accesses. To learn more about how to configure a Conditional Access policy, see [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md). --Depending on the scenario, an enterprise customer can apply and remove Conditional Access policies at any time. In order for your app to continue functioning when a new policy is applied, you need to implement "challenge" handling. The following examples illustrate challenge handling. --### Conditional Access examples --Some scenarios require code changes to handle Conditional Access, whereas others work as is. Here are a few scenarios using Conditional Access to do multi-factor authentication that give some insight into the difference. --* You are building a single-tenant iOS app and apply a Conditional Access policy. The app signs in a user and doesn't request access to an API. When the user signs in, the policy is automatically invoked and the user needs to perform multi-factor authentication (MFA). -* You are building a native app that uses a middle tier service to access a downstream API. An enterprise customer at the company using this app applies a policy to the downstream API. 
When an end user signs in, the native app requests access to the middle tier and sends the token. The middle tier performs the on-behalf-of flow to request access to the downstream API. At this point, a claims "challenge" is presented to the middle tier. The middle tier sends the challenge back to the native app, which needs to comply with the Conditional Access policy. --#### Microsoft Graph --Microsoft Graph has special considerations when building apps in Conditional Access environments. Generally, the mechanics of Conditional Access behave the same, but the policies your users see will be based on the underlying data your app is requesting from Microsoft Graph. --Specifically, all Microsoft Graph scopes represent some dataset that can individually have policies applied. Because Conditional Access policies are assigned to specific datasets, Azure AD enforces them based on the data behind Microsoft Graph rather than on Graph itself. --For example, if an app requests the following Microsoft Graph scopes: --``` -scopes="Bookings.Read.All Mail.Read" -``` --the app can expect its users to fulfill all policies set on Bookings and Exchange. A scope may map to multiple datasets if it grants access to more than one. --### Complying with a Conditional Access policy --For several different app topologies, a Conditional Access policy is evaluated when the session is established. As a Conditional Access policy operates on the granularity of apps and services, the point at which it is invoked depends heavily on the scenario you're trying to accomplish. --When your app attempts to access a service with a Conditional Access policy, it may encounter a Conditional Access challenge. This challenge is encoded in the `claims` parameter that comes in a response from Azure AD. Here's an example of this challenge parameter: --``` -claims={"access_token":{"polids":{"essential":true,"Values":["<GUID>"]}}} -``` --Developers can take this challenge and append it onto a new request to Azure AD. 
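For illustration, here is a minimal Python sketch of extracting the policy IDs from such a `claims` challenge and preparing it as a query parameter for a new request. The GUID below is a placeholder, and the parameter handling is simplified rather than a complete authorization request.

```python
import json
from urllib.parse import urlencode

# Sample challenge in the format shown above; the GUID is a placeholder.
claims_challenge = (
    '{"access_token":{"polids":{"essential":true,'
    '"Values":["00000000-0000-0000-0000-000000000000"]}}}'
)

def policy_ids(claims: str) -> list:
    # The challenge is JSON; the policy IDs live under access_token.polids.Values.
    return json.loads(claims)["access_token"]["polids"]["Values"]

print(policy_ids(claims_challenge))
# ['00000000-0000-0000-0000-000000000000']

# Appending the challenge to a new request is just an extra query parameter.
query = urlencode({"claims": claims_challenge})
```

In practice, an authentication library appends this parameter for you when you pass the challenge to its token-acquisition call; the sketch only shows the shape of the data.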
Passing this state prompts the end user to perform any action necessary to comply with the Conditional Access policy. In the following scenarios, specifics of the error and how to extract the parameter are explained. --## Scenarios --### Prerequisites --Microsoft Entra Conditional Access is a feature included in [Microsoft Entra ID P1 or P2](../fundamentals/whatis.md). You can learn more about licensing requirements in the [unlicensed usage report](../reports-monitoring/overview-monitoring-health.md). Developers can join the [Microsoft Developer Network](/), which includes a free subscription to the Enterprise Mobility Suite and, with it, Microsoft Entra ID P1 or P2. --### Considerations for specific scenarios --The following information only applies in these Conditional Access scenarios: --* Apps performing the on-behalf-of flow -* Apps accessing multiple services/resources -* Single-page apps using ADAL.js --The following sections discuss common scenarios that are more complex. The core operating principle is that Conditional Access policies are evaluated when a token is requested for the service that has a Conditional Access policy applied. --## Scenario: App performing the on-behalf-of flow --In this scenario, we walk through the case in which a native app calls a web service/API. In turn, this service does the "on-behalf-of" flow to call a downstream service. In our case, we've applied our Conditional Access policy to the downstream service (Web API 2) and are using a native app rather than a server/daemon app. --![App performing the on-behalf-of flow diagram](./media/conditional-access-dev-guide/app-performing-on-behalf-of-scenario.png) --The initial token request for Web API 1 does not prompt the end user for multi-factor authentication, as Web API 1 may not always hit the downstream API. When Web API 1 tries to request a token on behalf of the user for Web API 2, the request fails because the user has not signed in with multi-factor authentication. 
--Azure AD returns an HTTP response with some interesting data: --> [!NOTE] -> In this instance it's a multi-factor authentication error description, but there's a wide range of possible `interaction_required` errors pertaining to Conditional Access. --``` -HTTP 400; Bad Request -error=interaction_required -error_description=AADSTS50076: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '<Web API 2 App/Client ID>'. -claims={"access_token":{"polids":{"essential":true,"Values":["<GUID>"]}}} -``` --In Web API 1, we catch the error `error=interaction_required`, and send back the `claims` challenge to the desktop app. At that point, the desktop app can make a new `acquireToken()` call and append the `claims` challenge as an extra query string parameter. This new request requires the user to perform multi-factor authentication; the app then sends the new token back to Web API 1 to complete the on-behalf-of flow. --To try out this scenario, see our [.NET code sample](https://github.com/Azure-Samples/active-directory-dotnet-webapi-onbehalfof-ca). It demonstrates how to pass the claims challenge back from Web API 1 to the native app and construct a new request inside the client app. 
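The middle-tier handling in this scenario can be sketched as follows. This is a hedged Python sketch, not an ADAL or MSAL API: the function name and response-dictionary shape are illustrative, mirroring the error body shown above.

```python
# Sketch: Web API 1 inspects the on-behalf-of token response. On success it
# gets an access token; on interaction_required it relays the claims
# challenge to the client app so the user can satisfy the policy.
# Names and response shape are illustrative, not a specific library's API.
def handle_obo_response(response: dict):
    if "access_token" in response:
        return ("ok", response["access_token"])
    if response.get("error") == "interaction_required":
        # Relay the claims challenge unchanged; the client appends it to a
        # new acquireToken() request.
        return ("challenge", response.get("claims"))
    raise RuntimeError(response.get("error_description", "token request failed"))

failed = {
    "error": "interaction_required",
    "error_description": "AADSTS50076: multi-factor authentication required.",
    "claims": '{"access_token":{"polids":{"essential":true,"Values":["<GUID>"]}}}',
}
print(handle_obo_response(failed)[0])
# challenge
```

The key design point is that the middle tier never tries to satisfy the challenge itself; only the client app, which can show UI, can complete the interactive step.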
If the app requests a token for web service B, then the policy is invoked and subsequent requests for web service A also succeed, as follows. --![App accessing multiple-services flow diagram](./media/conditional-access-dev-guide/app-accessing-multiple-services-scenario.png) --Alternatively, if the app initially requests a token for web service A, the Conditional Access policy is not invoked for the end user. This allows the app developer to control the end user experience and not force the Conditional Access policy to be invoked in all cases. The tricky case is if the app subsequently requests a token for web service B. At this point, the end user needs to comply with the Conditional Access policy. When the app tries to `acquireToken`, it may generate the following error (illustrated in the following diagram): --``` -HTTP 400; Bad Request -error=interaction_required -error_description=AADSTS50076: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '<Web API App/Client ID>'. -claims={"access_token":{"polids":{"essential":true,"Values":["<GUID>"]}}} -``` --![App accessing multiple services requesting a new token](./media/conditional-access-dev-guide/app-accessing-multiple-services-new-token.png) --If the app is using the ADAL library, a failure to acquire the token is retried interactively. When this interactive request occurs, the end user has the opportunity to comply with the Conditional Access policy. The exception is a request made with `AcquireTokenSilentAsync` or `PromptBehavior.Never`, in which case the app needs to perform an interactive `AcquireToken` request to give the end user the opportunity to comply with the policy. 
This is a simple architecture, but it has some nuances that need to be taken into account when developing around Conditional Access. --In ADAL.js, there are a few functions that obtain tokens: `login()`, `acquireToken(...)`, `acquireTokenPopup(...)`, and `acquireTokenRedirect(...)`. --* `login()` obtains an ID token through an interactive sign-in request but does not obtain access tokens for any service (including a Conditional Access protected web API). -* `acquireToken(...)` can then be used to silently obtain an access token, meaning it does not show UI in any circumstance. -* `acquireTokenPopup(...)` and `acquireTokenRedirect(...)` are both used to interactively request a token for a resource, meaning they always show sign-in UI. --When an app needs an access token to call a Web API, it attempts an `acquireToken(...)`. If the token session is expired or the app needs to comply with a Conditional Access policy, then the `acquireToken` function fails and the app uses `acquireTokenPopup()` or `acquireTokenRedirect()`. --![Single-page app using ADAL flow diagram](./media/conditional-access-dev-guide/spa-using-adal-scenario.png) --Let's walk through an example with our Conditional Access scenario. The end user just landed on the site and doesn't have a session. We perform a `login()` call and get an ID token without multi-factor authentication. Then the user hits a button that requires the app to request data from a web API. The app tries to do an `acquireToken()` call but fails, since the user has not performed multi-factor authentication yet and needs to comply with the Conditional Access policy. --Azure AD sends back the following HTTP response: --``` -HTTP 400; Bad Request -error=interaction_required -error_description=AADSTS50076: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '<Web API App/Client ID>'. -``` --Our app needs to catch the `error=interaction_required`. 
The application can then use either `acquireTokenPopup()` or `acquireTokenRedirect()` on the same resource. The user is forced to perform multi-factor authentication. After the user completes the multi-factor authentication, the app is issued a fresh access token for the requested resource. --To try out this scenario, see our [JS SPA On-behalf-of code sample](https://github.com/Azure-Samples/active-directory-dotnet-webapi-onbehalfof-ca). This code sample uses the Conditional Access policy and web API you registered earlier with a JS SPA to demonstrate this scenario. It shows how to properly handle the claims challenge and get an access token that can be used for your Web API. Alternatively, check out the general [Angular.js code sample](https://github.com/Azure-Samples/active-directory-angularjs-singlepageapp) for guidance on an Angular SPA. --## See also --* To learn more about the capabilities, see [Conditional Access in Microsoft Entra ID](../conditional-access/overview.md). -* For more Microsoft Entra ID code samples, see [GitHub repo of code samples](https://github.com/azure-samples?utf8=%E2%9C%93&q=active-directory). -* For more info on the ADAL SDKs and to access the reference documentation, see [library guide](active-directory-authentication-libraries.md). -* To learn more about multi-tenant scenarios, see [How to sign in users using the multi-tenant pattern](../develop/howto-convert-app-to-be-multi-tenant.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). |
active-directory | Howto Get Appsource Certified | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/howto-get-appsource-certified.md | - Title: How to get AppSource certified for Azure Active Directory| Microsoft Docs -description: Details on how to get your application AppSource certified for Azure Active Directory. -------- Previously updated : 08/21/2018-------# How to get AppSource Certified for Azure Active Directory ---[Microsoft AppSource](https://appsource.microsoft.com/) is a destination for business users to discover, try, and manage line-of-business SaaS applications (standalone SaaS and add-ons to existing Microsoft SaaS products). --To list a standalone SaaS application on AppSource, your application must accept single sign-on from work accounts from any company or organization that has Azure Active Directory (Azure AD). The sign-in process must use the [OpenID Connect](v1-protocols-openid-connect-code.md) or [OAuth 2.0](v1-protocols-oauth-code.md) protocols. SAML integration is not accepted for AppSource certification. --## Guides and code samples --If you want to learn how to integrate your application with Azure AD using OpenID Connect, follow our guides and code samples in the [Azure Active Directory developer's guide](v1-overview.md#get-started "Get Started with Azure AD for developers"). --## Multi-tenant applications --A *multi-tenant application* is an application that accepts sign-ins from users of any company or organization that has Azure AD, without requiring a separate instance, configuration, or deployment. AppSource recommends that applications implement multi-tenancy to enable the *single-click* free trial experience. --To enable multi-tenancy on your application, follow these steps: -1. Set the `Multi-Tenanted` property to `Yes` on your application registration's information in the [Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps). 
By default, applications created in the Azure portal are configured as *[single-tenant](#single-tenant-applications)*. -1. Update your code to send requests to the `common` endpoint. To do this, update the endpoint from `https://login.microsoftonline.com/{yourtenant}` to `https://login.microsoftonline.com/common`. -1. For some platforms, like ASP.NET, you also need to update your code to accept multiple issuers. --For more information about multi-tenancy, see [How to sign in any Azure Active Directory (Azure AD) user using the multi-tenant application pattern](../develop/howto-convert-app-to-be-multi-tenant.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --### Single-tenant applications --A *single-tenant application* is an application that only accepts sign-ins from users of a defined Azure AD instance. External users (including work or school accounts from other organizations, or personal accounts) can sign in to a single-tenant application after each user is added as a guest account to the Azure AD instance in which the application is registered. --You can add users as guest accounts to Azure AD through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md), and you can do this [programmatically](/azure/active-directory-b2c/integrate-with-app-code-samples). When using B2B, you can create a self-service portal so that users don't need an invitation to sign in. For more info, see [Self-service portal for Azure AD B2B collaboration sign-up](../external-identities/self-service-portal.md). --Single-tenant applications can enable the *Contact Me* experience, but if you want to enable the single-click free trial experience that AppSource recommends, enable multi-tenancy on your application instead. 
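Returning to the multi-tenancy steps above, the endpoint change in step 2 can be sketched in Python. The helper name is hypothetical and `contoso.onmicrosoft.com` is an example tenant; real apps would typically set the authority in their auth library's configuration instead.

```python
# Sketch: rewrite a tenant-specific Azure AD authority to the common endpoint
# so that users from any tenant can sign in. The helper name is illustrative.
BASE = "https://login.microsoftonline.com/"

def to_common_authority(authority: str) -> str:
    if not authority.startswith(BASE):
        raise ValueError("not a login.microsoftonline.com authority")
    return BASE + "common"

print(to_common_authority("https://login.microsoftonline.com/contoso.onmicrosoft.com"))
# https://login.microsoftonline.com/common
```

Remember that switching to `common` is only one of the steps: the registration must also be marked multi-tenant, and issuer validation must accept multiple issuers.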
--## AppSource trial experiences --### Free trial (customer-led trial experience) --The customer-led trial is the experience that AppSource recommends because it offers single-click access to your application. The following example shows what this experience looks like: --<table> -<tr> <td valign="top" width="33%">1.<br/><img src="media/active-directory-devhowto-appsource-certified/customer-led-trial-step1.png" width="85%" alt="Shows Free trial for customer-led trial experience."/><ul><li>User finds your application in the AppSource web site</li><li>Selects 'Free trial' option</li></ul></td> - <td valign="top" width="33%">2.<br/><img src="media/active-directory-devhowto-appsource-certified/customer-led-trial-step2.png" width="85%" alt="Shows how user is redirected to a URL in your web site."/><ul><li>AppSource redirects the user to a URL in your web site</li><li>Your web site starts the <i>single-sign-on</i> process automatically (on page load)</li></ul></td> - <td valign="top" width="33%">3.<br/><img src="media/active-directory-devhowto-appsource-certified/customer-led-trial-step3.png" width="85%" alt="Shows the Microsoft sign-in page."/><ul><li>User is redirected to the Microsoft sign-in page</li><li>User provides credentials to sign in</li></ul></td> -</tr> -<tr> - <td valign="top" width="33%">4.<br/><img src="media/active-directory-devhowto-appsource-certified/customer-led-trial-step4.png" width="85%" alt="Example: Consent page for an application."/><ul><li>User gives consent for your application</li></ul></td> - <td valign="top" width="33%">5.<br/><img src="media/active-directory-devhowto-appsource-certified/customer-led-trial-step5.png" width="85%" alt="Shows the experience the user sees when redirected back to your site."/><ul><li>Sign-in completes and user is redirected back to your web site</li><li>User starts the free trial</li></ul></td> - <td></td> -</tr> -</table> --### Contact me (partner-led trial experience) --You can use the partner trial experience when a
manual or a long-term operation needs to happen to provision the user/company--for example, your application needs to provision virtual machines or database instances, or perform operations that take a long time to complete. In this case, after the user selects the **Request Trial** button and fills out a form, AppSource sends you the user's contact information. When you receive this information, you then provision the environment and send the instructions to the user on how to access the trial experience:<br/><br/> --<table valign="top"> -<tr> - <td valign="top" width="33%">1.<br/><img src="media/active-directory-devhowto-appsource-certified/partner-led-trial-step1.png" width="85%" alt="Shows Contact me for partner-led trial experience"/><ul><li>User finds your application in the AppSource web site</li><li>Selects 'Contact Me' option</li></ul></td> - <td valign="top" width="33%">2.<br/><img src="media/active-directory-devhowto-appsource-certified/partner-led-trial-step2.png" width="85%" alt="Shows an example form with contact info"/><ul><li>Fills out a form with contact information</li></ul></td> - <td valign="top" width="33%">3.<br/><br/> - <table bgcolor="#f7f7f7"> - <tr> - <td><img src="media/active-directory-devhowto-appsource-certified/usercontact.png" width="55%" alt="Shows placeholder for user information"/></td> - <td>You receive user information</td> - </tr> - <tr> - <td><img src="media/active-directory-devhowto-appsource-certified/setupenv.png" width="55%" alt="Shows placeholder for setup environment info"/></td> - <td>Set up environment</td> - </tr> - <tr> - <td><img src="media/active-directory-devhowto-appsource-certified/contactcustomer.png" width="55%" alt="Shows placeholder for trial info"/></td> - <td>Contact user with trial info</td> - </tr> - </table><br/><br/> - <ul><li>You receive the user's information and set up the trial instance</li><li>You send the user a hyperlink to access your application</li></ul> - </td> -</tr> -<tr> - <td valign="top" 
width="33%">4.<br/><img src="media/active-directory-devhowto-appsource-certified/partner-led-trial-step3.png" width="85%" alt="Shows the application sign-in screen"/><ul><li>User accesses your application and completes the single-sign-on process</li></ul></td> - <td valign="top" width="33%">5.<br/><img src="media/active-directory-devhowto-appsource-certified/partner-led-trial-step4.png" width="85%" alt="Shows an example consent page for an application"/><ul><li>User gives consent for your application</li></ul></td> - <td valign="top" width="33%">6.<br/><img src="media/active-directory-devhowto-appsource-certified/customer-led-trial-step5.png" width="85%" alt="Shows the experience the user sees when redirected back to your site"/><ul><li>Sign-in completes and user is redirected back to your web site</li><li>User starts the free trial</li></ul></td> -</tr> -</table> --### More information --For more information about the AppSource trial experience, see [this video](https://aka.ms/trialexperienceforwebapps). --## Next Steps --- For more information on building applications that support Azure AD sign-ins, see [Authentication scenarios for Azure AD](./v1-authentication-scenarios.md).-- For information on how to list your SaaS application in AppSource, see [AppSource Partner Information](https://appsource.microsoft.com/partners).--## Get support --For Azure AD integration, we use [Microsoft Q&A](/answers/) with the community to provide support. --We highly recommend that you ask your questions on Microsoft Q&A first and browse existing issues to see if someone has asked your question before. Make sure that your questions or comments are tagged with [`[azure-active-directory]`](/answers/topics/azure-active-directory.html). 
|
active-directory | Howto Reactivate Disabled Acs Namespaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/howto-reactivate-disabled-acs-namespaces.md | - Title: Reactivate disabled Azure Access Control Service (ACS) namespaces -description: Find and enable your Azure Access Control Service (ACS) namespaces and request an extension to keep them enabled until February 4, 2019. -------- Previously updated : 01/21/2019-------# How to: Reactivate disabled Access Control Service namespaces ---In November 2017, we announced that Microsoft Azure Access Control Service (ACS), a service of Azure Active Directory (Azure AD), is being retired on November 7, 2018. --Since then, we've sent emails to the ACS subscriptions' admin email addresses about the ACS retirement 12 months, 9 months, 6 months, 3 months, 1 month, 2 weeks, 1 week, and 1 day before the retirement date of November 7, 2018. --On October 3, 2018, we announced (through email and [a blog post](https://azure.microsoft.com/blog/one-month-retirement-notice-access-control-service/)) an extension offer to customers who can't finish their migration before November 7, 2018. The announcement also had instructions for requesting the extension. --## Why your namespace is disabled --If you haven't opted in for the extension, we'll start to disable ACS namespaces starting November 7, 2018. You must have already requested the extension to February 4, 2019; otherwise, you will not be able to enable the namespaces through PowerShell. --> [!NOTE] -> You must be a service administrator or co-administrator of the subscription to run the PowerShell commands and request an extension. --## Find and enable your ACS namespaces --You can use ACS PowerShell to list all your ACS namespaces and reactivate ones that have been disabled. --1. Download and install ACS PowerShell: - 1. 
Go to the PowerShell Gallery and download [Acs.Namespaces](https://www.powershellgallery.com/packages/Acs.Namespaces/1.0.2). - 1. Install the module: -- ```powershell - Install-Module -Name Acs.Namespaces - ``` -- 1. Get a list of all possible commands: -- ```powershell - Get-Command -Module Acs.Namespaces - ``` -- To get help on a specific command, run: -- ```powershell - Get-Help [Command-Name] -Full - ``` - - where `[Command-Name]` is the name of the ACS command. -1. Connect to ACS using the **Connect-AcsAccount** cmdlet. -- You may need to change your execution policy by running **Set-ExecutionPolicy** before you can run the command. -1. List your available Azure subscriptions using the **Get-AcsSubscription** cmdlet. -1. List your ACS namespaces using the **Get-AcsNamespace** cmdlet. -1. Confirm that the namespaces are disabled by confirming that `State` is `Disabled`. -- [![Confirm that the namespaces are disabled](./media/howto-reactivate-disabled-acs-namespaces/confirm-disabled-namespace.png)](./media/howto-reactivate-disabled-acs-namespaces/confirm-disabled-namespace.png#lightbox) -- You can also use `nslookup {your-namespace}.accesscontrol.windows.net` to confirm if the domain is still active. --1. Enable your ACS namespace(s) using the **Enable-AcsNamespace** cmdlet. -- Once you've enabled your namespace(s), you can request an extension so that the namespace(s) won't be disabled again before February 4, 2019. After that date, all requests to ACS will fail. --## Request an extension --We are taking new extension requests starting on January 21, 2019. --We will start disabling namespaces for customers who have requested extensions to February 4, 2019. You can still re-enable namespaces through PowerShell, but the namespaces will be disabled again after 48 hours. --After March 4, 2019, customers will no longer be able to re-enable any namespaces through PowerShell. --Further extensions will no longer be automatically approved. 
If you need additional time to migrate, contact [Azure support](https://portal.azure.com/#create/Microsoft.Support) to provide a detailed migration timeline. --### To request an extension ---1. Sign in to the [Azure portal](https://portal.azure.com) and create a [new support request](https://portal.azure.com/#create/Microsoft.Support). -1. Fill in the new support request form as shown in the following example. -- | Support request field | Value | - |--|--| - | **Issue type** | `Technical` | - | **Subscription** | Set to your subscription | - | **Service** | `All services` | - | **Resource** | `General question/Resource not available` | - | **Problem type** | `ACS to SAS Migration` | - | **Subject** | Describe the issue | -- ![Shows an example of a new technical support request](./media/howto-reactivate-disabled-acs-namespaces/new-technical-support-request.png) --<!-- --1. Navigate to your ACS namespace's management portal by going to `https://{your-namespace}.accesscontrol.windows.net`. -1. Select the **Read Terms** button to read the [updated Terms of Use](https://azure.microsoft.com/support/legal/access-control/), which will direct you to a page with the updated Terms of Use. -- [![Select the Read Terms button](./media/howto-reactivate-disabled-acs-namespaces/read-terms-button-expanded.png)](./media/howto-reactivate-disabled-acs-namespaces/read-terms-button-expanded.png#lightbox) --1. Select **Request Extension** on the banner at the top of the page. The button will only be enabled after you read the [updated Terms of Use](https://azure.microsoft.com/support/legal/access-control/). -- [![Select the Request Extension button](./media/howto-reactivate-disabled-acs-namespaces/request-extension-button-expanded.png)](./media/howto-reactivate-disabled-acs-namespaces/request-extension-button-expanded.png#lightbox) --1. After the extension request is registered, the page will refresh with a new banner at the top of the page. 
-- [![Updated page with refreshed banner](./media/howto-reactivate-disabled-acs-namespaces/updated-banner-expanded.png)](./media/howto-reactivate-disabled-acs-namespaces/updated-banner-expanded.png#lightbox) >--## Help and support --- If you run into any issues after following this how-to, contact [Azure support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).-- If you have questions or feedback about ACS retirement, contact us at acsfeedback@microsoft.com.--## Next steps --- Review the information about ACS retirement in [How to: Migrate from the Azure Access Control Service](active-directory-acs-migration.md). |
active-directory | Howto V1 Enable Sso Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/howto-v1-enable-sso-android.md | - Title: How to enable cross-app SSO on Android using ADAL -description: How to use the features of the ADAL SDK to enable single sign-on across your applications. -------- Previously updated : 09/24/2018-------# How to: Enable cross-app SSO on Android using ADAL ---Single sign-on (SSO) lets users enter their credentials once and have those credentials automatically work across applications and platforms, no matter the publisher, even in applications from other vendors that use the same identities (such as a Microsoft account or a work account from Microsoft 365). --Microsoft's identity platform, along with the SDKs, makes it easy to enable SSO within your own suite of apps, or, with the broker capability and Authenticator applications, across the entire device. --In this how-to, you'll learn how to configure the SDK within your application to provide SSO to your customers. --## Prerequisites --This how-to assumes that you know how to: --- Provision your app using the legacy portal for Azure Active Directory (Azure AD). For more info, see [Register an app](../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json)-- Integrate your application with the [Azure AD Android SDK](https://github.com/AzureAD/azure-activedirectory-library-for-android).--## Single sign-on concepts --### Identity brokers --Microsoft provides applications for every mobile platform that bridge credentials across applications from different vendors and enable enhanced features that require a single, secure place to validate credentials. These are called **brokers**. 
--On iOS and Android, brokers are provided through downloadable applications that customers either install independently or that are pushed to the device by a company that manages some, or all, of the devices for their employees. Brokers support managing security just for some applications or for the entire device, based on IT admin configuration. In Windows, this functionality is provided by an account chooser built in to the operating system, known technically as the Web Authentication Broker. --#### Broker assisted logins --Broker-assisted logins are login experiences that occur within the broker application and use the storage and security of the broker to share credentials across all applications on the device that use the identity platform. This means that your applications rely on the broker to sign users in. On iOS and Android, these brokers are provided through downloadable applications that customers either install independently or that are pushed to the device by a company that manages the device for their user. An example of this type of application is the Microsoft Authenticator application on iOS. In Windows, this functionality is provided by an account chooser built in to the operating system, known technically as the Web Authentication Broker. -The experience varies by platform and can sometimes be disruptive to users if not managed correctly. You're probably most familiar with this pattern if you have the Facebook application installed and use Facebook Connect from another application. The identity platform uses the same pattern. --On Android, the account chooser is displayed on top of your application, which is less disruptive to the user. --#### How the broker gets invoked --If a compatible broker is installed on the device, like the Microsoft Authenticator application, the identity SDKs will automatically do the work of invoking the broker for you when a user indicates they wish to log in using any account from the identity platform. 
--#### How Microsoft ensures the application is valid --Ensuring the identity of the application that calls the broker is crucial to the security provided in broker-assisted logins. iOS and Android do not enforce unique identifiers that are valid only for a given application, so malicious applications may "spoof" a legitimate application's identifier and receive the tokens meant for the legitimate application. To ensure Microsoft is always communicating with the right application at runtime, the developer is asked to provide a custom redirect URI when registering their application with Microsoft. **How developers should craft this redirect URI is discussed in detail below.** This custom redirect URI contains the certificate thumbprint of the application and is ensured to be unique to the application by the Google Play Store. When an application calls the broker, the broker asks the Android operating system to provide it with the certificate thumbprint of the caller. The broker provides this certificate thumbprint to Microsoft in the call to the identity system. If the certificate thumbprint of the application does not match the certificate thumbprint provided to us by the developer during registration, access is denied to the tokens for the resource the application is requesting. This check ensures that only the application registered by the developer receives tokens. --Broker-assisted logins have the following benefits: --* User experiences SSO across all their applications no matter the vendor. -* Your application can use more advanced business features such as Conditional Access and support Intune scenarios. -* Your application can support certificate-based authentication for business users. -* A more secure sign-in experience, as the identity of the application and the user are verified by the broker application with additional security algorithms and encryption. 
--Here is a representation of how the SDKs work with the broker applications to enable SSO: --```
+------------+ +------------+ +------------+
|            | |            | |            |
|   App 1    | |   App 2    | |  Someone   |
|            | |            | |  else's    |
|            | |            | |  App       |
+------------+ +------------+ +------------+
|  ADAL SDK  | |  ADAL SDK  | |  ADAL SDK  |
+-----+------+ +-----+------+ +-----+------+
      |              |              |
      |       +------v------+       |
      |       |             |       |
      |       |  Microsoft  |       |
      +------>|   Broker    |<------+
              | Application |
              +------+------+
                     |
              +------v------+
              |             |
              |   Broker    |
              |   Storage   |
              |             |
              +-------------+
```
--### Turning on SSO for broker assisted SSO --The ability for an application to use any broker that is installed on the device is turned off by default. In order to use your application with the broker, you must do some additional configuration and add some code to your application. --The steps to follow are: --1. Enable broker mode in your application code's call to the MS SDK -2. Establish a new redirect URI and provide it to both the app and your app registration -3. Set up the correct permissions in the Android manifest --#### Step 1: Enable broker mode in your application --The ability for your application to use the broker is turned on when you create the "settings" or initial setup of your Authentication instance. To do this in your app: --``` -AuthenticationSettings.Instance.setUseBroker(true); -``` --#### Step 2: Establish a new redirect URI with your URL Scheme --To ensure that the right application receives the returned credential tokens, the call back to your application must happen in a way that the Android operating system can verify. The Android operating system uses the hash of the certificate in the Google Play store; this hash cannot be spoofed by a rogue application. Along with the URI of the broker application, it allows Microsoft to ensure that the tokens are returned to the correct application. A unique redirect URI must be registered for the application. 
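The redirect URI registered in step 2 embeds your app's package name and a hash of its signing certificate (the exact format is specified next). As an illustrative sketch only, assuming the signature component is the URL-encoded Base64 of the SHA-1 digest of the DER-encoded signing certificate, such a URI could be assembled like this (`broker_redirect_uri` is a hypothetical helper, shown in Python rather than Android code):

```python
import base64
import hashlib
from urllib.parse import quote

def broker_redirect_uri(package_name: str, signing_cert: bytes) -> str:
    """Assemble msauth://<package>/<signature>, assuming the signature is the
    URL-encoded Base64 of the SHA-1 digest of the app's signing certificate."""
    digest = hashlib.sha1(signing_cert).digest()                    # 20-byte digest
    signature = quote(base64.b64encode(digest).decode("ascii"), safe="")  # '/' -> %2F, '=' -> %3D
    return "msauth://{}/{}".format(package_name, signature)
```

In practice you would derive the signature from your release keystore (for example, with `keytool`) rather than compute it in app code; the sketch only shows how the pieces fit together.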
--Your redirect URI must be in the proper form of: --`msauth://packagename/Base64UrlencodedSignature` --ex: *msauth://com.example.userapp/IcB5PxIyvbLkbFVtBI%2FitkW%2Fejk%3D* --You can register this redirect URI in your app registration using the [Azure portal](https://portal.azure.com/). For more information on Azure AD app registration, see [Integrating with Azure Active Directory](../develop/how-to-integrate.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --#### Step 3: Set up the correct permissions in your application --The broker application in Android uses the Account Manager feature of the Android OS to manage credentials across applications. To use the broker in Android, your app manifest must have permissions to use AccountManager accounts. These permissions are discussed in detail in the [Google documentation for Account Manager](https://developer.android.com/reference/android/accounts/AccountManager.html). --In particular, these permissions are: --``` -GET_ACCOUNTS -USE_CREDENTIALS -MANAGE_ACCOUNTS -``` --### You've configured SSO! --Now the identity SDK will automatically share credentials across your applications and invoke the broker if it's present on the device. --## Next steps --* Learn about [Single sign-on SAML protocol](../develop/single-sign-on-saml-protocol.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) |
active-directory | Howto V1 Enable Sso Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/howto-v1-enable-sso-ios.md | - Title: How to enable cross-app SSO on iOS using ADAL -description: How to use the features of the ADAL SDK to enable single sign-on across your applications. -------- Previously updated : 09/24/2018-------# How to: Enable cross-app SSO on iOS using ADAL ---Single sign-on (SSO) lets users enter their credentials once and have those credentials automatically work across applications and platforms, no matter the publisher, even in applications from other vendors that use the same identities (such as a Microsoft account or a work account from Microsoft 365). --Microsoft's identity platform, along with the SDKs, makes it easy to enable SSO within your own suite of apps, or, with the broker capability and Authenticator applications, across the entire device. --In this how-to, you'll learn how to configure the SDK within your application to provide SSO to your customers. --This how-to applies to: --* Azure Active Directory (Azure AD) -* Azure Active Directory B2C -* Azure Active Directory B2B -* Azure Active Directory Conditional Access --## Prerequisites --This how-to assumes that you know how to: --* Provision your app using the legacy portal for Azure AD. For more info, see [Register an app](../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) -* Integrate your application with the [Azure AD iOS SDK](https://github.com/AzureAD/azure-activedirectory-library-for-objc). --## Single sign-on concepts --### Identity brokers --Microsoft provides applications for every mobile platform that bridge credentials across applications from different vendors and enable enhanced features that require a single, secure place to validate credentials. These are called **brokers**. 
--On iOS and Android, brokers are provided through downloadable applications that customers either install independently or that are pushed to the device by a company that manages some, or all, of the devices for their employees. Brokers support managing security just for some applications or for the entire device, based on IT admin configuration. In Windows, this functionality is provided by an account chooser built in to the operating system, known technically as the Web Authentication Broker. --### Patterns for logging in on mobile devices --Access to credentials on devices follows two basic patterns: --* Non-broker assisted logins -* Broker assisted logins --#### Non-broker assisted logins --Non-broker assisted logins are login experiences that happen inline with the application and use the local storage on the device for that application. This storage may be shared across applications, but the credentials are tightly bound to the app or suite of apps using that credential. You've most likely experienced this in many mobile applications when you enter a username and password within the application itself. --These logins have the following benefits: --* User experience exists entirely within the application. -* Credentials can be shared across applications that are signed by the same certificate, providing a single sign-on experience to your suite of applications. -* Control around the experience of logging in is provided to the application before and after sign-in. --These logins have the following drawbacks: --* Users can't experience single sign-on across all apps that use a Microsoft identity; SSO is limited to the apps that share your application's configuration. -* Your application cannot be used with more advanced business features such as Conditional Access or use the Intune suite of products. -* Your application can't support certificate-based authentication for business users. 
--Here is a representation of how the SDKs work with the shared storage of your applications to enable SSO: --```
+------------+ +------------+ +------------+
|            | |            | |            |
|   App 1    | |   App 2    | |   App 3    |
|            | |            | |            |
+------------+ +------------+ +------------+
|  ADAL SDK  | |  ADAL SDK  | |  ADAL SDK  |
+-----+------+ +-----+------+ +-----+------+
      |              |              |
+-----v--------------v--------------v------+
|            App Shared Storage            |
+------------------------------------------+
```
--#### Broker assisted logins --Broker-assisted logins are login experiences that occur within the broker application and use the storage and security of the broker to share credentials across all applications on the device that use the identity platform. This means that your applications rely on the broker to sign users in. On iOS and Android, these brokers are provided through downloadable applications that customers either install independently or that are pushed to the device by a company that manages the device for their user. An example of this type of application is the Microsoft Authenticator application on iOS. In Windows, this functionality is provided by an account chooser built in to the operating system, known technically as the Web Authentication Broker. --The experience varies by platform and can sometimes be disruptive to users if not managed correctly. You're probably most familiar with this pattern if you have the Facebook application installed and use Facebook Connect from another application. The identity platform uses the same pattern. --For iOS, this leads to a "transition" animation where your application is sent to the background while the Microsoft Authenticator application comes to the foreground for the user to select which account they would like to sign in with. --For Android and Windows, the account chooser is displayed on top of your application, which is less disruptive to the user. 
--#### How the broker gets invoked --If a compatible broker is installed on the device, like the Microsoft Authenticator application, the SDKs will automatically do the work of invoking the broker for you when a user indicates they wish to log in using any account from the identity platform. This account could be a personal Microsoft account, a work or school account, or an account that you provide and host in Azure using our B2C and B2B products. --#### How we ensure the application is valid --Ensuring the identity of the application that calls the broker is crucial to the security we provide in broker-assisted logins. Neither iOS nor Android enforces unique identifiers that are valid only for a given application, so malicious applications may "spoof" a legitimate application's identifier and receive the tokens meant for the legitimate application. To ensure we are always communicating with the right application at runtime, we ask the developer to provide a custom redirect URI when registering their application with Microsoft. How developers should craft this redirect URI is discussed in detail below. This custom redirect URI contains the Bundle ID of the application and is ensured to be unique to the application by the Apple App Store. When an application calls the broker, the broker asks the iOS operating system to provide it with the Bundle ID of the caller. The broker provides this Bundle ID to Microsoft in the call to our identity system. If the Bundle ID of the application does not match the Bundle ID provided to us by the developer during registration, we will deny access to the tokens for the resource the application is requesting. This check ensures that only the application registered by the developer receives tokens. 
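The check described above can be modeled in a few lines. This is an illustrative Python sketch of the broker's decision, not actual broker code; `broker_allows` is a hypothetical helper that compares the Bundle ID reported by the operating system with the Bundle ID embedded in the registered redirect URI:

```python
from urllib.parse import urlparse

def broker_allows(registered_redirect_uri: str, os_reported_bundle_id: str) -> bool:
    """Deny tokens unless the OS-reported caller matches the Bundle ID
    embedded in the developer's registered redirect URI."""
    # For a URI like x-msauth-mytestiosapp://com.myapp.mytestapp,
    # the host component is the Bundle ID.
    embedded_bundle_id = urlparse(registered_redirect_uri).netloc
    return embedded_bundle_id == os_reported_bundle_id
```

Because the OS vouches for the caller's Bundle ID and the App Store keeps Bundle IDs unique, a spoofing app cannot pass this comparison.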
--**The developer has the choice whether the SDK calls the broker or uses the non-broker assisted flow.** However, if developers choose not to use the broker-assisted flow, they lose the benefit of SSO credentials that the user may have already added on the device, and their application can't be used with business features Microsoft provides its customers, such as Conditional Access, Intune management capabilities, and certificate-based authentication. --These logins have the following benefits: --* User experiences SSO across all their applications no matter the vendor. -* Your application can use more advanced business features such as Conditional Access or use the Intune suite of products. -* Your application can support certificate-based authentication for business users. -* A more secure sign-in experience, as the identity of the application and the user are verified by the broker application with additional security algorithms and encryption. --These logins have the following drawbacks: --* In iOS, the user is transitioned out of your application's experience while credentials are chosen. -* Loss of the ability to manage the login experience for your customers within your application. --Here is a representation of how the SDKs work with the broker applications to enable SSO: --```
+------------+ +------------+ +------------+
|            | |            | |            |
|   App 1    | |   App 2    | |  Someone   |
|            | |            | |  else's    |
|            | |            | |  App       |
+------------+ +------------+ +------------+
| Azure SDK  | | Azure SDK  | | Azure SDK  |
+-----+------+ +-----+------+ +-----+------+
      |              |              |
      |       +------v------+       |
      |       |             |       |
      |       |  Microsoft  |       |
      +------>|   Broker    |<------+
              | Application |
              +------+------+
                     |
              +------v------+
              |             |
              |   Broker    |
              |   Storage   |
              |             |
              +-------------+
```
--## Enabling cross-app SSO using ADAL --Here we use the ADAL iOS SDK to: --* Turn on non-broker assisted SSO for your suite of apps -* Turn on support for broker-assisted SSO --### Turning on SSO for non-broker assisted SSO --For non-broker assisted SSO across applications, the SDKs manage much of the complexity of SSO for you. 
This includes finding the right user in the cache and maintaining a list of logged-in users for you to query. --To enable SSO across applications you own, you need to do the following: --1. Ensure all your applications use the same Client ID or Application ID. -2. Ensure that all of your applications share the same signing certificate from Apple so that you can share keychains. -3. Request the same keychain entitlement for each of your applications. -4. Tell the SDKs about the shared keychain you want us to use. --#### Using the same Client ID / Application ID for all the applications in your suite of apps --In order for the identity platform to know that it's allowed to share tokens across your applications, each of your applications needs to share the same Client ID or Application ID. This is the unique identifier that was provided to you when you registered your first application in the portal. --Redirect URIs allow you to identify different apps to the Microsoft identity service even when they use the same Application ID. Each application can have multiple redirect URIs registered in the onboarding portal. Each app in your suite will have a different redirect URI. An example of how this looks is below: --App1 Redirect URI: `x-msauth-mytestiosapp://com.myapp.mytestapp` --App2 Redirect URI: `x-msauth-mytestiosapp://com.myapp.mytestapp2` --App3 Redirect URI: `x-msauth-mytestiosapp://com.myapp.mytestapp3` --.... --These are nested under the same Client ID / Application ID and looked up based on the redirect URI you return to us in your SDK configuration. --```
+---------------------+
|                     |
|      Client ID      |
|                     |
+----------+----------+
           |
           |    +----------------------+
           +--->|  App 1 Redirect URI  |
           |    +----------------------+
           |
           |    +----------------------+
           +--->|  App 2 Redirect URI  |
           |    +----------------------+
           |
           |    +----------------------+
           +--->|  App 3 Redirect URI  |
                +----------------------+
```
--The format of these redirect URIs is explained below. 
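The lookup described above can be sketched as a simple table keyed by redirect URI, using the example URIs from this section. This is an illustrative model of the registration, not a real API; `REGISTERED_REDIRECT_URIS` and `app_for_redirect_uri` are hypothetical names:

```python
# One Client ID shared by the whole suite; each app registers its own redirect URI.
CLIENT_ID = "your-client-id"  # placeholder; the same value in every app

REGISTERED_REDIRECT_URIS = {
    "x-msauth-mytestiosapp://com.myapp.mytestapp": "App1",
    "x-msauth-mytestiosapp://com.myapp.mytestapp2": "App2",
    "x-msauth-mytestiosapp://com.myapp.mytestapp3": "App3",
}

def app_for_redirect_uri(redirect_uri):
    # The identity service distinguishes apps under the shared Client ID
    # by an exact match on the redirect URI returned in the SDK configuration.
    return REGISTERED_REDIRECT_URIS.get(redirect_uri)
```

An unregistered redirect URI simply finds no entry, which is why each app in the suite must register its own URI up front.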
You may use any redirect URI unless you wish to support the broker, in which case your redirect URIs must look something like the example above.

#### Create keychain sharing between applications

Enabling keychain sharing is beyond the scope of this document and is covered by Apple in [Adding Capabilities](https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/AddingCapabilities/AddingCapabilities.html). What is important is that you decide what you want your keychain to be called and add that capability across all your applications.

When you have entitlements set up correctly, you should see a file in your project directory named `entitlements.plist` that contains something like the following:

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>keychain-access-groups</key>
	<array>
		<string>$(AppIdentifierPrefix)com.myapp.mytestapp</string>
		<string>$(AppIdentifierPrefix)com.myapp.mycache</string>
	</array>
</dict>
</plist>
```

Once you have the keychain entitlement enabled in each of your applications and you are ready to use SSO, tell the identity SDK about your keychain by using the following setting in your `ADAuthenticationSettings`:

```
defaultKeychainSharingGroup=@"com.myapp.mycache";
```

> [!WARNING]
> When you share a keychain across your applications, any application can delete users or, worse, delete all the tokens across your applications. This is particularly disastrous if you have applications that rely on the tokens to do background work. Sharing a keychain means that you must be very careful in any and all remove operations through the identity SDKs.

That's it! The SDK will now share credentials across all your applications. The user list will also be shared across application instances.
### Turning on SSO for broker-assisted SSO

The ability for an application to use any broker that is installed on the device is **turned off by default**. To use your application with the broker, you must do some additional configuration and add some code to your application.

The steps to follow are:

1. Enable broker mode in your application code's call to the SDK.
2. Register a URL scheme for your application.
3. Establish a new redirect URI and provide it to both the app and your app registration.
4. Add a configuration parameter to your `info.plist` file.

#### Step 1: Enable broker mode in your application

The ability for your application to use the broker is turned on when you create the "context", or initial setup, of your authentication object. You do this by setting your credentials type in your code:

```
/*! See the ADCredentialsType enumeration definition for details */
@property ADCredentialsType credentialsType;
```

The `AD_CREDENTIALS_AUTO` setting allows the SDK to try to call out to the broker; `AD_CREDENTIALS_EMBEDDED` prevents the SDK from calling the broker.

#### Step 2: Registering a URL Scheme

The identity platform uses URLs to invoke the broker and then return control back to your application. To finish that round trip, you need a URL scheme registered for your application that the identity platform will know about. This can be in addition to any other app schemes you may have previously registered with your application.

> [!WARNING]
> We recommend making the URL scheme fairly unique to minimize the chances of another app using the same URL scheme. Apple does not enforce the uniqueness of URL schemes that are registered in the app store.

Below is an example of how this appears in your project configuration.
You can also do this in Xcode:

```
<key>CFBundleURLTypes</key>
<array>
	<dict>
		<key>CFBundleTypeRole</key>
		<string>Editor</string>
		<key>CFBundleURLName</key>
		<string>com.myapp.mytestapp</string>
		<key>CFBundleURLSchemes</key>
		<array>
			<string>x-msauth-mytestiosapp</string>
		</array>
	</dict>
</array>
```

#### Step 3: Establish a new redirect URI with your URL Scheme

To ensure that we always return the credential tokens to the correct application, we need to call back to your application in a way that the iOS operating system can verify. The iOS operating system reports to the Microsoft broker applications the Bundle ID of the application calling it, which cannot be spoofed by a rogue application. Therefore, we use the Bundle ID along with the URI of our broker application to ensure that the tokens are returned to the correct application. You must establish this unique redirect URI both in your application and as a Redirect URI in the developer portal.

Your redirect URI must be in the form:

`<app-scheme>://<your.bundle.id>`

ex: *x-msauth-mytestiosapp://com.myapp.mytestapp*

This redirect URI needs to be specified in your app registration using the [Azure portal](https://portal.azure.com/). For more information on Azure AD app registration, see [Integrating with Azure Active Directory](../develop/how-to-integrate.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).

##### Step 3a: Add a redirect URI in your app and dev portal to support certificate-based authentication

If you wish to support certificate-based authentication in your application, register a second "msauth" redirect URI in your application and in the [Azure portal](https://portal.azure.com/) to handle certificate authentication.
`msauth://code/<broker-redirect-uri-in-url-encoded-form>`

ex: *msauth://code/x-msauth-mytestiosapp%3A%2F%2Fcom.myapp.mytestapp*

#### Step 4: Add a configuration parameter to your app

ADAL uses `-canOpenURL:` to check if the broker is installed on the device. Beginning with iOS 9, Apple locked down which schemes an application can query. You will need to add "msauth" to the `LSApplicationQueriesSchemes` section of your `info.plist` file.

```
	<key>LSApplicationQueriesSchemes</key>
	<array>
		<string>msauth</string>
	</array>
```

### You've configured SSO!

Now the identity SDK will automatically share credentials across your applications and invoke the broker if it's present on the device.

## Next steps

* Learn about [Single sign-on SAML protocol](../develop/single-sign-on-saml-protocol.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) |
active-directory | Native App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/native-app.md | - Title: Native apps in Azure Active Directory -description: Describes what native apps are and the basics on protocol flow, registration, and token expiration for this app type. -------- Previously updated : 09/24/2018-------# Native apps

Native apps are applications that call a web API on behalf of a user. This scenario is built on the OAuth 2.0 authorization code grant type with a public client, as described in section 4.1 of the [OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). The native application obtains an access token for the user by using the OAuth 2.0 protocol. This access token is then sent in the request to the web API, which authorizes the user and returns the desired resource.

## Diagram

![Native Application to Web API Diagram](./media/authentication-scenarios/native-app-to-web-api.png)

## Protocol flow

If you are using the Azure AD Authentication Libraries, most of the protocol details described below are handled for you, such as the browser pop-up, token caching, and handling of refresh tokens.

1. Using a browser pop-up, the native application makes a request to the authorization endpoint in Azure AD. This request includes the Application ID and the redirect URI of the native application as shown in the Azure portal, and the application ID URI for the web API. If the user hasn't already signed in, they are prompted to sign in.
1. Azure AD authenticates the user. If it is a multi-tenant application and consent is required to use the application, the user will be required to consent if they haven't already done so. After granting consent and upon successful authentication, Azure AD issues an authorization code response back to the client application's redirect URI.
1. 
When Azure AD issues an authorization code response back to the redirect URI, the client application stops browser interaction and extracts the authorization code from the response. Using this authorization code, the client application sends a request to Azure AD's token endpoint that includes the authorization code, details about the client application (Application ID and redirect URI), and the desired resource (application ID URI for the web API).
1. The authorization code and information about the client application and web API are validated by Azure AD. Upon successful validation, Azure AD returns two tokens: a JWT access token and a JWT refresh token. In addition, Azure AD returns basic information about the user, such as their display name and tenant ID.
1. Over HTTPS, the client application uses the returned JWT access token to add the JWT string with a "Bearer" designation in the Authorization header of the request to the web API. The web API then validates the JWT token, and if validation is successful, returns the desired resource.
1. When the access token expires, the client application will receive an error that indicates the user needs to authenticate again. If the application has a valid refresh token, it can be used to acquire a new access token without prompting the user to sign in again. If the refresh token expires, the application will need to interactively authenticate the user once again.

> [!NOTE]
> The refresh token issued by Azure AD can be used to access multiple resources. For example, if you have a client application that has permission to call two web APIs, the refresh token can be used to get an access token to the other web API as well.

## Code samples

See the code samples for Native Application to Web API scenarios, and check back often; we add new samples frequently. [Native Application to Web API](sample-v1-code.md#desktop-and-mobile-public-client-applications-calling-microsoft-graph-or-a-web-api).
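As a concrete illustration of the token request in step 3 of the flow above, here is a minimal sketch of the form body a client would POST to the v1.0 token endpoint. The tenant, Application ID, redirect URI, and authorization code are all placeholders; in practice, ADAL builds and sends this request for you.

```python
from urllib.parse import urlencode

# Hypothetical values for illustration only; use your app's registered settings.
TOKEN_ENDPOINT = "https://login.microsoftonline.com/{tenant}/oauth2/token"

def build_code_redemption_body(code, client_id, redirect_uri, resource):
    """Form fields for redeeming an authorization code at the v1.0 token endpoint."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,                  # authorization code from the redirect
        "client_id": client_id,        # Application ID from the Azure portal
        "redirect_uri": redirect_uri,  # must match the registered redirect URI
        "resource": resource,          # application ID URI of the web API
    })

body = build_code_redemption_body(
    code="AQABAAIA...",  # truncated placeholder, not a real code
    client_id="00000000-0000-0000-0000-000000000000",
    redirect_uri="urn:ietf:wg:oauth:2.0:oob",
    resource="https://contoso.onmicrosoft.com/my-web-api",
)
# POST `body` as application/x-www-form-urlencoded to TOKEN_ENDPOINT
print(body)
```

A successful response to this POST carries the JWT access token and refresh token described in step 4.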
## App registration

To register an application with the Azure AD v1.0 endpoint, see [Register an app](../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).

* Single tenant - Both the native application and the web API must be registered in the same directory in Azure AD. The web API can be configured to expose a set of permissions, which are used to limit the native application's access to its resources. The client application then selects the desired permissions from the "Permissions to Other Applications" drop-down menu in the Azure portal.
* Multi-tenant - First, the native application is only ever registered in the developer's or publisher's directory. Second, the native application is configured to indicate the permissions it requires to be functional. This list of required permissions is shown in a dialog when a user or administrator in the destination directory gives consent to the application, which makes it available to their organization. Some applications require only user-level permissions, which any user in the organization can consent to. Other applications require administrator-level permissions, which a user in the organization cannot consent to. Only a directory administrator can give consent to applications that require this level of permissions. When the user or administrator consents, only the web API is registered in their directory.

## Token expiration

When the native application uses its authorization code to get a JWT access token, it also receives a JWT refresh token. When the access token expires, the refresh token can be used to re-authenticate the user without requiring them to sign in again. This refresh token is then used to authenticate the user, which results in a new access token and refresh token.
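The refresh exchange described above can be sketched as follows; this is an illustration with placeholder values (ADAL performs this exchange automatically when a cached access token expires), not a drop-in implementation.

```python
from urllib.parse import urlencode

# Sketch of the v1.0 refresh-token grant; all values below are placeholders.
def build_refresh_body(refresh_token, client_id, resource):
    """Form fields for redeeming a refresh token at the v1.0 token endpoint."""
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        # Because the refresh token is multi-resource, this can name a different
        # web API than the one the original access token was issued for.
        "resource": resource,
    })

body = build_refresh_body(
    refresh_token="0.AAAA...",  # truncated placeholder, not a real token
    client_id="00000000-0000-0000-0000-000000000000",
    resource="https://contoso.onmicrosoft.com/other-web-api",
)
print(body)  # POST to https://login.microsoftonline.com/{tenant}/oauth2/token
```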
--## Next steps --- Learn more about other [Application types and scenarios](app-types.md)-- Learn about the Azure AD [authentication basics](v1-authentication-scenarios.md) |
active-directory | Sample V1 Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/sample-v1-code.md | - Title: Code samples for Azure Active Directory v1.0 -description: Provides an index of Azure Active Directory (v1.0 endpoint) code samples, organized by scenario. ------ Previously updated : 07/15/2019-------# Azure Active Directory code samples (v1.0 endpoint)

You can use Azure Active Directory (Azure AD) to add authentication and authorization to your web applications and web APIs.

This section provides links to samples you can use to learn more about the Azure AD v1.0 endpoint. These samples show you how each scenario is implemented, along with code snippets that you can use in your applications. On the code sample page, you'll find detailed readme topics that help with requirements, installation, and setup. And the code is commented to help you understand the critical sections.

> [!NOTE]
> If you are interested in Microsoft Entra V2 code samples, see [v2.0 code samples by scenario](../develop/sample-v2-code.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).

> [!WARNING]
> Support for Active Directory Authentication Library (ADAL) will end in December 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](../develop/msal-migration.md).

To understand the basic scenario for each sample type, see [Authentication scenarios for Azure AD](v1-authentication-scenarios.md).

You can also contribute to our samples on GitHub. To learn how, see [Azure Active Directory samples and documentation](https://github.com/Azure-Samples?page=3&query=active-directory).
## Single-page applications

These samples show how to write a single-page application secured with Azure AD.

| Platform | Calls its own API | Calls another Web API |
|--|--|--|
| ![This image shows the JavaScript logo](media/sample-v2-code/logo-js.png) | [javascript-singlepageapp](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-dotnet-webapi) | |
| ![This image shows the Angular JS logo](media/sample-v2-code/logo-angular.png) | [angularjs-singlepageapp](https://github.com/Azure-Samples/active-directory-angularjs-singlepageapp) | [angularjs-singlepageapp-cors](https://github.com/Azure-Samples/active-directory-angularjs-singlepageapp-dotnet-webapi) |

## Web Applications

### Web Applications signing in users, calling Microsoft Graph, or a Web API with the user's identity

The following samples illustrate web applications that sign in users. Some of these applications also call the Microsoft Graph or your own Web API in the name of the signed-in user.

| Platform | Only signs in users | Calls Microsoft Graph | Calls another ASP.NET or ASP.NET Core 2.0 Web API |
|--|--|--|--|
| ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo-netcore.png)</p>ASP.NET Core 2.0 | [dotnet-webapp-openidconnect-aspnetcore](https://github.com/Azure-Samples/active-directory-dotnet-webapp-openidconnect-aspnetcore) | [webapp-webapi-multitenant-openidconnect-aspnetcore](https://github.com/Azure-Samples/active-directory-webapp-webapi-multitenant-openidconnect-aspnetcore/) </p>(Azure AD Graph) | [dotnet-webapp-webapi-openidconnect-aspnetcore](https://github.com/Azure-Samples/active-directory-dotnet-webapp-webapi-openidconnect-aspnetcore) |
| ![This image shows the ASP.NET Framework logo](media/sample-v2-code/logo-netframework.png)</p> ASP.NET 4.5 | </p> [webapp-WSFederation-dotNet](https://github.com/Azure-Samples/active-directory-dotnet-webapp-wsfederation) </p>
[dotnet-webapp-webapi-oauth2-useridentity](https://github.com/Azure-Samples/active-directory-dotnet-webapp-webapi-oauth2-useridentity) | [dotnet-webapp-multitenant-openidconnect](https://github.com/Azure-Samples/active-directory-dotnet-webapp-multitenant-openidconnect)</p> (Azure AD Graph) | -| ![This image shows the Python logo](media/sample-v2-code/logo-python.png) | | [python-webapp-graphapi](https://github.com/Azure-Samples/active-directory-python-webapp-graphapi) | -| ![This image shows the Java log](media/sample-v2-code/logo-java.png) | | [java-webapp-openidconnect](https://github.com/azure-samples/active-directory-java-webapp-openidconnect) | -| ![This image shows the PHP logo](media/sample-v2-code/logo-php.png) | | [php-graphapi-web](https://github.com/Azure-Samples/active-directory-php-graphapi-web) | --### Web applications demonstrating role-based access control (authorization) --The following samples show how to implement role-based access control (RBAC). RBAC is used to restrict the permissions of certain features in a web application to certain users. The users are authorized depending on whether they belong to an **Azure AD group** or have a given application **role**. --| Platform | Sample | Description | -|--|--|--| -| ![This image shows the ASP.NET Framework logo](media/sample-v2-code/logo-netframework.png)</p> ASP.NET 4.5 | [dotnet-webapp-groupclaims](https://github.com/Azure-Samples/active-directory-dotnet-webapp-groupclaims) </p> [dotnet-webapp-roleclaims](https://github.com/Azure-Samples/active-directory-dotnet-webapp-roleclaims) | A .NET 4.5 MVC web app that uses Azure AD **roles** for authorization | --## Desktop and mobile public client applications calling Microsoft Graph or a Web API --The following samples illustrate public client applications (desktop/mobile applications) that access the Microsoft Graph or a Web API in the name of a user. 
Depending on the devices and platforms, applications can sign in users in different ways (flows/grants): --- Interactively-- Silently (with integrated Windows authentication on Windows, or username/password)-- By delegating the interactive sign-in to another device (device code flow used on devices which don't provide web controls)--| Client application | Platform | Flow/Grant | Calls Microsoft Graph | Calls an ASP.NET or ASP.NET Core 2.x Web API | -| | -- | - | -- | - | -| Desktop (WPF) | ![This image shows the .NET/C# logo](media/sample-v2-code/logo-net.png) | Interactive | Part of [`dotnet-native-multitarget`](https://github.com/azure-samples/active-directory-dotnet-native-multitarget) | [`dotnet-native-desktop`](https://github.com/Azure-Samples/active-directory-dotnet-native-desktop) </p> [`dotnet-native-aspnetcore`](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore/)</p> [`dotnet-webapi-manual-jwt-validation`](https://github.com/azure-samples/active-directory-dotnet-webapi-manual-jwt-validation) | -| Mobile (UWP) | ![This image shows the .NET/C#/UWP](media/sample-v2-code/logo-windows.png) | Interactive | [`dotnet-native-uwp-wam`](https://github.com/azure-samples/active-directory-dotnet-native-uwp-wam) </p> This sample uses [WAM](/windows/uwp/security/web-account-manager), not [ADAL.NET](https://aka.ms/adalnet) | [`dotnet-windows-store`](https://github.com/Azure-Samples/active-directory-dotnet-windows-store) (UWP application using ADAL.NET to call a single tenant Web API) </p> [`dotnet-webapi-multitenant-windows-store`](https://github.com/Azure-Samples/active-directory-dotnet-webapi-multitenant-windows-store) (UWP application using ADAL.NET to call a multi-tenant Web API) | -| Mobile (Android, iOS, UWP) | ![This image shows the .NET/C# (Xamarin)](media/sample-v2-code/logo-xamarin.png) | Interactive | [`dotnet-native-multitarget`](https://github.com/azure-samples/active-directory-dotnet-native-multitarget) | -| Mobile (Android) | ![This 
image shows the Android logo](media/sample-v2-code/logo-android.png) | Interactive | [`android`](https://github.com/Azure-Samples/active-directory-android) | -| Mobile (iOS) | ![This image shows iOS / Objective C or Swift](media/sample-v2-code/logo-ios.png) | Interactive | [`nativeClient-iOS`](https://github.com/azureadquickstarts/nativeclient-ios) | -| Desktop (Console) | ![This image shows the .NET/C# logo](media/sample-v2-code/logo-net.png) | Username / Password </p> Integrated Windows authentication | | [`dotnet-native-headless`](https://github.com/azure-samples/active-directory-dotnet-native-headless) | -| Desktop (Console) | ![This image shows the Java logo](media/sample-v2-code/logo-java.png) | Username / Password | | [`java-native-headless`](https://github.com/Azure-Samples/active-directory-java-native-headless) | -| Desktop (Console) | ![This image shows the .NET Core/C# logo](media/sample-v2-code/logo-netcore.png) | Device code flow | | [`dotnet-deviceprofile`](https://github.com/Azure-Samples/active-directory-dotnet-deviceprofile) | --## Daemon applications (accessing web APIs with the application's identity) --The following samples show desktop or web applications that access the Microsoft Graph or a web API with no user (with the application identity). 
Client application | Platform | Flow/Grant | Calls an ASP.NET or ASP.NET Core 2.0 Web API
 - | -- | - | --
Daemon app (Console) | ![This image shows the .NET Framework logo](media/sample-v2-code/logo-netframework.png) | Client Credentials with app secret or certificate | [dotnet-daemon](https://github.com/azure-samples/active-directory-dotnet-daemon)</p> [dotnet-daemon-certificate-credential](https://github.com/azure-samples/active-directory-dotnet-daemon-certificate-credential)
Daemon app (Console) | ![This image shows the .NET Core logo](media/sample-v2-code/logo-netcore.png) | Client Credentials with certificate | [dotnetcore-daemon-certificate-credential](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-certificate-credential)
ASP.NET Web App | ![This image shows the .NET Framework logo](media/sample-v2-code/logo-netframework.png) | Client credentials | [dotnet-webapp-webapi-oauth2-appidentity](https://github.com/Azure-Samples/active-directory-dotnet-webapp-webapi-oauth2-appidentity)

## Web APIs

<a name='web-api-protected-by-azure-active-directory'></a>

### Web API protected by Azure Active Directory

The following sample shows how to protect a Node.js web API with Azure AD.

In the previous sections of this article, you can also find other samples illustrating a client application **calling** an ASP.NET or ASP.NET Core **Web API**. Those samples are not mentioned again in this section, but you will find them in the last column of the tables above or below.

| Platform | Sample |
|--|-|
| ![This image shows the Node.js logo](media/sample-v2-code/logo-nodejs.png) | [node-webapi](https://github.com/Azure-Samples/active-directory-node-webapi) |

### Web API calling Microsoft Graph or another Web API

The following samples demonstrate a web API that calls another web API. The second sample shows how to handle Conditional Access.
--| Platform | Calls Microsoft Graph | Calls another ASP.NET or ASP.NET Core 2.0 Web API | -| -- | | - | -| ![This image shows the ASP.NET Framework logo](media/sample-v2-code/logo-netframework.png)</p> ASP.NET 4.5 | [dotnet-webapi-onbehalfof](https://github.com/azure-samples/active-directory-dotnet-webapi-onbehalfof) </p> [dotnet-webapi-onbehalfof-ca](https://github.com/azure-samples/active-directory-dotnet-webapi-onbehalfof-ca) | [dotnet-webapi-onbehalfof](https://github.com/azure-samples/active-directory-dotnet-webapi-onbehalfof) </p> [dotnet-webapi-onbehalfof-ca](https://github.com/azure-samples/active-directory-dotnet-webapi-onbehalfof-ca) | --## Other Microsoft Graph samples --For samples and tutorials that demonstrate different usage patterns for the Microsoft Graph API, including authentication with Azure AD, see [Microsoft Graph Community Samples & Tutorials](https://github.com/microsoftgraph/msgraph-community-samples). --## See also --- [Azure Active Directory Developer's Guide](v1-overview.md)-- [Azure Active Directory Authentication libraries](active-directory-authentication-libraries.md)-- [Microsoft Graph API conceptual and reference](/graph/use-the-api) |
active-directory | Service To Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/service-to-service.md | - Title: Service-to-service apps in Azure Active Directory -description: Describes what service-to-service apps are and the basics on protocol flow, registration, and token expiration for this app type. -------- Previously updated : 11/20/2019-------# Service-to-service apps

A service-to-service application can be a daemon or server application that needs to get resources from a web API. There are two sub-scenarios that apply to this section:

- A daemon that needs to call a web API, built on the OAuth 2.0 client credentials grant type

  In this scenario, it's important to understand a few things. First, user interaction is not possible with a daemon application, which requires the application to have its own identity. An example of a daemon application is a batch job, or an operating system service running in the background. This type of application requests an access token by using its application identity and presenting its Application ID, credential (password or certificate), and application ID URI to Azure AD. After successful authentication, the daemon receives an access token from Azure AD, which is then used to call the web API.

- A server application (such as a web API) that needs to call a web API, built on the OAuth 2.0 On-Behalf-Of draft specification

  In this scenario, imagine that a user has authenticated on a native application, and this native application needs to call a web API. Azure AD issues a JWT access token to call the web API. If the web API needs to call another downstream web API, it can use the on-behalf-of flow to delegate the user's identity and authenticate to the second-tier web API.
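The first sub-scenario, the client credentials grant, can be sketched as follows. The tenant, Application ID, secret, and application ID URI are all placeholders for illustration; a real daemon would typically let a library such as ADAL perform this exchange.

```python
from urllib.parse import urlencode

# Sketch of the v1.0 client credentials grant used by a daemon (no user involved).
def build_client_credentials_body(client_id, client_secret, resource):
    """Form fields for requesting an app-only token at the v1.0 token endpoint."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,          # Application ID
        "client_secret": client_secret,  # a certificate assertion can be used instead
        "resource": resource,            # application ID URI of the target web API
    })

body = build_client_credentials_body(
    client_id="00000000-0000-0000-0000-000000000000",
    client_secret="placeholder-secret",
    resource="https://contoso.onmicrosoft.com/my-web-api",
)
print(body)  # POST to https://login.microsoftonline.com/{tenant}/oauth2/token
```

Because no user is present, the access token returned for this request represents the application's own identity.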
--## Diagram --![Daemon or Server Application to Web API diagram](./media/authentication-scenarios/daemon-server-app-to-web-api.png) --## Protocol flow --### Application identity with OAuth 2.0 client credentials grant --1. First, the server application needs to authenticate with Azure AD as itself, without any human interaction such as an interactive sign-on dialog. It makes a request to Azure AD's token endpoint, providing the credential, Application ID, and application ID URI. -1. Azure AD authenticates the application and returns a JWT access token that is used to call the web API. -1. Over HTTPS, the web application uses the returned JWT access token to add the JWT string with a "Bearer" designation in the Authorization header of the request to the web API. The web API then validates the JWT token, and if validation is successful, returns the desired resource. --### Delegated user identity with OAuth 2.0 On-Behalf-Of Draft Specification --The flow discussed below assumes that a user has been authenticated on another application (such as a native application), and their user identity has been used to acquire an access token to the first-tier web API. --1. The native application sends the access token to the first-tier web API. -1. The first-tier web API sends a request to Azure AD's token endpoint, providing its Application ID and credentials, as well as the user's access token. In addition, the request is sent with an on_behalf_of parameter that indicates the web API is requesting new tokens to call a downstream web API on behalf of the original user. -1. Azure AD verifies that the first-tier web API has permissions to access the second-tier web API and validates the request, returning a JWT access token and a JWT refresh token to the first-tier web API. -1. Over HTTPS, the first-tier web API then calls the second-tier web API by appending the token string in the Authorization header in the request. 
The first-tier web API can continue to call the second-tier web API as long as the access token and refresh tokens are valid. --## Code samples --See the code samples for Daemon or Server Application to Web API scenarios: [Server or Daemon Application to Web API](sample-v1-code.md#daemon-applications-accessing-web-apis-with-the-applications-identity) --## App registration --* Single tenant - For both the application identity and delegated user identity cases, the daemon or server application must be registered in the same directory in Azure AD. The web API can be configured to expose a set of permissions, which are used to limit the daemon or server's access to its resources. If a delegated user identity type is being used, the server application needs to select the desired permissions. In the **API Permission** page for the application registration, after you've selected **Add a permission** and chosen the API family, choose **Delegated permissions**, and then select your permissions. This step is not required if the application identity type is being used. -* Multi-tenant - First, the daemon or server application is configured to indicate the permissions it requires to be functional. This list of required permissions is shown in a dialog when a user or administrator in the destination directory gives consent to the application, which makes it available to their organization. Some applications only require user-level permissions, which any user in the organization can consent to. Other applications require administrator-level permissions, which a user in the organization cannot consent to. Only a directory administrator can give consent to applications that require this level of permissions. When the user or administrator consents, both of the web APIs are registered in their directory. --## Token expiration --When the first application uses its authorization code to get a JWT access token, it also receives a JWT refresh token. 
When the access token expires, the refresh token can be used to re-authenticate the user without prompting for credentials. This refresh token is then used to authenticate the user, which results in a new access token and refresh token. --## Next steps --- Learn more about other [Application types and scenarios](app-types.md)-- Learn about the Azure AD [authentication basics](v1-authentication-scenarios.md) |
active-directory | Single Page Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/single-page-application.md | - Title: Single-page applications in Azure Active Directory -description: Describes what single-page applications (SPAs) are and the basics on protocol flow, registration, and token expiration for this app type. -------- Previously updated : 09/24/2018-------# Single-page applications

Single-page applications (SPAs) are typically structured as a JavaScript presentation layer (front end) that runs in the browser, and a web API back end that runs on a server and implements the application's business logic. To learn more about the implicit authorization grant, and to help you decide whether it's right for your application scenario, see [Understanding the OAuth2 implicit grant flow in Azure Active Directory](v1-oauth2-implicit-grant-flow.md).

In this scenario, when the user signs in, the JavaScript front end uses [Active Directory Authentication Library for JavaScript (ADAL.JS)](https://github.com/AzureAD/azure-activedirectory-library-for-js) and the implicit authorization grant to obtain an ID token (id_token) from Azure AD. The token is cached and the client attaches it to the request as the bearer token when making calls to its Web API back end, which is secured using the OWIN middleware.

## Diagram

![Single-page application diagram](./media/authentication-scenarios/single-page-app.png)

## Protocol flow

1. The user navigates to the web application.
1. The application returns the JavaScript front end (presentation layer) to the browser.
1. The user initiates sign-in, for example by clicking a sign-in link. The browser sends a GET to the Azure AD authorization endpoint to request an ID token. This request includes the application ID and reply URL in the query parameters.
1. Azure AD validates the Reply URL against the registered Reply URL that was configured in the Azure portal.
1. 
The user signs in on the sign-in page. -1. If authentication is successful, Azure AD creates an ID token and returns it as a URL fragment (#) to the application's Reply URL. For a production application, this Reply URL should be HTTPS. The returned token includes claims about the user and Azure AD that are required by the application to validate the token. -1. The JavaScript client code running in the browser extracts the token from the response to use in securing calls to the application's web API back end. -1. The browser calls the application's web API back end with the ID token in the authorization header. The Azure AD authentication service issues an ID token that can be used as a bearer token if the resource is the same as the client ID (true in this case, because the web API is the app's own back end). --## Code samples --See the [code samples for single-page application scenarios](sample-v1-code.md#single-page-applications). New samples are added frequently, so be sure to check back often. --## App registration --* Single tenant - If you are building an application just for your organization, it must be registered in your company's directory by using the Azure portal. -* Multi-tenant - If you are building an application that can be used by users outside your organization, it must be registered in your company's directory, but also must be registered in each organization's directory that will be using the application. To make your application available in their directory, you can include a sign-up process for your customers that enables them to consent to your application. When they sign up for your application, they will be presented with a dialog that shows the permissions the application requires, and then the option to consent. Depending on the required permissions, an administrator in the other organization may be required to give consent. When the user or administrator consents, the application is registered in their directory. 
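The fragment-based token delivery described in the protocol flow above (steps 6 and 7) can be sketched outside the browser as well. The following is an illustrative Python sketch of the parsing step only; the article's actual client code is ADAL.js, and the URL and token values here are hypothetical:

```python
from urllib.parse import parse_qs, urlparse

def extract_id_token(redirect_url: str):
    """Pull the id_token out of the URL fragment (#...) that Azure AD
    appends to the reply URL in an implicit grant response."""
    fragment = urlparse(redirect_url).fragment
    params = parse_qs(fragment)
    values = params.get("id_token")
    return values[0] if values else None

# Hypothetical reply URL after a successful sign-in (step 6)
url = "https://app.contoso.com/spa#id_token=eyJhbGciOi.sample.token&state=12345&session_state=abc"
assert extract_id_token(url) == "eyJhbGciOi.sample.token"
```

Because the token travels in the fragment rather than the query string, it is available to client-side script but is never sent to the server in the redirect.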
--After registering the application, it must be configured to use the OAuth 2.0 implicit grant protocol. By default, this protocol is disabled for applications. To enable the OAuth2 implicit grant protocol for your application, edit its application manifest from the Azure portal and set the "oauth2AllowImplicitFlow" value to true. For more info, see [Application manifest](../develop/reference-app-manifest.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --## Token expiration --Using ADAL.js helps with: --* Refreshing an expired token -* Requesting an access token to call a web API resource --After a successful authentication, Azure AD writes a cookie in the user's browser to establish a session. Note that the session exists between the user and Azure AD (not between the user and the web application). When a token expires, ADAL.js uses this session to silently obtain another token. ADAL.js uses a hidden iframe to send and receive the request using the OAuth implicit grant protocol. ADAL.js can also use this same mechanism to silently obtain access tokens for other web API resources the application calls, as long as these resources support cross-origin resource sharing (CORS), are registered in the user's directory, and any required consent was given by the user during sign-in. --## Next steps --* Learn more about other [Application types and scenarios](app-types.md) -* Learn about the Azure AD [authentication basics](v1-authentication-scenarios.md) |
active-directory | V1 Authentication Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-authentication-scenarios.md | - Title: Azure AD for developers (v1.0) -description: Learn authentication basics for Azure AD for developers (v1.0) such as the app model, API, provisioning, and the most common authentication scenarios. -------- Previously updated : 10/14/2019-----#Customer intent: As an application developer, I want to learn about the basic authentication concepts in Azure AD for developers (v1.0), including the app model, API, provisioning, and supported scenarios, so I understand what I need to do when I create apps that integrate Microsoft sign-in. ---# What is authentication? ---*Authentication* is the act of challenging a party for legitimate credentials, providing the basis for creation of a security principal to be used for identity and access control. In simpler terms, it's the process of proving you are who you say you are. Authentication is sometimes shortened to AuthN. --*Authorization* is the act of granting an authenticated security principal permission to do something. It specifies what data you're allowed to access and what you can do with it. Authorization is sometimes shortened to AuthZ. --Azure Active Directory for developers (v1.0) (Azure AD) simplifies authentication for application developers by providing identity as a service, with support for industry-standard protocols such as OAuth 2.0 and OpenID Connect, as well as open-source libraries for different platforms to help you start coding quickly. --There are two primary use cases in the Azure AD programming model: --* During an OAuth 2.0 authorization grant flow - when the resource owner grants authorization to the client application, allowing the client to access the resource owner's resources. 
-* During resource access by the client - as implemented by the resource server, using the claim values present in the access token to make access control decisions. --## Authentication basics in Azure AD --Consider the most basic scenario where identity is required: a user in a web browser needs to authenticate to a web application. The following diagram shows this scenario: --![Overview of sign-on to web application](./media/v1-authentication-scenarios/auth-basics-microsoft-identity-platform.svg) --Here's what you need to know about the various components shown in the diagram: --* Azure AD is the identity provider. The identity provider is responsible for verifying the identity of users and applications that exist in an organization's directory, and issues security tokens upon successful authentication of those users and applications. -* An application that wants to outsource authentication to Azure AD must be registered in Azure Active Directory (Azure AD). Azure AD registers and uniquely identifies the app in the directory. -* Developers can use the open-source Azure AD authentication libraries to make authentication easy by handling the protocol details for you. For more info, see Microsoft identity platform [v2.0 authentication libraries](../develop/reference-v2-libraries.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) and [v1.0 authentication libraries](active-directory-authentication-libraries.md). -* Once a user has been authenticated, the application must validate the user's security token to ensure that authentication was successful. You can find quickstarts, tutorials, and code samples in a variety of languages and frameworks that show what the application must do. - * To quickly build an app and add functionality like getting tokens, refreshing tokens, signing in a user, displaying some user info, and more, see the **Quickstarts** section of the documentation. 
- * To get in-depth, scenario-based procedures for top auth developer tasks like obtaining access tokens and using them in calls to the Microsoft Graph API and other APIs, implementing sign-in with Microsoft in a traditional web browser-based app using OpenID Connect, and more, see the **Tutorials** section of the documentation. - * To download code samples, go to [GitHub](https://github.com/Azure-Samples?q=active-directory). -* The flow of requests and responses for the authentication process is determined by the authentication protocol that you use, such as OAuth 2.0, OpenID Connect, WS-Federation, or SAML 2.0. For more info about protocols, see the **Concepts > Authentication protocol** section of the documentation. --In the example scenario above, you can classify the apps according to these two roles: --* Apps that need to securely access resources -* Apps that play the role of the resource itself --### How each flow emits tokens and codes --Depending on how your client is built, it can use one (or several) of the authentication flows supported by Azure AD. These flows can produce a variety of tokens (id_tokens, refresh tokens, access tokens) as well as authorization codes, and some require other tokens as input. 
This chart provides an overview: --|Flow | Requires | id_token | access token | refresh token | authorization code | -|--|--|--|--|--|--| -|[Authorization code flow](v1-protocols-oauth-code.md) | | x | x | x | x| -|[Implicit flow](v1-oauth2-implicit-grant-flow.md) | | x | x | | | -|[Hybrid OIDC flow](v1-protocols-openid-connect-code.md#get-access-tokens)| | x | | | x | -|[Refresh token redemption](v1-protocols-oauth-code.md#refreshing-the-access-tokens) | refresh token | x | x | x| | -|[On-behalf-of flow](v1-oauth2-on-behalf-of-flow.md) | access token| x| x| x| | -|[Client credentials](v1-oauth2-client-creds-grant-flow.md) | | | x (app-only)| | | --Tokens issued via the implicit mode have a length limitation due to being passed back to the browser via the URL (where `response_mode` is `query` or `fragment`). Some browsers have a limit on the size of the URL that can be put in the browser bar and fail when it is too long. Thus, these tokens do not have `groups` or `wids` claims. --Now that you have an overview of the basics, read on to understand the identity app model and API, how provisioning works in Azure AD, and links to detailed info about the common scenarios that Azure AD supports.
--* **Handle user consent during token request time and facilitate the dynamic provisioning of apps across tenants** - Here, Azure AD: -- * Enables users and administrators to dynamically grant or deny consent for the app to access resources on their behalf. - * Enables administrators to ultimately decide what apps are allowed to do, which users can use specific apps, and how directory resources are accessed. --In Azure AD, an **application object** describes an application as an abstract entity. Developers work with applications. At deployment time, Azure AD uses a given application object as a blueprint to create a **service principal**, which represents a concrete instance of an application within a directory or tenant. It's the service principal that defines what the app can actually do in a specific target directory, who can use it, what resources it has access to, and so on. Azure AD creates a service principal from an application object through **consent**. --The following diagram shows a simplified Azure AD provisioning flow driven by consent. In it, two tenants exist (A and B), where tenant A owns the application, and tenant B is instantiating the application via a service principal. --![Simplified provisioning flow driven by consent](./media/v1-authentication-scenarios/simplified-provisioning-flow-consent-driven.svg) --In this provisioning flow: --1. A user from tenant B attempts to sign in to the app, and the app requests a token from the authorization endpoint. -1. The user credentials are acquired and verified for authentication. -1. The user is prompted to provide consent for the app to gain access to tenant B. -1. Azure AD uses the application object in tenant A as a blueprint for creating a service principal in tenant B. -1. The user receives the requested token. --You can repeat this process as many times as you want for other tenants (C, D, and so on). Tenant A retains the blueprint for the app (application object). 
Users and admins of all the other tenants where the app is given consent retain control over what the application is allowed to do through the corresponding service principal object in each tenant. For more information, see [Application and service principal objects in Microsoft identity platform](../develop/app-objects-and-service-principals.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --## Claims in Azure AD security tokens --Security tokens (access and ID tokens) issued by Azure AD contain claims, or assertions of information about the subject that has been authenticated. Applications can use claims for various tasks, including: --* Validate the token -* Identify the subject's directory tenant -* Display user information -* Determine the subject's authorization --The claims present in any given security token are dependent upon the type of token, the type of credential used to authenticate the user, and the application configuration. --A brief description of each type of claim emitted by Azure AD is provided in the table below. For more detailed information, see the [access tokens](../develop/access-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) and [ID tokens](../develop/id-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) issued by Azure AD. --| Claim | Description | -| --- | --- | -| Application ID | Identifies the application that is using the token. | -| Audience | Identifies the recipient resource the token is intended for. | -| Application Authentication Context Class Reference | Indicates how the client was authenticated (public client vs. confidential client). | -| Authentication Instant | Records the date and time when the authentication occurred. 
| -| Authentication Method | Indicates how the subject of the token was authenticated (password, certificate, etc.). | -| First Name | Provides the given name of the user as set in Azure AD. | -| Groups | Contains object IDs of Azure AD groups that the user is a member of. | -| Identity Provider | Records the identity provider that authenticated the subject of the token. | -| Issued At | Records the time at which the token was issued, often used for token freshness. | -| Issuer | Identifies the STS that emitted the token as well as the Azure AD tenant. | -| Last Name | Provides the surname of the user as set in Azure AD. | -| Name | Provides a human-readable value that identifies the subject of the token. | -| Object ID | Contains an immutable, unique identifier of the subject in Azure AD. | -| Roles | Contains friendly names of Azure AD Application Roles that the user has been granted. | -| Scope | Indicates the permissions granted to the client application. | -| Subject | Indicates the principal about which the token asserts information. | -| Tenant ID | Contains an immutable, unique identifier of the directory tenant that issued the token. | -| Token Lifetime | Defines the time interval within which a token is valid. | -| User Principal Name | Contains the user principal name of the subject. | -| Version | Contains the version number of the token. | --## Next steps --* Learn about the [application types and scenarios supported in Microsoft identity platform](app-types.md) |
active-directory | V1 Oauth2 Client Creds Grant Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md | - Title: Azure AD Service to Service Auth using OAuth2.0 -description: This article describes how to use HTTP messages to implement service to service authentication using the OAuth2.0 client credentials grant flow. -------- Previously updated : 02/08/2017-------# Service to service calls using client credentials (shared secret or certificate) ---The OAuth 2.0 Client Credentials Grant Flow permits a web service (*confidential client*) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. In this scenario, the client is typically a middle-tier web service, a daemon service, or a web site. For a higher level of assurance, Azure AD also allows the calling service to use a certificate (instead of a shared secret) as a credential. --## Client credentials grant flow diagram -The following diagram explains how the client credentials grant flow works in Azure Active Directory (Azure AD). --![OAuth2.0 Client Credentials Grant Flow](./media/v1-oauth2-client-creds-grant-flow/active-directory-protocols-oauth-client-credentials-grant-flow.jpg) --1. The client application authenticates to the Azure AD token issuance endpoint and requests an access token. -2. The Azure AD token issuance endpoint issues the access token. -3. The access token is used to authenticate to the secured resource. -4. Data from the secured resource is returned to the client application. --## Register the Services in Azure AD -Register both the calling service and the receiving service in Azure Active Directory (Azure AD). For detailed instructions, see [Integrating applications with Azure Active Directory](../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). 
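Steps 1 and 2 of the flow above amount to a single form-encoded POST to the tenant's token endpoint. The following is a minimal Python sketch of assembling that request (not sending it); the tenant ID, client ID, and secret are hypothetical placeholders:

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str,
                        client_secret: str, resource: str):
    """Build the endpoint URL and form-encoded body for a
    client credentials grant token request (step 1 of the flow)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    # urlencode percent-encodes the secret and resource, as the
    # parameter table below requires for client_secret
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
    })
    return url, body

# Hypothetical values; the resulting body would be POSTed to `url`
url, body = build_token_request(
    "contoso.onmicrosoft.com",
    "625bc9f6-3bf6-4b6d-94ba-e97cf07a22de",
    "qkDwDJlDfig2IpeuUZYKH1Wb8q1V0ju6sILxQQqhJ+s=",
    "https://service.contoso.com/",
)
assert "grant_type=client_credentials" in body
assert "%2B" in body  # the '+' in the secret is URL-encoded
```

Using a form-encoding helper rather than string concatenation avoids the easy-to-miss requirement that the shared secret be URL-encoded.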
--## Request an Access Token -To request an access token, use an HTTP POST to the tenant-specific Azure AD endpoint. --``` -https://login.microsoftonline.com/<tenant id>/oauth2/token -``` --## Service-to-service access token request -There are two cases depending on whether the client application chooses to be secured by a shared secret or a certificate. --### First case: Access token request with a shared secret -When using a shared secret, a service-to-service access token request contains the following parameters: --| Parameter | Type | Description | -| --- | --- | --- | -| grant_type |required |Specifies the requested grant type. In a Client Credentials Grant flow, the value must be **client_credentials**. | -| client_id |required |Specifies the Azure AD client ID of the calling web service. To find the calling application's client ID, in the [Azure portal](https://portal.azure.com), click **Azure Active Directory**, click **App registrations**, click the application. The client_id is the *Application ID*. | -| client_secret |required |Enter a key registered for the calling web service or daemon application in Azure AD. To create a key, in the Azure portal, click **Azure Active Directory**, click **App registrations**, click the application, click **Settings**, click **Keys**, and add a key. URL-encode this secret when providing it. | -| resource |required |Enter the App ID URI of the receiving web service. To find the App ID URI, in the Azure portal, click **Azure Active Directory**, click **App registrations**, click the service application, and then click **Settings** and **Properties**. | --#### Example -The following HTTP POST requests an [access token](../develop/access-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) for the `https://service.contoso.com/` web service. The `client_id` identifies the web service that requests the access token. 
--``` -POST /contoso.com/oauth2/token HTTP/1.1 -Host: login.microsoftonline.com -Content-Type: application/x-www-form-urlencoded --grant_type=client_credentials&client_id=625bc9f6-3bf6-4b6d-94ba-e97cf07a22de&client_secret=qkDwDJlDfig2IpeuUZYKH1Wb8q1V0ju6sILxQQqhJ+s=&resource=https%3A%2F%2Fservice.contoso.com%2F -``` --### Second case: Access token request with a certificate -A service-to-service access token request with a certificate contains the following parameters: --| Parameter | Type | Description | -| --- | --- | --- | -| grant_type |required |Specifies the requested grant type. In a Client Credentials Grant flow, the value must be **client_credentials**. | -| client_id |required |Specifies the Azure AD client ID of the calling web service. To find the calling application's client ID, in the [Azure portal](https://portal.azure.com), click **Azure Active Directory**, click **App registrations**, click the application. The client_id is the *Application ID*. | -| client_assertion_type |required |The value must be `urn:ietf:params:oauth:client-assertion-type:jwt-bearer`. | -| client_assertion |required | An assertion (a JSON Web Token) that you need to create and sign with the certificate you registered as credentials for your application. Read about [certificate credentials](../develop/certificate-credentials.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) to learn how to register your certificate and the format of the assertion.| -| resource | required |Enter the App ID URI of the receiving web service. To find the App ID URI, in the Azure portal, click **Azure Active Directory**, click **App registrations**, click the service application, and then click **Settings** and **Properties**. | --Notice that the parameters are almost the same as in the shared secret case, except that the client_secret parameter is replaced by two parameters: client_assertion_type and client_assertion. 
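The shape of the client assertion can be sketched as follows. This is an illustrative, unsigned construction in Python: a real assertion must be signed (RS256) with the private key of the certificate registered for the app, typically via a JWT library, and all identifiers below are hypothetical placeholders.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT segments require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical placeholders
tenant_id = "contoso.onmicrosoft.com"
client_id = "97e0a5b7-d745-40b6-94fe-5f77d35c6e05"

header = {"alg": "RS256", "typ": "JWT", "x5t": "<certificate thumbprint>"}
now = int(time.time())
claims = {
    "aud": f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
    "iss": client_id,          # the client asserts its own identity
    "sub": client_id,
    "jti": str(uuid.uuid4()),  # unique identifier for this assertion
    "nbf": now,
    "exp": now + 600,          # keep the assertion short-lived
}
signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
# client_assertion = signing_input + "." + b64url(rs256_signature)
# where rs256_signature is produced with the registered certificate's private key

payload_seg = signing_input.split(".")[1]
decoded = json.loads(base64.urlsafe_b64decode(payload_seg + "=" * (-len(payload_seg) % 4)))
assert decoded["iss"] == decoded["sub"] == client_id
```

See the certificate credentials article linked in the table above for the authoritative claim set and signing requirements.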
--#### Example -The following HTTP POST requests an access token for the `https://service.contoso.com/` web service with a certificate. The `client_id` identifies the web service that requests the access token. --``` -POST /<tenant_id>/oauth2/token HTTP/1.1 -Host: login.microsoftonline.com -Content-Type: application/x-www-form-urlencoded --resource=https%3A%2F%2Fcontoso.onmicrosoft.com%2Ffc7664b4-cdd6-43e1-9365-c2e1c4e1b3bf&client_id=97e0a5b7-d745-40b6-94fe-5f77d35c6e05&client_assertion_type=urn%3Aietf%3Aparams%3Aoauth%3Aclient-assertion-type%3Ajwt-bearer&client_assertion=eyJhbGciOiJSUzI1NiIsIng1dCI6Imd4OHRHeXN5amNScUtqRlBuZDdSRnd2d1pJMCJ9.eyJ{a lot of characters here}M8U3bSUKKJDEg&grant_type=client_credentials -``` --### Service-to-Service Access Token Response --A successful response contains an OAuth 2.0 JSON body with the following parameters: --| Parameter | Description | -| --- | --- | -| access_token |The requested access token. The calling web service can use this token to authenticate to the receiving web service. | -| token_type |Indicates the token type value. The only type that Azure AD supports is **Bearer**. For more information about bearer tokens, see the [OAuth 2.0 Authorization Framework: Bearer Token Usage (RFC 6750)](https://www.rfc-editor.org/rfc/rfc6750.txt). | -| expires_in |How long the access token is valid (in seconds). | -| expires_on |The time when the access token expires. The date is represented as the number of seconds from 1970-01-01T0:0:0Z UTC until the expiration time. This value is used to determine the lifetime of cached tokens. | -| not_before |The time from which the access token becomes usable. The date is represented as the number of seconds from 1970-01-01T0:0:0Z UTC until time of validity for the token.| -| resource |The App ID URI of the receiving web service. | --#### Example of response -The following example shows a success response to a request for an access token to a web service. --``` -{ -"access_token":"eyJ0eXAiO ... 
0X2tnSQLEANnSPHY0gKcgw", -"token_type":"Bearer", -"expires_in":"3599", -"expires_on":"1388452167", -"resource":"https://service.contoso.com/" -} -``` -## Use the access token to access the secured resource --The service can use the acquired access token to make authenticated requests to the downstream web API by setting the token in the `Authorization` header. --### Example --``` -GET /me?api-version=2013-11-08 HTTP/1.1 -Host: graph.windows.net -Authorization: Bearer eyJ0eXAiO ... 0X2tnSQLEANnSPHY0gKcgw -``` --## See also -* [OAuth 2.0 in Azure AD](v1-protocols-oauth-code.md) -* [Sample in C# of the service to service call with a shared secret](https://github.com/Azure-Samples/active-directory-dotnet-daemon) -and [Sample in C# of the service to service call with a certificate](https://github.com/Azure-Samples/active-directory-dotnet-daemon-certificate-credential) |
active-directory | V1 Oauth2 Implicit Grant Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-oauth2-implicit-grant-flow.md | - Title: Understanding the OAuth2 implicit grant flow in Azure AD -description: Learn more about Azure Active Directory's implementation of the OAuth2 implicit grant flow, and whether it's right for your application. -------- Previously updated : 08/15/2019-------# Understanding the OAuth2 implicit grant flow in Azure Active Directory (AD) ---The OAuth2 implicit grant is notorious for being the grant with the longest list of security concerns in the OAuth2 specification. And yet, that is the approach implemented by ADAL JS and the one we recommend when writing SPA applications. What gives? It's all a matter of tradeoffs; as it turns out, the implicit grant is the best approach you can pursue for applications that consume a Web API via JavaScript from a browser. --## What is the OAuth2 implicit grant? --The quintessential [OAuth2 authorization code grant](https://tools.ietf.org/html/rfc6749#section-1.3.1) is the authorization grant that uses two separate endpoints. The authorization endpoint is used for the user interaction phase, which results in an authorization code. The token endpoint is then used by the client for exchanging the code for an access token, and often a refresh token as well. Web applications are required to present their own application credentials to the token endpoint, so that the authorization server can authenticate the client. --The [OAuth2 implicit grant](https://tools.ietf.org/html/rfc6749#section-1.3.2) is a variant of other authorization grants. It allows a client to obtain an access token (and id_token, when using [OpenId Connect](https://openid.net/specs/openid-connect-core-1_0.html)) directly from the authorization endpoint, without contacting the token endpoint or authenticating the client. 
This variant was designed for JavaScript-based applications running in a Web browser: in the original OAuth2 specification, tokens are returned in a URI fragment. That makes the token bits available to the JavaScript code in the client, but it guarantees they won't be included in redirects toward the server. In OAuth2 implicit grant, the authorization endpoint issues access tokens directly to the client using a redirect URI that was previously supplied. It also has the advantage of eliminating any requirements for cross origin calls, which are necessary if the JavaScript application is required to contact the token endpoint. --An important characteristic of the OAuth2 implicit grant is the fact that such flows never return refresh tokens to the client. The next section shows how this isn't necessary and would in fact be a security issue. --## Suitable scenarios for the OAuth2 implicit grant --The OAuth2 specification declares that the implicit grant has been devised to enable user-agent applications, that is to say, JavaScript applications executing within a browser. The defining characteristic of such applications is that JavaScript code is used for accessing server resources (typically a Web API) and for updating the application user experience accordingly. Think of applications like Gmail or Outlook Web Access: when you select a message from your inbox, only the message visualization panel changes to display the new selection, while the rest of the page remains unmodified. This characteristic is in contrast with traditional redirect-based Web apps, where every user interaction results in a full page postback and a full page rendering of the new server response. --Applications that take the JavaScript-based approach to its extreme are called single-page applications, or SPAs. The idea is that these applications only serve an initial HTML page and associated JavaScript, with all subsequent interactions being driven by Web API calls performed via JavaScript. 
However, hybrid approaches, where the application is mostly postback-driven but performs occasional JS calls, are not uncommon; the discussion about implicit flow usage is relevant for those as well. --Redirect-based applications typically secure their requests via cookies; however, that approach does not work as well for JavaScript applications. Cookies only work against the domain they have been generated for, while JavaScript calls might be directed toward other domains. In fact, that will frequently be the case: think of applications invoking Microsoft Graph API, Office API, Azure API, all residing outside the domain from where the application is served. A growing trend for JavaScript applications is to have no backend at all, relying 100% on third party Web APIs to implement their business function. --Currently, the preferred method of protecting calls to a Web API is to use the OAuth2 bearer token approach, where every call is accompanied by an OAuth2 access token. The Web API examines the incoming access token and, if it finds in it the necessary scopes, it grants access to the requested operation. The implicit flow provides a convenient mechanism for JavaScript applications to obtain access tokens for a Web API, offering numerous advantages with respect to cookies: --* Tokens can be reliably obtained without any need for cross origin calls: mandatory registration of the redirect URI to which tokens are returned guarantees that tokens are not displaced -* JavaScript applications can obtain as many access tokens as they need, for as many Web APIs as they target, with no restriction on domains -* HTML5 features like session or local storage grant full control over token caching and lifetime management, whereas cookie management is opaque to the app -* Access tokens aren't susceptible to Cross-site request forgery (CSRF) attacks --The implicit grant flow does not issue refresh tokens, mostly for security reasons. 
A refresh token isn't as narrowly scoped as access tokens, granting far more power and hence inflicting far more damage if it is leaked. In the implicit flow, tokens are delivered in the URL, hence the risk of interception is higher than in the authorization code grant. --However, a JavaScript application has another mechanism at its disposal for renewing access tokens without repeatedly prompting the user for credentials. The application can use a hidden iframe to perform new token requests against the authorization endpoint of Azure AD: as long as the browser still has an active session (read: has a session cookie) against the Azure AD domain, the authentication request can successfully occur without any need for user interaction. --This model grants the JavaScript application the ability to independently renew access tokens and even acquire new ones for a new API (provided that the user previously consented to them). This avoids the added burden of acquiring, maintaining, and protecting a high-value artifact such as a refresh token. The artifact that makes the silent renewal possible, the Azure AD session cookie, is managed outside of the application. Another advantage of this approach is that a user can sign out from Azure AD, using any of the applications signed into Azure AD, running in any of the browser tabs. This results in the deletion of the Azure AD session cookie, and the JavaScript application will automatically lose the ability to renew tokens for the signed-out user. --## Is the implicit grant suitable for my app? --The implicit grant presents more risks than other grants, and the areas you need to pay attention to are well documented (for example, [Misuse of Access Token to Impersonate Resource Owner in Implicit Flow][OAuth2-Spec-Implicit-Misuse] and [OAuth 2.0 Threat Model and Security Considerations][OAuth2-Threat-Model-And-Security-Implications]). 
However, the higher risk profile is largely due to the fact that the flow is meant to enable applications that execute active code served by a remote resource to a browser. If you are planning an SPA architecture, have no backend components, or intend to invoke a Web API via JavaScript, use of the implicit flow for token acquisition is recommended. --If your application is a native client, the implicit flow isn't a great fit. The absence of the Azure AD session cookie in the context of a native client deprives your application of the means of maintaining a long-lived session, which means your application will repeatedly prompt the user when obtaining access tokens for new resources. --If you are developing a Web application that includes a backend and consumes an API from its backend code, the implicit flow is also not a good fit. Other grants give you far more power. For example, the OAuth2 client credentials grant provides the ability to obtain tokens that reflect the permissions assigned to the application itself, as opposed to user delegations, which means the client can maintain programmatic access to resources even when a user is not actively engaged in a session. Such grants also give higher security guarantees: for instance, access tokens never transit through the user's browser and don't risk being saved in the browser history. The client application can also perform strong authentication when requesting a token. --## Next steps --* See [How to integrate an application with Azure AD][ACOM-How-To-Integrate] for additional depth on the application integration process.
--<!--Image references--> --<!--Reference style links in use--> -[ACOM-How-To-Integrate]: ../develop/how-to-integrate.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json -[OAuth2-Spec-Implicit-Misuse]: https://tools.ietf.org/html/rfc6749#section-10.16 -[OAuth2-Threat-Model-And-Security-Implications]: https://tools.ietf.org/html/rfc6819 |
active-directory | V1 Oauth2 On Behalf Of Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-oauth2-on-behalf-of-flow.md | - Title: Service-to-service authentication with OAuth2.0 on-behalf-of flow -description: This article describes how to use HTTP messages to implement service-to-service authentication with the OAuth2.0 On-Behalf-Of flow. -------- Previously updated : 08/5/2020-------# Service-to-service calls that use delegated user identity in the On-Behalf-Of flow ---The OAuth 2.0 On-Behalf-Of (OBO) flow enables an application that invokes a service or web API to pass user authentication to another service or web API. The OBO flow propagates the delegated user identity and permissions through the request chain. For the middle-tier service to make authenticated requests to the downstream service, it must secure an access token from Azure Active Directory (Azure AD) on behalf of the user. --> [!IMPORTANT] -> As of May 2018, an `id_token` can't be used for the On-Behalf-Of flow. Single-page apps (SPAs) must pass an access token to a middle-tier confidential client to perform OBO flows. For more detail about the clients that can perform On-Behalf-Of calls, see [limitations](#client-limitations). --## On-Behalf-Of flow diagram --The OBO flow starts after the user has been authenticated on an application that uses the [OAuth 2.0 authorization code grant flow](v1-protocols-oauth-code.md). At that point, the application sends an access token (token A) to the middle-tier web API (API A) containing the user's claims and consent to access API A. Next, API A makes an authenticated request to the downstream web API (API B). --These steps constitute the On-Behalf-Of flow: -![Shows the steps in the OAuth2.0 On-Behalf-Of flow](./media/v1-oauth2-on-behalf-of-flow/active-directory-protocols-oauth-on-behalf-of-flow.png) --1. The client application makes a request to API A with the token A. -1. 
API A authenticates to the Azure AD token issuance endpoint and requests a token to access API B. -1. The Azure AD token issuance endpoint validates API A's credentials with token A and issues the access token for API B (token B). -1. The request to API B contains token B in the authorization header. -1. API B returns data from the secured resource. -->[!NOTE] ->The audience claim in an access token used to request a token for a downstream service must be the ID of the service making the OBO request. The token also must be signed with the Azure Active Directory global signing key (which is the default for applications registered via **App registrations** in the portal). --## Register the application and service in Azure AD --Register both the middle-tier service and the client application in Azure AD. --### Register the middle-tier service --1. Sign in to the [Azure portal](https://portal.azure.com). -1. On the top bar, select your account and look under the **Directory** list to select an Active Directory tenant for your application. -1. Select **More Services** on the left pane and choose **Azure Active Directory**. -1. Select **App registrations** and then **New registration**. -1. Enter a friendly name for the application and select the application type. -1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. -1. Set the redirect URI to the base URL. -1. Select **Register** to create the application. -1. In the Azure portal, choose your application and select **Certificates & secrets**. -1. Select **New client secret** and add a secret with a duration of either one year or two years. -1. When you save this page, the Azure portal displays the secret value. Copy and save the secret value in a safe location. -1. Create a scope for your application on the **Expose an API** page by selecting **Add a scope**. The portal may require you to create an application ID URI as well.
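The client secret recorded in the steps above is a credential and shouldn't be hard-coded in source. A minimal sketch of one common pattern is to load it from an environment variable at startup (the variable name here is illustrative, not part of any Azure AD convention):

```python
import os

def load_client_secret(var_name="MIDDLE_TIER_CLIENT_SECRET"):
    """Read the client secret recorded from the portal out of an
    environment variable instead of hard-coding it in source."""
    secret = os.environ.get(var_name)
    if not secret:
        raise RuntimeError(f"Set {var_name} before starting the service")
    return secret
```

The service would call `load_client_secret()` once at startup and fail fast if the secret is missing, rather than discovering the misconfiguration on the first token request.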
--> [!IMPORTANT] -> You need the secret to configure the application settings in your implementation. This secret value is not displayed again, and it isn't retrievable by any other means. Record it as soon as it is visible in the Azure portal. --### Register the client application --1. Sign in to the [Azure portal](https://portal.azure.com). -1. On the top bar, select your account and look under the **Directory** list to select an Active Directory tenant for your application. -1. Select **More Services** on the left pane and choose **Azure Active Directory**. -1. Select **App registrations** and then **New registration**. -1. Enter a friendly name for the application and select the application type. -1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. -1. Set the redirect URI to the base URL. -1. Select **Register** to create the application. -1. Configure permissions for your application. In **API permissions**, select **Add a permission** and then **My APIs**. -1. Type the name of the middle-tier service in the text field. -1. Choose **Select Permissions** and then select the scope you created in the last step of registering the middle-tier. --### Configure known client applications --In this scenario, the middle-tier service needs to obtain the user's consent to access the downstream API without user interaction. The option to grant access to the downstream API must be presented up front as part of the consent step during authentication. --Follow the steps below to explicitly bind the client app's registration in Azure AD with the middle-tier service's registration. This operation merges the consent required by both the client and middle-tier into a single dialog. --1. Go to the middle-tier service registration and select **Manifest** to open the manifest editor. -1. Locate the `knownClientApplications` array property and add the client ID of the client application as an element. -1.
Save the manifest by selecting **Save**. --## Service-to-service access token request --To request an access token, make an HTTP POST to the tenant-specific Azure AD endpoint with the following parameters: --``` -https://login.microsoftonline.com/<tenant>/oauth2/token -``` --The client application is secured either by a shared secret or by a certificate. --### First case: Access token request with a shared secret --When using a shared secret, a service-to-service access token request contains the following parameters: --| Parameter | Type | Description | -| | | | -| grant_type |required | The type of the token request. An OBO request uses a JSON Web Token (JWT) so the value must be **urn:ietf:params:oauth:grant-type:jwt-bearer**. | -| assertion |required | The value of the access token used in the request. | -| client_id |required | The app ID assigned to the calling service during registration with Azure AD. To find the app ID in the Azure portal, select **Active Directory**, choose the directory, and then select the application name. | -| client_secret |required | The key registered for the calling service in Azure AD. This value should have been noted at the time of registration. | -| resource |required | The app ID URI of the receiving service (secured resource). To find the app ID URI in the Azure portal, select **Active Directory** and choose the directory. Select the application name, choose **All settings**, and then select **Properties**. | -| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be **on_behalf_of**. | -| scope |required | A space separated list of scopes for the token request. For OpenID Connect, the scope **openid** must be specified.| --#### Example --The following HTTP POST requests an access token for the https://graph.microsoft.com web API. The `client_id` identifies the service that requests the access token. 
--``` -// line breaks for legibility only --POST /oauth2/token HTTP/1.1 -Host: login.microsoftonline.com -Content-Type: application/x-www-form-urlencoded --grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer -&client_id=625391af-c675-43e5-8e44-edd3e30ceb15 -&client_secret=0Y1W%2BY3yYb3d9N8vSjvm8WrGzVZaAaHbHHcGbcgG%2BoI%3D -&resource=https%3A%2F%2Fgraph.microsoft.com -&assertion=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6InowMzl6ZHNGdWl6cEJmQlZLMVRuMjVRSFlPMCIsImtpZCI6InowMzl6ZHNGdWl6cEJmQlZLMVRuMjVRSFlPMCJ9.ewogICJhdWQiOiAiaHR0cHM6Ly9ncmFwaC5taWNyb3NvZnQuY29tIiwKICAiaXNzIjogImh0dHBzOi8vc3RzLndpbmRvd3MubmV0LzI2MDM5Y2NlLTQ4OWQtNDAwMi04MjkzLTViMGM1MTM0ZWFjYi8iLAogICJpYXQiOiAxNDkzNDIzMTY4LAogICJuYmYiOiAxNDkzNDIzMTY4LAogICJleHAiOiAxNDkzNDY2OTUxLAogICJhY3IiOiAiMSIsCiAgImFpbyI6ICJBU1FBMi84REFBQUE1NnZGVmp0WlNjNWdBVWwrY1Z0VFpyM0VvV2NvZEoveWV1S2ZqcTZRdC9NPSIsCiAgImFtciI6IFsKICAgICJwd2QiCiAgXSwKICAiYXBwaWQiOiAiNjI1MzkxYWYtYzY3NS00M2U1LThlNDQtZWRkM2UzMGNlYjE1IiwKICAiYXBwaWRhY3IiOiAiMSIsCiAgImVfZXhwIjogMzAyNjgzLAogICJmYW1pbHlfbmFtZSI6ICJUZXN0IiwKICAiZ2l2ZW5fbmFtZSI6ICJOYXZ5YSIsCiAgImlwYWRkciI6ICIxNjcuMjIwLjEuMTc3IiwKICAibmFtZSI6ICJOYXZ5YSBUZXN0IiwKICAib2lkIjogIjFjZDRiY2FjLWI4MDgtNDIzYS05ZTJmLTgyN2ZiYjFiYjczOSIsCiAgInBsYXRmIjogIjMiLAogICJwdWlkIjogIjEwMDMzRkZGQTEyRUQ3RkUiLAogICJzY3AiOiAiVXNlci5SZWFkIiwKICAic3ViIjogIjNKTUlaSWJlYTc1R2hfWHdDN2ZzX0JDc3kxa1l1ekZKLTUyVm1Zd0JuM3ciLAogICJ0aWQiOiAiMjYwMzljY2UtNDg5ZC00MDAyLTgyOTMtNWIwYzUxMzRlYWNiIiwKICAidW5pcXVlX25hbWUiOiAibmF2eWFAZGRvYmFsaWFub3V0bG9vay5vbm1pY3Jvc29mdC5jb20iLAogICJ1cG4iOiAibmF2eWFAZGRvYmFsaWFub3V0bG9vay5vbm1pY3Jvc29mdC5jb20iLAogICJ1dGkiOiAieEN3ZnpoYS1QMFdKUU9MeENHZ0tBQSIsCiAgInZlciI6ICIxLjAiCn0.cqmUVjfVbqWsxJLUI1Z4FRx1mNQAHP-L0F4EMN09r8FY9bIKeO-0q1eTdP11Nkj_k4BmtaZsTcK_mUygdMqEp9AfyVyA1HYvokcgGCW_Z6DMlVGqlIU4ssEkL9abgl1REHElPhpwBFFBBenOk9iHddD1GddTn6vJbKC3qAaNM5VarjSPu50bVvCrqKNvFixTb5bbdnSz-Qr6n6ACiEimiI1aNOPR2DeKUyWBPaQcU5EAK0ef5IsVJC1yaYDlAcUYIILMDLCD9ebjsy0t9pj_7lvjzUSrbMdSCCdzCqez_MSNxrk1Nu9AecugkBYp3UVUZ
OIyythVrj6-sVvLZKUutQ -&requested_token_use=on_behalf_of -&scope=openid -``` --### Second case: Access token request with a certificate --A service-to-service access token request with a certificate contains the following parameters: --| Parameter | Type | Description | -| | | | -| grant_type |required | The type of the token request. An OBO request uses a JWT access token so the value must be **urn:ietf:params:oauth:grant-type:jwt-bearer**. | -| assertion |required | The value of the token used in the request. | -| client_id |required | The app ID assigned to the calling service during registration with Azure AD. To find the app ID in the Azure portal, select **Active Directory**, choose the directory, and then select the application name. | -| client_assertion_type |required |The value must be `urn:ietf:params:oauth:client-assertion-type:jwt-bearer` | -| client_assertion |required | A JSON Web Token that you create and sign with the certificate you registered as credentials for your application. See [certificate credentials](../develop/certificate-credentials.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) to learn about assertion format and about how to register your certificate.| -| resource |required | The app ID URI of the receiving service (secured resource). To find the app ID URI in the Azure portal, select **Active Directory** and choose the directory. Select the application name, choose **All settings**, and then select **Properties**. | -| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be **on_behalf_of**. | -| scope |required | A space separated list of scopes for the token request. 
For OpenID Connect, the scope **openid** must be specified.| --These parameters are almost the same as with the request with a shared secret except that the `client_secret` parameter is replaced by two parameters: `client_assertion_type` and `client_assertion`. --#### Example --The following HTTP POST requests an access token for the https://graph.microsoft.com web API with a certificate. The `client_id` identifies the service that requests the access token. --``` -// line breaks for legibility only --POST /oauth2/token HTTP/1.1 -Host: login.microsoftonline.com -Content-Type: application/x-www-form-urlencoded --grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer -&client_id=625391af-c675-43e5-8e44-edd3e30ceb15 -&client_assertion_type=urn%3Aietf%3Aparams%3Aoauth%3Aclient-assertion-type%3Ajwt-bearer -&client_assertion=eyJhbGciOiJSUzI1NiIsIng1dCI6Imd4OHRHeXN5amNScUtqRlBuZDdSRnd2d1pJMCJ9.eyJ{a lot of characters here}M8U3bSUKKJDEg -&resource=https%3A%2F%2Fgraph.microsoft.com -&assertion=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6InowMzl6ZHNGdWl6cEJmQlZLMVRuMjVRSFlPMCIsImtpZCI6InowMzl6ZHNGdWl6cEJmQlZLMVRuMjVRSFlPMCJ9.eyJhdWQiOiJodHRwczovL2Rkb2JhbGlhbm91dGxvb2sub25taWNyb3NvZnQuY29tLzE5MjNmODYyLWU2ZGMtNDFhMy04MWRhLTgwMmJhZTAwYWY2ZCIsImlzcyI6Imh0dHBzOi8vc3RzLndpbmRvd3MubmV0LzI2MDM5Y2NlLTQ4OWQtNDAwMi04MjkzLTViMGM1MTM0ZWFjYi8iLCJpYXQiOjE0OTM0MjMxNTIsIm5iZiI6MTQ5MzQyMzE1MiwiZXhwIjoxNDkzNDY2NjUyLCJhY3IiOiIxIiwiYWlvIjoiWTJaZ1lCRFF2aTlVZEc0LzM0L3dpQndqbjhYeVp4YmR1TFhmVE1QeG8yYlN2elgreHBVQSIsImFtciI6WyJwd2QiXSwiYXBwaWQiOiJiMzE1MDA3OS03YmViLTQxN2YtYTA2YS0zZmRjNzhjMzI1NDUiLCJhcHBpZGFjciI6IjAiLCJlX2V4cCI6MzAyNDAwLCJmYW1pbHlfbmFtZSI6IlRlc3QiLCJnaXZlbl9uYW1lIjoiTmF2eWEiLCJpcGFkZHIiOiIxNjcuMjIwLjEuMTc3IiwibmFtZSI6Ik5hdnlhIFRlc3QiLCJvaWQiOiIxY2Q0YmNhYy1iODA4LTQyM2EtOWUyZi04MjdmYmIxYmI3MzkiLCJwbGF0ZiI6IjMiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJzdWIiOiJEVXpYbkdKMDJIUk0zRW5pbDFxdjZCakxTNUllQy0tQ2ZpbzRxS1MzNEc4IiwidGlkIjoiMjYwMzljY2UtNDg5ZC00MDAyLTgyOTMtNWIwYzUxMzRlYWNiIiwidW5pcXVlX
25hbWUiOiJuYXZ5YUBkZG9iYWxpYW5vdXRsb29rLm9ubWljcm9zb2Z0LmNvbSIsInVwbiI6Im5hdnlhQGRkb2JhbGlhbm91dGxvb2sub25taWNyb3NvZnQuY29tIiwidmVyIjoiMS4wIn0.R-Ke-XO7lK0r5uLwxB8g5CrcPAwRln5SccJCfEjU6IUqpqcjWcDzeDdNOySiVPDU_ZU5knJmzRCF8fcjFtPsaA4R7vdIEbDuOur15FXSvE8FvVSjP_49OH6hBYqoSUAslN3FMfbO6Z8YfCIY4tSOB2I6ahQ_x4ZWFWglC3w5mK-_4iX81bqi95eV4RUKefUuHhQDXtWhrSgIEC0YiluMvA4TnaJdLq_tWXIc4_Tq_KfpkvI004ONKgU7EAMEr1wZ4aDcJV2yf22gQ1sCSig6EGSTmmzDuEPsYiyd4NhidRZJP4HiiQh-hePBQsgcSgYGvz9wC6n57ufYKh2wm_Ti3Q -&requested_token_use=on_behalf_of -&scope=openid -``` --## Service-to-service access token response --A success response is a JSON OAuth 2.0 response with the following parameters: --| Parameter | Description | -| | | -| token_type |Indicates the token type value. The only type that Azure AD supports is **Bearer**. For more information about bearer tokens, see the [OAuth 2.0 Authorization Framework: Bearer Token Usage (RFC 6750)](https://www.rfc-editor.org/rfc/rfc6750.txt). | -| scope |The scope of access granted in the token. | -| expires_in |The length of time the access token is valid (in seconds). | -| expires_on |The time when the access token expires. The date is represented as the number of seconds from 1970-01-01T0:0:0Z UTC until the expiration time. This value is used to determine the lifetime of cached tokens. | -| resource |The app ID URI of the receiving service (secured resource). | -| access_token |The requested access token. The calling service can use this token to authenticate to the receiving service. | -| id_token |The requested ID token. The calling service can use this token to verify the user's identity and begin a session with the user. | -| refresh_token |The refresh token for the requested access token. The calling service can use this token to request another access token after the current access token expires. 
| --### Success response example --The following example shows a success response to a request for an access token for the https://graph.microsoft.com web API. --```json -{ - "token_type":"Bearer", - "scope":"User.Read", - "expires_in":"43482", - "ext_expires_in":"302683", - "expires_on":"1493466951", - "not_before":"1493423168", - "resource":"https://graph.microsoft.com", - "access_token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6InowMzl6ZHNGdWl6cEJmQlZLMVRuMjVRSFlPMCIsImtpZCI6InowMzl6ZHNGdWl6cEJmQlZLMVRuMjVRSFlPMCJ9.eyJhdWQiOiJodHRwczovL2dyYXBoLndpbmRvd3MubmV0IiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvMjYwMzljY2UtNDg5ZC00MDAyLTgyOTMtNWIwYzUxMzRlYWNiLyIsImlhdCI6MTQ5MzQyMzE2OCwibmJmIjoxNDkzNDIzMTY4LCJleHAiOjE0OTM0NjY5NTEsImFjciI6IjEiLCJhaW8iOiJBU1FBMi84REFBQUE1NnZGVmp0WlNjNWdBVWwrY1Z0VFpyM0VvV2NvZEoveWV1S2ZqcTZRdC9NPSIsImFtciI6WyJwd2QiXSwiYXBwaWQiOiI2MjUzOTFhZi1jNjc1LTQzZTUtOGU0NC1lZGQzZTMwY2ViMTUiLCJhcHBpZGFjciI6IjEiLCJlX2V4cCI6MzAyNjgzLCJmYW1pbHlfbmFtZSI6IlRlc3QiLCJnaXZlbl9uYW1lIjoiTmF2eWEiLCJpcGFkZHIiOiIxNjcuMjIwLjEuMTc3IiwibmFtZSI6Ik5hdnlhIFRlc3QiLCJvaWQiOiIxY2Q0YmNhYy1iODA4LTQyM2EtOWUyZi04MjdmYmIxYmI3MzkiLCJwbGF0ZiI6IjMiLCJwdWlkIjoiMTAwMzNGRkZBMTJFRDdGRSIsInNjcCI6IlVzZXIuUmVhZCIsInN1YiI6IjNKTUlaSWJlYTc1R2hfWHdDN2ZzX0JDc3kxa1l1ekZKLTUyVm1Zd0JuM3ciLCJ0aWQiOiIyNjAzOWNjZS00ODlkLTQwMDItODI5My01YjBjNTEzNGVhY2IiLCJ1bmlxdWVfbmFtZSI6Im5hdnlhQGRkb2JhbGlhbm91dGxvb2sub25taWNyb3NvZnQuY29tIiwidXBuIjoibmF2eWFAZGRvYmFsaWFub3V0bG9vay5vbm1pY3Jvc29mdC5jb20iLCJ1dGkiOiJ4Q3dmemhhLVAwV0pRT0x4Q0dnS0FBIiwidmVyIjoiMS4wIn0.cqmUVjfVbqWsxJLUI1Z4FRx1mNQAHP-L0F4EMN09r8FY9bIKeO-0q1eTdP11Nkj_k4BmtaZsTcK_mUygdMqEp9AfyVyA1HYvokcgGCW_Z6DMlVGqlIU4ssEkL9abgl1REHElPhpwBFFBBenOk9iHddD1GddTn6vJbKC3qAaNM5VarjSPu50bVvCrqKNvFixTb5bbdnSz-Qr6n6ACiEimiI1aNOPR2DeKUyWBPaQcU5EAK0ef5IsVJC1yaYDlAcUYIILMDLCD9ebjsy0t9pj_7lvjzUSrbMdSCCdzCqez_MSNxrk1Nu9AecugkBYp3UVUZOIyythVrj6-sVvLZKUutQ", - 
"refresh_token":"AQABAAAAAABnfiG-mA6NTae7CdWW7QfdjKGu9-t1scy_TDEmLi4eLQMjJGt_nAoVu6A4oSu1KsRiz8XyQIPKQxSGfbf2FoSK-hm2K8TYzbJuswYusQpJaHUQnSqEvdaCeFuqXHBv84wjFhuanzF9dQZB_Ng5za9xKlUENrNtlq9XuLNVKzxEyeUM7JyxzdY7JiEphWImwgOYf6II316d0Z6-H3oYsFezf4Xsjz-MOBYEov0P64UaB5nJMvDyApV-NWpgklLASfNoSPGb67Bc02aFRZrm4kLk-xTl6eKE6hSo0XU2z2t70stFJDxvNQobnvNHrAmBaHWPAcC3FGwFnBOojpZB2tzG1gLEbmdROVDp8kHEYAwnRK947Py12fJNKExUdN0njmXrKxNZ_fEM33LHW1Tf4kMX_GvNmbWHtBnIyG0w5emb-b54ef5AwV5_tGUeivTCCysgucEc-S7G8Cz0xNJ_BOiM_4bAv9iFmrm9STkltpz0-Tftg8WKmaJiC0xXj6uTf4ZkX79mJJIuuM7XP4ARIcLpkktyg2Iym9jcZqymRkGH2Rm9sxBwC4eeZXM7M5a7TJ-5CqOdfuE3sBPq40RdEWMFLcrAzFvP0VDR8NKHIrPR1AcUruat9DETmTNJukdlJN3O41nWdZOVoJM-uKN3uz2wQ2Ld1z0Mb9_6YfMox9KTJNzRzcL52r4V_y3kB6ekaOZ9wQ3HxGBQ4zFt-2U0mSszIAA", - "id_token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJub25lIn0.eyJhdWQiOiI2MjUzOTFhZi1jNjc1LTQzZTUtOGU0NC1lZGQzZTMwY2ViMTUiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC8yNjAzOWNjZS00ODlkLTQwMDItODI5My01YjBjNTEzNGVhY2IvIiwiaWF0IjoxNDkzNDIzMTY4LCJuYmYiOjE0OTM0MjMxNjgsImV4cCI6MTQ5MzQ2Njk1MSwiYW1yIjpbInB3ZCJdLCJmYW1pbHlfbmFtZSI6IlRlc3QiLCJnaXZlbl9uYW1lIjoiTmF2eWEiLCJpcGFkZHIiOiIxNjcuMjIwLjEuMTc3IiwibmFtZSI6Ik5hdnlhIFRlc3QiLCJvaWQiOiIxY2Q0YmNhYy1iODA4LTQyM2EtOWUyZi04MjdmYmIxYmI3MzkiLCJwbGF0ZiI6IjMiLCJzdWIiOiJEVXpYbkdKMDJIUk0zRW5pbDFxdjZCakxTNUllQy0tQ2ZpbzRxS1MzNEc4IiwidGlkIjoiMjYwMzljY2UtNDg5ZC00MDAyLTgyOTMtNWIwYzUxMzRlYWNiIiwidW5pcXVlX25hbWUiOiJuYXZ5YUBkZG9iYWxpYW5vdXRsb29rLm9ubWljcm9zb2Z0LmNvbSIsInVwbiI6Im5hdnlhQGRkb2JhbGlhbm91dGxvb2sub25taWNyb3NvZnQuY29tIiwidXRpIjoieEN3ZnpoYS1QMFdKUU9MeENHZ0tBQSIsInZlciI6IjEuMCJ9." -} -``` --### Error response example --The Azure AD token endpoint returns an error response when it tries to acquire an access token for a downstream API that is set with a Conditional Access policy (for example, multi-factor authentication). The middle-tier service should surface this error to the client application so that the client application can provide the user interaction to satisfy the Conditional Access policy. 
--```json -{ - "error":"interaction_required", - "error_description":"AADSTS50079: Due to a configuration change made by your administrator, or because you moved to a new location, you must enroll in multi-factor authentication to access 'bf8d80f9-9098-4972-b203-500f535113b1'.\r\nTrace ID: b72a68c3-0926-4b8e-bc35-3150069c2800\r\nCorrelation ID: 73d656cf-54b1-4eb2-b429-26d8165a52d7\r\nTimestamp: 2017-05-01 22:43:20Z", - "error_codes":[50079], - "timestamp":"2017-05-01 22:43:20Z", - "trace_id":"b72a68c3-0926-4b8e-bc35-3150069c2800", - "correlation_id":"73d656cf-54b1-4eb2-b429-26d8165a52d7", - "claims":"{\"access_token\":{\"polids\":{\"essential\":true,\"values\":[\"9ab03e19-ed42-4168-b6b7-7001fb3e933a\"]}}}" -} -``` --## Use the access token to access the secured resource --The middle-tier service can use the acquired access token to make authenticated requests to the downstream web API by setting the token in the `Authorization` header. --### Example --``` -GET /me?api-version=2013-11-08 HTTP/1.1 -Host: graph.microsoft.com -Authorization: Bearer eyJ0eXAiO ... 0X2tnSQLEANnSPHY0gKcgw -``` --## SAML assertions obtained with an OAuth2.0 OBO flow --Some OAuth-based web services need to access other web service APIs that accept SAML assertions in non-interactive flows. Azure Active Directory can provide a SAML assertion in response to an On-Behalf-Of flow that uses a SAML-based web service as a target resource. -->[!NOTE] ->This is a non-standard extension to the OAuth 2.0 On-Behalf-Of flow that allows an OAuth2-based application to access web service API endpoints that consume SAML tokens. --> [!TIP] -> When you call a SAML-protected web service from a front-end web application, you can simply call the API and initiate a normal interactive authentication flow with the user's existing session. You only need to use an OBO flow when a service-to-service call requires a SAML token to provide user context. 
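Before moving on to the SAML variant, the shared-secret exchange shown earlier (the form-encoded OBO token request, followed by a bearer call to the downstream API) can be sketched in Python. This is a minimal, standard-library-only illustration, not production code; the tenant, client ID, secret, and token values are placeholders:

```python
import urllib.parse

def build_obo_token_request(tenant, client_id, client_secret, resource, user_access_token):
    """Build the endpoint URL and form-encoded body for a shared-secret
    On-Behalf-Of token request, mirroring the parameter tables above."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/token"
    body = urllib.parse.urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
        "assertion": user_access_token,  # token A received from the client app
        "requested_token_use": "on_behalf_of",
        "scope": "openid",
    })
    return url, body

def bearer_header(access_token):
    """Authorization header for the downstream call made with token B."""
    return {"Authorization": f"Bearer {access_token}"}

# Placeholder values for illustration only.
url, body = build_obo_token_request(
    tenant="contoso.onmicrosoft.com",
    client_id="625391af-c675-43e5-8e44-edd3e30ceb15",
    client_secret="<client-secret>",
    resource="https://graph.microsoft.com",
    user_access_token="<token-a>",
)
# POST `body` to `url` with Content-Type: application/x-www-form-urlencoded;
# the access_token field of the JSON response is token B, passed to the
# downstream API via bearer_header(token_b).
```

The structure matches the raw HTTP example above; any HTTP client can send the resulting request.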
--### Obtain a SAML token by using an OBO request with a shared secret --A service-to-service request for a SAML assertion contains the following parameters: --| Parameter | Type | Description | -| | | | -| grant_type |required | The type of the token request. For a request that uses a JWT, the value must be **urn:ietf:params:oauth:grant-type:jwt-bearer**. | -| assertion |required | The value of the access token used in the request.| -| client_id |required | The app ID assigned to the calling service during registration with Azure AD. To find the app ID in the Azure portal, select **Active Directory**, choose the directory, and then select the application name. | -| client_secret |required | The key registered for the calling service in Azure AD. This value should have been noted at the time of registration. | -| resource |required | The app ID URI of the receiving service (secured resource). This is the resource that will be the Audience of the SAML token. To find the app ID URI in the Azure portal, select **Active Directory** and choose the directory. Select the application name, choose **All settings**, and then select **Properties**. | -| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be **on_behalf_of**. | -| requested_token_type | required | Specifies the type of token requested. The value can be **urn:ietf:params:oauth:token-type:saml2** or **urn:ietf:params:oauth:token-type:saml1** depending on the requirements of the accessed resource. | --The response contains a SAML token encoded in UTF8 and Base64url. --- **SubjectConfirmationData for a SAML assertion sourced from an OBO call**: If the target application requires a recipient value in **SubjectConfirmationData**, then the value must be a non-wildcard Reply URL in the resource application configuration.-- **The SubjectConfirmationData node**: The node can't contain an **InResponseTo** attribute since it's not part of a SAML response. 
The application receiving the SAML token must be able to accept the SAML assertion without an **InResponseTo** attribute.--- **Consent**: Consent must have been granted to receive a SAML token containing user data on an OAuth flow. For information on permissions and obtaining administrator consent, see [Permissions and consent in the Azure Active Directory v1.0 endpoint](./v1-permissions-consent.md).--### Response with SAML assertion --| Parameter | Description | -| | | -| token_type |Indicates the token type value. The only type that Azure AD supports is **Bearer**. For more information about bearer tokens, see [OAuth 2.0 Authorization Framework: Bearer Token Usage (RFC 6750)](https://www.rfc-editor.org/rfc/rfc6750.txt). | -| scope |The scope of access granted in the token. | -| expires_in |The length of time the access token is valid (in seconds). | -| expires_on |The time when the access token expires. The date is represented as the number of seconds from 1970-01-01T0:0:0Z UTC until the expiration time. This value is used to determine the lifetime of cached tokens. | -| resource |The app ID URI of the receiving service (secured resource). | -| access_token |The parameter that returns the SAML assertion. | -| refresh_token |The refresh token. The calling service can use this token to request another access token after the current SAML assertion expires. | --- token_type: Bearer-- expires_in: 3296-- ext_expires_in: 0-- expires_on: 1529627844-- resource: `https://api.contoso.com`-- access_token: \<SAML assertion\>-- issued_token_type: urn:ietf:params:oauth:token-type:saml2-- refresh_token: \<Refresh token\>--## Client limitations --Public clients with wildcard reply URLs can't use an `id_token` for OBO flows. However, a confidential client can still redeem **access** tokens acquired through the implicit grant flow even if the public client has a wildcard redirect URI registered. 
--## Next steps --Learn more about the OAuth 2.0 protocol and another way to perform service-to-service authentication that uses client credentials: --* [Service to service authentication using OAuth 2.0 client credentials grant in Azure AD](v1-oauth2-client-creds-grant-flow.md) -* [OAuth 2.0 in Azure AD](v1-protocols-oauth-code.md) |
active-directory | V1 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-overview.md | - Title: Azure Active Directory for developers (v1.0) overview -description: This article provides an overview of signing in Microsoft work and school accounts by using the Azure Active Directory v1.0 endpoint and platform. -------- Previously updated : 10/24/2018-------# Azure Active Directory for developers (v1.0) overview ---Azure Active Directory (Azure AD) is a cloud identity service that allows developers to build apps that securely sign in users with a Microsoft work or school account. Azure AD supports developers building both single-tenant, line-of-business (LOB) apps and multi-tenant apps. In addition to basic sign-in, Azure AD also lets apps call both Microsoft APIs like [Microsoft Graph](/graph/overview) and custom APIs that are built on the Azure AD platform. This documentation shows you how to add Azure AD support to your application by using industry-standard protocols like OAuth 2.0 and OpenID Connect. --> [!NOTE] -> Most of the content on this page focuses on the v1.0 endpoint and platform, which supports only Microsoft work or school accounts. If you want to sign in consumer or personal Microsoft accounts, see the information on the [v2.0 endpoint and platform](../develop/v2-overview.md). The v2.0 endpoint offers a unified developer experience for apps that want to sign in all Microsoft identities. --- [Authentication basics](v1-authentication-scenarios.md) An introduction to authentication with Azure AD.-- [Types of applications](app-types.md) An overview of the authentication scenarios that are supported by Azure AD.--## Get started --The v1.0 quickstarts and tutorials walk you through building an app on your preferred platform using the Azure AD Authentication Library (ADAL) SDK.
See the **v1.0 Quickstarts** and **v1.0 Tutorials** in [Microsoft identity platform (Azure Active Directory for developers)](index.yml) to get started. --## How-to guides --See the **v1.0 How-to guides** for detailed info and walkthroughs of the most common tasks in Azure AD. --## Reference topics --The following articles provide detailed information about APIs, protocol messages, and terms that are used in Azure AD. --- [Authentication Libraries (ADAL)](active-directory-authentication-libraries.md) An overview of the libraries and SDKs that are provided by Azure AD.-- [Code samples](sample-v1-code.md) A list of all of the Azure AD code samples.-- [Glossary](../develop/developer-glossary.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) Terminology and definitions of words that are used throughout this documentation.--## Videos --See [Azure Active Directory developer platform videos](videos.md) for help migrating to the new Microsoft identity platform. |
active-directory | V1 Permissions Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-permissions-consent.md | - Title: Permissions in Azure Active Directory -description: Learn about permissions in Azure Active Directory and how to use them. -------- Previously updated : 09/24/2018-------# Permissions and consent in the Azure Active Directory v1.0 endpoint ---Azure Active Directory (Azure AD) makes extensive use of permissions for both OAuth and OpenID Connect (OIDC) flows. When your app receives an access token from Azure AD, the access token will include claims that describe the permissions that your app has with respect to a particular resource. --*Permissions*, also known as *scopes*, make authorization easy for the resource because the resource only needs to check that the token contains the appropriate permission for whatever API the app is calling. --## Types of permissions --Azure AD defines two kinds of permissions: --* **Delegated permissions** - Are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests and the app is delegated permission to act as the signed-in user when making calls to an API. Depending on the API, the user may not be able to consent to the API directly and would instead [require an administrator to provide "admin consent"](../develop/howto-convert-app-to-be-multi-tenant.md). -* **Application permissions** - Are used by apps that run without a signed-in user present; for example, apps that run as background services or daemons. Application permissions can only be [consented to by administrators](../develop/permissions-consent-overview.md) because they are typically powerful and allow access to data across user boundaries, or data that would otherwise be restricted to administrators. Users who are defined as owners of the resource application (i.e.
the API which publishes the permissions) are also allowed to grant application permissions for the APIs they own. --Effective permissions are the permissions that your app will have when making requests to an API. --* For delegated permissions, the effective permissions of your app will be the least privileged intersection of the delegated permissions the app has been granted (through consent) and the privileges of the currently signed-in user. Your app can never have more privileges than the signed-in user. Within organizations, the privileges of the signed-in user may be determined by policy or by membership in one or more administrator roles. To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md). - For example, assume your app has been granted the `User.ReadWrite.All` delegated permission in Microsoft Graph. This permission nominally grants your app permission to read and update the profile of every user in an organization. If the signed-in user is a global administrator, your app will be able to update the profile of every user in the organization. However, if the signed-in user is not in an administrator role, your app will be able to update only the profile of the signed-in user. It will not be able to update the profiles of other users in the organization because the user that it has permission to act on behalf of does not have those privileges. -* For application permissions, the effective permissions of your app are the full level of privileges implied by the permission. For example, an app that has the `User.ReadWrite.All` application permission can update the profile of every user in the organization. --## Permission attributes -Permissions in Azure AD have a number of properties that help users, administrators, or app developers make informed decisions about what the permission grants access to. 
--> [!NOTE] -> You can view the permissions that an Azure AD application or service principal exposes using the Azure portal or PowerShell. Try this script to view the permissions exposed by Microsoft Graph. -> ```powershell -> Connect-AzureAD -> -> # Get OAuth2 Permissions/delegated permissions -> (Get-AzureADServicePrincipal -filter "DisplayName eq 'Microsoft Graph'").OAuth2Permissions -> -> # Get App roles/application permissions -> (Get-AzureADServicePrincipal -filter "DisplayName eq 'Microsoft Graph'").AppRoles -> ``` --| Property name | Description | Example | -| | | | -| `ID` | Is a GUID value that uniquely identifies this permission. | 570282fd-fa5c-430d-a7fd-fc8dc98a9dca | -| `IsEnabled` | Indicates whether this permission is available for use. | true | -| `Type` | Indicates whether this permission requires user consent or admin consent. | User | -| `AdminConsentDescription` | Is a description that's shown to administrators during the admin consent experience. | Allows the app to read email in user mailboxes. | -| `AdminConsentDisplayName` | Is the friendly name that's shown to administrators during the admin consent experience. | Read user mail | -| `UserConsentDescription` | Is a description that's shown to users during a user consent experience. | Allows the app to read email in your mailbox. | -| `UserConsentDisplayName` | Is the friendly name that's shown to users during a user consent experience. | Read your mail | -| `Value` | Is the string that's used to identify the permission during OAuth 2.0 authorize flows. `Value` may also be combined with the App ID URI string in order to form a fully qualified permission name. | `Mail.Read` | --## Types of consent --Applications in Azure AD rely on consent in order to gain access to necessary resources or APIs. There are several kinds of consent that your app may need to know about in order to be successful. 
If you are defining permissions, you will also need to understand how your users will gain access to your app or API. --* **Static user consent** - Occurs automatically during the [OAuth 2.0 authorize flow](v1-protocols-oauth-code.md#request-an-authorization-code) when you specify the resource that your app wants to interact with. In the static user consent scenario, your app must have already specified all the permissions it needs in the app's configuration in the Azure portal. If the user (or administrator, as appropriate) has not granted consent for this app, then Azure AD will prompt the user to provide consent at this time. -- Learn more about registering an Azure AD app that requests access to a static set of APIs. -* **Dynamic user consent** - Is a feature of the v2 Azure AD app model. In this scenario, your app requests a set of permissions that it needs in the [OAuth 2.0 authorize flow for v2 apps](../develop/permissions-consent-overview.md#requesting-individual-user-consent). If the user has not consented already, they will be prompted to consent at this time. [Learn more about dynamic consent](./azure-ad-endpoint-comparison.md#incremental-and-dynamic-consent). -- > [!IMPORTANT] - > Dynamic consent can be convenient, but presents a big challenge for permissions that require admin consent, since the admin consent experience doesn't know about those permissions at consent time. If you require admin privileged permissions or if your app uses dynamic consent, you must register all of the permissions in the Azure portal (not just the subset of permissions that require admin consent). This enables tenant admins to consent on behalf of all their users. - -* **Admin consent** - Is required when your app needs access to certain high-privilege permissions. Admin consent ensures that administrators have some additional controls before authorizing apps or users to access highly privileged data from the organization. 
[Learn more about how to grant admin consent](../develop/permissions-consent-overview.md). --## Best practices --### Client best practices --- Only request permissions that your app needs. Apps with too many permissions are at risk of exposing user data if they are compromised.-- Choose between delegated permissions and application permissions based on the scenario that your app supports.- - Always use delegated permissions if the call is being made on behalf of a user. - - Only use application permissions if the app is non-interactive and not making calls on behalf of any specific user. Application permissions are highly privileged and should only be used when absolutely necessary. -- When using an app based on the v2.0 endpoint, always set the static permissions (those specified in your application registration) to be a superset of the dynamic permissions you request at runtime (those specified in code and sent as query parameters in your authorize request) so that scenarios like admin consent work correctly.--### Resource/API best practices --- Resources that expose APIs should define permissions that are specific to the data or actions that they are protecting. Following this best practice helps to ensure that clients do not end up with permission to access data that they do not need and that users are well informed about what data they are consenting to.-- Resources should explicitly define `Read` and `ReadWrite` permissions separately.-- Resources should mark any permissions that allow access to data across user boundaries as `Admin` permissions.-- Resources should follow the naming pattern `Subject.Permission[.Modifier]`, where:- - `Subject` corresponds with the type of data that is available - - `Permission` corresponds to the action that a user may take upon that data - - `Modifier` is used optionally to describe specializations of another permission - - For example: - - Mail.Read - Allows users to read mail. 
- - Mail.ReadWrite - Allows users to read or write mail. - - Mail.ReadWrite.All - Allows an administrator or user to access all mail in the organization. |
active-directory | V1 Protocols Oauth Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-protocols-oauth-code.md | - Title: Understand the OAuth 2.0 authorization code flow in Azure AD -description: This article describes how to use HTTP messages to authorize access to web applications and web APIs in your tenant using Azure Active Directory and OAuth 2.0. -------- Previously updated : 12/12/2019-------# Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow ---> [!NOTE] -> If you don't tell the server what resource you plan to call, then the server will not trigger the Conditional Access policies for that resource. So in order to have MFA trigger, you will need to include a resource in your URL. -> --Azure Active Directory (Azure AD) uses OAuth 2.0 to enable you to authorize access to web applications and web APIs in your Azure AD tenant. This guide is language independent, and describes how to send and receive HTTP messages without using any of our [open-source libraries](active-directory-authentication-libraries.md). --The OAuth 2.0 authorization code flow is described in [section 4.1 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749#section-4.1). It is used to perform authentication and authorization in most application types, including web apps and natively installed apps. --## Register your application with your AD tenant -First, register your application with your Azure Active Directory (Azure AD) tenant. This will give you an Application ID for your application, as well as enable it to receive tokens. --1. Sign in to the [Azure portal](https://portal.azure.com). - -1. Choose your Azure AD tenant by selecting your account in the top right corner of the page, followed by selecting the **Switch Directory** navigation and then selecting the appropriate tenant. 
- - Skip this step if you only have one Azure AD tenant under your account, or if you've already selected the appropriate Azure AD tenant. - -1. In the Azure portal, search for and select **Azure Active Directory**. - -1. In the **Azure Active Directory** left menu, select **App Registrations**, and then select **New registration**. - -1. Follow the prompts and create a new application. It doesn't matter if it is a web application or a public client (mobile & desktop) application for this tutorial, but if you'd like specific examples for web applications or public client applications, check out our [quickstarts](v1-overview.md). - - - **Name** is the application name and describes your application to end users. - - Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. - - Provide the **Redirect URI**. For web applications, this is the base URL of your app where users can sign in. For example, `http://localhost:12345`. For public client (mobile & desktop), Azure AD uses it to return token responses. Enter a value specific to your application. For example, `http://MyFirstAADApp`. - <!--TODO: add once App ID URI is configurable: The **App ID URI** is a unique identifier for your application. The convention is to use `https://<tenant-domain>/<app-name>`, e.g. `https://contoso.onmicrosoft.com/my-first-aad-app`--> - -1. Once you've completed registration, Azure AD will assign your application a unique client identifier (the **Application ID**). You need this value in the next sections, so copy it from the application page. - -1. To find your application in the Azure portal, select **App registrations**, and then select **View all applications**. 
--## OAuth 2.0 authorization flow --At a high level, the entire authorization flow for an application looks a bit like this: --![OAuth Auth Code Flow](./media/v1-protocols-oauth-code/active-directory-oauth-code-flow-native-app.png) --## Request an authorization code --The authorization code flow begins with the client directing the user to the `/authorize` endpoint. In this request, the client indicates the permissions it needs to acquire from the user. You can get the OAuth 2.0 authorization endpoint for your tenant by selecting **App registrations > Endpoints** in the Azure portal. --``` -// Line breaks for legibility only --https://login.microsoftonline.com/{tenant}/oauth2/authorize? -client_id=6731de76-14a6-49ae-97bc-6eba6914391e -&response_type=code -&redirect_uri=http%3A%2F%2Flocalhost%3A12345 -&response_mode=query -&resource=https%3A%2F%2Fservice.contoso.com%2F -&state=12345 -``` --| Parameter | Type | Description | -| | | | -| tenant |required |The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are tenant identifiers, for example, `8eaef023-2b34-4da1-9baa-8bc8c9d6a490` or `contoso.onmicrosoft.com` or `common` for tenant-independent tokens | -| client_id |required |The Application ID assigned to your app when you registered it with Azure AD. You can find this in the Azure portal. Click **Azure Active Directory** in the services sidebar, click **App registrations**, and choose the application. | -| response_type |required |Must include `code` for the authorization code flow. | -| redirect_uri |recommended |The redirect_uri of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect_uris you registered in the portal, except it must be URL-encoded. For native & mobile apps, you should use the default value of `https://login.microsoftonline.com/common/oauth2/nativeclient`. 
| -| response_mode |optional |Specifies the method that should be used to send the resulting token back to your app. Can be `query`, `fragment`, or `form_post`. `query` provides the code as a query string parameter on your redirect URI. If you're requesting an ID token using the implicit flow, you cannot use `query` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). If you're requesting just the code, you can use `query`, `fragment`, or `form_post`. `form_post` executes a POST containing the code to your redirect URI. The default is `query` for a code flow. | -| state |recommended |A value included in the request that is also returned in the token response. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state is also used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | -| resource | recommended |The App ID URI of the target web API (secured resource). To find the App ID URI, in the Azure portal, click **Azure Active Directory**, click **Application registrations**, open the application's **Settings** page, then click **Properties**. It may also be an external resource like `https://graph.microsoft.com`. This is required in either the authorization request or the token request. To ensure fewer authentication prompts, place it in the authorization request so that consent is received from the user. | -| scope | **ignored** | For v1 Azure AD apps, scopes must be statically configured in the Azure portal under the application's **Settings**, **Required Permissions**. | -| prompt |optional |Indicates the type of user interaction that is required.<p> Valid values are: <p> *login*: The user should be prompted to reauthenticate. 
<p> *select_account*: The user is prompted to select an account, interrupting single sign-on. The user may select an existing signed-in account, enter their credentials for a remembered account, or choose to use a different account altogether. <p> *consent*: User consent has been granted, but needs to be updated. The user should be prompted to consent. <p> *admin_consent*: An administrator should be prompted to consent on behalf of all users in their organization | -| login_hint |optional |Can be used to pre-fill the username/email address field of the sign-in page for the user, if you know their username ahead of time. Often apps use this parameter during reauthentication, having already extracted the username from a previous sign-in using the `preferred_username` claim. | -| domain_hint |optional |Provides a hint about the tenant or domain that the user should use to sign in. The value of the domain_hint is a registered domain for the tenant. If the tenant is federated to an on-premises directory, Azure AD redirects to the specified tenant federation server. | -| code_challenge_method | recommended | The method used to encode the `code_verifier` for the `code_challenge` parameter. Can be one of `plain` or `S256`. If excluded and `code_challenge` is included, `code_challenge` is assumed to be plaintext. Azure AD v1.0 supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). | -| code_challenge | recommended | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE) from a native or public client. Required if `code_challenge_method` is included. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). | --> [!NOTE] -> If the user is part of an organization, an administrator of the organization can consent or decline on the user's behalf, or permit the user to consent. The user is given the option to consent only when the administrator permits it. 
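The authorization request described above is an ordinary URL with URL-encoded query parameters, so it can be assembled with a short script. A minimal sketch using only Python's standard library and the sample values from this article (the values themselves are illustrative):

```python
# Sketch: build the v1.0 /authorize request URL from its parameters.
# Uses only the standard library; the client_id, redirect_uri, resource,
# and state values are the sample values from this article, not real ones.
from urllib.parse import urlencode

tenant = "common"  # or a tenant ID / domain to restrict who can sign in
params = {
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "response_type": "code",
    "redirect_uri": "http://localhost:12345",
    "response_mode": "query",
    "resource": "https://service.contoso.com/",
    "state": "12345",  # verify this round-trips in the response (CSRF check)
}
authorize_url = (
    f"https://login.microsoftonline.com/{tenant}/oauth2/authorize?"
    + urlencode(params)
)
print(authorize_url)
```

`urlencode` handles the percent-encoding shown in the sample request (for example, `redirect_uri=http%3A%2F%2Flocalhost%3A12345`), so the parameter values can be kept readable in code.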
-> -> --At this point, the user is asked to enter their credentials and consent to the permissions requested by the app in the Azure portal. Once the user authenticates and grants consent, Azure AD sends a response to your app at the `redirect_uri` address in your request with the code. --### Successful response -A successful response could look like this: --``` -HTTP/1.1 302 Found -Location: http://localhost:12345/?code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrqqf_ZT_p5uEAEJJ_nZ3UmphWygRNy2C3jJ239gV_DBnZ2syeg95Ki-374WHUP-i3yIhv5i-7KU2CEoPXwURQp6IVYMw-DjAOzn7C3JCu5wpngXmbZKtJdWmiBzHpcO2aICJPu1KvJrDLDP20chJBXzVYJtkfjviLNNW7l7Y3ydcHDsBRKZc3GuMQanmcghXPyoDg41g8XbwPudVh7uCmUponBQpIhbuffFP_tbV8SNzsPoFz9CLpBCZagJVXeqWoYMPe2dSsPiLO9Alf_YIe5zpi-zY4C3aLw5g9at35eZTfNd0gBRpR5ojkMIcZZ6IgAA&session_state=7B29111D-C220-4263-99AB-6F6E135D75EF&state=D79E5777-702E-4260-9A62-37F75FF22CCE -``` --| Parameter | Description | -| | | -| admin_consent |The value is True if an administrator consented to a consent request prompt. | -| code |The authorization code that the application requested. The application can use the authorization code to request an access token for the target resource. | -| session_state |A unique value that identifies the current user session. This value is a GUID, but should be treated as an opaque value that is passed without examination. | -| state |If a state parameter is included in the request, the same value should appear in the response. It's a good practice for the application to verify that the state values in the request and response are identical before using the response. This helps to detect [Cross-Site Request Forgery (CSRF) attacks](https://tools.ietf.org/html/rfc6749#section-10.12) against the client. | --### Error response -Error responses may also be sent to the `redirect_uri` so that the application can handle them appropriately. --``` -GET http://localhost:12345/? 
-error=access_denied -&error_description=the+user+canceled+the+authentication -``` --| Parameter | Description | -| | | -| error |An error code value defined in Section 5.2 of the [OAuth 2.0 Authorization Framework](https://tools.ietf.org/html/rfc6749). The next table describes the error codes that Azure AD returns. | -| error_description |A more detailed description of the error. This message is not intended to be end-user friendly. | -| state |The state value is a randomly generated non-reused value that is sent in the request and returned in the response to prevent cross-site request forgery (CSRF) attacks. | --#### Error codes for authorization endpoint errors -The following table describes the various error codes that can be returned in the `error` parameter of the error response. --| Error Code | Description | Client Action | -| | | | -| invalid_request |Protocol error, such as a missing required parameter. |Fix and resubmit the request. This is a development error, and is typically caught during initial testing. | -| unauthorized_client |The client application is not permitted to request an authorization code. |This usually occurs when the client application is not registered in Azure AD or is not added to the user's Azure AD tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. | -| access_denied |The resource owner denied consent. |The client application can notify the user that it cannot proceed unless the user consents. | -| unsupported_response_type |The authorization server does not support the response type in the request. |Fix and resubmit the request. This is a development error, and is typically caught during initial testing. | -| server_error |The server encountered an unexpected error. |Retry the request. These errors can result from temporary conditions. The client application might explain to the user that its response is delayed due to a temporary error. 
| -| temporarily_unavailable |The server is temporarily too busy to handle the request. |Retry the request. The client application might explain to the user that its response is delayed due to a temporary condition. | -| invalid_resource |The target resource is invalid because it does not exist, Azure AD cannot find it, or it is not correctly configured. |This indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. | --## Use the authorization code to request an access token -Now that you've acquired an authorization code and have been granted permission by the user, you can redeem the code for an access token to the desired resource by sending a POST request to the `/token` endpoint: --``` -// Line breaks for legibility only --POST /{tenant}/oauth2/token HTTP/1.1 -Host: login.microsoftonline.com -Content-Type: application/x-www-form-urlencoded -grant_type=authorization_code -&client_id=2d4d11a2-f814-46a7-890a-274a72a7309e -&code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrqqf_ZT_p5uEAEJJ_nZ3UmphWygRNy2C3jJ239gV_DBnZ2syeg95Ki-374WHUP-i3yIhv5i-7KU2CEoPXwURQp6IVYMw-DjAOzn7C3JCu5wpngXmbZKtJdWmiBzHpcO2aICJPu1KvJrDLDP20chJBXzVYJtkfjviLNNW7l7Y3ydcHDsBRKZc3GuMQanmcghXPyoDg41g8XbwPudVh7uCmUponBQpIhbuffFP_tbV8SNzsPoFz9CLpBCZagJVXeqWoYMPe2dSsPiLO9Alf_YIe5zpi-zY4C3aLw5g9at35eZTfNd0gBRpR5ojkMIcZZ6IgAA -&redirect_uri=http%3A%2F%2Flocalhost%3A12345 -&resource=https%3A%2F%2Fservice.contoso.com%2F -&client_secret=p@ssw0rd --//NOTE: client_secret only required for web apps -``` --| Parameter | Type | Description | -| | | | -| tenant |required |The `{tenant}` value in the path of the request can be used to control who can sign into the application. 
The allowed values are tenant identifiers, for example, `8eaef023-2b34-4da1-9baa-8bc8c9d6a490` or `contoso.onmicrosoft.com` or `common` for tenant-independent tokens | -| client_id |required |The Application ID assigned to your app when you registered it with Azure AD. You can find this in the Azure portal. The Application ID is displayed in the settings of the app registration. | -| grant_type |required |Must be `authorization_code` for the authorization code flow. | -| code |required |The `authorization_code` that you acquired in the previous section. | -| redirect_uri |required | A `redirect_uri` registered on the client application. | -| client_secret |required for web apps, not allowed for public clients |The application secret that you created in the Azure portal for your app under **Keys**. It cannot be used in a native app (public client), because client_secrets cannot be reliably stored on devices. It is required for web apps and web APIs (all confidential clients), which have the ability to store the `client_secret` securely on the server side. The client_secret should be URL-encoded before being sent. | -| resource | recommended |The App ID URI of the target web API (secured resource). To find the App ID URI, in the Azure portal, click **Azure Active Directory**, click **Application registrations**, open the application's **Settings** page, then click **Properties**. It may also be an external resource like `https://graph.microsoft.com`. This is required in either the authorization request or the token request. To ensure fewer authentication prompts, place it in the authorization request so that consent is received from the user. If it appears in both the authorization request and the token request, the `resource` parameters must match. | -| code_verifier | optional | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. 
For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636) | --### Successful response -Azure AD returns an [access token](../develop/access-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) upon a successful response. To minimize network calls from the client application and their associated latency, the client application should cache access tokens for the token lifetime that is specified in the OAuth 2.0 response. To determine the token lifetime, use either the `expires_in` or `expires_on` parameter values. --If a web API resource returns an `invalid_token` error code, this might indicate that the resource has determined that the token is expired. If the client and resource clock times are different (known as a "time skew"), the resource might consider the token to be expired before the token is cleared from the client cache. If this occurs, clear the token from the cache, even if it is still within its calculated lifetime. 
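The caching and clock-skew guidance above can be sketched as a small helper. This is a minimal in-memory cache, not a library API; the 300-second skew margin is an illustrative assumption, not a documented value:

```python
import time

class TokenCache:
    """Minimal sketch: cache an access token per resource for its advertised
    lifetime (expires_in), but treat it as expired slightly early so that a
    resource with a faster clock doesn't reject a token we still consider
    valid. The 300-second margin is an illustrative choice."""

    SKEW_SECONDS = 300

    def __init__(self):
        self._tokens = {}  # resource -> (access_token, expires_at)

    def put(self, resource, access_token, expires_in):
        self._tokens[resource] = (access_token, time.time() + int(expires_in))

    def get(self, resource):
        entry = self._tokens.get(resource)
        if entry is None:
            return None
        token, expires_at = entry
        if time.time() >= expires_at - self.SKEW_SECONDS:
            self.evict(resource)  # expired, or close enough to matter
            return None
        return token

    def evict(self, resource):
        # Also call this when the resource returns invalid_token despite the
        # token appearing valid locally (the time-skew case described above).
        self._tokens.pop(resource, None)
```

A caller would try `get()` first, redeem a new authorization code or refresh token only on a miss, and call `evict()` whenever the resource answers with `invalid_token`.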
--A successful response could look like this: --``` -{ - "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1THdqcHdBSk9NOW4tQSJ9.eyJhdWQiOiJodHRwczovL3NlcnZpY2UuY29udG9zby5jb20vIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvN2ZlODE0NDctZGE1Ny00Mzg1LWJlY2ItNmRlNTdmMjE0NzdlLyIsImlhdCI6MTM4ODQ0MDg2MywibmJmIjoxMzg4NDQwODYzLCJleHAiOjEzODg0NDQ3NjMsInZlciI6IjEuMCIsInRpZCI6IjdmZTgxNDQ3LWRhNTctNDM4NS1iZWNiLTZkZTU3ZjIxNDc3ZSIsIm9pZCI6IjY4Mzg5YWUyLTYyZmEtNGIxOC05MWZlLTUzZGQxMDlkNzRmNSIsInVwbiI6ImZyYW5rbUBjb250b3NvLmNvbSIsInVuaXF1ZV9uYW1lIjoiZnJhbmttQGNvbnRvc28uY29tIiwic3ViIjoiZGVOcUlqOUlPRTlQV0pXYkhzZnRYdDJFYWJQVmwwQ2o4UUFtZWZSTFY5OCIsImZhbWlseV9uYW1lIjoiTWlsbGVyIiwiZ2l2ZW5fbmFtZSI6IkZyYW5rIiwiYXBwaWQiOiIyZDRkMTFhMi1mODE0LTQ2YTctODkwYS0yNzRhNzJhNzMwOWUiLCJhcHBpZGFjciI6IjAiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJhY3IiOiIxIn0.JZw8jC0gptZxVC-7l5sFkdnJgP3_tRjeQEPgUn28XctVe3QqmheLZw7QVZDPCyGycDWBaqy7FLpSekET_BftDkewRhyHk9FW_KeEz0ch2c3i08NGNDbr6XYGVayNuSesYk5Aw_p3ICRlUV1bqEwk-Jkzs9EEkQg4hbefqJS6yS1HoV_2EsEhpd_wCQpxK89WPs3hLYZETRJtG5kvCCEOvSHXmDE6eTHGTnEgsIk--UlPe275Dvou4gEAwLofhLDQbMSjnlV5VLsjimNBVcSRFShoxmQwBJR_b2011Y5IuD6St5zPnzruBbZYkGNurQK63TJPWmRd3mbJsGM0mf3CUQ", - "token_type": "Bearer", - "expires_in": "3600", - "expires_on": "1388444763", - "resource": "https://service.contoso.com/", - "refresh_token": "AwABAAAAvPM1KaPlrEqdFSBzjqfTGAMxZGUTdM0t4B4rTfgV29ghDOHRc2B-C_hHeJaJICqjZ3mY2b_YNqmf9SoAylD1PycGCB90xzZeEDg6oBzOIPfYsbDWNf621pKo2Q3GGTHYlmNfwoc-OlrxK69hkha2CF12azM_NYhgO668yfcUl4VBbiSHZyd1NVZG5QTIOcbObu3qnLutbpadZGAxqjIbMkQ2bQS09fTrjMBtDE3D6kSMIodpCecoANon9b0LATkpitimVCrl-NyfN3oyG4ZCWu18M9-vEou4Sq-1oMDzExgAf61noxzkNiaTecM-Ve5cq6wHqYQjfV9DOz4lbceuYCAA", - "scope": "https%3A%2F%2Fgraph.microsoft.com%2Fmail.read", - "id_token": 
"eyJ0eXAiOiJKV1QiLCJhbGciOiJub25lIn0.eyJhdWQiOiIyZDRkMTFhMi1mODE0LTQ2YTctODkwYS0yNzRhNzJhNzMwOWUiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC83ZmU4MTQ0Ny1kYTU3LTQzODUtYmVjYi02ZGU1N2YyMTQ3N2UvIiwiaWF0IjoxMzg4NDQwODYzLCJuYmYiOjEzODg0NDA4NjMsImV4cCI6MTM4ODQ0NDc2MywidmVyIjoiMS4wIiwidGlkIjoiN2ZlODE0NDctZGE1Ny00Mzg1LWJlY2ItNmRlNTdmMjE0NzdlIiwib2lkIjoiNjgzODlhZTItNjJmYS00YjE4LTkxZmUtNTNkZDEwOWQ3NGY1IiwidXBuIjoiZnJhbmttQGNvbnRvc28uY29tIiwidW5pcXVlX25hbWUiOiJmcmFua21AY29udG9zby5jb20iLCJzdWIiOiJKV3ZZZENXUGhobHBTMVpzZjd5WVV4U2hVd3RVbTV5elBtd18talgzZkhZIiwiZmFtaWx5X25hbWUiOiJNaWxsZXIiLCJnaXZlbl9uYW1lIjoiRnJhbmsifQ." -} --``` --| Parameter | Description | -| | | -| access_token |The requested access token. This is an opaque string - it depends on what the resource expects to receive, and is not intended for the client to look at. The app can use this token to authenticate to the secured resource, such as a web API. | -| token_type |Indicates the token type value. The only type that Azure AD supports is Bearer. For more information about Bearer tokens, see [OAuth2.0 Authorization Framework: Bearer Token Usage (RFC 6750)](https://www.rfc-editor.org/rfc/rfc6750.txt) | -| expires_in |How long the access token is valid (in seconds). | -| expires_on |The time when the access token expires. The date is represented as the number of seconds from 1970-01-01T0:0:0Z UTC until the expiration time. This value is used to determine the lifetime of cached tokens. | -| resource |The App ID URI of the web API (secured resource). | -| scope |Impersonation permissions granted to the client application. The default permission is `user_impersonation`. The owner of the secured resource can register additional values in Azure AD. | -| refresh_token |An OAuth 2.0 refresh token. The app can use this token to acquire additional access tokens after the current access token expires. Refresh tokens are long-lived, and can be used to retain access to resources for extended periods of time. 
| -| id_token |An unsigned JSON Web Token (JWT) representing an [ID token](../develop/id-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). The app can base64Url decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it should not rely on them for any authorization or security boundaries. | --For more information about JSON web tokens, see the [JWT IETF draft specification](https://go.microsoft.com/fwlink/?LinkId=392344). To learn more about `id_tokens`, see the [v1.0 OpenID Connect flow](v1-protocols-openid-connect-code.md). --### Error response -The token issuance endpoint errors are HTTP error codes, because the client calls the token issuance endpoint directly. In addition to the HTTP status code, the Azure AD token issuance endpoint also returns a JSON document with objects that describe the error. --A sample error response could look like this: --``` -{ - "error": "invalid_grant", - "error_description": "AADSTS70002: Error validating credentials. AADSTS70008: The provided authorization code or refresh token is expired. Send a new interactive authorization request for this user and resource.\r\nTrace ID: 3939d04c-d7ba-42bf-9cb7-1e5854cdce9e\r\nCorrelation ID: a8125194-2dc8-4078-90ba-7b6592a7f231\r\nTimestamp: 2016-04-11 18:00:12Z", - "error_codes": [ - 70002, - 70008 - ], - "timestamp": "2016-04-11 18:00:12Z", - "trace_id": "3939d04c-d7ba-42bf-9cb7-1e5854cdce9e", - "correlation_id": "a8125194-2dc8-4078-90ba-7b6592a7f231" -} -``` -| Parameter | Description | -| | | -| error |An error code string that can be used to classify types of errors that occur, and can be used to react to errors. | -| error_description |A specific error message that can help a developer identify the root cause of an authentication error. | -| error_codes |A list of STS-specific error codes that can help in diagnostics. 
| -| timestamp |The time at which the error occurred. | -| trace_id |A unique identifier for the request that can help in diagnostics. | -| correlation_id |A unique identifier for the request that can help in diagnostics across components. | --#### HTTP status codes -The following table lists the HTTP status codes that the token issuance endpoint returns. In some cases, the HTTP status code is sufficient to describe the response, but if there are errors, you need to parse the accompanying JSON document and examine its error code. --| HTTP Code | Description | -| | | -| 400 |Default HTTP code. Used in most cases and is typically due to a malformed request. Fix and resubmit the request. | -| 401 |Authentication failed. For example, the request is missing the client_secret parameter. | -| 403 |Authorization failed. For example, the user does not have permission to access the resource. | -| 500 |An internal error has occurred at the service. Retry the request. | --#### Error codes for token endpoint errors -| Error Code | Description | Client Action | -| | | | -| invalid_request |Protocol error, such as a missing required parameter. |Fix and resubmit the request | -| invalid_grant |The authorization code is invalid or has expired. |Try a new request to the `/authorize` endpoint | -| unauthorized_client |The authenticated client is not authorized to use this authorization grant type. |This usually occurs when the client application is not registered in Azure AD or is not added to the user's Azure AD tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. | -| invalid_client |Client authentication failed. |The client credentials are not valid. To fix this error, the application administrator updates the credentials. | -| unsupported_grant_type |The authorization server does not support the authorization grant type. |Change the grant type in the request. 
This type of error should occur only during development and be detected during initial testing. | -| invalid_resource |The target resource is invalid because it does not exist, Azure AD cannot find it, or it is not correctly configured. |This indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. | -| interaction_required |The request requires user interaction. For example, an additional authentication step is required. | Instead of a non-interactive request, retry with an interactive authorization request for the same resource. | -| temporarily_unavailable |The server is temporarily too busy to handle the request. |Retry the request. The client application might explain to the user that its response is delayed due to a temporary condition. | --## Use the access token to access the resource -Now that you've successfully acquired an `access_token`, you can use the token in requests to web APIs by including it in the `Authorization` header. The [RFC 6750](https://www.rfc-editor.org/rfc/rfc6750.txt) specification explains how to use bearer tokens in HTTP requests to access protected resources. 
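In code, this amounts to setting a single header on each request. A minimal Python sketch using only the standard library (the URL and token value are placeholders, and `build_resource_request` is a hypothetical helper name):

```python
import urllib.request

def build_resource_request(url: str, access_token: str) -> urllib.request.Request:
    # Per RFC 6750, pass the access token in the Authorization header using
    # the Bearer scheme. Avoid putting bearer tokens in query strings.
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {access_token}"})

req = build_resource_request("https://service.contoso.com/data", "<access_token>")
assert req.get_header("Authorization") == "Bearer <access_token>"
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) then returns the protected resource if the token is valid.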
--### Sample request -``` -GET /data HTTP/1.1 -Host: service.contoso.com -Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1THdqcHdBSk9NOW4tQSJ9.eyJhdWQiOiJodHRwczovL3NlcnZpY2UuY29udG9zby5jb20vIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvN2ZlODE0NDctZGE1Ny00Mzg1LWJlY2ItNmRlNTdmMjE0NzdlLyIsImlhdCI6MTM4ODQ0MDg2MywibmJmIjoxMzg4NDQwODYzLCJleHAiOjEzODg0NDQ3NjMsInZlciI6IjEuMCIsInRpZCI6IjdmZTgxNDQ3LWRhNTctNDM4NS1iZWNiLTZkZTU3ZjIxNDc3ZSIsIm9pZCI6IjY4Mzg5YWUyLTYyZmEtNGIxOC05MWZlLTUzZGQxMDlkNzRmNSIsInVwbiI6ImZyYW5rbUBjb250b3NvLmNvbSIsInVuaXF1ZV9uYW1lIjoiZnJhbmttQGNvbnRvc28uY29tIiwic3ViIjoiZGVOcUlqOUlPRTlQV0pXYkhzZnRYdDJFYWJQVmwwQ2o4UUFtZWZSTFY5OCIsImZhbWlseV9uYW1lIjoiTWlsbGVyIiwiZ2l2ZW5fbmFtZSI6IkZyYW5rIiwiYXBwaWQiOiIyZDRkMTFhMi1mODE0LTQ2YTctODkwYS0yNzRhNzJhNzMwOWUiLCJhcHBpZGFjciI6IjAiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJhY3IiOiIxIn0.JZw8jC0gptZxVC-7l5sFkdnJgP3_tRjeQEPgUn28XctVe3QqmheLZw7QVZDPCyGycDWBaqy7FLpSekET_BftDkewRhyHk9FW_KeEz0ch2c3i08NGNDbr6XYGVayNuSesYk5Aw_p3ICRlUV1bqEwk-Jkzs9EEkQg4hbefqJS6yS1HoV_2EsEhpd_wCQpxK89WPs3hLYZETRJtG5kvCCEOvSHXmDE6eTHGTnEgsIk--UlPe275Dvou4gEAwLofhLDQbMSjnlV5VLsjimNBVcSRFShoxmQwBJR_b2011Y5IuD6St5zPnzruBbZYkGNurQK63TJPWmRd3mbJsGM0mf3CUQ -``` --### Error response -Secured resources that implement RFC 6750 issue HTTP status codes. If the request does not include authentication credentials or is missing the token, the response includes a `WWW-Authenticate` header. When a request fails, the resource server responds with the HTTP status code and an error code. 
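A client can pull the challenge parameters out of a `WWW-Authenticate` header with a small parser. A hypothetical sketch (the header value mirrors the format described in this article; production code should handle quoting edge cases or use a vetted header parser):

```python
import re

def parse_bearer_challenge(www_authenticate: str) -> dict:
    # Extract key="value" pairs from a Bearer challenge, e.g.
    # Bearer authorization_uri="...", error="invalid_token", error_description="..."
    return dict(re.findall(r'(\w+)="([^"]*)"', www_authenticate))

challenge = parse_bearer_challenge(
    'Bearer authorization_uri="https://login.microsoftonline.com/contoso.com/oauth2/authorize", '
    'error="invalid_token", error_description="The access token is missing."'
)
assert challenge["error"] == "invalid_token"
```

The resulting dictionary gives the app the `error` code to branch on and the `authorization_uri` to validate before any retry.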
--The following is an example of an unsuccessful response when the client request does not include the bearer token: --``` -HTTP/1.1 401 Unauthorized -WWW-Authenticate: Bearer authorization_uri="https://login.microsoftonline.com/contoso.com/oauth2/authorize", error="invalid_token", error_description="The access token is missing.", -``` --#### Error parameters -| Parameter | Description | -| | | -| authorization_uri |The URI (physical endpoint) of the authorization server. This value is also used as a lookup key to get more information about the server from a discovery endpoint. <p><p> The client must validate that the authorization server is trusted. When the resource is protected by Azure AD, it is sufficient to verify that the URL begins with `https://login.microsoftonline.com` or another hostname that Azure AD supports. A tenant-specific resource should always return a tenant-specific authorization URI. | -| error |An error code value defined in Section 5.2 of the [OAuth 2.0 Authorization Framework](https://tools.ietf.org/html/rfc6749). | -| error_description |A more detailed description of the error. This message is not intended to be end-user friendly. | -| resource_id |Returns the unique identifier of the resource. The client application can use this identifier as the value of the `resource` parameter when it requests a token for the resource. <p><p> It is important for the client application to verify this value, otherwise a malicious service might be able to induce an **elevation-of-privileges** attack. <p><p> The recommended strategy for preventing an attack is to verify that the `resource_id` matches the base of the web API URL that is being accessed. For example, if `https://service.contoso.com/data` is being accessed, the `resource_id` can be `https://service.contoso.com/`. The client application must reject a `resource_id` that does not begin with the base URL unless there is a reliable alternate way to verify the id. 
| --#### Bearer scheme error codes -The RFC 6750 specification defines the following errors for resources that use the WWW-Authenticate header and Bearer scheme in the response. --| HTTP Status Code | Error Code | Description | Client Action | -| | | | | -| 400 |invalid_request |The request is not well-formed. For example, it might be missing a parameter or using the same parameter twice. |Fix the error and retry the request. This type of error should occur only during development and be detected in initial testing. | -| 401 |invalid_token |The access token is missing, invalid, or revoked. The value of the error_description parameter provides additional detail. |Request a new token from the authorization server. If the new token fails, an unexpected error has occurred. Send an error message to the user and retry after random delays. | -| 403 |insufficient_scope |The access token does not contain the impersonation permissions required to access the resource. |Send a new authorization request to the authorization endpoint. If the response contains the scope parameter, use the scope value in the request to the resource. | -| 403 |insufficient_access |The subject of the token does not have the permissions that are required to access the resource. |Prompt the user to use a different account or to request permissions to the specified resource. | --## Refreshing the access tokens --Access tokens are short-lived and must be refreshed after they expire to continue accessing resources. You can refresh the `access_token` by submitting another `POST` request to the `/token` endpoint, but this time providing the `refresh_token` instead of the `code`. Refresh tokens are valid for all resources that your client has already been given consent to access - thus, a refresh token issued on a request for `resource=https://graph.microsoft.com` can be used to request a new access token for `resource=https://contoso.com/api`. --Refresh tokens do not have specified lifetimes. 
Typically, the lifetimes of refresh tokens are relatively long. However, in some cases, refresh tokens expire, are revoked, or lack sufficient privileges for the desired action. Your application needs to expect and handle errors returned by the token issuance endpoint correctly. --When you receive a response with a refresh token error, discard the current refresh token and request a new authorization code or access token. In particular, when using a refresh token in the Authorization Code Grant flow, if you receive a response with the `interaction_required` or `invalid_grant` error codes, discard the refresh token and request a new authorization code. --A sample request to the **tenant-specific** endpoint (you can also use the **common** endpoint) to get a new access token using a refresh token looks like this: --``` -// Line breaks for legibility only --POST /{tenant}/oauth2/token HTTP/1.1 -Host: https://login.microsoftonline.com -Content-Type: application/x-www-form-urlencoded --client_id=6731de76-14a6-49ae-97bc-6eba6914391e -&refresh_token=OAAABAAAAiL9Kn2Z27UubvWFPbm0gLWQJVzCTE9UkP3pSx1aXxUjq... 
-&grant_type=refresh_token -&resource=https%3A%2F%2Fservice.contoso.com%2F -&client_secret=JqQX2PNo9bpM0uEihUPzyrh // NOTE: Only required for web apps -``` --### Successful response -A successful token response will look like: --``` -{ - "token_type": "Bearer", - "expires_in": "3600", - "expires_on": "1460404526", - "resource": "https://service.contoso.com/", - "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1THdqcHdBSk9NOW4tQSJ9.eyJhdWQiOiJodHRwczovL3NlcnZpY2UuY29udG9zby5jb20vIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvN2ZlODE0NDctZGE1Ny00Mzg1LWJlY2ItNmRlNTdmMjE0NzdlLyIsImlhdCI6MTM4ODQ0MDg2MywibmJmIjoxMzg4NDQwODYzLCJleHAiOjEzODg0NDQ3NjMsInZlciI6IjEuMCIsInRpZCI6IjdmZTgxNDQ3LWRhNTctNDM4NS1iZWNiLTZkZTU3ZjIxNDc3ZSIsIm9pZCI6IjY4Mzg5YWUyLTYyZmEtNGIxOC05MWZlLTUzZGQxMDlkNzRmNSIsInVwbiI6ImZyYW5rbUBjb250b3NvLmNvbSIsInVuaXF1ZV9uYW1lIjoiZnJhbmttQGNvbnRvc28uY29tIiwic3ViIjoiZGVOcUlqOUlPRTlQV0pXYkhzZnRYdDJFYWJQVmwwQ2o4UUFtZWZSTFY5OCIsImZhbWlseV9uYW1lIjoiTWlsbGVyIiwiZ2l2ZW5fbmFtZSI6IkZyYW5rIiwiYXBwaWQiOiIyZDRkMTFhMi1mODE0LTQ2YTctODkwYS0yNzRhNzJhNzMwOWUiLCJhcHBpZGFjciI6IjAiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJhY3IiOiIxIn0.JZw8jC0gptZxVC-7l5sFkdnJgP3_tRjeQEPgUn28XctVe3QqmheLZw7QVZDPCyGycDWBaqy7FLpSekET_BftDkewRhyHk9FW_KeEz0ch2c3i08NGNDbr6XYGVayNuSesYk5Aw_p3ICRlUV1bqEwk-Jkzs9EEkQg4hbefqJS6yS1HoV_2EsEhpd_wCQpxK89WPs3hLYZETRJtG5kvCCEOvSHXmDE6eTHGTnEgsIk--UlPe275Dvou4gEAwLofhLDQbMSjnlV5VLsjimNBVcSRFShoxmQwBJR_b2011Y5IuD6St5zPnzruBbZYkGNurQK63TJPWmRd3mbJsGM0mf3CUQ", - "refresh_token": "AwABAAAAv YNqmf9SoAylD1PycGCB90xzZeEDg6oBzOIPfYsbDWNf621pKo2Q3GGTHYlmNfwoc-OlrxK69hkha2CF12azM_NYhgO668yfcUl4VBbiSHZyd1NVZG5QTIOcbObu3qnLutbpadZGAxqjIbMkQ2bQS09fTrjMBtDE3D6kSMIodpCecoANon9b0LATkpitimVCrl PM1KaPlrEqdFSBzjqfTGAMxZGUTdM0t4B4rTfgV29ghDOHRc2B-C_hHeJaJICqjZ3mY2b_YNqmf9SoAylD1PycGCB90xzZeEDg6oBzOIPfYsbDWNf621pKo2Q3GGTHYlmNfwoc-OlrxK69hkha2CF12azM_NYhgO668yfmVCrl-NyfN3oyG4ZCWu18M9-vEou4Sq-1oMDzExgAf61noxzkNiaTecM-Ve5cq6wHqYQjfV9DOz4lbceuYCAA" -} -``` -| Parameter 
| Description | -| | | -| token_type |The token type. The only supported value is **bearer**. | -| expires_in |The remaining lifetime of the token in seconds. A typical value is 3600 (one hour). | -| expires_on |The date and time on which the token expires. The date is represented as the number of seconds from 1970-01-01T0:0:0Z UTC until the expiration time. | -| resource |Identifies the secured resource that the access token can be used to access. | -| scope |Impersonation permissions granted to the native client application. The default permission is **user_impersonation**. The owner of the target resource can register alternate values in Azure AD. | -| access_token |The new access token that was requested. | -| refresh_token |A new OAuth 2.0 refresh_token that can be used to request new access tokens when the one in this response expires. | --### Error response -A sample error response could look like this: --``` -{ - "error": "invalid_resource", - "error_description": "AADSTS50001: The application named https://foo.microsoft.com/mail.read was not found in the tenant named 295e01fc-0c56-4ac3-ac57-5d0ed568f872. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You might have sent your authentication request to the wrong tenant.\r\nTrace ID: ef1f89f6-a14f-49de-9868-61bd4072f0a9\r\nCorrelation ID: b6908274-2c58-4e91-aea9-1f6b9c99347c\r\nTimestamp: 2016-04-11 18:59:01Z", - "error_codes": [ - 50001 - ], - "timestamp": "2016-04-11 18:59:01Z", - "trace_id": "ef1f89f6-a14f-49de-9868-61bd4072f0a9", - "correlation_id": "b6908274-2c58-4e91-aea9-1f6b9c99347c" -} -``` --| Parameter | Description | -| | | -| error |An error code string that can be used to classify types of errors that occur, and can be used to react to errors. | -| error_description |A specific error message that can help a developer identify the root cause of an authentication error. 
| -| error_codes |A list of STS-specific error codes that can help in diagnostics. | -| timestamp |The time at which the error occurred. | -| trace_id |A unique identifier for the request that can help in diagnostics. | -| correlation_id |A unique identifier for the request that can help in diagnostics across components. | --For a description of the error codes and the recommended client action, see [Error codes for token endpoint errors](#error-codes-for-token-endpoint-errors). --## Next steps -To learn more about the Azure AD v1.0 endpoint and how to add authentication and authorization to your web applications and web APIs, see [sample applications](sample-v1-code.md). |
active-directory | V1 Protocols Openid Connect Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-protocols-openid-connect-code.md | - Title: Authorize web app access with OpenID Connect & Azure AD -description: This article describes how to use HTTP messages to authorize access to web applications and web APIs in your tenant using Azure Active Directory and OpenID Connect. -------- Previously updated : 09/05/2019-------# Authorize access to web applications using OpenID Connect and Azure Active Directory ---[OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html) is a simple identity layer built on top of the OAuth 2.0 protocol. OAuth 2.0 defines mechanisms to obtain and use [**access tokens**](../develop/access-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) to access protected resources, but does not define a standard method to provide identity information. OpenID Connect implements authentication as an extension to the OAuth 2.0 authorization process. It provides information about the end user in the form of an [`id_token`](../develop/id-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) that verifies the identity of the user and provides basic profile information about the user. --OpenID Connect is our recommendation if you are building a web application that is hosted on a server and accessed via a browser. --## Register your application with your AD tenant -First, register your application with your Azure Active Directory (Azure AD) tenant. This will give you an Application ID for your application, as well as enable it to receive tokens. --1. Sign in to the [Azure portal](https://portal.azure.com). - -1. 
Choose your Azure AD tenant by selecting your account in the top right corner of the page, followed by selecting the **Switch Directory** navigation and then selecting the appropriate tenant. - - Skip this step if you only have one Azure AD tenant under your account, or if you've already selected the appropriate Azure AD tenant. - -1. In the Azure portal, search for and select **Azure Active Directory**. - -1. In the **Azure Active Directory** left menu, select **App Registrations**, and then select **New registration**. - -1. Follow the prompts and create a new application. It doesn't matter if it is a web application or a public client (mobile & desktop) application for this tutorial, but if you'd like specific examples for web applications or public client applications, check out our [quickstarts](v1-overview.md). - - - **Name** is the application name and describes your application to end users. - - Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. - - Provide the **Redirect URI**. For web applications, this is the base URL of your app where users can sign in. For example, `http://localhost:12345`. For public client (mobile & desktop), Azure AD uses it to return token responses. Enter a value specific to your application. For example, `http://MyFirstAADApp`. - <!--TODO: add once App ID URI is configurable: The **App ID URI** is a unique identifier for your application. The convention is to use `https://<tenant-domain>/<app-name>`, e.g. `https://contoso.onmicrosoft.com/my-first-aad-app`--> - -1. Once you've completed registration, Azure AD will assign your application a unique client identifier (the **Application ID**). You need this value in the next sections, so copy it from the application page. - -1. To find your application in the Azure portal, select **App registrations**, and then select **View all applications**. 
--## Authentication flow using OpenID Connect --The most basic sign-in flow contains the following steps - each of them is described in detail below. --![OpenId Connect Authentication Flow](./media/v1-protocols-openid-connect-code/active-directory-oauth-code-flow-web-app.png) --## OpenID Connect metadata document --OpenID Connect describes a metadata document that contains most of the information required for an app to perform sign-in. This includes information such as the URLs to use and the location of the service's public signing keys. The OpenID Connect metadata document can be found at: --``` -https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration -``` -The metadata is a simple JavaScript Object Notation (JSON) document. See the following snippet for an example. The snippet's contents are fully described in the [OpenID Connect specification](https://openid.net). Note that providing a tenant ID rather than `common` in place of {tenant} above will result in tenant-specific URIs in the JSON object returned. --``` -{ - "authorization_endpoint": "https://login.microsoftonline.com/{tenant}/oauth2/authorize", - "token_endpoint": "https://login.microsoftonline.com/{tenant}/oauth2/token", - "token_endpoint_auth_methods_supported": - [ - "client_secret_post", - "private_key_jwt", - "client_secret_basic" - ], - "jwks_uri": "https://login.microsoftonline.com/common/discovery/keys", - "userinfo_endpoint": "https://login.microsoftonline.com/{tenant}/openid/userinfo", - ... -} -``` --If your app has custom signing keys as a result of using the [claims-mapping](../develop/saml-claims-customization.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) feature, you must append an `appid` query parameter containing the app ID in order to get a `jwks_uri` pointing to your app's signing key information. 
For example: `https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` contains a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`. --## Send the sign-in request --When your web application needs to authenticate the user, it must direct the user to the `/authorize` endpoint. This request is similar to the first leg of the [OAuth 2.0 Authorization Code Flow](v1-protocols-oauth-code.md), with a few important distinctions: --* The request must include the scope `openid` in the `scope` parameter. -* The `response_type` parameter must include `id_token`. -* The request must include the `nonce` parameter. --So a sample request would look like this: --``` -// Line breaks for legibility only --GET https://login.microsoftonline.com/{tenant}/oauth2/authorize? -client_id=6731de76-14a6-49ae-97bc-6eba6914391e -&response_type=id_token -&redirect_uri=http%3A%2F%2Flocalhost%3a12345 -&response_mode=form_post -&scope=openid -&state=12345 -&nonce=7362CAEA-9CA5-4B43-9BA3-34D7C303EBA7 -``` --| Parameter | Type | Description | -| | | | -| tenant |required |The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are tenant identifiers, for example, `8eaef023-2b34-4da1-9baa-8bc8c9d6a490` or `contoso.onmicrosoft.com` or `common` for tenant-independent tokens | -| client_id |required |The Application ID assigned to your app when you registered it with Azure AD. You can find this in the Azure portal. Click **Azure Active Directory**, click **App Registrations**, choose the application and locate the Application ID on the application page. | -| response_type |required |Must include `id_token` for OpenID Connect sign-in. It may also include other response_types, such as `code` or `token`. 
| -| scope | recommended | The OpenID Connect specification requires the scope `openid`, which translates to the "Sign you in" permission in the consent UI. This and other OIDC scopes are ignored on the v1.0 endpoint, but including it is still a best practice for standards-compliant clients. | -| nonce |required |A value included in the request, generated by the app, that is included in the resulting `id_token` as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string or GUID that can be used to identify the origin of the request. | -| redirect_uri | recommended |The redirect_uri of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect_uris you registered in the portal, except it must be URL-encoded. If missing, the user agent will be sent back to one of the redirect URIs registered for the app, at random. The maximum length is 255 bytes. | -| response_mode |optional |Specifies the method that should be used to send the resulting authorization_code back to your app. Supported values are `form_post` for *HTTP form post* and `fragment` for *URL fragment*. For web applications, we recommend using `response_mode=form_post` to ensure the most secure transfer of tokens to your application. The default for any flow including an id_token is `fragment`.| -| state |recommended |A value included in the request that is returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state is also used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | -| prompt |optional |Indicates the type of user interaction that is required. 
Currently, the only valid values are 'login', 'none', and 'consent'. `prompt=login` forces the user to enter their credentials on that request, negating single sign-on. `prompt=none` is the opposite - it ensures that the user is not presented with any interactive prompt whatsoever. If the request cannot be completed silently via single sign-on, the endpoint returns an error. `prompt=consent` triggers the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. | -| login_hint |optional |Can be used to pre-fill the username/email address field of the sign-in page for the user, if you know their username ahead of time. Often apps use this parameter during reauthentication, having already extracted the username from a previous sign-in using the `preferred_username` claim. | --At this point, the user is asked to enter their credentials and complete the authentication. --### Sample response --A sample response, sent to the `redirect_uri` specified in the sign-in request after the user has authenticated, could look like this: --``` -POST / HTTP/1.1 -Host: localhost:12345 -Content-Type: application/x-www-form-urlencoded --id_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik1uQ19WWmNB...&state=12345 -``` --| Parameter | Description | -| | | -| id_token |The `id_token` that the app requested. You can use the `id_token` to verify the user's identity and begin a session with the user. | -| state |A value included in the request that is also returned in the token response. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state is also used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. 
| --### Error response --Error responses may also be sent to the `redirect_uri` so the app can handle them appropriately: --``` -POST / HTTP/1.1 -Host: localhost:12345 -Content-Type: application/x-www-form-urlencoded --error=access_denied&error_description=the+user+canceled+the+authentication -``` --| Parameter | Description | -| | | -| error |An error code string that can be used to classify types of errors that occur, and can be used to react to errors. | -| error_description |A specific error message that can help a developer identify the root cause of an authentication error. | --#### Error codes for authorization endpoint errors --The following table describes the various error codes that can be returned in the `error` parameter of the error response. --| Error Code | Description | Client Action | -| | | | -| invalid_request |Protocol error, such as a missing required parameter. |Fix and resubmit the request. This is a development error, and is typically caught during initial testing. | -| unauthorized_client |The client application is not permitted to request an authorization code. |This usually occurs when the client application is not registered in Azure AD or is not added to the user's Azure AD tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. | -| access_denied |The resource owner denied consent. |The client application can notify the user that it cannot proceed unless the user consents. | -| unsupported_response_type |The authorization server does not support the response type in the request. |Fix and resubmit the request. This is a development error, and is typically caught during initial testing. | -| server_error |The server encountered an unexpected error. |Retry the request. These errors can result from temporary conditions. The client application might explain to the user that its response is delayed due to a temporary error. 
| -| temporarily_unavailable |The server is temporarily too busy to handle the request. |Retry the request. The client application might explain to the user that its response is delayed due to a temporary condition. | -| invalid_resource |The target resource is invalid because it does not exist, Azure AD cannot find it, or it is not correctly configured. |This indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. | --## Validate the id_token --Just receiving an `id_token` is not sufficient to authenticate the user; you must validate the signature and verify the claims in the `id_token` per your app's requirements. The Azure AD endpoint uses JSON Web Tokens (JWTs) and public key cryptography to sign tokens and verify that they are valid. --You can choose to validate the `id_token` in client code, but a common practice is to send the `id_token` to a backend server and perform the validation there. --You may also wish to validate additional claims depending on your scenario. Some common validations include: --* Ensuring the user/organization has signed up for the app. -* Ensuring the user has proper authorization/privileges using the `wids` or `roles` claims. -* Ensuring a certain strength of authentication has occurred, such as multi-factor authentication. --Once you have validated the `id_token`, you can begin a session with the user and use the claims in the `id_token` to obtain information about the user in your app. This information can be used for display, records, personalization, etc. For more information about `id_tokens` and claims, read [AAD id_tokens](../develop/id-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). 
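To make the claim checks concrete, here is a minimal sketch that base64url-decodes an id_token's payload and verifies a few claims (nonce, audience, expiry). It deliberately skips signature verification, which is required before trusting any of these values; in production, validate the signature against the keys published at the metadata document's `jwks_uri`, typically with a maintained JWT library. The helper names and the locally built sample token are hypothetical:

```python
import base64
import json
import time

def decode_segment(segment: str) -> dict:
    # base64url segments may lack '=' padding; restore it before decoding.
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def check_id_token_claims(id_token: str, expected_nonce: str, expected_client_id: str) -> dict:
    # WARNING: claim checks only. Signature verification is intentionally
    # omitted from this sketch and must be done before trusting the claims.
    claims = decode_segment(id_token.split(".")[1])
    assert claims["nonce"] == expected_nonce, "possible token replay"
    assert claims["aud"] == expected_client_id, "token was issued for another app"
    assert claims["exp"] > time.time(), "token has expired"
    return claims

# A hypothetical token built locally for illustration only:
payload = {"nonce": "7362CAEA", "aud": "6731de76-14a6-49ae-97bc-6eba6914391e",
           "exp": int(time.time()) + 3600}
segment = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
fake_token = f"eyJhbGciOiJub25lIn0.{segment}."
claims = check_id_token_claims(fake_token, "7362CAEA", "6731de76-14a6-49ae-97bc-6eba6914391e")
```

A real app would additionally check `iss` and `tid`, plus any scenario-specific claims such as `wids` or `roles`.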
--## Send a sign-out request --When you wish to sign the user out of the app, it is not sufficient to clear your app's cookies or otherwise end the session with the user. You must also redirect the user to the `end_session_endpoint` for sign-out. If you fail to do so, the user will be able to reauthenticate to your app without entering their credentials again, because they will have a valid single sign-on session with the Azure AD endpoint. --You can simply redirect the user to the `end_session_endpoint` listed in the OpenID Connect metadata document: --``` -GET https://login.microsoftonline.com/common/oauth2/logout? -post_logout_redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F --``` --| Parameter | Type | Description | -| | | | -| post_logout_redirect_uri |recommended |The URL that the user should be redirected to after successful sign out. This URL must match one of the redirect URIs registered for your application in the app registration portal. If *post_logout_redirect_uri* is not included, the user is shown a generic message. | --## Single sign-out --When you redirect the user to the `end_session_endpoint`, Azure AD clears the user's session from the browser. However, the user may still be signed in to other applications that use Azure AD for authentication. To enable those applications to sign the user out simultaneously, Azure AD sends an HTTP GET request to the registered `LogoutUrl` of all the applications that the user is currently signed in to. Applications must respond to this request by clearing any session that identifies the user and returning a `200` response. If you wish to support single sign out in your application, you must implement such a `LogoutUrl` in your application's code. You can set the `LogoutUrl` from the Azure portal: --1. Sign in to the [Azure portal](https://portal.azure.com). -2. Choose your Active Directory by clicking on your account in the top right corner of the page. -3. 
From the left-hand navigation panel, choose **Azure Active Directory**, then choose **App registrations** and select your application. -4. Click on **Settings**, then **Properties** and find the **Logout URL** text box. --## Token Acquisition -Many web apps need to not only sign the user in, but also access a web service on behalf of that user using OAuth. This scenario combines OpenID Connect for user authentication while simultaneously acquiring an `authorization_code` that can be used to get `access_tokens` using the [OAuth Authorization Code Flow](v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token). --## Get Access Tokens -To acquire access tokens, you need to modify the sign-in request from above: --``` -// Line breaks for legibility only --GET https://login.microsoftonline.com/{tenant}/oauth2/authorize? -client_id=6731de76-14a6-49ae-97bc-6eba6914391e // Your registered Application ID -&response_type=id_token+code -&redirect_uri=http%3A%2F%2Flocalhost%3a12345 // Your registered Redirect Uri, url encoded -&response_mode=form_post // 'form_post' or 'fragment' -&scope=openid -&resource=https%3A%2F%2Fservice.contoso.com%2F // The identifier of the protected resource (web API) that your application needs access to -&state=12345 // Any value, provided by your app -&nonce=678910 // Any value, provided by your app -``` --By including permission scopes in the request and using `response_type=code+id_token`, the `authorize` endpoint ensures that the user has consented to the permissions indicated in the `scope` query parameter, and returns an authorization code to your app to exchange for an access token. 
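The request above can be assembled programmatically. A sketch using only the Python standard library (the parameter values are the placeholders from this article, and `build_authorize_url` is a hypothetical helper name):

```python
from urllib.parse import urlencode

def build_authorize_url(tenant: str, client_id: str, redirect_uri: str,
                        resource: str, state: str, nonce: str) -> str:
    # response_type=id_token+code signs the user in and also returns an
    # authorization code to exchange for an access token.
    params = {
        "client_id": client_id,
        "response_type": "id_token code",  # urlencode renders the space as '+'
        "redirect_uri": redirect_uri,
        "response_mode": "form_post",
        "scope": "openid",
        "resource": resource,
        "state": state,
        "nonce": nonce,
    }
    return f"https://login.microsoftonline.com/{tenant}/oauth2/authorize?{urlencode(params)}"

url = build_authorize_url("common", "6731de76-14a6-49ae-97bc-6eba6914391e",
                          "http://localhost:12345", "https://service.contoso.com/",
                          "12345", "678910")
assert "response_type=id_token+code" in url
```

`urlencode` handles the percent-encoding of `redirect_uri` and `resource` that the raw HTTP sample shows by hand.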
--### Successful response --A successful response, sent to the `redirect_uri` using `response_mode=form_post`, looks like: --``` -POST /myapp/ HTTP/1.1 -Host: localhost -Content-Type: application/x-www-form-urlencoded --id_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik1uQ19WWmNB...&code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...&state=12345 -``` --| Parameter | Description | -| | | -| id_token |The `id_token` that the app requested. You can use the `id_token` to verify the user's identity and begin a session with the user. | -| code |The authorization_code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization_codes are short lived, and typically expire after about 10 minutes. | -| state |If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. | --### Error response --Error responses may also be sent to the `redirect_uri` so the app can handle them appropriately: --``` -POST /myapp/ HTTP/1.1 -Host: localhost -Content-Type: application/x-www-form-urlencoded --error=access_denied&error_description=the+user+canceled+the+authentication -``` --| Parameter | Description | -| | | -| error |An error code string that can be used to classify types of errors that occur, and can be used to react to errors. | -| error_description |A specific error message that can help a developer identify the root cause of an authentication error. | --For a description of the possible error codes and their recommended client action, see [Error codes for authorization endpoint errors](#error-codes-for-authorization-endpoint-errors). 
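Handling the `form_post` response amounts to parsing the form body, comparing `state` against the value stored in the user's session, and branching on `error`. A minimal sketch (the field values and `handle_auth_response` helper are illustrative placeholders):

```python
from urllib.parse import parse_qs

def handle_auth_response(body: str, expected_state: str) -> dict:
    # Parse the application/x-www-form-urlencoded POST body.
    fields = {k: v[0] for k, v in parse_qs(body).items()}
    if "error" in fields:
        raise RuntimeError(f"{fields['error']}: {fields.get('error_description', '')}")
    # Reject responses whose state does not match the value the app sent:
    # this is the CSRF protection the state parameter exists for.
    if fields.get("state") != expected_state:
        raise RuntimeError("state mismatch - possible cross-site request forgery")
    return fields  # contains id_token and code on success

fields = handle_auth_response("id_token=eyJ0eXAi...&code=AwABAAAA...&state=12345", "12345")
assert fields["code"].startswith("AwABAAAA")
```

The returned `id_token` still has to be validated as described above before the app starts a session.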
--Once you've gotten an authorization `code` and an `id_token`, you can sign the user in and get [access tokens](../develop/access-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json) on their behalf. To sign the user in, you must validate the `id_token` exactly as described above. To get access tokens, you can follow the steps described in the "Use the authorization code to request an access token" section of our [OAuth code flow documentation](v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token). --## Next steps --* Learn more about the [access tokens](../develop/access-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). -* Learn more about the [`id_token` and claims](../develop/id-tokens.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). |
active-directory | Videos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/videos.md | - Title: Azure ADAL to MSAL migration videos -description: Videos that help you migrate from the Azure Active Directory developer platform to the Microsoft identity platform ------- Previously updated : 02/12/2020-------# Azure Active Directory developer platform videos --Learn about the new Microsoft identity platform and how to migrate to it from the Azure Active Directory (Azure AD) developer platform. The videos are typically 1-2 minutes long. --## Migrate from v1.0 to v2.0 --**Learn about migrating to the latest version of the Microsoft identity platform** -- :::column::: - New Microsoft identity platform overview - :::column-end::: - :::column::: - > [!VIDEO https://www.youtube.com/embed/bNlcFuIo3r8] - :::column-end::: - :::column::: - Introduction to the MSAL libraries - :::column-end::: - :::column::: - > [!VIDEO https://www.youtube.com/embed/apbbx2n4tnU] - :::column-end::: - :::column::: - Endpoints and the benefits of moving to v2.0 - :::column-end::: - :::column::: - > [!VIDEO https://www.youtube.com/embed/qpdC45tZYDg] - :::column-end::: - :::column::: - Migrating your ADAL codebase to MSAL - :::column-end::: - :::column::: - > [!VIDEO https://www.youtube.com/embed/xgL_z9yCnrE] - :::column-end::: - :::column::: - Why migrate from ADAL to MSAL - :::column-end::: - :::column::: - > [!VIDEO https://www.youtube.com/embed/qpdC45tZYDg] - :::column-end::: - :::column::: - Advantages of MSAL over ADAL - :::column-end::: - :::column::: - > [!VIDEO https://www.youtube.com/embed/q-TDszj2O-4] - :::column-end::: --## Next steps --Learn about the new [Microsoft identity platform](../develop/index.yml) |
active-directory | Web Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/web-api.md | - Title: Web API apps in Azure Active Directory -description: Describes what web API applications are and the basics on protocol flow, registration, and token expiration for this app type. -------- Previously updated : 09/24/2018-------# Web API ---Web API apps are web applications that need to get resources from a web API. In this scenario, there are two identity types that the web application can use to authenticate and call the web API: --- **Application identity** - This scenario uses OAuth 2.0 client credentials grant to authenticate as the application and access the web API. When using an application identity, the web API can only detect that the web application is calling it, as the web API does not receive any information about the user. If the application receives information about the user, it will be sent via the application protocol, and it is not signed by Azure AD. The web API trusts that the web application authenticated the user. For this reason, this pattern is called a trusted subsystem.-- **Delegated user identity** - This scenario can be accomplished in two ways: OpenID Connect, and OAuth 2.0 authorization code grant with a confidential client. The web application obtains an access token for the user, which proves to the web API that the user successfully authenticated to the web application and that the web application was able to obtain a delegated user identity to call the web API. This access token is sent in the request to the web API, which authorizes the user and returns the desired resource.--Both the application identity and delegated user identity types are discussed in the flow below. The key difference between them is that the delegated user identity must first acquire an authorization code before the user can sign in and gain access to the web API. 
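For the application identity case, the protocol flow below boils down to a single POST of the application's credential to the v1.0 token endpoint. The sketch below (not from the original article; the tenant, IDs, and `<secret>` placeholder are illustrative) only constructs the request rather than sending it:

```python
from urllib.parse import urlencode

def client_credentials_request(tenant, client_id, client_secret, resource):
    """Build the endpoint and POST body for the OAuth 2.0 client
    credentials grant against the Azure AD v1.0 token endpoint.

    The web API sees only the application's identity (trusted subsystem);
    the resulting token carries no information about a user.
    """
    token_endpoint = ("https://login.microsoftonline.com/"
                      + tenant + "/oauth2/token")
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,          # the web application's Application ID
        "client_secret": client_secret,  # the application's credential
        "resource": resource,            # App ID URI of the target web API
    })
    return token_endpoint, body

endpoint, body = client_credentials_request(
    "contoso.onmicrosoft.com",
    "6731de76-14a6-49ae-97bc-6eba6914391e",
    "<secret>",
    "https://service.contoso.com/")
```

The JWT access token in the response is then sent to the web API as a `Bearer` token in the `Authorization` header.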
--## Diagram --![Web Application to Web API diagram](./media/authentication-scenarios/web-app-to-web-api.png) --## Protocol flow --### Application identity with OAuth 2.0 client credentials grant --1. A user is signed in to Azure AD in the web application (see the **Web apps** section for more info). -1. The web application needs to acquire an access token so that it can authenticate to the web API and retrieve the desired resource. It makes a request to Azure AD's token endpoint, providing the credential, application ID, and web API's application ID URI. -1. Azure AD authenticates the application and returns a JWT access token that is used to call the web API. -1. Over HTTPS, the web application uses the returned JWT access token to add the JWT string with a "Bearer" designation in the Authorization header of the request to the web API. The web API then validates the JWT token, and if validation is successful, returns the desired resource. --### Delegated user identity with OpenID Connect --1. A user is signed in to a web application using Azure AD (see the Web Browser to Web Application section above). If the user of the web application has not yet consented to allowing the web application to call the web API on its behalf, the user will need to consent. The application will display the permissions it requires, and if any of these are administrator-level permissions, a normal user in the directory will not be able to consent. This consent process only applies to multi-tenant applications, not single tenant applications, as the application will already have the necessary permissions. When the user signed in, the web application received an ID token with information about the user, as well as an authorization code. -1. 
Using the authorization code issued by Azure AD, the web application sends a request to Azure AD's token endpoint that includes the authorization code, details about the client application (Application ID and redirect URI), and the desired resource (application ID URI for the web API). -1. The authorization code and information about the web application and web API are validated by Azure AD. Upon successful validation, Azure AD returns two tokens: a JWT access token and a JWT refresh token. -1. Over HTTPS, the web application uses the returned JWT access token to add the JWT string with a "Bearer" designation in the Authorization header of the request to the web API. The web API then validates the JWT token, and if validation is successful, returns the desired resource. --### Delegated user identity with OAuth 2.0 authorization code grant --1. A user is already signed in to a web application, whose authentication mechanism is independent of Azure AD. -1. The web application requires an authorization code to acquire an access token, so it issues a request through the browser to Azure AD's authorization endpoint, providing the Application ID and redirect URI for the web application after successful authentication. The user signs in to Azure AD. -1. If the user of the web application has not yet consented to allowing the web application to call the web API on its behalf, the user will need to consent. The application will display the permissions it requires, and if any of these are administrator-level permissions, a normal user in the directory will not be able to consent. This consent applies to both single and multi-tenant applications. In the single-tenant case, an admin can perform admin consent to consent on behalf of their users. This can be done using the `Grant Permissions` button in the [Azure portal](https://portal.azure.com). -1. After the user has consented, the web application receives the authorization code that it needs to acquire an access token. -1. 
Using the authorization code issued by Azure AD, the web application sends a request to Azure AD's token endpoint that includes the authorization code, details about the client application (Application ID and redirect URI), and the desired resource (application ID URI for the web API). -1. The authorization code and information about the web application and web API are validated by Azure AD. Upon successful validation, Azure AD returns two tokens: a JWT access token and a JWT refresh token. -1. Over HTTPS, the web application uses the returned JWT access token to add the JWT string with a "Bearer" designation in the Authorization header of the request to the web API. The web API then validates the JWT token, and if validation is successful, returns the desired resource. --## Code samples --See the code samples for web application to web API scenarios: [Web Application to Web API](sample-v1-code.md#web-applications-signing-in-users-calling-microsoft-graph-or-a-web-api-with-the-users-identity). Check back frequently, as new samples are added often. --## App registration --To register an application with the Azure AD v1.0 endpoint, see [Register an app](../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --* Single tenant - For both the application identity and delegated user identity cases, the web application and the web API must be registered in the same directory in Azure AD. The web API can be configured to expose a set of permissions, which are used to limit the web application's access to its resources. If a delegated user identity type is being used, the web application needs to select the desired permissions from the **Permissions to other applications** drop-down menu in the Azure portal. This step is not required if the application identity type is being used. 
-* Multi-tenant - First, the web application is configured to indicate the permissions it requires to be functional. This list of required permissions is shown in a dialog when a user or administrator in the destination directory gives consent to the application, which makes it available to their organization. Some applications only require user-level permissions, which any user in the organization can consent to. Other applications require administrator-level permissions, which a user in the organization cannot consent to. Only a directory administrator can give consent to applications that require this level of permissions. When the user or administrator consents, the web application and the web API are both registered in their directory. --## Token expiration --When the web application uses its authorization code to get a JWT access token, it also receives a JWT refresh token. When the access token expires, the refresh token can be used to reauthenticate the user without requiring them to sign in again. This refresh token is then used to authenticate the user, which results in a new access token and refresh token. --## Next steps --- Learn more about other [Application types and scenarios](app-types.md)-- Learn about the Azure AD [authentication basics](v1-authentication-scenarios.md) |
active-directory | Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/web-app.md | - Title: Web apps in Azure Active Directory -description: Describes what web apps are and the basics on protocol flow, registration, and token expiration for this app type. -------- Previously updated : 09/24/2018-------# Web apps ---Web apps are applications that authenticate a user in a web browser to a web application. In this scenario, the web application directs the user's browser to sign them in to Azure AD. Azure AD returns a sign-in response through the user's browser, which contains claims about the user in a security token. This scenario supports sign-on using the OpenID Connect, SAML 2.0, and WS-Federation protocols. --## Diagram --![Authentication flow for browser to web application](./media/authentication-scenarios/web-browser-to-web-api.png) --## Protocol flow --1. When a user visits the application and needs to sign in, they are redirected via a sign-in request to the authentication endpoint in Azure AD. -1. The user signs in on the sign-in page. -1. If authentication is successful, Azure AD creates an authentication token and returns a sign-in response to the application's Reply URL that was configured in the Azure portal. For a production application, this Reply URL should be HTTPS. The returned token includes claims about the user and Azure AD that are required by the application to validate the token. -1. The application validates the token by using a public signing key and issuer information available at the federation metadata document for Azure AD. After the application validates the token, it starts a new session with the user. This session allows the user to access the application until it expires. --## Code samples --See the code samples for web browser to web application scenarios. Check back frequently, as new samples are added often. 
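Step 4 of the protocol flow above validates the returned token before starting a session. The stdlib sketch below (not from the original article) illustrates only the claim checks on a locally minted demo token; a production app must also verify the RS256 signature using the public signing keys published in the federation metadata document, which this sketch omits.

```python
import base64
import json
import time

def decode_claims(jwt: str) -> dict:
    """Decode the claims (payload) segment of a JWT: header.payload.signature."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

def check_claims(claims: dict, expected_audience: str, expected_issuer: str):
    """Basic claim checks performed during token validation."""
    assert claims["aud"] == expected_audience, "token issued for another app"
    assert claims["iss"] == expected_issuer, "unexpected issuer"
    assert claims["exp"] > time.time(), "token has expired"

# Demo token minted locally for illustration; a real Azure AD token is
# RS256-signed and its signature must be verified as well.
demo = ".".join(
    base64.urlsafe_b64encode(json.dumps(part).encode()).decode().rstrip("=")
    for part in (
        {"alg": "RS256", "typ": "JWT"},
        {"aud": "6731de76-14a6-49ae-97bc-6eba6914391e",
         "iss": "https://sts.windows.net/<tenant-id>/",
         "exp": int(time.time()) + 3600},
    )) + ".fake-signature"

check_claims(decode_claims(demo),
             "6731de76-14a6-49ae-97bc-6eba6914391e",
             "https://sts.windows.net/<tenant-id>/")
```

Once the checks (and the omitted signature verification) pass, the app can start the user's session, which lasts until the token's lifetime expires.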
--## App registration --To register a web app, see [Register an app](../develop/quickstart-register-app.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json). --* Single tenant - If you are building an application just for your organization, it must be registered in your company's directory by using the Azure portal. -* Multi-tenant - If you are building an application that can be used by users outside your organization, it must be registered in your company's directory, but also must be registered in each organization's directory that will be using the application. To make your application available in their directory, you can include a sign-up process for your customers that enables them to consent to your application. When they sign up for your application, they will be presented with a dialog that shows the permissions the application requires, and then the option to consent. Depending on the required permissions, an administrator in the other organization may be required to give consent. When the user or administrator consents, the application is registered in their directory. --## Token expiration --The user's session expires when the lifetime of the token issued by Azure AD expires. Your application can shorten this time period if desired, such as signing out users based on a period of inactivity. When the session expires, the user will be prompted to sign in again. --## Next steps --* Learn more about other [Application types and scenarios](app-types.md) -* Learn about the Azure AD [authentication basics](v1-authentication-scenarios.md) |
ai-services | Concept Add On Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md | monikerRange: 'doc-intel-3.1.0' > [!NOTE] >-> Add-on capabilities for Document Intelligence Studio are available with the Read and Layout models starting with the `2023-02-28-preview` and later releases. +> Add-on capabilities for Document Intelligence Studio are available with the Read and Layout models starting with the `2023-07-31 (GA)` and later releases. > > Add-on capabilities are available within all models except for the [Business card model](concept-business-card.md). -Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. The following add-on capabilities are available for`2023-02-28-preview` and later releases: +Document Intelligence supports more sophisticated analysis capabilities. These optional features can be enabled and disabled depending on the scenario of the document extraction. The following add-on capabilities are available for `2023-07-31 (GA)` and later releases: ++Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. The following add-on capabilities are available for `2023-07-31 (GA)` and later releases: * [`ocr.highResolution`](#high-resolution-extraction) |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md | Azure AI Document Intelligence is a cloud-based [Azure AI service](../../ai-serv | ✔️ [**Document analysis models**](#document-analysis-models) | ✔️ [**Prebuilt models**](#prebuilt-models) | ✔️ [**Custom models**](#custom-model-overview) | -### Document analysis models +## Document analysis models Document analysis models enable text extraction from forms and documents and return structured business-ready content ready for your organization's action, use, or progress. Document analysis models enable text extraction from forms and documents and ret :::column-end::: :::row-end::: -### Prebuilt models +## Prebuilt models Prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. Prebuilt models enable you to add intelligent document processing to your apps a :::column-end::: :::row-end::: -### Custom models +## Custom models Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. Standalone custom models can be combined to create composed models. You can use Document Intelligence to automate document processing in application > [!div class="nextstepaction"] > [Return to custom model types](#custom-models) -### Add-on capabilities +## Add-on capabilities ++Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. 
The following add-on capabilities are available for `2023-07-31 (GA)` and later releases: ++* [`ocr.highResolution`](concept-add-on-capabilities.md#high-resolution-extraction) ++* [`ocr.formula`](concept-add-on-capabilities.md#formula-extraction) ++* [`ocr.font`](concept-add-on-capabilities.md#font-property-extraction) ++* [`ocr.barcode`](concept-add-on-capabilities.md#barcode-property-extraction) :::moniker-end |
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | There is an [upload limit](../quotas-limits.md), and there are some caveats abou > [!TIP] > For documents and datasets with long text, you should use the available [data preparation script](https://go.microsoft.com/fwlink/?linkid=2244395). The script chunks data so that your response with the service will be more accurate. This script also supports scanned PDF files and images. -There are three different sources of data that you can use with Azure OpenAI on your data. +There are two different sources of data that you can use with Azure OpenAI on your data. * Blobs in an Azure storage container that you provide * Local files uploaded using the Azure OpenAI Studio |
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | If you wish to restrict traffic between workloads in the cluster, we recommend u ## Maximum pods per node -You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool. The default for Azure CNI Overlay is 30. The maximum value you can specify in Azure CNI Overlay is 250, and the minimum value is 10. The maximum pods per node value configured during creation of a node pool applies to the nodes in that node pool only. +You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool. The default for Azure CNI Overlay is 250. The maximum value you can specify in Azure CNI Overlay is 250, and the minimum value is 10. The maximum pods per node value configured during creation of a node pool applies to the nodes in that node pool only. ## Choosing a network model to use |
aks | Configure Kubenet Dual Stack | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md | This article shows you how to use dual-stack networking with an AKS cluster. For ## Prerequisites * All prerequisites from [configure kubenet networking](configure-kubenet.md) apply.-* AKS dual-stack clusters require Kubernetes version v1.21.2 or greater. v1.22.2 or greater is recommended to take advantage of the [out-of-tree cloud controller manager][aks-out-of-tree], which is the default on v1.22 and up. +* AKS dual-stack clusters require Kubernetes version v1.21.2 or greater. v1.22.2 or greater is recommended. * If using Azure Resource Manager templates, schema version 2021-10-01 is required. ## Overview of dual-stack networking in Kubernetes |
aks | Deploy Marketplace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md | Included among these solutions are Kubernetes application-based container offers - Deploy the application on your AKS cluster. - Monitor usage and billing information. - ## Limitations This feature is currently supported only in the following regions: Verify the deployment by using the following command to list the extensions that az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ``` +++ ## Manage the offer lifecycle You can view the extension instance from the cluster by using the following comm az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ``` +++ ## Monitor billing and usage information Select an application, then select the uninstall button to remove the extension az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ``` +++ ## Troubleshooting If you experience issues, see the [troubleshooting checklist for failed deployme - Learn more about [exploring and analyzing costs][billing]. - Learn more about [deploying a Kubernetes application programmatically using Azure CLI](/azure/aks/deploy-application-az-cli)+- Learn more about [deploying a Kubernetes application through an ARM template](/azure/aks/deploy-application-template) <!-- LINKS --> [azure-marketplace]: /marketplace/azure-marketplace-overview+ [cluster-extensions]: ./cluster-extensions.md+ [billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md-[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer ++[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer ++ |
aks | Image Cleaner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md | -It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images may contain vulnerabilities, which may create security issues. To remove security risks in your clusters, you can clean these unreferenced images. Manually cleaning images can be time intensive. Image Cleaner performs automatic image identification and removal, which mitigates the risk of stale images and reduces the time required to clean them up. +It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images might contain vulnerabilities, which might create security issues. To remove security risks in your clusters, you can clean these unreferenced images. Manually cleaning images can be time intensive. Image Cleaner performs automatic image identification and removal, which mitigates the risk of stale images and reduces the time required to clean them up. > [!NOTE] > Image Cleaner is a feature based on [Eraser](https://eraser-dev.github.io/eraser). Image Cleaner doesn't yet support Windows node pools or AKS virtual nodes. ## How Image Cleaner works -When you enable Image Cleaner, it deploys an `eraser-controller-manager` pod, which generates an `ImageList` CRD. The eraser pods running on each node clean up any unreferenced and vulnerable images according to the `ImageList`. A [trivy][trivy] scan helps determine vulnerability and flags images with a classification of `LOW`, `MEDIUM`, `HIGH`, or `CRITICAL`. 
Image Cleaner automatically generates an updated `ImageList` based on a set time interval and can also be supplied manually. Once Image Cleaner generates an `ImageList`, it removes all images in the list from node VMs. +After you enable Image Cleaner, a controller manager pod named `eraser-controller-manager` is deployed to your cluster. --## Configuration options With Image Cleaner, you can choose between manual and automatic mode and the following configuration options: +## Configuration options + |Name|Description|Required| |-|--|--| |`--enable-image-cleaner`|Enable the Image Cleaner feature for an AKS cluster|Yes, unless disable is specified| |`--disable-image-cleaner`|Disable the Image Cleaner feature for an AKS cluster|Yes, unless enable is specified| |`--image-cleaner-interval-hours`|This parameter determines the interval time (in hours) Image Cleaner uses to run. The default value for Azure CLI is one week, the minimum value is 24 hours and the maximum is three months.|Not required for Azure CLI, required for ARM template or other clients| +### Automatic mode +Once `eraser-controller-manager` is deployed, ++ - it starts the first cleanup immediately and creates a worker pod per node, named like `eraser-aks-xxxxx` + - inside each worker pod, there are 3 containers: + - collector: collects unused images + - trivy-scanner: uses [trivy](https://github.com/aquasecurity/trivy) to scan images for vulnerabilities + - remover: removes unused images that have vulnerabilities + - after cleanup, the worker pod is deleted, and the next run is scheduled after the `--image-cleaner-interval-hours` interval you set ++### Manual mode ++You can also trigger cleanup manually by defining an `ImageList` CRD object. The `eraser-controller-manager` then creates a worker pod per node to perform the manual removal. ++++ > [!NOTE] > After disabling Image Cleaner, the old configuration still exists. 
This means if you enable the feature again without explicitly passing configuration, the existing value is used instead of the default. With Image Cleaner, you can choose between manual and automatic mode and the fol * Enable Image Cleaner on a new AKS cluster using the [`az aks create`][az-aks-create] command with the `--enable-image-cleaner` parameter. ```azurecli-interactive- az aks create -g myResourceGroup -n myManagedCluster \ + az aks create \ + --resource-group myResourceGroup \ + --name myManagedCluster \ --enable-image-cleaner ``` With Image Cleaner, you can choose between manual and automatic mode and the fol * Enable Image Cleaner on an existing AKS cluster using the [`az aks update`][az-aks-update] command. ```azurecli-interactive- az aks update -g myResourceGroup -n myManagedCluster \ + az aks update \ + --resource-group myResourceGroup \ + --name myManagedCluster \ --enable-image-cleaner ``` With Image Cleaner, you can choose between manual and automatic mode and the fol * Update the Image Cleaner interval on a new or existing AKS cluster using the `--image-cleaner-interval-hours` parameter. ```azurecli-interactive- # Update the interval on a new cluster - az aks create -g myResourceGroup -n myManagedCluster \ + # Create a new cluster with specifying the interval + az aks create \ + --resource-group myResourceGroup \ + --name myManagedCluster \ --enable-image-cleaner \ --image-cleaner-interval-hours 48+ # Update the interval on an existing cluster- az aks update -g myResourceGroup -n myManagedCluster \ + az aks update \ + --resource-group myResourceGroup \ + --name myManagedCluster \ + --enable-image-cleaner \ --image-cleaner-interval-hours 48 ``` -After you enable the feature, the `eraser-controller-manager-xxx` pod and `collector-aks-xxx` pod are deployed. 
The `eraser-aks-xxx` pod contains *three* containers: -- - **Scanner container**: Performs vulnerability image scans - - **Collector container**: Collects nonrunning and unused images - - **Remover container**: Removes these images from cluster nodes --Image Cleaner generates an `ImageList` containing nonrunning and vulnerable images at the desired interval based on your configuration. Image Cleaner automatically removes these images from cluster nodes. - ## Manually remove images using Image Cleaner -1. Create an `ImageList` using the following example YAML named `image-list.yml`. +* Example to manually remove image `docker.io/library/alpine:3.7.3` if it is unused. - ```yml - apiVersion: eraser.sh/v1alpha1 + ```bash + cat <<EOF | kubectl apply -f - + apiVersion: eraser.sh/v1 kind: ImageList metadata: name: imagelist spec: images: - docker.io/library/alpine:3.7.3- // You can also use "*" to specify all non-running images - ``` --2. Apply the `ImageList` to your cluster using the `kubectl apply` command. -- ```bash - kubectl apply -f image-list.yml + EOF ``` - Applying the `ImageList` triggers a job named `eraser-aks-xxx`, which causes Image Cleaner to remove the desired images from all nodes. Unlike the `eraser-aks-xxx` pod under autoclean with *three* containers, the eraser-pod here has only *one* container. - ## Image exclusion list Images specified in the exclusion list aren't removed from the cluster. Image Cleaner supports system and user-defined exclusion lists. It's not supported to edit the system exclusion list. Images specified in the exclusion list aren't removed from the cluster. Image Cl * Check the system exclusion list using the following `kubectl get` command. ```bash- kubectl get -n kube-system cm eraser-system-exclusion -o yaml + kubectl get -n kube-system configmap eraser-system-exclusion -o yaml ``` ### Create a user-defined exclusion list Images specified in the exclusion list aren't removed from the cluster. 
Image Cl kubectl label configmap excluded eraser.sh/exclude.list=true -n kube-system ``` -3. Verify the images are in the exclusion list using the following `kubectl logs` command. -- ```bash - kubectl logs -n kube-system <eraser-pod-name> - ``` --## Image Cleaner image logs --Deletion image logs are stored in `eraser-aks-nodepool-xxx` pods for manually deleted images and in `collector-aks-nodes-xxx` pods for automatically deleted images. --You can view these logs using the `kubectl logs <pod name> -n kubesystem` command. However, this command may return only the most recent logs, since older logs are routinely deleted. To view all logs, follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table. --1. Ensure Azure Monitoring is enabled on your cluster. For detailed steps, see [Enable Container Insights on AKS clusters](../azure-monitor/containers/container-insights-enable-aks.md#existing-aks-cluster). --2. Get the Log Analytics resource ID using the [`az aks show`][az-aks-show] command. -- ```azurecli - az aks show -g myResourceGroup -n myManagedCluster - ``` -- After a few minutes, the command returns JSON-formatted information about the solution, including the workspace resource ID. -- ```json - "addonProfiles": { - "omsagent": { - "config": { - "logAnalyticsWorkspaceResourceID": "/subscriptions/<WorkspaceSubscription>/resourceGroups/<DefaultWorkspaceRG>/providers/Microsoft.OperationalInsights/workspaces/<defaultWorkspaceName>" - }, - "enabled": true - } - } - ``` --3. In the Azure portal, search for the workspace resource ID, then select **Logs**. --4. Copy this query into the table, replacing `name` with either `eraser-aks-nodepool-xxx` (for manual mode) or `collector-aks-nodes-xxx` (for automatic mode). 
-- ```kusto - let startTimestamp = ago(1h); - KubePodInventory - | where TimeGenerated > startTimestamp - | project ContainerID, PodName=Name, Namespace - | where PodName contains "name" and Namespace startswith "kube-system" - | distinct ContainerID, PodName - | join - ( - ContainerLog - | where TimeGenerated > startTimestamp - ) - on ContainerID - // at this point before the next pipe, columns from both tables are available to be "projected". Due to both - // tables having a "Name" column, we assign an alias as PodName to one column which we actually want - | project TimeGenerated, PodName, LogEntry, LogEntrySource - | summarize by TimeGenerated, LogEntry - | order by TimeGenerated desc - ``` --5. Select **Run**. Any deleted image logs appear in the **Results** area. -- :::image type="content" source="media/image-cleaner/eraser-log-analytics.png" alt-text="Screenshot showing deleted image logs in the Azure portal." lightbox="media/image-cleaner/eraser-log-analytics.png"::: - ## Disable Image Cleaner * Disable Image Cleaner on your cluster using the [`az aks update`][az-aks-update] command with the `--disable-image-cleaner` parameter. ```azurecli-interactive- az aks update -g myResourceGroup -n myManagedCluster \ + az aks update \ + --resource-group myResourceGroup \ + --name myManagedCluster \ --disable-image-cleaner ``` +## FAQ ++### How can I check which eraser version is in use? +``` +kubectl get configmap -n kube-system eraser-manager-config | grep tag -C 3 +``` ++### Does Image Cleaner support other vulnerability scanners besides trivy-scanner? +No. ++### Can I specify vulnerability levels for images to clean? +Not currently. The default settings for vulnerability levels are: +- `LOW` +- `MEDIUM` +- `HIGH` +- `CRITICAL` ++These levels can't be customized. ++### How can I review which images were cleaned up by Image Cleaner? ++Image deletion logs are stored in the worker pod `eraser-aks-xxxxx`. ++- While the `eraser-aks-xxxxx` pod is running, you can run the following commands to view deletion logs.
+```bash +kubectl logs -n kube-system <worker-pod-name> -c collector +kubectl logs -n kube-system <worker-pod-name> -c trivy-scanner +kubectl logs -n kube-system <worker-pod-name> -c remover +``` ++- If the `eraser-aks-xxxxx` pod has been deleted, you can follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table to view historical pod logs. + 1. Ensure Azure Monitoring is enabled on your cluster. For detailed steps, see [Enable Container Insights on AKS clusters](../azure-monitor/containers/container-insights-enable-aks.md#existing-aks-cluster). ++ 2. Get the Log Analytics resource ID using the [`az aks show`][az-aks-show] command. ++ ```azurecli + az aks show -g myResourceGroup -n myManagedCluster + ``` ++ After a few minutes, the command returns JSON-formatted information about the solution, including the workspace resource ID. ++ ```json + "addonProfiles": { + "omsagent": { + "config": { + "logAnalyticsWorkspaceResourceID": "/subscriptions/<WorkspaceSubscription>/resourceGroups/<DefaultWorkspaceRG>/providers/Microsoft.OperationalInsights/workspaces/<defaultWorkspaceName>" + }, + "enabled": true + } + } + ``` ++ 3. In the Azure portal, search for the workspace resource ID, then select **Logs**. ++ 4. Copy this query into the query window, replacing `name` with `eraser-aks-xxxxx` (worker pod name). ++ ```kusto + let startTimestamp = ago(1h); + KubePodInventory + | where TimeGenerated > startTimestamp + | project ContainerID, PodName=Name, Namespace + | where PodName contains "name" and Namespace startswith "kube-system" + | distinct ContainerID, PodName + | join + ( + ContainerLog + | where TimeGenerated > startTimestamp + ) + on ContainerID + // at this point before the next pipe, columns from both tables are available to be "projected". Due to both
+ // tables having a "Name" column, we assign an alias as PodName to one column which we actually want + | project TimeGenerated, PodName, LogEntry, LogEntrySource + | summarize by TimeGenerated, LogEntry + | order by TimeGenerated desc + ``` ++ 5. Select **Run**. Any deleted image logs appear in the **Results** area. ++ :::image type="content" source="media/image-cleaner/eraser-log-analytics.png" alt-text="Screenshot showing deleted image logs in the Azure portal." lightbox="media/image-cleaner/eraser-log-analytics.png"::: + <!-- LINKS --> [azure-cli-install]: /cli/azure/install-azure-cli |
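The "Create a user-defined exclusion list" step earlier in this Image Cleaner article can be sketched as follows. Note this is a sketch only: the ConfigMap data key name and JSON shape are assumptions drawn from the upstream eraser project and may differ between versions.

```shell
# Write the exclusion list as JSON (key name and shape are assumed, not verified).
cat > excluded <<'EOF'
{"excluded": ["docker.io/library/alpine:3.7.3"]}
EOF

# Create and label the ConfigMap so Image Cleaner honors it (requires cluster access):
#   kubectl create configmap excluded --from-file=excluded -n kube-system
#   kubectl label configmap excluded eraser.sh/exclude.list=true -n kube-system
```

Images listed in the ConfigMap are then skipped by both manual and automatic cleanup runs.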
aks | Ingress Tls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md | The transport layer security (TLS) protocol uses certificates to provide securit You can bring your own certificates and integrate them with the Secrets Store CSI driver. Alternatively, you can use [cert-manager][cert-manager], which automatically generates and configures [Let's Encrypt][lets-encrypt] certificates. Two applications run in the AKS cluster, each of which is accessible over a single IP address. -> [!NOTE] +> [!IMPORTANT] +> Microsoft **_does not_** manage or support cert-manager, and isn't responsible for any issues stemming from its use. For issues with cert-manager, see the [cert-manager troubleshooting][cert-manager-troubleshooting] documentation. +> > There are two open source ingress controllers for Kubernetes based on Nginx: one is maintained by the Kubernetes community ([kubernetes/ingress-nginx][nginx-ingress]), and one is maintained by NGINX, Inc. ([nginxinc/kubernetes-ingress]). This article uses the *Kubernetes community ingress controller*. ## Before you begin You can also: [helm]: https://helm.sh/ [helm-cli]: ./kubernetes-helm.md [cert-manager]: https://github.com/jetstack/cert-manager+[cert-manager-troubleshooting]: https://cert-manager.io/docs/troubleshooting/ [cert-manager-certificates]: https://cert-manager.io/docs/concepts/certificate/ [ingress-shim]: https://cert-manager.io/docs/usage/ingress/ [cert-manager-cluster-issuer]: https://cert-manager.io/docs/concepts/issuer/ |
aks | Out Of Tree | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/out-of-tree.md | - Title: Enable Cloud Controller Manager (preview) on your Azure Kubernetes Service (AKS) cluster -description: Learn how to enable the Out of Tree cloud provider (preview) on your Azure Kubernetes Service (AKS) cluster. -- Previously updated : 06/19/2023----# Enable Cloud Controller Manager (preview) on your Azure Kubernetes Service (AKS) cluster --As a cloud provider, Microsoft Azure works closely with the Kubernetes community to support our infrastructure on behalf of users. --Previously, cloud provider integration with Kubernetes was *in-tree*, where any changes to cloud specific features would follow the standard Kubernetes release cycle. When issues were fixed or enhancements were rolled out, they would need to be within the Kubernetes community's release cycle. --The Kubernetes community is now adopting an ***out-of-tree*** model, where cloud providers control releases independently of the core Kubernetes release schedule through the [cloud-provider-azure][cloud-provider-azure] component. As part of this cloud-provider-azure component, we're also introducing a cloud-node-manager component, which is a component of the Kubernetes node lifecycle controller. A DaemonSet in the *kube-system* namespace deploys this component. --The Container Storage Interface (CSI) drivers are included by default in Kubernetes version 1.21 and higher. --> [!NOTE] -> When you enable the Cloud Controller Manager (preview) on your AKS cluster, it also enables the out-of-tree CSI drivers. --## Prerequisites --You must have the following resources installed: --* The Azure CLI. For more information, see [Install the Azure CLI][install-azure-cli]. -* Kubernetes version 1.20.x or higher. --## Install the aks-preview Azure CLI extension ---1. Install the aks-preview extension using the [`az extension add`][az-extension-add] command.
-- ```azurecli - az extension add --name aks-preview - ``` --2. Update to the latest version of the extension released using the [`az extension update`][az-extension-update] command. -- ```azurecli - az extension update --name aks-preview - ``` --## Register the 'EnableCloudControllerManager' feature flag --1. Register the `EnableCloudControllerManager` feature flag using the [`az feature register`][az-feature-register] command. -- ```azurecli-interactive - az feature register --namespace "Microsoft.ContainerService" --name "EnableCloudControllerManager" - ``` -- It takes a few minutes for the status to show *Registered*. --2. Verify the registration status using the [`az feature show`][az-feature-show] command. -- ```azurecli-interactive - az feature show --namespace "Microsoft.ContainerService" --name "EnableCloudControllerManager" - ``` --3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command. -- ```azurecli-interactive - az provider register --namespace Microsoft.ContainerService - ``` --## Create a new AKS cluster with Cloud Controller Manager --* Create a new AKS cluster with Cloud Controller Manager using the [`az aks create`][az-aks-create] command and include the parameter `EnableCloudControllerManager=True` as an `--aks-custom-header`. -- ```azurecli-interactive - az aks create -n aks -g myResourceGroup --aks-custom-headers EnableCloudControllerManager=True - ``` --## Upgrade an AKS cluster to Cloud Controller Manager on an existing cluster --* Upgrade an existing AKS cluster with Cloud Controller Manager using the [`az aks upgrade`][az-aks-upgrade] command and include the parameter `EnableCloudControllerManager=True` as an `--aks-custom-header`. 
-- ```azurecli-interactive - az aks upgrade -n aks -g myResourceGroup -k <version> --aks-custom-headers EnableCloudControllerManager=True - ``` --## Verify component deployment --* Verify the component deployment using the following `kubectl get po` command. -- ```azurecli-interactive - kubectl get po -n kube-system | grep cloud-node-manager - ``` --## Next steps --* For more information on CSI drivers, and the default behavior for Kubernetes versions higher than 1.21, review the [CSI documentation][csi-docs]. -* For more information on the Kubernetes community direction regarding out-of-tree providers, see the [community blog post][community-blog]. --<!-- LINKS - internal --> -[az-provider-register]: /cli/azure/provider#az-provider-register -[az-feature-register]: /cli/azure/feature#az-feature-register -[az-feature-show]: /cli/azure/feature#az-feature-show -[csi-docs]: csi-storage-drivers.md -[install-azure-cli]: /cli/azure/install-azure-cli -[az-extension-add]: /cli/azure/extension#az-extension-add -[az-extension-update]: /cli/azure/extension#az-extension-update -[az-aks-create]: /cli/azure/aks#az-aks-create -[az-aks-upgrade]: /cli/azure/aks#az-aks-upgrade --<!-- LINKS - External --> -[community-blog]: https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes -[cloud-provider-azure]: https://github.com/kubernetes-sigs/cloud-provider-azure |
api-management | Api Management Howto Setup Delegation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-setup-delegation.md | Recommended steps for creating a new delegation endpoint to implement on your si | Parameter | Description | | | -- |- | **operation** | Identifies the delegation request type. Valid product subscription requests options are: <ul><li>**Subscribe**: a request to subscribe the user to a given product with provided ID (see below).</li><li>**Unsubscribe**: a request to unsubscribe a user from a product.</li><li>**Renew**: a request to renew a subscription (for example, that may be expiring)</li></ul> | + | **operation** | Identifies the delegation request type. Valid product subscription requests options are: <ul><li>**Subscribe**: a request to subscribe the user to a given product with provided ID (see below).</li><li>**Unsubscribe**: a request to unsubscribe a user from a product</li></ul> | | **productId** | On *Subscribe*, the product ID that the user requested subscription. | | **userId** | On *Subscribe*, the requesting user's ID. |- | **subscriptionId** | On *Unsubscribe* and *Renew*, the product subscription ID. | + | **subscriptionId** | On *Unsubscribe*, the product subscription ID. | | **salt** | A special salt string used for computing a security hash. | | **sig** | A computed security hash used for comparison to your own computed hash. | Recommended steps for creating a new delegation endpoint to implement on your si HMAC(salt + '\n' + productId + '\n' + userId) ``` - For *Unsubscribe* or *Renew*: + For *Unsubscribe*: ``` HMAC(salt + '\n' + subscriptionId) ``` |
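The HMAC computations in the delegation row above can be sketched in shell, assuming HMAC-SHA512 over the newline-joined payload with the base64-decoded delegation validation key (the scheme API Management documents for delegation signatures); the key, salt, and identifiers below are placeholders.

```shell
# Placeholder key and identifiers; the real validation key comes from the
# developer portal delegation settings and is base64-encoded.
KEY_B64=$(printf '%s' 'secret-validation-key' | openssl base64 -A)
SALT='abc123'
PRODUCT_ID='starter'
USER_ID='1'

# Subscribe: HMAC(salt + '\n' + productId + '\n' + userId), base64-encoded like 'sig'.
SIG=$(printf '%s\n%s\n%s' "$SALT" "$PRODUCT_ID" "$USER_ID" \
  | openssl dgst -sha512 -hmac "$(printf '%s' "$KEY_B64" | openssl base64 -d -A)" -binary \
  | openssl base64 -A)
echo "$SIG"

# Unsubscribe would hash: printf '%s\n%s' "$SALT" "$SUBSCRIPTION_ID" instead.
```

Your delegation endpoint recomputes this value and compares it to the `sig` query parameter, rejecting the request on mismatch.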
app-service | Deploy Configure Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-configure-credentials.md | For [local Git deployment](deploy-local-git.md), you can also use the [az webapp ```azurecli-interactive az webapp deployment list-publishing-credentials --resource-group <group-name> --name <app-name> --query scmUri ```+Note that the returned Git remote URI doesn't contain `/<app-name>.git` at the end. When you add the remote URI, make sure to append `/<app-name>.git` to avoid an error 22 with `git-http-push`. Additionally, when using `git remote add ... ` via shells that use the dollar sign for variable interpolation (such as bash), escape any dollar signs (`\$`) in the username or password. Failure to escape this character can result in authentication errors. # [Azure PowerShell](#tab/powershell) |
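The two cautions above (append `/<app-name>.git` to the returned URI, and mind `$` interpolation in the shell) can be sketched as follows; the scmUri and app name are hypothetical placeholders.

```shell
# Hypothetical scmUri as returned by the CLI; single quotes keep the literal '$'
# in the embedded username intact (in double quotes you'd escape it as \$).
SCM_URI='https://$my-app@my-app.scm.azurewebsites.net'
APP_NAME='my-app'

# Append /<app-name>.git before adding the Git remote.
REMOTE_URL="${SCM_URI}/${APP_NAME}.git"
echo "$REMOTE_URL"
# git remote add azure "$REMOTE_URL"   # then: git push azure main
```

Because parameter expansion results aren't re-interpolated, the `$` in the username survives into the remote URL unchanged.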
app-service | Deploy Ftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ftp.md | Possible values for `--ftps-state` are `AllAllowed` (FTP and FTPS enabled), `Dis ## Troubleshoot FTP deployment -- [How can I troubleshoot FTP deployment?](#how-can-i-troubleshoot-ftp-deployment)-- [I'm not able to FTP and publish my code. How can I resolve the issue?](#im-not-able-to-ftp-and-publish-my-code-how-can-i-resolve-the-issue)-- [How can I connect to FTP in Azure App Service via passive mode?](#how-can-i-connect-to-ftp-in-azure-app-service-via-passive-mode)+ - [How can I troubleshoot FTP deployment?](#how-can-i-troubleshoot-ftp-deployment) + - [I'm not able to FTP and publish my code. How can I resolve the issue?](#im-not-able-to-ftp-and-publish-my-code-how-can-i-resolve-the-issue) + - [How can I connect to FTP in Azure App Service via passive mode?](#how-can-i-connect-to-ftp-in-azure-app-service-via-passive-mode) + - [Why is my connection failing when attempting to connect over FTPS using explicit encryption?](#why-is-my-connection-failing-when-attempting-to-connect-over-ftps-using-explicit-encryption) + - [How can I determine the method that was used to deploy my Azure App Service?](#how-can-i-determine-the-method-that-was-used-to-deploy-my-azure-app-service) #### How can I troubleshoot FTP deployment? Check that you've entered the correct [hostname](#get-ftps-endpoint) and [creden #### How can I connect to FTP in Azure App Service via passive mode? Azure App Service supports connecting via both Active and Passive mode. Passive mode is preferred because your deployment machines are usually behind a firewall (in the operating system or as part of a home or business network). See an [example from the WinSCP documentation](https://winscp.net/docs/ui_login_connection). -### How can I determine the method that was used to deploy my Azure App Service? 
+#### Why is my connection failing when attempting to connect over FTPS using explicit encryption? +FTPS can establish the TLS-secured connection in either Explicit or Implicit mode. + - If you connect in Implicit mode, the connection is established over port 990. + - If you connect in Explicit mode, the connection is established over port 21. +++The URL you use also affects whether the connection succeeds, and the expected form depends on the client application. +The portal shows the URL with an "ftps://" prefix, which you might need to change. + - If the URL starts with "ftp://", the connection is implied to be on port 21. + - If it starts with "ftps://", the connection is implied to be Implicit and on port 990. ++ Make sure not to mix the two, such as connecting to "ftps://" over port 21; the connection fails even if you intend to use Explicit encryption, because an Explicit connection starts as a plain FTP connection before the AUTH command. ++#### How can I determine the method that was used to deploy my Azure App Service? Suppose you take ownership of an app and want to find out how it was deployed so you can make and deploy changes. You can determine how an Azure App Service was deployed by checking the application settings. If the app was deployed using an external package URL, you will see the WEBSITE_RUN_FROM_PACKAGE setting in the application settings with a URL value. Or if it was deployed using zip deploy, you will see the WEBSITE_RUN_FROM_PACKAGE setting with a value of 1. If the app was deployed using Azure DevOps, you will see the deployment history in the Azure DevOps portal. If Azure Functions Core Tools was used, you will see the deployment history in the Azure portal. ## More resources |
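The scheme-to-port rules from the FTPS answer above can be captured in a small helper function; the host names are placeholders.

```shell
# Port a typical FTP client implies from the URL scheme.
implied_port() {
  case "$1" in
    ftps://*) echo 990 ;;  # Implicit TLS: secured from the first byte
    ftp://*)  echo 21  ;;  # plain FTP, or Explicit TLS upgraded via AUTH
    *) echo "unexpected scheme: $1" >&2; return 1 ;;
  esac
}

implied_port 'ftps://contoso.ftp.azurewebsites.windows.net/site/wwwroot'   # prints 990
implied_port 'ftp://contoso.ftp.azurewebsites.windows.net/site/wwwroot'    # prints 21
```

For Explicit encryption, keep the "ftp://" form and port 21 and let the client upgrade the session with AUTH TLS.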
azure-arc | Api Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/api-extended-security-updates.md | Title: Programmatically deploy and manage Azure Arc Extended Security Updates licenses description: Learn how to programmatically deploy and manage Azure Arc Extended Security Updates licenses for Windows Server 2012. Previously updated : 10/02/2023 Last updated : 10/23/2023 https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOUR "location": "SAME_REGION_AS_MACHINE", "properties": { "esuProfile": {- "assignedLicense": "" } } } |
azure-arc | License Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md | Title: License provisioning guidelines for Extended Security Updates for Windows Server 2012 description: Learn about license provisioning guidelines for Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 10/10/2023 Last updated : 10/24/2023 In all cases, you're required to attest to conformance with SA or SPLA. There is As you migrate and modernize your Windows Server 2012 and Windows 2012 R2 infrastructure through the end of 2023, you can utilize the flexibility of monthly billing with Windows Server 2012 ESUs enabled by Azure Arc for cost savings benefits. -As servers no longer require ESUs because they've been migrated to Azure, Azure VMware Solution (AVS), or Azure Stack HCI (where they're eligible for free ESUs), or updated to Windows Server 2016 or higher, you can modify the number of cores associated with a license or delete/deactivate licenses. You can also link the license to a new scope of additional servers. See [Programmatically deploy and manage Azure Arc Extended Security Updates licenses](api-extended-security-updates.md) to learn more. +As servers no longer require ESUs because they've been migrated to Azure, Azure VMware Solution (AVS), or Azure Stack HCI (where they're eligible for free ESUs), or updated to Windows Server 2016 or higher, you can modify the number of cores associated with a license or delete/deactivate licenses. You can also link the license to a new scope of additional servers. See [Programmatically deploy and manage Azure Arc Extended Security Updates licenses](api-extended-security-updates.md) to learn more. For information about no-cost ESUs through Azure Stack HCI, see [Free Extended Security Updates through Azure Stack HCI](/azure-stack/hci/manage/azure-benefits-esu?tabs=windows-server-2012).
> [!NOTE] > This process is not automatic; billing is tied to the activated licenses and you are responsible for modifying your provisioned licensing to take advantage of cost savings. |
azure-arc | Onboard Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-portal.md | Before you get started, be sure to review the [prerequisites](prerequisites.md) If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. > [!NOTE]-> Follow best security practices and avoid using an Azure account with Owner access to onboard servers. Instead, use an account that only has the Azure Connected Machine onboarding or Azure Connected Machine resource administrator role assignment. See [Azure Identity Management and access control security best practices](/azure/security/fundamentals/identity-management-best-practices) for more information. +> Follow best security practices and avoid using an Azure account with Owner access to onboard servers. Instead, use an account that only has the Azure Connected Machine onboarding or Azure Connected Machine resource administrator role assignment. See [Azure Identity Management and access control security best practices](/azure/security/fundamentals/identity-management-best-practices#use-role-based-access-control) for more information. > ## Generate the installation script from the Azure portal |
azure-arc | Troubleshoot Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md | Title: How to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 09/14/2023 Last updated : 10/24/2023 If you're unable to successfully link your Azure Arc-enabled server to an activa - **Operating system:** Only Azure Arc-enabled servers running the Windows Server 2012 and 2012 R2 operating system are eligible to enroll in Extended Security Updates. -- **Environment:** The connected machine should not be running on Azure Stack HCI, Azure VMware solution, or as an Azure virtual machine. In these scenarios, WS2012 ESUs are available for free.+- **Environment:** The connected machine should not be running on Azure Stack HCI, Azure VMware solution, or as an Azure virtual machine. In these scenarios, WS2012 ESUs are available for free. For information about no-cost ESUs through Azure Stack HCI, see [Free Extended Security Updates through Azure Stack HCI](/azure-stack/hci/manage/azure-benefits-esu?tabs=windows-server-2012). - **License properties:** Verify the license is activated and has been allocated sufficient physical or virtual cores to support the intended scope of servers. |
azure-arc | Administer Arc Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/administer-arc-vmware.md | -In this article, you'll learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview): +In this article, you learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview): - Upgrading the Azure Arc resource bridge (preview) - Updating the credentials Each of these operations requires either SSH key to the resource bridge VM or th Azure Arc-enabled VMware vSphere requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge will be released to include security and feature updates. > [!NOTE]-> To upgrade the Arc resource bridge VM to the latest version, you will need to perform the onboarding again with the **same resource IDs**. This will cause some downtime as operations performed through Arc during this time might fail. +> To upgrade the Arc resource bridge VM to the latest version, you need to perform the onboarding again with the **same resource IDs**. This will cause some downtime as operations performed through Arc during this time might fail. To upgrade to the latest version of the resource bridge, perform the following steps: az account set -s <subscription id> az arcappliance get-credentials -n <name of the appliance> -g <resource group name> az arcappliance update-infracredentials vmware --kubeconfig kubeconfig ```-For more details on the commands see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware). 
+For more details on the commands, see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware). Run the following command to update the credentials used by the VMware cluster extension on the resource bridge. This command can be run from anywhere with the `connectedvmware` CLI extension installed. |
azure-arc | Azure Arc Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md | + + Title: Azure Arc agent +description: Learn about Azure Arc agent + Last updated : 10/23/2023+++++++++# Azure Arc agent ++The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. ++## Agent components +++The Azure Connected Machine agent package contains several logical components bundled together: ++* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity. ++* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance. ++ Note the following behavior with Azure Policy [guest configuration](../../governance/machine-configuration/overview.md) for a disconnected machine: ++ * An Azure Policy assignment that targets disconnected machines is unaffected. + * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied. + * Assignments are deleted after 14 days and aren't reassigned to the machine after the 14-day period. ++* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Azure downloads extensions and copies them to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension installs to the path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension installs to `/var/lib/waagent/<extension>`. 
++>[!NOTE] +> The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines. ++## Agent resources ++The following information describes the directories and user accounts used by the Azure Connected Machine agent. ++### Windows agent installation details ++The Windows agent is distributed as a Windows Installer package (MSI). Download the Windows agent from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent). +Installing the Connected Machine agent for Windows applies the following system-wide configuration changes: ++* The installation process creates the following folders during setup. ++ | Directory | Description | + |--|-| + | %ProgramFiles%\AzureConnectedMachineAgent | azcmagent CLI and instance metadata service executables.| + | %ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.| + | %ProgramFiles%\AzureConnectedMachineAgent\GCArcService\GC | Guest configuration (policy) service executables.| + | %ProgramData%\AzureConnectedMachineAgent | Configuration, log and identity token files for azcmagent CLI and instance metadata service.| + | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| + | %SYSTEMDRIVE%\packages | Extension package executables. | ++* Installing the agent creates the following Windows services on the target machine.
++ | Service name | Display name | Process name | Description | + |--|--|--|-| + | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens | + | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. | + | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. | ++* Agent installation creates the following virtual service account. ++ | Virtual Account | Description | + ||-| + | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. | ++ > [!TIP] + > This account requires the *Log on as a service* right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you might need to adjust your Group Policy Object to grant the right to **NT SERVICE\\himds** or **NT SERVICE\\ALL SERVICES** to allow the agent to function. ++* Agent installation creates the following local security group. ++ | Security group name | Description | + ||-| + | Hybrid agent extension applications | Members of this security group can request Microsoft Entra tokens for the system-assigned managed identity | ++* Agent installation creates the following environmental variables ++ | Name | Default value | Description | + |||| + | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | + | IMDS_ENDPOINT | `http://localhost:40342` | ++* There are several log files available for troubleshooting, described in the following table. ++ | Log | Description | + |--|-| + | %ProgramData%\AzureConnectedMachineAgent\Log\himds.log | Records details of the heartbeat and identity agent component. 
| + | %ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log | Contains the output of the azcmagent tool commands. | + | %ProgramData%\GuestConfig\arc_policy_logs\gc_agent.log | Records details about the guest configuration (policy) agent component. | + | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). | + | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. | ++* The process creates the local security group **Hybrid agent extension applications**. ++* After uninstalling the agent, the following artifacts remain: ++ * %ProgramData%\AzureConnectedMachineAgent\Log + * %ProgramData%\AzureConnectedMachineAgent + * %ProgramData%\GuestConfig + * %SystemDrive%\packages ++### Linux agent installation details ++The preferred package format for the distribution (`.rpm` or `.deb`) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/) provides the Connected Machine agent for Linux. The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent. ++Installing, upgrading, and removing the Connected Machine agent isn't required after server restart. ++Installing the Connected Machine agent for Linux applies the following system-wide configuration changes. ++* Setup creates the following installation folders. ++ | Directory | Description | + |--|-| + | /opt/azcmagent/ | azcmagent CLI and instance metadata service executables. | + | /opt/GC_Ext/ | Extension service executables. | + | /opt/GC_Service/ | Guest configuration (policy) service executables. 
| + | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.| + | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.| ++* Installing the agent creates the following daemons. ++ | Service name | Display name | Process name | Description | + |--|--|--|-| + | himdsd.service | Azure Connected Machine Agent Service | himds | This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.| + | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. | + | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. | ++* There are several log files available for troubleshooting, described in the following table. ++ | Log | Description | + |--|-| + | /var/opt/azcmagent/log/himds.log | Records details of the heartbeat and identity agent component. | + | /var/opt/azcmagent/log/azcmagent.log | Contains the output of the azcmagent tool commands. | + | /var/lib/GuestConfig/arc_policy_logs | Records details about the guest configuration (policy) agent component. | + | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). | + | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. | ++* Agent installation creates the following environment variables, set in `/lib/systemd/system.conf.d/azcmagent.conf`. 
++ | Name | Default value | Description | + |||-| + | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | Endpoint used to request Microsoft Entra managed identity tokens. | + | IMDS_ENDPOINT | `http://localhost:40342` | Base endpoint of the local instance metadata service. | ++* After uninstalling the agent, the following artifacts remain: ++ * /var/opt/azcmagent + * /var/lib/GuestConfig ++## Agent resource governance ++The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent enforces the following resource governance limits: ++* The Guest Configuration agent can use up to 5% of the CPU to evaluate policies. +* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply: ++ | Extension type | Operating system | CPU limit | + | -- | - | | + | AzureMonitorLinuxAgent | Linux | 60% | + | AzureMonitorWindowsAgent | Windows | 100% | + | AzureSecurityLinuxAgent | Linux | 30% | + | LinuxOsUpdateExtension | Linux | 60% | + | MDE.Linux | Linux | 60% | + | MicrosoftDnsAgent | Windows | 100% | + | MicrosoftMonitoringAgent | Windows | 60% | + | OmsAgentForLinux | Linux | 60% | ++During normal operations, defined as the Azure Connected Machine agent being connected to Azure and not actively modifying an extension or evaluating a policy, you can expect the agent to consume the following system resources: ++| | Windows | Linux | +| | - | -- | +| **CPU usage (normalized to 1 core)** | 0.07% | 0.02% | +| **Memory usage** | 57 MB | 42 MB | ++The performance data above was gathered in April 2023 on virtual machines running Windows Server 2022 and Ubuntu 20.04. The actual agent performance and resource consumption vary based on the hardware and software configuration of your servers.
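The `IDENTITY_ENDPOINT` environment variable described earlier in this section is used to request managed identity tokens from the local agent. The sketch below (illustrative Python, not an official tool) shows two helpers for the documented challenge flow: a first GET with a `Metadata: true` header returns a 401 whose `Www-Authenticate` header names a key file, and a second GET with `Authorization: Basic <key file contents>` returns the token. The key-file path shown in the docstring is an example only.

```python
import os
from urllib.parse import urlencode

# Default from the table above; the IDENTITY_ENDPOINT env var takes precedence when set.
DEFAULT_IDENTITY_ENDPOINT = "http://localhost:40342/metadata/identity/oauth2/token"

def build_token_url(resource, api_version="2020-06-01"):
    """Build the GET URL for requesting a managed identity token from the
    local agent endpoint. The request must also carry a 'Metadata: true' header."""
    endpoint = os.environ.get("IDENTITY_ENDPOINT", DEFAULT_IDENTITY_ENDPOINT)
    return f"{endpoint}?{urlencode({'api-version': api_version, 'resource': resource})}"

def key_file_from_challenge(www_authenticate):
    """Extract the key-file path from the agent's 401 challenge header,
    for example 'Basic realm=/var/opt/azcmagent/tokens/<name>.key' (path is
    illustrative). Only callers allowed to read that file, such as members of
    the Hybrid agent extension applications group, can complete the flow."""
    scheme, _, realm = www_authenticate.partition(" ")
    if scheme != "Basic" or not realm.startswith("realm="):
        raise ValueError(f"unexpected challenge: {www_authenticate!r}")
    return realm[len("realm="):]
```

A second GET to the same URL with the `Authorization: Basic` header set to the key file's contents returns the token; the agent supplies the real key-file path in the challenge at runtime.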
++## Instance metadata ++Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers, specifically: ++* Operating system name, type, and version +* Computer name +* Computer manufacturer and model +* Computer fully qualified domain name (FQDN) +* Domain name (if joined to an Active Directory domain) +* Active Directory and DNS fully qualified domain name (FQDN) +* UUID (BIOS ID) +* Connected Machine agent heartbeat +* Connected Machine agent version +* Public key for managed identity +* Policy compliance status and details (if using guest configuration policies) +* SQL Server installed (Boolean value) +* Cluster resource ID (for Azure Stack HCI nodes) +* Hardware manufacturer +* Hardware model +* CPU family, socket, physical core and logical core counts +* Total physical memory +* Serial number +* SMBIOS asset tag +* Cloud provider +* Amazon Web Services (AWS) metadata, when running in AWS: + * Account ID + * Instance ID + * Region +* Google Cloud Platform (GCP) metadata, when running in GCP: + * Instance ID + * Image + * Machine type + * Project ID + * Project number + * Service accounts + * Zone ++The agent requests the following metadata information from Azure: ++* Resource location (region) +* Virtual machine ID +* Tags +* Microsoft Entra managed identity certificate +* Guest configuration policy assignments +* Extension requests - install, update, and delete. ++> [!NOTE] +> Azure Arc-enabled servers don't store/process customer data outside the region the customer deploys the service instance in. ++## Next steps ++- [Connect VMware vCenter Server to Azure Arc](quick-start-connect-vcenter-to-arc-using-script.md). +- [Install Arc agent at scale for your VMware VMs](enable-guest-management-at-scale.md). |
azure-arc | Azure Arc Resource Bridge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-resource-bridge.md | + + Title: Azure Arc resource bridge (preview) +description: Learn about Azure Arc resource bridge (preview) + Last updated : 10/23/2023+++++++#Customer intent: As an IT infrastructure admin, I want to know about the Azure Arc resource bridge (preview) that facilitates the Arc connection between vCenter server and Azure. +++# Azure Arc resource bridge (preview) ++Azure Arc resource bridge (preview) is a Microsoft-managed product that is part of the core Azure Arc platform. It's designed to host other Azure Arc services. The resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware [(Arc-enabled VMware vSphere)](/azure/azure-arc/vmware-vsphere), and System Center Virtual Machine Manager (SCVMM) [(Arc-enabled SCVMM preview)](/azure/azure-arc/system-center-virtual-machine-manager). ++Azure Arc resource bridge (preview) is a Kubernetes management cluster installed on the customer's on-premises infrastructure. The resource bridge is provided with credentials for the infrastructure control plane, which allow it to apply guest management services to the on-premises resources. Arc resource bridge enables projection of on-premises resources as ARM resources and management from ARM as **Arc-enabled** Azure resources. ++Arc resource bridge delivers the following benefits: ++- Enables VM self-servicing from Azure without having to create and manage a Kubernetes cluster. ++- Fully supported by Microsoft, including updates to core components. ++- Supports deployment to any private cloud hosted on Hyper-V or VMware from the Azure portal or using the Azure Command Line Interface (CLI).
++## Overview ++Azure Arc resource bridge (preview) hosts other components such as [custom locations](custom-locations.md), cluster extensions, and other Azure Arc agents to deliver functionality for the private cloud infrastructures it supports. ++This complex system is composed of three layers: ++- The base layer, which represents the resource bridge and the Arc agents. ++- The platform layer, which includes the custom location and cluster extension. ++- The solution layer for each service supported by Arc resource bridge (that is, the different types of VMs). +++Azure Arc resource bridge (preview) can host other Azure services or solutions running on-premises. For this preview, there are two objects hosted on the Arc resource bridge (preview): ++- Cluster extension: The Azure service deployed to run on-premises. For the preview release, it supports three cluster extensions: + - Azure Arc-enabled VMware + - Azure Arc-enabled Azure Stack HCI + - Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) ++- Custom locations: A deployment target where you can create Azure resources. It maps to different resources for different Azure services. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Arc-enabled Azure Stack HCI, it maps to an HCI cluster instance. ++Custom locations and cluster extension are both Azure resources, which are linked to the Azure Arc resource bridge (preview) resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, which routes the *create action* to the mapped vCenter, Azure Stack HCI cluster, or SCVMM. ++Some resources are unique to the infrastructure. For example, vCenter has resource pool, network, and template resources. During VM creation, these resources need to be specified. With Azure Stack HCI, you just need to select the custom location, network, and template to create a VM.
++To summarize, the Azure resources are projections of the resources running in your on-premises private cloud. If the on-premises resource isn't healthy, it can affect the health of the related resources projected in Azure. For example, if the resource bridge is deleted by accident, all the resources projected in Azure by the resource bridge are affected. The VMs in your on-premises private cloud aren't affected, as they continue to run on vCenter, but you won't be able to start or stop them from Azure. We don't recommend directly managing or modifying the resource bridge using any on-premises applications. ++## Benefits of Azure Arc resource bridge (preview) ++Through Azure Arc resource bridge (preview), you can represent a subset of your vCenter resources in Azure to enable self-service by registering resource pools, networks, and VM templates. Integration with Azure allows you to manage access to your vCenter resources in Azure to maintain a secure environment. You can also perform various operations on the VMware virtual machines that are enabled by Arc-enabled VMware vSphere: ++- Start, stop, and restart a virtual machine +- Control access and add Azure tags +- Add, remove, and update network interfaces +- Add, remove, and update disks and update VM size (CPU cores and memory) +- Enable guest management +- Install extensions ++## Regional resiliency ++While Azure has many redundancy features at every level of failure, if a service-impacting event occurs, the current release of Azure Arc resource bridge (preview) doesn't support cross-region failover or other resiliency capabilities. If the service becomes unavailable, the on-premises VMs continue to operate unaffected. +Management from Azure is unavailable during that service outage. ++## Supported versions ++Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month. Delays can occur that could push the release date further out.
Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub. To learn more about upgrade options, visit [Upgrade Arc resource bridge](../resource-bridge/upgrade.md). ++## Next steps ++[Learn more about Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere). |
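The n-3 support window described under *Supported versions* can be sketched as a small check (illustrative Python; the version numbers are hypothetical, and the real list lives in the release notes on GitHub):

```python
def supported_versions(releases):
    """Given release versions ordered oldest to newest, return the supported
    window: the latest release n plus n-1, n-2, and n-3 (up to four versions)."""
    return releases[-4:]

def is_supported(version, releases):
    """True if the installed version falls inside the n-3 support window."""
    return version in supported_versions(releases)

# Hypothetical release history, oldest first.
releases = ["1.0.12", "1.0.13", "1.0.14", "1.0.15", "1.0.16"]
```

With this history, `is_supported("1.0.13", releases)` is true (it's n-3), while `"1.0.12"` falls outside the window and would need an upgrade.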
azure-arc | Browse And Enable Vcenter Resources In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md | Title: Enable your VMware vCenter resources in Azure description: Learn how to browse your vCenter inventory and represent a subset of your VMware vCenter resources in Azure to enable self-service. Previously updated : 08/18/2023 Last updated : 11/06/2023 |
azure-arc | Custom Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/custom-locations.md | + + Title: Custom locations for VMware vSphere +description: Learn about custom locations for VMware vSphere + Last updated : 10/23/2023+++++++#Customer intent: As an IT infrastructure admin, I want to know about the concepts behind Azure Arc. +++# Custom locations for VMware vSphere ++As an extension of the Azure location construct, a *custom location* provides a reference to a deployment target that administrators can set up and users can point to when creating an Azure resource. It abstracts the backend infrastructure details from application developers, database admin users, or other users in the organization. ++## Custom location for on-premises vCenter server ++Since the custom location is an Azure Resource Manager resource that supports [Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview), an administrator or operator can determine which users have access to create resource instances on the compute, storage, networking, and other vCenter resources to deploy and manage VMs. ++For example, an IT administrator could create a custom location **Contoso-vCenter** representing the vCenter server in their organization's data center. The operator can then assign Azure RBAC permissions to application developers on this custom location so that they can deploy virtual machines. The developers can then deploy these virtual machines without having to know the details of the vCenter management server. ++Custom locations create the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the VMware resources. ++## Next steps ++[Connect VMware vCenter Server to Azure Arc](./quick-start-connect-vcenter-to-arc-using-script.md). |
azure-arc | Enable Guest Management At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-guest-management-at-scale.md | -In this article, you will learn how to install Arc agents at scale for VMware VMs and use Azure management capabilities. +In this article, you learn how to install Arc agents at scale for VMware VMs and use Azure management capabilities. ## Prerequisites |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md | Title: What is Azure Arc-enabled VMware vSphere (preview)? -description: Azure Arc-enabled VMware vSphere (preview) extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. +description: Azure Arc-enabled VMware vSphere extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Last updated 08/21/2023 All guest OS-based capabilities are provided by enabling guest management (insta The easiest way to think of this is as follows: -- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. Since Arc-enabled servers also support bare-metal machines, there may, in fact, not even be a host hypervisor in some cases.+- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. Since Arc-enabled servers also support bare-metal machines, there might not even be a host hypervisor in some cases. - Azure Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. Azure Arc-enabled VMware vSphere also provides guest operating system management; in fact, it uses the same components as Azure Arc-enabled servers.
-You have the flexibility to start with either option, and incorporate the other one later without any disruption. With both the options, you will enjoy the same consistent experience. +You have the flexibility to start with either option, and incorporate the other one later without any disruption. With both options, you enjoy the same consistent experience. ## Supported VMware vSphere versions Azure Arc-enabled VMware vSphere (preview) currently works with vCenter Server versions 6.7, 7, and 8. > [!NOTE]-> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend you to use Arc-enabled VMware vSphere with it at this point. +> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend that you use Arc-enabled VMware vSphere with it at this point. ## Supported regions |
azure-arc | Perform Vm Ops Through Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/perform-vm-ops-through-azure.md | Title: Perform VM operations on VMware VMs through Azure description: Learn how to view the operations that you can do on VMware virtual machines and install the Log Analytics agent. Previously updated : 10/17/2023 Last updated : 08/18/2023 If you no longer need the VM, you can delete it. ## Next steps -[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md). +[Tutorial - Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md). |
azure-arc | Quick Start Connect Vcenter To Arc Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md | Title: Connect VMware vCenter Server to Azure Arc by using the helper script -description: In this quickstart, you'll learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc. +description: In this quickstart, you learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc. Last updated 09/05/2022-To start using the Azure Arc-enabled VMware vSphere (preview) features, you need to connect your VMware vCenter Server instance to Azure Arc. This quickstart shows you how to connect your VMware vCenter Server instance to Azure Arc by using a helper script. +To start using the Azure Arc-enabled VMware vSphere features, you need to connect your VMware vCenter Server instance to Azure Arc. This quickstart shows you how to connect your VMware vCenter Server instance to Azure Arc by using a helper script. First, the script deploys a virtual appliance called [Azure Arc resource bridge (preview)](../resource-bridge/overview.md) in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between vCenter Server and Azure Arc. > [!IMPORTANT]-> This article describes a way to connect a generic vCenter Server to Azure Arc. If you are trying to enable Arc for Azure VMware Solution (AVS) private cloud, please follow this guide instead - [Deploy Arc for Azure VMware Solution](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md). With the Arc for AVS onboarding process you will need to provide fewer inputs and Arc capabilities are better integrated into the AVS private cloud portal experience. +> This article describes a way to connect a generic vCenter Server to Azure Arc. 
If you're trying to enable Arc for Azure VMware Solution (AVS) private cloud, follow this guide instead: [Deploy Arc for Azure VMware Solution](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md). With the Arc for AVS onboarding process, you provide fewer inputs, and Arc capabilities are better integrated into the AVS private cloud portal experience. ## Prerequisites First, the script deploys a virtual appliance called [Azure Arc resource bridge - A datastore with a minimum of 100 GB of free disk space available through the resource pool or cluster. > [!NOTE]-> Azure Arc-enabled VMware vSphere (preview) supports vCenter Server instances with a maximum of 9,500 virtual machines (VMs). If your vCenter Server instance has more than 9,500 VMs, we don't recommend that you use Azure Arc-enabled VMware vSphere with it at this point. +> Azure Arc-enabled VMware vSphere supports vCenter Server instances with a maximum of 9,500 virtual machines (VMs). If your vCenter Server instance has more than 9,500 VMs, we don't recommend that you use Azure Arc-enabled VMware vSphere with it at this point. ### vSphere account You need a vSphere account that can: - Read all inventory. - Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc. -This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere (preview) and the deployment of the Azure Arc resource bridge (preview) VM. +This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere and the deployment of the Azure Arc resource bridge (preview) VM. ### Workstation You need a Windows or Linux machine that can access both your vCenter Server ins 12. Select **Next: Download and run script**. -13. If your subscription is not registered with all the required resource providers, a **Register** button will appear. Select the button before you proceed to the next step. +13.
If your subscription isn't registered with all the required resource providers, a **Register** button appears. Select the button before you proceed to the next step. :::image type="content" source="media/register-arc-vmware-providers.png" alt-text="Screenshot that shows the button to register required resource providers during vCenter onboarding to Azure Arc."::: A typical onboarding that uses the script takes 30 to 60 minutes. During the pro | **vCenter password** | Enter the password for the vSphere account. | | **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge VM should be deployed. | | **Network selection** | Select the name of the virtual network or segment to which the Azure Arc resource bridge VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |-| **Static IP / DHCP** | For deploying Azure Arc resource bridge, the preferred configuration is to use Static IP. Enter **n** to select static IP configuration. While not recommended, if you have DHCP server in your network and want to use it instead, enter **y**. If you are using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). If you use DHCP, the cluster configuration IP address still needs to be a static IP address. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc resource bridge VM for DNS resolution. Azure Arc resource bridge VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4.
**Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the Azure Arc resource bridge VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br>| -| **Control Plane IP address** | Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane always requires a static IP address. Provide an IP address that meets the following requirements: <ul> <li>The IP address must have internet access. </li><li>The IP address must be within the subnet defined by IP address prefix.</li> <li> If you are using static IP address option for resource bridge VM IP address, the control plane IP address must be outside of the IP address range provided for the VM (Start range IP - End range IP).</li> <li> If there is a DHCP service on the network, the IP address must be outside of DHCP range. </li> </ul>| +| **Static IP / DHCP** | For deploying Azure Arc resource bridge, the preferred configuration is to use Static IP. Enter **n** to select static IP configuration. While not recommended, if you have DHCP server in your network and want to use it instead, enter **y**. If you're using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). If you use DHCP, the cluster configuration IP address still needs to be a static IP address. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc resource bridge VM for DNS resolution. 
Azure Arc resource bridge VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the Azure Arc resource bridge VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br>| +| **Control Plane IP address** | Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane always requires a static IP address. Provide an IP address that meets the following requirements: <br> - The IP address must have internet access. <br> - The IP address must be within the subnet defined by IP address prefix. <br> - If you're using static IP address option for resource bridge VM IP address, the control plane IP address must be outside of the IP address range provided for the VM (Start range IP - End range IP). <br> - If there's a DHCP service on the network, the IP address must be outside of DHCP range.| | **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge VM will be deployed. | | **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge VM. | | **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. | |
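The control plane IP rules in the table above (inside the subnet prefix, outside the Start/End range reserved for the resource bridge VM, and outside any DHCP range) can be sketched as a validation check. This is an illustrative Python helper, not part of the onboarding script, and the addresses used below are hypothetical; the internet-access requirement can't be verified locally.

```python
import ipaddress

def validate_control_plane_ip(cp_ip, prefix, start_ip, end_ip, dhcp_range=None):
    """Check a candidate control plane IP against the onboarding rules:
    it must be within the subnet prefix, outside the Start/End range
    reserved for the resource bridge VM, and outside any DHCP range."""
    cp = ipaddress.ip_address(cp_ip)
    if cp not in ipaddress.ip_network(prefix):
        return False  # must be within the subnet defined by the IP address prefix
    if ipaddress.ip_address(start_ip) <= cp <= ipaddress.ip_address(end_ip):
        return False  # must be outside the VM IP range (Start range IP - End range IP)
    if dhcp_range:
        lo, hi = (ipaddress.ip_address(x) for x in dhcp_range)
        if lo <= cp <= hi:
            return False  # must be outside the DHCP range, if one exists
    return True
```

For example, with prefix `192.168.0.0/24` and a VM range of `192.168.0.10` to `192.168.0.11`, the address `192.168.0.50` passes, while `192.168.0.10` fails because it collides with the reserved VM range.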
azure-arc | Quick Start Create A Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md | Title: Create a virtual machine on VMware vCenter using Azure Arc -description: In this quickstart, you'll learn how to create a virtual machine on VMware vCenter using Azure Arc - Previously updated : 08/18/2023+description: In this quickstart, you learn how to create a virtual machine on VMware vCenter using Azure Arc + Last updated : 10/23/2023 # Customer intent: As a self-service user, I want to provision a VM using vCenter resources through Azure so that I can deploy my code -# Quickstart: Create a virtual machine on VMware vCenter using Azure Arc +# Create a virtual machine on VMware vCenter using Azure Arc -Once your administrator has connected a VMware vCenter to Azure, represented VMware vCenter resources in Azure, and provided you permissions on those resources, you'll create a virtual machine. +Once your administrator has connected a VMware vCenter to Azure, represented VMware vCenter resources in Azure, and provided you with permissions on those resources, you'll create a virtual machine. ## Prerequisites Once your administrator has connected a VMware vCenter to Azure, represented VMw :::image type="content" source="media/browse-virtual-machines.png" alt-text="Screenshot showing the unified browse experience for Azure and Arc virtual machines."::: -2. Click **Add** and then select **Azure Arc machine** from the drop-down. +2. Select **Add** and then select **Azure Arc machine** from the drop-down. :::image type="content" source="media/create-azure-arc-virtual-machine-1.png" alt-text="Screenshot showing the Basic tab for creating an Azure Arc virtual machine."::: Once your administrator has connected a VMware vCenter to Azure, represented VMw 10. (Optional) Add tags to the VM resource if necessary. -11. Select **Create** after reviewing all the properties. 
It should take a few minutes to provision the VM. +11. Select **Create** after reviewing all the properties. It should take a few minutes to create the VM. ## Next steps -- [Perform operations on VMware VMs in Azure](perform-vm-ops-through-azure.md)+[Perform operations on VMware VMs in Azure](perform-vm-ops-through-azure.md). |
azure-arc | Recover From Resource Bridge Deletion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion.md | -In this article, you'll learn how to recover the Azure Arc resource bridge (preview) connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail. +In this article, you learn how to recover the Azure Arc resource bridge (preview) connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail. ## Recovering the Arc resource bridge in case of VM deletion |
azure-arc | Remove Vcenter From Arc Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md | description: This article explains the steps to cleanly remove your VMware vCent Previously updated : 10/17/2023 Last updated : 03/28/2022 To run the deboarding script, follow these steps: - **AVSId**: The Azure resource ID of the AVS instance. Specifying vCenterId or AVSId is mandatory. -- **ApplianceConfigFilePath (optional)**: Path to kubeconfig, output from deploy command. Providing applianceconfigfilepath will also delete the appliance VM running on the vCenter.+- **ApplianceConfigFilePath (optional)**: Path to kubeconfig, output from deploy command. Providing applianceconfigfilepath also deletes the appliance VM running on the vCenter. -- **Force**: Using the Force flag will delete all the Azure resources without reaching resource bridge. Use this option if resource bridge VM isn't in running state. +- **Force**: Using the Force flag deletes all the Azure resources without reaching the resource bridge. Use this option if the resource bridge VM isn't in a running state. ### Remove VMware vSphere resources from Azure manually If you aren't using the deboarding script, follow these steps to remove the VMwa 6. Select **Remove from Azure**. - This action will only remove these resource representations from Azure. The resources will continue to remain in your vCenter. + This action only removes these resource representations from Azure. The resources remain in your vCenter. 7. Do the steps 4, 5, and 6 for **Resources pools/clusters/hosts**, **Templates**, **Networks**, and **Datastores** |
azure-arc | Setup And Manage Self Service Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/setup-and-manage-self-service-access.md | Title: Set up and manage self-service access to VMware resources through Azure RBAC -description: Learn how to manage access to your on-premises VMware resources through Azure Role-Based Access Control (RBAC). +description: Learn how to manage access to your on-premises VMware resources through Azure role-based access control (Azure RBAC). Last updated 08/21/2023 -Once your VMware vSphere resources are enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them with access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure Role-based Access Control (RBAC) and allow your teams to deploy and manage VMs. +Once your VMware vSphere resources are enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them with access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure role-based access control (RBAC) and allow your teams to deploy and manage VMs. ## Prerequisites You must assign this role on individual resource pool (or cluster or host), netw 3. Navigate to the **Resourcepools/clusters/hosts** in **vCenter inventory** section in the table of contents. -3. Find and select resourcepool (or cluster or host). This will take you to the Arc resource representing the resourcepool. +3. Find and select resourcepool (or cluster or host). This takes you to the Arc resource representing the resourcepool. 4. Select **Access control (IAM)** in the table of contents. You must assign this role on individual resource pool (or cluster or host), netw If you have organized your vSphere resources into a resource group, you can provide the same role at the resource group scope. 
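The portal steps above can also be scripted. A minimal Azure CLI sketch of the same role assignment, assuming the built-in **Azure Arc VMware Private Cloud User** role; the subscription ID, resource group, resource pool name, and user are placeholder values:

```shell
# Assign the built-in role at the scope of an Arc-enabled vSphere resource pool.
# All IDs and names below are placeholders - substitute your own values.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Arc VMware Private Cloud User" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg/providers/Microsoft.ConnectedVMwarevSphere/resourcepools/pool-1"
```

The same command with a resource-group scope (`--scope ".../resourceGroups/contoso-rg"`) grants the role across all Arc-enabled vSphere resources in that group.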
-Your users now have access to VMware vSphere cloud resources. However, your users will also need to have permissions on the subscription/resource group where they would like to deploy and manage VMs. +Your users now have access to VMware vSphere cloud resources. However, your users also need to have permissions on the subscription/resource group where they would like to deploy and manage VMs. ## Provide access to subscription or resource group where VMs will be deployed The **Azure Arc VMware VM Contributor** role is a built-in role that provides pe ## Next steps -[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md). +[Tutorial - Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md). |
azure-arc | Support Matrix For Arc Enabled Vmware Vsphere | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md | The following requirements must be met in order to use Azure Arc-enabled VMware Azure Arc-enabled VMware vSphere (preview) works with vCenter Server versions 6.7, 7 and 8. > [!NOTE]-> Azure Arc-enabled VMware vSphere (preview) currently supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point. +> Azure Arc-enabled VMware vSphere (preview) currently supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it's not recommended to use Arc-enabled VMware vSphere with it at this point. ### Required vSphere account privileges Additionally, be sure that the requirements below are met in order to enable gue ### Supported operating systems -Make sure you are using a version of the Windows or Linux [operating systems that are officially supported for the Azure Connected Machine agent](../servers/prerequisites.md#supported-operating-systems). Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, aren't supported operating environments. +Make sure you're using a version of the Windows or Linux [operating systems that are officially supported for the Azure Connected Machine agent](../servers/prerequisites.md#supported-operating-systems). Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, aren't supported operating environments. ### Software requirements |
azure-arc | Switch To New Preview Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-preview-version.md | -# Customer intent: As a VI admin, I want to switch to the new preview version of Arc-enabled VMware vSphere (preview) and leverage the associated capabilities +# Customer intent: As a VI admin, I want to switch to the new preview version of Arc-enabled VMware vSphere and leverage the associated capabilities # Switch to the new preview version -On August 21, 2023, we rolled out major changes to Azure Arc-enabled VMware vSphere preview. We are now announcing a new preview. By switching to the new preview version, you can use all the Azure management services that are available for Arc-enabled Servers. +On August 21, 2023, we rolled out major changes to Azure Arc-enabled VMware vSphere preview. We're now announcing a new preview. By switching to the new preview version, you can use all the Azure management services that are available for Arc-enabled Servers. > [!NOTE]-> If you're new to Arc-enabled VMware vSphere (preview), you will be able to leverage the new capabilities by default. To get started with the preview, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md). +> If you're new to Arc-enabled VMware vSphere (preview), you will be able to leverage the new capabilities by default. To get started with the new preview, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md). 
## Switch to the new preview version (Existing preview customer) -If you are an existing **Azure Arc-enabled VMware** customer, for VMs that are Azure-enabled, follow these steps to switch to the new preview version: +If you're an existing **Azure Arc-enabled VMware** customer, for VMs that are Azure-enabled, follow these steps to switch to the new preview version: >[!Note] >If you had enabled guest management on any of the VMs, remove [VM extensions](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-1-remove-vm-extensions) and [disconnect agents](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-2-disconnect-the-agent-from-azure-arc). |
azure-arc | Troubleshoot Guest Management Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md | -This article provides information on how to troubleshoot and resolve the issues that may occur while you enable guest management on Arc-enabled VMware vSphere virtual machines. +This article provides information on how to troubleshoot and resolve the issues that can occur while you enable guest management on Arc-enabled VMware vSphere virtual machines. ## Troubleshoot issues while enabling Guest Management on a domain-joined Linux VM This article provides information on how to troubleshoot and resolve the issues ### Additional information -The parameter `ad_gpo_map_batch` according to the [sssd mainpage](https://jhrozek.fedorapeople.org/sssd/1.13.4/man/sssd-ad.5.html): +The [sssd man page](https://jhrozek.fedorapeople.org/sssd/1.13.4/man/sssd-ad.5.html) defines the parameter `ad_gpo_map_batch` as: A comma-separated list of Pluggable Authentication Module (PAM) service names for which GPO-based access control is evaluated based on the BatchLogonRight and DenyBatchLogonRight policy settings. Default: The default set of PAM service names includes: - crond: - `vmtoolsd` PAM is enabled for SSSD evaluation. For any request coming through VMware tools, SSSD will be invoked since VMware tools use this PAM for authenticating to the Linux Guest VM. + `vmtoolsd` PAM is enabled for SSSD evaluation. For any request coming through VMware tools, SSSD is invoked since VMware tools use this PAM for authenticating to the Linux Guest VM. #### References |
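As a configuration sketch of the `ad_gpo_map_batch` parameter described above: per the man page, a leading `+` adds a PAM service to the default set and a leading `-` removes one. The domain name below is a placeholder, and excluding `vmtoolsd` is illustrative only:

```ini
[domain/contoso.com]
# Illustrative: remove vmtoolsd from GPO-based batch logon evaluation
# so VMware tools requests are not subject to BatchLogonRight checks.
ad_gpo_map_batch = -vmtoolsd
```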
azure-cache-for-redis | Cache Best Practices Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-client-libraries.md | clusterServersConfig: timeout: 5000 retryAttempts: 3 retryInterval: 3000+ checkLockSyncedSlaves: false failedSlaveReconnectionInterval: 15000 failedSlaveCheckInterval: 60000 subscriptionsPerConnection: 5 |
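In context, these retry and timeout keys sit under `clusterServersConfig` in a Redisson YAML configuration file. A minimal sketch with a placeholder endpoint (Azure Cache for Redis uses TLS on port 6380, hence the `rediss://` scheme):

```yaml
clusterServersConfig:
  nodeAddresses:
    - "rediss://contoso.redis.cache.windows.net:6380"
  password: "<access-key>"
  timeout: 5000
  retryAttempts: 3
  retryInterval: 3000
  checkLockSyncedSlaves: false
```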
azure-functions | Dotnet Isolated In Process Differences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md | Use the following table to compare feature and functional differences between th | [Supported .NET versions](#supported-versions) | Long Term Support (LTS) versions<sup>6</sup>,<br/>Standard Term Support (STS) versions,<br/>.NET Framework | Long Term Support (LTS) versions<sup>6</sup> | | Core packages | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | | Binding extension packages | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) |-| Durable Functions | [Supported](durable/durable-functions-isolated-create-first-csharp.md?pivots=code-editor-visualstudio) (Support does not yet include Durable Entities) | [Supported](durable/durable-functions-overview.md) | +| Durable Functions | [Supported](durable/durable-functions-dotnet-isolated-overview.md)| [Supported](durable/durable-functions-overview.md) | | Model types exposed by bindings | Simple types<br/>JSON serializable types<br/>Arrays/enumerations<br/>[Service SDK types](dotnet-isolated-process-guide.md#sdk-types)<sup>4</sup> | Simple types<br/>[JSON serializable](/dotnet/api/system.text.json.jsonserializeroptions) types<br/>Arrays/enumerations<br/>Service SDK types<sup>4</sup> | | HTTP trigger model types| [HttpRequestData] / [HttpResponseData]<br/>[HttpRequest] / [IActionResult] (using [ASP.NET Core integration][aspnetcore-integration])<sup>5</sup>| [HttpRequest] / 
[IActionResult]<sup>5</sup><br/>[HttpRequestMessage] / [HttpResponseMessage] | | Output binding interactions | Return values in an expanded model with:<br/> - single or [multiple outputs](dotnet-isolated-process-guide.md#multiple-output-bindings)<br/> - arrays of outputs| Return values (single output only),<br/>`out` parameters,<br/>`IAsyncCollector` | |
azure-functions | Durable Functions Bindings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md | public String sayHello(@DurableActivityTrigger(name = "name") String name) { You can use regular input and output bindings in addition to the activity trigger binding. ::: zone pivot="programming-language-javascript" -For example, you can take the input to your activity binding, and send a message to an EventHub using the EventHub output binding: +For example, you can take the input to your activity binding, and send a message to an Event Hub using the Event Hubs output binding: ```json { Internally, this trigger binding polls the configured durable store for new enti The entity trigger is configured using the [EntityTriggerAttribute](/dotnet/api/microsoft.azure.webjobs.extensions.durabletask.entitytriggerattribute) .NET attribute. > [!NOTE]-> Entity triggers aren't yet supported for isolated worker process apps. +> Entity triggers are currently in **preview** for isolated worker process apps. [Learn more.](durable-functions-dotnet-entities.md) ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell" The entity trigger is defined by the following JSON object in the `bindings` array of *function.json*: |
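For reference, an entity trigger binding in *function.json* has roughly the following shape (the binding name is illustrative):

```json
{
  "name": "context",
  "type": "entityTrigger",
  "direction": "in"
}
```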
azure-functions | Durable Functions Dotnet Entities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-dotnet-entities.md | We currently offer two APIs for defining entities: - The **function-based syntax** is a lower-level interface that represents entities as functions. It provides precise control over how the entity operations are dispatched, and how the entity state is managed. -This article focuses primarily on the class-based syntax, as we expect it to be better suited for most applications. However, the [function-based syntax](#function-based-syntax) may be appropriate for applications that wish to define or manage their own abstractions for entity state and operations. Also, it may be appropriate for implementing libraries that require genericity not currently supported by the class-based syntax. +This article focuses primarily on the class-based syntax, as we expect it to be better suited for most applications. However, the [function-based syntax](#function-based-syntax) can be appropriate for applications that wish to define or manage their own abstractions for entity state and operations. Also, it can be appropriate for implementing libraries that require genericity not currently supported by the class-based syntax. > [!NOTE] > The class-based syntax is just a layer on top of the function-based syntax, so both variants can be used interchangeably in the same application. > [!NOTE]-> Entities are not currently supported in Durable Functions for the dotnet-isolated worker. +> Durable entities support for the dotnet-isolated worker is currently in **preview**. You can find more samples and provide feedback in the [durabletask-dotnet](https://github.com/microsoft/durabletask-dotnet) GitHub repo. ## Defining entity classes The following example is an implementation of a `Counter` entity that stores a single value of type integer, and offers four operations `Add`, `Reset`, `Get`, and `Delete`. 
+### [In-process](#tab/in-process) ```csharp [JsonObject(MemberSerialization.OptIn)] public class Counter public class Counter The `Run` function contains the boilerplate required for using the class-based syntax. It must be a *static* Azure Function. It executes once for each operation message that is processed by the entity. When `DispatchAsync<T>` is called and the entity isn't already in memory, it constructs an object of type `T` and populates its fields from the last persisted JSON found in storage (if any). Then it invokes the method with the matching name. -The `EntityTrigger` Function, `Run` in this sample, does not need to reside within the Entity class itself. It may reside within any valid location for an Azure Function: inside the top-level namespace, or inside a top-level class. However, if nested deeper (e.g, the Function is declared inside a *nested* class), then this Function will not be recognized by the latest runtime. +The `EntityTrigger` Function, `Run` in this sample, does not need to reside within the Entity class itself. It can reside within any valid location for an Azure Function: inside the top-level namespace, or inside a top-level class. However, if nested deeper (e.g., the Function is declared inside a *nested* class), then this Function will not be recognized by the latest runtime. > [!NOTE] > The state of a class-based entity is **created implicitly** before the entity processes an operation, and can be **deleted explicitly** in an operation by calling `Entity.Current.DeleteState()`. +### [Isolated worker process](#tab/isolated-process) +There are two ways of defining an entity as a class in the C# isolated worker model. They produce entities with different state serialization structures. ++With the first approach, the entire object is serialized when defining an entity.
+```csharp
+public class Counter
+{
+    public int Value { get; set; }
+
+    public void Add(int amount)
+    {
+        this.Value += amount;
+    }
+
+    public Task Reset()
+    {
+        this.Value = 0;
+        return Task.CompletedTask;
+    }
+
+    public Task<int> Get()
+    {
+        return Task.FromResult(this.Value);
+    }
+
+    // Delete is implicitly defined when defining an entity this way
+
+    [Function(nameof(Counter))]
+    public static Task Run([EntityTrigger] TaskEntityDispatcher dispatcher)
+        => dispatcher.DispatchAsync<Counter>();
+}
+```
+The second approach is a `TaskEntity<TState>`-based implementation, which makes it easy to use dependency injection. In this case, state is deserialized to the `State` property, and no other property is serialized/deserialized.
+
+```csharp
+public class Counter : TaskEntity<int>
+{
+    readonly ILogger logger;
+
+    public Counter(ILogger<Counter> logger)
+    {
+        this.logger = logger;
+    }
+
+    public void Add(int amount)
+    {
+        this.State += amount;
+    }
+
+    public Task Reset()
+    {
+        this.State = 0;
+        return Task.CompletedTask;
+    }
+
+    public Task<int> Get()
+    {
+        return Task.FromResult(this.State);
+    }
+
+    // Delete is implicitly defined when defining an entity this way
+
+    [Function(nameof(Counter))]
+    public static Task Run([EntityTrigger] TaskEntityDispatcher dispatcher)
+        => dispatcher.DispatchAsync<Counter>();
+}
+```
+> [!WARNING]
+> When writing entities that derive from `ITaskEntity` or `TaskEntity<TState>`, it is important to **not** name your entity trigger method `RunAsync`. This will cause runtime errors when invoking the entity as there is an ambiguous match with the method name "RunAsync" due to `ITaskEntity` already defining an instance-level "RunAsync".
+
+### Deleting entities in the isolated model
+
+Deleting an entity in the isolated model is accomplished by setting the entity state to `null`. How this is accomplished depends on what entity implementation path is being used.
+
+- When deriving from `ITaskEntity` or using [function-based syntax](#function-based-syntax), delete is accomplished by calling `TaskEntityOperation.State.SetState(null)`.
+- When deriving from `TaskEntity<TState>`, delete is implicitly defined. However, it can be overridden by defining a method `Delete` on the entity. State can also be deleted from any operation via `this.State = null`.
+  - Deleting by setting the state to null requires `TState` to be nullable.
+  - The implicitly defined delete operation deletes non-nullable `TState`.
+- When using a POCO as your state (not deriving from `TaskEntity<TState>`), delete is implicitly defined. It is possible to override the delete operation by defining a method `Delete` on the POCO. However, there is no way to set state to `null` in the POCO route, so the implicitly defined delete operation is the only true delete.
+++ ### Class Requirements Entity classes are POCOs (plain old CLR objects) that require no special superclasses, interfaces, or attributes.
However: Operations also have access to functionality provided by the `Entity.Current` co For example, we can modify the counter entity so it starts an orchestration when the counter reaches 100 and passes the entity ID as an input argument: +#### [In-Process](#tab/in-process) ```csharp- public void Add(int amount) +public void Add(int amount) +{ + if (this.Value < 100 && this.Value + amount >= 100) {- if (this.Value < 100 && this.Value + amount >= 100) - { - Entity.Current.StartNewOrchestration("MilestoneReached", Entity.Current.EntityId); - } - this.Value += amount; + Entity.Current.StartNewOrchestration("MilestoneReached", Entity.Current.EntityId); }+ this.Value += amount; +} ```+#### [Isolated worker process](#tab/isolated-process) +```csharp +public void Add(int amount, TaskEntityContext context) +{ + if (this.Value < 100 && this.Value + amount >= 100) + { + context.ScheduleNewOrchestration("MilestoneReached", context.Id); + } ++ this.Value += amount; +} +``` + ## Accessing entities directly Class-based entities can be accessed directly, using explicit string names for t ### Example: client signals entity The following Azure Http Function implements a DELETE operation using REST conventions. 
It sends a delete signal to the counter entity whose key is passed in the URL path.-+#### [In-process](#tab/in-process) ```csharp [FunctionName("DeleteCounter")] public static async Task<HttpResponseMessage> DeleteCounter( public static async Task<HttpResponseMessage> DeleteCounter( return req.CreateResponse(HttpStatusCode.Accepted); } ```+#### [Isolated worker process](#tab/isolated-process) ++```csharp +[Function("DeleteCounter")] +public static async Task<HttpResponseData> DeleteCounter( + [HttpTrigger(AuthorizationLevel.Function, "delete", Route = "Counter/{entityKey}")] HttpRequestData req, + [DurableClient] DurableTaskClient client, string entityKey) +{ + var entityId = new EntityInstanceId("Counter", entityKey); + await client.Entities.SignalEntityAsync(entityId, "Delete"); + return req.CreateResponse(HttpStatusCode.Accepted); +} +``` + ### Example: client reads entity state The following Azure Http Function implements a GET operation using REST conventions. It reads the current state of the counter entity whose key is passed in the URL path. +#### [In-process](#tab/in-process) ```csharp [FunctionName("GetCounter")] public static async Task<HttpResponseMessage> GetCounter( public static async Task<HttpResponseMessage> GetCounter( return req.CreateResponse(state); } ```- > [!NOTE]-> The object returned by `ReadEntityStateAsync` is just a local copy, that is, a snapshot of the entity state from some earlier point in time. In particular, it may be stale, and modifying this object has no effect on the actual entity. +> The object returned by `ReadEntityStateAsync` is just a local copy, that is, a snapshot of the entity state from some earlier point in time. In particular, it can be stale, and modifying this object has no effect on the actual entity. 
+
+#### [Isolated worker process](#tab/isolated-process)
+
+```csharp
+[Function("GetCounter")]
+public static async Task<HttpResponseData> GetCounter(
+    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "Counter/{entityKey}")] HttpRequestData req,
+    [DurableClient] DurableTaskClient client, string entityKey)
+{
+    var entityId = new EntityInstanceId("Counter", entityKey);
+    EntityMetadata<int>? entity = await client.Entities.GetEntityAsync<int>(entityId);
+    HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
+    await response.WriteAsJsonAsync(entity?.State);
+
+    return response;
+}
+```
+ ### Example: orchestration first signals, then calls entity The following orchestration signals a counter entity to increment it, and then calls the same entity to read its latest value. +#### [In-process](#tab/in-process) ```csharp [FunctionName("IncrementThenGet")] public static async Task<int> Run( public static async Task<int> Run( } ``` +#### [Isolated worker process](#tab/isolated-process)
+
+```csharp
+[Function("IncrementThenGet")]
+public static async Task<int> Run([OrchestrationTrigger] TaskOrchestrationContext context)
+{
+    var entityId = new EntityInstanceId("Counter", "myCounter");
+
+    // One-way signal to the entity - does not await a response
+    await context.Entities.SignalEntityAsync(entityId, "Add", 1);
+
+    // Two-way call to the entity which returns a value - awaits the response
+    int currentValue = await context.Entities.CallEntityAsync<int>(entityId, "Get");
+
+    return currentValue;
+}
+```
++ ## Accessing entities through interfaces Interfaces can be used for accessing entities via generated proxy objects. This approach ensures that the name and argument type of an operation matches what is implemented. We recommend using interfaces for accessing entities whenever possible.
public class Counter : ICounter Entity classes and entity interfaces are similar to the grains and grain interfaces popularized by [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/). For more information about similarities and differences between Durable Entities and Orleans, see [Comparison with virtual actors](durable-functions-entities.md#comparison-with-virtual-actors). -Besides providing type checking, interfaces are useful for a better separation of concerns within the application. For example, since an entity may implement multiple interfaces, a single entity can serve multiple roles. Also, since an interface may be implemented by multiple entities, general communication patterns can be implemented as reusable libraries. +Besides providing type checking, interfaces are useful for a better separation of concerns within the application. For example, since an entity can implement multiple interfaces, a single entity can serve multiple roles. Also, since an interface can be implemented by multiple entities, general communication patterns can be implemented as reusable libraries. ### Example: client signals entity through interface +#### [In-Process](#tab/in-process) Client code can use `SignalEntityAsync<TEntityInterface>` to send signals to entities that implement `TEntityInterface`. For example: ```csharp In this example, the `proxy` parameter is a dynamically generated instance of `I > The `SignalEntityAsync` APIs can be used only for one-way operations. Even if an operation returns `Task<T>`, the value of the `T` parameter will always be null or `default`, not the actual result. For example, it doesn't make sense to signal the `Get` operation, as no value is returned. Instead, clients can use either `ReadStateAsync` to access the counter state directly, or can start an orchestrator function that calls the `Get` operation.
+#### [Isolated worker process](#tab/isolated-process) ++This is currently not supported in the .NET isolated worker. +++ ### Example: orchestration first signals, then calls entity through proxy +#### [In-Process](#tab/in-process) + To call or signal an entity from within an orchestration, `CreateEntityProxy` can be used, along with the interface type, to generate a proxy for the entity. This proxy can then be used to call or signal operations: ```csharp public static async Task<int> Run( Implicitly, any operations that return `void` are signaled, and any operations that return `Task` or `Task<T>` are called. One can change this default behavior, and signal operations even if they return Task, by using the `SignalEntity<IInterfaceType>` method explicitly. +#### [Isolated worker process](#tab/isolated-process) ++This is currently not supported in the .NET isolated worker. +++ ### Shorter option for specifying the target When calling or signaling an entity using an interface, the first argument must specify the target entity. The target can be specified either by specifying the entity ID, or, in cases where there's just one class that implements the entity, just the entity key: In the example above, we chose to include several attributes to make the underly - We annotate the class with `[JsonObject(MemberSerialization.OptIn)]` to remind us that the class must be serializable, and to persist only members that are explicitly marked as JSON properties. - We annotate the fields to be persisted with `[JsonProperty("name")]` to remind us that a field is part of the persisted entity state, and to specify the property name to be used in the JSON representation. -However, these attributes aren't required; other conventions or attributes are permitted as long as they work with Json.NET. 
For example, one may use `[DataContract]` attributes, or no attributes at all: +However, these attributes aren't required; other conventions or attributes are permitted as long as they work with Json.NET. For example, one can use `[DataContract]` attributes, or no attributes at all: ```csharp [DataContract] By default, the name of the class is *not* stored as part of the JSON representa ### Making changes to class definitions -Some care is required when making changes to a class definition after an application has been run, because the stored JSON object may no longer match the new class definition. Still, it is often possible to deal correctly with changing data formats as long as one understands the deserialization process used by `JsonConvert.PopulateObject`. +Some care is required when making changes to a class definition after an application has been run, because the stored JSON object can no longer match the new class definition. Still, it is often possible to deal correctly with changing data formats as long as one understands the deserialization process used by `JsonConvert.PopulateObject`. For example, here are some examples of changes and their effect: Sometimes we want to exert more control over how entity objects are constructed. Occasionally we need to perform some special initialization before dispatching an operation to an entity that has never been accessed, or that has been deleted. 
To specify this behavior, one can add a conditional before the `DispatchAsync`: +#### [In-process](#tab/in-process) + ```csharp [FunctionName(nameof(Counter))] public static Task Run([EntityTrigger] IDurableEntityContext ctx) public static Task Run([EntityTrigger] IDurableEntityContext ctx) return ctx.DispatchAsync<Counter>(); } ```+#### [Isolated worker process](#tab/isolated-process) ++```csharp +public class Counter : TaskEntity<int> +{ + protected override int InitializeState(TaskEntityOperation operation) + { + // This is called when state is null, giving a chance to customize first-access of entity. + return 10; + } +} +``` + ### Bindings in entity classes Unlike regular functions, entity class methods don't have direct access to input The following example shows how a `CloudBlobContainer` reference from the [blob input binding](../functions-bindings-storage-blob-input.md) can be made available to a class-based entity. +#### [In-process](#tab/in-process) + ```csharp public class BlobBackedEntity { public class BlobBackedEntity } } ```+#### [Isolated worker process](#tab/isolated-process) ++```csharp +public class BlobBackedEntity : TaskEntity<object?> +{ + private BlobContainerClient Container { get; set; } ++ [Function(nameof(BlobBackedEntity))] + public Task DispatchAsync( + [EntityTrigger] TaskEntityDispatcher dispatcher, + [BlobInput("my-container")] BlobContainerClient container) + { + this.Container = container; + return dispatcher.DispatchAsync(this); + } +} +``` + For more information on bindings in Azure Functions, see the [Azure Functions Triggers and Bindings](../functions-triggers-bindings.md) documentation. For more information on bindings in Azure Functions, see the [Azure Functions Tr Entity classes support [Azure Functions Dependency Injection](../functions-dotnet-dependency-injection.md). The following example demonstrates how to register an `IHttpClientFactory` service into a class-based entity. 
+#### [In-process](#tab/in-process) + ```csharp [assembly: FunctionsStartup(typeof(MyNamespace.Startup))] namespace MyNamespace } } ```+#### [Isolated worker process](#tab/isolated-process) ++The following demonstrates how to configure an `HttpClient` in the `program.cs` file to be imported later in the entity class. ++```csharp +public class Program +{ + public static void Main() + { + IHost host = new HostBuilder() + .ConfigureFunctionsWorkerDefaults((IFunctionsWorkerApplicationBuilder workerApplication) => + { + workerApplication.Services.AddHttpClient<HttpEntity>() + .ConfigureHttpClient(client => {/* configure http client here */}); + }) + .Build(); ++ host.Run(); + } +} +``` + The following snippet demonstrates how to incorporate the injected service into your entity class. +#### [In-process](#tab/in-process) + ```csharp public class HttpEntity { public class HttpEntity } ``` +#### [Isolated worker process](#tab/isolated-process) ++```csharp +public class HttpEntity : TaskEntity<object?> +{ + private readonly HttpClient client; ++ public HttpEntity(HttpClient client) + { + this.client = client; + } ++ public async Task<int> GetAsync(string url) + { + using var response = await this.client.GetAsync(url); + return (int)response.StatusCode; + } ++ [Function(nameof(HttpEntity))] + public static Task Run([EntityTrigger] TaskEntityDispatcher dispatcher) + => dispatcher.DispatchAsync<HttpEntity>(); +} +``` ++ > [!NOTE] > To avoid issues with serialization, make sure to exclude fields meant to store injected values from the serialization. > [!NOTE]-> Unlike when using constructor injection in regular .NET Azure Functions, the functions entry point method for class-based entities *must* be declared `static`. Declaring a non-static function entry point may cause conflicts between the normal Azure Functions object initializer and the Durable Entities object initializer. 
+> Unlike when using constructor injection in regular .NET Azure Functions, the functions entry point method for class-based entities *must* be declared `static`. Declaring a non-static function entry point can cause conflicts between the normal Azure Functions object initializer and the Durable Entities object initializer. ## Function-based syntax -So far we have focused on the class-based syntax, as we expect it to be better suited for most applications. However, the function-based syntax can be appropriate for applications that wish to define or manage their own abstractions for entity state and operations. Also, it may be appropriate when implementing libraries that require genericity not currently supported by the class-based syntax. +So far we have focused on the class-based syntax, as we expect it to be better suited for most applications. However, the function-based syntax can be appropriate for applications that wish to define or manage their own abstractions for entity state and operations. Also, it can be appropriate when implementing libraries that require genericity not currently supported by the class-based syntax. With the function-based syntax, the Entity Function explicitly handles the operation dispatch, and explicitly manages the state of the entity. For example, the following code shows the *Counter* entity implemented using the function-based syntax. +### [In-process](#tab/in-process) + ```csharp [FunctionName("Counter")] public static void Counter([EntityTrigger] IDurableEntityContext ctx) Finally, the following members are used to signal other entities, or start new o * `SignalEntity(EntityId, operation, input)`: sends a one-way message to an entity. * `CreateNewOrchestration(orchestratorFunctionName, input)`: starts a new orchestration. 
+### [Isolated worker process](#tab/isolated-process) ++```csharp +[Function(nameof(Counter))] +public static Task DispatchAsync([EntityTrigger] TaskEntityDispatcher dispatcher) +{ + return dispatcher.DispatchAsync(operation => + { + if (operation.State.GetState(typeof(int)) is null) + { + operation.State.SetState(0); + } ++ switch (operation.Name.ToLowerInvariant()) + { + case "add": + int state = operation.State.GetState<int>(); + state += operation.GetInput<int>(); + operation.State.SetState(state); + return new(state); + case "reset": + operation.State.SetState(0); + break; + case "get": + return new(operation.State.GetState<int>()); + case "delete": + operation.State.SetState(null); + break; + } ++ return default; + }); +} +``` ++ ## Next steps > [!div class="nextstepaction"] |
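The function-based dispatch pattern in the diff above — initialize state on first access, then route by operation name — is easy to see in miniature outside the Durable SDK. The following plain-Python sketch (all names hypothetical, no Azure dependencies) models the same `add`/`reset`/`get`/`delete` routing:

```python
def dispatch(state, name, operation_input=None):
    # Hypothetical stand-in for the dispatcher shown above: takes the current
    # state and an operation, returns (new_state, result).
    if state is None:          # first access: initialize the state
        state = 0
    name = name.lower()
    if name == "add":
        state += operation_input
        return state, state
    if name == "reset":
        return 0, None
    if name == "get":
        return state, state
    if name == "delete":
        return None, None      # deleting means the state becomes empty again
    raise ValueError(f"unknown operation: {name}")

state = None
state, result = dispatch(state, "add", 5)
state, result = dispatch(state, "get")      # result == 5
state, result = dispatch(state, "delete")   # state is None again
```

The tuple-returning shape is only for illustration; in the real SDK the dispatcher mutates `operation.State` in place.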
azure-functions | Durable Functions Dotnet Isolated Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-dotnet-isolated-overview.md | Using this model lets you get all the great benefits that come with the Azure Fu - Support for strongly typed calls and class-based activities and orchestrations (NOTE: in preview. For more information, see [here](#source-generator-and-class-based-activities-and-orchestrations).) - Plus all the benefits of the Azure Functions .NET isolated worker. -### Feature parity with in-process Durable Functions --Not all features from in-process Durable Functions have been migrated to the isolated worker yet. Some known missing features that will be addressed at a later date are: --- Durable Entities-- `CallHttpAsync`- ### Source generator and class-based activities and orchestrations **Requirement**: add `<PackageReference Include="Microsoft.DurableTask.Generators" Version="1.0.0-preview.1" />` to your project. By adding the source generator package, you get access to two new features: -- **Class-based activities and orchestrations**, an alternative way to write Durable Functions. Instead of "function-based", you write strongly-typed classes, which inherit types from the Durable SDK.+- **Class-based activities and orchestrations**, an alternative way to write Durable Functions. Instead of "function-based", you write strongly typed classes, which inherit types from the Durable SDK. - **Strongly typed extension methods** for invoking sub orchestrations and activities. These extension methods can also be used from "function-based" activities and orchestrations. #### Function-based example |
azure-functions | Durable Functions Entities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-entities.md | Entity functions define operations for reading and updating small pieces of stat Entities provide a means for scaling out applications by distributing the work across many entities, each with a modestly sized state. > [!NOTE]-> Entity functions and related functionality are only available in [Durable Functions 2.0](durable-functions-versions.md#migrate-from-1x-to-2x) and above. They are currently supported in .NET in-proc, JavaScript, and Python, but not in .NET isolated worker, PowerShell, or Java. +> Entity functions and related functionality are only available in [Durable Functions 2.0](durable-functions-versions.md#migrate-from-1x-to-2x) and above. They are currently supported in .NET in-proc, .NET isolated worker ([preview](durable-functions-dotnet-entities.md)), JavaScript, and Python, but not in PowerShell or Java. ## General concepts Currently, the two distinct APIs for defining entities are: **Class-based syntax (.NET only)**, where entities and operations are represented by classes and methods. This syntax produces more easily readable code and allows operations to be invoked in a type-safe way. The class-based syntax is a thin layer on top of the function-based syntax, so both variants can be used interchangeably in the same application. -# [C#](#tab/csharp) +# [C# (In-proc)](#tab/in-process) ### Example: Function-based syntax - C# The state of this entity is an object of type `Counter`, which contains a field For more information on the class-based syntax and how to use it, see [Defining entity classes](durable-functions-dotnet-entities.md#defining-entity-classes). 
+# [C# (Isolated)](#tab/isolated-process) +### Example: Function-based syntax - C# ++```csharp +[Function(nameof(Counter))] +public static Task DispatchAsync([EntityTrigger] TaskEntityDispatcher dispatcher) +{ + return dispatcher.DispatchAsync(operation => + { + if (operation.State.GetState(typeof(int)) is null) + { + operation.State.SetState(0); + } ++ switch (operation.Name.ToLowerInvariant()) + { + case "add": + int state = operation.State.GetState<int>(); + state += operation.GetInput<int>(); + operation.State.SetState(state); + return new(state); + case "reset": + operation.State.SetState(0); + break; + case "get": + return new(operation.State.GetState<int>()); + case "delete": + operation.State.SetState(null); + break; + } ++ return default; + }); +} +``` ++### Example: Class-based syntax - C# +The following example shows the implementation of the `Counter` entity using classes and methods. +```csharp +public class Counter +{ + public int CurrentValue { get; set; } ++ public void Add(int amount) => this.CurrentValue += amount; ++ public void Reset() => this.CurrentValue = 0; ++ public int Get() => this.CurrentValue; ++ [Function(nameof(Counter))] + public static Task RunEntityAsync([EntityTrigger] TaskEntityDispatcher dispatcher) + { + return dispatcher.DispatchAsync<Counter>(); + } +} ++``` +The following example implements a `Counter` entity by directly implementing `TaskEntity<TState>`, which gives the added benefit of being able to use Dependency Injection. ++```csharp +public class Counter : TaskEntity<int> +{ + readonly ILogger logger; ++ public Counter(ILogger<Counter> logger) + { + this.logger = logger; + } ++ public void Add(int amount) => this.State += amount; ++ public void Reset() => this.State = 0; ++ public int Get() => this.State; ++ [Function(nameof(Counter))] + public Task RunEntityAsync([EntityTrigger] TaskEntityDispatcher dispatcher) + { + return dispatcher.DispatchAsync(this); + } +} +``` +You can also dispatch by using a static method. 
+```csharp +[Function(nameof(Counter))] +public static Task RunEntityStaticAsync([EntityTrigger] TaskEntityDispatcher dispatcher) +{ + return dispatcher.DispatchAsync<Counter>(); +} +``` + # [JavaScript](#tab/javascript) ### Example: JavaScript entity The following examples illustrate these various ways of accessing entities. To access entities from an ordinary Azure Function, which is also known as a client function, use the [entity client binding](durable-functions-bindings.md#entity-client). The following example shows a queue-triggered function signaling an entity using this binding. -# [C#](#tab/csharp) +# [C# (In-proc)](#tab/in-process) > [!NOTE] > For simplicity, the following examples show the loosely typed syntax for accessing entities. In general, we recommend that you [access entities through interfaces](durable-functions-dotnet-entities.md#accessing-entities-through-interfaces) because it provides more type checking. public static Task Run( } ``` +# [C# (Isolated)](#tab/isolated-process) ++```csharp +[Function("AddFromQueue")] +public static Task Run( + [QueueTrigger("durable-function-trigger")] string input, [DurableClient] DurableTaskClient client) +{ + // Entity operation input comes from the queue message content. 
+ var entityId = new EntityInstanceId(nameof(Counter), "myCounter"); + int amount = int.Parse(input); + return client.Entities.SignalEntityAsync(entityId, "Add", amount); +} +``` + # [JavaScript](#tab/javascript) ```javascript The term *signal* means that the entity API invocation is one-way and asynchrono Client functions can also query the state of an entity, as shown in the following example: -# [C#](#tab/csharp) +# [C# (In-proc)](#tab/in-process) ```csharp [FunctionName("QueryCounter")] public static async Task<HttpResponseMessage> Run( return req.CreateResponse(HttpStatusCode.OK, stateResponse.EntityState); } ```+# [C# (Isolated)](#tab/isolated-process) +```csharp +[Function("QueryCounter")] +public static async Task<HttpResponseData> Run( + [HttpTrigger(AuthorizationLevel.Function)] HttpRequestData req, + [DurableClient] DurableTaskClient client) +{ + var entityId = new EntityInstanceId(nameof(Counter), "myCounter"); + EntityMetadata<int>? entity = await client.Entities.GetEntityAsync<int>(entityId); ++ if (entity is null) + { + return req.CreateResponse(HttpStatusCode.NotFound); + } + + HttpResponseData response = req.CreateResponse(HttpStatusCode.OK); + await response.WriteAsJsonAsync(entity); ++ return response; +} +``` # [JavaScript](#tab/javascript) Entity state queries are sent to the Durable tracking store and return the entit Orchestrator functions can access entities by using APIs on the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger). The following example code shows an orchestrator function calling and signaling a `Counter` entity. 
-# [C#](#tab/csharp) +# [C# (In-proc)](#tab/in-process) ```csharp [FunctionName("CounterOrchestration")] public static async Task Run( } ``` +# [C# (Isolated)](#tab/isolated-process) ++```csharp +[Function("CounterOrchestration")] +public static async Task Run([OrchestrationTrigger] TaskOrchestrationContext context) +{ + var entityId = new EntityInstanceId(nameof(Counter), "myCounter"); ++ // Two-way call to the entity which returns a value - awaits the response + int currentValue = await context.Entities.CallEntityAsync<int>(entityId, "Get"); ++ if (currentValue < 10) + { + // One-way signal to the entity which updates the value - does not await a response + await context.Entities.SignalEntityAsync(entityId, "Add", 1); + } +} +``` + # [JavaScript](#tab/javascript) ```javascript Only orchestrations are capable of calling entities and getting a response, whic An entity function can send signals to other entities, or even itself, while it executes an operation. For example, we can modify the previous `Counter` entity example so that it sends a "milestone-reached" signal to some monitor entity when the counter reaches the value 100. -# [C#](#tab/csharp) +# [C# (In-proc)](#tab/in-process) ```csharp case "add": For example, we can modify the previous `Counter` entity example so that it send break; ``` +# [C# (Isolated)](#tab/isolated-process) ++```csharp +case "add": + var currentValue = operation.State.GetState<int>(); + var amount = operation.GetInput<int>(); + if (currentValue < 100 && currentValue + amount >= 100) + { + operation.Context.SignalEntity(new EntityInstanceId("MonitorEntity", ""), "milestone-reached", operation.Context.EntityInstanceId); + } ++ operation.State.SetState(currentValue + amount); + break; +``` + # [JavaScript](#tab/javascript) ```javascript |
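The distinction the diffs above draw between *signaling* (one-way, no response) and *calling* (two-way, awaits a result) can be sketched with a plain in-memory queue. This illustrative Python model (hypothetical names, not the Durable SDK) shows why a signal returns immediately while a call waits for the entity to process the operation:

```python
import asyncio

class ToyEntity:
    """Illustrative in-memory stand-in for an entity: operations arrive
    through a queue and are applied one at a time, mirroring how an
    entity serializes access to its state."""
    def __init__(self):
        self.state = 0
        self.inbox = asyncio.Queue()

    async def run_once(self, count):
        # Process `count` queued operations, then stop.
        for _ in range(count):
            name, amount, reply = await self.inbox.get()
            if name == "add":
                self.state += amount
            if reply is not None:        # two-way call: send the result back
                reply.set_result(self.state)

    async def signal(self, name, amount=0):
        # One-way signal: enqueue and return immediately, no response.
        await self.inbox.put((name, amount, None))

    async def call(self, name, amount=0):
        # Two-way call: enqueue, then await the entity's response.
        reply = asyncio.get_running_loop().create_future()
        await self.inbox.put((name, amount, reply))
        return await reply

async def main():
    entity = ToyEntity()
    worker = asyncio.create_task(entity.run_once(2))
    await entity.signal("add", 5)        # fire-and-forget
    value = await entity.call("get")     # round-trip, returns 5
    await worker
    return value

value = asyncio.run(main())
```

Because the queue is processed in order, the `get` call observes the earlier `add` signal — the same ordering guarantee entities provide.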
azure-functions | Durable Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md | zone_pivot_groups: df-languages ## <a name="language-support"></a>Supported languages -Durable Functions is designed to work with all Azure Functions programming languages but may have different minimum requirements for each language. The following table shows the minimum supported app configurations: +Durable Functions is designed to work with all Azure Functions programming languages but might have different minimum requirements for each language. The following table shows the minimum supported app configurations: | Language stack | Azure Functions Runtime versions | Language worker version | Minimum bundles version | | - | - | - | - | In this example, the values `F1`, `F2`, `F3`, and `F4` are the names of other fu ::: zone pivot="csharp" -# [C# (InProc)](#tab/in-process) +#### [In-process](#tab/in-process) ```csharp [FunctionName("Chaining")] public static async Task<object> Run( You can use the `context` parameter to invoke other functions by name, pass parameters, and return function output. Each time the code calls `await`, the Durable Functions framework checkpoints the progress of the current function instance. If the process or virtual machine recycles midway through the execution, the function instance resumes from the preceding `await` call. For more information, see the next section, Pattern #2: Fan out/fan in. -# [C# (Isolated)](#tab/isolated-process) +#### [Isolated worker process](#tab/isolated-process) ```csharp [Function("Chaining")] public double functionChaining( } ``` -You can use the `ctx` object to invoke other functions by name, pass parameters, and return function output. The output of these method calls is a `Task<V>` object where `V` is the type of data returned by the invoked function. 
Each time you call `Task<V>.await()`, the Durable Functions framework checkpoints the progress of the current function instance. If the process unexpectedly recycles midway through the execution, the function instance resumes from the preceding `Task<V>.await()` call. For more information, see the next section, Pattern #2: Fan out/fan in. +You can use the `ctx` object to invoke other functions by name, pass parameters, and return function output. The output of these methods is a `Task<V>` object where `V` is the type of data returned by the invoked function. Each time you call `Task<V>.await()`, the Durable Functions framework checkpoints the progress of the current function instance. If the process unexpectedly recycles midway through the execution, the function instance resumes from the preceding `Task<V>.await()` call. For more information, see the next section, Pattern #2: Fan out/fan in. ::: zone-end The Durable Functions extension handles this pattern with relatively simple code ::: zone pivot="csharp" -# [C# (InProc)](#tab/in-process) +#### [In-process](#tab/in-process) ```csharp [FunctionName("FanOutFanIn")] The fan-out work is distributed to multiple instances of the `F2` function. The The automatic checkpointing that happens at the `await` call on `Task.WhenAll` ensures that a potential midway crash or reboot doesn't require restarting an already completed task. 
-# [C# (Isolated)](#tab/isolated-process) +#### [Isolated worker process](#tab/isolated-process) ```csharp [Function("FanOutFanIn")] The following code implements a basic monitor: ::: zone pivot="csharp" -# [C# (InProc)](#tab/in-process) +#### [In-process](#tab/in-process) ```csharp [FunctionName("MonitorJobStatus")] public static async Task Run( } ``` -# [C# (Isolated)](#tab/isolated-process) +#### [Isolated worker process](#tab/isolated-process) ```csharp [Function("MonitorJobStatus")] When a request is received, a new orchestration instance is created for that job Many automated processes involve some kind of human interaction. Involving humans in an automated process is tricky because people aren't as highly available and as responsive as cloud services. An automated process might allow for this interaction by using timeouts and compensation logic. -An approval process is an example of a business process that involves human interaction. Approval from a manager might be required for an expense report that exceeds a certain dollar amount. If the manager doesn't approve the expense report within 72 hours (maybe the manager went on vacation), an escalation process kicks in to get the approval from someone else (perhaps the manager's manager). +An approval process is an example of a business process that involves human interaction. Approval from a manager might be required for an expense report that exceeds a certain dollar amount. If the manager doesn't approve the expense report within 72 hours (perhaps the manager went on vacation), an escalation process kicks in to get the approval from someone else (perhaps the manager's manager). 
![A diagram of the human interaction pattern](./media/durable-functions-concepts/approval.png) These examples create an approval process to demonstrate the human interaction p ::: zone pivot="csharp" -# [C# (InProc)](#tab/in-process) +#### [In-process](#tab/in-process) ```csharp [FunctionName("ApprovalWorkflow")] public static async Task Run( To create the durable timer, call `context.CreateTimer`. The notification is received by `context.WaitForExternalEvent`. Then, `Task.WhenAny` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout). -# [C# (Isolated)](#tab/isolated-process) +#### [Isolated worker process](#tab/isolated-process) ```csharp [Function("ApprovalWorkflow")] An event can also be raised using the durable orchestration client from another ::: zone pivot="csharp" -# [C# (InProc)](#tab/in-process) +#### [In-process](#tab/in-process) ```csharp [FunctionName("RaiseEventToOrchestration")] public static async Task Run( } ``` -# [C# (Isolated)](#tab/isolated-process) +#### [Isolated worker process](#tab/isolated-process) ```csharp [Function("RaiseEventToOrchestration")] public void raiseEventToOrchestration( ### <a name="aggregator"></a>Pattern #6: Aggregator (stateful entities) -The sixth pattern is about aggregating event data over a period of time into a single, addressable *entity*. In this pattern, the data being aggregated may come from multiple sources, may be delivered in batches, or may be scattered over long-periods of time. The aggregator might need to take action on event data as it arrives, and external clients may need to query the aggregated data. +The sixth pattern is about aggregating event data over a period of time into a single, addressable *entity*. In this pattern, the data being aggregated might come from multiple sources, might be delivered in batches, or might be scattered over long periods of time. 
The aggregator might need to take action on event data as it arrives, and external clients might need to query the aggregated data. ![Aggregator diagram](./media/durable-functions-concepts/aggregator.png) You can use [Durable entities](durable-functions-entities.md) to easily implemen ::: zone pivot="csharp" -# [C# (InProc)](#tab/in-process) +> [!NOTE] +> Support for Durable entities is currently in **preview** for the .NET-isolated worker. [Learn more.](durable-functions-dotnet-entities.md) ++#### [In-process](#tab/in-process) ```csharp [FunctionName("Counter")] public class Counter } ``` -# [C# (Isolated)](#tab/isolated-process) +#### [Isolated worker process](#tab/isolated-process) ++```csharp +[Function(nameof(Counter))] +public static Task DispatchAsync([EntityTrigger] TaskEntityDispatcher dispatcher) +{ + return dispatcher.DispatchAsync(operation => + { + if (operation.State.GetState(typeof(int)) is null) + { + operation.State.SetState(0); + } ++ switch (operation.Name.ToLowerInvariant()) + { + case "add": + int state = operation.State.GetState<int>(); + state += operation.GetInput<int>(); + operation.State.SetState(state); + return new(state); + case "reset": + operation.State.SetState(0); + break; + case "get": + return new(operation.State.GetState<int>()); + case "delete": + operation.State.SetState(null); + break; + } ++ return default; + }); +} +``` ++Durable entities can also be modeled as classes in .NET. This model can be useful if the list of operations is fixed and becomes large. The following example is an equivalent implementation of the `Counter` entity using .NET classes and methods. ++```csharp +public class Counter +{ + public int CurrentValue { get; set; } ++ public void Add(int amount) => this.CurrentValue += amount; ++ public void Reset() => this.CurrentValue = 0; -Durable entities are currently not supported in the .NET-isolated worker. 
+ public int Get() => this.CurrentValue; + [Function(nameof(Counter))] + public static Task RunEntityAsync([EntityTrigger] TaskEntityDispatcher dispatcher) + { + return dispatcher.DispatchAsync<Counter>(); + } +} +``` ::: zone-end ::: zone pivot="javascript" Clients can enqueue *operations* for (also known as "signaling") an entity funct ::: zone pivot="csharp" -# [C# (InProc)](#tab/in-process) +#### [In-process](#tab/in-process) ```csharp [FunctionName("EventHubTriggerCSharp")] public static async Task Run( > [!NOTE] > Dynamically generated proxies are also available in .NET for signaling entities in a type-safe way. And in addition to signaling, clients can also query for the state of an entity function using [type-safe methods](durable-functions-dotnet-entities.md#accessing-entities-through-interfaces) on the orchestration client binding. -# [C# (Isolated)](#tab/isolated-process) +#### [Isolated worker process](#tab/isolated-process) -Durable entities are currently not supported in the .NET-isolated worker. 
+```csharp +[Function("EventHubTriggerCSharp")] +public static async Task Run( + [EventHubTrigger("device-sensor-events", Connection = "EventHubConnection", IsBatched = false)] EventData input, + [DurableClient] DurableTaskClient client) +{ + var metricType = (string)input.Properties["metric"]; + var delta = Convert.ToInt32(input.Data); + var entityId = new EntityInstanceId("Counter", metricType); + await client.Entities.SignalEntityAsync(entityId, "add", delta); +} +``` ::: zone-end ::: zone pivot="javascript" The following video highlights the benefits of Durable Functions: > [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Durable-Functions-in-Azure-Functions/player] -For a more in-depth discussion of Durable Functions and the underlying technology, see the following video (it's focused on .NET, but the concepts also apply to other supported languages): --> [!VIDEO https://learn.microsoft.com/Events/dotnetConf/2018/S204/player] - Because Durable Functions is an advanced extension for [Azure Functions](../functions-overview.md), it isn't appropriate for all applications. For a comparison with other Azure orchestration technologies, see [Compare Azure Functions and Azure Logic Apps](../functions-compare-logic-apps-ms-flow-webjobs.md#compare-azure-functions-and-azure-logic-apps). ## Next steps |
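The fan-out/fan-in pattern covered in the diff above reduces to "start many tasks, await them all, aggregate". Here is a minimal conceptual sketch in Python's asyncio — the `f2` naming follows the pattern's diagram, but everything else is hypothetical, and real Durable Functions orchestrations use the durable context (with checkpointing), not `asyncio.gather`:

```python
import asyncio

async def f2(item):
    # Stand-in for the F2 activity function: process one work item.
    await asyncio.sleep(0)
    return item * 2

async def fan_out_fan_in(items):
    # Fan out: start one F2 task per item. Fan in: wait for all of them,
    # the role Task.WhenAll plays in the C# samples above.
    results = await asyncio.gather(*(f2(item) for item in items))
    return sum(results)   # the F3 aggregation step

total = asyncio.run(fan_out_fan_in([1, 2, 3]))
print(total)  # 12
```

The key property the durable version adds on top of this shape is that each completed `f2` result is checkpointed, so a crash never re-runs finished work.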
azure-functions | Functions Bindings Kafka Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka-trigger.md | Kafka messages are passed to the function as strings and string arrays that are In a Premium plan, you must enable runtime scale monitoring for the Kafka output to be able to scale out to multiple instances. To learn more, see [Enable runtime scaling](functions-bindings-kafka.md#enable-runtime-scaling). +You can't use the **Test/Run** feature of the **Code + Test** page in the Azure portal to work with Kafka triggers. You must instead send test events directly to the topic being monitored by the trigger. + For a complete set of supported host.json settings for the Kafka trigger, see [host.json settings](functions-bindings-kafka.md#hostjson-settings). [!INCLUDE [functions-bindings-kafka-connections](../../includes/functions-bindings-kafka-connections.md)] |
azure-functions | Functions Reference Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md | Azure Functions supports the following Python versions: | Functions version | Python\* versions | | -- | :--: |-| 4.x | 3.11 (preview) <br/>3.10<br/>3.9<br/> 3.8<br/>3.7 | +| 4.x | 3.11<br/>3.10<br/>3.9<br/>3.8<br/>3.7 | | 3.x | 3.9<br/> 3.8<br/>3.7 |-| 2.x | 3.7 | \* Official Python distributions The Azure Functions Python worker requires a specific set of libraries. You can > If your function app's *requirements.txt* file contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by the Azure Functions platform, and we regularly update it with new features and bug fixes. Manually installing an old version of worker in the *requirements.txt* file might cause unexpected issues. > [!NOTE]-> If your package contains certain libraries that might collide with worker's dependencies (for example, protobuf, tensorflow, or grpcio), configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies) to `1` in app settings to prevent your application from referring to worker's dependencies. This feature is in preview. +> If your package contains certain libraries that might collide with worker's dependencies (for example, protobuf, tensorflow, or grpcio), configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies) to `1` in app settings to prevent your application from referring to worker's dependencies. ### The Azure Functions Python library |
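The updated supported-versions table in the row above can be read as a simple lookup. This hypothetical helper exists only to illustrate the table's contents (the 4.x and 3.x rows of the revised article); the function name and shape are not part of any Azure SDK:

```python
def is_supported(functions_version, python_version):
    # Hypothetical helper encoding the support table above:
    # Functions runtime version -> Python versions it accepts.
    supported = {
        "4.x": {"3.7", "3.8", "3.9", "3.10", "3.11"},
        "3.x": {"3.7", "3.8", "3.9"},
    }
    return python_version in supported.get(functions_version, set())

print(is_supported("4.x", "3.11"))  # True (no longer preview per the diff)
print(is_supported("3.x", "3.11"))  # False
```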
azure-government | Documentation Government Manage Marketplace Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-manage-marketplace-partners.md | Currently, Azure Government Marketplace supports only the following offer types: - Virtual Machines > Bring your own license - Virtual Machines > Pay-as-you-go - Azure Application > Solution template / Managed app-- Azure containers > Bring your own license+- Azure containers (container images) > Bring your own license - IoT Edge modules > Bring your own license ## Publishing Make sure that any virtual machine extensions your solution template relies on a - Subscribe to the [Azure Government blog](https://blogs.msdn.microsoft.com/azuregov/) - Get help on Stack Overflow by using the [azure-gov](https://stackoverflow.com/questions/tagged/azure-gov) tag++ |
azure-maps | How To Use Feedback Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-feedback-tool.md | -Azure Maps has been available since May 2018. Azure Maps has been providing fresh map data, easy-to-use REST APIs, and powerful SDKs to support our enterprise customers with different kind of business use cases. The real world is changing every second, and it's crucial for us to provide a factual digital representation to our customers. Our customers that are planning to open or close facilities need our maps to update promptly. So, they can efficiently plan for delivery, maintenance, or customer service at the right facilities. We have created the Azure Maps data feedback site to empower our customers to provide direct data feedback. Customers' data feedback goes directly to our data providers and their map editors. They can quickly evaluate and incorporate feedback into our mapping products. +Azure Maps has been available since 2018. Azure Maps has been providing fresh map data, easy-to-use REST APIs, and powerful SDKs to support our enterprise customers with different kinds of business use cases. The real world is changing every second, and it's crucial for us to provide a factual digital representation to our customers. Our customers that are planning to open or close facilities need our maps to update promptly. So, they can efficiently plan for delivery, maintenance, or customer service at the right facilities. We have created the Azure Maps data feedback site to empower our customers to provide direct data feedback. Customers' data feedback goes directly to our data providers and their map editors. They can quickly evaluate and incorporate feedback into our mapping products. [Azure Maps Data feedback site] provides an easy way for our customers to provide map data feedback, especially on business points of interest and residential addresses. 
This article guides you on how to provide different kinds of feedback using the Azure Maps feedback site. ## Add a business place or a residential address -You may want to provide feedback about a missing point of interest or a residential address. There are two ways to do so. Open the Azure Map data feedback site, search for the missing location's coordinates, and then select **Add a place**. +You can provide feedback about a missing point of interest or a residential address. There are two ways to do so. Open the Azure Map data feedback site, search for the missing location's coordinates, and then select **Add a place**. - ![search missing location](./media/how-to-use-feedback-tool/search-poi.png) Or, you can interact with the map. Select the location to drop a pin at the coordinate then select **Add a place**. - ![add pin](./media/how-to-use-feedback-tool/add-poi.png) Once selected, you're directed to a form to provide the corresponding details for the place. - ![add a place](./media/how-to-use-feedback-tool/add-a-place.png) ## Fix a business place or a residential address The feedback site also allows you to search and locate a business place or an address. You can provide feedback to fix the address or the pin location, if they aren't correct. To provide feedback to fix the address, use the search bar to search for a business place or residential address. Select the location of your interest from the results list, then **Fix this place**. - ![search place to fix](./media/how-to-use-feedback-tool/fix-place.png) To provide feedback to fix the address, fill out the **Fix a place** form, then select **Submit**. To provide feedback to fix the address, fill out the **Fix a place** form, then If the pin location for the place is wrong, select the **The pin location is incorrect** checkbox. Move the pin to the correct location, and then select **Submit**. 
- ![move pin location](./media/how-to-use-feedback-tool/move-pin.png) ## Add a comment In addition to letting you search for a location, the feedback tool also lets you add a free form text comment for details related to the location. To add a comment, search for the location or select the location, write a comment in the **Add a comment** field then **Submit**. - ![add comment](./media/how-to-use-feedback-tool/add-comment.png) ## Track status You can also track the status of your request by selecting the **I want to track status** box and providing your email while making a request. You receive a tracking link in the email that provides an up-to-date status of your request. - ![feedback status](./media/how-to-use-feedback-tool/feedback-status.png) ## Next steps |
azure-maps | Webgl Custom Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md | -The Azure Maps Web SDK supports creating custom layers -using [WebGL]. WebGL is based -on [OpenGL ES] and enables rendering 2D and 3D -graphics in web browsers. +The Azure Maps Web SDK supports creating custom layers using [WebGL]. WebGL is based on [OpenGL ES] and enables rendering 2D and 3D graphics in web browsers. -Using WebGL, you can build high-performance interactive -graphics that render in the browser in real-time that support -scenarios like simulations, data visualization, animations and -3D modeling. +Using WebGL, you can build high-performance interactive graphics that render in the browser in real-time that support scenarios like simulations, data visualization, animations and 3D modeling. -Developers can access the WebGL context of the map during -rendering and use custom WebGL layers to integrate with other -libraries such as [three.js] and [deck.gl] -to provide enriched and interactive content on the map. +Developers can access the WebGL context of the map during rendering and use custom WebGL layers to integrate with other libraries such as [three.js] and [deck.gl] to provide enriched and interactive content on the map. ## Add a WebGL layer -Before you can add a WebGL layer to a map, you need to have an object -that implements the `WebGLRenderer` interface. First, create a WebGL -layer by providing an `id` and `renderer` object to the constructor, -then add the layer to the map to have it rendered. +Before you can add a WebGL layer to a map, you need to have an object that implements the `WebGLRenderer` interface. First, create a WebGL layer by providing an `id` and `renderer` object to the constructor, then add the layer to the map to have it rendered. 
The following sample code demonstrates how to add a WebGL layer to a map: map.layers.add(new atlas.layer.WebGLLayer("layerId", This sample renders a triangle on the map using a WebGL layer. -![A screenshot showing a triangle rendered on a map, using a WebGL layer.](./media/how-to-webgl-custom-layer/triangle.png) For a fully functional sample with source code, see [Simple 2D WebGL layer] in the Azure Maps Samples. -The map's camera matrix is used to project spherical Mercator point to -`gl` coordinates. Mercator point \[0, 0\] represents the top left corner -of the Mercator world and \[1, 1\] represents the bottom right corner. -When `renderingMode` is `"3d"`, the z coordinate is conformal. -A box with identical x, y, and z lengths in Mercator units would be -rendered as a cube. +The map's camera matrix is used to project a spherical Mercator point to `gl` coordinates. Mercator point \[0, 0\] represents the top left corner +of the Mercator world and \[1, 1\] represents the bottom right corner. When `renderingMode` is `"3d"`, the z coordinate is conformal. +A box with identical x, y, and z lengths in Mercator units would be rendered as a cube. The `MercatorPoint` class has `fromPosition`, `fromPositions`, and `toFloat32Array` static methods that can be used to convert a geospatial This sample renders an animated 3D parrot on the map. For a fully functional sample with source code, see [Three custom WebGL layer] in the Azure Maps Samples. -The `onAdd` function loads a `.glb` file into memory and instantiates -[three.js] objects such as Camera, Scene, Light, and a `THREE.WebGLRenderer`. +The `onAdd` function loads a `.glb` file into memory and instantiates [three.js] objects such as Camera, Scene, Light, and a `THREE.WebGLRenderer`. -The `render` function calculates the projection matrix of the camera -and renders the model to the scene. +The `render` function calculates the projection matrix of the camera and renders the model to the scene. 
>[!TIP] >-> - To have a continuous and smooth animation, you can trigger the repaint of -a single frame by calling `map.triggerRepaint()` in the `render` function. -> - To enable anti-aliasing simply set `antialias` to `true` as -one of the style options while creating the map. +> - To have a continuous and smooth animation, you can trigger the repaint of a single frame by calling `map.triggerRepaint()` in the `render` function. +> - To enable anti-aliasing, set `antialias` to `true` as one of the style options while creating the map. ## Render a 3D model using babylon.js The `onAdd` function instantiates a BABYLON engine and a scene. It then loads a The `render` function calculates the projection matrix of the camera and renders the model to the scene. -![A screenshot showing an example of rendering a 3D model using babylon.js.](./media/how-to-webgl-custom-layer/render-3d-model.png) For a fully functional sample with source code, see [Babylon custom WebGL layer] in the Azure Maps Samples. ## Render a deck.gl layer -A WebGL layer can be used to render layers from the [deck.gl] -library. The following sample demonstrates the data visualization of -people migration flow in the United States from county to county -within a certain time range. +A WebGL layer can be used to render layers from the [deck.gl] library. The following sample demonstrates the data visualization of +people migration flow in the United States from county to county within a certain time range. You need to add the following script file. class DeckGLLayer extends atlas.layer.WebGLLayer { This sample renders an arc-layer using the [deck.gl] library. -![A screenshot showing an arc-layer from the Deck G L library.](./media/how-to-webgl-custom-layer/arc-layer.png) For a fully functional sample with source code, see [Deck GL custom WebGL layer] in the Azure Maps Samples. |
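The Mercator-unit convention described above (top-left `[0, 0]`, bottom-right `[1, 1]`) can be sketched with plain projection math. This is a hedged illustration of what a conversion like `MercatorPoint.fromPosition` does conceptually, not the SDK's actual implementation; the function name here is our own.

```javascript
// Convert a [longitude, latitude] position into spherical Mercator
// unit coordinates, where [0, 0] is the top-left corner of the
// Mercator world and [1, 1] is the bottom-right corner.
function toMercatorUnits(position) {
  const [lng, lat] = position;
  // Longitude maps linearly: -180° → 0, +180° → 1.
  const x = (lng + 180) / 360;
  // Web Mercator y: 0 at the top (≈85.05°N), 1 at the bottom.
  const latRad = (lat * Math.PI) / 180;
  const y = 0.5 - Math.log(Math.tan(Math.PI / 4 + latRad / 2)) / (2 * Math.PI);
  return [x, y];
}

// Longitude/latitude [0, 0] lands at the center of the Mercator world.
console.log(toMercatorUnits([0, 0])); // → [ 0.5, 0.5 ]
```

Because the projection is conformal, equal x, y, and z extents in these units correspond to a cube when `renderingMode` is `"3d"`, as the text above notes.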
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | We strongly recommend always updating to the latest version, or opt in to the ## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|+| October 2023| **Linux** <ul><li>Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |None|1.28.0| | September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when AMA vm-extension is provisioned involving disable command</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | | August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ul>**Linux**<ul><li>Coming soon</li></ul>|1.19.0| Coming Soon | | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0|None| |
azure-monitor | Create Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/create-diagnostic-settings.md | Title: Create diagnostic settings in Azure Monitor description: Learn how to send Azure Monitor platform metrics and logs to Azure Monitor Logs, Azure Storage, or Azure Event Hubs with diagnostic settings.--++ -# [Azure portal](#tab/portal) +> [!IMPORTANT] +>The Retention Policy as set in the Diagnostic Setting settings is now deprecated and can no longer be used. Use the Azure Storage Lifecycle Policy to manage the length of time that your logs are retained. For more information, see [Migrate diagnostic settings storage retention to Azure Storage lifecycle management](./migrate-to-azure-storage-lifecycle-policy.md) ++## [Azure portal](#tab/portal) You can configure diagnostic settings in the Azure portal either from the Azure Monitor menu or from the menu for the resource. You can configure diagnostic settings in the Azure portal either from the Azure After a few moments, the new setting appears in your list of settings for this resource. Logs are streamed to the specified destinations as new event data is generated. It might take up to 15 minutes between when an event is emitted and when it [appears in a Log Analytics workspace](../logs/data-ingestion-time.md). -# [PowerShell](#tab/powershell) +## [PowerShell](#tab/powershell) Use the [New-AzDiagnosticSetting](/powershell/module/az.monitor/new-azdiagnosticsetting) cmdlet to create a diagnostic setting with [Azure PowerShell](../powershell-samples.md). See the documentation for this cmdlet for descriptions of its parameters. > [!IMPORTANT] > You can't use this method for an activity log. Instead, use [Create diagnostic setting in Azure Monitor by using an Azure Resource Manager template](./resource-manager-diagnostic-settings.md) to create a Resource Manager template and deploy it with PowerShell. 
-The following example PowerShell cmdlet creates a diagnostic setting for all logs and metrics for a key vault by using Log Analytics Workspace. +The following example PowerShell cmdlet creates a diagnostic setting for all logs, or for audit logs, and metrics for a key vault by using Log Analytics Workspace. ```powershell $KV= Get-AzKeyVault -ResourceGroupName <resource group name> -VaultName <key vault name> $Law= Get-AzOperationalInsightsWorkspace -ResourceGroupName <resource group name $metric = @() $log = @()-$metric += New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category AllMetrics -RetentionPolicyDay 30 -RetentionPolicyEnabled $true -$log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup allLogs -RetentionPolicyDay 30 -RetentionPolicyEnabled $true -$log += New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup audit -RetentionPolicyDay 30 -RetentionPolicyEnabled $true +$metric += New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category AllMetrics +# For all available logs, use: +$log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup allLogs +# or, for audit logs, use: +$log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -CategoryGroup audit New-AzDiagnosticSetting -Name 'KeyVault-Diagnostics' -ResourceId $KV.ResourceId -WorkspaceId $Law.ResourceId -Log $log -Metric $metric -Verbose ``` -# [CLI](#tab/cli) +## [CLI](#tab/cli) Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command to create a diagnostic setting with the [Azure CLI](/cli/azure/monitor). See the documentation for this command for descriptions of its parameters. 
The following example command creates a diagnostic setting by using all three de To specify [resource-specific mode](resource-logs.md#resource-specific) if the service supports it, add the `export-to-resource-specific` parameter with a value of `true`.` -**CMD client** --```azurecli -az monitor diagnostic-settings create ^ name KeyVault-Diagnostics ^resource /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault ^logs "[{""category"": ""AuditEvent"",""enabled"": true}]" ^metrics "[{""category"": ""AllMetrics"",""enabled"": true}]" ^storage-account /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount ^workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace ^event-hub-rule /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey ^export-to-resource-specific true-``` --**PowerShell client** --```azurecli -az monitor diagnostic-settings create ` name KeyVault-Diagnostics `resource /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault `logs '[{""category"": ""AuditEvent"",""enabled"": true}]' `metrics '[{""category"": ""AllMetrics"",""enabled"": true}]' `storage-account /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount `workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace `event-hub-rule 
/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey `export-to-resource-specific true-``` --**Bash client** - ```azurecli az monitor diagnostic-settings create \ --name KeyVault-Diagnostics \resource /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault \+--resource /subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.KeyVault/vaults/mykeyvault \ --logs '[{"category": "AuditEvent","enabled": true}]' \ --metrics '[{"category": "AllMetrics","enabled": true}]' \storage-account /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount \workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace \event-hub-rule /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.EventHub/namespaces/myeventhub/authorizationrules/RootManageSharedAccessKey \+--storage-account /subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.Storage/storageAccounts/<storage account name> \ +--workspace /subscriptions/<subscription ID>/resourcegroups/<resource group name>/providers/microsoft.operationalinsights/workspaces/<log analytics workspace name> \ +--event-hub-rule /subscriptions/<subscription ID>/resourceGroups/<resource group name>/providers/Microsoft.EventHub/namespaces/<event hub namespace>/authorizationrules/RootManageSharedAccessKey \ +--event-hub <event hub name> \ --export-to-resource-specific true ``` -# [Resource Manager](#tab/arm) +## [Resource Manager](#tab/arm) The following JSON template provides an example for creating a diagnostic setting to send all audit logs to a log analytics 
workspace. Keep in mind that the `apiVersion` can change depending on the resource in the scope. The following JSON template provides an example for creating a diagnostic settin { "category": null, "categoryGroup": "audit",- "enabled": true, - "retentionPolicy": { - "enabled": false, - "days": 0 - } + "enabled": true } ] } The following JSON template provides an example for creating a diagnostic settin } ``` -# [REST API](#tab/api) +## [REST API](#tab/api) To create or update diagnostic settings by using the [Azure Monitor REST API](/rest/api/monitor/), see [Diagnostic settings](/rest/api/monitor/diagnosticsettings). -# [Azure Policy](#tab/policy) +## [Azure Policy](#tab/policy) For details on using Azure Policy to create diagnostic settings at scale, see [Create diagnostic settings at scale by using Azure Policy](diagnostic-settings-policy.md). Every effort is made to ensure all log data is sent correctly to your destinatio ## Next steps - [Review how to work with diagnostic settings in Azure Monitor](./diagnostic-settings.md)-+- [Migrate diagnostic settings storage retention to Azure Storage lifecycle management](./migrate-to-azure-storage-lifecycle-policy.md) - [Read more about Azure platform logs](./platform-logs-overview.md) |
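The `--logs` and `--metrics` arguments in the CLI example, and the log settings in the ARM template, share the same JSON shapes: an array of objects with either a `category` or a `categoryGroup`, plus an `enabled` flag. A small sketch of building those payloads programmatically follows; the helper names are our own, for illustration only.

```javascript
// Build the JSON payloads used for log and metric settings, matching
// the shapes shown in the CLI and ARM examples above.
function categorySetting(category) {
  // An individual category, e.g. "AuditEvent" or "AllMetrics".
  return { category, enabled: true };
}

function categoryGroupSetting(categoryGroup) {
  // A category group, e.g. "allLogs" or "audit".
  return { categoryGroup, enabled: true };
}

// Individual categories, as passed to --logs / --metrics in the CLI example.
const logs = JSON.stringify([categorySetting("AuditEvent")]);
const metrics = JSON.stringify([categorySetting("AllMetrics")]);

// A category group, as used in the ARM template example.
const auditGroup = JSON.stringify([categoryGroupSetting("audit")]);

console.log(logs);       // → [{"category":"AuditEvent","enabled":true}]
console.log(auditGroup); // → [{"categoryGroup":"audit","enabled":true}]
```

Generating the strings this way avoids the quoting pitfalls the removed CMD and PowerShell client variants illustrate.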
azure-monitor | Data Collection Rule Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md | Title: Structure of a data collection rule in Azure Monitor (preview) + Title: Structure of a data collection rule in Azure Monitor description: Details on the structure of different kinds of data collection rule in Azure Monitor. Last updated 08/08/2023 ms.reviwer: nikeist- -# Structure of a data collection rule in Azure Monitor (preview) +# Structure of a data collection rule in Azure Monitor + [Data collection rules (DCRs)](data-collection-rule-overview.md) determine how to collect and process telemetry sent to Azure. Some DCRs will be created and managed by Azure Monitor. You might create other DCRs to customize data collection for your particular requirements. This article describes the structure of DCRs for creating and editing DCRs in those cases where you need to work with them directly. ## Custom logs The definition indicates which streams should be sent to which destinations. ## Next steps [Overview of data collection rules and methods for creating them](data-collection-rule-overview.md)+ |
azure-monitor | Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md | The following table provides unique requirements for each destination including ## Controlling costs -There's a cost for collecting data in a Log Analytics workspace, so you should only collect the categories you require for each service. The data volume for resource logs varies significantly between services. +There's a cost for collecting data in a Log Analytics workspace, so only collect the categories you require for each service. The data volume for resource logs varies significantly between services. You might also not want to collect platform metrics from Azure resources because this data is already being collected in Metrics. Only configure your diagnostic data to collect metrics if you need metric data in the workspace for more complex analysis with log queries. Diagnostic settings don't allow granular filtering of resource logs. You might also not want to collect platform metrics from Azure resources because ## Next steps - [Create diagnostic settings for Azure Monitor platform metrics and logs](./create-diagnostic-settings.md)-+- [Migrate diagnostic settings storage retention to Azure Storage lifecycle management](./migrate-to-azure-storage-lifecycle-policy.md) - [Read more about Azure platform logs](./platform-logs-overview.md) |
azure-monitor | Data Ingestion Time | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-time.md | Azure data adds more time to become available at a data collection endpoint for - **Azure platform metrics** are available in under a minute in the metrics database, but they take another 3 minutes to be exported to the data collection endpoint. - **Resource logs** typically add 30 to 90 seconds, depending on the Azure service. Some Azure services (specifically, Azure SQL Database and Azure Virtual Network) currently report their logs at 5-minute intervals. Work is in progress to improve this time further. To examine this latency in your environment, see the [query that follows](#check-ingestion-time).-- **Activity log** data is ingested in 30 seconds when you use the recommended subscription-level diagnostic settings to send them into Azure Monitor Logs. They might take 10 to 15 minutes if you instead use the legacy integration.+- **Activity logs** are available for analysis in 3 to 10 minutes. ### Management solutions collection |
azure-monitor | Log Analytics Workspace Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md | Title: Log Analytics workspace overview description: Overview of Log Analytics workspace, which stores data for Azure Monitor Logs. na Previously updated : 10/01/2022 Last updated : 10/24/2023 # Log Analytics workspace overview |
azure-netapp-files | Azure Netapp Files Service Levels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md | Service levels are an attribute of a capacity pool. Service levels are defined a ## Supported service levels -Azure NetApp Files supports three service levels: *Ultra*, *Premium*, and *Standard*. --* <a name="Ultra"></a>Ultra storage +Azure NetApp Files supports three service levels: *Ultra*, *Premium*, and *Standard*. +* <a name="Ultra"></a>Ultra storage: The Ultra service level provides up to 128 MiB/s of throughput per 1 TiB of capacity provisioned. -* <a name="Premium"></a>Premium storage -+* <a name="Premium"></a>Premium storage: The Premium service level provides up to 64 MiB/s of throughput per 1 TiB of capacity provisioned. -* <a name="Standard"></a>Standard storage -+* <a name="Standard"></a>Standard storage: The Standard service level provides up to 16 MiB/s of throughput per 1 TiB of capacity provisioned. - * Standard storage with cool access + * Standard storage with cool access: The throughput experience for this service level is the same as the Standard service level for data that is in the hot tier. But it may differ when data that resides in the cool tier is accessed. For more information, see [Standard storage with cool access in Azure NetApp Files](cool-access-introduction.md#effects-of-cool-access-on-data). ## Throughput limits |
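The per-TiB throughput figures above lend themselves to a quick calculation. The sketch below is our own illustration (the names are ours, and it deliberately ignores other constraints such as per-volume throughput caps); it simply multiplies provisioned capacity by the service level's MiB/s-per-TiB factor.

```javascript
// MiB/s of throughput per provisioned TiB, per service level,
// as listed above. Real deployments are also subject to other
// limits (for example, maximum throughput per volume).
const THROUGHPUT_PER_TIB = {
  ultra: 128,
  premium: 64,
  standard: 16,
};

function maxThroughputMiBps(serviceLevel, provisionedTiB) {
  const factor = THROUGHPUT_PER_TIB[serviceLevel];
  if (factor === undefined) {
    throw new Error(`Unknown service level: ${serviceLevel}`);
  }
  return factor * provisionedTiB;
}

// 4 TiB provisioned at Premium → up to 256 MiB/s.
console.log(maxThroughputMiBps("premium", 4)); // → 256
```

For Standard storage with cool access, this simple model only describes data in the hot tier, as the text above explains.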
azure-netapp-files | Faq Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md | However, you can map multiple NetApp accounts that are under the same subscripti ## Does Azure NetApp Files support Microsoft Entra ID? -Both [Microsoft Entra Domain Services](../active-directory-domain-services/overview.md) and [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) are supported. You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can reside in Azure as virtual machines, or on premises via ExpressRoute or S2S VPN. Azure NetApp Files doesn't support AD join for [Microsoft Entra ID](../active-directory/fundamentals/index.yml) at this time. +Both [Microsoft Entra Domain Services](../active-directory-domain-services/overview.md) and [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) are supported. You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can reside in Azure as virtual machines, or on premises via ExpressRoute or S2S VPN. Azure NetApp Files doesn't support AD join for [Microsoft Entra ID](../active-directory/fundamentals/index.yml) at this time. However, you can use Microsoft Entra ID with [hybrid identities](/entr). If you're using Azure NetApp Files with Microsoft Entra Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account. For more information about this update, see [KB5021130: How to manage the Netlog ## What versions of Windows Server Active Directory are supported? -Azure NetApp Files supports Windows Server 2008r2SP1-2019 versions of Active Directory Domain Services. +Azure NetApp Files supports Windows Server 2012-2022 versions of Active Directory Domain Services. 
## I'm having issues connecting to my SMB share. What should I do? |
azure-resource-manager | Move Support Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md | Before starting your move operation, review the [checklist](./move-resource-grou > | applicationgatewaywebapplicationfirewallpolicies | No | No | No | > | applicationsecuritygroups | **Yes** | **Yes** | No | > | azurefirewalls | No | No | No |-> | bastionhosts | No | No | No | +> | bastionhosts | **Yes** | No | No | > | bgpservicecommunities | No | No | No | > | connections | **Yes** | **Yes** | No | > | ddoscustompolicies | **Yes** | **Yes** | No | |
azure-signalr | Howto Enable Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-enable-geo-replication.md | For **server connections**, the failover and recovery work the same way as it do > [!NOTE] > * This failover mechanism is for Azure SignalR service. Regional outages of app server are beyond the scope of this document. +## Disable or enable the replica endpoint +When setting up a replica, you have the option to enable or disable its endpoint. If it's disabled, the primary FQDN's DNS resolution won't include the replica, and therefore, traffic won't be directed to it. ++![Diagram of Azure SignalR replica endpoint setting. ](./media/howto-enable-geo-replication/signalr-replica-endpoint-setting.png "Replica Endpoint Setting") ++You can also enable or disable the endpoint after it's been created. On the primary resource's replicas blade, click the ellipsis button on the right side of the replica and choose **Enable Endpoint** or **Disable Endpoint**: ++![Diagram of Azure SignalR replica endpoint modification. ](./media/howto-enable-geo-replication/signalr-replica-endpoint-modify.png "Replica Endpoint Modify") ++Before deleting a replica, consider disabling its endpoint first. Over time, existing connections will disconnect. Once no new connections arrive, the replica eventually becomes idle. This ensures a seamless deletion process. + +This feature is also useful for troubleshooting regional issues. ++> [!NOTE] +> * Due to the DNS cache, it may take several minutes for the DNS update to take effect. +> * Existing connections remain unaffected until they disconnect. + ## Impact on performance after adding replicas After replicas are enabled, clients will naturally distribute based on their geographical locations. 
While SignalR takes on the responsibility to synchronize data across these replicas, you'll be pleased to know that the associated overhead on [Server Load](signalr-concept-performance.md#quick-evaluation-using-metrics) is minimal for most common use cases. |
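The effect of disabling a replica endpoint can be pictured as filtering which regional endpoints the primary FQDN resolves to. The sketch below is purely conceptual, with hypothetical data and function names of our own, not the service's actual DNS behavior.

```javascript
// Conceptual model: each replica has a regional endpoint and an
// "endpoint enabled" flag. Resolving the primary FQDN only returns
// replicas whose endpoint is enabled, so a disabled replica receives
// no new traffic (existing connections are unaffected until they
// disconnect, after which the replica drains and becomes idle).
const replicas = [
  { region: "eastus", endpoint: "contoso-eastus.example.net", endpointEnabled: true },
  { region: "westus", endpoint: "contoso-westus.example.net", endpointEnabled: false },
];

function resolvableEndpoints(replicaList) {
  return replicaList
    .filter((r) => r.endpointEnabled)
    .map((r) => r.endpoint);
}

// Only the enabled replica is returned; westus drains of new traffic.
console.log(resolvableEndpoints(replicas)); // → [ 'contoso-eastus.example.net' ]
```

This is why disabling the endpoint before deleting a replica gives the seamless drain described above.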
azure-web-pubsub | Howto Enable Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-enable-geo-replication.md | Once the issue in `eastus` is resolved and the region is back online, the health This failover and recovery process is **automatic** and requires no manual intervention. +## Disable or enable the replica endpoint +When setting up a replica, you have the option to enable or disable its endpoint. If it's disabled, the primary FQDN's DNS resolution won't include the replica, and therefore, traffic won't be directed to it. ++![Diagram of Azure Web PubSub replica endpoint setting. ](./media/howto-enable-geo-replication/web-pubsub-replica-endpoint-setting.png "Replica Endpoint Setting") ++You can also enable or disable the endpoint after it's been created. On the primary resource's replicas blade, click the ellipsis button on the right side of the replica and choose **Enable Endpoint** or **Disable Endpoint**: ++![Diagram of Azure Web PubSub replica endpoint modification. ](./media/howto-enable-geo-replication/web-pubsub-replica-endpoint-modify.png "Replica Endpoint Modify") ++Before deleting a replica, consider disabling its endpoint first. Over time, existing connections will disconnect. Once no new connections arrive, the replica eventually becomes idle. This ensures a seamless deletion process. + +This feature is also useful for troubleshooting regional issues. ++> [!NOTE] +> * Due to the DNS cache, it may take several minutes for the DNS update to take effect. +> * Existing connections remain unaffected until they disconnect. + ## Impact on performance after enabling geo-replication feature After replicas are enabled, clients will naturally distribute based on their geographical locations. 
While Web PubSub takes on the responsibility to synchronize data across these replicas, you'll be pleased to know that the associated overhead on [Server Load](concept-performance.md#quick-evaluation-using-metrics) is minimal for most common use cases. |
bastion | Bastion Connect Vm Scale Set | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-scale-set.md | Make sure that you have set up an Azure Bastion host for the virtual network in ## <a name="rdp"></a>Connect -This section shows you the basic steps to connect to your virtual machine scale set. +This section helps you connect to your virtual machine scale set. 1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine scale set that you want to connect to. This section shows you the basic steps to connect to your virtual machine scale :::image type="content" source="./media/bastion-connect-vm-scale-set/select-connect.png" alt-text="Screenshot shows select the connect button and choose Bastion from the dropdown." lightbox="./media/bastion-connect-vm-scale-set/select-connect.png"::: -1. On the **Bastion** page, fill in the required settings. The settings you can select depend on the virtual machine to which you're connecting, and the [Bastion SKU](configuration-settings.md#skus) tier that you're using. The Standard SKU gives you more connection options than the Basic SKU. For more information about settings, see [Bastion configuration settings](configuration-settings.md). +1. On the **Bastion** page, fill in the required settings. The settings you can select depend on the virtual machine to which you're connecting, and the [Bastion SKU](configuration-settings.md#skus) tier that you're using. For more information about settings and SKUs, see [Bastion configuration settings](configuration-settings.md). 1. After filling in the values on the Bastion page, select **Connect** to connect to the instance. |
bastion | Bastion Connect Vm Ssh Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md | description: Learn how to use Azure Bastion to connect to Linux VM using SSH. Previously updated : 04/25/2023 Last updated : 10/13/2023 # Create an SSH connection to a Linux VM using Azure Bastion -This article shows you how to securely and seamlessly create an SSH connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. +This article shows you how to securely and seamlessly create an SSH connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md) overview article. When connecting to a Linux virtual machine using SSH, you can use both username/ Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more information, see [Create an Azure Bastion host](./tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in this virtual network. -The connection settings and features that are available depend on the Bastion SKU you're using. +The connection settings and features that are available depend on the Bastion SKU you're using. Make sure your Bastion deployment is using the required SKU. 
* To see the available features and settings per SKU tier, see the [SKUs and features](bastion-overview.md#sku) section of the Bastion overview article. * To check the SKU tier of your Bastion deployment and upgrade if necessary, see [Upgrade a Bastion SKU](upgrade-sku.md). |
bastion | Bastion Connect Vm Ssh Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-windows.md | description: Learn how to use Azure Bastion to connect to Windows VM using SSH. Previously updated : 10/18/2022 Last updated : 10/13/2023 -Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md). +Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md). > [!NOTE] > If you want to create an SSH connection to a Windows VM, Azure Bastion must be configured using the Standard SKU. In order to make a connection, the following roles are required: In order to connect to the Windows VM via SSH, you must have the following ports open on your VM: * Inbound port: SSH (22) *or*-* Inbound port: Custom value (you will then need to specify this custom port when you connect to the VM via Azure Bastion) +* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion) See the [Azure Bastion FAQ](bastion-faq.md) for additional requirements. Currently, Azure Bastion only supports connecting to Windows VMs via SSH using * :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-windows/connect.png"::: -1. On the **Bastion** connection page, click the **Connection Settings** arrow to expand all the available settings. If you are using a Bastion **Standard** SKU, you have more available settings than a Basic SKU. +1. On the **Bastion** connection page, click the **Connection Settings** arrow to expand all the available settings. Notice that if you're using the Bastion **Standard** SKU, you have more available settings. :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connection-settings.png" alt-text="Screenshot shows connection settings."::: |
bastion | Bastion Create Host Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md | description: Learn how to deploy Azure Bastion using PowerShell. Previously updated : 06/08/2023 Last updated : 10/05/2023 # Customer intent: As someone with a networking background, I want to deploy Bastion and connect to a VM. Verify that you have an Azure subscription. If you don't already have an Azure s You can use the following example values when creating this configuration, or you can substitute your own. -**Basic VNet and VM values:** +**Example VNet and VM values:** |**Name** | **Value** | | | | This section helps you create a virtual network, subnets, and deploy Azure Basti $vnet = Get-AzVirtualNetwork -Name "TestVNet1" -ResourceGroupName "TestRG1" ``` - Add the subnet. + Add the subnet. ```azurepowershell-interactive Add-AzVirtualNetworkSubnetConfig ` This section helps you create a virtual network, subnets, and deploy Azure Basti -AllocationMethod Static -Sku Standard ``` -1. Create a new Azure Bastion resource in the AzureBastionSubnet using the [New-AzBastion](/powershell/module/az.network/new-azbastion) command. The following example uses the **Basic SKU**. However, you can also deploy Bastion using the Standard SKU by changing the -Sku value to "Standard". The Standard SKU lets you configure more Bastion features and connect to VMs using more connection types. For more information, see [Bastion SKUs](configuration-settings.md#skus). +1. Create a new Azure Bastion resource in the AzureBastionSubnet using the [New-AzBastion](/powershell/module/az.network/new-azbastion) command. The following example uses the **Basic SKU**. However, you can also deploy Bastion using the Standard SKU by changing the -Sku value to "Standard". The Standard SKU lets you configure more Bastion features and connect to VMs using more connection types. 
You can also deploy Bastion automatically using the [Developer SKU](quickstart-developer-sku.md). For more information, see [Bastion SKUs](configuration-settings.md#skus). ```azurepowershell-interactive New-AzBastion -ResourceGroupName "TestRG1" -Name "VNet1-bastion" ` |
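The New-AzBastion snippet in the row above is truncated at the row boundary. A minimal sketch of the full deployment command, following the article's pattern and assuming a previously created public IP named "VNet1-ip" (the IP name is illustrative), might look like:

```azurepowershell-interactive
# Deploy Bastion (Basic SKU) into the AzureBastionSubnet of TestVNet1.
# The public IP name "VNet1-ip" is an assumption for illustration.
New-AzBastion -ResourceGroupName "TestRG1" -Name "VNet1-bastion" `
    -PublicIpAddressRgName "TestRG1" -PublicIpAddressName "VNet1-ip" `
    -VirtualNetworkRgName "TestRG1" -VirtualNetworkName "TestVNet1" `
    -Sku "Basic"
```

To deploy the Standard SKU instead, change the -Sku value to "Standard", as the article notes.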
bastion | Bastion Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md | description: Learn about frequently asked questions for Azure Bastion. Previously updated : 10/03/2023 Last updated : 10/13/2023 # Azure Bastion FAQ No. UDR isn't supported on an Azure Bastion subnet. For scenarios that include both Azure Bastion and Azure Firewall/Network Virtual Appliance (NVA) in the same virtual network, you don't need to force traffic from an Azure Bastion subnet to Azure Firewall because the communication between Azure Bastion and your VMs is private. For more information, see [Accessing VMs behind Azure Firewall with Bastion](https://azure.microsoft.com/blog/accessing-virtual-machines-behind-azure-firewall-with-azure-bastion/). -### <a name="upgradesku"></a> Can I upgrade from a Basic SKU to a Standard SKU? +### <a name="all-skus"></a> What SKU should I use? ++Azure Bastion has multiple SKUs. You should select a SKU based on your connection and feature requirements. For a full list of SKU tiers and supported connections and features, see the [Configuration settings](configuration-settings.md#skus) article. ++### <a name="upgradesku"></a> Can I upgrade a SKU? Yes. For steps, see [Upgrade a SKU](upgrade-sku.md). For more information about SKUs, see the [Configuration settings](configuration-settings.md#skus) article. -### <a name="downgradesku"></a> Can I downgrade from a Standard SKU to a Basic SKU? +### <a name="downgradesku"></a> Can I downgrade a SKU? -No. Downgrading from a Standard SKU to a Basic SKU isn't supported. For more information about SKUs, see the [Configuration settings](configuration-settings.md#skus) article. +No. Downgrading a SKU isn't supported. For more information about SKUs, see the [Configuration settings](configuration-settings.md#skus) article. ### <a name="virtual-desktop"></a>Does Bastion support connectivity to Azure Virtual Desktop? |
bastion | Bastion Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md | - Azure Bastion is a service you deploy that lets you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly over TLS from the Azure portal or via native client. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software. +Azure Bastion is a fully managed PaaS service that you provision to securely connect to virtual machines via private IP address. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly over TLS from the Azure portal, or via the native SSH or RDP client already installed on your local computer. When you connect via Azure Bastion, your virtual machines don't need a public IP address, agent, or special client software. -Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. +Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtual network for which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. ++The following diagram shows connections to virtual machines via a Bastion deployment that uses a Basic or Standard SKU. :::image type="content" source="./media/bastion-overview/architecture.png" alt-text="Diagram showing Azure Bastion architecture." 
lightbox="./media/bastion-overview/architecture.png"::: Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtua |Benefit |Description| |--|--| |RDP and SSH through the Azure portal|You can get to the RDP and SSH session directly in the Azure portal using a single-click seamless experience.|-|Remote Session over TLS and firewall traversal for RDP/SSH|Azure Bastion uses an HTML5 based web client that is automatically streamed to your local device. Your RDP/SSH session is over TLS on port 443. This enables the traffic to traverse firewalls more securely. Bastion supports TLS 1.2 and above. Older TLS versions are not supported.| +|Remote Session over TLS and firewall traversal for RDP/SSH|Azure Bastion uses an HTML5 based web client that is automatically streamed to your local device. Your RDP/SSH session is over TLS on port 443. This enables the traffic to traverse firewalls more securely. Bastion supports TLS 1.2 and above. Older TLS versions aren't supported.| |No Public IP address required on the Azure VM| Azure Bastion opens the RDP/SSH connection to your Azure VM by using the private IP address on your VM. You don't need a public IP address on your virtual machine.| |No hassle of managing Network Security Groups (NSGs)| You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you need to securely connect to your virtual machines. 
For more information about NSGs, see [Network Security Groups](../virtual-network/network-security-groups-overview.md#security-rules).| |No need to manage a separate bastion host on a VM |Azure Bastion is a fully managed platform PaaS service from Azure that is hardened internally to provide you secure RDP/SSH connectivity.| Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtua ## <a name="sku"></a>SKUs -Azure Bastion has two available SKUs, Basic and Standard. For more information, including how to upgrade a SKU, see the [Configuration settings](configuration-settings.md#skus) article. --The following table shows features and corresponding SKUs. +Azure Bastion offers multiple SKU tiers. The following table shows features and corresponding SKUs. [!INCLUDE [Azure Bastion SKUs](../../includes/bastion-sku.md)] +For more information about SKUs, including how to upgrade a SKU and information about the new Developer SKU, see the [Configuration settings](configuration-settings.md#skus) article. + ## <a name="architecture"></a>Architecture -Azure Bastion is deployed to a virtual network and supports virtual network peering. Specifically, Azure Bastion manages RDP/SSH connectivity to VMs created in the local or peered virtual networks. +This section applies to all SKU tiers except the Developer SKU, which is deployed differently. Azure Bastion is deployed to a virtual network and supports virtual network peering. Specifically, Azure Bastion manages RDP/SSH connectivity to VMs created in the local or peered virtual networks. RDP and SSH are some of the fundamental means through which you can connect to your workloads running in Azure. Exposing RDP/SSH ports over the Internet isn't desired and is seen as a significant threat surface. This is often due to protocol vulnerabilities. To contain this threat surface, you can deploy bastion hosts (also known as jump-servers) at the public side of your perimeter network. 
Bastion host servers are designed and configured to withstand attacks. Bastion servers also provide RDP and SSH connectivity to the workloads sitting behind the bastion, as well as further inside the network. -Currently, by default, new Bastion deployments don't support zone redundancies. Previously deployed bastions may or may not be zone-redundant. The exceptions are Bastion deployments in Korea Central and Southeast Asia, which do support zone redundancies. +Currently, by default, new Bastion deployments don't support zone redundancies. Previously deployed bastions might or might not be zone-redundant. The exceptions are Bastion deployments in Korea Central and Southeast Asia, which do support zone redundancies. :::image type="content" source="./media/bastion-overview/architecture.png" alt-text="Diagram showing Azure Bastion architecture." lightbox="./media/bastion-overview/architecture.png"::: -This figure shows the architecture of an Azure Bastion deployment. In this diagram: +This figure shows the architecture of an Azure Bastion deployment. This diagram doesn't apply to the Developer SKU. In this diagram: * The Bastion host is deployed in the virtual network that contains the AzureBastionSubnet subnet that has a minimum /26 prefix. * The user connects to the Azure portal using any HTML5 browser. For frequently asked questions, see the Bastion [FAQ](bastion-faq.md). ## Next steps -* [Quickstart: Deploy Bastion using default settings](quickstart-host-portal.md). -* [Tutorial: Deploy Bastion using specified settings](tutorial-create-host-portal.md). -* [Learn module: Introduction to Azure Bastion](/training/modules/intro-to-azure-bastion/). -* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure. 
+* [Quickstart: Deploy Bastion automatically - Basic SKU](quickstart-host-portal.md) +* [Quickstart: Deploy Bastion automatically - Developer SKU](quickstart-developer-sku.md) +* [Tutorial: Deploy Bastion using specified settings](tutorial-create-host-portal.md) +* [Learn module: Introduction to Azure Bastion](/training/modules/intro-to-azure-bastion/) +* Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure * [Learn more about Azure network security](../networking/security/index.yml) |
bastion | Configuration Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md | The sections in this article discuss the resources and settings for Azure Bastio ## <a name="skus"></a>SKUs -A SKU is also known as a Tier. Azure Bastion supports two SKU types: Basic and Standard. The SKU is configured in the Azure portal during the workflow when you configure Bastion. You can [upgrade a Basic SKU to a Standard SKU](#upgradesku). +A SKU is also known as a Tier. Azure Bastion supports multiple SKU tiers. When you configure Bastion, you select the SKU tier. You decide the SKU tier based on the features that you want to use. The following table shows the availability of features per corresponding SKU. -* The **Basic SKU** provides base functionality, enabling Azure Bastion to manage RDP/SSH connectivity to virtual machines (VMs) without exposing public IP addresses on the target application VMs. -* The **Standard SKU** enables premium features. -The following table shows the availability of features per corresponding SKU. 
### Specify SKU | Method | SKU Value | Links | | | | |-| Azure portal | Tier - Basic or Standard | [Tutorial](tutorial-create-host-portal.md) | +| Azure portal | Tier - Developer | [Quickstart](quickstart-developer-sku.md)| | Azure portal | Tier - Basic| [Quickstart](quickstart-host-portal.md) |+| Azure portal | Tier - Basic or Standard | [Tutorial](tutorial-create-host-portal.md) | | Azure PowerShell | Tier - Basic or Standard |[How-to](bastion-create-host-powershell.md) | | Azure CLI | Tier - Basic or Standard | [How-to](create-host-cli.md) | ### <a name="upgradesku"></a>Upgrade a SKU -Azure Bastion supports upgrading from a Basic to a Standard SKU. +You can always [upgrade a SKU](upgrade-sku.md) to add more features. > [!NOTE]-> Downgrading from a Standard SKU to a Basic SKU is not supported. To downgrade, you must delete and recreate Azure Bastion. +> Downgrading a SKU is not supported. To downgrade, you must delete and recreate Azure Bastion. > You can configure this setting using the following method: You can configure this setting using the following method: ## <a name="subnet"></a>Azure Bastion subnet - >[!IMPORTANT] - >For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work, but we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future. - > +>[!IMPORTANT] +>For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). 
All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work, but we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future. +> -Azure Bastion requires a dedicated subnet: **AzureBastionSubnet**. You must create this subnet in the same virtual network that you want to deploy Azure Bastion to. The subnet must have the following configuration: +When you deploy Azure Bastion using any SKU except the Developer SKU, Bastion requires a dedicated subnet named **AzureBastionSubnet**. You must create this subnet in the same virtual network that you want to deploy Azure Bastion to. The subnet must have the following configuration: * Subnet name must be *AzureBastionSubnet*. * Subnet size must be /26 or larger (/25, /24 etc.). * For host scaling, a /26 or larger subnet is recommended. Using a smaller subnet space limits the number of scale units. For more information, see the [Host scaling](#instance) section of this article.-* The subnet must be in the same VNet and resource group as the bastion host. +* The subnet must be in the same virtual network and resource group as the bastion host. * The subnet can't contain other resources. You can configure this setting using the following methods: You can configure this setting using the following methods: ## <a name="public-ip"></a>Public IP address -Azure Bastion requires a Public IP address. The Public IP must have the following configuration: +Azure Bastion deployments require a Public IP address, except Developer SKU deployments. The Public IP must have the following configuration: * The Public IP address SKU must be **Standard**. * The Public IP address assignment/allocation method must be **Static**. Azure Bastion requires a Public IP address. 
The Public IP must have the followin You can configure this setting using the following methods: -| Method | Value | Links | Requires Standard SKU| -| | | | -- | -| Azure portal | Public IP address |[Azure portal](https://portal.azure.com)| Yes | -| Azure PowerShell | -PublicIpAddress| [cmdlet](/powershell/module/az.network/new-azbastion#parameters) | Yes | -| Azure CLI | --public-ip create |[command](/cli/azure/network/public-ip) | Yes | +| Method | Value | Links | +| | | | +| Azure portal | Public IP address |[Azure portal](https://portal.azure.com)| +| Azure PowerShell | -PublicIpAddress| [cmdlet](/powershell/module/az.network/new-azbastion#parameters) | +| Azure CLI | --public-ip create |[command](/cli/azure/network/public-ip) | ## <a name="instance"></a>Instances and host scaling -An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify the number of instances. This is called **host scaling**. +An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Bastion Standard SKU, you can specify the number of instances. This is called **host scaling**. Each instance can support 20 concurrent RDP connections and 40 concurrent SSH connections for medium workloads (see [Azure subscription limits and quotas](../azure-resource-manager/management/azure-subscription-service-limits.md) for more information). 
The number of connections per instances depends on what actions you're taking when connected to the client VM. For example, if you're doing something data intensive, it creates a larger load for the instance to process. Once the concurrent sessions are exceeded, another scale unit (instance) is required. You can specify the port that you want to use to connect to your VMs. By default Custom port values are supported for the Standard SKU only. -## Shareable link (Preview) +## Shareable link The Bastion **Shareable Link** feature lets users connect to a target resource using Azure Bastion without accessing the Azure portal. |
bastion | Quickstart Developer Sku | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-developer-sku.md | + + Title: 'Quickstart: Deploy Bastion using the Developer SKU: Azure portal' +description: Learn how to deploy Bastion using the Developer SKU. +++ Last updated : 10/16/2023+++++# Quickstart: Deploy Bastion using the Developer SKU (Preview) ++In this quickstart, you'll learn how to deploy Azure Bastion using the Developer SKU. After Bastion is deployed, you can connect to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) ++> [!IMPORTANT] +> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] ++## About the Developer SKU ++The Bastion Developer SKU is a new [lower-cost](https://azure.microsoft.com/pricing/details/azure-bastion/), lightweight SKU. This SKU is ideal for Dev/Test users who want to securely connect to their VMs and who don't need additional features or scaling. With the Developer SKU, you can connect to one Azure VM at a time directly through the virtual machine connect page. ++When you deploy Bastion using the Developer SKU, the deployment requirements are different than when you deploy using other SKUs. Typically when you create a bastion host, a host is deployed to the AzureBastionSubnet in your virtual network. The Bastion host is dedicated for your use. When using the Developer SKU, a bastion host isn't deployed to your virtual network and you don't need an AzureBastionSubnet. However, the Developer SKU bastion host isn't a dedicated resource and is, instead, part of a shared pool. ++Because the Developer SKU bastion resource isn't dedicated, the features for the Developer SKU are limited. 
See the Bastion configuration settings [SKU](configuration-settings.md) section for features by SKU. For more information about pricing, see the [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion/) page. You can always upgrade the Developer SKU to a higher SKU if you need more features. See [Upgrade a SKU](upgrade-sku.md). ++## <a name="prereq"></a>Prerequisites ++* Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial). ++* **A VM in a VNet**. ++ When you deploy Bastion using default values, the values are pulled from the virtual network in which your VM resides. Within the context of this exercise, we use this VM both as the starting point to deploy Bastion, and also to demonstrate how to connect to a VM via Bastion. ++ * If you don't already have a VM in a virtual network, create one using [Quickstart: Create a Windows VM](../virtual-machines/windows/quick-create-portal.md), or [Quickstart: Create a Linux VM](../virtual-machines/linux/quick-create-portal.md). + * If you need example values, see the [Example values](#values) section. + * If you already have a virtual network, make sure it's selected on the Networking tab when you create your VM. + * If you don't have a virtual network, you can create one at the same time you create your VM. ++* **Required VM roles:** ++ * Reader role on the virtual machine. + * Reader role on the NIC with private IP of the virtual machine. + +* **Required VM inbound ports:** ++ * 3389 for Windows VMs + * 22 for Linux VMs +++### <a name="values"></a>Example values ++You can use the following example values when creating this configuration as an exercise, or you can substitute your own. 
++**Basic VNet and VM values:** ++|**Name** | **Value** | +| | | +| Virtual machine| TestVM | +| Resource group | TestRG1 | +| Region | East US | +| Virtual network | VNet1 | +| Address space | 10.1.0.0/16 | +| Subnets | FrontEnd: 10.1.0.0/24 | ++### Workflow ++* Deploy Bastion automatically using the Developer SKU. +* After you deploy Bastion, you'll then connect to your VM via the portal using RDP/SSH connectivity and the VM's private IP address. +* If your VM has a public IP address that you don't need for anything else, you can remove it. ++## <a name="createvmset"></a>Deploy Bastion ++When you create Azure Bastion using default settings, the settings are configured for you. You can't modify or specify values for a default deployment. ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. In the portal, go to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment. +1. On the page for your VM, in the **Operations** section on the left menu, select **Bastion**. You can also get to this page via your **Virtual Network/Bastion** in the portal. +1. On the **Bastion** page, select **Deploy Bastion Developer**. ++ :::image type="content" source="./media/deploy-host-developer-sku/deploy-bastion-developer.png" alt-text="Screenshot of the Bastion page showing Deploy Bastion." lightbox="./media/deploy-host-developer-sku/deploy-bastion-developer.png"::: ++1. Bastion begins deploying. This can take around 10 minutes to complete. ++## <a name="connect"></a>Connect to a VM ++> [!NOTE] +> Before connecting to a VM, verify that your NSG rules allow traffic to ports 22 and 3389 from the private IP address 168.63.129.16. ++When the Bastion deployment is complete, the screen changes to the **Connect** page. ++1. Type your authentication credentials. Then, select **Connect**. 
++ :::image type="content" source="./media/quickstart-host-portal/connect-vm.png" alt-text="Screenshot shows the Connect using Azure Bastion dialog." lightbox="./media/quickstart-host-portal/connect-vm.png"::: ++1. The connection to this virtual machine via Bastion will open directly in the Azure portal (over HTML5) using port 443 and the Bastion service. Select **Allow** when asked for permissions to the clipboard. This lets you use the remote clipboard arrows on the left of the screen. ++ * When you connect, the desktop of the VM might look different than the example screenshot. + * Using keyboard shortcut keys while connected to a VM might not result in the same behavior as shortcut keys on a local computer. For example, when connected to a Windows VM from a Windows client, CTRL+ALT+END is the keyboard shortcut for CTRL+ALT+Delete on a local computer. To do this from a Mac while connected to a Windows VM, the keyboard shortcut is Fn+CTRL+ALT+Backspace. ++ :::image type="content" source="./media/quickstart-host-portal/connected.png" alt-text="Screenshot showing a Bastion RDP connection selected." lightbox="./media/quickstart-host-portal/connected.png"::: ++### <a name="audio"></a>To enable audio output +++## <a name="remove"></a>Remove VM public IP address +++## Clean up resources ++When you're done using the virtual network and the virtual machines, delete the resource group and all of the resources it contains: ++1. Enter the name of your resource group in the **Search** box at the top of the portal and select it from the search results. ++1. Select **Delete resource group**. ++1. Enter your resource group for **TYPE THE RESOURCE GROUP NAME** and select **Delete**. ++## Next steps ++In this quickstart, you deployed Bastion using the Developer SKU, and then connected to a virtual machine securely via Bastion. Next, you can configure more features and work with VM connections. 
++> [!div class="nextstepaction"] +> [Upgrade SKUs](upgrade-sku.md) ++> [!div class="nextstepaction"] +> [Azure Bastion configuration settings and features](configuration-settings.md) |
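The Developer SKU quickstart above notes that, before connecting, your NSG rules must allow traffic to ports 22 and 3389 from the private IP address 168.63.129.16. That prerequisite can be sketched in Azure PowerShell; the NSG name, rule name, and priority below are illustrative assumptions:

```azurepowershell-interactive
# Allow inbound RDP (3389) and SSH (22) from 168.63.129.16, per the
# quickstart's NSG note. Names and priority are assumptions for illustration.
$nsg = Get-AzNetworkSecurityGroup -Name "TestVM-nsg" -ResourceGroupName "TestRG1"
$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowBastionDeveloper" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 `
    -SourceAddressPrefix "168.63.129.16" -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 22,3389
$nsg | Set-AzNetworkSecurityGroup
```

If your VM's subnet has no NSG attached, no rule is needed; NSG rules only apply where an NSG is associated with the subnet or NIC.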
bastion | Quickstart Host Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md | Title: 'Quickstart: Deploy Bastion with default settings' + Title: 'Quickstart: Deploy Azure Bastion automatically - Basic SKU' description: Learn how to deploy Bastion with default settings from the Azure portal. Previously updated : 10/03/2023 Last updated : 10/12/2023 -# Quickstart: Deploy Azure Bastion with default settings +# Quickstart: Deploy Bastion automatically - Basic SKU -In this quickstart, you'll learn how to deploy Azure Bastion with default settings to your virtual network using the Azure portal. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) +In this quickstart, you'll learn how to deploy Azure Bastion automatically in the Azure portal using default settings and the Basic SKU. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration. ++The default SKU for this type of deployment is the Basic SKU. If you want to deploy using the Developer SKU instead, see [Deploy Bastion automatically - Developer SKU](quickstart-developer-sku.md). If you want to deploy using the Standard SKU, see the [Tutorial - Deploy Bastion using specified settings](tutorial-create-host-portal.md). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) :::image type="content" source="./media/create-host/host-architecture.png" alt-text="Diagram showing Azure Bastion architecture." 
lightbox="./media/create-host/host-architecture.png"::: The steps in this article help you do the following: -* Deploy Bastion with default settings from your VM resource using the Azure portal. When you deploy using default settings, the settings are based on the virtual network to which Bastion will be deployed. -* After you deploy Bastion, you'll then connect to your VM via the portal using RDP/SSH connectivity and the VM's private IP address. +* Deploy Bastion with default settings from your VM resource using the Azure portal. When you deploy using default settings, the settings are based on the virtual network to which Bastion will be deployed. +* After you deploy Bastion, you'll then connect to your VM via the portal using RDP/SSH connectivity and the VM's private IP address. * If your VM has a public IP address that you don't need for anything else, you can remove it. > [!IMPORTANT] The steps in this article help you do the following: * Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial). * **A VM in a VNet**. - When you deploy Bastion using default values, the values are pulled from the VNet in which your VM resides. This VM doesn't become a part of the Bastion deployment itself, but you do connect to it later in the exercise. + When you deploy Bastion using default values, the values are pulled from the virtual network in which your VM resides. This VM doesn't become a part of the Bastion deployment itself, but you do connect to it later in the exercise. - * If you don't already have a VM in a VNet, create one using [Quickstart: Create a Windows VM](../virtual-machines/windows/quick-create-portal.md), or [Quickstart: Create a Linux VM](../virtual-machines/linux/quick-create-portal.md). 
+ * If you don't already have a VM in a virtual network, create one using [Quickstart: Create a Windows VM](../virtual-machines/windows/quick-create-portal.md), or [Quickstart: Create a Linux VM](../virtual-machines/linux/quick-create-portal.md). * If you need example values, see the [Example values](#values) section. * If you already have a virtual network, make sure it's selected on the Networking tab when you create your VM. * If you don't have a virtual network, you can create one at the same time you create your VM. You can use the following example values when creating this configuration, or yo **Bastion values:** -When you deploy from VM settings, Bastion is automatically configured with default values from the VNet +When you deploy from VM settings, Bastion is automatically configured with default values from the virtual network. |**Name** | **Default value** | |||-|AzureBastionSubnet | This subnet is created within the VNet as a /26 | +|AzureBastionSubnet | This subnet is created within the virtual network as a /26 | |SKU | Basic | | Name | Based on the virtual network name | | Public IP address name | Based on the virtual network name | ## <a name="createvmset"></a>Deploy Bastion -When you create Azure Bastion using default settings, the settings are configured for you. You can't modify or specify additional values for a default deployment. After deployment completes, you can always go to the bastion host **Configuration** page to select additional settings and features. For example, the default SKU is the Basic SKU. You can later upgrade to the Standard SKU to support more features. For more information, see [About configuration settings](configuration-settings.md). +When you create Azure Bastion in the portal using **Deploy Bastion**, Azure Bastion deploys automatically using default settings and the Basic SKU. You can't modify or specify additional values for a default deployment. 
After deployment completes, you can go to the bastion host **Configuration** page to select certain additional settings and features. You can also upgrade a SKU later to add more features, but you can't downgrade a SKU once Bastion is deployed. For more information, see [About configuration settings](configuration-settings.md). 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the portal, go to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.-1. On the page for your VM, in the **Operations** section on the left menu, select **Bastion**. When the **Bastion** page opens, it checks to see if you have enough available address space to create the AzureBastionSubnet. If you don't, you'll see settings to allow you to add more address space to your VNet to meet this requirement. -1. On the **Bastion** page, you can view some of the values that will be used when creating the bastion host for your virtual network. Select **Deploy Bastion** to deploy bastion using default settings. -- :::image type="content" source="./media/quickstart-host-portal/deploy.png" alt-text="Screenshot of Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy.png"::: +1. On the page for your VM, in the **Operations** section on the left menu, select **Bastion**. +1. On the Bastion page, select the arrow next to **Dedicated Deployment Options** to expand the section. + :::image type="content" source="./media/quickstart-host-portal/deploy-bastion-automatically.png" alt-text="Screenshot showing how to expand Dedicated Deployment Options and Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy-bastion-automatically.png"::: +1. In the Create Bastion section, select **Deploy Bastion**. 1. Bastion begins deploying. This can take around 10 minutes to complete. > [!NOTE] When the Bastion deployment is complete, the screen changes to the **Connect** p 1. 
The connection to this virtual machine via Bastion will open directly in the Azure portal (over HTML5) using port 443 and the Bastion service. Select **Allow** when asked for permissions to the clipboard. This lets you use the remote clipboard arrows on the left of the screen. - * When you connect, the desktop of the VM may look different than the example screenshot. - * Using keyboard shortcut keys while connected to a VM may not result in the same behavior as shortcut keys on a local computer. For example, when connected to a Windows VM from a Windows client, CTRL+ALT+END is the keyboard shortcut for CTRL+ALT+Delete on a local computer. To do this from a Mac while connected to a Windows VM, the keyboard shortcut is Fn+CTRL+ALT+Backspace. + * When you connect, the desktop of the VM might look different than the example screenshot. + * Using keyboard shortcut keys while connected to a VM might not result in the same behavior as shortcut keys on a local computer. For example, when connected to a Windows VM from a Windows client, CTRL+ALT+END is the keyboard shortcut for CTRL+ALT+Delete on a local computer. To do this from a Mac while connected to a Windows VM, the keyboard shortcut is Fn+CTRL+ALT+Backspace. - :::image type="content" source="./media/quickstart-host-portal/connected.png" alt-text="Screenshot of RDP connection." lightbox="./media/quickstart-host-portal/connected.png"::: + :::image type="content" source="./media/quickstart-host-portal/connected.png" alt-text="Screenshot shows an RDP connection to a virtual machine." lightbox="./media/quickstart-host-portal/connected.png"::: ### <a name="audio"></a>To enable audio output |
bastion | Tutorial Create Host Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md | description: Learn how to deploy Bastion using settings that you specify - Azure Previously updated : 10/03/2023 Last updated : 10/13/2023 # Tutorial: Deploy Bastion using specified settings -This tutorial helps you deploy Azure Bastion from the Azure portal using your own specified manual settings. When you use manual settings, you can specify configuration values such as instance counts and the SKU at the time of deployment. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration. +This tutorial helps you deploy Azure Bastion from the Azure portal using your own specified manual settings. This article helps you deploy Bastion using a SKU that you specify. The SKU determines the features and connections that are available for your deployment. For more information about SKUs, see [Configuration settings - SKUs](configuration-settings.md#skus). +In the Azure portal, when you use the **Configure Manually** option to deploy Bastion, you can specify configuration values such as instance counts and SKUs at the time of deployment. After Bastion is deployed, you can connect (SSH/RDP) to virtual machines in the virtual network via Bastion using the private IP address of the VM. When you connect to a VM, it doesn't need a public IP address, client software, agent, or a special configuration. -In this tutorial, you deploy Bastion using the Standard SKU tier and adjust host scaling (instance count). After the deployment is complete, you connect to your VM via private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it. 
-Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on one of your VMs and maintain yourself. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) +In this tutorial, you deploy Bastion using the Standard SKU tier and adjust host scaling (instance count), which the Standard SKU supports. You could optionally deploy using a lower SKU, but you won't be able to adjust host scaling. After the deployment is complete, you connect to your VM via private IP address. If your VM has a public IP address that you don't need for anything else, you can remove it. In this tutorial, you'll learn how to: In this tutorial, you'll learn how to: ## Prerequisites * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-* A [virtual network](../virtual-network/quick-create-portal.md). This will be the VNet to which you deploy Bastion. +* A [virtual network](../virtual-network/quick-create-portal.md). This will be the virtual network to which you deploy Bastion. * A virtual machine in the virtual network. This VM isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later in this tutorial via Bastion. If you don't have a VM, create one using [Quickstart: Create a VM](../virtual-machines/windows/quick-create-portal.md). * **Required VM roles:** You can use the following example values when creating this configuration, or yo ## <a name="createhost"></a>Deploy Bastion -This section helps you deploy Bastion to your VNet. Once Bastion is deployed, you can connect securely to any VM in the VNet using its private IP address. +This section helps you deploy Bastion to your virtual network. Once Bastion is deployed, you can connect securely to any VM in the virtual network using its private IP address. 
> [!IMPORTANT] > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo 1. On the page for your virtual network, in the left pane, select **Bastion** to open the **Bastion** page. -1. On the Bastion page, select **Configure manually**. This lets you configure specific additional settings when deploying Bastion to your VNet. +1. On the Bastion page, expand **Dedicated Deployment Options**. +1. Select **Configure manually**. This lets you configure specific additional settings (such as the SKU) when deploying Bastion to your virtual network. :::image type="content" source="./media/tutorial-create-host-portal/manual-configuration.png" alt-text="Screenshot of Bastion page showing configure bastion on my own." lightbox="./media/tutorial-create-host-portal/manual-configuration.png"::: This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo * **Region**: The Azure public region in which the resource will be created. Choose the region in which your virtual network resides. - * **Tier:** The tier is also known as the **SKU**. For this tutorial, select **Standard**. The Standard SKU lets you configure the instance count for host scaling and other features. For more information about features that require the Standard SKU, see [Configuration settings - SKU](configuration-settings.md#skus). + * **Tier:** The tier is also known as the **SKU**. For this tutorial, select **Standard**. For information about the features available for each SKU, see [Configuration settings - SKU](configuration-settings.md#skus). - * **Instance count:** This is the setting for **host scaling**. It's configured in scale unit increments. Use the slider or type a number to configure the instance count that you want. For this tutorial, you can select the instance count you'd prefer. 
For more information, see [Host scaling](configuration-settings.md#instance) and [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion). + * **Instance count:** This is the setting for **host scaling** and is available for the Standard SKU. Host scaling is configured in scale unit increments. Use the slider or type a number to configure the instance count that you want. For this tutorial, you can select the instance count you'd prefer. For more information, see [Host scaling](configuration-settings.md#instance) and [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion). :::image type="content" source="./media/tutorial-create-host-portal/instance-values.png" alt-text="Screenshot of Bastion page instance values." lightbox="./media/tutorial-create-host-portal/instance-values.png"::: -1. Configure the **virtual networks** settings. Select your VNet from the dropdown. If you don't see your VNet in the dropdown list, make sure you selected the correct Region in the previous settings on this page. +1. Configure the **virtual networks** settings. Select your virtual network from the dropdown. If you don't see your virtual network in the dropdown list, make sure you selected the correct Region in the previous settings on this page. 1. To configure the AzureBastionSubnet, select **Manage subnet configuration**. You can use any of the following detailed articles to connect to a VM. Some conn [!INCLUDE [Links to Connect to VM articles](../../includes/bastion-vm-connect-article-list.md)] -You can also use the basic [Connection steps](#steps) in the section below to connect to your VM. +You can also use the basic [Connection steps](#steps) in the following section to connect to your VM. ### <a name="steps"></a>Connection steps |
bastion | Upgrade Sku | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/upgrade-sku.md | Title: 'Upgrade or view a SKU: portal' -description: Learn how to view a SKU and change tiers from the Basic to the Standard SKU. +description: Learn how to view a SKU and upgrade SKU tiers. Previously updated : 06/08/2023 Last updated : 10/13/2023 # View or upgrade a SKU -This article helps you view and upgrade Azure Bastion from the Basic SKU tier to the Standard SKU tier. Once you upgrade, you can't revert back to the Basic SKU without deleting and reconfiguring Bastion. For more information about features and SKUs, see [Configuration settings](configuration-settings.md). +This article helps you view and upgrade your Bastion SKU. Once you upgrade, you can't revert back to a lower SKU without deleting and reconfiguring Bastion. For more information about features and SKUs, see [Configuration settings](configuration-settings.md). [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] To view the SKU for your bastion host, use the following steps. 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the Azure portal, go to your bastion host.-1. In the left pane, select **Configuration** to open the Configuration page. In the following example, Bastion is configured to use the **Basic** SKU tier. +1. In the left pane, select **Configuration** to open the Configuration page. In the following example, Bastion is configured to use the **Developer** SKU tier. Notice that the SKU affects the features that you can configure for Bastion. You can upgrade to a higher SKU using the steps in the next sections. - Notice that when you use the Basic SKU, the features you can configure are limited. You can upgrade to a higher SKU using the steps in the next section. + :::image type="content" source="./media/upgrade-sku/developer-sku.png" alt-text="Screenshot of the configuration page with the Developer SKU." 
lightbox="./media/upgrade-sku/developer-sku.png"::: - :::image type="content" source="./media/upgrade-sku/view-sku.png" alt-text="Screenshot of the configuration page with the Basic SKU." lightbox="./media/upgrade-sku/view-sku.png"::: +## Upgrade from the Developer SKU -## Upgrade a SKU +When you upgrade from a Developer SKU to a dedicated deployment SKU, you need to create a public IP address and an Azure Bastion subnet. -Use the following steps to upgrade to the Standard SKU. +Use the following steps to upgrade to a higher SKU. ++1. In the Azure portal, go to your virtual network and add a new subnet. The subnet must be named **AzureBastionSubnet** and must be /26 or larger. (/25, /24 etc.). This subnet will be used exclusively by Azure Bastion. +1. Next, go to the portal page for your **Bastion** host. +1. On the **Configuration** page, for **Tier**, select a SKU. Notice that the available features change, depending on the SKU you select. The following screenshot shows the required values. ++ :::image type="content" source="./media/upgrade-sku/sku-values.png" alt-text="Screenshot of tier select dropdown with Standard selected." lightbox="./media/upgrade-sku/sku-values.png"::: +1. Create a new public IP address value unless you have already created one for your bastion host, in which case, select the value. +1. Because you already created the AzureBastionSubnet, the **Subnet** field will automatically populate. +1. You can add features at the same time you upgrade the SKU. You don't need to upgrade the SKU and then go back to add the features as a separate step. +1. Select **Apply** to apply changes. The bastion host updates. This takes about 10 minutes to complete. ++## Upgrade from a Basic SKU ++Use the following steps to upgrade to a higher SKU. 1. In the Azure portal, go to your Bastion host.-1. On the **Configuration** page, for **Tier**, select **Standard**. 
- :::image type="content" source="./media/upgrade-sku/upgrade-sku.png" alt-text="Screenshot of tier select dropdown with Standard selected." lightbox="./media/upgrade-sku/upgrade-sku.png"::: +1. On the **Configuration** page, for **Tier**, select a higher SKU. + 1. You can add features at the same time you upgrade the SKU. You don't need to upgrade the SKU and then go back to add the features as a separate step. 1. Select **Apply** to apply changes. The bastion host updates. This takes about 10 minutes to complete. |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |-| Rel 23-10 | [5031361] | Latest Cumulative Update(LCU) | [6.63] | Oct 10, 2023 | -| Rel 23-10 | [5031364] | Latest Cumulative Update(LCU) | [7.32] | Oct 10, 2023 | -| Rel 23-10 | [5031362] | Latest Cumulative Update(LCU) | [5.87] | Oct 10, 2023 | -| Rel 23-10 | [5029938] | .NET Framework 3.5 Security and Quality Rollup | [2.143] | Oct 10, 2023 | -| Rel 23-10 | [5029933] | .NET Framework 4.7.2 Cumulative Update LKG | [2.143] | Sep 12, 2023 | -| Rel 23-10 | [5029915] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.123] | Oct 10, 2023 | -| Rel 23-10 | [5029916] | .NET Framework 4.7.2 Cumulative Update LKG | [4.123] | Oct 10, 2023 | -| Rel 23-10 | [5030160] | .NET Framework 4.7.2 Security and Quality Rollup | [2.142] | Oct 10, 2023 | -| Rel 23-10 | [5030160] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.131] | Oct 10, 2023 | -| Rel 23-10 | [5029932] | .NET Framework 4.7.2 Cumulative Update LKG | [3.131] | Oct 10, 2023 | -| Rel 23-10 | [5029931] | .NET Framework DotNet | [6.63] | Oct 10, 2023 | -| Rel 23-10 | [5029928] | .NET Framework 4.8 Security and Quality Rollup LKG | [7.32] | Oct 10, 2023 | -| Rel 23-10 | [5031408] | Monthly Rollup | [2.143] | Oct 10, 2023 | -| Rel 23-10 | [5031442] | Monthly Rollup | [3.131] | Oct 10, 2023 | -| Rel 23-10 | [5031419] | Monthly Rollup | [4.123] | Oct 10, 2023 | -| Rel 23-10 | [5031469] | Servicing Stack Update | [3.131] | Oct 10, 2023 | -| Rel 23-10 | [5030329] | Servicing Stack Update LKG | [4.123] | Sep 12, 2023 | -| Rel 23-10 | [5030504] | Servicing Stack Update LKG | [5.87] | Sep 12, 2023 | -| Rel 23-10 | [5031658] | Servicing Stack 
Update LKG | [2.143] | Oct 10, 2023 | -| Rel 23-10 | [4494175] | January '20 Microcode | [5.87] | Sep 1, 2020 | -| Rel 23-10 | [4494175] | January '20 Microcode | [6.63] | Sep 1, 2020 | -| Rel 23-10 | 5031590 | Servicing Stack Update | [7.31] | | -| Rel 23-10 | 5031589 | Servicing Stack Update | [6.62] | | +| Rel 23-10 | [5031361] | Latest Cumulative Update(LCU) | [6.64] | Oct 10, 2023 | +| Rel 23-10 | [5031364] | Latest Cumulative Update(LCU) | [7.34] | Oct 10, 2023 | +| Rel 23-10 | [5031362] | Latest Cumulative Update(LCU) | [5.88] | Oct 10, 2023 | +| Rel 23-10 | [5029938] | .NET Framework 3.5 Security and Quality Rollup | [2.144] | Oct 10, 2023 | +| Rel 23-10 | [5029933] | .NET Framework 4.7.2 Cumulative Update LKG | [2.144] | Sep 12, 2023 | +| Rel 23-10 | [5029915] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.124] | Oct 10, 2023 | +| Rel 23-10 | [5029916] | .NET Framework 4.7.2 Cumulative Update LKG | [4.124] | Oct 10, 2023 | +| Rel 23-10 | [5030160] | .NET Framework 4.7.2 Security and Quality Rollup | [2.144] | Oct 10, 2023 | +| Rel 23-10 | [5030160] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.132] | Oct 10, 2023 | +| Rel 23-10 | [5029932] | .NET Framework 4.7.2 Cumulative Update LKG | [3.132] | Oct 10, 2023 | +| Rel 23-10 | [5029931] | .NET Framework DotNet | [6.64] | Oct 10, 2023 | +| Rel 23-10 | [5029928] | .NET Framework 4.8 Security and Quality Rollup LKG | [7.34] | Oct 10, 2023 | +| Rel 23-10 | [5031408] | Monthly Rollup | [2.144] | Oct 10, 2023 | +| Rel 23-10 | [5031442] | Monthly Rollup | [3.132] | Oct 10, 2023 | +| Rel 23-10 | [5031419] | Monthly Rollup | [4.124] | Oct 10, 2023 | +| Rel 23-10 | [5031469] | Servicing Stack Update | [3.132] | Oct 10, 2023 | +| Rel 23-10 | [5030329] | Servicing Stack Update LKG | [4.124] | Sep 12, 2023 | +| Rel 23-10 | [5030504] | Servicing Stack Update LKG | [5.88] | Sep 12, 2023 | +| Rel 23-10 | [5031658] | Servicing Stack Update LKG | [2.144] | Oct 10, 2023 | +| Rel 23-10 | [4494175] | 
January '20 Microcode | [5.88] | Sep 1, 2020 | +| Rel 23-10 | [4494175] | January '20 Microcode | [6.64] | Sep 1, 2020 | +| Rel 23-10 | 5031590 | Servicing Stack Update | [7.34] | | +| Rel 23-10 | 5031589 | Servicing Stack Update | [6.64] | | [5031361]: https://support.microsoft.com/kb/5031361 [5031364]: https://support.microsoft.com/kb/5031364 The following tables show the Microsoft Security Response Center (MSRC) updates [5031658]: https://support.microsoft.com/kb/5031658 [4494175]: https://support.microsoft.com/kb/4494175 [4494175]: https://support.microsoft.com/kb/4494175-[2.143]: ./cloud-services-guestos-update-matrix.md#family-2-releases -[3.131]: ./cloud-services-guestos-update-matrix.md#family-3-releases -[4.123]: ./cloud-services-guestos-update-matrix.md#family-4-releases -[5.87]: ./cloud-services-guestos-update-matrix.md#family-5-releases -[6.63]: ./cloud-services-guestos-update-matrix.md#family-6-releases -[7.32]: ./cloud-services-guestos-update-matrix.md#family-7-releases +[2.144]: ./cloud-services-guestos-update-matrix.md#family-2-releases +[3.132]: ./cloud-services-guestos-update-matrix.md#family-3-releases +[4.124]: ./cloud-services-guestos-update-matrix.md#family-4-releases +[5.88]: ./cloud-services-guestos-update-matrix.md#family-5-releases +[6.64]: ./cloud-services-guestos-update-matrix.md#family-6-releases +[7.34]: ./cloud-services-guestos-update-matrix.md#family-7-releases |
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates +###### **October 23, 2023** +The October Guest OS has released. + ###### **September 26, 2023** The September Guest OS has released. The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-7.34_202310-01 | October 23, 2023 | Post 7.36 | | WA-GUEST-OS-7.32_202309-01 | September 25, 2023 | Post 7.34 |-| WA-GUEST-OS-7.30_202308-01 | August 21, 2023 | Post 7.32 | +|~~WA-GUEST-OS-7.30_202308-01~~| August 21, 2023 | October 23, 2023 | |~~WA-GUEST-OS-7.28_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-7.27_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-7.25_202305-01~~| May 19, 2023 | July 27, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-6.64_202310-01 | October 23, 2023 | Post 6.66 | | WA-GUEST-OS-6.62_202309-01 | September 25, 2023 | Post 6.64 |-| WA-GUEST-OS-6.61_202308-01 | August 21, 2023 | Post 6.63 | +|~~WA-GUEST-OS-6.61_202308-01~~| August 21, 2023 | October 23, 2023 | |~~WA-GUEST-OS-6.60_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-6.59_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-6.57_202305-01~~| May 19, 2023 | July 27, 2023 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-5.88_202310-01 | October 23, 2023 | Post 5.90 | | WA-GUEST-OS-5.86_202309-01 | September 25, 2023 | Post 5.88 |-| WA-GUEST-OS-5.85_202308-01 | August 21, 2023 | Post 5.87 | +|~~WA-GUEST-OS-5.85_202308-01~~| August 21, 2023 | October 23, 2023 | |~~WA-GUEST-OS-5.84_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-5.83_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-5.81_202305-01~~| May 19, 2023 | July 27, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-4.124_202310-01 | October 23, 2023 | Post 4.126 | | WA-GUEST-OS-4.122_202309-01 | September 25, 2023 | Post 4.124 |-| WA-GUEST-OS-4.121_202308-01 | August 21, 2023 | Post 4.123 | +|~~WA-GUEST-OS-4.121_202308-01~~| August 21, 2023 | October 23, 2023 | |~~WA-GUEST-OS-4.120_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-4.119_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-4.117_202305-01~~| May 19, 2023 | July 27, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-3.132_202310-01 | October 23, 2023 | Post 3.134 | | WA-GUEST-OS-3.130_202309-01 | September 25, 2023 | Post 3.132 |-| WA-GUEST-OS-3.129_202308-01 | August 21, 2023 | Post 3.131 | +|~~WA-GUEST-OS-3.129_202308-01~~| August 21, 2023 | October 23, 2023 | |~~WA-GUEST-OS-3.128_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-3.127_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-3.125_202305-01~~| May 19, 2023 | July 27, 2023 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-2.144_202310-01 | October 23, 2023 | Post 2.146 | | WA-GUEST-OS-2.142_202309-01 | September 25, 2023 | Post 2.144 |-| WA-GUEST-OS-2.141_202308-01 | August 21, 2023 | Post 2.143 | +|~~WA-GUEST-OS-2.141_202308-01~~| August 21, 2023 | October 23, 2023 | |~~WA-GUEST-OS-2.140_202307-01~~| July 27, 2023 | September 25, 2023 | |~~WA-GUEST-OS-2.139_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-2.137_202305-01~~| May 19, 2023 | July 27, 2023 | |
communication-services | Play Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md | The play action provided through the Azure Communication Services Call Automatio You can use the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages and locales see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). (Supported in public preview) > [!NOTE]-> Azure Communication Services currently only supports WAV files formatted as mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md). +> Azure Communication Services currently supports two file formats, MP3 files and WAV files formatted as 16-bit PCM mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md). ## Prebuilt Neural Text to Speech voices Microsoft uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis occur simultaneously, resulting in a more fluid and natural sounding output. You can use these neural voices to make interactions with your chatbots and voice assistants more natural and engaging. 
There are over 100 prebuilt voices to choose from. Learn more about [Azure Text-to-Speech voices](../../../../articles/cognitive-services/Speech-Service/language-support.md). |
communication-services | Direct Routing Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md | The port ranges of the Media Processors are shown in the following table: Media Processors are placed in the same datacenters as SIP proxies: - NOAM (US South Central, two in US West and US East datacenters)-- Europe (UK South, France Central, Amsterdam and Dublin datacenters)+- Europe (EU West, EU North, Sweden, France Central) - Asia (Singapore datacenter) - Japan (JP East and West datacenters) - Australia (AU East and Southeast datacenters) - LATAM (Brazil South) - Africa (South Africa North) - ## Media traffic: Codecs ### Leg between SBC and Cloud Media Processor. |
communication-services | Get Started Rooms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md | The table below lists the main properties of `room` objects: | `roomId` | Unique `room` identifier. | | `validFrom` | Earliest time a `room` can be used. | | `validUntil` | Latest time a `room` can be used. |+| `pstnDialOutEnabled` | Enable or disable dialing out to a PSTN number in a room.| | `participants` | List of participants to a `room`. Specified as a `CommunicationIdentifier`. | | `roleType` | The role of a room participant. Can be either `Presenter`, `Attendee`, or `Consumer`. | |
cosmos-db | Configure Custom Partitioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-custom-partitioning.md | description: Learn how to trigger custom partitioning from Azure Synapse Spark n Previously updated : 09/29/2022 Last updated : 10/24/2023 -# Configure custom partitioning to partition analytical store data (Preview) +# Configure custom partitioning to partition analytical store data [!INCLUDE[NoSQL, MongoDB, Gremlin](includes/appliesto-nosql-mongodb-gremlin.md)] Custom partitioning enables you to partition analytical store data, on fields that are commonly used as filters in analytical queries, resulting in improved query performance. To learn more about custom partitioning, see [what is custom partitioning](custo To use custom partitioning, you must enable Azure Synapse Link on your Azure Cosmos DB account. To learn more, see [how to configure Azure Synapse Link](configure-synapse-link.md). Custom partitioning execution can be triggered from Azure Synapse Spark notebook using Azure Synapse Link for Azure Cosmos DB. -> [!IMPORTANT] -> Custom partitioning feature is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - > [!NOTE] > Azure Cosmos DB accounts should have Azure Synapse Link enabled to take advantage of custom partitioning. Custom partitioning is currently supported for Azure Synapse Spark 2.0 only. - > [!NOTE] > Synapse Link for Gremlin API is now in preview. You can enable Synapse Link in your new or existing graphs using Azure CLI. For more information on how to configure it, click [here](configure-synapse-link.md). |
cosmos-db | Convert Vcore To Request Unit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/convert-vcore-to-request-unit.md | Then the recommended request units for Azure Cosmos DB API for NoSQL are Provisioned RU/s, API for NoSQL = (600 RU/s/vCore) * (12 vCores) / (3) = 2,400 RU/s ` -And the recommended request units for Azure Cosmso DB for MongoDB are +And the recommended request units for Azure Cosmos DB for MongoDB are ` Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (12 vCores) / (3) = 4,000 RU/s Then the recommended request units for Azure Cosmos DB API for NoSQL are Provisioned RU/s, API for NoSQL = (600 RU/s/vCore) * (36 vCores) / (3) = 7,200 RU/s ` -And the recommended request units for Azure Cosmso DB for MongoDB are +And the recommended request units for Azure Cosmos DB for MongoDB are ` Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (36 vCores) / (3) = 12,000 RU/s Then the recommended request units for Azure Cosmos DB API for NoSQL are Provisioned RU/s, API for NoSQL = (600 RU/s/vCore) * (36 vCores) / (3) = 7,200 RU/s ` -And the recommended request units for Azure Cosmso DB for MongoDB are +And the recommended request units for Azure Cosmos DB for MongoDB are ` Provisioned RU/s, API for MongoDB = (1,000 RU/s/vCore) * (36 vCores) / (3) = 12,000 RU/s The table below summarizes the relationship between *vCores* and *vCPUs* for Azu ## Next steps * [Learn about Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/) * [Learn how to plan and manage costs for Azure Cosmos DB](plan-manage-costs.md)-* [Review options for migrating to Azure Cosmos DB](migration-choices.md) * [Plan your migration to Azure Cosmos DB for MongoDB](mongodb/pre-migration-steps.md). This doc includes links to different migration tools that you can use once you are finished planning. [regions]: https://azure.microsoft.com/regions/ |
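The conversion formulas above can be captured in a small helper. This is an illustrative sketch, not an official API: the function name and the default divisor of 3 simply mirror the article's worked examples (600 RU/s per vCore for the API for NoSQL, 1,000 RU/s per vCore for the API for MongoDB).

```python
def recommended_rus(vcores: int, rus_per_vcore: int, replicas: int = 3) -> int:
    """Recommended RU/s = (RU/s per vCore) * vCores / divisor.

    Per the article's examples, rus_per_vcore is 600 for the API for
    NoSQL and 1,000 for the API for MongoDB; the divisor is 3.
    """
    return int(rus_per_vcore * vcores / replicas)

# Worked examples matching the article:
print(recommended_rus(12, 600))    # API for NoSQL, 12 vCores  -> 2400
print(recommended_rus(12, 1000))   # API for MongoDB, 12 vCores -> 4000
print(recommended_rus(36, 600))    # API for NoSQL, 36 vCores  -> 7200
print(recommended_rus(36, 1000))   # API for MongoDB, 36 vCores -> 12000
```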
cosmos-db | Custom Partitioning Analytical Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/custom-partitioning-analytical-store.md | description: Custom partitioning enables you to partition the analytical store d Previously updated : 11/02/2021 Last updated : 10/24/2023 # Custom partitioning in Azure Synapse Link for Azure Cosmos DB Custom partitioning enables you to partition analytical store data, on fields that are commonly used as filters in analytical queries, resulting in improved query performance. -In this article, you'll learn how to partition your data in Azure Cosmos DB analytical store using keys that are critical for your analytical workloads. It also explains how to take advantage of the improved query performance with partition pruning. You'll also learn how the partitioned store helps to improve the query performance when your workloads have a significant number of updates or deletes. +In this article, you learn how to partition your data in Azure Cosmos DB analytical store using keys that are critical for your analytical workloads. It also explains how to take advantage of the improved query performance with partition pruning. You also learn how custom partitioning improves query performance when your workloads have a significant number of updates or deletes. > [!NOTE] > Azure Cosmos DB accounts and containers should have [Azure Synapse Link](synapse-link.md) enabled to take advantage of custom partitioning. ## How does it work? -Analytical store partitioning is independent of partitioning in the transactional store. By default, analytical store isn't partitioned. If you want to query analytical store frequently based on fields such as Date, Time, Category etc. you leverage custom partitioning to create a separate partitioned store based on these keys. You can choose a single field or a combination of fields from your dataset as the analytical store partition key.
+Analytical store partitioning is independent of partitioning in the transactional store. By default, analytical store isn't partitioned. If you want to query analytical store frequently based on fields such as Date, Time, Category, etc., you can use custom partitioning to create a separate partitioned store based on these keys. You can choose a single field or a combination of fields from your dataset as the analytical store partition key. You can trigger partitioning from an Azure Synapse Spark notebook using Azure Synapse Link. You can schedule it to run as a background job once or twice a day, or execute it more often if needed. You can trigger partitioning from an Azure Synapse Spark notebook using Azure Sy :::image type="content" source="./media/custom-partitioning-analytical-store/partitioned-store-architecture.png" alt-text="Architecture of partitioned store in Azure Synapse Link for Azure Cosmos DB" lightbox="./media/custom-partitioning-analytical-store/partitioned-store-architecture.png" border="false"::: -The partitioned store contains Azure Cosmos DB analytical data until the last timestamp you ran your partitioning job. When you query your analytical data using the partition key filters in Synapse Spark, Synapse Link will automatically merge the data in partitioned store with the most recent data from the analytical store. This way it gives you the latest results for your queries. Although it merges the data before querying, the delta isn't written back to the partitioned store. As the delta between data in analytical store and partitioned store widens, the query times on partitioned data may vary. Triggering partitioning job more frequently will reduce this delta. Each time you execute the partition job, only incremental changes in the analytical store will be processed, instead of the full data set. +The partitioned store contains Azure Cosmos DB analytical data until the last timestamp you ran your partitioning job.
When you query analytical data using the partition key filters, Synapse Link automatically merges partitioned store data with the most recent changes in analytical store. This way it gives you the latest results for your queries. Although it merges the data before querying, the delta isn't written back to the partitioned store. As the delta between data in analytical store and partitioned store widens, the query times on partitioned data may vary. Triggering the partitioning job more frequently reduces this delta. Each time you execute the partition job, only incremental changes in the analytical store are processed, instead of the full data set. ## When to use? Using partitioned store is optional when querying analytical data in Azure Cosmo * High volume of update or delete operations * Slow data ingestion -Except for the workloads that meet above requirements, if you are querying live data using query filters that are different from the partition keys, we recommend that you query directly from the analytical store. This is especially true if the partitioning jobs aren't scheduled to run frequently. +If you are querying live data using query filters different from the partition keys, we recommend that you query analytical store directly.
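The partitioning job described above runs from a Synapse Spark notebook. The following is a rough sketch only: the `spark.cosmos.asns.partition.keys` and `spark.cosmos.asns.basePath` option names come from the samples later in this changelog, while the `cosmos.olap` format, the linked-service option, and the `spark.cosmos.asns.execute.partitioning` trigger are assumptions based on the article's configuration style; all values in angle brackets are placeholders.

```python
# Illustrative sketch -- runs only inside an Azure Synapse Spark notebook
# where `spark` is the ambient SparkSession and a Cosmos DB linked service
# is configured. Option names outside the article's own samples are assumed.
df = spark.read.format("cosmos.olap") \
    .option("spark.synapse.linkedService", "<linked-service-name>") \
    .option("spark.cosmos.container", "<container-name>") \
    .option("spark.cosmos.asns.execute.partitioning", "true") \
    .option("spark.cosmos.asns.partition.keys", "ReadDate String") \
    .option("spark.cosmos.asns.basePath", "/mnt/CosmosDBPartitionedStore/") \
    .load()
```

Scheduling a notebook like this once or twice a day (or more often) keeps the delta between the analytical store and the partitioned store small, which is what keeps query times on partitioned data stable.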
### Query performance improvements In addition to the query improvements from partition pruning, custom partitioning also results in improved query performance for the following workloads: -* **Update/delete heavy workloads** - Instead of keeping track of multiple versions of records in the analytical store and loading them during each query execution, the partitioned store only contains the latest version of the data. This significantly improves the query performance when you have update/delete heavy workloads. +* **Update/delete heavy workloads** - Instead of keeping track of multiple versions of records in the analytical store and loading them during each query execution, the partitioned store only contains the latest version of the data. This capability significantly improves the query performance when you have update/delete heavy workloads. * **Slow data ingestion workloads** - Partitioning compacts analytical data and so, if your workload has slow data ingestion, this compaction could result in better query performance. It is important to note that custom partitioning ensures complete transactional ## Security -If you configured [managed private endpoints](analytical-store-private-endpoints.md) for your analytical store, we recommend adding managed private endpoints for the partitioned store too. The partitioned store is the primary storage account associated with your Synapse workspace.
Similarly, if you configured [customer-managed keys on analytical store](how-to-setup-cmk.md#is-it-possible-to-use-customer-managed-keys-with-the-azure-cosmos-db-analytical-store), you must directly enable it on the Synapse workspace primary storage account, which is the partitioned store, as well. ## Partitioning strategies-You could use one or more partition keys for your analytical data. If you are using multiple partition keys, below are some recommendations on how to partition the data: +You could use one or more partition keys for your analytical data. If you are using multiple partition keys, here are some recommendations on how to partition the data: - **Using composite keys:** Say, you want to frequently query based on Key1 and Key2. For example, "Query for all records where ReadDate = '2021-10-08' and Location = 'Sydney'". - In this case, using composite keys will be more efficient, to look up all records that match the ReadDate and the records that match Location within that ReadDate. + In this case, using composite keys is more efficient, to look up all records that match the ReadDate and the records that match Location within that ReadDate. Sample configuration options: ```python You could use one or more partition keys for your analytical data. If you are us .option("spark.cosmos.asns.basePath", "/mnt/CosmosDBPartitionedStore/") \ ``` - Now, on above partitioned store, if you want to only query based on "Location" filter: - * You may want to query analytical store directly. Partitioned store will scan all records by ReadDate first and then by Location. + Now, to query based only on the "Location" filter: + * You may want to query analytical store directly. Partitioned store scans all records by ReadDate first and then by Location. So, depending on your workload and cardinality of your analytical data, you may get better results by querying analytical store directly.
* You could also run another partition job to also partition based on 'Location' on the same partitioned store. You could use one or more partition keys for your analytical data. If you are us .option("spark.cosmos.asns.partition.keys", "Location String") \ .option("spark.cosmos.asns.basePath", "/mnt/CosmosDBPartitionedStore/") \ ``` - Please note that it's not efficient to now frequently query based on "ReadDate" and "Location" filters together, on above partitioning. Composite keys will give - better query performance in that case. + Note that with this partitioning, it's not efficient to frequently query based on "ReadDate" and "Location" filters together. Composite keys give better query performance in that case. ## Limitations You could use one or more partition keys for your analytical data. If you are us ## Pricing -In addition to the [Azure Synapse Link pricing](synapse-link.md#pricing), you'll incur the following charges when using custom partitioning: +In addition to the [Azure Synapse Link pricing](synapse-link.md#pricing), you incur the following charges when using custom partitioning: * You are [billed](https://azure.microsoft.com/pricing/details/synapse-analytics/#pricing) for using Synapse Apache Spark pools when you run partitioning jobs on analytical store. -* The partitioned data is stored in the primary Azure Data Lake Storage Gen2 account associated with your Azure Synapse Analytics workspace. You'll incur the costs associated with using the ADLS Gen2 storage and transactions. These costs are determined by the storage required by partitioned analytical data and data processed for analytical queries in Synapse respectively. For more information on pricing, please visit the [Azure Data Lake Storage pricing page](https://azure.microsoft.com/pricing/details/storage/data-lake/). +* The partitioned data is stored in the primary Azure Data Lake Storage Gen2 account associated with your Azure Synapse Analytics workspace.
You incur the costs associated with using the ADLS Gen2 storage and transactions. These costs are determined by the storage required by partitioned analytical data and the data processed for analytical queries in Synapse, respectively. For more information on pricing, see the [Azure Data Lake Storage pricing page](https://azure.microsoft.com/pricing/details/storage/data-lake/). ## Frequently asked questions |
cosmos-db | How To Migrate Desktop Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-migrate-desktop-tool.md | +# CustomerIntent: As a database owner, I want to use a tool to perform migration to Azure Cosmos DB so that I can streamline large and complex migrations. # Migrate data to Azure Cosmos DB using the desktop data migration tool Now, migrate data from a JSON array to the newly created Azure Cosmos DB for NoS Using JSON Source Using Cosmos-nosql Sink ```--## Next steps --- Review [options for migrating data to Azure Cosmos DB](migration-choices.md). |
cosmos-db | How To Move Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-move-regions.md | Azure Cosmos DB does not natively support migrating account metadata from one re > [!IMPORTANT] > It is not necessary to migrate the account metadata if the data is stored or moved to a different region. The region in which the account metadata resides has no impact on the performance, security or any other operational aspects of your Azure Cosmos DB account. -A near-zero-downtime migration for the API for NoSQL requires the use of the [change feed](change-feed.md) or a tool that uses it. If you're migrating from the API for MongoDB, Cassandra, or another API, or to learn more about options for migrating data between accounts, see [Options to migrate your on-premises or cloud data to Azure Cosmos DB](migration-choices.md). +A near-zero-downtime migration for the API for NoSQL requires the use of the [change feed](change-feed.md) or a tool that uses it. The following steps demonstrate how to migrate an Azure Cosmos DB account for the API for NoSQL and its data from one region to another: |
cosmos-db | Migration Choices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migration-choices.md | - Title: Azure Cosmos DB Migration options -description: This doc describes the various options to migrate your on-premises or cloud data to Azure Cosmos DB ------ Previously updated : 04/02/2022--# Options to migrate your on-premises or cloud data to Azure Cosmos DB --You can load data from various data sources to Azure Cosmos DB. Since Azure Cosmos DB supports multiple APIs, the targets can be any of the existing APIs. The following are some scenarios where you migrate data to Azure Cosmos DB: --* Move data from one Azure Cosmos DB container to another container within the Azure Cosmos DB account (could be in the same database or a different database). -* Move data from one Azure Cosmos DB account to another Azure Cosmos DB account (could be in the same region or a different region, same subscription or a different one). -* Move data from a source such as Azure blob storage, a JSON file, Oracle database, Couchbase, DynamoDB to Azure Cosmos DB. --In order to support migration paths from the various sources to the different Azure Cosmos DB APIs, there are multiple solutions that provide specialized handling for each migration path. This document lists the available solutions and describes their advantages and limitations. --## Factors affecting the choice of migration tool --The following factors determine the choice of the migration tool: --* **Online vs offline migration**: Many migration tools provide a path to do a one-time migration only. This means that the applications accessing the database might experience a period of downtime. Some migration solutions provide a way to do a live migration where there's a replication pipeline set up between the source and the target. --* **Data source**: The existing data can be in various data sources like Oracle, DB2, DataStax Cassandra, Azure SQL Database, PostgreSQL, etc.
The data can also be in an existing Azure Cosmos DB account and the intent of migration can be to change the data model or repartition the data in a container with a different partition key. --* **Azure Cosmos DB API**: For the API for NoSQL in Azure Cosmos DB, there are a variety of tools developed by the Azure Cosmos DB team which aid in the different migration scenarios. All of the other APIs have their own specialized set of tools developed and maintained by the community. Since Azure Cosmos DB supports these APIs at a wire protocol level, these tools should work as-is while migrating data into Azure Cosmos DB too. However, they might require custom handling for throttles as this concept is specific to Azure Cosmos DB. --* **Size of data**: Most migration tools work very well for smaller datasets. When the data set exceeds a few hundred gigabytes, the choices of migration tools are limited. --* **Expected migration duration**: Migrations can be configured to take place at a slow, incremental pace that consumes less throughput or can consume the entire throughput provisioned on the target Azure Cosmos DB container and complete the migration in less time. --## Azure Cosmos DB API for NoSQL --If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md). -* If you're migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](estimate-ru-with-capacity-planner.md). --|Migration type|Solution|Supported sources|Supported targets|Considerations| -|||||| -|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB for NoSQL|Azure Cosmos DB for NoSQL|• CLI-based; No set up needed. 
<br/>• Supports large datasets.| -|Offline|[Azure Cosmos DB desktop data migration tool](how-to-migrate-desktop-tool.md)|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•Azure Cosmos DB for Table<br/>•Azure Table storage<br/>•JSON Files<br/>•MongoDB<br/>•SQL Server<br/>|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•Azure Cosmos DB for Table<br/>•Azure Table storage<br/>•JSON Files<br/>•MongoDB<br/>•SQL Server<br/>|• Command-line tool<br/>• Open-source| -|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| •JSON/CSV Files<br/>•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•MongoDB <br/>•SQL Server<br/>•Table Storage<br/>•Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |• Easy to set up and supports multiple sources.<br/>• Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>• Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.| -|Offline|[Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md)|Azure Cosmos DB for NoSQL. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| • Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Needs a custom Spark setup. <br/>• Spark is sensitive to schema inconsistencies and this can be a problem during migration. 
| -|Online|[Azure Cosmos DB Spark connector + Change Feed sample](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration)|Azure Cosmos DB for NoSQL. <br/><br/>Uses Azure Cosmos DB Change Feed to stream all historic data as well as live updates.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| • Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Needs a custom Spark setup. <br/>• Spark is sensitive to schema inconsistencies and this can be a problem during migration. | -|Offline|[Custom tool with Azure Cosmos DB bulk executor library](migrate.md)| The source depends on your custom code | Azure Cosmos DB for NoSQL| • Provides checkpointing, dead-lettering capabilities which increases migration resiliency. <br/>• Suitable for very large datasets (10 TB+). <br/>• Requires custom setup of this tool running as an App Service. | -|Online|[Azure Cosmos DB Functions + ChangeFeed API](change-feed-functions.md)| Azure Cosmos DB for NoSQL | Azure Cosmos DB for NoSQL| • Easy to set up. <br/>• Works only if the source is an Azure Cosmos DB container. <br/>• Not suitable for large datasets. <br/>• Doesn't capture deletes from the source container. | -|Online|[Striim](cosmosdb-sql-api-migrate-data-striim.md)| •Oracle <br/>•Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources. |•Azure Cosmos DB for NoSQL <br/>• Azure Cosmos DB for Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets. | • Works with a large variety of sources like Oracle, DB2, SQL Server.<br/>• Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>• Supports larger datasets. 
<br/>• Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.| --## Azure Cosmos DB API for MongoDB --Follow the [pre-migration guide](mongodb/pre-migration-steps.md) to plan your migration. -* If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md). -* If you're migrating from a vCores- or server-based platform and you need guidance on estimating request units, consider reading our [guide to estimating RU/s based on vCores](convert-vcore-to-request-unit.md). --When you're ready to migrate, you can find detailed guidance on migration tools below -* [Offline migration using Intra-account container copy](intra-account-container-copy.md) -* [Offline migration using MongoDB native tools](mongodb/tutorial-mongotools-cosmos-db.md) -* [Offline migration using Azure database migration service (DMS)](../dms/tutorial-mongodb-cosmos-db.md) -* [Online migration using Azure database migration service (DMS)](../dms/tutorial-mongodb-cosmos-db-online.md) -* [Offline/online migration using Azure Databricks and Spark](mongodb/migrate-databricks.md) --Then, follow our [post-migration guide](mongodb/post-migration-optimization.md) to optimize your Azure Cosmos DB data estate once you've migrated. 
--A summary of migration pathways from your current solution to Azure Cosmos DB for MongoDB is provided below: --|Migration type|Solution|Supported sources|Supported targets|Considerations| -|||||| -|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB for MongoDB|Azure Cosmos DB for MongoDB|• Command-line tool; No set up needed.<br/>• Suitable for large datasets| -|Offline|[Azure Cosmos DB desktop data migration tool](how-to-migrate-desktop-tool.md)|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•Azure Cosmos DB for Table<br/>•Azure Table storage<br/>•JSON Files<br/>•MongoDB<br/>•SQL Server<br/>|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•Azure Cosmos DB for Table<br/>•Azure Table storage<br/>•JSON Files<br/>•MongoDB<br/>•SQL Server<br/>|• Command-line tool<br/>• Open-source| -|Online|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db-online.md)| MongoDB|Azure Cosmos DB for MongoDB |• Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets and takes care of replicating live changes. <br/>• Works only with other MongoDB sources.| -|Offline|[Azure Database Migration Service](../dms/tutorial-mongodb-cosmos-db.md)| MongoDB| Azure Cosmos DB for MongoDB| • Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets and takes care of replicating live changes. <br/>• Works only with other MongoDB sources.| -|Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db-mongodb-api.md)| •JSON/CSV Files<br/>•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB <br/>•MongoDB<br/>•SQL Server<br/>•Table Storage<br/>•Azure Blob Storage <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.
| •Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB <br/>• JSON files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets.| • Easy to set up and supports multiple sources. <br/>• Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Lack of checkpointing means that any issue during the course of migration would require a restart of the whole migration process.<br/>• Lack of a dead letter queue would mean that a few erroneous files could stop the entire migration process. <br/>• Needs custom code to increase read throughput for certain data sources.| -|Offline|Existing Mongo Tools ([mongodump](mongodb/tutorial-mongotools-cosmos-db.md#mongodumpmongorestore), [mongorestore](mongodb/tutorial-mongotools-cosmos-db.md#mongodumpmongorestore), [Studio3T](mongodb/connect-using-mongochef.md))|•MongoDB<br/>•Azure Cosmos DB for MongoDB<br/> | Azure Cosmos DB for MongoDB| • Easy to set up and integration. <br/>• Needs custom handling for throttles.| --## Azure Cosmos DB API for Cassandra --If you need help with capacity planning, consider reading our [guide to estimating RU/s using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md). --|Migration type|Solution|Supported sources|Supported targets|Considerations| -|||||| -|Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB API for Cassandra | Azure Cosmos DB API for Cassandra| • CLI-based; No set up needed. <br/>• Supports large datasets.| -|Offline|[cqlsh COPY command](cassandr#migrate-data-by-using-the-cqlsh-copy-command)|CSV Files | Azure Cosmos DB API for Cassandra| • Easy to set up. <br/>• Not suitable for large datasets. 
<br/>• Works only when the source is a Cassandra table.| -|Offline|[Copy table with Spark](cassandr#migrate-data-by-using-spark) | •Apache Cassandra<br/> | Azure Cosmos DB API for Cassandra | • Can make use of Spark capabilities to parallelize transformation and ingestion. <br/>• Needs configuration with a custom retry policy to handle throttles.| -|Online|[Dual-write proxy + Spark](cassandr)| •Apache Cassandra<br/>|•Azure Cosmos DB API for Cassandra <br/>| • Supports larger datasets, but careful attention required for setup and validation. <br/>• Open-source tools, no purchase required.| -|Online|[Striim (from Oracle DB/Apache Cassandra)](cassandr)| •Oracle<br/>•Apache Cassandra<br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported sources.|•Azure Cosmos DB API for NoSQL<br/>•Azure Cosmos DB API for Cassandra <br/><br/> See the [Striim website](https://www.striim.com/sources-and-targets/) for other supported targets.| • Works with a large variety of sources like Oracle, DB2, SQL Server. <br/>• Easy to build ETL pipelines and provides a dashboard for monitoring. <br/>• Supports larger datasets. <br/>• Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.| -|Online|[Arcion (from Oracle DB/Apache Cassandra)](cassandr)|•Oracle<br/>•Apache Cassandra<br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported sources. |Azure Cosmos DB API for Cassandra. <br/><br/>See the [Arcion website](https://www.arcion.io/) for other supported targets. | • Supports larger datasets. <br/>• Since this is a third-party tool, it needs to be purchased from the marketplace and installed in the user's environment.| --## Other APIs --For APIs other than the API for NoSQL, API for MongoDB and the API for Cassandra, there are various tools supported by each of the API's existing ecosystems. 
--### API for Gremlin --* [Graph bulk executor library](gremlin/bulk-executor-dotnet.md) -* [Gremlin Spark](https://github.com/Azure/azure-cosmosdb-spark/blob/2.4/samples/graphframes/main.scala) --### API for Table --* [Azure Cosmos DB desktop data migration tool](how-to-migrate-desktop-tool.md) --## Next steps --* Trying to do capacity planning for a migration to Azure Cosmos DB? - * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) - * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) -* Learn more by trying out the sample applications consuming the bulk executor library in [.NET](nosql/bulk-executor-dotnet.md) and [Java](bulk-executor-java.md). -* The bulk executor library is integrated into the Azure Cosmos DB Spark connector, to learn more, see [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md) article. -* Contact the Azure Cosmos DB product team by opening a support ticket under the "General Advisory" problem type and "Large (TB+) migrations" problem subtype for additional help with large scale migrations. |
cosmos-db | How To Configure Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md | Capabilities are features that can be added or removed to your API for MongoDB a | `EnableMongoRetryableWrites` | Enables support for retryable writes on the account. | Yes | | `EnableMongo16MBDocumentSupport` | Enables support for inserting documents up to 16 MB in size. | No | | `EnableUniqueCompoundNestedDocs` | Enables support for compound and unique indexes on nested fields if the nested field isn't an array. | No |-| `EnableTtlOnCustomPath` | Provides the ability to set a custom Time to Live (TTL) on any one field in a collection. | No | +| `EnableTtlOnCustomPath` | Provides the ability to set a custom Time to Live (TTL) on any one field in a collection. Setting TTL on partial unique index property is not supported.¹ | No | | `EnablePartialUniqueIndex` | Enables support for a unique partial index, so you have more flexibility to specify exactly which fields in documents you'd like to index. | No | | `EnableUniqueIndexReIndex` | Enables support for unique index re-indexing for Cosmos DB for MongoDB RU. ¹ | No | |
cosmos-db | How To Setup Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md | Privileges are actions that can be performed on a specific resource. For example A role has one or more privileges. Roles are assigned to users (zero or more) to enable them to perform the actions defined in those privileges. Roles are stored within a single database. ### Diagnostic log auditing-An another column called `userId` has been added to the `MongoRequests` table in the Azure Portal Diagnostics feature. This column identifies which user performed which data plan operation. The value in this column is empty when RBAC isn't enabled. +Another column called `userId` has been added to the `MongoRequests` table in the Azure Portal Diagnostics feature. This column identifies which user performed which data plane operation. The value in this column is empty when RBAC isn't enabled. ## Available Privileges #### Query and Write |
cosmos-db | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-python.md | Remove-AzResourceGroup @parameters 1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**. --## Next steps --In this quickstart, you learned how to create an Azure Cosmos DB for MongoDB account, create a database, and create a collection using the PyMongo driver. You can now dive deeper into the Azure Cosmos DB for MongoDB to import more data, perform complex queries, and manage your Azure Cosmos DB MongoDB resources. --> [!div class="nextstepaction"] -> [Options to migrate your on-premises or cloud data to Azure Cosmos DB](../migration-choices.md) |
cosmos-db | Dynamo To Cosmos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/dynamo-to-cosmos.md | The following JSON object represents the data format in Azure Cosmos DB } ``` -## Migrate your data --There are various options available to migrate your data to Azure Cosmos DB. To learn more, see the [Options to migrate your on-premises or cloud data to Azure Cosmos DB](../migration-choices.md) article. - ## Migrate your code This article is scoped to migrate an application's code to Azure Cosmos DB, which is the critical aspect of database migration. To help you reduce the learning curve, the following sections include a side-by-side code comparison between Amazon DynamoDB and Azure Cosmos DB's equivalent code snippet. |
cosmos-db | Secure Access To Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md | For information and sample code to configure RBAC for the Azure Cosmos DB for Mo Resource tokens provide access to the application resources within a database. Resource tokens: -- Provide access to specific containers, partition keys, documents, attachments, stored procedures, triggers, and UDFs.+- Provide access to specific containers, partition keys, documents, and attachments. - Are created when a [user](#users) is granted [permissions](#permissions) to a specific resource. - Are recreated when a permission resource is acted upon by a POST, GET, or PUT call. - Use a hash resource token specifically constructed for the user, resource, and permission. |
cost-management-billing | Limited Time Central Poland | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/limited-time-central-poland.md | These terms and conditions (hereinafter referred to as "terms") govern the limit |`F8s`|`F8s v2`|`Fsv2 Type2`|`Fsv2 Type3`| |`Fsv2 Type4`|`SQLG7_AMD_IaaS`|`SQLG7_AMD_NVME`| | -The 66 percent saving is based on one DS1 v2 Azure VM for Linux in the Poland Central region running for 36 months at a pay-as-you-go rate as of September 2023. Actual savings vary based on location, term commitment, instance type, or usage. The savings doesn't include operating system costs. +The 66 percent saving is based on one DS1 v2 Azure VM for Linux in the Poland Central region running for 36 months at a pay-as-you-go rate as of September 2023. Actual savings vary based on location, term commitment, instance type, or usage. The savings doesn't include operating system costs. For more information about how the savings are calculated, see [Poland Central reservation savings](/legal/cost-management-billing/reservations/poland-central-limited-time). **Eligibility** - The Offer is open to individuals who meet the following criteria: |
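The 66 percent headline above is a straightforward percentage comparison between the committed total and the pay-as-you-go total for the term. The sketch below uses purely illustrative numbers, not actual Azure pricing:

```python
def savings_percent(paygo_hourly, committed_total, hours):
    """Percent saved versus paying the pay-as-you-go rate for every hour."""
    paygo_total = paygo_hourly * hours
    return round(100 * (1 - committed_total / paygo_total))

# Illustrative only: a VM at a hypothetical $0.100/hour pay-as-you-go rate
# over a 36-month term (~26,280 hours) versus a committed total at 34% of
# that cost reproduces the headline 66 percent figure.
hours = 36 * 730
print(savings_percent(0.100, 0.034 * hours, hours))  # → 66
```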
cost-management-billing | Savings Plan Compute Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md | Usage from [savings plan-eligible resources](https://azure.microsoft.com/pricing In addition, virtual machines used with the [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/products/kubernetes-service/) and [Azure Virtual Desktop (AVD)](https://azure.microsoft.com/products/virtual-desktop/) are eligible for the savings plan. -It's important to consider your hourly spend when you determine your hourly commitment. Azure provides commitment recommendations based on usage from your last 30 days. The recommendations may be found in: +It's important to consider your hourly spend when you determine your hourly commitment. Azure provides commitment recommendations based on usage from your last 30 days. The recommendations are found in: - [Azure Advisor](https://portal.azure.com/#view/Microsoft_Azure_Expert/AdvisorMenuBlade/%7E/score) - The savings plan purchase experience in the [Azure portal](https://portal.azure.com/) The complete list of savings plan eligible products is found in your price sheet ## How is a savings plan billed? -The savings plan is charged to the payment method tied to the subscription. The savings plan cost is deducted from your Azure Prepayment (previously called monetary commitment) balance, if available. When your Azure Prepayment balance doesn't cover the cost of the savings plan, you're billed the overage. If you have a subscription from an individual plan with pay-as-you-go rates, the credit card you have in your account is billed immediately for up-front and for monthly purchases. Monthly payments that's you've made appear on your invoice. When you're billed by invoice, you see the charges on your next invoice. +The savings plan is charged to the payment method tied to the subscription. 
The savings plan cost is deducted from your Azure Prepayment (previously called monetary commitment) balance, if available. When your Azure Prepayment balance doesn't cover the cost of the savings plan, you're billed the overage. If you have a subscription from an individual plan with pay-as-you-go rates, the credit card you have in your account is billed immediately for up-front and for monthly purchases. Monthly payments that you've made appear on your invoice. When you get billed by invoice, you see the charges on your next invoice. ## Who can buy a savings plan? Savings plan purchases can't be canceled or refunded. ## Charges covered by savings plan -- Virtual Machines - A savings plan only covers the virtual machine compute costs. It doesn't cover other software, Windows, networking, or storage charges. Virtual machines don't include BareMetal Infrastructure or the Av1 series. Spot VMs aren't covered by savings plans.+- Virtual Machines - A savings plan only covers the virtual machine compute costs. It doesn't cover other software, Windows, networking, or storage charges. Virtual machines don't include BareMetal Infrastructure or the :::no-loc text="Av1"::: series. Spot VMs aren't covered by savings plans. - Azure Dedicated Hosts - Only the compute costs are included with the dedicated hosts. - Container Instances+- Azure Container Apps - Azure Premium Functions - Azure App Services - The Azure savings plan for compute can only be applied to the App Service upgraded Premium v3 plan and the upgraded Isolated v2 plan. - On-demand Capacity Reservation For Windows virtual machines and SQL Database, the savings plan discount doesn't ## Need help? Contact us. -If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft will only provide Azure savings plan for compute expert support requests in English. 
+If you have Azure savings plan for compute questions, contact your account team, or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft provides Azure savings plan for compute expert support in English only. ## Next steps |
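A rough mental model of the hourly commitment described in this article: eligible usage is billed at discounted rates against the commitment, and anything beyond it bills at pay-as-you-go rates. This is a simplified sketch with hypothetical numbers; Azure's actual benefit application (for example, which resources the benefit is applied to first) is more involved:

```python
def hourly_charge(paygo_usage, commitment, discount):
    """Simplified split of one hour's eligible usage between the savings
    plan commitment (billed at discounted rates) and pay-as-you-go overage.
    Hypothetical model; not Azure's exact benefit-application logic."""
    discounted = paygo_usage * (1 - discount)
    if discounted <= commitment:
        # Commitment covers everything; you pay the commitment regardless.
        return commitment
    # The commitment covers usage worth commitment/(1 - discount) at
    # pay-as-you-go rates; the rest bills at the pay-as-you-go price.
    overage = paygo_usage - commitment / (1 - discount)
    return commitment + overage

print(hourly_charge(2.00, 5.00, 0.20))   # light hour: pay the $5 commitment
print(hourly_charge(10.00, 5.00, 0.20))  # heavy hour: $5 + $3.75 overage
```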
data-factory | How To Diagnostic Logs And Metrics For Managed Airflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-diagnostic-logs-and-metrics-for-managed-airflow.md | Azure Data Factory offers comprehensive metrics for Airflow Integration Runtimes 5. You can set up an alert rule that triggers when specific conditions are met by your metrics. Refer to the guide: [Overview of Azure Monitor alerts - Azure Monitor | Microsoft Learn](/azure/azure-monitor/alerts/alerts-overview) -6. Click on Save to Dashboard, once your chat is complete, else your chart disappears. +6. Click **Save to Dashboard** once your chart is complete; otherwise, your chart disappears. :::image type="content" source="media/diagnostics-logs-and-metrics-for-managed-airflow/save-to-dashboard.png" alt-text="Screenshot that shows save to dashboard." lightbox="media/diagnostics-logs-and-metrics-for-managed-airflow/save-to-dashboard.png"::: For more information: [https://learn.microsoft.com/azure/azure-monitor/reference/supported-metrics/microsoft-datafactory-factories-metrics](/azure/azure-monitor/reference/supported-metrics/microsoft-datafactory-factories-metrics) |
data-factory | Tutorial Pipeline Return Value | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-return-value.md | There are a few options for value types, including Type Name | Description -- | -- String | The most straightforward of all. It expects a string value.-Expression | It allows you to reference output from previous activities. +Expression | It allows you to reference output from previous activities. You can use string interpolation here to include in-line expression values such as ```"The value is @{guid()}"```. Array | It expects an array of _string values_. Press the "Enter" key to separate values in the array Boolean | True or False Null | Signals a placeholder status; the value is constant _null_ |
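The string interpolation mentioned for the `Expression` type can be illustrated with a toy resolver. This is not Data Factory's expression engine, only a sketch of the `@{...}` placeholder shape, limited to zero-argument functions with a stand-in `guid` implementation:

```python
import re

def interpolate(template, funcs):
    """Toy resolver for @{name()} placeholders, mimicking the shape of
    string interpolation such as "The value is @{guid()}". Illustrative
    only; not the Data Factory expression engine."""
    return re.sub(r"@\{(\w+)\(\)\}", lambda m: str(funcs[m.group(1)]()), template)

# A stand-in "guid" function with a fixed value keeps the example deterministic.
print(interpolate("The value is @{guid()}", {"guid": lambda: "abc123"}))
# → The value is abc123
```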
ddos-protection | Test Through Simulations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md | Our testing partners' simulation environments are built within Azure. You can on For this tutorial, you'll create a test environment that includes: - A DDoS protection plan - A virtual network-- A Azure Bastion host +- An Azure Bastion host - A load balancer - Two virtual machines. BreakingPoint Cloud offers: - A simplified user interface and an “out-of-the-box” experience. - A pay-per-use model.-- Pre-defined DDoS test sizing and test duration profiles enable safer validations by eliminating the potential of configuration errors.v+- Predefined DDoS test sizing and test duration profiles enable safer validations by eliminating the potential of configuration errors. > [!NOTE] > For BreakingPoint Cloud, you must first [create a BreakingPoint Cloud account](https://www.ixiacom.com/products/breakingpoint-cloud). |
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | If you're looking for the latest release notes, you can find them in the [What's ## Planned changes -| Planned change | Estimated date for change | -|--|--| -| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023| -| [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | August 2023 | -| [Classic connectors for multicloud will be retired](#classic-connectors-for-multicloud-will-be-retired) | September 2023 | -| [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | September 2023 | -| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | November 2023 | -| [Changes to Attack Path's Azure Resource Graph table scheme](#changes-to-attack-paths-azure-resource-graph-table-scheme) | November 2023 | -| [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | August 2024 | +| Planned change | Announcement date | Estimated date for change | +|--|--|--| +| [Four alerts are set to be deprecated](#four-alerts-are-set-to-be-deprecated) | October 23, 2023 | November 23, 2023 | +| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | | June 2023| +| [Preview 
alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | | August 2023 | +| [Classic connectors for multicloud will be retired](#classic-connectors-for-multicloud-will-be-retired) | | September 2023 | +| [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | | September 2023 | +| [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | | November 2023 | +| [Changes to Attack Path's Azure Resource Graph table scheme](#changes-to-attack-paths-azure-resource-graph-table-scheme) | | November 2023 | +| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | +| [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 | ++## Four alerts are set to be deprecated ++Announcement date: October 23, 2023 +Estimated date for change: November 23, 2023 -### Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled" +As part of our quality improvement process, the following security alerts are set to be deprecated: ++- `Possible data exfiltration detected (K8S.NODE_DataEgressArtifacts)` +- `Executable found running from a suspicious location (K8S.NODE_SuspectExecutablePath)` +- `Suspicious process termination burst (VM_TaskkillBurst)` +- `PsExec execution detected (VM_RunByPsExec)` ++## Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled" **Estimated date for change: June 2023** The `Key Vaults should have purge protection enabled` recommendation is deprecat See the [full index of Azure Policy built-in policy definitions for Key Vault](../key-vault/policy-reference.md) -### Preview alerts for DNS servers 
to be deprecated +## Preview alerts for DNS servers to be deprecated **Estimated date for change: August 2023** The following table lists the alerts to be deprecated: | Anonymity network activity (Preview) | DNS_DarkWeb | | Anonymity network activity using web proxy (Preview) | DNS_DarkWebProxy | -### Classic connectors for multicloud will be retired +## Classic connectors for multicloud will be retired **Estimated date for change: September 15, 2023** How to migrate to the native security connectors: - [Connect your AWS account to Defender for Cloud](quickstart-onboard-aws.md) - [Connect your GCP project to Defender for Cloud](quickstart-onboard-gcp.md) -### Change to the Log Analytics daily cap +## Change to the Log Analytics daily cap Azure monitor offers the capability to [set a daily cap](../azure-monitor/logs/daily-cap.md) on the data that is ingested on your Log analytics workspaces. However, Defender for Cloud security events are currently not supported in those exclusions. At that time, all billable data types will be capped if the daily cap is met. Th Learn more about [workspaces with Microsoft Defender for Cloud](../azure-monitor/logs/daily-cap.md#workspaces-with-microsoft-defender-for-cloud). +## DevOps Resource Deduplication for Defender for DevOps ++**Estimated date for change: November 2023** ++To improve the Defender for DevOps user experience and enable further integration with Defender for Cloud's rich set of capabilities, Defender for DevOps will no longer support duplicate instances of a DevOps organization to be onboarded to an Azure tenant. ++If you don't have an instance of a DevOps organization onboarded more than once to your organization, no further action is required. If you do have more than one instance of a DevOps organization onboarded to your tenant, the subscription owner will be notified and will need to delete the DevOps Connector(s) they don't want to keep by navigating to Defender for Cloud Environment Settings. 
+Customers will have until November 14, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps. For example, if Organization Contoso exists in both connectorA and connectorB, and connectorB was created after connectorA, then connectorA will be removed from Defender for DevOps. ++## Changes to Attack Path's Azure Resource Graph table scheme ++**Estimated date for change: November 2023** ++The Attack Path's Azure Resource Graph (ARG) table scheme will be updated. The `attackPathType` property will be removed and additional properties will be added. ++## Defender for Cloud plan and strategy for the Log Analytics agent deprecation ++**Estimated date for change: August 2024** ++The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA) will be [retired in August 2024.](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/) As a result, features of the two Defender for Cloud plans that rely on the Log Analytics agent are impacted, and they have updated strategies: [Defender for Servers](#defender-for-servers) and [Defender for SQL Server on machines](#defender-for-sql-server-on-machines). -#### Key strategy points +### Key strategy points - The Azure monitoring Agent (AMA) won’t be a requirement of the Defender for Servers offering, but will remain required as part of Defender for SQL. - Defender for Servers MMA-based features and capabilities will be deprecated in their Log Analytics version in August 2024, and delivered over alternative infrastructures, before the MMA deprecation date. - In addition, the currently shared autoprovisioning process that provides the installation and configuration of both agents (MMA/AMA), will be adjusted accordingly. 
--#### Defender for Servers +### Defender for Servers The following table explains how each capability will be provided after the Log Analytics agent retirement: The following table explains how each capability will be provided after the Log | File integrity monitoring | The [current GA version](file-integrity-monitoring-enable-log-analytics.md) based on the Log Analytics agent won’t be available after August 2024. The FIM [Public Preview version](file-integrity-monitoring-enable-ama.md) based on Azure Monitor Agent (AMA), will be deprecated when the alternative is provided over Defender for Endpoint.| A new version of this feature will be provided based on Microsoft Defender for Endpoint integration by April 2024. | | The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion | The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion over the defined tables will remain supported via the AMA agent for the machines under subscriptions covered by Defender for Servers P2. Every machine is eligible for the benefit only once, even if both Log Analytics agent and Azure Monitor agent are installed on it. | | -##### Log analytics and Azure Monitoring agents autoprovisioning experience +#### Log analytics and Azure Monitoring agents autoprovisioning experience -The current provisioning process that provides the installation and configuration of both agents (MMA/AMA), will be adjusted according to the plan mentioned above:  +The current provisioning process that provides the installation and configuration of both agents (MMA/AMA), will be adjusted according to the plan mentioned above: -1. MMA auto-provisioning mechanism and its related policy initiative will remain optional and supported until August 2024 through the Defender for Cloud platform.    -1. In October 2023:  - 1. 
The current shared ‘Log Analytics agent’/’Azure Monitor agent’ auto-provisioning mechanism will be updated and applied to ‘Log Analytics agent’ only.   +1. MMA auto-provisioning mechanism and its related policy initiative will remain optional and supported until August 2024 through the Defender for Cloud platform. +1. In October 2023: + 1. The current shared ‘Log Analytics agent’/’Azure Monitor agent’ auto-provisioning mechanism will be updated and applied to ‘Log Analytics agent’ only. - 1. **Azure Monitor agent** (AMA) related Public Preview policy initiatives will be deprecated and replaced with the new auto-provisioning process for Azure Monitor agent (AMA), targeting only Azure registered SQL servers (SQL Server on Azure VM/ Arc-enabled SQL Server).  + 1. **Azure Monitor agent** (AMA) related Public Preview policy initiatives will be deprecated and replaced with the new auto-provisioning process for Azure Monitor agent (AMA), targeting only Azure registered SQL servers (SQL Server on Azure VM/ Arc-enabled SQL Server). -1. Current customers with AMA with the Public Preview policy initiative enabled will still be supported but are recommended to migrate to the new policy.  +1. Current customers with AMA with the Public Preview policy initiative enabled will still be supported but are recommended to migrate to the new policy. To ensure the security of your servers and receive all the security updates from Defender for Servers, make sure to have [Defender for Endpoint integration](integration-defender-for-endpoint.md) and [agentless disk scanning](concept-agentless-data-collection.md) enabled on your subscriptions. This will also keep your servers up-to-date with the alternative deliverables. 
-#### Agents migration planning  +### Agents migration planning -**First, all Defender for Servers customers are advised to enable Defender for Endpoint integration and agentless disk scanning as part of the Defender for Servers offering, at no additional cost.** This will ensure you are automatically covered with the new alternative deliverables, with no additional onboarding required.     +**First, all Defender for Servers customers are advised to enable Defender for Endpoint integration and agentless disk scanning as part of the Defender for Servers offering, at no additional cost.** This will ensure you are automatically covered with the new alternative deliverables, with no additional onboarding required. -Following that, plan your migration plan according to your organization requirements:  +Following that, plan your migration plan according to your organization requirements: |Azure Monitor agent (AMA) required (for Defender for SQL or other scenarios)|FIM/EPP discovery/Baselined is required as part of Defender for Server|What should I do| | -- | -- | -- |-|No |Yes |You can remove MMA starting April 2024, using GA version of Defender for Server capabilities according to your needs (preview versions will be available earlier)  | -|No |No |You can remove MMA starting now | -|Yes |No |You can start migration from MMA to AMA now | -|Yes |Yes |You can either start migration from MMA to AMA starting April 2024 or alternatively, you can use both agents side by side starting now. 
| +|No|Yes|You can remove MMA starting April 2024, using GA version of Defender for Server capabilities according to your needs (preview versions will be available earlier)| +|No|No|You can remove MMA starting now| +|Yes|No|You can start migration from MMA to AMA now| +|Yes|Yes|You can either start migration from MMA to AMA starting April 2024 or alternatively, you can use both agents side by side starting now.| -**Customers with Log analytics Agent** **(MMA) enabled**  +**Customers with Log analytics Agent** **(MMA) enabled** -- If the following features are required in your organization: File Integrity Monitoring (FIM), Endpoint Protection recommendations, OS misconfigurations (security baselines recommendations), you can start retiring from MMA in April 2024 when an alternative will be delivered in GA (preview versions will be available earlier). +- If the following features are required in your organization: File Integrity Monitoring (FIM), Endpoint Protection recommendations, OS misconfigurations (security baselines recommendations), you can start retiring from MMA in April 2024 when an alternative will be delivered in GA (preview versions will be available earlier). -- If the features mentioned above are required in your organization, and Azure Monitor agent (AMA) is required for other services as well, you can start migrating from MMA to AMA in April 2024. Alternatively, use both MMA and AMA to get all GA features, then remove MMA in April 2024. +- If the features mentioned above are required in your organization, and Azure Monitor agent (AMA) is required for other services as well, you can start migrating from MMA to AMA in April 2024. Alternatively, use both MMA and AMA to get all GA features, then remove MMA in April 2024. -- If the features mentioned above are not required, and Azure Monitor agent (AMA) is required for other services, you can start migrating from MMA to AMA now. 
However, note that the preview Defender for Servers capabilities over AMA will be deprecated in April 2024. +- If the features mentioned above are not required, and Azure Monitor agent (AMA) is required for other services, you can start migrating from MMA to AMA now. However, note that the preview Defender for Servers capabilities over AMA will be deprecated in April 2024. -**Customers with Azure Monitor agent (AMA) enabled**  +**Customers with Azure Monitor agent (AMA) enabled** -No action is required from your end.  +No action is required from your end. - You’ll receive all Defender for Servers GA capabilities through Agentless and Defender for Endpoint. The following features will be available in GA in April 2024: File Integrity Monitoring (FIM), Endpoint Protection recommendations, OS misconfigurations (security baselines recommendations). The preview Defender for Servers capabilities over AMA will be deprecated in April 2024. > [!IMPORTANT] > For more information about how to plan for this change, see [Microsoft Defender for Cloud - strategy and plan towards Log Analytics Agent (MMA) deprecation](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341). -#### Defender for SQL Server on machines +### Defender for SQL Server on machines The Defender for SQL Server on machines plan relies on the Log Analytics agent (MMA) / Azure monitoring agent (AMA) to provide Vulnerability Assessment and Advanced Threat Protection to IaaS SQL Server instances. The plan supports Log Analytics agent autoprovisioning in GA, and Azure Monitoring agent autoprovisioning in Public Preview. The following section describes the planned introduction of a new and improved S | SQL-targeted AMA autoprovisioning GA release | December 2023 | GA release of a SQL-targeted AMA autoprovisioning process. Following the release, it will be defined as the default option for all new customers. 
| | MMA deprecation | August 2024 | The current MMA autoprovisioning process and its related policy initiative will be deprecated. It can still be used customers, but they won't be eligible for support. | -### DevOps Resource Deduplication for Defender for DevOps --**Estimated date for change: November 2023** --To improve the Defender for DevOps user experience and enable further integration with Defender for Cloud's rich set of capabilities, Defender for DevOps will no longer support duplicate instances of a DevOps organization to be onboarded to an Azure tenant. --If you don't have an instance of a DevOps organization onboarded more than once to your organization, no further action is required. If you do have more than one instance of a DevOps organization onboarded to your tenant, the subscription owner will be notified and will need to delete the DevOps Connector(s) they don't want to keep by navigating to Defender for Cloud Environment Settings. --Customers will have until November 14, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps. For example, if Organization Contoso exists in both connectorA and connectorB, and connectorB was created after connectorA, then connectorA will be removed from Defender for DevOps. --### Changes to Attack Path's Azure Resource Graph table scheme +## Deprecating two security incidents **Estimated date for change: November 2023** -The Attack Path's Azure Resource Graph (ARG) table scheme will be updated. The `attackPathType` property wil be removed and additional properties will be added. 
--### Defender for Cloud plan and strategy for the Log Analytics agent deprecation --**Estimated date for change: August 2024** --The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA) will be [retired in August 2024.](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/) As a result, features of the two Defender for Cloud plans that rely on the Log Analytics agent are impacted, and they have updated strategies: [Defender for Servers](#defender-for-servers) and [Defender for SQL Server on machines](#defender-for-sql-server-on-machines). --## Deprecating two security incidents --**Estimated date for change: November 2023** --Following quality improvement process, the following security incidents are set to be deprecated: 'Security incident detected suspicious virtual machines activity' and 'Security incident detected on multiple machines'. +Following quality improvement process, the following security incidents are set to be deprecated: `Security incident detected suspicious virtual machines activity` and `Security incident detected on multiple machines`. ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).- |
dns | Dns Private Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md | An inbound endpoint enables name resolution from on-premises or other private lo The inbound endpoint requires a subnet in the VNet where itΓÇÖs provisioned. The subnet can only be delegated to **Microsoft.Network/dnsResolvers** and can't be used for other services. DNS queries received by the inbound endpoint ingress to Azure. You can resolve names in scenarios where you have Private DNS zones, including VMs that are using auto registration, or Private Link enabled services. > [!NOTE]-> The IP address assigned to an inbound endpoint can be static or dynamic. If you select static, you can't choose a [reserved IP address in the subnet](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets). If you choose a dynamic IP address, the fifth available IP address in the subnet is assigned. For example, 10.0.0.4 is the fifth IP address in the 10.0.0.0/28 subnet. If the inbound endpoint is reprovisioned, this IP address could change, but normally the 5th IP address in the subnet is used again. The dynamic IP address does not change unless the inbound endpoint is reprovisioned. +> The IP address assigned to an inbound endpoint can be specified as **static** or **dynamic**. For more information, see [static and dynamic endpoint IP addresses](private-resolver-endpoints-rulesets.md#static-and-dynamic-endpoint-ip-addresses). ## Outbound endpoints |
dns | Private Resolver Endpoints Rulesets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-endpoints-rulesets.md | The architecture for Azure DNS Private Resolver is summarized in the following f As the name suggests, inbound endpoints ingress to Azure. Inbound endpoints provide an IP address to forward DNS queries from on-premises and other locations outside your virtual network. DNS queries sent to the inbound endpoint are resolved using Azure DNS. Private DNS zones that are linked to the virtual network where the inbound endpoint is provisioned are resolved by the inbound endpoint. -The IP address associated with an inbound endpoint is always part of the private virtual network address space where the private resolver is deployed. No other resources can exist in the same subnet with the inbound endpoint. The following screenshot shows an inbound endpoint with a virtual IP address (VIP) of **10.10.0.4** inside the subnet `snet-E-inbound` provisioned within a virtual network with address space of 10.10.0.0/16. +The IP address associated with an inbound endpoint is always part of the private virtual network address space where the private resolver is deployed. No other resources can exist in the same subnet with the inbound endpoint. -![View inbound endpoints](./media/private-resolver-endpoints-rulesets/east-inbound-endpoint.png) +### Static and dynamic endpoint IP addresses -> [!NOTE] -> The IP address assigned to an inbound endpoint can be static or dynamic. If you select static, you can't choose a [reserved IP address in the subnet](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets). If you choose a dynamic IP address, the fifth available IP address in the subnet is assigned. For example, 10.0.0.4 is the fifth IP address in the 10.0.0.0/28 subnet. 
If the inbound endpoint is reprovisioned, this IP address could change, but normally the 5th IP address in the subnet is used again. The dynamic IP address does not change unless the inbound endpoint is reprovisioned. +The IP address assigned to an inbound endpoint can be static or dynamic. If you select static, you can't choose a [reserved IP address in the subnet](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets). If you choose a dynamic IP address, the fifth available IP address in the subnet is assigned. For example, 10.10.0.4 is the fifth IP address in the 10.10.0.0/28 subnet (.0, .1, .2, .3, .4). If the inbound endpoint is reprovisioned, this IP address could change, but normally the 5th IP address in the subnet is used again. The dynamic IP address does not change unless the inbound endpoint is reprovisioned. The following example specifies a static IP address: ++<br><img src="./media/private-resolver-endpoints-rulesets/static-inbound-endpoint.png" alt="A screenshot displaying how to choose a static IP address." width="60%"> ++The following example shows provisioning of an inbound endpoint with a virtual IP address (VIP) of **10.10.0.4** inside the subnet `snet-E-inbound` within a virtual network with address space of 10.10.0.0/16. ++![A screenshot showing inbound endpoints.](./media/private-resolver-endpoints-rulesets/east-inbound-endpoint.png) ## Outbound endpoints |
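The fifth-address arithmetic described above can be checked with Python's standard `ipaddress` module (a quick illustration of the addressing math, not part of the Azure tooling):

```python
import ipaddress

# Dynamic assignment, as described above, uses the fifth IP address in the
# inbound endpoint's subnet: .0 through .3 are the network address and
# Azure-reserved addresses, so index 4 is the first assignable one.
subnet = ipaddress.ip_network("10.10.0.0/28")
fifth = subnet[4]
print(fifth)  # 10.10.0.4
```

The same indexing works for any subnet size, which is why the assigned address is predictable even after a reprovision.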
firewall | Deploy Multi Public Ip Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-multi-public-ip-powershell.md | This feature enables the following scenarios: - **DNAT** - You can translate multiple standard port instances to your backend servers. For example, if you have two public IP addresses, you can translate TCP port 3389 (RDP) for both IP addresses. - **SNAT** - Additional ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. Azure Firewall randomly selects the source public IP address to use for a connection. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration. -Azure Firewall with multiple public IP addresses is available via the Azure portal, Azure PowerShell, Azure CLI, REST, and templates. You can deploy an Azure Firewall with up to 250 public IP addresses. +Azure Firewall with multiple public IP addresses is available via the Azure portal, Azure PowerShell, Azure CLI, REST, and templates.\ +You can deploy an Azure Firewall with up to 250 public IP addresses; however, DNAT destination rules also count toward the 250 maximum. +Public IPs + DNAT destination rules = 250 max. The following Azure PowerShell examples show how you can configure, add, and remove public IP addresses for Azure Firewall. |
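The combined cap noted above (public IP addresses plus DNAT destination rules must not exceed 250) can be sketched as a simple pre-deployment check; the function name and counts here are illustrative, not part of any Azure SDK:

```python
MAX_PUBLIC_IPS_AND_DNAT_RULES = 250  # combined Azure Firewall maximum noted above

def within_firewall_limit(public_ip_count: int, dnat_rule_count: int) -> bool:
    """Return True if the planned configuration stays within the combined cap."""
    return public_ip_count + dnat_rule_count <= MAX_PUBLIC_IPS_AND_DNAT_RULES

print(within_firewall_limit(100, 150))  # True: exactly at the 250 cap
print(within_firewall_limit(200, 100))  # False: 300 exceeds the cap
```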
firewall | Protect Azure Kubernetes Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md | az group create --name $RG --location $LOC Create a virtual network with two subnets to host the AKS cluster and the Azure Firewall. Each has their own subnet. Let's start with the AKS network. -``` +```azurecli # Dedicated virtual network with AKS subnet az network vnet create \ |
healthcare-apis | Dicom Digital Pathology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-digital-pathology.md | Although the [DICOM standard now supports whole-slide images (WSI)](https://dico Here are some sample open-source tools to build your own converter: -- [Orthanc - DICOM Server (orthanc-server.com)](https://www.orthanc-server.com/static.php?page=wsi)-- [OpenSlide](https://github.com/openslide/openslide)-+- [WSIDicomizer](https://github.com/imi-bigpicture/wsidicomizer) ### Storage We recommend using any WSI Viewer that can be configured with a DICOMWeb service Sample open-source viewer -- [Slim (MGB)](https://github.com/herrmannlab/slim)+- [OHIF Viewer](https://github.com/microsoft/dicom-ohif) +- [Slim](https://github.com/herrmannlab/slim) Follow the [CORS guidelines](configure-cross-origin-resource-sharing.md) if the Viewer directly interacts with the DICOM service +++ ## Find an ISV partner Reach out to dicom-support@microsoft.com if you want to work with our partner ISVs that provide end-to-end solutions and support. |
healthcare-apis | Github Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/github-projects.md | Title: Related GitHub Projects for Azure Health Data Services -description: List all Open Source (GitHub) repositories +description: Lists all Open Source (GitHub) repositories Previously updated : 06/06/2022 Last updated : 10/18/2023 -# GitHub Projects ++# GitHub projects We have many open-source projects on GitHub that provide you the source code and instructions to deploy services for various uses. You're always welcome to visit our GitHub repositories to learn and experiment with our features and products. We have many open-source projects on GitHub that provide you the source code and * The [Azure Health Data Services Toolkit](https://github.com/microsoft/azure-health-data-services-toolkit) helps you extend the functionality of Azure Health Data Services by providing a consistent toolset to build custom operations to modify the core service behavior. 
-## FHIR Server +## FHIR server * [microsoft/fhir-server](https://github.com/microsoft/fhir-server/): open-source FHIR Server, which is the basis for FHIR service * For information about the latest releases, see [Release notes](https://github.com/microsoft/fhir-server/releases) We have many open-source projects on GitHub that provide you the source code and * Integrated with the FHIR service and FHIR server for Azure in the form of `de-identified $export` operation * For FHIR data, it can also be used with Azure Data Factory (ADF) pipeline by reading FHIR data from Azure blob storage and writing back the anonymized data -## Analytic Pipelines +## Analytics Pipelines FHIR Analytics Pipelines help you build components and pipelines for rectangularizing and moving FHIR data from Azure FHIR servers namely [Azure Health Data Services FHIR Server](./../healthcare-apis/index.yml), [Azure API for FHIR](./../healthcare-apis/azure-api-for-fhir/index.yml), and [FHIR Server for Azure](https://github.com/microsoft/fhir-server) to [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/) and thereby make it available for analytics with [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), [Power BI](https://powerbi.microsoft.com/), and [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/). In this article, you learned about some of Azure Health Data Services open-sourc >[Overview of Azure Health Data Services](healthcare-apis-overview.md) (FHIR®) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.+ |
key-vault | Hsm Protected Keys Byok | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-byok.md | Refer to your HSM vendor's documentation to download and install the BYOK tool. Transfer the BYOK file to your connected computer. > [!NOTE]-> Importing RSA 1,024-bit keys is not supported. Importing Elliptic Curve key with curve P-256K is not supported. +> Importing RSA 1,024-bit keys is not supported. Importing Elliptic Curve key with curve P-256K is supported. > > **Known issue**: Importing an RSA 4K target key from Luna HSMs is only supported with firmware 7.4.0 or newer. |
key-vault | Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md | -# Full backup and restore +# Full backup and restore and selective key restore > [!NOTE] > This feature is only available for resource type managed HSM. You must provide the following information to execute a full backup: Backup is a long-running operation but will immediately return a Job ID. You can check the status of the backup process using this Job ID. The backup process creates a folder inside the designated container with the following naming pattern **`mhsm-{HSM_NAME}-{YYYY}{MM}{DD}{HH}{mm}{SS}`**, where HSM_NAME is the name of the managed HSM being backed up and YYYY, MM, DD, HH, mm, SS are the year, month, date, hour, minutes, and seconds of date/time in UTC when the backup command was received. -While the backup is in progress, the HSM may not operate at full throughput as some HSM partitions will be busy performing the backup operation. +While the backup is in progress, the HSM might not operate at full throughput as some HSM partitions will be busy performing the backup operation. > [!IMPORTANT] > Public internet access must **not** be blocked from the storage accounts being used to backup or restore resources. sas=$(az storage container generate-sas -n mhsmdemobackupcontainer --account-nam az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer --storage-container-SAS-token $sas --backup-folder mhsm-mhsmdemo-2020083120161860 ``` +## Selective key restore ++Selective key restore allows you to restore one individual key with all its key versions from a previous backup to an HSM. 
++``` +az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer --storage-container-SAS-token $sas --backup-folder mhsm-mhsmdemo-2020083120161860 --key-name rsa-key2 +``` + ## Next Steps - See [Manage a Managed HSM using the Azure CLI](key-management.md). - Learn more about [Managed HSM Security Domain](security-domain.md) |
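The backup-folder naming pattern documented above, `mhsm-{HSM_NAME}-{YYYY}{MM}{DD}{HH}{mm}{SS}` using the UTC time when the backup command was received, can be reproduced with a short sketch. This is an illustration of the stated pattern only, not a call into any Key Vault SDK:

```python
from datetime import datetime, timezone

def backup_folder_name(hsm_name: str, received_at: datetime) -> str:
    # Pattern documented above: mhsm-{HSM_NAME}-{YYYY}{MM}{DD}{HH}{mm}{SS},
    # formatted from the UTC time at which the backup command was received.
    return f"mhsm-{hsm_name}-{received_at:%Y%m%d%H%M%S}"

print(backup_folder_name("mhsmdemo", datetime(2020, 8, 31, 20, 16, 18, tzinfo=timezone.utc)))
# mhsm-mhsmdemo-20200831201618
```

Knowing the pattern is useful when passing `--backup-folder` to restore commands, since the folder name encodes which HSM was backed up and when.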
load-balancer | Egress Only | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/egress-only.md | Title: Outbound-only load balancer configuration -description: In this article, learn about how to create an internal load balancer with outbound NAT. +description: This article provides a step-by-step guide on how to configure an "egress only" setup using Azure Load Balancer with outbound NAT and Azure Bastion. Deploy public and internal load balancers to create outbound connectivity for VMs behind an internal load balancer. Previously updated : 12/27/2022 Last updated : 10/24/2023 This configuration provides outbound NAT for an internal load balancer scenario, - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -## Create virtual network and load balancers -In this section, you'll create a virtual network and subnet for the load balancers and the virtual machine. You'll next create the load balancers. --### Create the virtual network --In this section, you'll create the virtual network and subnets for the virtual machine, load balancer, and bastion host. -- > [!IMPORTANT] -- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] -- > --1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results. --1. In **Virtual networks**, select **+ Create**. --1. In **Create virtual network**, enter or select this information in the **Basics** tab: -- | **Setting** | **Value** | - ||--| - | **Project Details** | | - | Subscription | Select your Azure subscription | - | Resource Group | Select **Create new**. </br> In **Name** enter **myResourceGroupLB** </br> Select **OK**. | - | **Instance details** | | - | Name | Enter **myVNet** | - | Region | Select **(US) East US 2** | --1. Select the **Security** tab. --1. Under **Azure Bastion**, select **Enable Azure Bastion**. 
Enter this information: -- | Setting | Value | - |--|-| - | Azure Bastion name | Enter **myBastionHost** | - --1. Select the **IP addresses** tab or select the **Next: IP addresses** button at the bottom of the page. --1. In the **IP addresses** tab, select **Add an IP address space**, and enter this information: -- | Setting | Value | - |--|-| - | Starting Address | Enter **10.1.0.0** | - | Address space size | Select **/16** | --1. Select **Add**. - -1. Select **Add a subnet**, enter this information: -- | Setting | Value | - |--|-| - | Subnet name | Enter **myBackendSubnet** | - | Starting address | Enter **10.1.0.0** | - | Subnet size | Select **/24** | --1. Select **Add**. --1. Select **Add a subnet**, enter this information: -- | Setting | Value | - |--|-| - | Subnet template | Azure Bastion | - | Starting address | Enter **10.1.1.0** | - | Subnet size | Select **/26** | - -1. Select **Add**. - -1. Select the **Review + create** tab or select the **Review + create** button. --1. Select **Create**. --### Create internal load balancer +## Create internal load balancer In this section, you'll create the internal load balancer. In this section, you'll create the internal load balancer. | | | | **Project details** | | | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroupLB**. | + | Resource group | Select **lb-resource-group**. | | **Instance details** | |- | Name | Enter **myInternalLoadBalancer** | - | Region | Select **(US) East US 2**. | + | Name | Enter **lb-internal** | + | Region | Select **(US) East US**. | | SKU | Leave the default **Standard**. | | Type | Select **Internal**. | In this section, you'll create the internal load balancer. 1. In **Frontend IP configuration**, select **+ Add a frontend IP**. -1. Enter **LoadBalancerFrontend** in **Name**. +1. Enter **lb-int-frontend** in **Name**. -1. Select **myBackendSubnet** in **Subnet**. +1. Select **backend-subnet** in **Subnet**. 1. 
Select **Dynamic** for **Assignment**. In this section, you'll create the internal load balancer. 1. In the **Backend pools** tab, select **+ Add a backend pool**. -1. Enter **myInternalBackendPool** for **Name** in **Add backend pool**. +1. Enter **lb-int-backend-pool** for **Name** in **Add backend pool**. 1. Select **NIC** or **IP Address** for **Backend Pool Configuration**. In this section, you'll create the internal load balancer. 1. Select **Create**. -### Create public load balancer +## Create public load balancer In this section, you'll create the public load balancer. In this section, you'll create the public load balancer. | | | | **Project details** | | | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroupLB**. | + | Resource group | Select **lb-resource-group**. | | **Instance details** | |- | Name | Enter **myPublicLoadBalancer** | - | Region | Select **(US) East US 2**. | + | Name | Enter **lb-public** | + | Region | Select **(US) East US**. | | SKU | Leave the default **Standard**. | | Type | Select **Public**. | | Tier | Leave the default **Regional**. | In this section, you'll create the public load balancer. 1. In **Frontend IP configuration**, select **+ Add a frontend IP**. -1. Enter **LoadBalancerFrontend** in **Name**. +1. Enter **lb-ext-frontend** in **Name**. 1. Select **IPv4** or **IPv6** for the **IP version**. In this section, you'll create the public load balancer. 1. Select **Create new** in **Public IP address**. -1. In **Add a public IP address**, enter **myPublicIP** for **Name**. +1. In **Add a public IP address**, enter **lb-public-ip** for **Name**. 1. Select **Zone-redundant** in **Availability zone**. In this section, you'll create the public load balancer. 1. In the **Backend pools** tab, select **+ Add a backend pool**. -1. Enter **myPublicBackendPool** for **Name** in **Add backend pool**. +1. Enter **lb-pub-backend-pool** for **Name** in **Add backend pool**. -1. 
Select **myVNet** in **Virtual network**. +1. Select **lb-VNet** in **Virtual network**. 1. Select **NIC** or **IP Address** for **Backend Pool Configuration**. You'll create a virtual machine in this section. During creation, you'll add it |--|-| | **Project Details** | | | Subscription | Select your Azure subscription |- | Resource Group | Select **myResourceGroupLB** | + | Resource Group | Select **lb-resource-group** | | **Instance details** | |- | Virtual machine name | Enter **myVM** | - | Region | Select **(US) East US 2** | + | Virtual machine name | Enter **lb-VM** | + | Region | Select **(US) East US** | | Availability Options | Select **No infrastructure redundancy required** |- | Image | Select **Windows Server 2019 Datacenter - Gen2** | + | Security type | Select **Standard**. | + | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2** | | Azure Spot instance | Leave the default of unchecked. | | Size | Choose VM size or take default setting | | **Administrator account** | | You'll create a virtual machine in this section. During creation, you'll add it | Setting | Value | |-|-| | **Network interface** | |- | Virtual network | **myVNet** | - | Subnet | **myBackendSubnet** | + | Virtual network | **lb-VNet** | + | Subnet | **backend-subnet** | | Public IP | Select **None**. | | NIC network security group | Select **Advanced**|- | Configure network security group | Leave the default of **Basic**. | + | Configure network security group | Leave the default of **vm-NSG**. This might be different if you choose a different name for your VM. | 1. Under **Load balancing**, select the following: | Setting | Value | |-|-| | Load-balancing options | Select **Azure load balancing** |- | Select a load balancer | Select **myInternalLoadBalancer** | - | Select a backend pool | Select **myInternalBackendPool** | + | Select a load balancer | Select **lb-internal** | + | Select a backend pool | Select **lb-int-backend-pool** | 1. 
Select **Review + create**. 1. Review the settings, and then select **Create**. -### Add VM to backend pool of public load balancer +## Add VM to backend pool of public load balancer In this section, you'll add the virtual machine you created previously to the backend pool of the public load balancer. 1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results. -1. Select **myPublicLoadBalancer**. +1. Select **lb-public**. -1. Select **Backend pools** in **Settings** in **myPublicLoadBalancer**. +1. Select **Backend pools** in **Settings** in **lb-public**. -1. Select **myPublicBackendPool** under **Backend pool** in the **Backend pools** page. +1. Select **lb-pub-backend-pool** under **Backend pool** in the **Backend pools** page. -1. In **myPublicBackendPool**, select **myVNet** in **Virtual network**. +1. In **lb-pub-backend-pool**, select **lb-VNet** in **Virtual network**. 1. In **Virtual machines**, select the blue **+ Add** button. -1. Select the box next to **myVM** in **Add virtual machines to backend pool**. +1. Select the box next to **lb-VM** in **Add virtual machines to backend pool**. 1. Select **Add**. 1. Select **Save**.+ ## Test connectivity before outbound rule 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -1. Select **myVM**. +1. Select **lb-VM**. 1. In the **Overview** page, select **Connect**, then **Bastion**. In this section, you'll add the virtual machine you created previously to the ba 1. Open Internet Explorer. -1. Enter **https://whatsmyip.org** in the address bar. +1. Enter **https://whatsmyip.org** in the address bar. -1. The connection should fail. By default, standard public load balancer [doesn't allow outbound traffic without a defined outbound rule](load-balancer-overview.md#securebydefault). +1. The connection should fail. 
By default, standard public load balancer [doesn't allow outbound traffic without a defined outbound rule](load-balancer-overview.md#securebydefault). ## Create a public load balancer outbound rule 1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results. -1. Select **myPublicLoadBalancer**. +1. Select **lb-public**. -1. Select **Outbound rules** in **Settings** in **myPublicLoadBalancer**. +1. Select **Outbound rules** in **Settings** in **lb-public**. 1. Select **+ Add** in **Outbound rules**. In this section, you'll add the virtual machine you created previously to the ba | Setting | Value | | - | -- | | Name | Enter **myOutboundRule**. |- | Frontend IP address | Select **LoadBalancerFrontEnd**.| + | Frontend IP address | Select **lb-ext-frontend**.| | Protocol | Leave the default of **All**. | | Idle timeout (minutes) | Move slider to **15 minutes**.| | TCP Reset | Select **Enabled**.|- | Backend pool | Select **myPublicBackendPool**.| + | Backend pool | Select **lb-pub-backend-pool**.| | **Port allocation** | | | Port allocation | Select **Manually choose number of outbound ports**. | | **Outbound ports** | | In this section, you'll add the virtual machine you created previously to the ba 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -1. Select **myVM**. +1. Select **lb-VM**. 1. On the **Overview** page, select **Connect**, then **Bastion**. In this section, you'll add the virtual machine you created previously to the ba 1. The connection should succeed. -1. The IP address displayed should be the frontend IP address of **myPublicLoadBalancer**. +1. The IP address displayed should be the frontend IP address of **lb-public**. ## Clean up resources When no longer needed, delete the resource group, load balancers, VM, and all related resources. 
-To do so, select the resource group **myResourceGroupLB** and then select **Delete**. +To do so, select the resource group **lb-resource-group** and then select **Delete**. ## Next steps |
load-balancer | Quickstart Load Balancer Standard Public Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md | During the creation of the load balancer, you'll configure: | Protocol | Select **TCP**. | | Port | Enter **80**. | | Backend port | Enter **80**. |- | Health probe | Select **Create new**. </br> In **Name**, enter **lb-health-probe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. | + | Health probe | Select **Create new**. </br> In **Name**, enter **lb-health-probe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. |- | TCP reset | Select **Enabled**. | - | Floating IP | Select **Disabled**. | + | Enable TCP reset | Select checkbox. | + | Enable Floating IP | Leave unchecked. | | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** | 1. Select **Save**. |
load-balancer | Tutorial Load Balancer Port Forwarding Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-port-forwarding-portal.md | Title: "Tutorial: Create a single virtual machine inbound NAT rule - Azure portal" -description: In this tutorial, learn how to configure port forwarding using Azure Load Balancer to create a connection to a single virtual machine in an Azure virtual network. +description: Learn to configure port forwarding using Azure Load Balancer and NAT gateway to create a connection to a single virtual machine in an Azure virtual network. Previously updated : 07/18/2023 Last updated : 10/24/2023 In this tutorial, you learn how to: Sign in to the [Azure portal](https://portal.azure.com). -## Create virtual network and virtual machines --A virtual network and subnet is required for the resources in the tutorial. In this section, you create a virtual network and virtual machines for the later steps. --1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. --1. In **Virtual machines**, select **+ Create** > **+ Virtual machine**. - -1. In **Create a virtual machine**, enter or select the following values in the **Basics** tab: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **Create new**. </br> Enter *myResourceGroup*. </br> Select **OK**. | - | **Instance details** | | - | Virtual machine name | Enter *myVM1*. | - | Region | Select **(US) West US 2**. | - | Availability options | Select **Availability zone**. | - | Availability zone | Select **Zone 1**. | - | Security type | Select **Standard**. | - | Image | Select **Ubuntu Server 20.04 LTS - Gen2**. | - | Azure Spot instance | Leave the default of unchecked. | - | Size | Select a VM size. | - | **Administrator account** | | - | Authentication type | Select **SSH public key**. 
| - | Username | Enter *azureuser*. | - | SSH public key source | Select **Generate new key pair**. | - | Key pair name | Enter *myKey*. | - | **Inbound port rules** | | - | Public inbound ports | Select **None**. | -- :::image type="content" source="./media/tutorial-load-balancer-port-forwarding-portal/create-vm-portal.png" alt-text="Screenshot of create virtual machine."::: --1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. --1. In the **Networking** tab, enter or select the following information. -- | Setting | Value | - | - | -- | - | **Network interface** | | - | Virtual network | Select **Create new**. </br> Enter *myVNet* in **Name**. </br> In **Address space**, under **Address range**, enter *10.1.0.0/16*. </br> In **Subnets**, under **Subnet name**, enter *myBackendSubnet*. </br> In **Address range**, enter *10.1.0.0/24*. </br> Select **OK**. | - | Subnet | Select **myBackendSubnet**. | - | Public IP | Select **None**. | - | NIC network security group | Select **Advanced**. | - | Configure network security group | Select **Create new**. </br> Enter *myNSG* in **Name**. </br> Select **+ Add an inbound rule** under **Inbound rules**. </br> In **Service**, select **HTTP**. </br> Enter *100* in **Priority**. </br> Enter *myNSGRule* for **Name**. </br> Select **Add**. </br> Select **OK**. | --1. Select the **Review + create** tab, or select the **Review + create** button at the bottom of the page. --1. Select **Create**. --1. At the **Generate new key pair** prompt, select **Download private key and create resource**. Your key file is downloaded as myKey.pem. Ensure you know where the .pem file was downloaded, you'll need the path to the key file in later steps. --1. 
Follow the steps 1 through 7 to create another VM with the following values and all the other settings the same as **myVM1**: -- | Setting | Value | - | - | -- | - | **Basics** | | - | **Instance details** | | - | Virtual machine name | Enter *myVM2* | - | Availability zone | Select **Zone 2** | - | **Administrator account** | | - | Authentication type | Select **SSH public key** | - | SSH public key source | Select **Use existing key stored in Azure**. | - | Stored Keys | Select **myKey**. | - | **Inbound port rules** | | - | Public inbound ports | Select **None**. | - | **Networking** | | - | **Network interface** | | - | Public IP | Select **None**. | - | NIC network security group | Select **Advanced**. | - | Configure network security group | Select the existing **myNSG** | ## Create a load balancer You create a load balancer in this section. The frontend IP, backend pool, load- | | | | **Project details** | | | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. | + | Resource group | Select **load-balancer-rg**. | | **Instance details** | |- | Name | Enter *myLoadBalancer* | - | Region | Select **West US 2**. | + | Name | Enter *load-balancer* | + | Region | Select **East US**. | | SKU | Leave the default **Standard**. | | Type | Select **Public**. | | Tier | Leave the default **Regional**. | You create a load balancer in this section. The frontend IP, backend pool, load- 1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**. -1. Enter *myFrontend* in **Name**. +1. Enter *lb-frontend* in **Name**. 1. Select **IPv4** or **IPv6** for the **IP version**. You create a load balancer in this section. The frontend IP, backend pool, load- 1. Select **Create new** in **Public IP address**. -1. In **Add a public IP address**, enter *myPublicIP* for **Name**. +1. In **Add a public IP address**, enter *lb-frontend-ip* for **Name**. 1. Select **Zone-redundant** in **Availability zone**. 
You create a load balancer in this section. The frontend IP, backend pool, load- | Setting | Value | | - | -- |- | Name | Enter *myBackendPool*. | - | Virtual network | Select **myVNet (myResourceGroup)**. | + | Name | Enter *lb-backend-pool*. | + | Virtual network | Select **lb-vnet (load-balancer-rg)**. | | Backend Pool Configuration | Select **NIC**. | 1. Select **+ Add** in **Virtual machines**. -1. Select the checkboxes next to **myVM1** and **myVM2** in **Add virtual machines to backend pool**. +1. Select the checkboxes next to **lb-vm1** and **lb-vm2** in **Add virtual machines to backend pool**. 1. Select **Add** and then select **Save**. You create a load balancer in this section. The frontend IP, backend pool, load- | Setting | Value | | - | -- |- | Name | Enter *myHTTPRule* | + | Name | Enter *lb-HTTP-rule* | | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |- | Frontend IP address | Select **myFrontend (To be created)**. | - | Backend pool | Select **myBackendPool**. | + | Frontend IP address | Select **lb-frontend (To be created)**. | + | Backend pool | Select **lb-backend-pool**. | | Protocol | Select **TCP**. | | Port | Enter *80*. | | Backend port | Enter *80*. |- | Health probe | Select **Create new**. </br> In **Name**, enter *myHealthProbe*. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. | + | Health probe | Select **Create new**. </br> In **Name**, enter *lb-health-probe*. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. | | Enable TCP reset | Select **checkbox** to enable. | You create a load balancer in this section. The frontend IP, backend pool, load- | Setting | Value | | - | -- |- | Name | Enter *myNATRuleVM1-221*. | - | Target virtual machine | Select **myVM1**. 
| - | Network IP configuration | Select **ipconfig1 (10.1.0.4)**. | - | Frontend IP address | Select **myFrontend (To be created)**. | + | Name | Enter *lb-NAT-rule-VM1-221*. | + | Target virtual machine | Select **lb-vm1**. | + | Network IP configuration | Select **ipconfig1 (10.0.0.4)**. | + | Frontend IP address | Select **lb-frontend (To be created)**. | | Frontend Port | Enter *221*. | | Service Tag | Select **Custom**. | | Backend port | Enter *22*. | You create a load balancer in this section. The frontend IP, backend pool, load- | Idle timeout (minutes) | Leave the default **4**. | | Enable Floating IP | Leave the default of unchecked. | -1. Select **Add**. +2. Select **Add**. -1. Select **+ Add an inbound nat rule**. +3. Select **+ Add an inbound nat rule**. -1. In **Add inbound NAT rule**, enter or select the following information. +4. In **Add inbound NAT rule**, enter or select the following information. | Setting | Value | | - | -- |- | Name | Enter *myNATRuleVM2-222*. | - | Target virtual machine | Select **myVM2**. | - | Network IP configuration | Select **ipconfig1 (10.1.0.5)**. | - | Frontend IP address | Select **myFrontend**. | + | Name | Enter *lb-NAT-rule-VM2-222*. | + | Target virtual machine | Select **lb-vm2**. | + | Network IP configuration | Select **ipconfig1 (10.0.0.5)**. | + | Frontend IP address | Select **lb-frontend**. | | Frontend Port | Enter *222*. | | Service Tag | Select **Custom**. | | Backend port | Enter *22*. | You create a load balancer in this section. The frontend IP, backend pool, load- | Idle timeout (minutes) | Leave the default **4**. | | Enable Floating IP | Leave the default of unchecked. | -1. Select **Add**. +5. Select **Add**. -1. Select the blue **Review + create** button at the bottom of the page. +6. Select the blue **Review + create** button at the bottom of the page. -1. Select **Create**. +7. Select **Create**. 
## Create a NAT gateway For more information about outbound connections and Azure Virtual Network NAT, s | - | -- | | **Project details** | | | Subscription | Select your subscription. |- | Resource group | Select **myResourceGroup**. | + | Resource group | Select **load-balancer-rg**. | | **Instance details** | |- | NAT gateway name | Enter *myNATgateway*. | - | Region | Select **West US 2**. | + | NAT gateway name | Enter *lb-nat-gateway*. | + | Region | Select **East US**. | | Availability zone | Select **None**. | | Idle timeout (minutes) | Enter *15*. | For more information about outbound connections and Azure Virtual Network NAT, s 1. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**. -1. Enter *myNATGatewayIP* in **Name** in **Add a public IP address**. +1. Enter *nat-gw-public-ip* in **Name** in **Add a public IP address**. 1. Select **OK**. 1. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page. -1. In **Virtual network** in the **Subnet** tab, select **myVNet**. +1. In **Virtual network** in the **Subnet** tab, select **lb-vnet**. -1. Select **myBackendSubnet** under **Subnet name**. +1. Select **backend-subnet** under **Subnet name**. 1. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab. In this section, you'll SSH to the virtual machines through the inbound NAT rule 1. In the search box at the top of the portal, enter *Load balancer*. Select **Load balancers** in the search results. -1. Select **myLoadBalancer**. +1. Select **load-balancer**. 1. Select **Fronted IP configuration** in **Settings**. -1. In the **Frontend IP configuration**, make note of the **IP address** for **myFrontend**. In this example, it's **20.99.165.176**. +1. In the **Frontend IP configuration**, make note of the **IP address** for **lb-frontend**. In this example, it's **20.99.165.176**. 
:::image type="content" source="./media/tutorial-load-balancer-port-forwarding-portal/get-public-ip.png" alt-text="Screenshot of public IP in Azure portal."::: 1. If you're using a Mac or Linux computer, open a Bash prompt. If you're using a Windows computer, open a PowerShell prompt. -1. At your prompt, open an SSH connection to **myVM1**. Replace the IP address with the address you retrieved in the previous step and port **221** you used for the myVM1 inbound NAT rule. Replace the path to the .pem with the path to where the key file was downloaded. +1. At your prompt, open an SSH connection to **lb-vm1**. Replace the IP address with the address you retrieved in the previous step and port **221** you used for the lb-vm1 inbound NAT rule. Replace the path to the .pem with the path to where the key file was downloaded. ```console- ssh -i .\Downloads\myKey.pem azureuser@20.99.165.176 -p 221 + ssh -i .\Downloads\lb-key-pair.pem azureuser@20.99.165.176 -p 221 ``` > [!TIP] In this section, you'll SSH to the virtual machines through the inbound NAT rule 1. Enter `Exit` to leave the SSH session -1. At your prompt, open an SSH connection to **myVM2**. Replace the IP address with the address you retrieved in the previous step and port **222** you used for the myVM2 inbound NAT rule. Replace the path to the .pem with the path to where the key file was downloaded. +1. At your prompt, open an SSH connection to **lb-vm2**. Replace the IP address with the address you retrieved in the previous step and port **222** you used for the lb-vm2 inbound NAT rule. Replace the path to the .pem with the path to where the key file was downloaded. ```console- ssh -i .\Downloads\myKey.pem azureuser@20.99.165.176 -p 222 + ssh -i .\Downloads\lb-key-pair.pem azureuser@20.99.165.176 -p 222 ``` 1. From your SSH session, update your package sources and then install the latest NGINX package. If you're not going to continue to use this application, delete the virtual mach 1. 
In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results. -1. Select **myResourceGroup** in **Resource groups**. +1. Select **load-balancer-rg** in **Resource groups**. 1. Select **Delete resource group**. -1. Enter *myResourceGroup* in **TYPE THE RESOURCE GROUP NAME:**. Select **Delete**. +1. Enter *load-balancer-rg* in **TYPE THE RESOURCE GROUP NAME:**. Select **Delete**. ## Next steps |
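The port-forwarding pattern this tutorial configures (frontend port 221 → lb-vm1:22, frontend port 222 → lb-vm2:22) can be captured in a small helper. This is an illustrative sketch, not part of the tutorial: `ssh_command` and `nat_rules` are hypothetical names, and the frontend IP, ports, and key-pair file name are the example values used above; substitute your own deployment's values.

```python
# Illustrative helper (not part of the tutorial): build the SSH commands for
# the two inbound NAT rules configured above. The frontend IP, ports, and
# key-pair file name are the example values from this tutorial.

def ssh_command(frontend_ip, frontend_port,
                key_path=r".\Downloads\lb-key-pair.pem", user="azureuser"):
    """One inbound NAT rule maps a frontend port to a VM's SSH port 22."""
    return f"ssh -i {key_path} {user}@{frontend_ip} -p {frontend_port}"

# VM name -> frontend port, as configured in the inbound NAT rules.
nat_rules = {"lb-vm1": 221, "lb-vm2": 222}

for vm, port in nat_rules.items():
    print(f"{vm}: {ssh_command('20.99.165.176', port)}")
```

Each rule shares the single frontend IP and distinguishes the target VM only by frontend port, which is why deleting the resource group removes everything at once.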
machine-learning | How To End To End Llmops With Prompt Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-end-to-end-llmops-with-prompt-flow.md | In this guide, we will use a runtime to run your prompt flow. You need to create Go to workspace portal, select **Prompt flow** -> **Runtime** -> **Add**, then follow the instructions to create your own connections -## Setup variables with for prompt flow and GitHub Actions +## Set up variables for prompt flow and GitHub Actions Clone the repo to your local machine. |
machine-learning | How To Integrate With Llm App Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-integrate-with-llm-app-devops.md | When developing applications using LLM, it's common to have a standardized appli For developers experienced in code development who seek a more efficient LLMOps iteration process, the following key features and benefits you can gain from prompt flow code experience: -- **Flow versioning in code repository**. You can define your flow in YAML format, which can stay aligned with the referenced source files in a folder structure.+- **Flow versioning in code repository**. You can define your flow in YAML format, which can stay aligned with the referenced source files in a folder structure. - **Integrate flow run with CI/CD pipeline**. You can trigger flow runs using the prompt flow CLI or SDK, which can be seamlessly integrated into your CI/CD pipeline and delivery process. - **Smooth transition from local to cloud**. You can easily export your flow folder to your local or code repository for version control, local development and sharing. Similarly, the flow folder can be effortlessly imported back to the cloud for further authoring, testing, deployment in cloud resources. Overview of the flow folder structure and the key files it contains: - **Source code files (.py, .jinja2)**: The flow folder also includes user-managed source code files, which are referred to by the tools/nodes in the flow. - Files in Python (.py) format can be referenced by the python tool for defining custom python logic. 
- Files in Jinja2 (.jinja2) format can be referenced by the prompt tool or LLM tool for defining prompt context.-- **Non-source files**: The flow folder may also contain non-source files such as utility files and data files that can be included in the source files.+- **Non-source files**: The flow folder can also contain non-source files such as utility files and data files that can be included in the source files. Once the flow is created, you can navigate to the Flow Authoring Page to view and operate the flow files in the right file explorer. This allows you to view, edit, and manage your files. Any modifications made to the files will be directly reflected in the file share storage. Alternatively, you can access all the flow folders directly within the Azure Mac :::image type="content" source="./media/how-to-integrate-with-llm-app-devops/notebook-user-path.png" alt-text="Screenshot of notebooks in Azure Machine Learning in the prompt flow folder showing the files. " lightbox = "./media/how-to-integrate-with-llm-app-devops/notebook-user-path.png"::: -## Versioning prompt flow in repository +## Versioning prompt flow in code repository -To check in your flow into your code repository, you can easily export the flow folder from the flow authoring page to your local system. This will download a package containing all the files from the explorer to your local machine, which you can then check into your code repository. +To check in your flow into your code repository, you can easily export the flow folder from the flow authoring page to your local system. This will download a package containing all the files from the explorer to your local machine, which you can then check into your code repository. :::image type="content" source="./media/how-to-integrate-with-llm-app-devops/flow-export.png" alt-text="Screenshot of showing the download button in the file explorer." 
lightbox = "./media/how-to-integrate-with-llm-app-devops/flow-export.png"::: pf.get_metrics("evaluation_run_name") ### Local development and testing -During iterative development, as you refine and fine-tune your flow or prompts, you may find it beneficial to carry out multiple iterations locally within your code repository. The community version, **Prompt flow VS Code extension** and **Prompt flow local SDK & CLI** is provided to facilitate pure local development and testing without Azure binding. +During iterative development, as you refine and fine-tune your flow or prompts, you might find it beneficial to carry out multiple iterations locally within your code repository. The community version, **Prompt flow VS Code extension** and **Prompt flow local SDK & CLI**, are provided to facilitate pure local development and testing without Azure binding. #### Prompt flow VS Code extension The last step to go to production is to deploy your flow as an online endpoint i For more information on how to deploy your flow, see [Deploy flows to Azure Machine Learning managed online endpoint for real-time inference with CLI and SDK](how-to-deploy-to-code.md). +## Collaborating on flow development in production ++In the context of developing an LLM-based application with Prompt flow, collaboration amongst team members is often essential. Team members might be engaged in the same flow authoring and testing, working on diverse facets of the flow or making iterative changes and enhancements concurrently. ++Such collaboration necessitates an efficient and streamlined approach to sharing code, tracking modifications, managing versions, and integrating these changes into the final project. ++The introduction of the Prompt flow **SDK/CLI** and the **Visual Studio Code Extension** as part of the code experience of Prompt flow facilitates easy collaboration on flow development within your code repository. 
It is advisable to utilize a cloud-based **code repository**, such as GitHub or Azure DevOps, for tracking changes, managing versions, and integrating these modifications into the final project. ++### Best practice for collaborative development ++1. Authoring and single testing your flow locally - Code repository and VSC Extension ++ - The first step of this collaborative process involves using a code repository as the base for your project code, which includes the Prompt Flow code. + - This centralized repository enables efficient organization, tracking of all code changes, and collaboration among team members. + - Once the repository is set up, team members can leverage the VSC extension for local authoring and single input testing of the flow. + - This standardized integrated development environment fosters collaboration among multiple members working on different aspects of the flow. + :::image type="content" source="media/how-to-integrate-with-llm-app-devops/prompt-flow-local-develop.png" alt-text="Screenshot of local development. " lightbox = "media/how-to-integrate-with-llm-app-devops/prompt-flow-local-develop.png"::: +1. Cloud-based experimental batch testing and evaluation - Prompt flow CLI/SDK and workspace portal UI + - Following the local development and testing phase, flow developers can use the pfazure CLI or SDK to submit batch runs and evaluation runs from the local flow files to the cloud. + - This action provides a way for cloud resource consuming, results to be stored persistently and managed efficiently with a portal UI in the Azure Machine Learning workspace. This step allows for cloud resource consumption including compute and storage and further endpoint for deployments. + :::image type="content" source="media/how-to-integrate-with-llm-app-devops/pfazure-run.png" alt-text="Screenshot of pfazure command to submit run to cloud. 
" lightbox = "media/how-to-integrate-with-llm-app-devops/pfazure-run.png"::: + - After submitting to the cloud, team members can access the cloud portal UI to view the results and manage the experiments efficiently. + - This cloud workspace provides a centralized location for gathering and managing all the run history, logs, snapshots, and comprehensive results, including the instance-level inputs and outputs. + :::image type="content" source="media/how-to-integrate-with-llm-app-devops/pfazure-run-snapshot.png" alt-text="Screenshot of cloud run snapshot. " lightbox = "media/how-to-integrate-with-llm-app-devops/pfazure-run-snapshot.png"::: + - In the run list that records all run history during development, team members can easily compare the results of different runs, aiding in quality analysis and necessary adjustments. + :::image type="content" source="media/how-to-integrate-with-llm-app-devops/cloud-run-list.png" alt-text="Screenshot of run list in workspace. " lightbox = "media/how-to-integrate-with-llm-app-devops/cloud-run-list.png"::: + :::image type="content" source="media/how-to-integrate-with-llm-app-devops/cloud-run-compare.png" alt-text="Screenshot of run comparison in workspace. " lightbox = "media/how-to-integrate-with-llm-app-devops/cloud-run-compare.png"::: +1. Local iterative development or one-step UI deployment for production + - Following the analysis of experiments, team members can return to the code repository for additional development and fine-tuning. Subsequent runs can then be submitted to the cloud in an iterative manner. + - This iterative approach ensures consistent enhancement until the team is satisfied with the quality ready for production. + - Once the team is fully confident in the quality of the flow, it can be seamlessly deployed via a UI wizard as an online endpoint in Azure Machine Learning. 
Once the team is entirely confident in the flow's quality, it can be seamlessly transitioned into production via a UI deploy wizard as an online endpoint in a robust cloud environment. + - This deployment on an online endpoint can be based on a run snapshot, allowing for stable and secure serving, further resource allocation and usage tracking, and log monitoring in the cloud. + :::image type="content" source="media/how-to-integrate-with-llm-app-devops/deploy-from-snapshot.png" alt-text="Screenshot of deploying flow from a run snapshot. " lightbox = "media/how-to-integrate-with-llm-app-devops/deploy-from-snapshot.png"::: + :::image type="content" source="media/how-to-integrate-with-llm-app-devops/deploy-wizard.png" alt-text="Screenshot of deploy wizard. " lightbox = "media/how-to-integrate-with-llm-app-devops/deploy-wizard.png"::: ++### Why we recommend using the code repository for collaborative development +For iterative development, a combination of a local development environment and a version control system, such as Git, is typically more effective. You can make modifications and test your code locally, then commit the changes to Git. This creates an ongoing record of your changes and offers the ability to revert to earlier versions if necessary. ++When **sharing flows** across different environments is required, using a cloud-based code repository like GitHub or Azure Repos is advisable. This enables you to access the most recent version of your code from any location and provides tools for collaboration and code management. ++By following this best practice, teams can create a seamless, efficient, and productive collaborative environment for Prompt flow development. + ## Next steps - [Set up end-to-end LLMOps with Prompt Flow and GitHub](how-to-end-to-end-llmops-with-prompt-flow.md) |
postgresql | Concepts Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md | After extensions are allow-listed and loaded, these must be installed in your da Azure Database for PostgreSQL supports a subset of key PostgreSQL extensions as listed below. This information is also available by running `SHOW azure.extensions;`. Extensions not listed in this document aren't supported on Azure Database for PostgreSQL - Flexible Server. You can't create or load your own extension in Azure Database for PostgreSQL. -## Postgres 15 extensions. +## Postgres 15 extensions The following extensions are available in Azure Database for PostgreSQL - Flexible Servers, which have Postgres version 15. |
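The allow-list described above can also be inspected programmatically. A minimal sketch: only `SHOW azure.extensions;` comes from the article; the helper names and the DSN are hypothetical, and `allowed_extensions` assumes the psycopg2 driver (`pip install psycopg2-binary`).

```python
# Sketch: read the azure.extensions allow-list from a Flexible Server.
# Only `SHOW azure.extensions;` is taken from the article; everything else
# is an illustrative assumption.

def parse_extension_list(raw):
    """azure.extensions is a comma-separated server parameter value."""
    return {name.strip() for name in raw.split(",") if name.strip()}

def allowed_extensions(dsn):
    """Connect to a Flexible Server and read the allow-list (needs network)."""
    import psycopg2  # local import so the parser works without the driver
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SHOW azure.extensions;")
            (raw,) = cur.fetchone()
    return parse_extension_list(raw)

# The parsing itself needs no server:
print(sorted(parse_extension_list("pg_trgm, uuid-ossp")))  # ['pg_trgm', 'uuid-ossp']
```

A `CREATE EXTENSION` for a name missing from this set fails, since only allow-listed extensions can be installed in a database.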
private-link | Manage Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/manage-private-endpoint.md | Use **[az network private-endpoint-connection delete](/cli/azure/network/private +> [!NOTE] +> Connections that have been previously denied can't be approved. You must remove the connection and create a new one. ++ ## Next steps - [Learn about Private Endpoints](private-endpoint-overview.md) |
private-link | Private Endpoint Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md | For Azure services, use the recommended zone names as described in the following | Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders | |||||-| Azure Automation (Microsoft.Automation/automationAccounts) | Webhook </br> DSCAndHybridWorker | privatelink.azure-automation.net | azure-automation.net | +| Azure Automation (Microsoft.Automation/automationAccounts) | Webhook <br> DSCAndHybridWorker | privatelink.azure-automation.net | {regionCode}.azure-automation.net | | Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.windows.net | database.windows.net | | Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.windows.net | {instanceName}.{dnsPrefix}.database.windows.net | | Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Sql | privatelink.sql.azuresynapse.net | sql.azuresynapse.net | |
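For automation, the recommended zone names in the table can be expressed as a small lookup. This is an illustration only: `private_dns_zone` is a hypothetical helper, not an Azure API; the entries are copied from a few rows of the table above, and placeholders like `{dnsPrefix}` are filled in by the caller.

```python
# Illustration only: a few rows of the recommended-zone table as a lookup.
# Extend the dictionary with the rows you need.

PRIVATE_DNS_ZONES = {
    ("Microsoft.Automation/automationAccounts", "Webhook"): "privatelink.azure-automation.net",
    ("Microsoft.Automation/automationAccounts", "DSCAndHybridWorker"): "privatelink.azure-automation.net",
    ("Microsoft.Sql/servers", "sqlServer"): "privatelink.database.windows.net",
    ("Microsoft.Sql/managedInstances", "managedInstance"): "privatelink.{dnsPrefix}.database.windows.net",
    ("Microsoft.Synapse/workspaces", "Sql"): "privatelink.sql.azuresynapse.net",
}

def private_dns_zone(resource_type, subresource, **placeholders):
    """Return the recommended private DNS zone, filling {dnsPrefix}-style slots."""
    return PRIVATE_DNS_ZONES[(resource_type, subresource)].format(**placeholders)

print(private_dns_zone("Microsoft.Sql/servers", "sqlServer"))
print(private_dns_zone("Microsoft.Sql/managedInstances", "managedInstance", dnsPrefix="abc123"))
```

Keeping the table in code makes it easy to validate that each private endpoint's DNS zone group uses the recommended zone name.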
remote-rendering | Entities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/concepts/entities.md | ApiHandle<CutPlaneComponent> cutplane = entity->FindComponentOfType<CutPlaneComp ### Querying transforms -Transform queries are synchronous calls on the object. It's important to note that transforms queried through the API are local space transforms, relative to the object's parent. Exceptions are root objects, for which local space and world space are identical. --> [!NOTE] -> There is no dedicated API to query the world space transform of arbitrary objects. +Transform queries are synchronous calls on the object. It's important to note that transforms stored on the API side are local space transforms, relative to the object's parent. Exceptions are root objects, for which local space and world space are identical. ```cs // local space transform of the entity Double3 translation = entity.Position; Quaternion rotation = entity.Rotation;+Float3 scale = entity.Scale; ``` ```cpp // local space transform of the entity Double3 translation = entity->GetPosition(); Quaternion rotation = entity->GetRotation();+Float3 scale = entity->GetScale(); +``` ++In case all three transform components (position, rotation and scale) need to be retrieved or set simultaneously, it's recommended to use the entity's `LocalTransform` property: ++```cs +// local space transform of the entity +Transform localTransform = entity.LocalTransform; +Double3 translation = localTransform.Position; +Quaternion rotation = localTransform.Rotation; +Float3 scale = localTransform.Scale; +``` ++```cpp +// local space transform of the entity +Transform localTransform = entity->GetLocalTransform(); +Double3& translation = localTransform.Position; +Quaternion& rotation = localTransform.Rotation; +Float3& scale = localTransform.Scale; +``` ++There's also a helper function to retrieve an entity's global (world space) transform: ++```cs +// global space transform of the entity 
+Transform globalTransform = entity.GlobalTransform; +Double3 translation = globalTransform.Position; +``` ++```cpp +// global space transform of the entity +Transform globalTransform = entity->GetGlobalTransform(); +Double3& translation = globalTransform.Position; +``` ++When `GlobalTransform` is called, the global transform is computed on-the-fly by traversing up the entity hierarchy. This traversal involves significant computation, but compared to doing the same operations on the client side through class `Entity`, the built-in function is faster. Still, calling `GlobalTransform` on a larger set of entities might impose a performance bottleneck. ++`LocalToGlobalMatrix` is a variant of `GlobalTransform` that computes the global transform as a matrix, which is convenient in the context of Unity: ++```cs +UnityEngine.Matrix4x4 globalMatrix = entity.LocalToGlobalMatrix.toUnity(); +UnityEngine.Vector3 localPos = new UnityEngine.Vector3(0, 0, 0); +UnityEngine.Vector3 globalPos = globalMatrix.MultiplyPoint(localPos); + ``` ### Querying spatial bounds |
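The hierarchy traversal that `GlobalTransform` performs can be sketched outside the SDK. The following is a simplified Python model under stated assumptions (quaternion rotations, uniform scale, the usual composition rule of applying each parent's transform in turn); it only illustrates why the cost of a global-transform query grows with hierarchy depth, and is not the ARR implementation.

```python
# A simplified model of what GlobalTransform computes: walk up the parent
# chain, composing each local transform into a global one. The real ARR
# implementation is only approximated here.
from dataclasses import dataclass

def q_mul(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * q^-1."""
    w, x, y, z = q
    iw, ix, iy, iz = q_mul(q_mul(q, (0.0, *v)), (w, -x, -y, -z))
    return (ix, iy, iz)

@dataclass
class Entity:
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (1.0, 0.0, 0.0, 0.0)  # identity quaternion (w, x, y, z)
    scale: float = 1.0                      # uniform scale for simplicity
    parent: "Entity | None" = None

    def global_transform(self):
        pos, rot, scl = self.position, self.rotation, self.scale
        node = self.parent
        while node is not None:  # the upward traversal the text describes
            scaled = tuple(node.scale * c for c in pos)
            rotated = q_rotate(node.rotation, scaled)
            pos = tuple(p + r for p, r in zip(node.position, rotated))
            rot = q_mul(node.rotation, rot)
            scl = node.scale * scl
            node = node.parent
        return pos, rot, scl

root = Entity(position=(1.0, 0.0, 0.0))
child = Entity(position=(0.0, 2.0, 0.0), parent=root)
print(child.global_transform()[0])  # (1.0, 2.0, 0.0)
```

Because every query walks to the root, caching results or batching queries is worthwhile when reading many entities, which matches the performance note above about calling `GlobalTransform` on larger sets of entities.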
remote-rendering | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/security/security.md | In this tutorial, you learn how to: * This tutorial builds on [Tutorial: Refining materials, lighting, and effects](..\materials-lighting-effects\materials-lighting-effects.md). -## Why additional security is needed +## Why extra security is needed The current state of the application and its access to your Azure resources looks like this: Let's modify **RemoteRenderingCoordinator** to load a custom model, from a linke /// <param name="parent">The parent Transform for this remote entity</param> /// <param name="progress">A call back method that accepts a float progress value [0->1]</param> /// <returns></returns>- public async Task<Entity> LoadModel(string storageAccountName, string blobName, string modelPath, Transform parent = null, Action<float> progress = null) + public async Task<Entity> LoadModel(string storageAccountName, string blobName, string modelPath, UnityEngine.Transform parent = null, Action<float> progress = null) { //Create a root object to parent a loaded model to var modelEntity = ARRSessionService.CurrentActiveSession.Connection.CreateEntity(); Let's modify **RemoteRenderingCoordinator** to load a custom model, from a linke } ``` - For the most part, this code is identical to the original `LoadModel` method, however we've replaced the SAS version of the method calls with the non-SAS versions. + This code is identical to the original `LoadModel` method, however we've replaced the SAS version of the method calls with the non-SAS versions. - The additional inputs `storageAccountName` and `blobName` have also been added to the arguments. We call this new **LoadModel** method from another method similar to the first **LoadTestModel** method we created in the first tutorial. + The extra inputs `storageAccountName` and `blobName` have also been added to the arguments. 
We call this new **LoadModel** method from another method similar to the first **LoadTestModel** method we created in the first tutorial. 1. Add the following method to **RemoteRenderingCoordinator** just after **LoadTestModel** Let's modify **RemoteRenderingCoordinator** to load a custom model, from a linke > If you [run the **Conversion.ps1**](../../../quickstarts/convert-model.md#run-the-conversion) script, without the "-UseContainerSas" argument, the script will output all of the above values for you instead of the SAS token. ![Linked Model](./media/converted-output.png) 1. For the time being, remove or disable the GameObject **TestModel**, to make room for your custom model to load. 1. Play the scene and connect to a remote session.-1. Right click on your **RemoteRenderingCoordinator** and select **Load Linked Custom Model**. +1. Open the context menu on **RemoteRenderingCoordinator** and select **Load Linked Custom Model**. ![Load linked model](./media/load-linked-model.png) These steps have increased the security of the application by removing the SAS token from the local application. With this change, the current state of the application and its access to your Az ![Even better security](./media/security-three.png) -Since the User Credentials aren't stored on the device (or in this case even entered on the device), their exposure risk is low. Now the device is using a user-specific, time-limited Access Token to access ARR, which uses access control (IAM) to access the Blob Storage. These two steps have removed the "passwords" from the source code and increased security considerably. However, this isn't the most security available, moving the model and session management to a web service will improve security further. Additional security considerations are discussed in the [Commercial Readiness](../commercial-ready/commercial-ready.md) chapter. 
+Since the User Credentials aren't stored on the device (or in this case even entered on the device), their exposure risk is low. Now the device is using a user-specific, time-limited Access Token to access ARR, which uses access control (IAM) to access the Blob Storage. These two steps have removed the "passwords" from the source code and increased security considerably. However, this isn't the most secure setup available; moving the model and session management to a web service will improve security further. Extra security considerations are discussed in the [Commercial Readiness](../commercial-ready/commercial-ready.md) chapter. <a name='testing-aad-auth'></a> ### Testing Microsoft Entra auth -In the Unity Editor, when Microsoft Entra auth is active, you'll need to authenticate every time you launch the application. On device, the authentication step happens the first time and only be required again when the token expires or is invalidated. +In the Unity Editor, when Microsoft Entra auth is active, you need to authenticate every time you launch the application. On device, the authentication step happens the first time and is only required again when the token expires or is invalidated. 1. Add the **Microsoft Entra authentication** component to the **RemoteRenderingCoordinator** GameObject. |
remote-rendering | View Remote Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/view-remote-models/view-remote-models.md | To get access to the Azure Remote Rendering service, you first need to [create a > The [ARR samples repository](https://github.com/Azure/azure-remote-rendering) contains a project with all the tutorials completed, it can be used as a reference. Look in *Unity\Tutorial-Complete* for the complete Unity project. From the Unity Hub, create a new project.-In this example, we'll assume the project is being created in a folder called **RemoteRendering**. +In this example, we assume the project is being created in a folder called **RemoteRendering**. :::image type="content" source="./media/unity-new-project.PNG" alt-text="Screenshot of Unity Hub showing the Create a New Unity Project dialog. The panel 3D is selected."::: Follow the instructions on how to [add the Azure Remote Rendering and OpenXR pac ![Screenshot of the Unity Color wheel dialog. The color is set to 0 for all R G B A components.](./media/color-wheel-black.png) -1. Set **Clipping Planes** to *Near = 0.1* and *Far = 20*. This means rendering will clip geometry that is closer than 10 cm or farther than 20 meters. +1. Set **Clipping Planes** to *Near = 0.1* and *Far = 20*. This setup means rendering clips geometry that is closer than 10 cm or farther than 20 meters. ![Screenshot of the Unity inspector for a Camera component.](./media/camera-properties.png) Follow the instructions on how to [add the Azure Remote Rendering and OpenXR pac 1. Open *Edit > Project Settings...* 1. Select **Quality** from the left list menu- 1. Change the **Default Quality Level** of all platforms to *Low*. This setting will enable more efficient rendering of local content and doesn't affect the quality of remotely rendered content. + 1. Change the **Default Quality Level** of all platforms to *Low*. 
This setting enables more efficient rendering of local content and doesn't affect the quality of remotely rendered content. ![Screenshot of the Unity Project Settings dialog. The Quality entry is selected in the list on the left. The context menu for the default quality level is opened on the right. The low entry is selected.](./media/settings-quality.png) Follow the instructions on how to [add the Azure Remote Rendering and OpenXR pac 1. Select **XR Plugin Management** from the left list menu 1. Click the **Install XR Plugin Management** button. 1. Select the **Universal Windows Platform settings** tab, represented as a Windows icon.- 1. Click the **Open XR** checkbox under **Plug-In Providers** - 1. If a dialog opens that asks you to enable the native platform backends for the new input system click **No**. + 1. Select the **Open XR** checkbox under **Plug-In Providers** + 1. If a dialog opens that asks you to enable the native platform backends for the new input system select **No**. ![Screenshot of the Unity Project Settings dialog. The XR Plug-in Management entry is selected in the list on the left. The tab with the windows logo is highlighted on the right. The Open XR checkbox below it is also highlighted.](./media/xr-plugin-management-settings.png) Perform the following steps to validate that the project settings are correct. ## Create a script to coordinate Azure Remote Rendering connection and state -There are four basic stages to show remotely rendered models, outlined in the flowchart below. Each stage must be performed in order. The next step is to create a script which will manage the application state and proceed through each required stage. +There are four basic stages to show remotely rendered models, outlined in the flowchart below. Each stage must be performed in order. The next step is to create a script that manages the application state and proceeds through each required stage. 
![Diagram of the four stages required to load a model.](./media/remote-render-stack-0.png) Your project should look like this: ![Screenshot of Unity Project hierarchy containing the new script.](./media/project-structure.png) - This coordinator script will track and manage the remote rendering state. Of note, some of this code is used for maintaining state, exposing functionality to other components, triggering events, and storing application-specific data that is not *directly* related to Azure Remote Rendering. Use the code below as a starting point, and we'll address and implement the specific Azure Remote Rendering code later in the tutorial. + This coordinator script tracks and manages the remote rendering state. Of note, some of this code is used for maintaining state, exposing functionality to other components, triggering events, and storing application-specific data that isn't *directly* related to Azure Remote Rendering. Use the code below as a starting point, and we'll address and implement the specific Azure Remote Rendering code later in the tutorial. 1. 
Open **RemoteRenderingCoordinator** in your code editor and replace its entire content with the code below: public class RemoteRenderingCoordinator : MonoBehaviour } [Header("Development Account Credentials")]- [SerializeField] - private string accountId = "<enter your account id here>"; - public string AccountId { - get => accountId.Trim(); - set => accountId = value; - } - [SerializeField] private string accountDomain = "<enter your account domain here>"; public string AccountDomain { get => accountDomain.Trim(); set => accountDomain = value;- } + } ++ [SerializeField] + private string accountId = "<enter your account id here>"; + public string AccountId { + get => accountId.Trim(); + set => accountId = value; + } [SerializeField] private string accountKey = "<enter your account key here>"; public class RemoteRenderingCoordinator : MonoBehaviour /// <param name="progress">A call back method that accepts a float progress value [0->1]</param> /// <param name="parent">The parent Transform for this remote entity</param> /// <returns>An awaitable Remote Rendering Entity</returns>- public async Task<Entity> LoadModel(string modelPath, Transform parent = null, Action<float> progress = null) + public async Task<Entity> LoadModel(string modelPath, UnityEngine.Transform parent = null, Action<float> progress = null) { //Implement me return null; The remote rendering coordinator and its required script (*ARRServiceUnity*) are ## Initialize Azure Remote Rendering -Now that we have the framework for our coordinator, we will implement each of the four stages starting with **Initialize Remote Rendering**. +Now that we have the framework for our coordinator, we'll implement each of the four stages starting with **Initialize Remote Rendering**. ![Diagram of the four stages required to load a model. 
The first stage "Initialize Remote Rendering" is highlighted.](./media/remote-render-stack-1.png) -**Initialize** tells Azure Remote Rendering which camera object to use for rendering and progresses the state machine into **NotAuthorized**. This means it's initialized but not yet authorized to connect to a session. Since starting an ARR session incurs a cost, we need to confirm the user wants to proceed. +**Initialize** tells Azure Remote Rendering which camera object to use for rendering and progresses the state machine into **NotAuthorized**. This state means it's initialized but not yet authorized to connect to a session. Since starting an ARR session incurs a cost, we need to confirm the user wants to proceed. When entering the **NotAuthorized** state, **CheckAuthorization** is called, which invokes the **RequestingAuthorization** event and determines which account credentials to use (**AccountInfo** is defined near the top of the class and uses the credentials you defined via the Unity Inspector in the step above). When entering the **NotAuthorized** state, **CheckAuthorization** is called, whi } ``` -In order to progress from **NotAuthorized** to **NoSession**, we'd typically present a modal dialog to the user so they can choose (and we'll do just that in another chapter). For now, we'll automatically bypass the authorization check by calling **ByPassAuthentication** as soon as the **RequestingAuthorization** event is triggered. +In order to progress from **NotAuthorized** to **NoSession**, we'd typically present a modal dialog to the user so they can choose (and we do just that in another chapter). For now, we automatically bypass the authorization check by calling **ByPassAuthentication** as soon as the **RequestingAuthorization** event is triggered. 1. Select the **RemoteRenderingCoordinator** GameObject and find the **OnRequestingAuthorization** Unity Event exposed in the Inspector of the **RemoteRenderingCoordinator** component. 1. 
Add a new event by pressing the '+' in the lower right. 1. Drag the component onto its own event, to reference itself. ![Screenshot of the Unity inspector of the Remote Rendering Coordinator Script. The title bar of the component is highlighted and an arrow connects it to the On Requesting Authorization event.](./media/bypass-authorization-add-event.png)-1. In the drop down select **RemoteRenderingCoordinator -> BypassAuthorization**.\ +1. In the drop down, select **RemoteRenderingCoordinator -> BypassAuthorization**.\ ![Screenshot of the On Requesting Authorization event.](./media/bypass-authorization-event.png) ## Create or join a remote session -The second stage is to Create or Join a Remote Rendering Session (see [Remote Rendering Sessions](../../../concepts/sessions.md) for more information). +The second stage is to Create or Join a Remote Rendering Session (for more information about rendering sessions, see [Remote Rendering Sessions](../../../concepts/sessions.md)). ![Diagram of the four stages required to load a model. The second stage "Create or Join Remote Rendering Session" is highlighted.](./media/remote-render-stack-2.png) -The remote session is where the models will be rendered. The **JoinRemoteSession( )** method will attempt to join an existing session, tracked with the **LastUsedSessionID** property or if there is an assigned active session ID on **SessionIDOverride**. **SessionIDOverride** is intended for your debugging purposes only, it should only be used when you know the session exists and would like to explicitly connect to it. +The remote session is where the models will be rendered. The **JoinRemoteSession( )** method attempts to join an existing session, tracked with the **LastUsedSessionID** property, or with an active session ID assigned on **SessionIDOverride**. **SessionIDOverride** is intended for debugging purposes only; use it only when you know the session exists and want to connect to it explicitly.
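The session-selection precedence just described (a debug override wins, then the last used session, then falling back to creating a new session) can be sketched in a few lines. This is illustrative Python with hypothetical names mirroring the properties above, not the ARR C# API:

```python
def resolve_session_id(session_id_override=None, last_used_session_id=None,
                       create_new=lambda: "new-session"):
    """Pick which remote session to join, mirroring the precedence
    described in this tutorial: explicit debug override first, then the
    last used session, then creating a new one (a slow operation)."""
    if session_id_override:        # debugging only: caller knows it exists
        return session_id_override
    if last_used_session_id:       # reuse to avoid session startup cost
        return last_used_session_id
    return create_new()            # hypothetical factory for a new session

print(resolve_session_id(last_used_session_id="abc"))  # abc
```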
-If no sessions are available, a new session will be created. Creating a new session is, however, a time-consuming operation. Therefore, you should try to create sessions only when required and reuse them whenever possible (see [Commercial Ready: Session pooling, scheduling, and best practices](../commercial-ready/commercial-ready.md#fast-startup-time-strategies) for more information on managing sessions). +If no sessions are available, a new session is created. Creating a new session is, however, a time-consuming operation. Therefore, you should try to create sessions only when required and reuse them whenever possible (see [Commercial Ready: Session pooling, scheduling, and best practices](../commercial-ready/commercial-ready.md#fast-startup-time-strategies) for more information on managing sessions). > [!TIP] > **StopRemoteSession()** will end the active session. To prevent unnecessary charges, you should always stop sessions when they are no longer needed. -The state machine will now progress to either **ConnectingToNewRemoteSession** or **ConnectingToExistingRemoteSession**, depending on available sessions. Both opening an existing session or creating a new session will trigger the **ARRSessionService.OnSessionStatusChanged** event, executing our **OnRemoteSessionStatusChanged** method. Ideally, this will result in advancing the state machine to **RemoteSessionReady**. +The state machine will now progress to either **ConnectingToNewRemoteSession** or **ConnectingToExistingRemoteSession**, depending on available sessions. Either opening an existing session or creating a new one triggers the **ARRSessionService.OnSessionStatusChanged** event, executing our **OnRemoteSessionStatusChanged** method. Ideally, this results in advancing the state machine to **RemoteSessionReady**. 1.
To join a new session, modify the code to replace the **JoinRemoteSession( )** and **StopRemoteSession( )** methods with the completed examples below: public async void StopRemoteSession() } ``` -If you want to save time by reusing sessions, make sure to deactivate the option **Auto-Stop Session** in the *ARRServiceUnity* component. Keep in mind that this will leave sessions running, even when no one is connected to them. Your session may run for as long as your *MaxLeaseTime* before it is shut down by the server (The value for *MaxLeaseTime* can be modified in the Remote Rendering Coordinator, under *New Session Defaults*). On the other hand, if you automatically shut down every session when disconnecting, you will have to wait for a new session to be started every time, which can be a somewhat lengthy process. +If you want to save time by reusing sessions, make sure to deactivate the option **Auto-Stop Session** in the *ARRServiceUnity* component. Keep in mind that this will leave sessions running, even when no one is connected to them. Your session may run for as long as your *MaxLeaseTime* before it is shut down by the server (the value for *MaxLeaseTime* can be modified in the Remote Rendering Coordinator, under *New Session Defaults*). On the other hand, if you automatically shut down every session when disconnecting, you'll have to wait for a new session to be started every time, which can be a lengthy process. > [!NOTE] > Stopping a session will take immediate effect and cannot be undone. Once stopped, you have to create a new session, with the same startup overhead. Next, the application needs to connect its local runtime to the remote session. ![Diagram of the four stages required to load a model.
The third stage "Connect Local Runtime to Remote Session" is highlighted.](./media/remote-render-stack-3.png) -The application also needs to listen for events about the connection between the runtime and the current session; those state changes are handled in **OnLocalRuntimeStatusChanged**. This code will advance our state to **ConnectingToRuntime**. Once connected in **OnLocalRuntimeStatusChanged**, the state will advance to **RuntimeConnected**. Connecting to the runtime is the last state the coordinator concerns itself with, which means the application is done with all the common configuration and is ready to begin the session-specific work of loading and rendering models. +The application also needs to listen for events about the connection between the runtime and the current session; those state changes are handled in **OnLocalRuntimeStatusChanged**. This code advances our state to **ConnectingToRuntime**. Once connected in **OnLocalRuntimeStatusChanged**, the state advances to **RuntimeConnected**. Connecting to the runtime is the last state the coordinator concerns itself with, which means the application is done with all the common configuration and is ready to begin the session-specific work of loading and rendering models. 1. Replace the **ConnectRuntimeToRemoteSession( )** and **DisconnectRuntimeFromRemoteSession( )** methods with the completed versions below. 1. Take note that the Unity method **LateUpdate** updates the current active session. This update allows the current session to send and receive messages and to update the frame buffer with the frames received from the remote session, and it's critical for ARR to function correctly. public void DisconnectRuntimeFromRemoteSession() ## Load a model -With the required foundation in place, you are ready to load a model into the remote session and start receiving frames. +With the required foundation in place, you're ready to load a model into the remote session and start receiving frames.
![Diagram of the four stages required to load a model. The fourth stage "Load and view a Model" is highlighted.](./media/remote-render-stack-4.png) -The **LoadModel** method is designed to accept a model path, progress handler, and parent transform. These arguments will be used to load a model into the remote session, update the user on the loading progress, and orient the remotely rendered model based on the parent transform. +The **LoadModel** method is designed to accept a model path, progress handler, and parent transform. These arguments are used to load a model into the remote session, update the user on the loading progress, and orient the remotely rendered model based on the parent transform. 1. Replace the **LoadModel** method entirely with the code below: The **LoadModel** method is designed to accept a model path, progress handler, a /// <param name="parent">The parent Transform for this remote entity</param> /// <param name="progress">A call back method that accepts a float progress value [0->1]</param> /// <returns>An awaitable Remote Rendering Entity</returns>- public async Task<Entity> LoadModel(string modelPath, Transform parent = null, Action<float> progress = null) + public async Task<Entity> LoadModel(string modelPath, UnityEngine.Transform parent = null, Action<float> progress = null) { //Create a root object to parent a loaded model to var modelEntity = ARRSessionService.CurrentActiveSession.Connection.CreateEntity(); The code above is performing the following steps: 1. Create a [Remote Entity](../../../concepts/entities.md). 1. Create a local GameObject to represent the remote entity.-1. Configure the local GameObject to sync its state (i.e. Transform) to the remote entity every frame. +1. Configure the local GameObject to sync its state (that is, Transform) to the remote entity every frame. 1. Load model data from Blob Storage into the remote entity. 1. Return the parent Entity, for later reference. 
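The `Action<float>` progress callback that **LoadModel** accepts reports a normalized value in the range [0, 1]. A minimal, language-neutral sketch of that callback pattern (illustrative Python; the step names are hypothetical, and the real work happens in the ARR C# API):

```python
def load_model(model_path, progress=None):
    """Illustrative stand-in for a staged model load that reports
    normalized progress in [0, 1] via an optional callback."""
    steps = ["create remote entity", "create local proxy",
             "bind state sync", "fetch model data", "finalize"]
    for i, step in enumerate(steps, start=1):
        # ... real work for `step` would happen here ...
        if progress:
            progress(i / len(steps))   # report fraction complete
    return f"entity:{model_path}"      # hypothetical handle

reported = []
load_model("builtin://Engine", progress=reported.append)
print(reported[-1])  # 1.0
```

Wiring the callback this way lets a UI (for example, a loading bar) observe each stage without the loader knowing anything about the UI.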
We now have all the code required to view a remotely rendered model, all four of 1. Save your code. 1. Press the Play button in the Unity Editor to start the process of connecting to Azure Remote Rendering and creating a new session.-1. You will not see much in the Game view, however, the Console will show the state of the application changing. It will likely progress to `ConnectingToNewRemoteSession`, and stay there, possibly for up to five minutes. +1. You won't see much in the Game view; however, the Console shows the state of the application changing. It will likely progress to `ConnectingToNewRemoteSession`, and stay there, possibly for up to five minutes. 1. Select the **RemoteRenderingCoordinator** GameObject to see its attached scripts in the inspector. Watch the **Service** component update as it progresses through its initialization and connection steps. 1. Monitor the Console output - waiting for the state to change to **RuntimeConnected**.-1. Once the runtime is connected, right-click on the **RemoteRenderingCoordinator** in the inspector to expose the context menu. Then, click the **Load Test Model** option in the context menu, added by the `[ContextMenu("Load Test Model")]` part of our code above. +1. Once the runtime is connected, right-click on the **RemoteRenderingCoordinator** in the inspector to expose the context menu. Then, select the **Load Test Model** option in the context menu, added by the `[ContextMenu("Load Test Model")]` part of our code above. ![Screenshot of the Unity inspector of the Remote Rendering Coordinator Script. Highlights instruct to first right-click on the title bar and then select Load Test Model from the context menu.](./media/load-test-model.png) We now have all the code required to view a remotely rendered model, all four of ![Screenshot of Unity running the project in Play mode. A car engine is rendered in the center of the viewport.](./media/test-model-rendered.png) -Congratulations!
You've created a basic application capable of viewing remotely rendered models using Azure Remote Rendering. In the next tutorial, we will integrate MRTK and import our own models. +Congratulations! You've created a basic application capable of viewing remotely rendered models using Azure Remote Rendering. In the next tutorial, we'll integrate MRTK and import our own models. > [!div class="nextstepaction"] > [Next: Interfaces and custom models](../custom-models/custom-models.md) |
security | Customer Lockbox Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md | The following services are currently supported for Customer Lockbox: - Azure Storage - Azure Subscription Transfers - Azure Synapse Analytics-- Azure Unified Vision Service - Commerce AI (Intelligent Recommendations) - DevCenter / DevBox - ElasticSan |
security | Shared Responsibility Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/shared-responsibility-ai.md | + + Title: AI shared responsibility model - Microsoft Azure +description: "Understand the shared responsibility model and which tasks are handled by the AI platform or application provider, and which tasks are handled by you." ++documentationcenter: na +++editor: na ++ms.assetid: ++++ na + Last updated : 10/23/2023++++# Artificial intelligence (AI) shared responsibility model ++As you consider and evaluate AI enabled integration, it's critical to understand the shared responsibility model and which tasks the AI platform or application provider handles and which tasks you handle. The workload responsibilities vary depending on whether the AI integration is based on Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS). ++## Division of responsibility +As with cloud services, you have options when implementing AI capabilities for your organization. Depending on which option you choose, you take responsibility for different parts of the necessary operations and policies needed to use AI safely. ++The following diagram illustrates the areas of responsibility between you and Microsoft according to the type of deployment. +++## AI layer overview +An AI enabled application consists of three layers of functionality that group together tasks, which you or an AI provider perform. The security responsibilities generally reside with whoever performs the tasks, but an AI provider might choose to expose security or other controls as a configuration option to you as appropriate. These three layers include: ++### AI platform +The AI platform layer provides the AI capabilities to the applications.
At the platform layer, there's a need to build and safeguard the infrastructure that runs the AI model, training data, and specific configurations that change the behavior of the model, such as weights and biases. This layer provides access to functionality via APIs, which pass text known as a *Metaprompt* to the AI model for processing, then return the generated outcome, known as a *Prompt-Response*. ++**AI platform security considerations** - To protect the AI platform from malicious inputs, a safety system must be built to filter out the potentially harmful instructions sent to the AI model (inputs). As AI models are generative, there's also a potential that some harmful content might be generated and returned to the user (outputs). Any safety system must first protect against potentially harmful inputs and outputs of many classifications including hate, jailbreaks, and others. These classifications will likely evolve over time based on model knowledge, locale, and industry. ++Microsoft has built-in safety systems for both PaaS and SaaS offerings: ++- PaaS - [Azure OpenAI Service](../../ai-services/openai/overview.md) +- SaaS - [Microsoft Security Copilot](https://www.microsoft.com/security/business/ai-machine-learning/microsoft-security-copilot) ++### AI application +The AI application accesses the AI capabilities and provides the service or interface that the user consumes. The components in this layer can vary from relatively simple to highly complex, depending on the application. The simplest standalone AI applications act as an interface to a set of APIs taking a text-based user-prompt and passing that data to the model for a response. More complex AI applications include the ability to ground the user-prompt with extra context, including a persistence layer, semantic index, or via plugins to allow access to more data sources. Advanced AI applications might also interface with existing applications and systems. 
Existing applications and systems might work across text, audio, and images to generate various types of content. ++**AI application security considerations** - An application safety system must be built to protect the AI application from malicious activities. The safety system provides deep inspection of the content being used in the Metaprompt sent to the AI model. The safety system also inspects the interactions with any plugins, data connectors, and other AI applications (known as AI Orchestration). One way you can incorporate this in your own IaaS/PaaS based AI application is to use the [Azure AI Content Safety](https://azure.microsoft.com/products/ai-services/ai-content-safety/) service. Other capabilities are available depending on your needs. ++### AI usage +The AI usage layer describes how the AI capabilities are ultimately used and consumed. Generative AI offers a new type of user/computer interface that is fundamentally different from other computer interfaces, such as API, command-prompt, and graphical user interfaces (GUIs). The generative AI interface is both interactive and dynamic, allowing the computer capabilities to adjust to the user and their intent. The generative AI interface contrasts with previous interfaces that primarily force users to learn the system design and functionality and adjust to it. This interactivity allows user input, instead of application designers, to have a high level of influence over the output of the system, making safety guardrails critical to protecting people, data, and business assets. ++**AI usage security considerations** - Protecting AI usage is similar to any computer system as it relies on security assurances for identity and access controls, device protections and monitoring, data protection and governance, administrative controls, and other controls. ++More emphasis is required on user behavior and accountability because of the increased influence users have on the output of the systems.
It's critical to update acceptable use policies and educate users on the differences between standard IT applications and AI enabled applications. These policies should include AI specific considerations related to security, privacy, and ethics. Additionally, users should be educated on AI based attacks that can be used to trick them with convincing fake text, voices, videos, and more. ++AI specific attack types are defined in: ++- [Microsoft Security Response Center's (MSRC) vulnerability severity classification for AI systems](https://www.microsoft.com/msrc/aibugbar) +- [MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS)](https://atlas.mitre.org/) +- [OWASP top 10 for Large Language Model (LLM) applications](https://owasp.org/www-project-top-10-for-large-language-model-applications/) +- [OWASP Machine Learning (ML) security top 10](https://owasp.org/www-project-machine-learning-security-top-10/) +- [NIST AI risk management framework](https://www.nist.gov/itl/ai-risk-management-framework) ++## Security lifecycle +As with security for other types of capability, it's critical to plan for a complete approach. A complete approach includes people, process, and technology across the full security lifecycle: identify, protect, detect, respond, recover, and govern. Any gap or weakness in this lifecycle could cause you to: ++- Fail to secure important assets +- Experience easily preventable attacks +- Be unable to handle attacks +- Be unable to rapidly restore business critical services +- Apply controls inconsistently ++To learn more about the unique nature of AI threat testing, read how [Microsoft AI Red Team is building the future of safer AI](https://www.microsoft.com/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/). ++## Configure before customize +Microsoft recommends organizations start with SaaS based approaches like the Copilot model for their initial adoption of AI and for all subsequent AI workloads.
This minimizes the level of responsibility and expertise your organization has to provide to design, operate, and secure these highly complex capabilities. ++If the current "off the shelf" capabilities don't meet the specific needs for a workload, you can adopt a PaaS model by using AI services, such as [Azure OpenAI Service](../../ai-services/openai/overview.md), to meet those specific requirements. ++Custom model building should only be adopted by organizations with deep expertise in data science and the security, privacy, and ethical considerations of AI. ++To help bring AI to the world, Microsoft is developing Copilot solutions for each of the main productivity solutions: from Bing and Windows, to GitHub and Office 365. Microsoft is developing full stack solutions for all types of productivity scenarios. These are offered as SaaS solutions. Built into the user interface of the product, they're tuned to assist the user with specific tasks to increase productivity. ++Microsoft ensures that every Copilot solution is engineered following our strong principles for [AI governance](https://blogs.microsoft.com/on-the-issues/2023/05/25/how-do-we-best-govern-ai/). ++## Next steps +Learn more about Microsoft's product development requirements for responsible AI in the [Microsoft Responsible AI Standard](https://www.microsoft.com/ai/principles-and-approach/). ++Learn about [shared responsibilities for cloud computing](shared-responsibility.md). |
sentinel | Ci Cd Custom Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-deploy.md | Rather than passing parameters as inline values in your content files, consider 1. Is there a workspace-mapped parameter file? This would be a parameter file in the same directory as the content files that ends with *.parameters-\<WorkspaceID>.json* 1. Is there a default parameter file? This would be any parameter file in the same directory as the content files that ends with *.parameters.json* -It is encouraged to map your parameter files through through the configuration file or by specifying the workspace ID in the file name to avoid clashes in scenarios with multiple deployments. +It is encouraged to map your parameter files through the configuration file or by specifying the workspace ID in the file name to avoid clashes in scenarios with multiple deployments. > [!IMPORTANT] > Once a parameter file match is determined based on the above mapping precedence, the pipeline will ignore any remaining mappings. |
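The parameter-file lookup described above amounts to two ordered suffix checks. A minimal sketch, assuming the deciding input is just the set of file names sitting next to the content file (illustrative Python; not part of the deployment pipeline):

```python
def pick_parameter_file(file_names, workspace_id):
    """Resolve which parameter file a deployment uses, following the
    precedence described above: a workspace-mapped file
    (*.parameters-<WorkspaceID>.json) wins over a default file
    (*.parameters.json); with neither present, inline values apply."""
    workspace_suffix = f".parameters-{workspace_id}.json"
    for name in sorted(file_names):
        if name.endswith(workspace_suffix):
            return name            # workspace-mapped file wins
    for name in sorted(file_names):
        if name.endswith(".parameters.json"):
            return name            # fall back to the default file
    return None                    # no match: inline parameter values

files = ["rule.json", "rule.parameters.json",
         "rule.parameters-1111-2222.json"]
print(pick_parameter_file(files, "1111-2222"))  # rule.parameters-1111-2222.json
```

Naming files with the workspace ID, as the first branch shows, is what avoids clashes when several workspaces deploy from the same repository.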
service-bus-messaging | Service Bus Amqp Protocol Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-protocol-guide.md | Unlike earlier expired draft versions from the AMQP working group that are still The protocol can be used for symmetric peer-to-peer communication, for interaction with message brokers that support queues and publish/subscribe entities, as Azure Service Bus does. It can also be used for interaction with messaging infrastructure where the interaction patterns are different from regular queues, as is the case with Azure Event Hubs. An event hub acts like a queue when events are sent to it, but acts more like a serial storage service when events are read from it; it somewhat resembles a tape drive. The client picks an offset into the available data stream and is then served all events from that offset to the latest available. -The AMQP 1.0 protocol is designed to be extensible, enabling further specifications to enhance its capabilities. The three extension specifications discussed in this document illustrate this. For communication over existing HTTPS/WebSockets infrastructure, configuring the native AMQP TCP ports may be difficult. A binding specification defines how to layer AMQP over WebSockets. For interacting with the messaging infrastructure in a request/response fashion for management purposes or to provide advanced functionality, the AMQP management specification defines the required basic interaction primitives. For federated authorization model integration, the AMQP claims-based-security specification defines how to associate and renew authorization tokens associated with links. +The AMQP 1.0 protocol is designed to be extensible, enabling further specifications to enhance its capabilities. The three extension specifications discussed in this document illustrate this. 
For communication over existing HTTPS/WebSockets infrastructure, configuring the native AMQP TCP ports might be difficult. A binding specification defines how to layer AMQP over WebSockets. For interacting with the messaging infrastructure in a request/response fashion for management purposes or to provide advanced functionality, the AMQP management specification defines the required basic interaction primitives. For federated authorization model integration, the AMQP claims-based-security specification defines how to associate and renew authorization tokens associated with links. ## Basic AMQP scenarios Clients that use AMQP connections over TCP require ports 5671 and 5672 to be ope A .NET client would fail with a SocketException ("An attempt was made to access a socket in a way forbidden by its access permissions") if these ports are blocked by the firewall. The feature can be disabled by setting `EnableAmqpLinkRedirect=false` in the connection string, which forces the clients to communicate with the remote service over port 5671. +The AMQP **WebSocket binding** provides a mechanism for tunneling an AMQP connection over a WebSocket transport. This binding creates a tunnel over the TCP port 443, which is equivalent to AMQP 5671 connections. Use AMQP WebSockets if you are behind a firewall that blocks TCP connections over ports 5671, 5672 but allows TCP connections over port 443 (https). + ### Links The lock on a message is released when the transfer is settled into one of the t Even though the Service Bus APIs don't directly expose such an option today, a lower-level AMQP protocol client can use the link-credit model to turn the "pull-style" interaction of issuing one unit of credit for each receive request into a "push-style" model by issuing a large number of link credits and then receive messages as they become available without any further interaction. 
Push is supported through the [ServiceBusProcessor.PrefetchCount](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) or [ServiceBusReceiver.PrefetchCount](/dotnet/api/azure.messaging.servicebus.servicebusreceiver) property settings. When they're non-zero, the AMQP client uses it as the link credit. -In this context, it's important to understand that the clock for the expiration of the lock on the message inside the entity starts when the message is taken from the entity, not when the message is put on the wire. Whenever the client indicates readiness to receive messages by issuing link credit, it's therefore expected to be actively pulling messages across the network and be ready to handle them. Otherwise the message lock may have expired before the message is even delivered. The use of link-credit flow control should directly reflect the immediate readiness to deal with available messages dispatched to the receiver. +In this context, it's important to understand that the clock for the expiration of the lock on the message inside the entity starts when the message is taken from the entity, not when the message is put on the wire. Whenever the client indicates readiness to receive messages by issuing link credit, it's therefore expected to be actively pulling messages across the network and be ready to handle them. Otherwise the message lock might have expired before the message is even delivered. The use of link-credit flow control should directly reflect the immediate readiness to deal with available messages dispatched to the receiver. -In summary, the following sections provide a schematic overview of the performative flow during different API interactions. Each section describes a different logical operation. Some of those interactions may be "lazy," meaning they may only be performed when required. Creating a message sender may not cause a network interaction until the first message is sent or requested. 
+In summary, the following sections provide a schematic overview of the performative flow during different API interactions. Each section describes a different logical operation. Some of those interactions might be "lazy," meaning they might only be performed when required. Creating a message sender might not cause a network interaction until the first message is sent or requested. The arrows in the following table show the performative flow direction. The operations are grouped by an identifier `txn-id`. For transactional interaction, the client acts as a `transaction controller`, which controls the operations that should be grouped together. The Service Bus service acts as a `transactional resource` and performs work as requested by the `transaction controller`. -The client and service communicate over a `control link` , which is established by the client. The `declare` and `discharge` messages are sent by the controller over the control link to allocate and complete transactions respectively (they don't represent the demarcation of transactional work). The actual send/receive isn't performed on this link. Each transactional operation requested is explicitly identified with the desired `txn-id` and therefore may occur on any link on the Connection. If the control link is closed while there exist non-discharged transactions it created, then all such transactions are immediately rolled back, and attempts to perform further transactional work on them will lead to failure. Messages on control link must not be pre settled. +The client and service communicate over a `control link`, which is established by the client. The `declare` and `discharge` messages are sent by the controller over the control link to allocate and complete transactions respectively (they don't represent the demarcation of transactional work). The actual send/receive isn't performed on this link.
Each transactional operation requested is explicitly identified with the desired `txn-id` and therefore might occur on any link on the Connection. If the control link is closed while there exist non-discharged transactions it created, then all such transactions are immediately rolled back, and attempts to perform further transactional work on them will lead to failure. Messages on the control link must not be pre-settled. Every connection has to initiate its own control link to be able to start and end transactions. The service defines a special target that functions as a `coordinator`. The client/controller establishes a control link to this target. The control link is outside the boundary of an entity; that is, the same control link can be used to initiate and discharge transactions for multiple entities. The default security model of AMQP discussed in the introduction is based on SAS. AMQP's SASL integration has two drawbacks: -* All credentials and tokens are scoped to the connection. A messaging infrastructure may want to provide differentiated access control on a per-entity basis; for example, allowing the bearer of a token to send to queue A but not to queue B. With the authorization context anchored on the connection, it's not possible to use a single connection and yet use different access tokens for queue A and queue B. -* Access tokens are typically only valid for a limited time. This validity requires the user to periodically reacquire tokens and provides an opportunity to the token issuer to refuse issuing a fresh token if the user's access permissions have changed. AMQP connections may last for long periods of time. The SASL model only provides a chance to set a token at connection time, which means that the messaging infrastructure either has to disconnect the client when the token expires or it needs to accept the risk of allowing continued communication with a client whose access rights may have been revoked in the interim.
+* All credentials and tokens are scoped to the connection. A messaging infrastructure might want to provide differentiated access control on a per-entity basis; for example, allowing the bearer of a token to send to queue A but not to queue B. With the authorization context anchored on the connection, it's not possible to use a single connection and yet use different access tokens for queue A and queue B. +* Access tokens are typically only valid for a limited time. This validity requires the user to periodically reacquire tokens and provides an opportunity to the token issuer to refuse issuing a fresh token if the user's access permissions have changed. AMQP connections might last for long periods of time. The SASL model only provides a chance to set a token at connection time, which means that the messaging infrastructure either has to disconnect the client when the token expires or it needs to accept the risk of allowing continued communication with a client whose access rights might have been revoked in the interim. The AMQP CBS specification, implemented by Service Bus, enables an elegant workaround for both of those issues: it allows a client to associate access tokens with each node, and to update those tokens before they expire, without interrupting the message flow. |
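The pull-to-push switch that the link-credit paragraphs above describe can be sketched as a toy credit counter. This is plain Python and illustrative only; it is not the AMQP wire protocol or a Service Bus client. Granting one credit per receive request is the pull style, while granting a large credit window lets the sender push everything that is available.

```python
class Link:
    """Minimal model of AMQP link-credit flow control (illustrative only)."""

    def __init__(self):
        self.credit = 0                         # credits granted by the receiver
        self.queue = ["m1", "m2", "m3", "m4"]   # messages waiting at the sender
        self.delivered = []                     # messages pushed to the receiver

    def flow(self, credit):
        """Receiver issues link credit; the sender then drains what it can."""
        self.credit += credit
        self._drain()

    def _drain(self):
        # The sender pushes messages as long as credit remains.
        while self.credit > 0 and self.queue:
            self.delivered.append(self.queue.pop(0))
            self.credit -= 1


link = Link()
link.flow(1)    # pull-style: one credit per receive request delivers one message
assert link.delivered == ["m1"]
link.flow(10)   # push-style: a large credit window drains the remaining queue
assert link.delivered == ["m1", "m2", "m3", "m4"]
```

As the sketch shows, the receiver's remaining credit (here 7) is simply the window minus what was delivered, which is why issued credit should reflect immediate readiness to process messages.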
storage | Blob Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md | Several filters are available for customizing a blob inventory report: | Filter name | Filter type | Notes | Required? | |--|--|--|--| | blobTypes | Array of predefined enum values | Valid values are `blockBlob` and `appendBlob` for hierarchical namespace enabled accounts, and `blockBlob`, `appendBlob`, and `pageBlob` for other accounts. This field isn't applicable for inventory on a container (objectType: `container`). | Yes |+| creationTime | Number | Specifies the number of days ago within which the blob must have been created. For example, a value of `3` includes in the report only those blobs that were created in the last 3 days. | No | | prefixMatch | Array of up to 10 strings for prefixes to be matched. | If you don't define *prefixMatch* or provide an empty prefix, the rule applies to all blobs within the storage account. A prefix must be a container name prefix or a container name. For example, `container`, `container1/foo`. | No | | excludePrefix | Array of up to 10 strings for prefixes to be excluded. | Specifies the blob paths to exclude from the inventory report.<br><br>An *excludePrefix* must be a container name prefix or a container name. An empty *excludePrefix* would mean that all blobs with names matching any *prefixMatch* string will be listed.<br><br>If you want to include a certain prefix, but exclude some specific subset from it, then you could use the excludePrefix filter. For example, if you want to include all blobs under `container-a` except those under the folder `container-a/folder`, then *prefixMatch* should be set to `container-a` and *excludePrefix* should be set to `container-a/folder`. | No | | includeSnapshots | boolean | Specifies whether the inventory should include snapshots. Default is `false`. This field isn't applicable for inventory on a container (objectType: `container`). | No | |
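A sketch of how the inventory filters in the table above combine, assuming AND semantics across filters as the `container-a`/`container-a/folder` example implies. The function name and parameters are hypothetical helpers for illustration, not part of the blob inventory feature.

```python
from datetime import datetime, timedelta, timezone


def matches_inventory_filters(blob_name, created_at, prefix_match=None,
                              exclude_prefix=None, creation_time_days=None,
                              now=None):
    """Illustrative model of the filter semantics described in the table."""
    now = now or datetime.now(timezone.utc)
    # creationTime: blob must have been created within the last N days.
    if creation_time_days is not None:
        if created_at < now - timedelta(days=creation_time_days):
            return False
    # prefixMatch: an empty or missing list means the rule applies to all blobs.
    if prefix_match and not any(blob_name.startswith(p) for p in prefix_match):
        return False
    # excludePrefix: drop blobs under any excluded prefix.
    if exclude_prefix and any(blob_name.startswith(p) for p in exclude_prefix):
        return False
    return True


now = datetime(2024, 1, 10, tzinfo=timezone.utc)
fresh = datetime(2024, 1, 9, tzinfo=timezone.utc)
# Included: under container-a, outside the excluded folder, created recently.
assert matches_inventory_filters("container-a/data.csv", fresh,
                                 prefix_match=["container-a"],
                                 exclude_prefix=["container-a/folder"],
                                 creation_time_days=3, now=now)
# Excluded: matches the excludePrefix.
assert not matches_inventory_filters("container-a/folder/x.csv", fresh,
                                     prefix_match=["container-a"],
                                     exclude_prefix=["container-a/folder"],
                                     now=now)
```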
storage | Storage Blob Containers List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md | When you list the containers in an Azure Storage account from your code, you can ## About container listing options -To list containers in your storage account, call the following method: +When listing containers from your code, you can specify options to manage how results are returned from Azure Storage. You can specify the number of results to return in each set of results, and then retrieve the subsequent sets. You can also filter the results by a prefix, and return container metadata with the results. These options are described in the following sections. ++To list containers in a storage account, call the following method: - [listBlobContainers](/java/api/com.azure.storage.blob.blobserviceclient) -The overloads for this method provide additional options for managing how containers are returned by the listing operation. These options are described in the following sections. +This method returns an iterable of type [BlobContainerItem](/java/api/com.azure.storage.blob.models.blobcontaineritem). Containers are ordered lexicographically by name. ### Manage how many results are returned -By default, a listing operation returns up to 5000 results at a time. To return a smaller set of results, provide a nonzero value for the size of the page of results to return. +By default, a listing operation returns up to 5000 results at a time. To return a smaller set of results, provide a nonzero value for the size of the page of results to return. You can set this value using the following method: ++- [ListBlobContainersOptions.setMaxResultsPerPage](/java/api/com.azure.storage.blob.models.listblobcontainersoptions#com-azure-storage-blob-models-listblobcontainersoptions-setmaxresultsperpage(java-lang-integer)) ++The examples presented in this article show you how to return results in pages. 
To learn more about pagination concepts, see [Pagination with the Azure SDK for Java](/azure/developer/java/sdk/pagination). ### Filter results with a prefix -To filter the list of containers, specify a string for the `prefix` parameter. The prefix string can include one or more characters. Azure Storage then returns only the containers whose names start with that prefix. +To filter the list of containers, specify a string for the `prefix` parameter. The prefix string can include one or more characters. Azure Storage then returns only the containers whose names start with that prefix. You can set this value using the following method: ++- [ListBlobContainersOptions.setPrefix](/java/api/com.azure.storage.blob.models.listblobcontainersoptions#com-azure-storage-blob-models-listblobcontainersoptions-setprefix(java-lang-string)) ++### Include container metadata ++To include container metadata with the results, create a `BlobContainerListDetails` instance and pass `true` to the following method: ++- [BlobContainerListDetails.setRetrieveMetadata](/java/api/com.azure.storage.blob.models.blobcontainerlistdetails#com-azure-storage-blob-models-blobcontainerlistdetails-setretrievemetadata(boolean)) ++Then pass the `BlobContainerListDetails` object to the following method: ++- [ListBlobContainersOptions.setDetails](/java/api/com.azure.storage.blob.models.listblobcontainersoptions#com-azure-storage-blob-models-listblobcontainersoptions-setdetails(com-azure-storage-blob-models-blobcontainerlistdetails)) ++### Include deleted containers ++To include soft-deleted containers with the results, create a `BlobContainerListDetails` instance and pass `true` to the following method: ++- [BlobContainerListDetails.setRetrieveDeleted](/java/api/com.azure.storage.blob.models.blobcontainerlistdetails#com-azure-storage-blob-models-blobcontainerlistdetails-setretrievedeleted(boolean)) ++Then pass the `BlobContainerListDetails` object to the following method: ++- 
[ListBlobContainersOptions.setDetails](/java/api/com.azure.storage.blob.models.listblobcontainersoptions#com-azure-storage-blob-models-listblobcontainersoptions-setdetails(com-azure-storage-blob-models-blobcontainerlistdetails)) -## Example: List containers +## Code examples -The following example list containers and filters the results by a specified prefix: +The following example lists containers and filters the results by a specified prefix: :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerList.java" id="Snippet_ListContainers"::: |
storage | Storage Blob Containers List Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-python.md | When you list the containers in an Azure Storage account from your code, you can ## About container listing options -To list containers in your storage account, call the following method: +When listing containers from your code, you can specify options to manage how results are returned from Azure Storage. You can specify the number of results to return in each set of results, and then retrieve the subsequent sets. You can also filter the results by a prefix, and return container metadata with the results. These options are described in the following sections. ++To list containers in a storage account, call the following method: - [BlobServiceClient.list_containers](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient#azure-storage-blob-blobserviceclient-list-containers) +This method returns an iterable of type [ContainerProperties](/python/api/azure-storage-blob/azure.storage.blob.containerproperties). Containers are ordered lexicographically by name. + ### Manage how many results are returned -By default, a listing operation returns up to 5000 results at a time. To return a smaller set of results, provide a nonzero value for the size of the page of results to return. +By default, a listing operation returns up to 5000 results at a time. To return a smaller set of results, provide a nonzero value for the `results_per_page` keyword argument. ### Filter results with a prefix -To filter the list of containers, specify a string or character for the `name_starts_with` parameter. The prefix string can include one or more characters. Azure Storage then returns only the containers whose names start with that prefix. +To filter the list of containers, specify a string or character for the `name_starts_with` keyword argument. The prefix string can include one or more characters. 
Azure Storage then returns only the containers whose names start with that prefix. ++### Include container metadata ++To include container metadata with the results, set the `include_metadata` keyword argument to `True`. Azure Storage includes metadata with each container returned, so you don't need to fetch the container metadata separately. ++### Include deleted containers ++To include soft-deleted containers with the results, set the `include_deleted` keyword argument to `True`. ## Code examples |
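As a sketch, the keyword arguments described above can be wrapped in a small helper. The helper and the fake client below are hypothetical: `client` is only assumed to expose `list_containers()` with the same keyword arguments as `azure.storage.blob.BlobServiceClient`, and the stand-in class exists solely so the sketch runs offline without a storage connection.

```python
def list_container_names(client, prefix=None, page_size=None,
                         include_metadata=False, include_deleted=False):
    """Return container names using the keyword arguments described above.

    `client` is duck-typed: any object with a compatible list_containers()
    works, so no real Azure Storage call is made in this sketch.
    """
    containers = client.list_containers(
        name_starts_with=prefix,
        results_per_page=page_size,
        include_metadata=include_metadata,
        include_deleted=include_deleted,
    )
    return [c.name for c in containers]


class FakeContainer:
    def __init__(self, name):
        self.name = name


class FakeBlobServiceClient:
    """Offline stand-in mimicking BlobServiceClient.list_containers."""

    def list_containers(self, name_starts_with=None, results_per_page=None,
                        include_metadata=False, include_deleted=False):
        names = ["app-logs", "app-data", "backups"]
        return [FakeContainer(n) for n in names
                if n.startswith(name_starts_with or "")]


names = list_container_names(FakeBlobServiceClient(), prefix="app-")
assert names == ["app-logs", "app-data"]
```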
storage | Storage Blob Containers List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md | When you list the containers in an Azure Storage account from your code, you can ## About container listing options +When listing containers from your code, you can specify options to manage how results are returned from Azure Storage. You can specify the number of results to return in each set of results, and then retrieve the subsequent sets. You can also filter the results by a prefix, and return container metadata with the results. These options are described in the following sections. + To list containers in your storage account, call one of the following methods: - [GetBlobContainers](/dotnet/api/azure.storage.blobs.blobserviceclient.getblobcontainers) - [GetBlobContainersAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.getblobcontainersasync) -The overloads for these methods provide additional options for managing how containers are returned by the listing operation. These options are described in the following sections. +These methods return a list of [BlobContainerItem](/dotnet/api/azure.storage.blobs.models.blobcontaineritem) objects. Containers are ordered lexicographically by name. ### Manage how many results are returned -By default, a listing operation returns up to 5000 results at a time. To return a smaller set of results, provide a nonzero value for the size of the page of results to return. --If your storage account contains more than 5000 containers, or if you have specified a page size such that the listing operation returns a subset of containers in the storage account, then Azure Storage returns a *continuation token* with the list of containers. A continuation token is an opaque value that you can use to retrieve the next set of results from Azure Storage. --In your code, check the value of the continuation token to determine whether it is empty. 
When the continuation token is empty, then the set of results is complete. If the continuation token is not empty, then call the listing method again, passing in the continuation token to retrieve the next set of results, until the continuation token is empty. +By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for .NET](/dotnet/azure/sdk/pagination). ### Filter results with a prefix To filter the list of containers, specify a string for the `prefix` parameter. The prefix string can include one or more characters. Azure Storage then returns only the containers whose names start with that prefix. -### Return metadata +### Include container metadata ++To include container metadata with the results, specify the `Metadata` value for the [BlobContainerTraits](/dotnet/api/azure.storage.blobs.models.blobcontainertraits) enum. Azure Storage includes metadata with each container returned, so you don't need to fetch the container metadata separately. ++### Include deleted containers -To return container metadata with the results, specify the **Metadata** value for the [BlobContainerTraits](/dotnet/api/azure.storage.blobs.models.blobcontainertraits) enum. Azure Storage includes metadata with each container returned, so you do not need to also fetch the container metadata. +To include soft-deleted containers with the results, specify the `Deleted` value for the [BlobContainerStates](/dotnet/api/azure.storage.blobs.models.blobcontainerstates) enum. -## Example: List containers +## Code example: List containers The following example asynchronously lists the containers in a storage account that begin with a specified prefix. 
The example lists containers that begin with the specified prefix and returns the specified number of results per call to the listing operation. It then uses the continuation token to get the next segment of results. The example also returns container metadata with the results. |
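The continuation-token loop described above can be modeled in a few lines. This is plain Python and illustrative only; Azure Storage's real continuation token is an opaque string, not an integer offset.

```python
def get_page(items, page_size, token=0):
    """Return one page of results plus a continuation token.

    An empty token ("") signals that the listing is complete; this models
    segmented listing rather than calling a real Azure Storage API.
    """
    page = items[token:token + page_size]
    next_token = token + page_size if token + page_size < len(items) else ""
    return page, next_token


containers = [f"container{i}" for i in range(7)]
collected, token = [], 0
while True:
    page, token = get_page(containers, page_size=3, token=token)
    collected.extend(page)
    if token == "":   # empty continuation token: the result set is complete
        break
assert collected == containers
```

The loop shape is the point: keep calling the listing operation with the returned token until the token comes back empty.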
stream-analytics | Kafka Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md | Last updated 10/19/2023 Azure Stream Analytics allows you to connect directly to Kafka clusters as a producer to output data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The Kafka Adapters are backward compatible and support all versions with the latest client release starting from version 0.10. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions. Supported compression types are None, Gzip, Snappy, LZ4, and Zstd. +## Configuration +The following table lists the property names and their description for creating a Kafka output: + +| Property name | Description | +||-| +| Input/Output Alias | A friendly name used in queries to reference your input or output | +| Bootstrap server addresses | A list of host/port pairs to establish the connection to the Kafka cluster. | +| Kafka topic | A unit of your Kafka cluster you want to write events to. | +| Security Protocol | How you want to connect to your Kafka cluster. Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT or None. | +| Event Serialization format | The serialization format (JSON, CSV, Avro) of the outgoing data stream. | +| Partition key | Azure Stream Analytics assigns partitions using round partitioning. | +| Kafka event compression type | The compression type used for outgoing data streams, such as Gzip, Snappy, Lz4, Zstd, or None. 
| + ## Authentication and encryption You can use four types of security protocols to connect to your Kafka clusters: You can use four types of security protocols to connect to your Kafka clusters: > [!IMPORTANT]-> Confluent Cloud supports authenticating using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics does not currently support these authentication options. -> +> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics does not support authentication using OAuth or SAML single sign-on (SSO). +> You can connect to Confluent Cloud using an API key that has topic-level access via the SASL_SSL security protocol. ++### Connect to Confluent Cloud using API key ++The ASA Kafka adapter is a librdkafka-based client, and to connect to Confluent Cloud, you need the TLS certificates that Confluent Cloud uses for server authentication. +Confluent uses TLS certificates from Let’s Encrypt, an open certificate authority (CA). ++To authenticate using the API key Confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows: ++| Setting | Value | + | | | + | Username | Key/username from the API key | + | Password | Secret/password from the API key | + | KeyVault | Name of the Azure key vault with the uploaded certificate from Let’s Encrypt | + | Certificate | Certificate downloaded from Let’s Encrypt and uploaded to the key vault (you can download the ISRG Root X1 self-signed certificate in PEM format) | -### Key vault integration +## Key vault integration > [!NOTE] > When using trust store certificates with mTLS or SASL_SSL security protocols, you must have Azure Key vault and managed identity configured for your Azure Stream Analytics job.-> -Azure Stream Analytics integrates seamlessly with Azure Key vault to access stored secrets needed for authentication and encryption when using mTLS or SASL_SSL security protocols.
Your Azure Stream Analytics job connects to Azure Key vault using managed identity to ensure a secure connection and avoid the exfiltration of secrets. +> -You'll need to use Azure CLI to upload the certificates as a secret into Key vault in PEM format. +Azure Stream Analytics integrates seamlessly with Azure Key vault to access stored secrets needed for authentication and encryption when using mTLS or SASL_SSL security protocols. Your Azure Stream Analytics job connects to your Azure Key vault using managed identity to ensure a secure connection and avoid the exfiltration of secrets. -### VNET integration -When configuring your Azure Stream Analytics job to connect to your Kafka clusters, depending on your configuration, you may have to configure your job to access your Kafka clusters, which are behind a firewall or inside a virtual network. You can visit the Azure Stream Analytics VNET documentation to learn more about configuring private endpoints to access resources inside a virtual network or behind a firewall. +Certificates are stored as secrets in the key vault and must be in PEM format. +The following command can upload the certificate as a secret to your key vault. You need "Administrator" access to your Key vault for this command to work properly. ++```azurecli-interactive +az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret> ++``` ++### Grant the Stream Analytics job permissions to access the certificate in the key vault +For your Azure Stream Analytics job to access the certificate in your key vault and read the secret for authentication using managed identity, the service principal you created when you configured managed identity for your Azure Stream Analytics job must have special permissions to the key vault. ++1. Select **Access control (IAM)**. ++1. Select **Add** > **Add role assignment** to open the **Add role assignment** page. ++1. 
Assign the role using the following configuration: ++ | Setting | Value | + | | | + | Role | Key vault secret reader | + | Assign access to | User, group, or service principal | + | Members | \<Name of your Stream Analytics job> | +++### VNET integration +When configuring your Azure Stream Analytics job to connect to your Kafka clusters, depending on your configuration, you might have to configure your job to access your Kafka clusters, which are behind a firewall or inside a virtual network. You can visit the Azure Stream Analytics VNET documentation to learn more about configuring private endpoints to access resources inside a virtual network or behind a firewall. -### Configuration -The following table lists the property names and their description for creating a Kafka output: - -| Property name | Description | -||-| -| Input/Output Alias | A friendly name used in queries to reference your input or output | -| Bootstrap server addresses | A list of host/port pairs to establish the connection to the Kafka cluster. | -| Kafka topic | A unit of your Kafka cluster you want to write events to. | -| Security Protocol | How you want to connect to your Kafka cluster. Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT or None. | -| Event Serialization format | The serialization format (JSON, CSV, Avro) of the outgoing data stream. | -| Partition key | Azure Stream Analytics assigns partitions using round partitioning. | -| Kafka event compression type | The compression type used for outgoing data streams, such as Gzip, Snappy, Lz4, Zstd, or None. | ### Limitations * When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units. The following table lists the property names and their description for creating > For direct help with using the Azure Stream Analytics Kafka adapter, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com). 
> + ## Next steps > [!div class="nextstepaction"] > [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md) |
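As an illustration of the Kafka output property table above, here is a minimal validation sketch. The helper and its key names are hypothetical, not part of Stream Analytics; the allowed values are taken from the table (security protocols, compression types, and serialization formats).

```python
# Allowed values from the Kafka output property table (illustrative).
ALLOWED_SECURITY = {"mTLS", "SASL_SSL", "SASL_PLAINTEXT", "None"}
ALLOWED_COMPRESSION = {"None", "Gzip", "Snappy", "Lz4", "Zstd"}
ALLOWED_FORMATS = {"JSON", "CSV", "Avro"}


def validate_kafka_output(config):
    """Return a list of problems with a hypothetical Kafka output config."""
    errors = []
    for required in ("alias", "bootstrap_servers", "topic"):
        if not config.get(required):
            errors.append(f"missing {required}")
    if config.get("security_protocol") not in ALLOWED_SECURITY:
        errors.append("unsupported security protocol")
    if config.get("compression", "None") not in ALLOWED_COMPRESSION:
        errors.append("unsupported compression type")
    if config.get("format") not in ALLOWED_FORMATS:
        errors.append("unsupported serialization format")
    return errors


cfg = {"alias": "kafka-out", "bootstrap_servers": "broker:9092",
       "topic": "events", "security_protocol": "SASL_SSL",
       "format": "JSON", "compression": "Gzip"}
assert validate_kafka_output(cfg) == []
```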
stream-analytics | Stream Analytics Define Kafka Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md | The following are the major use cases: Azure Stream Analytics lets you connect directly to Kafka clusters to ingest data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The Kafka Adapters are backward compatible and support all versions with the latest client release starting from version 0.10. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions. Supported compression types are None, Gzip, Snappy, LZ4, and Zstd. +### Configuration +The following table lists the property names and their description for creating a Kafka Input: ++| Property name | Description | +||-| +| Input/Output Alias | A friendly name used in queries to reference your input or output | +| Bootstrap server addresses | A list of host/port pairs to establish the connection to the Kafka cluster. | +| Kafka topic | A unit of your Kafka cluster you want to read events from. | +| Security Protocol | How you want to connect to your Kafka cluster. Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT or None. | +| Event Serialization format | The serialization format (JSON, CSV, Avro, Parquet, Protobuf) of the incoming data stream. | ++ ## Authentication and encryption You can use four types of security protocols to connect to your Kafka clusters: You can use four types of security protocols to connect to your Kafka clusters: > [!IMPORTANT]-> Confluent Cloud supports authenticating using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics does not currently support these authentication options.
+> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics does not support authentication using OAuth or SAML single sign-on (SSO). +> You can connect to Confluent Cloud using an API key that has topic-level access via the SASL_SSL security protocol. +### Connect to Confluent Cloud using API key -### Key vault integration +The ASA Kafka adapter is a librdkafka-based client, and to connect to Confluent Cloud, you need the TLS certificates that Confluent Cloud uses for server authentication. +Confluent uses TLS certificates from Let’s Encrypt, an open certificate authority (CA). ++To authenticate using the API key Confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows: ++| Setting | Value | + | | | + | Username | Key/username from the API key | + | Password | Secret/password from the API key | + | KeyVault | Name of the Azure key vault with the uploaded certificate from Let’s Encrypt | + | Certificate | Certificate downloaded from Let’s Encrypt and uploaded to the key vault (you can download the ISRG Root X1 self-signed certificate in PEM format) | + ++## Key vault integration > [!NOTE] > When using trust store certificates with mTLS or SASL_SSL security protocols, you must have Azure Key vault and managed identity configured for your Azure Stream Analytics job. > -Azure Stream Analytics integrates seamlessly with Azure Key vault to access stored secrets needed for authentication and encryption when using mTLS or SASL_SSL security protocols. Your Azure Stream Analytics job connects to Azure Key vault using managed identity to ensure a secure connection and avoid the exfiltration of secrets. +Azure Stream Analytics integrates seamlessly with Azure Key vault to access stored secrets needed for authentication and encryption when using mTLS or SASL_SSL security protocols.
Your Azure Stream Analytics job connects to your Azure Key vault using managed identity to ensure a secure connection and avoid the exfiltration of secrets. -You can store the certificates as Key vault certificates or Key vault secrets. Private keys are in PEM format. +Certificates are stored as secrets in the key vault and must be in PEM format. -### VNET integration -When configuring your Azure Stream Analytics job to connect to your Kafka clusters, depending on your configuration, you may have to configure your job to access your Kafka clusters, which are behind a firewall or inside a virtual network. You can visit the Azure Stream Analytics VNET documentation to learn more about configuring private endpoints to access resources inside a virtual network or behind a firewall. +The following command can upload the certificate as a secret to your key vault. You need "Administrator" access to your Key vault for this command to work properly. +```azurecli-interactive +az keyvault secret set --vault-name <your key vault> --name <name of the secret> --file <file path to secret> -### Configuration -The following table lists the property names and their description for creating a Kafka Input: +``` -| Property name | Description | -||-| -| Input/Output Alias | A friendly name used in queries to reference your input or output | -| Bootstrap server addresses | A list of host/port pairs to establish the connection to the Kafka cluster. | -| Kafka topic | A unit of your Kafka cluster you want to write events to. | -| Security Protocol | How you want to connect to your Kafka cluster. Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT or None. | -| Event Serialization format | The serialization format (JSON, CSV, Avro, Parquet, Protobuf) of the incoming data stream. 
| +### Grant the Stream Analytics job permissions to access the certificate in the key vault +For your Azure Stream Analytics job to access the certificate in your key vault and read the secret for authentication using managed identity, the service principal you created when you configured managed identity for your Azure Stream Analytics job must have special permissions to the key vault. ++1. Select **Access control (IAM)**. +1. Select **Add** > **Add role assignment** to open the **Add role assignment** page. ++1. Assign the role using the following configuration: ++ | Setting | Value | + | | | + | Role | Key vault secret reader | + | Assign access to | User, group, or service principal | + | Members | \<Name of your Stream Analytics job> | ++ +### VNET integration +When configuring your Azure Stream Analytics job to connect to your Kafka clusters, depending on your configuration, you might have to configure your job to access your Kafka clusters, which are behind a firewall or inside a virtual network. You can visit the Azure Stream Analytics VNET documentation to learn more about configuring private endpoints to access resources inside a virtual network or behind a firewall. ### Limitations * When configuring your Azure Stream Analytics jobs to use VNET/SWIFT, your job must be configured with at least six (6) streaming units. * When using mTLS or SASL_SSL with Azure Key vault, you must convert your Java Key Store to PEM format. * The minimum version of Kafka you can configure Azure Stream Analytics to connect to is version 0.10.+* Azure Stream Analytics does not support authentication to Confluent Cloud using OAuth or SAML single sign-on (SSO). You must use an API key via the SASL_SSL protocol. + > [!NOTE] > For direct help with using the Azure Stream Analytics Kafka adapter, please reach out to [askasa@microsoft.com](mailto:askasa@microsoft.com). |
synapse-analytics | Memory Concurrency Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/memory-concurrency-limits.md | The following table shows the maximum concurrent queries and concurrency slots f | Service Level | Maximum concurrent queries | Concurrency slots available | Slots used by staticrc10 | Slots used by staticrc20 | Slots used by staticrc30 | Slots used by staticrc40 | Slots used by staticrc50 | Slots used by staticrc60 | Slots used by staticrc70 | Slots used by staticrc80 | |:-:|:--:|::|::|:-:|:-:|:-:|:-:|:-:|:-:|:-:|-| DW100c | 4 | 4 | 1 | 2 | 4 | 4 | 4 | 4 | 4 | 4 | -| DW200c | 8 | 8 | 1 | 2 | 4 | 8 | 8 | 8 | 8 | 8 | +| DW100c | 4 | 4 | 1 | 2 | 4 | 4 | 4 | 4 | 4 | 4 | +| DW200c | 8 | 8 | 1 | 2 | 4 | 8 | 8 | 8 | 8 | 8 | | DW300c | 12 | 12 | 1 | 2 | 4 | 8 | 8 | 8 | 8 | 8 | | DW400c | 16 | 16 | 1 | 2 | 4 | 8 | 16 | 16 | 16 | 16 | | DW500c | 20 | 20 | 1 | 2 | 4 | 8 | 16 | 16 | 16 | 16 | The following table shows the maximum concurrent queries and concurrency slots f **Dynamic resource classes** -The following table shows the maximum concurrent queries and concurrency slots for each [dynamic resource class](resource-classes-for-workload-management.md). Dynamic resource classes use a 3-10-22-70 memory percentage allocation for small-medium-large-xlarge resource classes across all service levels. +The following table shows the maximum concurrent queries and concurrency slots for each [dynamic resource class](resource-classes-for-workload-management.md). Dynamic resource classes use a 3-10-22-70 memory percentage allocation for small-medium-large-xlarge resource classes across service levels DW1000c to DW30000c. For memory allocation under DW1000c, see [dynamic resource classes](resource-classes-for-workload-management.md). 
| Service Level | Maximum concurrent queries | Concurrency slots available | Slots used by smallrc | Slots used by mediumrc | Slots used by largerc | Slots used by xlargerc | |:-:|:--:|::|::|:-:|::|:-:| |
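The 3-10-22-70 split described in the dynamic-resource-class change above lends itself to a quick back-of-the-envelope calculation. The sketch below assumes only the percentages from the text; the 600 GB total is a made-up placeholder, and authoritative slot and memory figures should come from the tables themselves:

```python
# Back-of-the-envelope memory allocation for dynamic resource classes,
# using the 3-10-22-70 percentage split described above.
# NOTE: the total-memory figure passed in is a hypothetical placeholder,
# not a real service-level number.
DYNAMIC_RC_PERCENT = {"smallrc": 3, "mediumrc": 10, "largerc": 22, "xlargerc": 70}

def memory_per_class(total_memory_gb: float) -> dict:
    """Approximate memory a query in each dynamic resource class could be granted."""
    return {rc: total_memory_gb * pct / 100 for rc, pct in DYNAMIC_RC_PERCENT.items()}

alloc = memory_per_class(600)  # hypothetical 600 GB of total memory
print(alloc)  # {'smallrc': 18.0, 'mediumrc': 60.0, 'largerc': 132.0, 'xlargerc': 420.0}
```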
virtual-desktop | Troubleshoot Client Windows Basic Shared | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows-basic-shared.md | + + Title: Basic troubleshooting for the Remote Desktop client for Windows - Azure Virtual Desktop +description: Troubleshoot issues you might experience with the Remote Desktop client for Windows when connecting to Azure Virtual Desktop, Windows 365, and Dev Box. ++zone_pivot_groups: azure-virtual-desktop-windows-client-troubleshoot ++ Last updated : 10/12/2023+++# Basic troubleshooting for the Remote Desktop client for Windows ++> [!TIP] +> Select a button at the top of this article to choose which product you're connecting to and see the relevant documentation. ++This article provides some simple troubleshooting steps to try first for issues you might encounter when using the [Remote Desktop client for Windows](users/connect-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) to connect to Azure Virtual Desktop. ++This article provides some simple troubleshooting steps to try first for issues you might encounter when using the [Remote Desktop client for Windows](/windows-365/end-user-access-cloud-pc#remote-desktop) to connect to a Cloud PC in Windows 365. ++This article provides some simple troubleshooting steps to try first for issues you might encounter when using the [Remote Desktop client for Windows](../dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md) to connect to Dev Box. ++## Basic troubleshooting ++There are a few basic troubleshooting steps you can try if you're having issues connecting to your desktops or applications: ++1. Make sure you're connected to the internet. ++1. Try to connect to your desktops or applications from the Azure Virtual Desktop web client. For more information, see [Connect to Azure Virtual Desktop with the Remote Desktop web client](users/connect-web.md). ++1. 
Make sure you're using the latest version of the Remote Desktop client. By default, the client automatically updates when a new version is available. To check for updates manually, see [Update the client](users/client-features-windows.md#update-the-client). ++1. If the connection fails frequently or you notice performance issues, check the status of the connection. You can find connection information in the connection bar, by selecting the signal icon: ++ :::image type="content" source="media/troubleshoot-client-windows-basic-shared/troubleshoot-windows-client-connection-information.png" alt-text="A screenshot showing the connection bar in the Remote Desktop client for Windows."::: ++1. Check the estimated connection round trip time (RTT) from your current location to the Azure Virtual Desktop service. For more information, see [Azure Virtual Desktop Experience Estimator](https://azure.microsoft.com/products/virtual-desktop/assessment/#estimation-tool) ++There are a few basic troubleshooting steps you can try if you're having issues connecting to your Cloud PC: ++1. Make sure you're connected to the internet. ++1. Make sure your Cloud PC is running. For more information, see [User actions](/windows-365/end-user-access-cloud-pc#user-actions). ++1. Try to connect to your Cloud PC from the Windows 365 web client. For more information, see [Access a Cloud PC](/windows-365/end-user-access-cloud-pc#home-page). ++1. Make sure you're using the latest version of the Remote Desktop client. By default, the client automatically updates when a new version is available. To check for updates manually, see [Update the client](users/client-features-windows.md?context=%2Fwindows-365%2Fcontext%2Fpr-context#update-the-client). ++1. If the connection fails frequently or you notice performance issues, check the status of the connection. 
You can find connection information in the connection bar, by selecting the signal icon: ++ :::image type="content" source="media/troubleshoot-client-windows-basic-shared/troubleshoot-windows-client-connection-information.png" alt-text="A screenshot showing the connection bar in the Remote Desktop client for Windows."::: ++1. Restart your Cloud PC from the Windows 365 portal. For more information, see [User actions](/windows-365/end-user-access-cloud-pc#user-actions). ++1. If none of the previous steps resolved your issue, you can use the *Troubleshoot* tool in the Windows 365 portal to diagnose and repair some common Cloud PC connectivity issues. To learn how to use the *Troubleshoot* tool, see [User actions](/windows-365/end-user-access-cloud-pc#user-actions). ++There are a few basic troubleshooting steps you can try if you're having issues connecting to your dev box: ++1. Make sure you're connected to the internet. ++1. Make sure your dev box is running. For more information, see [Shutdown, restart or start a dev box](../dev-box/how-to-create-dev-boxes-developer-portal.md#shutdown-restart-or-start-a-dev-box). ++1. Try to connect to your dev box from the Dev Box developer portal. For more information, see [Connect to a dev box](../dev-box/quickstart-create-dev-box.md#connect-to-a-dev-box). ++1. Make sure you're using the latest version of the Remote Desktop client. By default, the client automatically updates when a new version is available. To check for updates manually, see [Update the client](users/client-features-windows.md?toc=%2Fazure%2Fdev-box%2Ftoc.json#update-the-client). ++1. If the connection fails frequently or you notice performance issues, check the status of the connection. 
You can find connection information in the connection bar, by selecting the signal icon: ++ :::image type="content" source="media/troubleshoot-client-windows-basic-shared/troubleshoot-windows-client-connection-information.png" alt-text="A screenshot showing the connection bar in the Remote Desktop client for Windows."::: ++1. Restart your dev box from the Dev Box developer portal. ++1. If none of the previous steps resolved your issue, you can use the *Troubleshoot & repair* tool in the developer portal to diagnose and repair some common dev box connectivity issues. To learn how to use the Troubleshoot & repair tool, see [Troubleshoot and resolve dev box remote desktop connectivity issues](../dev-box/how-to-troubleshoot-repair-dev-box.md). ++## Client stops responding or can't be opened ++If the client stops responding or can't be opened, you might need to reset user data. If you can open the client, you can reset user data from the **About** menu. The default settings for the client will be restored and you'll be unsubscribed from all workspaces. ++To reset user data from the client: ++1. Open the *Remote Desktop* app on your device. ++1. Select the three dots at the top right-hand corner to show the menu, then select **About**. ++1. In the section **Reset user data**, select **Reset**. To confirm you want to reset your user data, select **Continue**. ++## Issue isn't listed here ++If your issue isn't listed here, ask your Azure Virtual Desktop administrator for support, or see [Troubleshoot the Remote Desktop client for Windows when connecting to Azure Virtual Desktop](troubleshoot-client-windows.md) for further troubleshooting steps. ++If your issue isn't listed here, ask your Windows 365 administrator for support, or see [Troubleshooting for Windows 365](/windows-365/enterprise/troubleshooting) for information about how to open a support case for Windows 365. ++If your issue isn't listed here, ask your Dev Box administrator for support. |
virtual-desktop | Whats New Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md | description: Learn about recent changes to the Remote Desktop client for Windows Previously updated : 10/17/2023 Last updated : 10/24/2023 # What's new in the Remote Desktop client for Windows The following table lists the current versions available for the public and Insi | Release | Latest version | Download | ||-|-| | Public | 1.2.4677 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |-| Insider | 1.2.4677 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) | +| Insider | 1.2.4763 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) | ++## Updates for version 1.2.4763 (Insider) ++*Date published: October 24, 2023* ++Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) ++- Added a link to the troubleshooting documentation to error messages to help users resolve minor issues without needing to contact Microsoft Support. +- Improved the connection bar user interface (UI). +- Fixed an issue that caused the client to stop responding when a user tries to resize the client window during a Teams video call. +- Fixed a bug that prevented the client from loading more than 255 workspaces. 
+- Fixed an authentication issue that allowed users to choose a different account whenever the client required more interaction. +- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. ## Updates for version 1.2.4677 |
virtual-network | Create Custom Ip Address Prefix Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md | To utilize the Azure BYOIP feature, you must perform the following steps prior t ### Requirements and prefix readiness * The address range must be owned by you and registered under your name with one of the five major Regional Internet Registries:- * [American Registry for Internet Numbers (ARIN)](https://www.arin.net/) - * [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/) - * [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/) - * [Latin America and Caribbean Network Information Centre (LACNIC)](https://www.lacnic.net/) - * [African Network Information Centre (AFRINIC)](https://afrinic.net/) + * [American Registry for Internet Numbers (ARIN)](https://www.arin.net/) + * [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/) + * [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/) + * [Latin America and Caribbean Network Information Centre (LACNIC)](https://www.lacnic.net/) + * [African Network Information Centre (AFRINIC)](https://afrinic.net/) * The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers. |
virtual-network | Create Custom Ip Address Prefix Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md | To utilize the Azure BYOIP feature, you must perform the following steps prior t ### Requirements and prefix readiness * The address range must be owned by you and registered under your name with one of the five major Regional Internet Registries:- * [American Registry for Internet Numbers (ARIN)](https://www.arin.net/) - * [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/) - * [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/) - * [Latin America and Caribbean Network Information Centre (LACNIC)](https://www.lacnic.net/) - * [African Network Information Centre (AFRINIC)](https://afrinic.net/) + * [American Registry for Internet Numbers (ARIN)](https://www.arin.net/) + * [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/) + * [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/) + * [Latin America and Caribbean Network Information Centre (LACNIC)](https://www.lacnic.net/) + * [African Network Information Centre (AFRINIC)](https://afrinic.net/) * The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers. |
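Both BYOIP entries above require an IPv4 range no smaller than a /24. A minimal pre-onboarding check using Python's standard `ipaddress` module can verify this before starting the process (the helper name is illustrative):

```python
import ipaddress

def is_advertisable(prefix: str) -> bool:
    """True if the IPv4 range is a /24 or larger (prefix length <= 24),
    the minimum size generally accepted by Internet Service Providers."""
    network = ipaddress.ip_network(prefix, strict=True)  # raises on host bits set
    return network.prefixlen <= 24

print(is_advertisable("203.0.113.0/24"))   # True  (exactly a /24)
print(is_advertisable("198.51.100.0/25"))  # False (smaller than a /24)
```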
virtual-network | Network Security Group How It Works | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-group-how-it-works.md | For inbound traffic, Azure processes the rules in a network security group associ - **VM3**: Since there's no network security group associated to *Subnet2*, traffic is allowed into the subnet and processed by *NSG2*, because *NSG2* is associated to the network interface attached to *VM3*. -- **VM4**: Traffic is allowed to *VM4,* because a network security group isn't associated to *Subnet3*, or the network interface in the virtual machine. All network traffic is allowed through a subnet and network interface if they don't have a network security group associated to them.+- **VM4**: Traffic is blocked to *VM4*, because a network security group isn't associated to *Subnet3* or the network interface in the virtual machine. All network traffic is blocked through a subnet and network interface if they don't have a network security group associated to them. ## Outbound traffic |
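The per-VM outcomes in the NSG entry above follow a simple evaluation order for inbound traffic: a subnet-level NSG, if one exists, is processed first, then a NIC-level NSG. The sketch below models that order as described in this commit, including its updated claim that a VM with no NSG at either level receives no traffic; it's an illustration of the documented wording, not an Azure API:

```python
# Sketch of the inbound NSG evaluation order described in this commit:
# a subnet-level NSG (if any) is evaluated first, then a NIC-level NSG.
# Per the updated wording, a VM with no NSG at either level receives no traffic.
def inbound_allowed(subnet_nsg_allows, nic_nsg_allows):
    """Each argument is True/False if that level's NSG allows the traffic,
    or None if no NSG is associated at that level."""
    if subnet_nsg_allows is None and nic_nsg_allows is None:
        return False  # no NSG anywhere: traffic is blocked (per this commit)
    if subnet_nsg_allows is False or nic_nsg_allows is False:
        return False  # a deny at either level blocks the traffic
    return True       # every NSG that exists on the path allows it

print(inbound_allowed(None, True))  # VM3-style: only the NIC NSG decides -> True
print(inbound_allowed(None, None))  # VM4-style: no NSG at all -> False
```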
web-application-firewall | Application Gateway Crs Rulegroups Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md | If the anomaly score is 5 or greater, and the WAF is in Prevention mode, the req For example, a single *Critical* rule match is enough for the WAF to block a request when in Prevention mode, because the overall anomaly score is 5. However, one *Warning* rule match only increases the anomaly score by 3, which isn't enough by itself to block the traffic. When an anomaly rule is triggered, it shows a "Matched" action in the logs. If the anomaly score is 5 or greater, there is a separate rule triggered with either "Blocked" or "Detected" action depending on whether WAF policy is in Prevention or Detection mode. For more information, please see [Anomaly Scoring mode](ag-overview.md#anomaly-scoring-mode). -### DRS 2.1 (preview) +### DRS 2.1 DRS 2.1 rules offer better protection than earlier versions of the DRS. It includes additional rules developed by the Microsoft Threat Intelligence team and updates to signatures to reduce false positives. It also supports transformations beyond just URL decoding. The following rule groups and rules are available when using Web Application Fir # [DRS 2.1](#tab/drs21) -## <a name="drs21"></a> 2.1 rule sets (preview) +## <a name="drs21"></a> 2.1 rule sets ### <a name="general-21"></a> General |RuleId|Description| |
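The anomaly-scoring behavior described in the WAF entry above translates into a small scoring function. The Critical (5) and Warning (3) weights and the blocking threshold of 5 come from the text; the Error (4) and Notice (2) weights are assumed from typical CRS-style severity scoring and should be treated as illustrative:

```python
# Anomaly-scoring sketch based on the WAF description above.
# Critical=5 and Warning=3 come from the text; Error=4 and Notice=2 are
# assumed CRS-style severity weights.
SEVERITY_SCORE = {"Critical": 5, "Error": 4, "Warning": 3, "Notice": 2}
THRESHOLD = 5  # at 5 or greater, the request is blocked (Prevention) or detected

def waf_decision(matched_severities, prevention_mode=True):
    """Return (total anomaly score, resulting action) for a set of rule matches."""
    score = sum(SEVERITY_SCORE[s] for s in matched_severities)
    if score < THRESHOLD:
        return score, "Matched only"   # individual rules log a "Matched" action
    return score, "Blocked" if prevention_mode else "Detected"

print(waf_decision(["Critical"]))            # (5, 'Blocked')  one Critical match suffices
print(waf_decision(["Warning"]))             # (3, 'Matched only')
print(waf_decision(["Warning", "Warning"]))  # (6, 'Blocked')
print(waf_decision(["Critical"], False))     # (5, 'Detected')
```

This mirrors the text: a single Critical match reaches the threshold on its own, while one Warning match (score 3) is logged but not blocked.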