Updates from: 08/05/2023 01:18:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
Default number: *+1 (855) 330-8653*
The following table lists more numbers for different countries.
-| Country | Number |
-|:--|:-|
-| Croatia | +385 15507766 |
-| Ghana | +233 308250245 |
-| Sri Lanka | +94 117750440 |
-| Ukraine | +380 443332393 |
--
+| Country | Number(s) |
+|:--|:-|
+| Austria | +43 6703062076 |
+| Bangladesh | +880 9604606026 |
+| Croatia | +385 15507766 |
+| Ecuador | +593 964256042 |
+| Estonia | +372 6712726 |
+| France | +33 744081468 |
+| Ghana | +233 308250245 |
+| Greece | +30 2119902739 |
+| Guatemala | +502 23055056 |
+| Hong Kong | +852 25716964 |
+| India | +91 3371568300, +91 1205089400, +91 4471566601, +91 2271897557, +91 1203524400, +91 3335105700, +91 2235544120, +91 4435279600|
+| Jordan | +962 797639442 |
+| Kenya | +254 709605276 |
+| Netherlands | +31 202490048 |
+| Nigeria | +234 7080627886 |
+| Pakistan | +92 4232618686 |
+| Poland | +48 699740036 |
+| Saudi Arabia | +966 115122726 |
+| South Africa | +27 872405062 |
+| Spain | +34 913305144 |
+| Sri Lanka | +94 117750440 |
+| Sweden | +46 701924176 |
+| Taiwan | +886 277515260 |
+| Turkey | +90 8505404893 |
+| Ukraine | +380 443332393 |
+| United Arab Emirates | +971 44015046 |
+| Vietnam | +84 2039990161 |
> [!NOTE] > When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-).
active-directory Howto Mfaserver Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy.md
Previously updated : 10/10/2022 Last updated : 08/04/2023
When a master Azure MFA Server goes offline, the subordinate servers can still p
### Prepare your environment
-Make sure the server that you're using for Azure Multi-Factor Authentication meets the following requirements:
+Make sure the server that you're using for Azure Multi-Factor Authentication meets the following requirements.
| Azure Multi-Factor Authentication Server Requirements | Description |
|: |: |
| Hardware |<li>200 MB of hard disk space</li><li>x32 or x64 capable processor</li><li>1 GB or greater RAM</li> |
-| Software |<li>Windows Server 2019</li><li>Windows Server 2016</li><li>Windows Server 2012 R2</li><li>Windows Server 2012</li><li>Windows Server 2008/R2 (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Windows 10</li><li>Windows 8.1, all editions</li><li>Windows 8, all editions</li><li>Windows 7, all editions (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Microsoft .NET 4.0 Framework</li><li>IIS 7.0 or greater if installing the user portal or web service SDK</li> |
+| Software |<li>Windows Server 2019<sup>1</sup></li><li>Windows Server 2016</li><li>Windows Server 2012 R2</li><li>Windows Server 2012</li><li>Windows Server 2008/R2 (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Windows 10</li><li>Windows 8.1, all editions</li><li>Windows 8, all editions</li><li>Windows 7, all editions (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Microsoft .NET 4.0 Framework</li><li>IIS 7.0 or greater if installing the user portal or web service SDK</li> |
| Permissions | Domain Administrator or Enterprise Administrator account to register with Active Directory |
+<sup>1</sup>If Azure MFA Server fails to activate on an Azure VM that runs Windows Server 2019, try using another version of Windows Server.
+ ### Azure MFA Server Components There are three web components that make up Azure MFA Server:
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
When a Conditional Access policy targets the Microsoft Admin Portals cloud app,
Other Microsoft admin portals will be added over time.
+> [!IMPORTANT]
+> Microsoft Admin Portals (preview) isn't currently supported in Government clouds.
+ > [!NOTE] > The Microsoft Admin Portals app applies to interactive sign-ins to the listed admin portals only. Sign-ins to the underlying resources or services like Microsoft Graph or Azure Resource Manager APIs are not covered by this application. Those resources are protected by the [Microsoft Azure Management](#microsoft-azure-management) app. This enables customers to move along the MFA adoption journey for admins without impacting automation that relies on APIs and PowerShell. When you are ready, Microsoft recommends using a [policy requiring administrators perform MFA always](howto-conditional-access-policy-admin-mfa.md) for comprehensive protection.
active-directory How To Policy Mfa Admin Portals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-policy-mfa-admin-portals.md
Microsoft recommends securing access to any Microsoft admin portals like Microsoft Entra, Microsoft 365, Exchange, and Azure. Using the [Microsoft Admin Portals (Preview)](concept-conditional-access-cloud-apps.md#microsoft-admin-portals-preview) app organizations can control interactive access to Microsoft admin portals.
+> [!IMPORTANT]
+> Microsoft Admin Portals (preview) isn't currently supported in Government clouds.
+ ## User exclusions [!INCLUDE [active-directory-policy-exclusions](../../../includes/active-directory-policy-exclude-user.md)]
active-directory Console App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-app-quickstart.md
- Title: "Quickstart: Call Microsoft Graph from a console application"
-description: In this quickstart, you learn how a console application can get an access token and call an API protected by Microsoft identity platform, using the app's own identity
-------- Previously updated : 12/06/2022--
-zone_pivot_groups: console-app-quickstart
-#Customer intent: As an app developer, I want to learn how my console app can get an access token and call an API that's protected by the Microsoft identity platform by using the client credentials flow.
--
-# Quickstart: Acquire a token and call the Microsoft Graph API by using a console app's identity
----
active-directory Console Quickstart Portal Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/console-quickstart-portal-nodejs.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Node.js console app that calls an API](console-app-quickstart.md?pivots=devlang-nodejs)
+> > [Quickstart: Acquire a token and call Microsoft Graph from a Node.js console app](quickstart-console-app-nodejs-acquire-token.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Daemon Quickstart Portal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-java.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Java daemon that calls a protected API](console-app-quickstart.md?pivots=devlang-java)
+> > [Quickstart: Acquire a token and call Microsoft Graph from a Java daemon app](quickstart-daemon-app-java-acquire-token.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Daemon Quickstart Portal Netcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-netcore.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: .NET Core console that calls an API](console-app-quickstart.md?pivots=devlang-dotnet-core)
+> > [Quickstart: Acquire a token and call Microsoft Graph in a .NET Core console app](quickstart-console-app-netcore-acquire-token.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Daemon Quickstart Portal Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/daemon-quickstart-portal-python.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Python console app that calls an API](console-app-quickstart.md?pivots=devlang-python)
+> > [Quickstart: Acquire a token and call Microsoft Graph from a Python daemon app](quickstart-daemon-app-python-acquire-token.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Desktop App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-app-quickstart.md
- Title: "Quickstart: Sign in users and call Microsoft Graph in a desktop app"
-description: In this quickstart, learn how a desktop application can get an access token and call an API protected by the Microsoft identity platform.
-------- Previously updated : 01/27/2023--
-zone_pivot_groups: desktop-app-quickstart
-#Customer intent: As an application developer, I want to learn how my desktop application can get an access token and call an API that's protected by the Microsoft identity platform.
--
-# Quickstart: Acquire a token and call Microsoft Graph API from a desktop application
---
active-directory Desktop Quickstart Portal Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-nodejs-desktop.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Node.js Electron desktop app with user sign-in](desktop-app-quickstart.md?pivots=devlang-nodejs-electron)
+> > [Quickstart: Sign in users and call Microsoft Graph from a Node.js desktop app](quickstart-desktop-app-nodejs-electron-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Desktop Quickstart Portal Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-uwp.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Universal Windows Platform (UWP) desktop app with user sign-in](desktop-app-quickstart.md?pivots=devlang-uwp)
+> > [Quickstart: Sign in users and call Microsoft Graph in a Universal Windows Platform app](quickstart-desktop-app-uwp-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Desktop Quickstart Portal Wpf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-wpf.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Windows Presentation Foundation (WPF) desktop app that signs in users and calls a web API](desktop-app-quickstart.md?pivots=devlang-windows-desktop)
+> > [Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app](quickstart-desktop-app-wpf-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Jwt Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/jwt-claims-customization.md
As another example, consider when Britta Simon tries to sign in using the follow
As a final example, consider what happens if Britta has no `user.othermail` configured or it's empty. The claim falls back to `user.extensionattribute1` ignoring the condition entry in both cases.
+## Security considerations
+Applications that receive tokens rely on claim values that are authoritatively issued by Azure AD and can't be tampered with. When you modify the token contents through claims customization, these assumptions may no longer be correct. Applications must explicitly acknowledge that tokens have been modified by the creator of the customization to protect themselves from customizations created by malicious actors. Acknowledge the modification in one of the following ways:
+
+- [Configure a custom signing key](#configure-a-custom-signing-key)
+- [Update the application manifest to accept mapped claims](#update-the-application-manifest)
+
+Without this, Azure AD returns an [AADSTS50146 error code](reference-aadsts-error-codes.md#aadsts-error-codes).
+
+## Configure a custom signing key
+For multi-tenant apps, a custom signing key should be used. Don't set `acceptMappedClaims` in the app manifest. When you set up an app in the Azure portal, you get an app registration object and a service principal in your tenant. That app uses the Azure global signing key, which can't be used for customizing claims in tokens. To get custom claims in tokens, create a custom signing key from a certificate and add it to the service principal. For testing purposes, you can use a self-signed certificate. After you configure the custom signing key, your application code needs to validate the token signing key.
+
+Add the following information to the service principal:
+
+- Private key (as a [key credential](/graph/api/resources/keycredential?view=graph-rest-1.0&preserve-view=true))
+- Password (as a [password credential](/graph/api/resources/passwordcredential?view=graph-rest-1.0&preserve-view=true))
+- Public key (as a [key credential](/graph/api/resources/keycredential?view=graph-rest-1.0&preserve-view=true))
+
+Extract the base-64 encoded private and public keys from the PFX file export of your certificate. Make sure that the `keyId` for the `keyCredential` used for "Sign" matches the `keyId` of the `passwordCredential`. You can generate the `customKeyIdentifier` by getting the hash of the certificate's thumbprint.
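The `customKeyIdentifier` derivation described above (SHA-256 hash of the thumbprint string, base-64 encoded) can be sketched as follows. This is a minimal Python illustration with a made-up thumbprint value, not part of the documented flow; the article's own PowerShell script performs the same steps.

```python
import base64
import hashlib

def custom_key_identifier(thumbprint: str) -> str:
    # SHA-256 hash of the thumbprint string's UTF-8 bytes, base-64 encoded.
    digest = hashlib.sha256(thumbprint.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# Hypothetical thumbprint value, for illustration only:
print(custom_key_identifier("A1B2C3D4E5F60718293A4B5C6D7E8F9011223344"))
```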
+
+## Request
+The following example shows the format of the HTTP PATCH request to add a custom signing key to a service principal. The `key` value in the `keyCredentials` property is shortened for readability. The value is base-64 encoded. For the private key, the property `usage` is "Sign". For the public key, the property `usage` is "Verify".
+
+```http
+PATCH https://graph.microsoft.com/v1.0/servicePrincipals/f47a6776-bca7-4f2e-bc6c-eec59d058e3e
+
+Content-Type: application/json
+Authorization: Bearer {token}
+
+{
+ "keyCredentials":[
+ {
+ "customKeyIdentifier": "lY85bR8r6yWTW6jnciNEONwlVhDyiQjdVLgPDnkI5mA=",
+ "endDateTime": "2021-04-22T22:10:13Z",
+ "keyId": "4c266507-3e74-4b91-aeba-18a25b450f6e",
+ "startDateTime": "2020-04-22T21:50:13Z",
+ "type": "X509CertAndPassword",
+ "usage": "Sign",
+ "key":"MIIKIAIBAz.....HBgUrDgMCERE20nuTptI9MEFCh2Ih2jaaLZBZGeZBRFVNXeZmAAgIH0A==",
+ "displayName": "CN=contoso"
+ },
+ {
+ "customKeyIdentifier": "lY85bR8r6yWTW6jnciNEONwlVhDyiQjdVLgPDnkI5mA=",
+ "endDateTime": "2021-04-22T22:10:13Z",
+ "keyId": "e35a7d11-fef0-49ad-9f3e-aacbe0a42c42",
+ "startDateTime": "2020-04-22T21:50:13Z",
+ "type": "AsymmetricX509Cert",
+ "usage": "Verify",
+ "key": "MIIDJzCCAg+gAw......CTxQvJ/zN3bafeesMSueR83hlCSyg==",
+ "displayName": "CN=contoso"
+ }
+
+ ],
+ "passwordCredentials": [
+ {
+ "customKeyIdentifier": "lY85bR8r6yWTW6jnciNEONwlVhDyiQjdVLgPDnkI5mA=",
+ "keyId": "4c266507-3e74-4b91-aeba-18a25b450f6e",
+ "endDateTime": "2022-01-27T19:40:33Z",
+ "startDateTime": "2020-04-20T19:40:33Z",
+ "secretText": "mypassword"
+ }
+ ]
+}
+```
+
+## Configure a custom signing key using PowerShell
+Use PowerShell to [instantiate an MSAL Public Client Application](msal-net-initializing-client-applications.md#initializing-a-public-client-application-from-code) and use the [Authorization Code Grant](v2-oauth2-auth-code-flow.md) flow to obtain a delegated permission access token for Microsoft Graph. Use the access token to call Microsoft Graph and configure a custom signing key for the service principal. After configuring the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key).
+
+To run this script you need:
+
+- The object ID of your application's service principal, found in the Overview blade of your application's entry in Enterprise Applications in the Azure portal.
+- An app registration to sign in a user and get an access token to call Microsoft Graph. Get the application (client) ID of this app in the Overview blade of the application's entry in App registrations in the Azure portal. The app registration should have the following configuration:
+ - A redirect URI of "http://localhost" listed in the **Mobile and desktop applications** platform configuration.
+ - In **API permissions**, Microsoft Graph delegated permissions **Application.ReadWrite.All** and **User.Read** (make sure you grant Admin consent to these permissions).
+- A user who signs in to get the Microsoft Graph access token. The user should have one of the following Azure AD administrative roles (required to update the service principal):
+ - Cloud Application Administrator
+ - Application Administrator
+ - Global Administrator
+- A certificate to configure as a custom signing key for your application. You can either create a self-signed certificate or obtain one from your trusted certificate authority. The following certificate components are used in the script:
+ - Public key (typically a .cer file)
+ - Private key in PKCS#12 format (a .pfx file)
+ - Password for the private key (.pfx file)
+
+> [!IMPORTANT]
+> The private key must be in PKCS#12 format because Azure AD doesn't support other format types. Using the wrong format can result in the error "Invalid certificate: Key value is invalid certificate" when you use Microsoft Graph to PATCH the service principal with a `keyCredentials` object that contains the certificate information.
+
+```powershell
+$fqdn="fourthcoffeetest.onmicrosoft.com" # this is used for the 'issued to' and 'issued by' field of the certificate
+$pwd="mypassword" # password for exporting the certificate private key
+$location="C:\\temp" # path to folder where both the pfx and cer file will be written to
+
+# Create a self-signed cert
+$cert = New-SelfSignedCertificate -certstorelocation cert:\currentuser\my -DnsName $fqdn
+$pwdSecure = ConvertTo-SecureString -String $pwd -Force -AsPlainText
+$path = 'cert:\currentuser\my\' + $cert.Thumbprint
+$cerFile = $location + "\\" + $fqdn + ".cer"
+$pfxFile = $location + "\\" + $fqdn + ".pfx"
+
+# Export the public and private keys
+Export-PfxCertificate -cert $path -FilePath $pfxFile -Password $pwdSecure
+Export-Certificate -cert $path -FilePath $cerFile
+
+$ClientID = "<app-id>"
+$loginURL = "https://login.microsoftonline.com"
+$tenantdomain = "fourthcoffeetest.onmicrosoft.com"
+$redirectURL = "http://localhost" # this reply URL is needed for PowerShell Core
+[string[]] $Scopes = "https://graph.microsoft.com/.default"
+$pfxpath = $pfxFile # path to pfx file
+$cerpath = $cerFile # path to cer file
+$SPOID = "<service-principal-id>"
+$graphuri = "https://graph.microsoft.com/v1.0/serviceprincipals/$SPOID"
+$password = $pwd # password for the pfx file
+
+
+# choose the correct folder name for MSAL based on PowerShell version 5.1 (.Net) or PowerShell Core (.Net Core)
+
+if ($PSVersionTable.PSVersion.Major -gt 5)
+ {
+ $core = $true
+ $foldername = "netcoreapp2.1"
+ }
+else
+ {
+ $core = $false
+ $foldername = "net45"
+ }
+
+# Load the MSAL/microsoft.identity/client assembly -- needed once per PowerShell session
+[System.Reflection.Assembly]::LoadFrom((Get-ChildItem C:/Users/<username>/.nuget/packages/microsoft.identity.client/4.32.1/lib/$foldername/Microsoft.Identity.Client.dll).fullname) | out-null
+
+$global:app = $null
+
+$ClientApplicationBuilder = [Microsoft.Identity.Client.PublicClientApplicationBuilder]::Create($ClientID)
+[void]$ClientApplicationBuilder.WithAuthority($("$loginURL/$tenantdomain"))
+[void]$ClientApplicationBuilder.WithRedirectUri($redirectURL)
+
+$global:app = $ClientApplicationBuilder.Build()
+
+Function Get-GraphAccessTokenFromMSAL {
+ [Microsoft.Identity.Client.AuthenticationResult] $authResult = $null
+ $AcquireTokenParameters = $global:app.AcquireTokenInteractive($Scopes)
+ [IntPtr] $ParentWindow = [System.Diagnostics.Process]::GetCurrentProcess().MainWindowHandle
+ if ($ParentWindow)
+ {
+ [void]$AcquireTokenParameters.WithParentActivityOrWindow($ParentWindow)
+ }
+ try {
+ $authResult = $AcquireTokenParameters.ExecuteAsync().GetAwaiter().GetResult()
+ }
+ catch {
+ $ErrorMessage = $_.Exception.Message
+ Write-Host $ErrorMessage
+ }
+
+ return $authResult
+}
+
+$myvar = Get-GraphAccessTokenFromMSAL
+if ($myvar)
+{
+ $GraphAccessToken = $myvar.AccessToken
+ Write-Host "Access Token: " $myvar.AccessToken
+ #$GraphAccessToken = "eyJ0eXAiOiJKV1QiL ... iPxstltKQ"
+
+
+ # this is for PowerShell Core
+ $Secure_String_Pwd = ConvertTo-SecureString $password -AsPlainText -Force
+
+ # reading certificate files and creating Certificate Object
+ if ($core)
+ {
+ $pfx_cert = get-content $pfxpath -AsByteStream -Raw
+ $cer_cert = get-content $cerpath -AsByteStream -Raw
+ $cert = Get-PfxCertificate -FilePath $pfxpath -Password $Secure_String_Pwd
+ }
+ else
+ {
+ $pfx_cert = get-content $pfxpath -Encoding Byte
+ $cer_cert = get-content $cerpath -Encoding Byte
+ # Write-Host "Enter password for the pfx file..."
+ # calling Get-PfxCertificate in PowerShell 5.1 prompts for password
+ # $cert = Get-PfxCertificate -FilePath $pfxpath
+ $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($pfxpath, $password)
+ }
+
+ # base 64 encode the private key and public key
+ $base64pfx = [System.Convert]::ToBase64String($pfx_cert)
+ $base64cer = [System.Convert]::ToBase64String($cer_cert)
+
+ # getting id for the keyCredential object
+ $guid1 = New-Guid
+ $guid2 = New-Guid
+
+ # get the custom key identifier from the certificate thumbprint:
+ $hasher = [System.Security.Cryptography.HashAlgorithm]::Create('sha256')
+ $hash = $hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($cert.Thumbprint))
+ $customKeyIdentifier = [System.Convert]::ToBase64String($hash)
+
+ # get end date and start date for our keycredentials
+ $endDateTime = ($cert.NotAfter).ToUniversalTime().ToString( "yyyy-MM-ddTHH:mm:ssZ" )
+ $startDateTime = ($cert.NotBefore).ToUniversalTime().ToString( "yyyy-MM-ddTHH:mm:ssZ" )
+
+ # building our json payload
+ $object = [ordered]@{
+ keyCredentials = @(
+ [ordered]@{
+ customKeyIdentifier = $customKeyIdentifier
+ endDateTime = $endDateTime
+ keyId = $guid1
+ startDateTime = $startDateTime
+ type = "X509CertAndPassword"
+ usage = "Sign"
+ key = $base64pfx
+ displayName = "CN=fourthcoffeetest"
+ },
+ [ordered]@{
+ customKeyIdentifier = $customKeyIdentifier
+ endDateTime = $endDateTime
+ keyId = $guid2
+ startDateTime = $startDateTime
+ type = "AsymmetricX509Cert"
+ usage = "Verify"
+ key = $base64cer
+ displayName = "CN=fourthcoffeetest"
+ }
+ )
+ passwordCredentials = @(
+ [ordered]@{
+ customKeyIdentifier = $customKeyIdentifier
+ keyId = $guid1
+ endDateTime = $endDateTime
+ startDateTime = $startDateTime
+ secretText = $password
+ }
+ )
+ }
+
+ $json = $object | ConvertTo-Json -Depth 99
+ Write-Host "JSON Payload:"
+ Write-Output $json
+
+ # Request Header
+ $Header = @{}
+ $Header.Add("Authorization","Bearer $($GraphAccessToken)")
+ $Header.Add("Content-Type","application/json")
+
+ try
+ {
+ Invoke-RestMethod -Uri $graphuri -Method "PATCH" -Headers $Header -Body $json
+ }
+ catch
+ {
+ # Dig into the exception to get the Response details.
+ # Note that value__ is not a typo.
+ Write-Host "StatusCode:" $_.Exception.Response.StatusCode.value__
+ Write-Host "StatusDescription:" $_.Exception.Response.StatusDescription
+ }
+
+ Write-Host "Complete Request"
+}
+else
+{
+ Write-Host "Failed to get Access Token"
+}
+```
+
+## Validate token signing key
+Apps that have claims mapping enabled must validate their token signing keys by appending `appid={client_id}` to their [OpenID Connect metadata requests](v2-protocols-oidc.md#fetch-the-openid-configuration-document). The following example shows the format of the OpenID Connect metadata document URL to use:
+
+```
+https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration?appid={client-id}
+```
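As a rough illustration of building that URL, here's a standard-library-only Python sketch; the tenant and client-id values passed in the example are placeholders:

```python
from urllib.parse import urlencode

def metadata_url(tenant: str, client_id: str) -> str:
    # Append the appid query parameter so the metadata response points
    # at the app-specific signing keys.
    base = f"https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration"
    return f"{base}?{urlencode({'appid': client_id})}"

# Placeholder values, for illustration only:
print(metadata_url("contoso.onmicrosoft.com", "00001111-aaaa-2222-bbbb-3333cccc4444"))
```

The `jwks_uri` in the returned document then reflects the app-specific signing key to validate against.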
+
+## Update the application manifest
+For single tenant apps, you can set the `acceptMappedClaims` property to `true` in the [application manifest](reference-app-manifest.md). As documented on the [apiApplication resource type](/graph/api/resources/apiapplication?view=graph-rest-1.0&preserve-view=true#properties), this allows an application to use claims mapping without specifying a custom signing key.
+
+>[!WARNING]
+>Don't set the `acceptMappedClaims` property to `true` for multi-tenant apps, because doing so can allow malicious actors to create claims-mapping policies for your app.
+
+The requested token audience must use a verified domain name of your Azure AD tenant. For example, set the **Application ID URI** (represented by `identifierUris` in the application manifest) to `https://contoso.com/my-api`, or simply use the default tenant name: `https://contoso.onmicrosoft.com/my-api`.
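For illustration, the relevant manifest fragment might look like the following; the URI value is a placeholder you'd replace with your own verified domain:

```json
{
  "acceptMappedClaims": true,
  "identifierUris": [
    "https://contoso.onmicrosoft.com/my-api"
  ]
}
```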
+
+If you're not using a verified domain, Azure AD returns an `AADSTS501461` error code with the message "_AcceptMappedClaims is only supported for a token audience matching the application GUID or an audience within the tenant's verified domains. Either change the resource identifier or use an application-specific signing key._"
+ ## Advanced claims options Configure advanced claims options for OIDC applications to expose the same claim as SAML tokens, and for applications that intend to use the same claim for both SAML 2.0 and OIDC response tokens.
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
Title: "Quickstart: Add sign in with Microsoft to an Android app"
+ Title: "Quickstart: Sign in users and call Microsoft Graph from an Android app"
description: In this quickstart, learn how Android applications can call an API that requires access tokens issued by the Microsoft identity platform.
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. We're currently working on a fix, but for now, please use the link below - it should take you to the right article: >
-> > [Quickstart: Android app with user sign-in](mobile-app-quickstart.md?pivots=devlang-android)
+> > [Quickstart: Sign in users and call Microsoft Graph from an Android app](quickstart-mobile-app-android-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: iOS or macOS app that signs in users and calls a web API](mobile-app-quickstart.md?pivots=devlang-ios)
+> > [Quickstart: Sign in users and call Microsoft Graph from an iOS or macOS app](quickstart-mobile-app-ios-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Mobile App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart.md
- Title: "Quickstart: Add sign in with Microsoft to a mobile app"
-description: In this quickstart, learn how a mobile app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
-------- Previously updated : 01/14/2022--
-zone_pivot_groups: mobile-app-quickstart
-#Customer intent: As an application developer, I want to learn how to sign in users and call Microsoft Graph from my mobile application.
--
-# Quickstart: Sign in users and call the Microsoft Graph API from a mobile application
---
active-directory Quickstart Console App Netcore Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-console-app-netcore-acquire-token.md
+
+ Title: "Quickstart: Acquire a token and call Microsoft Graph in a .NET Core console app"
+description: In this quickstart, you learn how a .NET Core sample app can use the client credentials flow to get a token and call Microsoft Graph.
+++++++ Last updated : 03/13/2023+++
+#Customer intent: As an application developer, I want to learn how my .NET Core app can get an access token and call an API that's protected by the Microsoft identity platform by using the client credentials flow.
++
+# Quickstart: Acquire a token and call Microsoft Graph in a .NET Core console app
+
+The following quickstart uses a code sample to demonstrate how a .NET Core console application can get an access token to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. It also demonstrates how a job or a Windows service can run with an application identity, instead of a user's identity. The sample console application in this quickstart is also a daemon application, so it's a confidential client application.
+
+The following diagram shows how the sample app works:
+
+![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
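Under the hood, the client credentials flow shown above is a single POST of form-encoded parameters to the tenant's token endpoint. As a rough, standard-library-only sketch of that request body (the identifier values are placeholders; the quickstart sample itself uses MSAL.NET, which handles this for you):

```python
from urllib.parse import urlencode

def client_credentials_body(client_id: str, client_secret: str) -> str:
    # Form-encode the OAuth 2.0 client credentials grant parameters sent to
    # https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
    return urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    })

# Placeholder values, for illustration only:
print(client_credentials_body("<client-id>", "<client-secret>"))
```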
+
+## Prerequisites
+
+This quickstart requires [.NET Core 6.0 SDK](https://dotnet.microsoft.com/download).
+
+## Register and download the app
+
+The application can be built using either an automatic or manual configuration.
+
+### Automatic configuration
+
+To register and automatically configure the app and then download the code sample, follow these steps:
+
+1. Go to the [Azure portal page for app registration](https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/DotNetCoreDaemonQuickstartPage/sourceType/docs).
+1. Enter a name for your application and select **Register**.
+1. Follow the instructions to download and automatically configure your new application in one click.
+
+### Manual configuration
+
+To manually configure your application and code sample, use the following procedures.
+
+#### Step 1: Register your application
++
+To register the application and add the registration information to the solution manually, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. For **Name**, enter a name for the application. For example, enter **Daemon-console**. Users of the app will see this name, and it can be changed later.
+1. Select **Register** to create the application.
+1. Under **Manage**, select **Certificates & secrets**.
+1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
+1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
+1. Select **Application permissions**.
+1. Under the **User** node, select **User.Read.All**, and then select **Add permissions**.
+
+#### Step 2: Download your Visual Studio project
+
+[Download the Visual Studio project](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/archive/master.zip)
+
+This project can be run in either Visual Studio or Visual Studio for Mac.
++
+#### Step 3: Configure your Visual Studio project
+
+1. Extract the *.zip* file to a local folder that's close to the root of the disk to avoid errors caused by path length limitations on Windows. For example, extract to *C:\Azure-Samples*.
+
+1. Open the *1-Call-MSGraph\daemon-console.sln* solution in Visual Studio (optional).
+1. In *appsettings.json*, replace the values of `TenantId`, `ClientId`, and `ClientSecret`. The values for the application (client) ID and the directory (tenant) ID can be found on the app's **Overview** page in the Azure portal.
+
+ ```json
+ "TenantId": "Enter_the_Tenant_Id_Here",
+ "ClientId": "Enter_the_Application_Id_Here",
+ "ClientSecret": "Enter_the_Client_Secret_Here"
+ ```
+
+    In the code:
+    - Replace `Enter_the_Application_Id_Here` with the application (client) ID for the registered application.
+    - Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
+    - Replace `Enter_the_Client_Secret_Here` with the client secret that you created in step 1. To generate a new secret, go to the **Certificates & secrets** page.
+
+#### Step 4: Admin consent
+
+Running the application now results in an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error occurs because any app-only permission requires a global administrator of the directory to grant consent to the application. Select one of the following options, depending on your role.
+
+##### Global tenant administrator
+
+For a global tenant administrator, go to **Enterprise applications** in the Azure portal. Select the app registration, and select **Permissions** from the **Security** section of the left pane. Then select the large button labeled **Grant admin consent for {Tenant Name}** (where **{Tenant Name}** is the name of the directory).
+
+##### Standard user
+
+For a standard user of your tenant, ask a global administrator to grant admin consent to the application. To do this, provide the following URL to the administrator:
+
+```url
+https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+```
+
+In the URL:
+
+- Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
+- `Enter_the_Application_Id_Here` is the application (client) ID for the registered application.
+
+The error `AADSTS50011: No reply address is registered for the application` might be displayed after you grant consent to the app by using the preceding URL. This error occurs because the application doesn't have a redirect URI, and it can be safely ignored.
+
+#### Step 5: Run the application
+
+In Visual Studio, press **F5** to run the application. Otherwise, run the application via command prompt, console, or terminal:
+
+```dotnetcli
+cd {ProjectFolder}\1-Call-MSGraph\daemon-console
+dotnet run
+```
+
+In this command:
+
+- `{ProjectFolder}` is the folder where you extracted the *.zip* file. An example is `C:\Azure-Samples\active-directory-dotnetcore-daemon-v2`.
+
+As a result, the application displays the number of users in Azure Active Directory.
+
+This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to the project files, for security reasons, we recommend that you use a certificate instead of a client secret before considering the application ready for production. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/#variation-daemon-application-using-client-credentials-with-certificates).
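With Microsoft.Identity.Web, switching to a certificate is mainly a configuration change: replace the `ClientSecret` entry in the `ClientCredentials` array of *appsettings.json* with a certificate source. The property names in the following sketch follow the Microsoft.Identity.Web credential description format, but verify them against the linked instructions for your package version. For example, to load a certificate from the current user's certificate store by distinguished name:

```json
"ClientCredentials": [
  {
    "SourceType": "StoreWithDistinguishedName",
    "CertificateStorePath": "CurrentUser/My",
    "CertificateDistinguishedName": "CN=DaemonConsoleCert"
  }
]
```

The certificate name `CN=DaemonConsoleCert` is a hypothetical example; use the subject name of a certificate you've uploaded to the app registration.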
+
+## More information
+
+This section provides an overview of the code required to acquire a token and call Microsoft Graph. The overview can be useful to understand how the code works, what the main arguments are, and how to add this code to an existing .NET Core console application.
+
+### Microsoft.Identity.Web.GraphServiceClient
+
+Microsoft Identity Web (in the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) package) is the library that's used to request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as the [OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials). Because the daemon app in this quickstart calls Microsoft Graph, you install the [Microsoft.Identity.Web.GraphServiceClient](https://www.nuget.org/packages/Microsoft.Identity.Web.GraphServiceClient) package, which automatically handles authenticated requests to Microsoft Graph (and itself references Microsoft.Identity.Web.TokenAcquisition).
+
+Install Microsoft.Identity.Web.GraphServiceClient by running the following .NET CLI command:
+
+```dotnetcli
+dotnet add package Microsoft.Identity.Web.GraphServiceClient
+```
+
+### Application initialization
+
+Add the reference for Microsoft.Identity.Web by adding the following code:
+
+```csharp
+using Microsoft.Extensions.Configuration;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Graph;
+using Microsoft.Identity.Abstractions;
+using Microsoft.Identity.Web;
+```
+
+Then, initialize the app with the following code:
+
+```csharp
+// Get the Token acquirer factory instance. By default it reads an appsettings.json
+// file if it exists in the same folder as the app (make sure that the
+// "Copy to Output Directory" property of the appsettings.json file is "Copy if newer").
+TokenAcquirerFactory tokenAcquirerFactory = TokenAcquirerFactory.GetDefaultInstance();
+
+// Configure the application options to be read from the configuration
+// and add the services you need (Graph, token cache)
+IServiceCollection services = tokenAcquirerFactory.Services;
+services.AddMicrosoftGraph();
+// By default, you get an in-memory token cache.
+// For more token cache serialization options, see https://aka.ms/msal-net-token-cache-serialization
+
+// Resolve the dependency injection.
+var serviceProvider = tokenAcquirerFactory.Build();
+```
+
+This code uses the configuration defined in the *appsettings.json* file:
+
+```json
+{
+ "AzureAd": {
+ "Instance": "https://login.microsoftonline.com/",
+ "TenantId": "[Enter here the tenantID or domain name for your Azure AD tenant]",
+ "ClientId": "[Enter here the ClientId for your application]",
+ "ClientCredentials": [
+ {
+ "SourceType": "ClientSecret",
+ "ClientSecret": "[Enter here a client secret for your application]"
+ }
+ ]
+ }
+}
+```
+
+ | Element | Description |
+ |||
+ | `ClientSecret` | The client secret created for the application in the Azure portal. |
+ | `ClientId` | The application (client) ID for the application registered in the Azure portal. This value can be found on the app's **Overview** page in the Azure portal. |
+ | `Instance` | (Optional) The security token service (STS) instance endpoint for the app to authenticate against. It's usually `https://login.microsoftonline.com/` for the public cloud.|
+ | `TenantId` | Name of the tenant or the tenant ID.|
+
+For more information, see the [reference documentation for `TokenAcquirerFactory`](/dotnet/api/microsoft.identity.web.tokenacquirerfactory).
+
+### Calling Microsoft Graph
+
+To call Microsoft Graph by using the app's own identity, retrieve the `GraphServiceClient` from the service provider and make the request with the `WithAppOnly()` option:
+
+```csharp
+GraphServiceClient graphServiceClient = serviceProvider.GetRequiredService<GraphServiceClient>();
+var users = await graphServiceClient.Users
+ .GetAsync(r => r.Options.WithAppOnly());
+```
++
+## Next steps
+
+To learn more about daemon applications, see the scenario overview:
+
+> [!div class="nextstepaction"]
+> [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory Quickstart Console App Nodejs Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-console-app-nodejs-acquire-token.md
+
+ Title: "Quickstart: Acquire a token and call Microsoft Graph from a Node.js console app"
+description: In this quickstart, you download and run a code sample that shows how a Node.js console application can get an access token and call an API protected by a Microsoft identity platform endpoint, using the app's own identity
++++++ Last updated : 09/09/2022+
+#Customer intent: As an application developer, I want to learn how my Node.js app can get an access token and call an API that is protected by a Microsoft identity platform endpoint using client credentials flow.
+++
+# Quickstart: Acquire a token and call Microsoft Graph from a Node.js console app
+
+In this quickstart, you download and run a code sample that demonstrates how a Node.js console application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+
+This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [client credentials grant](v2-oauth2-client-creds-grant-flow.md).
+
+## Prerequisites
+
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
++
+## Register and download the sample application
+
+Follow the steps below to get started.
+
+#### Step 1: Register the application
++
+To register your application and add the app's registration information to your solution manually, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application, for example `msal-node-cli`. Users of your app might see this name, and you can change it later.
+1. Select **Register**.
+1. Under **Manage**, select **Certificates & secrets**.
+1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
+1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
+1. Select **Application permissions**.
+1. Under the **User** node, select **User.Read.All**, and then select **Add permissions**.
+
+#### Step 2: Download the Node.js sample project
+
+[Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-console/archive/main.zip)
+
+#### Step 3: Configure the Node.js sample project
+
+1. Extract the zip file to a local folder close to the root of the disk, for example, *C:/Azure-Samples*.
+1. Edit *.env* and replace the values of `TENANT_ID`, `CLIENT_ID`, and `CLIENT_SECRET` as shown in the following snippet:
+
+ ```
+ "TENANT_ID": "Enter_the_Tenant_Id_Here",
+ "CLIENT_ID": "Enter_the_Application_Id_Here",
+ "CLIENT_SECRET": "Enter_the_Client_Secret_Here"
+ ```
+    Where:
+    - `Enter_the_Application_Id_Here` - the **Application (client) ID** of the application you registered earlier. Find this ID on the app registration's **Overview** pane in the Azure portal.
+    - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant ID** or **Tenant name** (for example, `contoso.microsoft.com`). Find these values on the app registration's **Overview** pane in the Azure portal.
+    - `Enter_the_Client_Secret_Here` - replace this value with the client secret you created earlier. To generate a new key, use **Certificates & secrets** in the app registration settings in the Azure portal.
+
+ Using a plaintext secret in the source code poses an increased security risk for your application. Although the sample in this quickstart uses a plaintext client secret, it's only for simplicity. We recommend using [certificate credentials](active-directory-certificate-credentials.md) instead of client secrets in your confidential client applications, especially those apps you intend to deploy to production.
+
+3. Edit *.env* and replace the Azure AD and Microsoft Graph endpoints with the following values:
+ - For the Azure AD endpoint, replace `Enter_the_Cloud_Instance_Id_Here` with `https://login.microsoftonline.com`.
+ - For the Microsoft Graph endpoint, replace `Enter_the_Graph_Endpoint_Here` with `https://graph.microsoft.com/`.
+
+#### Step 4: Admin consent
+
+If you try to run the application at this point, you'll receive *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires **admin consent**: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:
+
+##### Global tenant administrator
+
+If you're a global tenant administrator, go to the **API permissions** page of the app registration in the Azure portal, and select **Grant admin consent for {Tenant Name}** (where {Tenant Name} is the name of your directory).
+
+##### Standard user
+
+If you're a standard user of your tenant, then you need to ask a global administrator to grant **admin consent** for your application. To do this, give the following URL to your administrator:
+
+```url
+https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+```
+
+ Where:
+ * `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
+ * `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
+
+#### Step 5: Run the application
+
+Locate the sample's root folder (where `package.json` resides) in a command prompt or console. You'll need to install the dependencies your sample app requires before running it for the first time:
+
+```console
+npm install
+```
+
+Then, run the application via command prompt or console:
+
+```console
+node . --op getUsers
+```
+
+The console output should display a JSON fragment representing a list of users in your Azure AD directory.
+
+## About the code
+
+This section discusses some important aspects of the sample application.
+
+### MSAL Node
+
+[MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. As described earlier, this quickstart requests tokens by using application permissions (the application's own identity) instead of delegated permissions. The authentication flow used in this case is known as the [OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL Node with daemon apps, see [Scenario: Daemon application](scenario-daemon-overview.md).
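Under the hood, the client credentials flow is a single POST to the tenant's token endpoint. The following sketch is illustrative only; MSAL Node builds and sends this request for you, adding token caching and error handling. It shows the form body involved, reusing the same placeholder values as the *.env* file:

```javascript
// Illustrative sketch of the token request sent for the client credentials
// grant. Don't hand-roll this in production code; use MSAL Node instead.
const tokenEndpoint =
  'https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/oauth2/v2.0/token';

// URL-encoded form body of the POST request.
const body = new URLSearchParams({
  client_id: 'Enter_the_Application_Id_Here',
  client_secret: 'Enter_the_Client_Secret_Here',
  scope: 'https://graph.microsoft.com/.default',
  grant_type: 'client_credentials',
});

// A successful response contains an access_token for the requested scope.
```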
+
+You can install MSAL Node by running the following npm command:
+
+```console
+npm install @azure/msal-node --save
+```
+
+### MSAL initialization
+
+You can add the reference for MSAL by adding the following code:
+
+```javascript
+const msal = require('@azure/msal-node');
+```
+
+Then, initialize MSAL using the following code:
+
+```javascript
+const msalConfig = {
+ auth: {
+ clientId: "Enter_the_Application_Id_Here",
+ authority: "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
+ clientSecret: "Enter_the_Client_Secret_Here",
+ }
+};
+const cca = new msal.ConfidentialClientApplication(msalConfig);
+```
+
+| Element | Description |
+|||
+| `clientId` | The **Application (client) ID** for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+| `authority` | The STS endpoint for the app to authenticate against. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID.|
+| `clientSecret` | The client secret created for the application in the Azure portal. |
+
+For more information, see the [reference documentation for `ConfidentialClientApplication`](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md).
+
+### Requesting tokens
+
+To request a token by using the app's identity, use the `acquireTokenByClientCredential` method:
+
+```javascript
+const tokenRequest = {
+ scopes: [ 'https://graph.microsoft.com/.default' ],
+};
+
+const tokenResponse = await cca.acquireTokenByClientCredential(tokenRequest);
+```
+
+| Element | Description |
+|||
+| `tokenRequest` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in the Azure portal's **App registrations**. |
+| `tokenResponse` | The response contains an access token for the scopes requested. |
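For instance, if the daemon called a custom web API instead of Microsoft Graph, the token request would use that API's Application ID URI. The URI below is a hypothetical example:

```javascript
// Hypothetical token request for a custom web API protected by the
// Microsoft identity platform; the Application ID URI comes from the
// API's "Expose an API" section in the Azure portal.
const customApiTokenRequest = {
  scopes: ['api://00001111-aaaa-2222-bbbb-3333cccc4444/.default'],
};
```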
++
+## Next steps
+
+To learn more about daemon/console app development with MSAL Node, see the tutorial:
+
+> [!div class="nextstepaction"]
+> [Daemon application that calls web APIs](tutorial-v2-nodejs-console.md)
active-directory Quickstart Daemon App Java Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-daemon-app-java-acquire-token.md
+
+ Title: "Quickstart: Acquire a token and call Microsoft Graph from a Java daemon app"
+description: In this quickstart, you learn how a Java app can get an access token and call an API protected by Microsoft identity platform endpoint, using the app's own identity
+++++++ Last updated : 01/10/2022++
+#Customer intent: As an application developer, I want to learn how my Java app can get an access token and call an API that's protected by Microsoft identity platform endpoint using client credentials flow.
++
+# Quickstart: Acquire a token and call Microsoft Graph from a Java daemon app
+
+In this quickstart, you download and run a code sample that demonstrates how a Java application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
++
+![Diagram showing how the sample app generated by this quickstart works.](media/quickstart-v2-java-daemon/java-console-daemon.svg)
+
+## Prerequisites
+
+To run this sample, you need:
+
+- [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or greater
+- [Maven](https://maven.apache.org/)
++
+## Register and download your quickstart app
+
+You have two options to build your quickstart application: express (Option 1) or manual (Option 2).
+
+### Option 1: Register and auto configure your app and then download your code sample
+
+1. Go to the [Azure portal - App registrations](https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaDaemonQuickstartPage/sourceType/docs) quickstart experience.
+1. Enter a name for your application and select **Register**.
+1. Follow the instructions to download and automatically configure your new application with just one click.
+
+### Option 2: Register and manually configure your application and code sample
+
+#### Step 1: Register your application
++
+To register your application and add the app's registration information to your solution manually, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later.
+1. Select **Register**.
+1. Under **Manage**, select **Certificates & secrets**.
+1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
+1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
+1. Select **Application permissions**.
+1. Under the **User** node, select **User.Read.All**, and then select **Add permissions**.
+
+#### Step 2: Download the Java project
+[Download the Java daemon project](https://github.com/Azure-Samples/ms-identity-java-daemon/archive/master.zip)
+
+#### Step 3: Configure the Java project
+
+1. Extract the zip file to a local folder close to the root of the disk, for example, *C:\Azure-Samples*.
+1. Navigate to the subfolder **msal-client-credential-secret**.
+1. Edit *src\main\resources\application.properties* and replace the values of `AUTHORITY`, `CLIENT_ID`, and `SECRET` as shown in the following snippet:
+
+ ```
+ AUTHORITY=https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/
+ CLIENT_ID=Enter_the_Application_Id_Here
+ SECRET=Enter_the_Client_Secret_Here
+ ```
+    Where:
+    - `Enter_the_Application_Id_Here` - the **Application (client) ID** for the application you registered.
+    - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant ID** or **Tenant name** (for example, `contoso.microsoft.com`).
+    - `Enter_the_Client_Secret_Here` - replace this value with the client secret created in step 1.
+
+>[!TIP]
+>To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal. To generate a new key, go to **Certificates & secrets** page.
+
+#### Step 4: Admin consent
+
+If you try to run the application at this point, you'll receive *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires Admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:
+
+##### Global tenant administrator
++
+If you're a global tenant administrator, go to the **API permissions** page in **App registrations** in the Azure portal and select **Grant admin consent for {Tenant Name}** (where {Tenant Name} is the name of your directory).
+
+##### Standard user
+
+If you're a standard user of your tenant, then you need to ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
+
+```url
+https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+```
+
+ Where:
+ * `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
+ * `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
++
+#### Step 5: Run the application
+
+You can test the sample directly by running the main method of ClientCredentialGrant.java from your IDE.
+
+From your shell or command line:
+
+```console
+mvn clean compile assembly:single
+```
+
+This command generates a *msal-client-credential-secret-1.0.0.jar* file in the *target* directory. Run it by using your Java executable:
+
+```console
+java -jar msal-client-credential-secret-1.0.0.jar
+```
+
+After running, the application should display the list of users in the configured tenant.
+
+> [!IMPORTANT]
+> This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons, we recommend that you use a certificate instead of a client secret before considering the application ready for production. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-java-daemon/tree/master/msal-client-credential-certificate) in the **msal-client-credential-certificate** folder of the same GitHub repository.
+
+## More information
+
+### MSAL Java
+
+[MSAL Java](https://github.com/AzureAD/microsoft-authentication-library-for-java) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. As described, this quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow used in this case is known as the *[OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md)*. For more information on how to use MSAL Java with daemon apps, see [this article](scenario-daemon-overview.md).
+
+Add MSAL4J to your application by using Maven or Gradle to manage your dependencies. To do so, make the following changes to the application's *pom.xml* (Maven) or *build.gradle* (Gradle) file.
+
+In *pom.xml*:
+
+```xml
+<dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>msal4j</artifactId>
+ <version>1.0.0</version>
+</dependency>
+```
+
+In *build.gradle*:
+
+```gradle
+compile group: 'com.microsoft.azure', name: 'msal4j', version: '1.0.0'
+```
+
+### MSAL initialization
+
+Add a reference to MSAL for Java by adding the following code to the top of the file where you'll be using MSAL4J:
+
+```Java
+import com.microsoft.aad.msal4j.*;
+```
+
+Then, initialize MSAL using the following code:
+
+```Java
+IClientCredential credential = ClientCredentialFactory.createFromSecret(CLIENT_SECRET);
+
+ConfidentialClientApplication cca =
+ ConfidentialClientApplication
+ .builder(CLIENT_ID, credential)
+ .authority(AUTHORITY)
+ .build();
+```
+
+| Element | Description |
+|||
+| `CLIENT_SECRET` | The client secret created for the application in the Azure portal. |
+| `CLIENT_ID` | The **Application (client) ID** for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+| `AUTHORITY` | The STS endpoint for the app to authenticate against. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID.|
+
+### Requesting tokens
+
+To request a token by using the app's identity, use the `acquireToken` method:
+
+```Java
+IAuthenticationResult result;
+ try {
+ SilentParameters silentParameters =
+ SilentParameters
+ .builder(SCOPE)
+ .build();
+
+ // try to acquire token silently. This call will fail since the token cache does not
+ // have a token for the application you are requesting an access token for
+ result = cca.acquireTokenSilently(silentParameters).join();
+ } catch (Exception ex) {
+ if (ex.getCause() instanceof MsalException) {
+
+ ClientCredentialParameters parameters =
+ ClientCredentialParameters
+ .builder(SCOPE)
+ .build();
+
+ // Try to acquire a token. If successful, you should see
+ // the token information printed out to console
+ result = cca.acquireToken(parameters).join();
+ } else {
+ // Handle other exceptions accordingly
+ throw ex;
+ }
+ }
+ return result;
+```
+
+| Element | Description |
+|||
+| `SCOPE` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
++
+## Next steps
+
+To learn more about daemon applications, see the scenario landing page.
+
+> [!div class="nextstepaction"]
+> [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory Quickstart Daemon App Python Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-daemon-app-python-acquire-token.md
+
+ Title: "Quickstart: Acquire a token and call Microsoft Graph from a Python daemon app"
+description: In this quickstart, you learn how a Python process can get an access token and call an API protected by Microsoft identity platform, using the app's own identity
+++++++ Last updated : 03/28/2023+++
+#Customer intent: As an application developer, I want to learn how my Python app can get an access token and call an API that's protected by the Microsoft identity platform using client credentials flow.
++
+# Quickstart: Acquire a token and call Microsoft Graph from a Python daemon app
+
+In this quickstart, you download and run a code sample that demonstrates how a Python application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+
+![Diagram showing how the sample app generated by this quickstart works.](media/quickstart-v2-python-daemon/python-console-daemon.svg)
+
+## Prerequisites
+
+To run this sample, you need:
+
+- [Python 3+](https://www.python.org/downloads/release/python-364/)
+- [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
++
+## Register and download your quickstart app
+
+#### Step 1: Register your application
++
+To register your application and add the app's registration information to your solution manually, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later.
+1. Select **Register**.
+1. Under **Manage**, select **Certificates & secrets**.
+1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
+1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
+1. Select **Application permissions**.
+1. Under the **User** node, select **User.Read.All**, then select **Add permissions**.
+
+#### Step 2: Download the Python project
+
+[Download the Python daemon project](https://github.com/Azure-Samples/ms-identity-python-daemon/archive/master.zip)
+
+#### Step 3: Configure the Python project
+
+1. Extract the zip file to a local folder close to the root of the disk, for example, **C:\Azure-Samples**.
+1. Navigate to the subfolder **1-Call-MsGraph-WithSecret**.
+1. Edit **parameters.json** and replace the values of the fields `authority`, `client_id`, and `secret` with the following snippet:
+
+ ```json
+ "authority": "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
+ "client_id": "Enter_the_Application_Id_Here",
+ "secret": "Enter_the_Client_Secret_Here"
+ ```
+ Where:
+ - `Enter_the_Application_Id_Here` - the **Application (client) ID** of the application you registered.
+ - `Enter_the_Tenant_Id_Here` - the **Tenant ID** or **Tenant name** (for example, contoso.microsoft.com).
+ - `Enter_the_Client_Secret_Here` - the client secret you created in step 1.
+
+> [!TIP]
+> To find the values of the **Application (client) ID** and **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal. To generate a new key, go to the **Certificates & secrets** page.
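Before running the sample, you can sanity-check the edited *parameters.json*. The helper below is an illustrative sketch, not part of the sample; it assumes the file name and keys shown in the snippet above.

```python
import json

def load_params(path):
    """Load the sample's parameters.json and reject leftover placeholder values."""
    with open(path) as f:
        config = json.load(f)
    for key in ("authority", "client_id", "secret"):
        value = config.get(key, "")
        if not value:
            raise ValueError(f"parameters.json is missing '{key}'")
        if "Enter_the_" in value:
            raise ValueError(f"parameters.json still has a placeholder for '{key}'")
    return config
```

A quick check like this turns a confusing authentication error at run time into an immediate, readable failure.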
++
+#### Step 4: Admin consent
+
+If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:
+
+##### Global tenant administrator
+
+If you're a global tenant administrator, go to the **API permissions** page in **App registrations** in the Azure portal and select **Grant admin consent for {Tenant Name}** (where {Tenant Name} is the name of your directory).
++
+##### Standard user
+
+If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
+
+```url
+https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+```
+
+Where:
+ * `Enter_the_Tenant_Id_Here` - the **Tenant ID** or **Tenant name** (for example, contoso.microsoft.com).
+ * `Enter_the_Application_Id_Here` - the **Application (client) ID** of the application you registered previously.
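The consent URL can also be assembled programmatically, which is handy when generating it for several app registrations. This tiny helper is hypothetical, not part of the sample:

```python
def admin_consent_url(tenant, client_id):
    """Build the admin consent URL for an app registration."""
    return (
        "https://login.microsoftonline.com/"
        f"{tenant}/adminconsent?client_id={client_id}"
    )
```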
++
+#### Step 5: Run the application
+
+You'll need to install the dependencies of this sample once.
+
+```console
+pip install -r requirements.txt
+```
+
+Then, run the application via command prompt or console:
+
+```console
+python confidential_client_secret_sample.py parameters.json
+```
+
+The console output should display a JSON fragment representing a list of users in your Azure AD directory.
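Under the hood, the sample sends the access token as a bearer token in the `Authorization` header of an HTTPS request to Microsoft Graph. The sketch below shows the shape of that call using only the standard library (the sample itself uses the `requests` package); `graph_request` and `call_graph` are hypothetical helpers, not code from the sample.

```python
import json
import urllib.request

def graph_request(endpoint, access_token):
    """Build a request for a Graph endpoint with the bearer token attached."""
    return urllib.request.Request(
        endpoint,
        headers={"Authorization": "Bearer " + access_token},
    )

def call_graph(endpoint, access_token):
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(graph_request(endpoint, access_token)) as resp:
        return json.load(resp)

# With a valid token in hand:
# users = call_graph("https://graph.microsoft.com/v1.0/users", result["access_token"])
```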
+
+> [!IMPORTANT]
+> This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before considering the application production-ready. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-python-daemon/blob/master/2-Call-MsGraph-WithCertificate/README.md) in the same GitHub repository for this sample, in the second folder, **2-Call-MsGraph-WithCertificate**.
+
+## More information
+
+### MSAL Python
+
+[MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python) is the library used to sign in users and request tokens used to access an API protected by the Microsoft identity platform. As described, this quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow used in this case is known as the *[OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md)*. For more information on how to use MSAL Python with daemon apps, see [this article](scenario-daemon-overview.md).
+
+You can install MSAL Python by running the following pip command.
+
+```console
+pip install msal
+```
+
+### MSAL initialization
+
+You can add the reference for MSAL by adding the following code:
+
+```Python
+import msal
+```
+
+Then, initialize MSAL using the following code:
+
+```Python
+app = msal.ConfidentialClientApplication(
+ config["client_id"], authority=config["authority"],
+ client_credential=config["secret"])
+```
+
+| Where: |Description |
+|||
+| `config["secret"]` | Is the client secret created for the application in Azure portal. |
+| `config["client_id"]` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
+| `config["authority"]` | The security token service (STS) endpoint for the user to authenticate against. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where {tenant} is the name of your tenant or your tenant ID.|
+
+For more information, see the [reference documentation for `ConfidentialClientApplication`](https://msal-python.readthedocs.io/en/latest/#confidentialclientapplication).
+
+### Requesting tokens
+
+To request a token using the app's identity, use the `acquire_token_for_client` method:
+
+```Python
+import logging
+
+result = app.acquire_token_silent(config["scope"], account=None)
+
+if not result:
+    logging.info("No suitable token exists in cache. Let's get a new one from AAD.")
+    result = app.acquire_token_for_client(scopes=config["scope"])
+```
+
+|Where:| Description |
+|||
+| `config["scope"]` | Contains the scopes requested. For confidential clients, this should be in a format similar to `{Application ID URI}/.default`, indicating that the scopes requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
+
+For more information, see the [reference documentation for `acquire_token_for_client`](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client).
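On success, the dictionary returned by `acquire_token_for_client` contains an `access_token` key; on failure it contains `error` and `error_description` keys instead. A small, illustrative helper (not part of the sample) for branching on the result:

```python
def summarize_result(result):
    """Return a compact summary of an MSAL token-acquisition result dict."""
    if "access_token" in result:
        return {"ok": True, "expires_in": result.get("expires_in")}
    return {
        "ok": False,
        "error": result.get("error"),
        "description": result.get("error_description"),
    }
```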
++
+## Next steps
+
+To learn more about daemon applications, see the scenario landing page.
+
+> [!div class="nextstepaction"]
+> [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory Quickstart Desktop App Nodejs Electron Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-desktop-app-nodejs-electron-sign-in.md
+
+ Title: "Quickstart: Sign in users and call Microsoft Graph from a Node.js desktop app"
+description: In this quickstart, you learn how a Node.js Electron desktop application can sign in users and get an access token to call an API protected by a Microsoft identity platform endpoint
+ Last updated : 01/14/2022
+#Customer intent: As an application developer, I want to learn how my Node.js Electron desktop application can get an access token and call an API that's protected by a Microsoft identity platform endpoint.
++
+# Quickstart: Sign in users and call Microsoft Graph from a Node.js desktop app
+
+In this quickstart, you download and run a code sample that demonstrates how an Electron desktop application can sign in users and acquire access tokens to call the Microsoft Graph API.
+
+This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [authorization code flow with PKCE](v2-oauth2-auth-code-flow.md).
+
+## Prerequisites
+
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
++
+## Register and download the sample application
+
+Follow the steps below to get started.
+
+#### Step 1: Register the application
++
+To register your application and add the app's registration information to your solution manually, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application, for example `msal-node-desktop`. Users of your app might see this name, and you can change it later.
+1. Select **Register** to create the application.
+1. Under **Manage**, select **Authentication**.
+1. Select **Add a platform** > **Mobile and desktop applications**.
+1. In the **Redirect URIs** section, enter `http://localhost`.
+1. Select **Configure**.
+
+#### Step 2: Download the Electron sample project
++
+[Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-desktop/archive/main.zip)
+
+#### Step 3: Configure the Electron sample project
+
+Extract the project, open the *ms-identity-javascript-nodejs-desktop-main* folder, and then open the *authConfig.js* file. Replace the values as follows:
+
+| Variable | Description | Example(s) |
+|--|--||
+| `Enter_the_Cloud_Instance_Id_Here` | The Azure cloud instance in which your application is registered | `https://login.microsoftonline.com/` (include the trailing forward-slash)|
+| `Enter_the_Tenant_Id_Here` | Tenant ID or Primary domain | `contoso.microsoft.com` or `cbe899ec-5f5c-4efe-b7a0-599505d3d54f` |
+| `Enter_the_Application_Id_Here` | Client ID of the application you registered | `fa29b4c9-7675-4b61-8a0a-bf7b2b4fda91` |
+| `Enter_the_Redirect_Uri_Here` | Redirect URI of the application you registered | `msalfa29b4c9-7675-4b61-8a0a-bf7b2b4fda91://auth` |
+| `Enter_the_Graph_Endpoint_Here` | The Microsoft Graph API cloud instance that your app will call | `https://graph.microsoft.com/` (include the trailing forward-slash)|
+
+Your file should look similar to the following:
+
+ ```javascript
+ const AAD_ENDPOINT_HOST = "https://login.microsoftonline.com/"; // include the trailing slash
+
+ const msalConfig = {
+ auth: {
+ clientId: "fa29b4c9-7675-4b61-8a0a-bf7b2b4fda91",
+        authority: `${AAD_ENDPOINT_HOST}cbe899ec-5f5c-4efe-b7a0-599505d3d54f`, // AAD_ENDPOINT_HOST already ends with a slash
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback(loglevel, message, containsPii) {
+ console.log(message);
+ },
+ piiLoggingEnabled: false,
+ logLevel: LogLevel.Verbose,
+ }
+ }
+ }
+
+ const GRAPH_ENDPOINT_HOST = "https://graph.microsoft.com/"; // include the trailing slash
+
+ const protectedResources = {
+ graphMe: {
+ endpoint: `${GRAPH_ENDPOINT_HOST}v1.0/me`,
+ scopes: ["User.Read"],
+ }
+ };
+
+ module.exports = {
+ msalConfig: msalConfig,
+ protectedResources: protectedResources,
+ };
+
+ ```
+
+#### Step 4: Run the application
+
+1. You'll need to install the dependencies of this sample once:
+
+ ```console
+ cd ms-identity-javascript-nodejs-desktop-main
+ npm install
+ ```
+
+1. Then, run the application via command prompt or console:
+
+ ```console
+ npm start
+ ```
+
+1. Select **Sign in** to start the sign-in process.
+
+ The first time you sign in, you're prompted to provide your consent to allow the application to sign you in and access your profile. After you're signed in successfully, you'll be redirected back to the application.
+
+## More information
+
+### How the sample works
+
+When a user selects the **Sign in** button for the first time, the `acquireTokenInteractive` method of MSAL Node is called. This method redirects the user to sign in with the *Microsoft identity platform endpoint*, obtains an **authorization code**, and then exchanges it for an access token.
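PKCE protects the authorization code by sending a one-way hash (the code challenge) of a random secret (the code verifier) with the authorization request, then proving possession of the verifier when redeeming the code. MSAL Node generates and verifies these values for you; the Python sketch below merely illustrates the S256 transform from RFC 7636 and is not part of the sample.

```python
import base64
import hashlib
import secrets

def make_code_verifier():
    """Generate a random, URL-safe PKCE code verifier (43 characters)."""
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")

def code_challenge(verifier):
    """Derive the S256 code challenge from a verifier (RFC 7636, section 4.2)."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```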
+
+### MSAL Node
+
+[MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. For more information on how to use MSAL Node with desktop apps, see [this article](scenario-desktop-overview.md).
+
+You can install MSAL Node by running the following npm command.
+
+```console
+npm install @azure/msal-node --save
+```
+## Next steps
+
+To learn more about Electron desktop app development with MSAL Node, see the tutorial:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Sign in users and call the Microsoft Graph API in an Electron desktop app](tutorial-v2-nodejs-desktop.md)
active-directory Quickstart Desktop App Uwp Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-desktop-app-uwp-sign-in.md
+
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a Universal Windows Platform app"
+description: In this quickstart, learn how a Universal Windows Platform (UWP) application can get an access token and call an API protected by Microsoft identity platform.
+ Last updated : 05/19/2022
+#Customer intent: As an application developer, I want to learn how my Universal Windows Platform (UWP) application can get an access token and call an API that's protected by the Microsoft identity platform.
++
+# Quickstart: Sign in users and call Microsoft Graph in a Universal Windows Platform app
+
+In this quickstart, you download and run a code sample that demonstrates how a Universal Windows Platform (UWP) application can sign in users and get an access token to call the Microsoft Graph API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [Visual Studio](https://visualstudio.microsoft.com/vs/)
+
+## Register and download your quickstart app
+
+You have two options to start your quickstart application:
+* [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-your-code-sample)
+* [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
+
+### Option 1: Register and auto configure your app and then download your code sample
+
+1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/UwpQuickstartPage/sourceType/docs) quickstart experience.
+1. Enter a name for your application and select **Register**.
+1. Follow the instructions to download and automatically configure your new application.
+
+### Option 2: Register and manually configure your application and code sample
+
+#### Step 1: Register your application
++
+To register your application and add the app's registration information to your solution, follow these steps:
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application, for example `UWP-App-calling-MsGraph`. Users of your app might see this name, and you can change it later.
+1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (for example, Skype, Xbox, Outlook.com)**.
+1. Select **Register** to create the application, and then record the **Application (client) ID** for use in a later step.
+1. Under **Manage**, select **Authentication**.
+1. Select **Add a platform** > **Mobile and desktop applications**.
+1. Under **Redirect URIs**, select `https://login.microsoftonline.com/common/oauth2/nativeclient`.
+1. Select **Configure**.
+
+#### Step 2: Download the project
+
+[Download the UWP sample application](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip)
++
+#### Step 3: Configure the project
+
+1. Extract the .zip archive to a local folder close to the root of your drive. For example, into **C:\Azure-Samples**.
+1. Open the project in Visual Studio. Install the **Universal Windows Platform development** workload and any individual SDK components if prompted.
+1. In *MainPage.Xaml.cs*, change the value of the `ClientId` variable to the **Application (Client) ID** of the application you registered earlier.
+
+ ```csharp
+ private const string ClientId = "Enter_the_Application_Id_here";
+ ```
+
+ You can find the **Application (client) ID** on the app's **Overview** pane in the Azure portal (**Azure Active Directory** > **App registrations** > *{Your app registration}*).
+1. Create and then select a new self-signed test certificate for the package:
+ 1. In the **Solution Explorer**, double-click the *Package.appxmanifest* file.
+ 1. Select **Packaging** > **Choose Certificate...** > **Create...**.
+ 1. Enter a password and then select **OK**. A certificate called *Native_UWP_V2_TemporaryKey.pfx* is created.
+ 1. Select **OK** to dismiss the **Choose a certificate** dialog, and then verify that you see *Native_UWP_V2_TemporaryKey.pfx* in Solution Explorer.
+ 1. In the **Solution Explorer**, right-click the **Native_UWP_V2** project and select **Properties**.
+ 1. Select **Signing**, and then select the .pfx you created in the **Choose a strong name key file** drop-down.
+
+#### Step 4: Run the application
+
+To run the sample application on your local machine:
+
+1. In the Visual Studio toolbar, choose the platform that matches your machine, typically **x64** or **x86** (not ARM). The target device should change from *Device* to *Local Machine*.
+1. Select **Debug** > **Start Without Debugging**.
+
+ If you're prompted to do so, you might first need to enable **Developer Mode**, and then **Start Without Debugging** again to launch the app.
+
+When the app's window appears, you can select the **Call Microsoft Graph API** button, enter your credentials, and consent to the permissions requested by the application. If successful, the application displays some token information and data obtained from the call to the Microsoft Graph API.
+
+## How the sample works
+
+![Diagram showing how the sample app generated by this quickstart works.](media/quickstart-v2-uwp/uwp-intro.svg)
+
+### MSAL.NET
+
+MSAL ([Microsoft.Identity.Client](/dotnet/api/microsoft.identity.client?)) is the library used to sign in users and request security tokens. The security tokens are used to access an API protected by the Microsoft identity platform. You can install MSAL by running the following command in Visual Studio's *Package Manager Console*:
+
+```powershell
+Install-Package Microsoft.Identity.Client
+```
+
+### MSAL initialization
+
+You can add the reference for MSAL by adding the following code:
+
+```csharp
+using Microsoft.Identity.Client;
+```
+
+Then, MSAL is initialized using the following code:
+
+```csharp
+public static IPublicClientApplication PublicClientApp;
+PublicClientApp = PublicClientApplicationBuilder.Create(ClientId)
+ .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
+ .Build();
+```
+
+The value of `ClientId` is the **Application (client) ID** of the app you registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal.
+
+### Requesting tokens
+
+MSAL has two methods for acquiring tokens in a UWP app: [`AcquireTokenInteractive`](/dotnet/api/microsoft.identity.client.acquiretokeninteractiveparameterbuilder?) and [`AcquireTokenSilent`](/dotnet/api/microsoft.identity.client.acquiretokensilentparameterbuilder).
+
+#### Get a user token interactively
+
+Some situations require forcing users to interact with the Microsoft identity platform through a pop-up window to either validate their credentials or to give consent. Some examples include:
+
+- The first time users sign in to the application
+- When users may need to reenter their credentials because the password has expired
+- When your application is requesting access to a resource that the user needs to consent to
+- When two-factor authentication is required
+
+```csharp
+authResult = await PublicClientApp.AcquireTokenInteractive(scopes)
+ .ExecuteAsync();
+```
+
+The `scopes` parameter contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs.
+
+#### Get a user token silently
+
+Use the `AcquireTokenSilent` method to obtain tokens to access protected resources after the initial `AcquireTokenInteractive` call. You don't want to require the user to validate their credentials every time they need to access a resource. Most of the time you want token acquisition and renewal to happen without any user interaction.
+
+```csharp
+var accounts = await PublicClientApp.GetAccountsAsync();
+var firstAccount = accounts.FirstOrDefault();
+authResult = await PublicClientApp.AcquireTokenSilent(scopes, firstAccount)
+ .ExecuteAsync();
+```
+
+* `scopes` contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs.
+* `firstAccount` specifies the first user account in the cache (MSAL supports multiple users in a single app).
++
+## Next steps
+
+Try out the Windows desktop tutorial for a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart.
+
+> [!div class="nextstepaction"]
+> [UWP - Call Graph API tutorial](tutorial-v2-windows-uwp.md)
active-directory Quickstart Desktop App Wpf Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-desktop-app-wpf-sign-in.md
+
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app"
+description: Learn how a Windows Presentation Foundation (WPF) application can get an access token and call an API protected by the Microsoft identity platform.
+ Last updated : 09/09/2022
+#Customer intent: As an application developer, I want to learn how my Windows Presentation Foundation (WPF) application can get an access token and call an API that's protected by the Microsoft identity platform.
++
+# Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app
+
+In this quickstart, you download and run a code sample that demonstrates how a Windows Presentation Foundation (WPF) application can sign in users and get an access token to call the Microsoft Graph API. The desktop app you build uses the authorization code flow paired with the Proof Key for Code Exchange (PKCE) standard.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
++
+## Prerequisites
+
+* [Visual Studio](https://visualstudio.microsoft.com/vs/) with the [Universal Windows Platform development](/windows/uwp/get-started/get-set-up) workload installed
+
+## Register and download your quickstart app
+You have two options to start your quickstart application:
+* [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-your-code-sample)
+* [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
+
+### Option 1: Register and auto configure your app and then download your code sample
+
+1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/WinDesktopQuickstartPage/sourceType/docs) quickstart experience.
+1. Enter a name for your application and select **Register**.
+1. Follow the instructions to download and automatically configure your new application with just one click.
+
+### Option 2: Register and manually configure your application and code sample
+
+#### Step 1: Register your application
++
+To register your application and add the app's registration information to your solution manually, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application, for example `Win-App-calling-MsGraph`. Users of your app might see this name, and you can change it later.
+1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (for example, Skype, Xbox, Outlook.com)**.
+1. Select **Register** to create the application.
+1. Under **Manage**, select **Authentication**.
+1. Select **Add a platform** > **Mobile and desktop applications**.
+1. In the **Redirect URIs** section, select `https://login.microsoftonline.com/common/oauth2/nativeclient`. In **Custom redirect URIs**, add `ms-appx-web://microsoft.aad.brokerplugin/{client_id}`, where `{client_id}` is the application (client) ID of your application (the same GUID that appears in the `msal{client_id}://auth` checkbox).
+1. Select **Configure**.
+
+#### Step 2: Download the project
+
+[Download the WPF sample application](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2/archive/msal3x.zip)
++
+#### Step 3: Configure the project
+1. Extract the zip file to a local folder close to the root of the disk, for example, **C:\Azure-Samples**.
+1. Open the project in Visual Studio.
+1. Edit **App.Xaml.cs** and replace the values of the fields `ClientId` and `Tenant` with the following code:
+
+ ```csharp
+ private static string ClientId = "Enter_the_Application_Id_here";
+ private static string Tenant = "Enter_the_Tenant_Info_Here";
+ ```
+
+Where:
+- `Enter_the_Application_Id_here` - the **Application (client) ID** of the application you registered.
+
+  To find the value of **Application (client) ID**, go to the app's **Overview** page in the Azure portal.
+- `Enter_the_Tenant_Info_Here` - one of the following options:
+  - If your application supports **Accounts in this organizational directory**, replace this value with the **Tenant ID** or **Tenant name** (for example, contoso.microsoft.com).
+  - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+  - If your application supports **Accounts in any organizational directory and personal Microsoft accounts**, replace this value with `common`.
+
+ To find the values of **Directory (tenant) ID** and **Supported account types**, go to the app's **Overview** page in the Azure portal.
+
+#### Step 4: Run the application
+
+To build and run the sample application in Visual Studio, select **Debug** > **Start Debugging**, or press F5. Your application's **MainWindow** is displayed.
+
+When the app's main window appears, select the **Call Microsoft Graph API** button. You'll be prompted to sign in with your Azure Active Directory account (work or school account) or Microsoft account (live.com, outlook.com) credentials.
+
+If you're running the application for the first time, you'll be prompted to provide consent to allow the application to access your user profile and sign you in. After consenting to the requested permissions, the application displays that you've successfully logged in. You should see some basic token information and user data obtained from the call to the Microsoft Graph API.
+
+## More information
+
+### How the sample works
+![Diagram showing how the sample app generated by this quickstart works.](media/quickstart-v2-windows-desktop/windesktop-intro.svg)
+
+### MSAL.NET
+MSAL ([Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. You can install MSAL by running the following command in Visual Studio's **Package Manager Console**:
+
+```powershell
+Install-Package Microsoft.Identity.Client -IncludePrerelease
+```
+
+### MSAL initialization
+
+You can add the reference for MSAL by adding the following code:
+
+```csharp
+using Microsoft.Identity.Client;
+```
+
+Then, initialize MSAL using the following code:
+
+```csharp
+IPublicClientApplication publicClientApp = PublicClientApplicationBuilder.Create(ClientId)
+ .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
+ .WithAuthority(AzureCloudInstance.AzurePublic, Tenant)
+ .Build();
+```
+
+|Where: | Description |
+|||
+| `ClientId` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
+
+### Requesting tokens
+
+MSAL has two methods for acquiring tokens: `AcquireTokenInteractive` and `AcquireTokenSilent`.
+
+#### Get a user token interactively
+
+Some situations require forcing users to interact with the Microsoft identity platform through a pop-up window to either validate their credentials or to give consent. Some examples include:
+
+- The first time users sign in to the application
+- When users may need to reenter their credentials because the password has expired
+- When your application is requesting access to a resource that the user needs to consent to
+- When two-factor authentication is required
+
+```csharp
+authResult = await app.AcquireTokenInteractive(_scopes)
+ .ExecuteAsync();
+```
+
+|Where:| Description |
+|||
+| `_scopes` | Contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs. |
+
+#### Get a user token silently
+
+You don't want to require users to validate their credentials every time they need to access a resource. Most of the time you want token acquisition and renewal to happen without any user interaction. You can use the `AcquireTokenSilent` method to obtain tokens for accessing protected resources after the initial `AcquireTokenInteractive` call:
+
+```csharp
+var accounts = await app.GetAccountsAsync();
+var firstAccount = accounts.FirstOrDefault();
+authResult = await app.AcquireTokenSilent(scopes, firstAccount)
+ .ExecuteAsync();
+```
+
+|Where: | Description |
+|||
+| `scopes` | Contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs. |
+| `firstAccount` | Specifies the first user in the cache (MSAL supports multiple users in a single app). |
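+
+In practice, the two methods are usually combined: try `AcquireTokenSilent` first, and fall back to `AcquireTokenInteractive` only when MSAL signals that user interaction is required. The following is a minimal sketch of that pattern, reusing the `app` and `scopes` variables from the snippets above:
+
+```csharp
+AuthenticationResult authResult;
+var accounts = await app.GetAccountsAsync();
+var firstAccount = accounts.FirstOrDefault();
+
+try
+{
+    // Attempt to use a cached or silently refreshed token first.
+    authResult = await app.AcquireTokenSilent(scopes, firstAccount)
+        .ExecuteAsync();
+}
+catch (MsalUiRequiredException)
+{
+    // The token cache can't satisfy the request (expired session, new scope,
+    // password change, and so on), so fall back to an interactive prompt.
+    authResult = await app.AcquireTokenInteractive(scopes)
+        .ExecuteAsync();
+}
+```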
++
+## Next steps
+
+Try out the Windows desktop tutorial for a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart.
+
+> [!div class="nextstepaction"]
+> [Call Graph API tutorial](tutorial-v2-windows-desktop.md)
active-directory Quickstart Mobile App Android Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-mobile-app-android-sign-in.md
+
+ Title: "Quickstart: Sign in users and call Microsoft Graph from an Android app"
+description: In this quickstart, learn how Android applications can call an API that requires access tokens issued by the Microsoft identity platform.
+++++++ Last updated : 05/24/2023++
+#Customer intent: As an application developer, I want to learn how Android native apps can call protected APIs that require login and access tokens using the Microsoft identity platform.
++
+# Quickstart: Sign in users and call Microsoft Graph from an Android app
+
+In this quickstart, you download and run a code sample that demonstrates how an Android application can sign in users and get an access token to call the Microsoft Graph API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+Applications must be represented by an app object in Azure Active Directory (Azure AD) so that the Microsoft identity platform can provide tokens to your application.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Android Studio
+- Android API level 16+
+
+## Step 1: Get the sample app
+
+[Download the code](https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip).
+
+## Step 2: Run the sample app
+
+Select your emulator or physical device from Android Studio's **available devices** dropdown, and run the app.
+
+The sample app starts on the **Single Account Mode** screen. The default scope, **user.read**, is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is also provided by default. You can change both of these if you wish.
+
+![Screenshot of the MSAL sample app showing single and multiple account usage.](media/quickstart-v2-android/quickstart-sample-app.png)
+
+Use the app menu to change between single and multiple account modes.
+
+In single account mode, sign in using a work or school account:
+
+1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API at the bottom of the screen.
+2. Once signed in, select **Get graph data silently** to call the Microsoft Graph API without prompting the user for credentials again. Again, the output appears at the bottom of the screen.
+
+In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
+
+## How the sample works
+
+![Diagram showing how the sample app generated by this quickstart works.](media/quickstart-v2-android/android-intro.svg)
+
+The code is organized into fragments that show how to write single-account and multiple-account MSAL apps. The code files are organized as follows:
+
+| File | Demonstrates |
+| | - |
+| MainActivity | Manages the UI |
+| MSGraphRequestWrapper | Calls the Microsoft Graph API using the token provided by MSAL |
+| MultipleAccountModeFragment | Initializes a multi-account application, loads a user account, and gets a token to call the Microsoft Graph API |
+| SingleAccountModeFragment | Initializes a single-account application, loads a user account, and gets a token to call the Microsoft Graph API |
+| res/auth_config_multiple_account.json | The multiple account configuration file |
+| res/auth_config_single_account.json | The single account configuration file |
+| Gradle Scripts/build.gradle (Module: app) | The MSAL library dependencies are added here |
+
+We'll now look at these files in more detail and call out the MSAL-specific code in each.
+
+### Adding MSAL to the app
+
+MSAL ([com.microsoft.identity.client](https://javadoc.io/doc/com.microsoft.identity.client/msal)) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. Gradle 3.0+ installs the library when you add the following to **Gradle Scripts** > **build.gradle (Module: app)** under **Dependencies**:
+
+```gradle
+dependencies {
+ ...
+ implementation 'com.microsoft.identity.client:msal:4.5.0'
+ ...
+}
+```
+
+This instructs Gradle to download and build MSAL from Maven Central.
+
+You must also add the Maven repository references to the **allprojects** > **repositories** section of the **build.gradle (Module: app)** file, like so:
+
+```gradle
+allprojects {
+ repositories {
+ mavenCentral()
+ google()
+ mavenLocal()
+ maven {
+ url 'https://pkgs.dev.azure.com/MicrosoftDeviceSDK/DuoSDK-Public/_packaging/Duo-SDK-Feed/maven/v1'
+ }
+ maven {
+ name "vsts-maven-adal-android"
+ url "https://identitydivision.pkgs.visualstudio.com/_packaging/AndroidADAL/maven/v1"
+ credentials {
+ username System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") : project.findProperty("vstsUsername")
+ password System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") : project.findProperty("vstsMavenAccessToken")
+ }
+ }
+ jcenter()
+ }
+}
+```
+
+### MSAL imports
+
+The imports that are relevant to the MSAL library are in the `com.microsoft.identity.client` package. For example, you'll see `import com.microsoft.identity.client.PublicClientApplication;`, which imports the `PublicClientApplication` class that represents your public client application.
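+
+As an illustration, the single account fragment in this sample pulls in imports along these lines (the exact set varies by file):
+
+```java
+import com.microsoft.identity.client.AuthenticationCallback;
+import com.microsoft.identity.client.IAccount;
+import com.microsoft.identity.client.IAuthenticationResult;
+import com.microsoft.identity.client.IPublicClientApplication;
+import com.microsoft.identity.client.ISingleAccountPublicClientApplication;
+import com.microsoft.identity.client.PublicClientApplication;
+import com.microsoft.identity.client.exception.MsalException;
+```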
+
+### SingleAccountModeFragment.java
+
+This file demonstrates how to create a single account MSAL app and call a Microsoft Graph API.
+
+Single account apps are only used by a single user. For example, you might just have one account that you sign into your mapping app with.
+
+#### Single account MSAL initialization
+
+In `SingleAccountModeFragment.java`, in the `onCreateView()` method, a single account `PublicClientApplication` is created using the config information stored in the `auth_config_single_account.json` file. This is how you initialize the MSAL library for use in a single-account MSAL app:
+
+```java
+...
+// Creates a PublicClientApplication object with res/raw/auth_config_single_account.json
+PublicClientApplication.createSingleAccountPublicClientApplication(getContext(),
+ R.raw.auth_config_single_account,
+ new IPublicClientApplication.ISingleAccountApplicationCreatedListener() {
+ @Override
+ public void onCreated(ISingleAccountPublicClientApplication application) {
+ /**
+ * This test app assumes that the app is only going to support one account.
+ * This requires "account_mode" : "SINGLE" in the config json file.
+ **/
+ mSingleAccountApp = application;
+ loadAccount();
+ }
+
+ @Override
+ public void onError(MsalException exception) {
+ displayError(exception);
+ }
+ });
+```
+
+#### Sign in a user
+
+In `SingleAccountModeFragment.java`, the code to sign in a user is in `initializeUI()`, in the `signInButton` click handler.
+
+Call `signIn()` before trying to acquire tokens. `signIn()` behaves as though `acquireToken()` is called, resulting in an interactive prompt for the user to sign in.
+
+Signing in a user is an asynchronous operation. A callback is passed that calls the Microsoft Graph API and updates the UI once the user signs in:
+
+```java
+mSingleAccountApp.signIn(getActivity(), null, getScopes(), getAuthInteractiveCallback());
+```
+
+#### Sign out a user
+
+In `SingleAccountModeFragment.java`, the code to sign out a user is in `initializeUI()`, in the `signOutButton` click handler. Signing a user out is an asynchronous operation. Signing the user out also clears the token cache for that account. A callback is created to update the UI once the user account is signed out:
+
+```java
+mSingleAccountApp.signOut(new ISingleAccountPublicClientApplication.SignOutCallback() {
+ @Override
+ public void onSignOut() {
+ updateUI(null);
+ performOperationOnSignOut();
+ }
+
+ @Override
+ public void onError(@NonNull MsalException exception) {
+ displayError(exception);
+ }
+});
+```
+
+#### Get a token interactively or silently
+
+To present the fewest prompts to the user, you'll typically get a token silently and attempt interactive acquisition only if that fails. The first time the app calls `signIn()`, it effectively acts as a call to `acquireToken()`, which prompts the user for credentials.
+
+Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
+
+- The first time the user signs in to the application
+- If a user resets their password, they'll need to enter their credentials
+- If consent is revoked
+- If your app explicitly requires consent
+- When your application is requesting access to a resource for the first time
+- When MFA or other Conditional Access policies are required
+
+The code to get a token interactively, that is with UI that will involve the user, is in `SingleAccountModeFragment.java`, in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
+
+```java
+/**
+ * If acquireTokenSilent() returns an error that requires an interaction (MsalUiRequiredException),
+ * invoke acquireToken() to have the user resolve the interrupt interactively.
+ *
+ * Some example scenarios are
+ * - password change
+ * - the resource you're acquiring a token for has a stricter set of requirements than your Single Sign-On refresh token.
+ * - you're introducing a new scope that the user has never consented to.
+ **/
+mSingleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+```
+
+If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens silently as shown in `initializeUI()`, in the `callGraphApiSilentButton` click handler:
+
+```java
+/**
+ * Once you've signed the user in,
+ * you can perform acquireTokenSilent to obtain resources without interrupting the user.
+ **/
+ mSingleAccountApp.acquireTokenSilentAsync(getScopes(), AUTHORITY, getAuthSilentCallback());
+```
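+
+The silent and interactive paths are typically chained in the silent callback: if MSAL reports that user interaction is required, the app falls back to `acquireToken()`. The following sketch shows one way to structure that fallback, assuming the same fields and helpers used elsewhere in the fragment (`mSingleAccountApp`, `getScopes()`, `getAuthInteractiveCallback()`, `callGraphAPI()`, and `displayError()`):
+
+```java
+private SilentAuthenticationCallback getAuthSilentCallback() {
+    return new SilentAuthenticationCallback() {
+        @Override
+        public void onSuccess(IAuthenticationResult authenticationResult) {
+            // Token acquired without user interaction; call Microsoft Graph.
+            callGraphAPI(authenticationResult);
+        }
+
+        @Override
+        public void onError(MsalException exception) {
+            if (exception instanceof MsalUiRequiredException) {
+                // The token cache can't satisfy the request; prompt the user.
+                mSingleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+            } else {
+                displayError(exception);
+            }
+        }
+    };
+}
+```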
+
+#### Load an account
+
+The code to load an account is in `SingleAccountModeFragment.java` in `loadAccount()`. Loading the user's account is an asynchronous operation, so callbacks to handle when the account loads, changes, or an error occurs are passed to MSAL. The following code also handles `onAccountChanged()`, which occurs when an account is removed, the user changes to another account, and so on.
+
+```java
+private void loadAccount() {
+ ...
+
+ mSingleAccountApp.getCurrentAccountAsync(new ISingleAccountPublicClientApplication.CurrentAccountCallback() {
+ @Override
+ public void onAccountLoaded(@Nullable IAccount activeAccount) {
+ // You can use the account data to update your UI or your app database.
+ updateUI(activeAccount);
+ }
+
+ @Override
+ public void onAccountChanged(@Nullable IAccount priorAccount, @Nullable IAccount currentAccount) {
+ if (currentAccount == null) {
+ // Perform a cleanup task as the signed-in account changed.
+ performOperationOnSignOut();
+ }
+ }
+
+ @Override
+ public void onError(@NonNull MsalException exception) {
+ displayError(exception);
+ }
+ });
+```
+
+#### Call Microsoft Graph
+
+When a user is signed in, the call to Microsoft Graph is made via an HTTP request by `callGraphAPI()`, which is defined in `SingleAccountModeFragment.java`. This function is a wrapper that simplifies the sample by getting the access token from the `authenticationResult`, packaging the call to `MSGraphRequestWrapper`, and displaying the results of the call.
+
+```java
+private void callGraphAPI(final IAuthenticationResult authenticationResult) {
+ MSGraphRequestWrapper.callGraphAPIUsingVolley(
+ getContext(),
+ graphResourceTextView.getText().toString(),
+ authenticationResult.getAccessToken(),
+ new Response.Listener<JSONObject>() {
+ @Override
+ public void onResponse(JSONObject response) {
+ /* Successfully called graph, process data and send to UI */
+ ...
+ }
+ },
+ new Response.ErrorListener() {
+ @Override
+ public void onErrorResponse(VolleyError error) {
+ ...
+ }
+ });
+}
+```
+
+### auth_config_single_account.json
+
+This is the configuration file for an MSAL app that uses a single account.
+
+See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of these fields.
+
+Note the presence of `"account_mode" : "SINGLE"`, which configures this app to use a single account.
+
+`"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
+`"redirect_uri"`is preconfigured to use the signing key provided with the code sample.
+
+```json
+{
+ "client_id": "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
+ "authorization_user_agent": "DEFAULT",
+ "redirect_uri": "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
+ "account_mode": "SINGLE",
+ "broker_redirect_uri_registered": true,
+ "authorities": [
+ {
+ "type": "AAD",
+ "audience": {
+ "type": "AzureADandPersonalMicrosoftAccount",
+ "tenant_id": "common"
+ }
+ }
+ ]
+}
+```
+
+### MultipleAccountModeFragment.java
+
+This file demonstrates how to create a multiple account MSAL app and call a Microsoft Graph API.
+
+An example of a multiple account app is a mail app that allows you to work with multiple user accounts such as a work account and a personal account.
+
+#### Multiple account MSAL initialization
+
+In the `MultipleAccountModeFragment.java` file, in `onCreateView()`, a multiple account app object (`IMultipleAccountPublicClientApplication`) is created using the config information stored in the `auth_config_multiple_account.json` file:
+
+```java
+// Creates a PublicClientApplication object with res/raw/auth_config_multiple_account.json
+PublicClientApplication.createMultipleAccountPublicClientApplication(getContext(),
+ R.raw.auth_config_multiple_account,
+ new IPublicClientApplication.IMultipleAccountApplicationCreatedListener() {
+ @Override
+ public void onCreated(IMultipleAccountPublicClientApplication application) {
+ mMultipleAccountApp = application;
+ loadAccounts();
+ }
+
+ @Override
+ public void onError(MsalException exception) {
+ ...
+ }
+ });
+```
+
+The created `MultipleAccountPublicClientApplication` object is stored in a class member variable so that it can be used to interact with the MSAL library to acquire tokens and load and remove the user account.
+
+#### Load an account
+
+Multiple account apps usually call `getAccounts()` to select the account to use for MSAL operations. The code to load an account is in the `MultipleAccountModeFragment.java` file, in `loadAccounts()`. Loading the user's account is an asynchronous operation, so a callback handles the situations when the account is loaded, changes, or an error occurs.
+
+```java
+/**
+ * Load currently signed-in accounts, if there's any.
+ **/
+private void loadAccounts() {
+ if (mMultipleAccountApp == null) {
+ return;
+ }
+
+ mMultipleAccountApp.getAccounts(new IPublicClientApplication.LoadAccountsCallback() {
+ @Override
+ public void onTaskCompleted(final List<IAccount> result) {
+ // You can use the account data to update your UI or your app database.
+ accountList = result;
+ updateUI(accountList);
+ }
+
+ @Override
+ public void onError(MsalException exception) {
+ displayError(exception);
+ }
+ });
+}
+```
+
+#### Get a token interactively or silently
+
+Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
+
+- The first time users sign in to the application
+- If a user resets their password, they'll need to enter their credentials
+- If consent is revoked
+- If your app explicitly requires consent
+- When your application is requesting access to a resource for the first time
+- When MFA or other Conditional Access policies are required
+
+Multiple account apps should typically acquire tokens interactively, that is with UI that involves the user, with a call to `acquireToken()`. The code to get a token interactively is in the `MultipleAccountModeFragment.java` file in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
+
+```java
+/**
+ * Acquire token interactively. It will also create an account object for the silent call as a result (to be obtained by getAccount()).
+ *
+ * If acquireTokenSilent() returns an error that requires an interaction,
+ * invoke acquireToken() to have the user resolve the interrupt interactively.
+ *
+ * Some example scenarios are
+ * - password change
+ * - the resource you're acquiring a token for has a stricter set of requirements than your SSO refresh token.
+ * - you're introducing a new scope that the user has never consented to.
+ **/
+mMultipleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+```
+
+Apps shouldn't require the user to sign in every time they request a token. If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens without prompting the user, as shown in the `MultipleAccountModeFragment.java` file, in `initializeUI()`, in the `callGraphApiSilentButton` click handler:
+
+```java
+/**
+ * Performs acquireToken without interrupting the user.
+ *
+ * This requires an account object of the account you're obtaining a token for.
+ * (can be obtained via getAccount()).
+ */
+mMultipleAccountApp.acquireTokenSilentAsync(getScopes(),
+ accountList.get(accountListSpinner.getSelectedItemPosition()),
+ AUTHORITY,
+ getAuthSilentCallback());
+```
+
+#### Remove an account
+
+The code to remove an account, and any cached tokens for the account, is in the `MultipleAccountModeFragment.java` file in `initializeUI()` in the handler for the remove account button. Before you can remove an account, you need an account object, which you obtain from MSAL methods like `getAccounts()` and `acquireToken()`. Because removing an account is an asynchronous operation, the `onRemoved` callback is supplied to update the UI.
+
+```java
+/**
+ * Removes the selected account and cached tokens from this app (or device, if the device is in shared mode).
+ **/
+mMultipleAccountApp.removeAccount(accountList.get(accountListSpinner.getSelectedItemPosition()),
+ new IMultipleAccountPublicClientApplication.RemoveAccountCallback() {
+ @Override
+ public void onRemoved() {
+ ...
+ /* Reload account asynchronously to get the up-to-date list. */
+ loadAccounts();
+ }
+
+ @Override
+ public void onError(@NonNull MsalException exception) {
+ displayError(exception);
+ }
+ });
+```
+
+### auth_config_multiple_account.json
+
+This is the configuration file for an MSAL app that uses multiple accounts.
+
+See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of the various fields.
+
+Unlike the [auth_config_single_account.json](#auth_config_single_accountjson) configuration file, this config file has `"account_mode" : "MULTIPLE"` instead of `"account_mode" : "SINGLE"` because this is a multiple account app.
+
+`"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
+`"redirect_uri"`is preconfigured to use the signing key provided with the code sample.
+
+```json
+{
+ "client_id": "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
+ "authorization_user_agent": "DEFAULT",
+ "redirect_uri": "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
+ "account_mode": "MULTIPLE",
+ "broker_redirect_uri_registered": true,
+ "authorities": [
+ {
+ "type": "AAD",
+ "audience": {
+ "type": "AzureADandPersonalMicrosoftAccount",
+ "tenant_id": "common"
+ }
+ }
+ ]
+}
+```
++
+## Next steps
+
+Move on to the Android tutorial in which you build an Android app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Sign in users and call the Microsoft Graph from an Android application](tutorial-v2-android.md)
active-directory Quickstart Mobile App Ios Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-mobile-app-ios-sign-in.md
+
+ Title: "Quickstart: Sign in users and call Microsoft Graph from an iOS or macOS app"
+description: In this quickstart, learn how an iOS or macOS app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
+++++++ Last updated : 01/14/2022+++
+#Customer intent: As an application developer, I want to learn how to sign in users and call Microsoft Graph from my iOS or macOS application.
++
+# Quickstart: Sign in users and call Microsoft Graph from an iOS or macOS app
+
+In this quickstart, you download and run a code sample that demonstrates how a native iOS or macOS application can sign in users and get an access token to call the Microsoft Graph API.
+
+The quickstart applies to both iOS and macOS apps. Some steps are needed only for iOS apps and will be indicated as such.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Xcode 10+
+* iOS 10+
+* macOS 10.12+
+
+## How the sample works
+
+![Diagram showing how the sample app generated by this quickstart works.](media/quickstart-v2-ios/ios-intro.svg)
+
+## Register and download your quickstart app
+You have two options to start your quickstart application:
+* [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-the-code-sample)
+* [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
+
+### Option 1: Register and auto configure your app and then download the code sample
+#### Step 1: Register your application
+To register your app:
+1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/IosQuickstartPage/sourceType/docs) quickstart experience.
+1. Enter a name for your application and select **Register**.
+1. Follow the instructions to download and automatically configure your new application with just one click.
+
+### Option 2: Register and manually configure your application and code sample
+
+#### Step 1: Register your application
++
+To register your application and add the app's registration information to your solution manually, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
+1. Select **Register**.
+1. Under **Manage**, select **Authentication** > **Add Platform** > **iOS**.
+1. Enter the **Bundle Identifier** for your application. The bundle identifier is a string that uniquely identifies your application, for example `com.<yourname>.identitysample.MSALMacOS`. Make a note of the value you use. The iOS configuration also applies to macOS applications.
+1. Select **Configure** and save the **MSAL Configuration** details for later in this quickstart.
+1. Select **Done**.
+
+#### Step 2: Download the sample project
+
+- [Download the code sample for iOS](https://github.com/Azure-Samples/active-directory-ios-swift-native-v2/archive/master.zip)
+- [Download the code sample for macOS](https://github.com/Azure-Samples/active-directory-macOS-swift-native-v2/archive/master.zip)
+
+#### Step 3: Install dependencies
+
+1. Extract the zip file.
+2. In a terminal window, navigate to the folder with the downloaded code sample and run `pod install` to install the latest MSAL library.
+
+#### Step 4: Configure your project
+If you selected Option 1 above, you can skip these steps.
+1. Open the project in Xcode.
+1. Edit **ViewController.swift** and replace the line starting with `let kClientID` with the following code snippet. Remember to update the value for `kClientID` with the client ID that you saved when you registered your app in the portal earlier in this quickstart:
+
+ ```swift
+ let kClientID = "Enter_the_Application_Id_Here"
+ ```
+
+1. If you're building an app for [Azure AD national clouds](/graph/deployments#app-registration-and-token-service-root-endpoints), replace the lines starting with `let kGraphEndpoint` and `let kAuthority` with the correct endpoints. For global access, use the default values:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.com/"
+ let kAuthority = "https://login.microsoftonline.com/common"
+ ```
+
+1. Other endpoints are documented [here](/graph/deployments#app-registration-and-token-service-root-endpoints). For example, to run the quickstart with Azure AD Germany, use the following:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.de/"
+ let kAuthority = "https://login.microsoftonline.de/common"
+ ```
+
+1. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
+1. Right-click **Info.plist** and select **Open As** > **Source Code**.
+1. Under the dict root node, replace `Enter_the_Bundle_Id_Here` with the ***Bundle Id*** that you used in the portal. Notice the `msauth.` prefix in the string.
+
+ ```xml
+ <key>CFBundleURLTypes</key>
+ <array>
+ <dict>
+ <key>CFBundleURLSchemes</key>
+ <array>
+ <string>msauth.Enter_the_Bundle_Id_Here</string>
+ </array>
+ </dict>
+ </array>
+ ```
+
+1. Build and run the app!
+
+## More Information
+
+Read these sections to learn more about this quickstart.
+
+### Get MSAL
+
+MSAL ([MSAL.framework](https://github.com/AzureAD/microsoft-authentication-library-for-objc)) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. You can add MSAL to your application by using CocoaPods. First, open your Podfile:
+
+```bash
+$ vi Podfile
+```
+
+Add the following to the Podfile (using your project's target):
+
+```ruby
+use_frameworks!
+
+target 'MSALiOS' do
+ pod 'MSAL'
+end
+```
+
+Run the CocoaPods installation command:
+
+`pod install`
+
+### Initialize MSAL
+
+You can add the reference for MSAL by adding the following code:
+
+```swift
+import MSAL
+```
+
+Then, initialize MSAL using the following code:
+
+```swift
+let authority = try MSALAADAuthority(url: URL(string: kAuthority)!)
+
+let msalConfiguration = MSALPublicClientApplicationConfig(clientId: kClientID, redirectUri: nil, authority: authority)
+self.applicationContext = try MSALPublicClientApplication(configuration: msalConfiguration)
+```
+
+|Where: | Description |
+|||
+| `clientId` | The **Application (client) ID** of the application registered in the Azure portal. |
+| `authority` | The Microsoft identity platform authority. In most cases, this is `https://login.microsoftonline.com/common`. |
+| `redirectUri` | The redirect URI of the application. You can pass `nil` to use the default value, or pass your custom redirect URI. |
+
+### Additional app requirements (iOS only)
+
+Your app must also have the following in your `AppDelegate`. This lets the MSAL SDK handle the token response from the authentication broker app during authentication.
+
+```swift
+func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool {
+
+ return MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: options[UIApplication.OpenURLOptionsKey.sourceApplication] as? String)
+}
+```
+
+> [!NOTE]
+> On iOS 13+, if you adopt `UISceneDelegate` instead of `UIApplicationDelegate`, place this code into the `scene:openURLContexts:` callback instead (See [Apple's documentation](https://developer.apple.com/documentation/uikit/uiscenedelegate/3238059-scene?language=objc)).
+> If you support both `UISceneDelegate` and `UIApplicationDelegate` for compatibility with older iOS versions, the MSAL callback needs to be placed in both places.
+
+```swift
+func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
+
+ guard let urlContext = URLContexts.first else {
+ return
+ }
+
+ let url = urlContext.url
+ let sourceApp = urlContext.options.sourceApplication
+
+ MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: sourceApp)
+}
+```
+
+Finally, your app must have an `LSApplicationQueriesSchemes` entry in your ***Info.plist*** alongside the `CFBundleURLTypes`. The sample comes with this included.
+
+ ```xml
+ <key>LSApplicationQueriesSchemes</key>
+ <array>
+ <string>msauthv2</string>
+ <string>msauthv3</string>
+ </array>
+ ```
+
+### Sign in users & request tokens
+
+MSAL has two methods used to acquire tokens: `acquireToken` and `acquireTokenSilent`.
+
+#### acquireToken: Get a token interactively
+
+Some situations require users to interact with the Microsoft identity platform. In these cases, the user may be required to select their account, enter their credentials, or consent to your app's permissions. For example:
+
+* The first time users sign in to the application
+* If a user resets their password, they'll need to enter their credentials
+* When your application is requesting access to a resource for the first time
+* When MFA or other Conditional Access policies are required
+
+```swift
+let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+self.applicationContext!.acquireToken(with: parameters) { (result, error) in /* Add your handling logic */}
+```
+
+|Where:| Description |
+|||
+| `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`)) |
+
+#### acquireTokenSilent: Get an access token silently
+
+Apps shouldn't require their users to sign in every time they request a token. If the user has already signed in, this method allows apps to request tokens silently.
+
+```swift
+self.applicationContext!.getCurrentAccount(with: nil) { (currentAccount, previousAccount, error) in
+
+ guard let account = currentAccount else {
+ return
+ }
+
+ let silentParams = MSALSilentTokenParameters(scopes: self.kScopes, account: account)
+ self.applicationContext!.acquireTokenSilent(with: silentParams) { (result, error) in /* Add your handling logic */}
+}
+```
+
+|Where: | Description |
+|||
+| `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`)) |
+| `account` | The account a token is being requested for. This quickstart is about a single account application. If you want to build a multi-account app you'll need to define logic to identify which account to use for token requests using `accountsFromDeviceForParameters:completionBlock:` and passing correct `accountIdentifier` |
++
+## Next steps
+
+Move on to the step-by-step tutorial in which you build an iOS or macOS app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md)
active-directory Quickstart Single Page App Angular Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-angular-sign-in.md
+
+ Title: "Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using Angular"
+description: In this quickstart, learn how a JavaScript Angular single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow and call Microsoft Graph.
+Last updated: 07/27/2023
+#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my JavaScript Angular app can sign in users of personal accounts, work accounts, and school accounts.
++
+# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using Angular
+
+In this quickstart, you download and run a code sample that demonstrates how a JavaScript Angular single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+This quickstart uses MSAL Angular v2 with the authorization code flow.
+
+## Prerequisites
+
+* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
++
+## Register and download your quickstart application
++
+To start your quickstart application, use either of the following options.
+
+### Option 1 (Express): Register and auto configure your app and then download your code sample
+
+1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/AngularSpaQuickstartPage/sourceType/docs) quickstart experience.
+1. Enter a name for your application.
+1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+1. Select **Register**.
+1. Go to the quickstart pane and follow the instructions to download and automatically configure your new application.
+
+### Option 2 (Manual): Register and manually configure your application and code sample
+
+#### Step 1: Register your application
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
+1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
+1. Under **Manage**, select **Authentication**.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens select **Single-page application**.
+1. Set the **Redirect URIs** value to `http://localhost:4200/`. This is the default port that the Angular CLI development server listens on for your local machine. The authentication response is returned to this URI after the user is successfully authenticated.
+1. Select **Configure** to apply the changes.
+1. Under **Platform Configurations** expand **Single-page application**.
+1. Under **Grant types**, confirm that the ![Already configured](media/quickstart-v2-javascript/green-check.png) indicator shows that your redirect URI is eligible for the authorization code flow with PKCE.
+
+#### Step 2: Download the project
+
+To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa/archive/main.zip).
+
+#### Step 3: Configure your JavaScript app
+
+In the *src* folder, open the *app* folder then open the *app.module.ts* file and update the `clientID`, `authority`, and `redirectUri` values in the `auth` object.
+
+```javascript
+// MSAL instance to be passed to msal-angular
+export function MSALInstanceFactory(): IPublicClientApplication {
+ return new PublicClientApplication({
+ auth: {
+ clientId: 'Enter_the_Application_Id_Here',
+ authority: 'Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here',
+ redirectUri: 'Enter_the_Redirect_Uri_Here'
+ },
+ cache: {
+ cacheLocation: BrowserCacheLocation.LocalStorage,
+ storeAuthStateInCookie: isIE, // set to true for IE 11
+ },
+ });
+}
+```
+
+Modify the values in the `auth` section as described here:
+
+- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+
+ To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
+- `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
+- `Enter_the_Tenant_info_here` is set to one of the following:
+ - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+
+ To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
+ - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
+ - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
+ - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+
+ To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
+- `Enter_the_Redirect_Uri_Here` is `http://localhost:4200/`.
+
+The `authority` value in your *app.module.ts* should be similar to the following if you're using the main (global) Azure cloud:
+
+```javascript
+authority: "https://login.microsoftonline.com/common",
+```
+
+Scroll down in the same file and update the `graphMeEndpoint` value.
+- `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API that calls are made against. For the main (global) Microsoft Graph API service, it's `https://graph.microsoft.com/` (include the trailing forward slash). For other clouds, see the [national cloud documentation](/graph/deployments).
+- For the global service, replace the entire string `Enter_the_Graph_Endpoint_Herev1.0/me` with `https://graph.microsoft.com/v1.0/me`.
+
+```javascript
+export function MSALInterceptorConfigFactory(): MsalInterceptorConfiguration {
+ const protectedResourceMap = new Map<string, Array<string>>();
+ protectedResourceMap.set('Enter_the_Graph_Endpoint_Herev1.0/me', ['user.read']);
+
+ return {
+ interactionType: InteractionType.Redirect,
+ protectedResourceMap
+ };
+}
+```
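For reference, once the placeholder is replaced, the protected resource map pairs the Graph endpoint with its required scopes. The mapping can be sketched in plain JavaScript (the Angular `MsalInterceptorConfiguration` types are omitted here for brevity):

```javascript
// Sketch of the protected resource map with the global Microsoft Graph
// endpoint filled in. Plain JavaScript is used for illustration; in the
// sample, this Map is returned inside MSALInterceptorConfigFactory.
const protectedResourceMap = new Map();
protectedResourceMap.set('https://graph.microsoft.com/v1.0/me', ['user.read']);

// The MSAL interceptor attaches an access token with the mapped scopes to
// any outgoing request whose URL matches an entry in the map.
const scopes = protectedResourceMap.get('https://graph.microsoft.com/v1.0/me');
```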
+
+#### Step 4: Run the project
+
+Run the project with a web server by using Node.js:
+
+1. To start the server, run the following commands from within the project directory:
+ ```console
+ npm install
+ npm start
+ ```
+1. Browse to `http://localhost:4200/`.
+
+1. Select **Login** to start the sign-in process and then call the Microsoft Graph API.
+
+ The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, select the **Profile** button to display your user information on the page.
+
+## More information
+
+### How the sample works
+
+![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+
+### msal.js
+
+The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+
+If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+
+```console
+npm install @azure/msal-browser @azure/msal-angular@2
+```
+
+## Next steps
+
+For a detailed step-by-step guide on building the auth code flow application using vanilla JavaScript, see the following tutorial:
+
+> [!div class="nextstepaction"]
+> [Tutorial to sign in users and call Microsoft Graph](tutorial-v2-javascript-auth-code.md)
active-directory Quickstart Single Page App Javascript Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-javascript-sign-in.md
+
+ Title: "Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using JavaScript"
+description: In this quickstart, learn how a JavaScript single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow.
+Last updated: 07/27/2023
+#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my JavaScript app can sign in users of personal accounts, work accounts, and school accounts.
++
+# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using JavaScript
+
+In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow with Proof Key for Code Exchange (PKCE). The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+## Prerequisites
+
+* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
++
+## Register and download your quickstart application
++
+To start your quickstart application, use either of the following options.
+
+### Option 1 (Express): Register and auto configure your app and then download your code sample
+
+1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/AngularSpaQuickstartPage/sourceType/docs) quickstart experience.
+1. Enter a name for your application.
+1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+1. Select **Register**.
+1. Go to the quickstart pane and follow the instructions to download and automatically configure your new application.
+
+### Option 2 (Manual): Register and manually configure your application and code sample
+
+#### Step 1: Register your application
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
+1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
+1. Under **Manage**, select **Authentication**.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens select **Single-page application**.
+1. Set the **Redirect URI** value to `http://localhost:3000/`.
+1. Select **Configure**.
+
+#### Step 2: Download the project
+
+To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-v2/archive/master.zip).
++
+#### Step 3: Configure your JavaScript app
+
+In the *app* folder, open the *authConfig.js* file, and then update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
+
+```javascript
+// Config object to be passed to MSAL on creation
+const msalConfig = {
+ auth: {
+ clientId: "Enter_the_Application_Id_Here",
+ authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
+ redirectUri: "Enter_the_Redirect_Uri_Here",
+ },
+ cache: {
+ cacheLocation: "sessionStorage", // This configures where your cache will be stored
+ storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
+ }
+};
+```
+
+Modify the values in the `msalConfig` section:
+
+- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+
+ To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
+- `Enter_the_Cloud_Instance_Id_Here` is the Azure cloud instance. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
+- `Enter_the_Tenant_info_here` is one of the following:
+ - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+
+ To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
+ - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
+ - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
+ - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+
+ To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
+- `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.
+
+The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
+
+```javascript
+authority: "https://login.microsoftonline.com/common",
+```
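Putting the values together, a fully configured `msalConfig` for this quickstart might look like the following sketch. The client ID shown is a placeholder GUID, not a real registration:

```javascript
// Hypothetical filled-in config: replace the placeholder GUID below with
// your own Application (client) ID from the Azure portal.
const msalConfig = {
    auth: {
        clientId: "00000000-0000-0000-0000-000000000000", // placeholder
        authority: "https://login.microsoftonline.com/common",
        redirectUri: "http://localhost:3000/",
    },
    cache: {
        cacheLocation: "sessionStorage",  // where the token cache is stored
        storeAuthStateInCookie: false,    // set to true only for IE11 or legacy Edge
    }
};
```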
+
+Next, open the *graphConfig.js* file to update the `graphMeEndpoint` and `graphMailEndpoint` values in the `apiConfig` object.
+
+```javascript
+ // Add here the endpoints for MS Graph API services you would like to use.
+ const graphConfig = {
+ graphMeEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me",
+ graphMailEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me/messages"
+ };
+
+ // Add here scopes for access token to be used at MS Graph API endpoints.
+ const tokenRequest = {
+ scopes: ["Mail.Read"]
+ };
+```
+
+`Enter_the_Graph_Endpoint_Here` is the endpoint that API calls are made against. For the main (global) Microsoft Graph API service, enter `https://graph.microsoft.com/` (include the trailing forward-slash). For more information about Microsoft Graph on national clouds, see [National cloud deployment](/graph/deployments).
+
+If you're using the main (global) Microsoft Graph API service, the `graphMeEndpoint` and `graphMailEndpoint` values in the *graphConfig.js* file should be similar to the following:
+
+```javascript
+graphMeEndpoint: "https://graph.microsoft.com/v1.0/me",
+graphMailEndpoint: "https://graph.microsoft.com/v1.0/me/messages"
+```
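After MSAL acquires an access token, the sample calls these endpoints with the token in a bearer `Authorization` header. A minimal sketch of building such a request follows; the token value here is a dummy string for illustration, and in the sample it comes from MSAL's token acquisition methods:

```javascript
// Build the fetch options for a Microsoft Graph call. The access token
// below is a dummy value for illustration only.
const graphMeEndpoint = "https://graph.microsoft.com/v1.0/me";
const accessToken = "dummy-access-token";

const options = {
    method: "GET",
    headers: { Authorization: `Bearer ${accessToken}` }
};

// fetch(graphMeEndpoint, options).then(response => response.json());
```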
+
+#### Step 4: Run the project
+
+Run the project with a web server by using Node.js.
+
+1. To start the server, run the following commands from within the project directory:
+
+ ```console
+ npm install
+ npm start
+ ```
+
+1. Go to `http://localhost:3000/`.
+
+1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
+
+ The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, your user profile information is displayed on the page.
+
+## More information
+
+### How the sample works
+
+![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+
+### MSAL.js
+
+The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform. The sample's *index.html* file contains a reference to the library:
+
+```html
+<script type="text/javascript" src="https://alcdn.msauth.net/browser/2.0.0-beta.0/js/msal-browser.js" integrity=
+"sha384-r7Qxfs6PYHyfoBR6zG62DGzptfLBxnREThAlcJyEfzJ4dq5rqExc1Xj3TPFE/9TH" crossorigin="anonymous"></script>
+```
+
+If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+
+```console
+npm install @azure/msal-browser
+```
+
+## Next steps
+
+For a more detailed step-by-step guide on building the application used in this quickstart, see the following tutorial:
+
+> [!div class="nextstepaction"]
+> [Tutorial to sign in users and call Microsoft Graph](tutorial-v2-javascript-auth-code.md)
active-directory Quickstart Single Page App React Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-single-page-app-react-sign-in.md
+
+ Title: "Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using React"
+description: In this quickstart, learn how a JavaScript React single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow and call Microsoft Graph.
+Last updated: 07/27/2023
+#Customer intent: As an app developer, I want to learn how to login, logout, conditionally render components to authenticated users, and acquire an access token for a protected resource such as Microsoft Graph by using the Microsoft identity platform so that my JavaScript React app can sign in users of personal accounts, work accounts, and school accounts.
++
+# Quickstart: Sign in users in a single-page app (SPA) and call the Microsoft Graph API using React
++
+In this quickstart, you download and run a code sample that demonstrates how a JavaScript React single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+## Prerequisites
+
+* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
++
+## Register and download your quickstart application
++
+To start your quickstart application, use either of the following options.
+
+### Option 1 (Express): Register and auto configure your app and then download your code sample
+
+1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/AngularSpaQuickstartPage/sourceType/docs) quickstart experience.
+1. Enter a name for your application.
+1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+1. Select **Register**.
+1. Go to the quickstart pane and follow the instructions to download and automatically configure your new application.
+
+### Option 2 (Manual): Register and manually configure your application and code sample
+
+#### Step 1: Register your application
++
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. When the **Register an application** page appears, enter a name for your application.
+1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+1. Select **Register**. On the app **Overview** page, note the **Application (client) ID** value for later use.
+1. Under **Manage**, select **Authentication**.
+1. Under **Platform configurations**, select **Add a platform**. In the pane that opens select **Single-page application**.
+1. Set the **Redirect URIs** value to `http://localhost:3000/`. This is the default port that the development server listens on for your local machine. The authentication response is returned to this URI after the user is successfully authenticated.
+1. Select **Configure** to apply the changes.
+1. Under **Platform Configurations** expand **Single-page application**.
+1. Under **Grant types**, confirm that the ![Already configured](media/quickstart-v2-javascript/green-check.png) indicator shows that your redirect URI is eligible for the authorization code flow with PKCE.
+
+#### Step 2: Download the project
++
+To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-javascript-react-spa/archive/main.zip).
+
+#### Step 3: Configure your JavaScript app
+
+In the *src* folder, open the *authConfig.js* file and update the `clientID`, `authority`, and `redirectUri` values in the `msalConfig` object.
+
+```javascript
+/**
+* Configuration object to be passed to MSAL instance on creation.
+* For a full list of MSAL.js configuration parameters, visit:
+* https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md
+*/
+export const msalConfig = {
+ auth: {
+ clientId: "Enter_the_Application_Id_Here",
+ authority: "Enter_the_Cloud_Instance_Id_Here/Enter_the_Tenant_Info_Here",
+ redirectUri: "Enter_the_Redirect_Uri_Here"
+ },
+ cache: {
+ cacheLocation: "sessionStorage", // This configures where your cache will be stored
+ storeAuthStateInCookie: false, // Set this to "true" if you are having issues on IE11 or Edge
+ },
+};
+```
+
+Modify the values in the `msalConfig` section as described here:
+
+- `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+
+ To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
+- `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md).
+- `Enter_the_Tenant_info_here` is set to one of the following:
+ - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+
+ To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
+ - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
+ - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`.
+ - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+
+ To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
+- `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`.
+
+The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
+
+```javascript
+authority: "https://login.microsoftonline.com/common",
+```
+
+Scroll down in the same file and update the `graphMeEndpoint` value.
+- `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API that calls are made against. For the main (global) Microsoft Graph API service, it's `https://graph.microsoft.com/` (include the trailing forward slash). For other clouds, see the [national cloud documentation](/graph/deployments).
+- For the global service, replace the entire string `Enter_the_Graph_Endpoint_Herev1.0/me` with `https://graph.microsoft.com/v1.0/me`.
+
+```javascript
+ // Add here the endpoints for MS Graph API services you would like to use.
+ export const graphConfig = {
+ graphMeEndpoint: "Enter_the_Graph_Endpoint_Herev1.0/me"
+ };
+```
+
+#### Step 4: Run the project
+
+Run the project with a web server by using Node.js:
+
+1. To start the server, run the following commands from within the project directory:
+ ```console
+ npm install
+ npm start
+ ```
+1. Browse to `http://localhost:3000/`.
+
+1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
+
+ The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, select **Request Profile Information** to display your profile information on the page.
+
+## More information
+
+### How the sample works
+
+![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+
+### msal.js
+
+The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+
+If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+
+```console
+npm install @azure/msal-browser @azure/msal-react
+```
+
+## Next steps
+
+Next, try a step-by-step tutorial to learn how to build a React SPA from scratch that signs in users and calls the Microsoft Graph API to get user profile data:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Sign in users and call Microsoft Graph](tutorial-v2-react.md)
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-android.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. We're currently working on a fix, but for now, please use the link below - it should take you to the right article: >
-> > [Quickstart: Android app with user sign-in](mobile-app-quickstart.md?pivots=devlang-android)
+> > [Quickstart: Sign in users and call Microsoft Graph from an Android app](quickstart-mobile-app-android-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart:Protect an ASP.NET Core web API](web-api-quickstart.md?pivots=devlang-aspnet-core)
+> > [Quickstart: Call an ASP.NET web API that is protected by the Microsoft identity platform](quickstart-web-api-aspnet-protect-api.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: ASP.NET Core web app that signs in users and calls a web API](web-app-quickstart.md?pivots=devlang-aspnet-core)
+> > [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](quickstart-web-app-aspnet-core-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: ASP.NET Core web app with user sign-in](web-app-quickstart.md?pivots=devlang-aspnet-core)
+> > [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](quickstart-web-app-aspnet-core-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: ASP.NET web app that signs in users](web-app-quickstart.md?pivots=devlang-aspnet)
+> > [Quickstart: Add sign-in with Microsoft to an ASP.NET web app](quickstart-web-app-aspnet-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Call a protected ASP.NET web API](web-api-quickstart.md?pivots=devlang-aspnet)
+> > [Quickstart: Call a protected ASP.NET web API](quickstart-web-api-aspnet-protect-api.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-ios.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: iOS or macOS app that signs in users and calls a web API](mobile-app-quickstart.md?pivots=devlang-ios)
+> > [Quickstart: Sign in users and call Microsoft Graph from an iOS or macOS app](quickstart-mobile-app-ios-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-daemon.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Java daemon that calls a protected API](console-app-quickstart.md?pivots=devlang-java)
+> > [Quickstart: Acquire a token and call Microsoft Graph from a Java daemon app](quickstart-daemon-app-java-acquire-token.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Java Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-java-webapp.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Java web app with user sign-in](web-app-quickstart.md?pivots=devlang-java)
+> > [Quickstart: Add sign-in with Microsoft to a Java web app](quickstart-web-app-java-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Angular single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-angular)
+> > [Quickstart: Sign in users in single-page apps (SPA) via the authorization code flow with Proof Key for Code Exchange (PKCE) using Angular](quickstart-single-page-app-angular-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: React single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-react)
+> > [Quickstart: Sign in users in single-page apps (SPA) via the authorization code flow with Proof Key for Code Exchange (PKCE) using React](quickstart-single-page-app-react-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: JavaScript single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-javascript)
+> > [Quickstart: Sign in users in single-page apps (SPA) via the authorization code flow with Proof Key for Code Exchange (PKCE) using JavaScript](quickstart-single-page-app-javascript-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: .NET Core console that calls an API](console-app-quickstart.md?pivots=devlang-dotnet-core)
+> > [Quickstart: Acquire a token and call Microsoft Graph in a .NET Core console app](quickstart-console-app-netcore-acquire-token.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-console.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Node.js console app that calls an API](console-app-quickstart.md?pivots=devlang-nodejs)
+> > [Quickstart: Acquire a token and call Microsoft Graph from a Node.js console app](quickstart-console-app-nodejs-acquire-token.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Node.js Electron desktop app with user sign-in](desktop-app-quickstart.md?pivots=devlang-nodejs-electron)
+> > [Quickstart: Sign in users and call Microsoft Graph from a Node.js desktop app](quickstart-desktop-app-nodejs-electron-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Node.js web app that signs in users with MSAL Node](web-app-quickstart.md?pivots=devlang-nodejs-msal)
+> > [Quickstart: Add authentication to a Node.js web app with MSAL Node](quickstart-web-app-nodejs-msal-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Python Daemon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-daemon.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Python console app that calls an API](console-app-quickstart.md?pivots=devlang-python)
+> > [Quickstart: Acquire a token and call Microsoft Graph from a Python daemon app](quickstart-daemon-app-python-acquire-token.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-python-webapp.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Python web app with user sign-in](web-app-quickstart.md?pivots=devlang-python)
+> > [Quickstart: Add sign-in with Microsoft to a Python web app](quickstart-web-app-python-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-uwp.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Universal Windows Platform (UWP) desktop app with user sign-in](desktop-app-quickstart.md?pivots=devlang-uwp)
+> > [Quickstart: Sign in users and call Microsoft Graph in a Universal Windows Platform app](quickstart-desktop-app-uwp-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-windows-desktop.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Windows Presentation Foundation (WPF) desktop app that signs in users and calls a web API](desktop-app-quickstart.md?pivots=devlang-windows-desktop)
+> > [Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app](quickstart-desktop-app-wpf-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Quickstart Web Api Aspnet Core Protect Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-api-aspnet-core-protect-api.md
+
+ Title: "Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform"
+description: In this quickstart, you download and modify a code sample that demonstrates how to protect an ASP.NET Core web API by using the Microsoft identity platform for authorization.
+ Last updated: 04/16/2023
+#Customer intent: As an application developer, I want to know how to write an ASP.NET Core web API that uses the Microsoft identity platform to authorize API requests from clients.
++
+# Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform
+
+The following quickstart uses an ASP.NET Core web API code sample to demonstrate how to restrict resource access to authorized accounts. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
+
+## Prerequisites
+
+- Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Azure Active Directory tenant](quickstart-create-new-tenant.md)
+- [.NET Core SDK 6.0+](https://dotnet.microsoft.com/)
+- [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+
+## Step 1: Register the application
++
+First, register the web API in your Azure AD tenant and add a scope by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. For **Name**, enter a name for the application. For example, enter **AspNetCoreWebApi-Quickstart**. Users of the app will see this name, and it can be changed later.
+1. Select **Register**.
+1. Under **Manage**, select **Expose an API** > **Add a scope**. For **Application ID URI**, accept the default by selecting **Save and continue**, and then enter the following details:
+ - **Scope name**: `access_as_user`
+ - **Who can consent?**: **Admins and users**
+ - **Admin consent display name**: `Access AspNetCoreWebApi-Quickstart`
+ - **Admin consent description**: `Allows the app to access AspNetCoreWebApi-Quickstart as the signed-in user.`
+ - **User consent display name**: `Access AspNetCoreWebApi-Quickstart`
+ - **User consent description**: `Allow the application to access AspNetCoreWebApi-Quickstart on your behalf.`
+ - **State**: **Enabled**
+1. Select **Add scope** to complete the scope addition.
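Client applications request this scope by its fully qualified identifier, which combines the default Application ID URI (`api://{clientId}`) with the scope name defined above. As an illustrative sketch (the client ID shown is a placeholder, not a real registration):

```python
# Illustration only: compose the fully qualified scope identifier a client
# requests, from the default Application ID URI and the scope name.
def full_scope(client_id: str, scope_name: str = "access_as_user") -> str:
    # The default Application ID URI accepted in the portal is api://{clientId}
    return f"api://{client_id}/{scope_name}"

print(full_scope("11111111-2222-3333-4444-555555555555"))
# api://11111111-2222-3333-4444-555555555555/access_as_user
```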
+
+## Step 2: Download the ASP.NET Core project
+
+[Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/archive/aspnetcore3-1.zip) from GitHub.
+
+## Step 3: Configure the ASP.NET Core project
+
+In this step, the sample code will be configured to work with the app registration that was created earlier.
+
+1. Extract the *.zip* file to a local folder that's close to the root of the disk to avoid errors caused by path length limitations on Windows. For example, extract to *C:\Azure-Samples*.
+
+1. Open the solution in the *webapp* folder in your code editor.
+1. In *appsettings.json*, replace the values of `ClientId` and `TenantId`. The values for the application (client) ID and the directory (tenant) ID can be found on the app's **Overview** page in the Azure portal.
+
+ ```json
+ "ClientId": "Enter_the_Application_Id_here",
+ "TenantId": "Enter_the_Tenant_Info_Here"
+ ```
+
+ - `Enter_the_Application_Id_here` is the application (client) ID for the registered application.
+ - Replace `Enter_the_Tenant_Info_Here` with one of the following:
+ - If the application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). The directory (tenant) ID can be found on the app's **Overview** page.
+ - If the application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+ - If the application supports **All Microsoft account users**, replace this value with `common`.
+
+For this quickstart, don't change any other values in the *appsettings.json* file.
+
+## Step 4: Run the sample
+
+1. Open a terminal and change directory to the project folder.
+
+ ```powershell
+ cd webapi
+ ```
+
+1. Run the following command to build and run the project:
+
+ ```powershell
+ dotnet run
+ ```
+
+If the build succeeds, output similar to the following is displayed:
+
+```powershell
+Building...
+info: Microsoft.Hosting.Lifetime[0]
+ Now listening on: https://localhost:{port}
+info: Microsoft.Hosting.Lifetime[0]
+ Now listening on: http://localhost:{port}
+info: Microsoft.Hosting.Lifetime[0]
+ Application started. Press Ctrl+C to shut down.
+...
+```
+
+## How the sample works
+
+The web API receives a token from a client application, and the code in the web API validates the token. This scenario is explained in more detail in [Scenario: Protected web API](scenario-protected-web-api-overview.md).
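To make the flow concrete, the following Python sketch (not part of the sample; full signature, issuer, and audience validation is performed by the Microsoft.Identity.Web middleware) decodes a JWT's payload segment to show where claims such as `scp` live:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode (without verifying!) the middle segment of a JWT so its
    claims can be inspected. Illustration only; the sample's middleware
    performs the real cryptographic validation."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A minimal, made-up token with an unsigned payload:
def segment(obj) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

token = f'{segment({"alg": "none"})}.{segment({"scp": "access_as_user"})}.'
print(decode_jwt_payload(token)["scp"])  # access_as_user
```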
+
+### Startup class
+
+The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process starts. In its `ConfigureServices` method, the `AddMicrosoftIdentityWebApi` extension method provided by *Microsoft.Identity.Web* is called.
+
+```csharp
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApi(Configuration, "AzureAd");
+ }
+```
+
+The `AddAuthentication()` method configures the service to add JwtBearer-based authentication.
+
+The line that contains `.AddMicrosoftIdentityWebApi` adds the Microsoft identity platform authorization to the web API. It's then configured to validate access tokens issued by the Microsoft identity platform based on the information in the `AzureAd` section of the *appsettings.json* configuration file:
+
+| *appsettings.json* key | Description |
+||-|
+| `ClientId` | Application (client) ID of the application registered in the Azure portal. |
+| `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
+| `TenantId` | Name of the tenant or its tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
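Put together, the `AzureAd` section those keys come from might look like the following sketch (all values are placeholders):

```json
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "ClientId": "Enter_the_Application_Id_here",
    "TenantId": "Enter_the_Tenant_Info_Here"
  }
}
```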
+
+The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality:
+
+```csharp
+// The runtime calls this method. Use this method to configure the HTTP request pipeline.
+public void Configure(IApplicationBuilder app, IHostingEnvironment env)
+{
+ // more code
+ app.UseAuthentication();
+ app.UseAuthorization();
+ // more code
+}
+```
+
+### Protecting a controller, a controller's method, or a Razor page
+
+A controller or its methods can be protected by using the `[Authorize]` attribute, which restricts access by allowing only authenticated users. If an unauthenticated user tries to access the controller, an authentication challenge can be started.
+
+```csharp
+namespace webapi.Controllers
+{
+ [Authorize]
+ [RequiredScope("access_as_user")]
+ [ApiController]
+ [Route("[controller]")]
+ public class WeatherForecastController : ControllerBase
+```
+
+### Validation of scope in the controller
+
+The code in the API verifies that the required scopes are in the token by using the `[RequiredScope("access_as_user")]` attribute:
+
+```csharp
+namespace webapi.Controllers
+{
+ [Authorize]
+ [RequiredScope("access_as_user")]
+ [ApiController]
+ [Route("[controller]")]
+ public class WeatherForecastController : ControllerBase
+ {
+ [HttpGet]
+ public IEnumerable<WeatherForecast> Get()
+ {
+ // some code here
+ }
+ }
+}
+```
++
+## Next steps
+
+The following GitHub repository contains the ASP.NET Core web API code sample instructions and more samples that show how to achieve the following:
+
+- Add authentication to a new ASP.NET Core web API.
+- Call the web API from a desktop application.
+- Call downstream APIs like Microsoft Graph and other Microsoft APIs.
+
+> [!div class="nextstepaction"]
+> [ASP.NET Core web API tutorials on GitHub](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2)
active-directory Quickstart Web Api Aspnet Protect Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-api-aspnet-protect-api.md
+
+ Title: "Quickstart: Call an ASP.NET web API that is protected by the Microsoft identity platform"
+description: In this quickstart, learn how to call an ASP.NET web API that's protected by the Microsoft identity platform from a Windows Desktop (WPF) application.
+ Last updated: 04/17/2023
+#Customer intent: As an application developer, I want to know how to call an ASP.NET web API that's protected by the Microsoft identity platform from a Windows desktop (WPF) application.
++
+# Quickstart: Call an ASP.NET web API that is protected by the Microsoft identity platform
+
+The following quickstart uses a code sample that demonstrates how to protect an ASP.NET web API by restricting access to its resources to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
+
+The article also uses a Windows Presentation Foundation (WPF) app to demonstrate how to request an access token to access a web API.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Visual Studio 2022. Download [Visual Studio for free](https://www.visualstudio.com/downloads/).
+
+## Clone or download the sample
+
+The code sample can be obtained in two ways:
+
+* Clone it from your shell or command line:
+
+ ```console
+ git clone https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet.git
+ ```
+
+* [Download it as a ZIP file](https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet/archive/complete.zip).
++
+## Register the web API (TodoListService)
++
+Register your web API in **App registrations** in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+1. Find and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application, for example `AppModelv2-NativeClient-DotNet-TodoListService`. Users of your app might see this name, and you can change it later.
+1. For **Supported account types**, select **Accounts in any organizational directory**.
+1. Select **Register** to create the application.
+1. On the app **Overview** page, look for the **Application (client) ID** value, and then record it for later use. You'll need it to configure the Visual Studio configuration file for this project (that is, `ClientId` in the *TodoListService\appsettings.json* file).
+1. Under **Manage**, select **Expose an API** > **Add a scope**. Accept the proposed Application ID URI (`api://{clientId}`) by selecting **Save and continue**, and then enter the following information:
+
+ 1. For **Scope name**, enter `access_as_user`.
+ 1. For **Who can consent**, ensure that the **Admins and users** option is selected.
+ 1. In the **Admin consent display name** box, enter `Access TodoListService as a user`.
+ 1. In the **Admin consent description** box, enter `Accesses the TodoListService web API as a user`.
+ 1. In the **User consent display name** box, enter `Access TodoListService as a user`.
+ 1. In the **User consent description** box, enter `Accesses the TodoListService web API as a user`.
+ 1. For **State**, keep **Enabled**.
+1. Select **Add scope**.
+
+### Configure the service project
+
+Configure the service project to match the registered web API.
+
+1. Open the solution in Visual Studio, and then open the *appsettings.json* file under the root of the TodoListService project.
+
+1. Replace the value of `Enter_the_Application_Id_here` in both the `ClientId` and `Audience` properties with the application (client) ID value of the application you registered in the **App registrations** portal.
+
+### Add the new scope to the app.config file
+
+To add the new scope to the TodoListClient *app.config* file, follow these steps:
+
+1. In the TodoListClient project root folder, open the *app.config* file.
+
+1. Paste the Application ID from the application that you registered for your TodoListService project in the `TodoListServiceScope` parameter, replacing the `{Enter the Application ID of your TodoListService from the app registration portal}` string.
+
+ > [!NOTE]
+ > Make sure that the Application ID uses the following format: `api://{TodoListService-Application-ID}/access_as_user` (where `{TodoListService-Application-ID}` is the GUID representing the Application ID for your TodoListService app).
+
+## Register the web app (TodoListClient)
+
+Register your TodoListClient app in **App registrations** in the Azure portal, and then configure the code in the TodoListClient project. If the client and server are considered the same application, you can reuse the application that's registered in step 2. Use the same application if you want users to sign in with a personal Microsoft account.
+
+### Register the app
+
+To register the TodoListClient app, follow these steps:
+
+1. Go to the Microsoft identity platform for developers [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) portal.
+1. Select **New registration**.
+1. When the **Register an application** page opens, enter your application's registration information:
+
+ 1. In the **Name** section, enter a meaningful application name that will be displayed to users of the app (for example, **NativeClient-DotNet-TodoListClient**).
+ 1. For **Supported account types**, select **Accounts in any organizational directory**.
+ 1. Select **Register** to create the application.
+
+ > [!NOTE]
+ > In the TodoListClient project *app.config* file, the default value of `ida:Tenant` is set to `common`. The possible values are:
+ >
+ > - `common`: You can sign in by using a work or school account or a personal Microsoft account (because you selected **Accounts in any organizational directory** in a previous step).
+ > - `organizations`: You can sign in by using a work or school account.
+ > - `consumers`: You can sign in only by using a Microsoft personal account.
+
+1. On the app **Overview** page, select **Authentication**, and then complete these steps to add a platform:
+
+ 1. Under **Platform configurations**, select the **Add a platform** button.
+ 1. For **Mobile and desktop applications**, select **Mobile and desktop applications**.
+ 1. For **Redirect URIs**, select the `https://login.microsoftonline.com/common/oauth2/nativeclient` check box.
+ 1. Select **Configure**.
+
+1. Select **API permissions**, and then complete these steps to add permissions:
+
+ 1. Select the **Add a permission** button.
+ 1. Select the **My APIs** tab.
+ 1. In the list of APIs, select **AppModelv2-NativeClient-DotNet-TodoListService API** or the name you entered for the web API.
+ 1. Select the **access_as_user** permission check box if it's not already selected. Use the Search box if necessary.
+ 1. Select the **Add permissions** button.
+
+### Configure your project
+
+Configure your TodoListClient project by adding the Application ID to the *app.config* file.
+
+1. In the **App registrations** portal, on the **Overview** page, copy the value of the **Application (client) ID**.
+
+1. From the TodoListClient project root folder, open the *app.config* file, and then paste the Application ID value in the `ida:ClientId` parameter.
+
+## Run your projects
+
+Start both projects. For Visual Studio users:
+
+1. Right-click the Visual Studio solution and select **Properties**.
+
+1. In **Common Properties**, select **Startup Project**, and then select **Multiple startup projects**.
+
+1. For both projects, choose **Start** as the action.
+
+1. Ensure that the TodoListService service starts first by using the up arrow to move it to the first position in the list.
+
+Sign in to run your TodoListClient project.
+
+1. Press F5 to start the projects. The service page opens, as well as the desktop application.
+
+1. In the TodoListClient, at the upper right, select **Sign in**, and then sign in with the same credentials you used to register your application, or sign in as a user in the same directory.
+
+ If you're signing in for the first time, you might be prompted to consent to the TodoListService web API.
+
+ To help you access the TodoListService web API and manipulate the *To-Do* list, the sign-in also requests an access token to the *access_as_user* scope.
+
+## Pre-authorize your client application
+
+You can allow users from other directories to access your web API by pre-authorizing the client application to access your web API. You do this by adding the Application ID from the client app to the list of pre-authorized applications for your web API. By adding a pre-authorized client, you're allowing users to access your web API without having to provide consent.
+
+1. In the **App registrations** portal, open the properties of your TodoListService app.
+1. In the **Expose an API** section, under **Authorized client applications**, select **Add a client application**.
+1. In the **Client ID** box, paste the Application ID of the TodoListClient app.
+1. In the **Authorized scopes** section, select the scope for the `api://<Application ID>/access_as_user` web API.
+1. Select **Add application**.
+
+### Run your project
+
+1. Press <kbd>F5</kbd> to run your project. Your TodoListClient app opens.
+1. At the upper right, select **Sign in**, and then sign in by using a personal Microsoft account, such as a *live.com* or *hotmail.com* account, or a work or school account.
+
+## Optional: Limit sign-in access to certain users
+
+By default, any personal accounts, such as *outlook.com* or *live.com* accounts, or work or school accounts from organizations that are integrated with Azure AD can request tokens and access your web API.
+
+To specify who can sign in to your application, change the `TenantId` property in the *appsettings.json* file.
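For example, limiting sign-in to a single Azure AD tenant might look like the following fragment (the GUID is a placeholder for your directory (tenant) ID):

```json
{
  "TenantId": "00000000-0000-0000-0000-000000000000"
}
```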
++
+## Next steps
+
+Learn more about the protected web API scenario that the Microsoft identity platform supports.
+> [!div class="nextstepaction"]
+> [Protected web API scenario](scenario-protected-web-api-overview.md)
active-directory Quickstart Web App Aspnet Core Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-aspnet-core-sign-in.md
+
+ Title: "Quickstart: Sign in users and call the Microsoft Graph API from an ASP.NET Core web app"
+description: Learn how an ASP.NET Core web app leverages Microsoft.Identity.Web to implement Microsoft sign-in using OpenID Connect and call Microsoft Graph
+ Last updated: 04/16/2023
+#Customer intent: As an application developer, I want to know how to write an ASP.NET Core web app that can sign in personal Microsoft accounts and work/school accounts from any Azure Active Directory instance, then access their data in Microsoft Graph on their behalf.
++
+# Quickstart: Sign in users and call the Microsoft Graph API from an ASP.NET Core web app
+
+The following quickstart uses an ASP.NET Core web app code sample to demonstrate how to sign in users from any Azure Active Directory (Azure AD) organization.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+## Prerequisites
+
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+* [.NET Core SDK 6.0+](https://dotnet.microsoft.com/download)
+
+## Register and download your quickstart application
+
+### Step 1: Register your application
++
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. For **Name**, enter a name for the application. For example, enter **AspNetCore-Quickstart**. Users of the app will see this name, and it can be changed later.
+1. Set the **Redirect URI** type to **Web** and value to `https://localhost:44321/signin-oidc`.
+1. Select **Register**.
+1. Under **Manage**, select **Authentication**.
+1. For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
+1. Under **Implicit grant and hybrid flows**, select **ID tokens**.
+1. Select **Save**.
+1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
+1. Enter a **Description**, for example `clientsecret1`.
+1. Select **In 1 year** for the secret's expiration.
+1. Select **Add** and immediately record the secret's **Value** for use in a later step. The secret value is *never displayed again* and is irretrievable by any other means. Record it in a secure location as you would any password.
+
+### Step 2: Download the ASP.NET Core project
+
+[Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip)
+
+### Step 3: Configure your ASP.NET Core project
+
+1. Extract the *.zip* file to a local folder that's close to the root of the disk to avoid errors caused by path length limitations on Windows. For example, extract to *C:\Azure-Samples*.
+1. Open the solution in the chosen code editor.
+1. In *appsettings.json*, replace the values of `ClientId` and `TenantId`. The application (client) ID and the directory (tenant) ID can be found on the app's **Overview** page in the Azure portal.
+
+    ```json
+    "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]",
+    "ClientId": "Enter_the_Application_Id_here",
+    "TenantId": "common",
+    "ClientSecret": "Enter_the_Client_Secret_Here",
+    ```
+
+    - `Enter_the_Application_Id_here` is the application (client) ID for the registered application.
+    - Replace the `TenantId` value with one of the following:
+ - If the application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). The directory (tenant) ID can be found on the app's **Overview** page.
+ - If the application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+ - If the application supports **All Microsoft account users**, leave this value as `common`.
+ - Replace `Enter_the_Client_Secret_Here` with the **Client secret** that was created and recorded in an earlier step.
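+
+For reference, a completed `AzureAd` section has the following shape. The GUID and secret values here are illustrative placeholders only, and `CallbackPath` is an assumption that mirrors the redirect URI registered earlier:
+
+```json
+"AzureAd": {
+  "Instance": "https://login.microsoftonline.com/",
+  "Domain": "contoso.onmicrosoft.com",
+  "TenantId": "common",
+  "ClientId": "11111111-2222-3333-4444-555555555555",
+  "ClientSecret": "recorded-secret-value",
+  "CallbackPath": "/signin-oidc"
+}
+```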
+
+For this quickstart, don't change any other values in the *appsettings.json* file.
+
+### Step 4: Build and run the application
+
+Build and run the app in Visual Studio by selecting the **Debug** menu > **Start Debugging**, or by pressing the F5 key.
+
+A prompt for credentials appears, followed by a request for consent to the permissions that the app requires. Select **Accept** on the consent prompt.
+
+After the requested permissions are granted, the app confirms that sign-in succeeded. The user's account email address is displayed in the *API result* section of the page; it was retrieved by using the Microsoft Graph API.
++
+## More information
+
+This section gives an overview of the code required to sign in users and call the Microsoft Graph API on their behalf. This overview can be useful if you want to understand how the code works and what its main arguments are, or if you want to add sign-in and Microsoft Graph calls to an existing ASP.NET Core application. It uses [Microsoft.Identity.Web](microsoft-identity-web.md), which is a wrapper around [MSAL.NET](msal-overview.md).
+
+### How the sample works
+
+![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
+
+### Startup class
+
+The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process starts:
+
+```csharp
+ // Get the scopes from the configuration (appsettings.json)
+ var initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
+
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // Add sign-in with Microsoft
+ services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
+
+ // Add the possibility of acquiring a token to call a protected web API
+ .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
+
+ // Enables controllers and pages to get GraphServiceClient by dependency injection
+ // And use an in memory token cache
+ .AddMicrosoftGraph(Configuration.GetSection("DownstreamApi"))
+ .AddInMemoryTokenCaches();
+
+ services.AddControllersWithViews(options =>
+ {
+ var policy = new AuthorizationPolicyBuilder()
+ .RequireAuthenticatedUser()
+ .Build();
+ options.Filters.Add(new AuthorizeFilter(policy));
+ });
+
+ // Enables a UI and controller for sign in and sign out.
+ services.AddRazorPages()
+ .AddMicrosoftIdentityUI();
+ }
+```
+
+The `AddAuthentication()` method configures the service to add cookie-based authentication. Cookie-based authentication is used in browser scenarios, and the challenge is set to OpenID Connect.
+
+The line that contains `.AddMicrosoftIdentityWebApp` adds Microsoft identity platform authentication to the application. The application is then configured to sign in users based on the following information in the `AzureAd` section of the *appsettings.json* configuration file:
+
+| *appsettings.json* key | Description |
+||-|
+| `ClientId` | Application (client) ID of the application registered in the Azure portal. |
+| `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
+| `TenantId` | Name of your tenant or the tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
+
+The `EnableTokenAcquisitionToCallDownstreamApi` method enables the application to acquire a token to call protected web APIs. `AddMicrosoftGraph` enables the controllers or Razor pages to get a `GraphServiceClient` directly (by dependency injection), and the `AddInMemoryTokenCaches` method enables the app to benefit from a token cache.
+
+The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`:
+
+```csharp
+app.UseAuthentication();
+app.UseAuthorization();
+
+app.UseEndpoints(endpoints =>
+{
+ endpoints.MapControllerRoute(
+ name: "default",
+ pattern: "{controller=Home}/{action=Index}/{id?}");
+ endpoints.MapRazorPages();
+});
+```
+
+### Protect a controller or a controller's method
+
+The controller or its methods can be protected by applying the `[Authorize]` attribute to the controller's class or one or more of its methods. This `[Authorize]` attribute restricts access by allowing only authenticated users. If the user isn't already authenticated, an authentication challenge can be started to access the controller. In this quickstart, the scopes are read from the configuration file:
+
+```csharp
+[AuthorizeForScopes(ScopeKeySection = "DownstreamApi:Scopes")]
+public async Task<IActionResult> Index()
+{
+ var user = await _graphServiceClient.Me.GetAsync();
+ ViewData["ApiResult"] = user.DisplayName;
+
+ return View();
+}
+```
++
+## Next steps
+
+The following GitHub repository contains the ASP.NET Core code sample referenced in this quickstart and more samples that show how to achieve the following:
+
+- Add authentication to a new ASP.NET Core web application.
+- Call Microsoft Graph, other Microsoft APIs, or your own web APIs.
+- Add authorization.
+- Sign in users in national clouds or with social identities.
+
+> [!div class="nextstepaction"]
+> [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/)
active-directory Quickstart Web App Aspnet Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-aspnet-sign-in.md
+
+ Title: "Quickstart: Sign in users and call the Microsoft Graph API from an ASP.NET web app"
+description: Download and run a code sample that shows how an ASP.NET web app can sign in Azure AD users.
+ Last updated : 07/28/2023
+# Customer intent: As an application developer, I want to see a sample ASP.NET web app that can sign in Azure AD users.
++
+# Quickstart: Sign in users and call the Microsoft Graph API from an ASP.NET web app
+
+In this quickstart, you download and run a code sample that demonstrates an ASP.NET web application that can sign in users with Azure Active Directory (Azure AD) accounts.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/)
+* [.NET Framework 4.7.2+](https://dotnet.microsoft.com/download/visual-studio-sdks)
+
+## Register and download the app
++
+You have two options to start building your application: automatic or manual configuration.
+
+### Automatic configuration
+
+If you want to automatically configure your app and then download the code sample, follow these steps:
+
+1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/AngularSpaQuickstartPage/sourceType/docs) quickstart experience.
+1. Enter a name for your application and select **Register**.
+1. Follow the instructions to download and automatically configure your new application in one click.
+
+### Manual configuration
+
+If you want to manually configure your application and code sample, use the following procedures.
+
+#### Step 1: Register your application
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. For **Name**, enter a name for your application. For example, enter **ASPNET-Quickstart**. Users of your app will see this name, and you can change it later.
+1. Set the **Redirect URI** type to **Web** and value to `https://localhost:44368/`.
+1. Select **Register**.
+1. Under **Manage**, select **Authentication**.
+1. In the **Implicit grant and hybrid flows** section, select **ID tokens**.
+1. Select **Save**.
+
+#### Step 2: Download the project
+
+[Download the ASP.NET code sample](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-DotNet/archive/master.zip)
+++
+#### Step 3: Run the project
+
+1. Extract the .zip file to a local folder that's close to the root folder. For example, extract to *C:\Azure-Samples*.
+
+ We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
+2. Open the solution in Visual Studio (*AppModelv2-WebApp-OpenIDConnect-DotNet.sln*).
+3. Depending on the version of Visual Studio, you might need to right-click the project **AppModelv2-WebApp-OpenIDConnect-DotNet** and then select **Restore NuGet packages**.
+4. Open the Package Manager Console by selecting **View** > **Other Windows** > **Package Manager Console**. Then run `Update-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform -r`.
+
+5. Edit *appsettings.json* and replace the values of `ClientId`, `TenantId`, and `RedirectUri`:
+    ```json
+    "ClientId": "Enter_the_Application_Id_here",
+    "TenantId": "Enter_the_Tenant_Info_Here",
+    "RedirectUri": "https://localhost:44368/"
+    ```
+ In that code:
+
+ - `Enter_the_Application_Id_here` is the application (client) ID of the app registration that you created earlier. Find the application (client) ID on the app's **Overview** page in **App registrations** in the Azure portal.
+ - `Enter_the_Tenant_Info_Here` is one of the following options:
+ - If your application supports **My organization only**, replace this value with the directory (tenant) ID or tenant name (for example, `contoso.onmicrosoft.com`). Find the directory (tenant) ID on the app's **Overview** page in **App registrations** in the Azure portal.
+ - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+ - If your application supports **All Microsoft account users**, replace this value with `common`.
+    - `RedirectUri` is the **Redirect URI** value you entered earlier in **App registrations** in the Azure portal.
+
+## More information
+
+This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET application.
++
+### How the sample works
+
+![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
+
+### OWIN middleware NuGet packages
+
+You can set up the authentication pipeline with cookie-based authentication by using OpenID Connect in ASP.NET with OWIN middleware packages. You can install these packages by running the following commands in Package Manager Console within Visual Studio:
+
+```powershell
+Install-Package Microsoft.Identity.Web.Owin
+Install-Package Microsoft.Identity.Web.GraphServiceClient
+Install-Package Microsoft.Owin.Security.Cookies
+```
+
+### OWIN startup class
+
+The OWIN middleware uses a *startup class* that runs when the hosting process starts. In this quickstart, the *Startup.cs* file is in the root folder. The following code shows the parameters that this quickstart uses:
+
+```csharp
+ public void Configuration(IAppBuilder app)
+ {
+ app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
+
+ app.UseCookieAuthentication(new CookieAuthenticationOptions());
+ OwinTokenAcquirerFactory factory = TokenAcquirerFactory.GetDefaultInstance<OwinTokenAcquirerFactory>();
+
+ app.AddMicrosoftIdentityWebApp(factory);
+ factory.Services
+ .Configure<ConfidentialClientApplicationOptions>(options => { options.RedirectUri = "https://localhost:44368/"; })
+ .AddMicrosoftGraph()
+ .AddInMemoryTokenCaches();
+ factory.Build();
+ }
+```
+
+|Where | Description |
+|||
+| `ClientId` | The application ID from the application registered in the Azure portal. |
+| `Authority` | The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}/v2.0` for the public cloud. In that URL, *{tenant}* is the name of your tenant, your tenant ID, or `common` for a reference to the common endpoint. (The common endpoint is used for multitenant applications.) |
+| `RedirectUri` | The URL where users are sent after authentication against the Microsoft identity platform. |
+| `PostLogoutRedirectUri` | The URL where users are sent after signing off. |
+| `Scope` | The list of scopes being requested, separated by spaces. |
+| `ResponseType` | Requests that the response from authentication contains an authorization code and an ID token. |
+| `TokenValidationParameters` | A list of parameters for token validation. In this case, `ValidateIssuer` is set to `false` to indicate that it can accept sign-ins from any personal, work, or school account type. |
+| `Notifications` | A list of delegates that can be run on `OpenIdConnect` messages. |
+
+### Authentication challenge
+
+You can force a user to sign in by requesting an authentication challenge in your controller:
+
+```csharp
+public void SignIn()
+{
+ if (!Request.IsAuthenticated)
+ {
+ HttpContext.GetOwinContext().Authentication.Challenge(
+ new AuthenticationProperties{ RedirectUri = "/" },
+ OpenIdConnectAuthenticationDefaults.AuthenticationType);
+ }
+}
+```
+
+> [!TIP]
+> Requesting an authentication challenge by using this method is optional. You'd normally use it when you want a view to be accessible from both authenticated and unauthenticated users. Alternatively, you can protect controllers by using the method described in the next section.
+
+### Attribute for protecting a controller or controller actions
+
+You can protect a controller or controller actions by using the `[Authorize]` attribute. This attribute restricts access to the controller or actions by allowing only authenticated users to access the actions in the controller. An authentication challenge will then happen automatically when an unauthenticated user tries to access one of the actions or controllers decorated by the `[Authorize]` attribute.
+
+### Call Microsoft Graph from the controller
+
+You can call Microsoft Graph from the controller by getting an instance of `GraphServiceClient` through the `GetGraphServiceClient` extension method on the controller, as in the following code:
+
+```csharp
+ try
+ {
+ var me = await this.GetGraphServiceClient().Me.GetAsync();
+ ViewBag.Username = me.DisplayName;
+ }
+ catch (ServiceException graphEx) when (graphEx.InnerException is MicrosoftIdentityWebChallengeUserException)
+ {
+ HttpContext.GetOwinContext().Authentication.Challenge(OpenIdConnectAuthenticationDefaults.AuthenticationType);
+ return View();
+ }
+```
++
+## Next steps
+
+For a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart, try out the ASP.NET tutorial.
+
+> [!div class="nextstepaction"]
+> [Add sign-in to an ASP.NET web app](tutorial-v2-asp-webapp.md)
active-directory Quickstart Web App Java Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-java-sign-in.md
+
+ Title: "Quickstart: Sign in users and call the Microsoft Graph API from a Java web app"
+description: In this quickstart, you'll learn how to add sign-in with Microsoft to a Java web application by using OpenID Connect.
+ Last updated : 01/18/2023
+# Quickstart: Sign in users and call the Microsoft Graph API from a Java web app
+
+In this quickstart, you download and run a code sample that demonstrates how a Java web application can sign in users and call the Microsoft Graph API. Users from any Azure Active Directory (Azure AD) organization can sign in to the application.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+## Prerequisites
+
+To run this sample, you need:
+
+- [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or later.
+- [Maven](https://maven.apache.org/).
++
+## Register and download your quickstart app
++
+There are two ways to start your quickstart application: express (option 1) and manual (option 2).
+
+### Option 1: Register and automatically configure your app, and then download the code sample
+
+1. Go to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/AngularSpaQuickstartPage/sourceType/docs) quickstart experience.
+1. Enter a name for your application, and then select **Register**.
+1. Follow the instructions in the portal's quickstart experience to download the automatically configured application code.
+
+### Option 2: Register and manually configure your application and code sample
+
+#### Step 1: Register your application
+
+To register your application and manually add the app's registration information to it, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations**.
+1. Select **New registration**.
+1. Enter a **Name** for your application, for example **java-webapp**. Users of your app might see this name. You can change it later.
+1. Select **Register**.
+1. On the **Overview** page, note the **Application (client) ID** and the **Directory (tenant) ID**. You'll need these values later.
+1. Under **Manage**, select **Authentication**.
+1. Select **Add a platform** > **Web**.
+1. In the **Redirect URIs** section, enter `https://localhost:8443/msal4jsample/secure/aad`.
+1. Select **Configure**.
+1. In the **Web** section, under **Redirect URIs**, enter `https://localhost:8443/msal4jsample/graph/me` as a second redirect URI.
+1. Under **Manage**, select **Certificates & secrets**. In the **Client secrets** section, select **New client secret**.
+1. Enter a key description (for example, *app secret*), leave the default expiration, and select **Add**.
+1. Note the **Value** of the client secret. You'll need it later.
+
+#### Step 2: Download the code sample
+
+[Download the code sample](https://github.com/Azure-Samples/ms-identity-java-webapp/archive/master.zip)
+
+#### Step 3: Configure the code sample
+1. Extract the zip file to a local folder.
+2. *Optional.* If you use an integrated development environment, open the sample in that environment.
+3. Open the *application.properties* file. You can find it in the *src/main/resources/* folder. Replace the values in the fields `aad.clientId`, `aad.authority`, and `aad.secretKey` with the application ID, tenant ID, and client secret values, respectively. Here's what it should look like:
+
+ ```file
+ aad.clientId=Enter_the_Application_Id_here
+ aad.authority=https://login.microsoftonline.com/Enter_the_Tenant_Info_Here/
+ aad.secretKey=Enter_the_Client_Secret_Here
+ aad.redirectUriSignin=https://localhost:8443/msal4jsample/secure/aad
+ aad.redirectUriGraph=https://localhost:8443/msal4jsample/graph/me
+ aad.msGraphEndpointHost="https://graph.microsoft.com/"
+ ```
+ In the previous code:
+
+ - `Enter_the_Application_Id_here` is the application ID for the application you registered.
+ - `Enter_the_Client_Secret_Here` is the **Client Secret** you created in **Certificates & secrets** for the application you registered.
+ - `Enter_the_Tenant_Info_Here` is the **Directory (tenant) ID** value of the application you registered.
+4. To use HTTPS with localhost, provide the `server.ssl.key` properties. To generate a self-signed certificate, use the keytool utility (included in the JRE). Here's an example:
+
+    ```
+    keytool -genkeypair -alias testCert -keyalg RSA -storetype PKCS12 -keystore keystore.p12 -storepass password
+    ```
+
+    Then add the following properties to your *application.properties* file:
+
+    ```file
+    server.ssl.key-store-type=PKCS12
+    server.ssl.key-store=classpath:keystore.p12
+    server.ssl.key-store-password=password
+    server.ssl.key-alias=testCert
+    ```
+5. Put the generated keystore file in the *resources* folder.
+
+#### Step 4: Run the code sample
+
+To run the project, take one of these steps:
+
+- Run it directly from your IDE by using the embedded Spring Boot server.
+- Package it to a WAR file by using [Maven](https://maven.apache.org/plugins/maven-war-plugin/usage.html), and then deploy it to a J2EE container solution like [Apache Tomcat](http://tomcat.apache.org/).
+
+##### Running the project from an IDE
+
+To run the web application from an IDE, select **Run**, and then go to the home page of the project. For this sample, the standard home page URL is `https://localhost:8443`.
+
+1. On the front page, select the **Login** button to redirect users to Azure Active Directory and prompt them for credentials.
+
+1. After users are authenticated, they're redirected to `https://localhost:8443/msal4jsample/secure/aad`. They're now signed in, and the page will show information about the user account. The sample UI has these buttons:
+ - **Sign Out**: Signs the current user out of the application and redirects that user to the home page.
+ - **Show User Info**: Acquires a token for Microsoft Graph and calls Microsoft Graph with a request that contains the token, which returns basic information about the signed-in user.
+
+##### Running the project from Tomcat
+
+If you want to deploy the web sample to Tomcat, make a couple of changes to the source code.
+
+1. Open *ms-identity-java-webapp/src/main/java/com.microsoft.azure.msalwebsample/MsalWebSampleApplication*.
+
+ - Delete all source code and replace it with this code:
+
+ ```Java
+ package com.microsoft.azure.msalwebsample;
+
+ import org.springframework.boot.SpringApplication;
+ import org.springframework.boot.autoconfigure.SpringBootApplication;
+ import org.springframework.boot.builder.SpringApplicationBuilder;
+ import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
+
+ @SpringBootApplication
+ public class MsalWebSampleApplication extends SpringBootServletInitializer {
+
+ public static void main(String[] args) {
+ SpringApplication.run(MsalWebSampleApplication.class, args);
+ }
+
+ @Override
+ protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
+ return builder.sources(MsalWebSampleApplication.class);
+ }
+ }
+ ```
+
+2. Tomcat's default HTTP port is 8080, but you need an HTTPS connection over port 8443. To configure this setting:
+ - Go to *tomcat/conf/server.xml*.
+    - Search for the `<Connector>` tag, and replace the existing connector with this one:
+
+ ```xml
+ <Connector
+ protocol="org.apache.coyote.http11.Http11NioProtocol"
+ port="8443" maxThreads="200"
+ scheme="https" secure="true" SSLEnabled="true"
+ keystoreFile="C:/Path/To/Keystore/File/keystore.p12" keystorePass="KeystorePassword"
+ clientAuth="false" sslProtocol="TLS"/>
+ ```
+
+3. Open a Command Prompt window. Go to the root folder of this sample (where the *pom.xml* file is located), and run `mvn package` to build the project.
+    - This command generates a *msal-web-sample-0.1.0.war* file in the */target* directory.
+    - Rename this file to *msal4jsample.war*.
+    - Deploy the WAR file by using Tomcat or any other J2EE container solution.
+    - To deploy *msal4jsample.war*, copy it to the */webapps/* directory in your Tomcat installation, and then start the Tomcat server.
+
+4. After the file is deployed, go to `https://localhost:8443/msal4jsample` by using a browser.
+
+> [!IMPORTANT]
+> This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before using the application in a production environment. For more information on how to use a certificate, see [Certificate credentials for application authentication](active-directory-certificate-credentials.md).
+
+## More information
+
+### How the sample works
+![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-java-webapp/java-quickstart.svg)
+
+### Get MSAL
+
+MSAL for Java (MSAL4J) is the Java library used to sign in users and request tokens that are used to access an API that's protected by the Microsoft identity platform.
+
+Add MSAL4J as a dependency to your application by using Maven or Gradle. To do so, make the following changes to the application's *pom.xml* (Maven) or *build.gradle* (Gradle) file.
+
+In pom.xml:
+
+```xml
+<dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>msal4j</artifactId>
+ <version>1.0.0</version>
+</dependency>
+```
+
+In build.gradle:
+
+```gradle
+compile group: 'com.microsoft.azure', name: 'msal4j', version: '1.0.0'
+```
+
+### Initialize MSAL
+
+Add a reference to MSAL for Java by adding the following import at the start of the file where you'll use MSAL4J:
+
+```Java
+import com.microsoft.aad.msal4j.*;
+```
++
+## Next steps
+
+For a more in-depth discussion of building web apps that sign in users on the Microsoft identity platform, see the multipart scenario series:
+
+> [!div class="nextstepaction"]
+> [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md?tabs=java)
active-directory Quickstart Web App Nodejs Msal Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-nodejs-msal-sign-in.md
+
+ Title: "Quickstart: Sign in users and call the Microsoft Graph API from a Node.js web application using MSAL Node"
+description: In this quickstart, you learn how to implement authentication with a Node.js web app and the Microsoft Authentication Library (MSAL) for Node.js.
+ Last updated : 07/27/2023
+# Customer intent: As an application developer, I want to know how to set up authentication in a web application built using Node.js and MSAL Node.
++
+# Quickstart: Sign in users and call the Microsoft Graph API from a Node.js web application using MSAL Node
+
+In this quickstart, you download and run a code sample that demonstrates how a Node.js web app can sign in users by using the authorization code flow. The code sample also demonstrates how to get an access token to call the Microsoft Graph API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+This quickstart uses the Microsoft Authentication Library for Node.js (MSAL Node) with the authorization code flow.
+
+## Prerequisites
+
+* An Azure subscription. [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [Node.js](https://nodejs.org/en/download/)
+* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
++
+## Register and download your quickstart application
+
+#### Step 1: Register your application
++
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
+1. Under **Supported account types**, select **Accounts in this organizational directory only**.
+1. Set the **Redirect URI** type to **Web** and value to `http://localhost:3000/auth/redirect`.
+1. Select **Register**.
+1. On the app **Overview** page, note the **Application (client) ID** value for later use.
+1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**. Leave the description blank, accept the default expiration, and then select **Add**.
+1. Note the value of **Client secret** for later use.
+
+#### Step 2: Download the project
+
+To run the project with a web server by using Node.js, [download the core project files](https://github.com/Azure-Samples/ms-identity-node/archive/main.zip).
++
+#### Step 3: Configure your Node app
+
+Extract the project, open the *ms-identity-node-main* folder, and then open the *.env* file under the *App* folder. Replace the placeholder values as follows:
+
+| Variable | Description | Example(s) |
+|--|--||
+| `Enter_the_Cloud_Instance_Id_Here` | The Azure cloud instance in which your application is registered | `https://login.microsoftonline.com/` (include the trailing forward-slash) |
+| `Enter_the_Tenant_Info_here` | Tenant ID or Primary domain | `contoso.microsoft.com` or `cbe899ec-5f5c-4efe-b7a0-599505d3d54f` |
+| `Enter_the_Application_Id_Here` | Client ID of the application you registered | `cbe899ec-5f5c-4efe-b7a0-599505d3d54f` |
+| `Enter_the_Client_Secret_Here` | Client secret of the application you registered | `WxvhStRfDXoEiZQj1qCy` |
+| `Enter_the_Graph_Endpoint_Here` | The Microsoft Graph API cloud instance that your app will call | `https://graph.microsoft.com/` (include the trailing forward-slash) |
+| `Enter_the_Express_Session_Secret_Here` | A random string of characters used to sign the Express session cookie | `WxvhStRfDXoEiZQj1qCy` |
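+
+The Express session secret is used to sign the session cookie so the server can detect tampering. As a rough illustration of the idea only (this isn't the sample's code; Express signs cookies internally), a value can be signed and verified with an HMAC derived from the secret:
+
+```javascript
+const crypto = require("node:crypto");
+
+// Append an HMAC computed with the server-side secret, and verify it on
+// the way back in. A tampered value or wrong secret fails verification.
+function sign(value, secret) {
+  const mac = crypto.createHmac("sha256", secret).update(value).digest("base64url");
+  return `${value}.${mac}`;
+}
+
+function verify(signedValue, secret) {
+  const idx = signedValue.lastIndexOf(".");
+  const value = signedValue.slice(0, idx);
+  return sign(value, secret) === signedValue ? value : null;
+}
+
+const secret = "6DP6v09eLiW7f1E65B8k";
+const cookie = sign("session-id-123", secret);
+console.log(verify(cookie, secret)); // the original value
+console.log(verify(cookie, "wrong-secret")); // null
+```
+
+Because verification requires the secret, a client that holds the cookie can't forge a different session value without also knowing the secret.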
+
+Your file should look similar to the following:
+
+```text
+CLOUD_INSTANCE=https://login.microsoftonline.com/
+TENANT_ID=cbe899ec-5f5c-4efe-b7a0-599505d3d54f
+CLIENT_ID=fa29b4c9-7675-4b61-8a0a-bf7b2b4fda91
+CLIENT_SECRET=WxvhStRfDXoEiZQj1qCy
+
+REDIRECT_URI=http://localhost:3000/auth/redirect
+POST_LOGOUT_REDIRECT_URI=http://localhost:3000
+
+GRAPH_API_ENDPOINT=https://graph.microsoft.com/
+
+EXPRESS_SESSION_SECRET=6DP6v09eLiW7f1E65B8k
+```
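+
+The app joins `CLOUD_INSTANCE` and `TENANT_ID` to form the authority URL that MSAL Node authenticates against, which is why the trailing forward-slash on `CLOUD_INSTANCE` matters. As a sketch (the helper function is hypothetical; the object shape follows MSAL Node's `auth` configuration):
+
+```javascript
+// Hypothetical helper: build an MSAL Node auth config from .env-style values.
+function buildMsalConfig(env) {
+  return {
+    auth: {
+      clientId: env.CLIENT_ID,
+      // Concatenation only works because CLOUD_INSTANCE ends with "/".
+      authority: env.CLOUD_INSTANCE + env.TENANT_ID,
+      clientSecret: env.CLIENT_SECRET,
+    },
+  };
+}
+
+const config = buildMsalConfig({
+  CLOUD_INSTANCE: "https://login.microsoftonline.com/",
+  TENANT_ID: "cbe899ec-5f5c-4efe-b7a0-599505d3d54f",
+  CLIENT_ID: "fa29b4c9-7675-4b61-8a0a-bf7b2b4fda91",
+  CLIENT_SECRET: "WxvhStRfDXoEiZQj1qCy",
+});
+
+console.log(config.auth.authority);
+// https://login.microsoftonline.com/cbe899ec-5f5c-4efe-b7a0-599505d3d54f
+```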
++
+#### Step 4: Run the project
+
+Run the project by using Node.js.
+
+1. To start the server, run the following commands from within the project directory:
+
+ ```console
+ cd App
+ npm install
+ npm start
+ ```
+
+1. Go to `http://localhost:3000/`.
+
+1. Select **Sign in** to start the sign-in process.
+
+ The first time you sign in, you're prompted to provide your consent to allow the application to sign you in and access your profile. After you're signed in successfully, you'll be redirected back to the application home page.
+
+## More information
+
+### How the sample works
+
+The sample hosts a web server on localhost, port 3000. When a web browser accesses this address, the app renders the home page. When the user selects **Sign in**, the app redirects the browser to the Azure AD sign-in screen via the URL generated by the MSAL Node library. After the user consents, the browser redirects the user back to the application home page along with an ID token and an access token.
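The redirect described above is a standard OAuth 2.0 authorization-code request. As a rough, platform-neutral sketch (shown in Python for illustration only; every parameter value here is a placeholder, not the sample's actual configuration), the URL that MSAL Node generates looks something like this:

```python
from urllib.parse import urlencode

# Placeholder values for illustration -- not the sample's real configuration.
params = {
    "client_id": "fa29b4c9-7675-4b61-8a0a-bf7b2b4fda91",
    "response_type": "code",                       # authorization code flow
    "redirect_uri": "http://localhost:3000/auth/redirect",
    "scope": "openid profile",
    "state": "randomly-generated-csrf-token",      # protects against CSRF
}
auth_url = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
    + urlencode(params)
)
print(auth_url)
```

In the real sample, MSAL Node builds and signs off on this URL for you; the sketch only shows which pieces of configuration end up in the redirect.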
+
+### MSAL Node
+
+The MSAL Node library signs in users and requests the tokens that are used to access an API protected by the Microsoft identity platform. You can download the latest version by using the Node.js Package Manager (npm):
+
+```console
+npm install @azure/msal-node
+```
+
+## Next steps
+
+Learn more about the web app scenario that the Microsoft identity platform supports:
+> [!div class="nextstepaction"]
+> [Web app that signs in users scenario](scenario-web-app-sign-user-overview.md)
active-directory Quickstart Web App Python Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-web-app-python-sign-in.md
+
+ Title: "Quickstart: Sign in users and call the Microsoft Graph API from a Python web app"
+description: In this quickstart, learn how a Python web app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
+Last updated : 07/28/2023
+# Quickstart: Sign in users and call the Microsoft Graph API from a Python web app
+
+In this quickstart, you download and run a code sample that demonstrates how a Python web application can sign in users and call the Microsoft Graph API. Users with a personal Microsoft account or an account in any Azure Active Directory (Azure AD) organization can sign in to the application.
+
+The following diagram displays how the sample app works:
+
+![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-python-webapp/topology.png)
+
+1. The application uses the [`identity` package](https://pypi.org/project/identity/) to obtain an access token from the Microsoft identity platform.
+2. The access token is used as a bearer token to authenticate the user when calling the Microsoft Graph API.
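Step 2 boils down to attaching the access token as a bearer credential on the Graph request. A minimal sketch (the token value is a placeholder; the real app obtains one via the `identity` package, and the network call itself is deliberately omitted):

```python
import urllib.request

# Placeholder token for illustration; the real app gets one from the identity package.
access_token = "eyJ0eXAiOiJKV1QiLCJhbGciOi..."

# The token travels in the Authorization header as a bearer credential.
request = urllib.request.Request(
    "https://graph.microsoft.com/v1.0/me",
    headers={"Authorization": f"Bearer {access_token}"},
)
# urllib.request.urlopen(request) would perform the call; omitted here to avoid a network round trip.
print(request.get_header("Authorization")[:7])
```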
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Active Directory (Azure AD) tenant. For more information on how to get an Azure AD tenant, see [how to get an Azure AD tenant.](/azure/active-directory/develop/quickstart-create-new-tenant)
+- [Python 3.7+](https://www.python.org/downloads/)
+
+## Step 1: Register your application
++
+Follow these steps to register your application in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+1. Navigate to the portal's [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page, and select **New registration**.
+1. Enter a **Name** for your application, for example *python-webapp*.
+1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**.
+1. Under **Redirect URIs**, select **Web** for the platform.
+1. Enter a redirect URI of `http://localhost:5000/getAToken`. This can be changed later.
+1. Select **Register**.
+
+## Step 2: Add a client secret
+
+1. On the app **Overview** page, note the **Application (client) ID** value for later use.
+1. Under **Manage**, select **Certificates & secrets**, and in the **Client secrets** section, select **New client secret**.
+1. Enter a description for the client secret, leave the default expiration, and select **Add**.
+1. Save the **Value** of the **Client Secret** in a safe location. You'll need it to configure the code, and you can't retrieve it later.
+
+## Step 3: Add a scope
+
+1. Under **Manage**, select **API permissions** > **Add a permission**.
+1. Ensure that the **Microsoft APIs** tab is selected.
+1. From the *Commonly used Microsoft APIs* section, select **Microsoft Graph**.
+1. From the **Delegated permissions** section, ensure that **User.ReadBasic.All** is selected. Use the search box if necessary.
+1. Select **Add permissions**.
+
+## Step 4: Download the sample app
+
+[Download the Python code sample](https://github.com/Azure-Samples/ms-identity-python-webapp/archive/main.zip) or clone the repository:
+
+```powershell
+git clone https://github.com/Azure-Samples/ms-identity-python-webapp.git
+```
+
+You can also open the folder in an integrated development environment.
+
+## Step 5: Configure the sample app
+
+1. Go to the application folder.
+
+1. Create an *.env* file in the root folder of the project using *.env.sample* as a guide.
+
+ :::code language="python" source="~/ms-identity-python-webapp-quickstart/.env.sample" range="4-16" highlight="1,2,13":::
+
+ * Set the value of `CLIENT_ID` to the **Application (client) ID** for the registered application, available on the overview page.
+ * Set the value of `CLIENT_SECRET` to the client secret you created in **Certificates & Secrets** for the registered application.
+    * Set the value of `AUTHORITY` to a URL that includes the **Directory (tenant) ID** of the registered application. That ID is also available on the overview page.
+
+ The environment variables are referenced in *app_config.py*, and are kept in a separate *.env* file to keep them out of source control. The provided *.gitignore* file prevents the *.env* file from being checked in.
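The mechanics of reading the *.env* file live in the sample's *app_config.py* (typically via a helper library such as python-dotenv). As a rough, hypothetical sketch of what that loading step involves, not the sample's actual code:

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: reads KEY=VALUE lines, skipping blanks and '#' comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: values already set in the real environment win over file values
            os.environ.setdefault(key.strip(), value.strip())
```

Because the values end up in `os.environ`, the rest of the app reads them the same way whether they came from the file or from the hosting environment.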
+
+## Step 6: Run the sample app
+
+1. Create a virtual environment for the app:
+
+ [!INCLUDE [Virtual environment setup](<../../app-service/includes/quickstart-python/virtual-environment-setup.md>)]
+
+1. Install the requirements using `pip`:
+
+ ```shell
+ python3 -m pip install -r requirements.txt
+ ```
+
+1. Run the app from the command line, specifying the host and port to match the redirect URI:
+
+ ```shell
+ python3 -m flask run --debug --host=localhost --port=5000
+ ```
+
+ > [!IMPORTANT]
+   > This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before treating the application as production-ready. For more information on how to use a certificate, see [these instructions](active-directory-certificate-credentials.md).
+++
+## Next steps
+
+Learn more about web apps that sign in users in our multi-part scenario series.
+
+> [!div class="nextstepaction"]
+> [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md)
+
+> [!div class="nextstepaction"]
+> [Scenario: Web app that calls web APIs](scenario-web-app-call-api-overview.md)
active-directory Scenario Spa Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-overview.md
Learn all you need to build a single-page application (SPA). For instructions re
If you haven't already, create your first app by completing the JavaScript SPA quickstart:
-[Quickstart: Single-page application](./single-page-app-quickstart.md?pivots=devlang-javascript)
+[Quickstart: Sign in users in single-page apps (SPA) via the authorization code flow with Proof Key for Code Exchange (PKCE) using JavaScript](quickstart-single-page-app-javascript-sign-in.md)
## Overview
active-directory Scenario Web App Sign User Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-overview.md
Learn all you need to build a web app that uses the Microsoft identity platform
If you want to create your first portable (ASP.NET Core) web app that signs in users, follow this quickstart:
-[Quickstart: Use ASP.NET Core to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-aspnet-core)
+[Quickstart: Use ASP.NET Core to add sign-in with Microsoft to a web app](quickstart-web-app-aspnet-core-sign-in.md)
# [ASP.NET](#tab/aspnet) If you want to understand how to add sign-in to an existing ASP.NET web application, try the following quickstart:
-[Quickstart: Use ASP.NET to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-aspnet)
+[Quickstart: Use ASP.NET to add sign-in with Microsoft to a web app](quickstart-web-app-aspnet-sign-in.md)
# [Java](#tab/java) If you're a Java developer, try the following quickstart:
-[Quickstart: Use Java to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-java)
+[Quickstart: Use Java to add sign-in with Microsoft to a web app](quickstart-web-app-java-sign-in.md)
# [Node.js](#tab/nodejs) If you're a Node.js developer, try the following quickstart:
-[Quickstart: Use Node.js to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-nodejs-msal)
+[Quickstart: Use Node.js to add sign-in with Microsoft to a web app](quickstart-web-app-nodejs-msal-sign-in.md)
# [Python](#tab/python) If you develop with Python, try the following quickstart:
-[Quickstart: Use Python to add sign-in with Microsoft to a web app](web-app-quickstart.md?pivots=devlang-python)
+[Quickstart: Use Python to add sign-in with Microsoft to a web app](quickstart-web-app-python-sign-in.md)
active-directory Single Page App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-quickstart.md
- Title: "Quickstart: Sign in users in single-page apps (SPA) by using the authorization code with Proof Key for Code Exchange (PKCE)"
-description: In this quickstart, learn how a JavaScript single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow with Proof Key for Code Exchange (PKCE).
-------- Previously updated : 08/17/2022--
-zone_pivot_groups: single-page-app-quickstart
-#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my single-page app can sign in users of personal accounts, work accounts, and school accounts.
--
-# Quickstart: Sign in users in single-page apps (SPA) via the authorization code flow with Proof Key for Code Exchange (PKCE)
---
active-directory Spa Quickstart Portal Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code-angular.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Angular single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-angular)
+> > [Quickstart: Sign in users in single-page apps (SPA) via the authorization code flow with Proof Key for Code Exchange (PKCE) using Angular](quickstart-single-page-app-angular-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Spa Quickstart Portal Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code-react.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: React single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-react)
+> > [Quickstart: Sign in users in single-page apps (SPA) via the authorization code flow with Proof Key for Code Exchange (PKCE) using React](quickstart-single-page-app-react-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Spa Quickstart Portal Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: JavaScript single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-javascript)
+> > [Quickstart: Sign in users in single-page apps (SPA) via the authorization code flow with Proof Key for Code Exchange (PKCE) using JavaScript](quickstart-single-page-app-javascript-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Web Api Quickstart Portal Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-aspnet-core.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart:Protect an ASP.NET Core web API](web-api-quickstart.md?pivots=devlang-aspnet-core)
+> > [Quickstart: Protect an ASP.NET Core web API](quickstart-web-api-aspnet-core-protect-api.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Web Api Quickstart Portal Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-dotnet-native-aspnet.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Call a protected ASP.NET web API](web-api-quickstart.md?pivots=devlang-aspnet)
+> > [Quickstart: Call an ASP.NET web API that is protected by the Microsoft identity platform](quickstart-web-api-aspnet-protect-api.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Web Api Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart.md
- Title: "Quickstart: Protect a web API with the Microsoft identity platform"
-description: In this quickstart, you download and modify a code sample that demonstrates how to protect a web API by using the Microsoft identity platform for authorization.
------- Previously updated : 12/09/2022--
-zone_pivot_groups: web-api-quickstart
-#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my web app can sign in users of personal accounts, work accounts, and school accounts.
--
-# Quickstart: Protect a web API with the Microsoft identity platform
--
active-directory Web App Quickstart Portal Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet-core.md
# Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app - > [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: ASP.NET Core web app with user sign-in](web-app-quickstart.md?pivots=devlang-aspnet-core)
+> > [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](quickstart-web-app-aspnet-core-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Web App Quickstart Portal Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: ASP.NET web app that signs in users](web-app-quickstart.md?pivots=devlang-aspnet)
+> > [Quickstart: Add sign-in with Microsoft to an ASP.NET web app](quickstart-web-app-aspnet-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Web App Quickstart Portal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-java.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Java web app with user sign-in](web-app-quickstart.md?pivots=devlang-java)
+> > [Quickstart: Add sign-in with Microsoft to a Java web app](quickstart-web-app-java-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Web App Quickstart Portal Node Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Node.js web app that signs in users with MSAL Node](web-app-quickstart.md?pivots=devlang-nodejs-msal)
+> > [Quickstart: Add authentication to a Node.js web app with MSAL Node](quickstart-web-app-nodejs-msal-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Web App Quickstart Portal Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-python.md
> [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article: >
-> > [Quickstart: Python web app with user sign-in](web-app-quickstart.md?pivots=devlang-python)
+> > [Quickstart: Add sign-in with Microsoft to a Python web app](quickstart-web-app-python-sign-in.md)
> > We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
active-directory Web App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart.md
- Title: "Quickstart: Sign in users in web apps using the auth code flow"
-description: In this quickstart, learn how a web app can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow.
-------- Previously updated : 01/18/2023--
-zone_pivot_groups: web-app-quickstart
-#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my web app can sign in users of personal accounts, work accounts, and school accounts.
--
-# Quickstart: Add sign-in with Microsoft to a web app
-----
active-directory Allow Deny List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/allow-deny-list.md
Previously updated : 04/17/2023 Last updated : 08/04/2023
This article discusses two ways to configure an allow or blocklist for B2B colla
- The number of domains you can add to an allowlist or blocklist is limited only by the size of the policy. This limit applies to the number of characters, so you can have a greater number of shorter domains or fewer longer domains. The maximum size of the entire policy is 25 KB (25,000 characters), which includes the allowlist or blocklist and any other parameters configured for other features. - This list works independently from OneDrive for Business and SharePoint Online allow/block lists. If you want to restrict individual file sharing in SharePoint Online, you need to set up an allow or blocklist for OneDrive for Business and SharePoint Online. For more information, see [Restricted domains sharing in SharePoint Online and OneDrive for Business](https://support.office.com/article/restricted-domains-sharing-in-sharepoint-online-and-onedrive-for-business-5d7589cd-0997-4a00-a2ba-2320ec49c4e9). - The list doesn't apply to external users who have already redeemed the invitation. The list will be enforced after the list is set up. If a user invitation is in a pending state, and you set a policy that blocks their domain, the user's attempt to redeem the invitation will fail.
+- Both allow/block list and cross-tenant access settings are checked at the time of invitation.
## Set the allow or blocklist policy in the portal
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
Previously updated : 05/31/2023 Last updated : 08/04/2023
Use External Identities cross-tenant access settings to manage how you collabora
- Identify any Azure AD organizations that will need customized settings so you can configure **Organizational settings** for them. - If you want to apply access settings to specific users, groups, or applications in an external organization, you'll need to contact the organization for information before configuring your settings. Obtain their user object IDs, group object IDs, or application IDs (*client app IDs* or *resource app IDs*) so you can target your settings correctly. - If you want to set up B2B collaboration with a partner organization in an external Microsoft Azure cloud, follow the steps in [Configure Microsoft cloud settings](cross-cloud-settings.md). An admin in the partner organization will need to do the same for your tenant.
+- Both the allow/block list and cross-tenant access settings are checked at the time of invitation. If a user's domain is on the allow list, they can be invited, unless the domain is explicitly blocked in the cross-tenant access settings. If a user's domain is on the deny list, they can't be invited regardless of the cross-tenant access settings. If a user isn't on either list, the cross-tenant access settings determine whether they can be invited.
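Putting those rules together, the evaluation order can be modeled as a small function. This is a sketch with hypothetical names, not actual service logic; cross-tenant access settings are reduced here to a set of explicitly blocked domains plus a tenant-wide default:

```python
def can_invite(domain, allow_list, deny_list, cross_tenant_blocked, default_allowed=True):
    """Model of the documented order: deny list wins, then explicit cross-tenant
    blocks, then the allow list, then the tenant's default cross-tenant setting."""
    if domain in deny_list:
        return False  # deny list blocks regardless of cross-tenant settings
    if domain in cross_tenant_blocked:
        return False  # explicit cross-tenant block overrides the allow list
    if domain in allow_list:
        return True
    return default_allowed  # on neither list: cross-tenant access settings decide
```

For example, `can_invite("fabrikam.com", {"fabrikam.com"}, set(), {"fabrikam.com"})` is blocked even though the domain is on the allow list, because the explicit cross-tenant block takes precedence.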
## Configure default settings
active-directory Sample Desktop Wpf Dotnet Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-desktop-wpf-dotnet-sign-in.md
Last updated 07/26/2023--+ #Customer intent: As a dev, devops, I want to learn about how to configure a sample WPF desktop app to sign in and sign out users with my Azure Active Directory (Azure AD) for customers tenant
active-directory Tutorial Browserless App Dotnet Sign In Build App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-browserless-app-dotnet-sign-in-build-app.md
+ Last updated 07/27/2023- #Customer intent: As a dev, devops, I want to learn about how to enable authentication in my .NET browserless app with Azure Active Directory (Azure AD) for customers tenant
active-directory Tutorial Browserless App Dotnet Sign In Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-browserless-app-dotnet-sign-in-prepare-tenant.md
+ Last updated 07/24/2023- #Customer intent: As a dev, devops, I want to learn how to register and configure .NET browserless app authentication details in a customer tenant so as to sign in users using Device Code flow.
active-directory Tutorial Desktop Wpf Dotnet Sign In Build App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-desktop-wpf-dotnet-sign-in-build-app.md
+ Last updated 07/26/2023
active-directory Tutorial Desktop Wpf Dotnet Sign In Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-desktop-wpf-dotnet-sign-in-prepare-tenant.md
+ Last updated 07/26/2023
active-directory Tutorial Protect Web Api Dotnet Core Build App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-protect-web-api-dotnet-core-build-app.md
Last updated 07/27/2023--+ #Customer intent: As a dev, I want to secure my ASP.NET Core web API registered in the Azure AD customer's tenant.
active-directory Tutorial Protect Web Api Dotnet Core Test Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-protect-web-api-dotnet-core-test-api.md
+ Last updated 07/27/2023- #Customer intent: As a dev, I want to learn how to test a protected web API registered in the Azure AD for customers tenant.
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
External users can be added only to ΓÇ£assignedΓÇ¥ or ΓÇ£SecurityΓÇ¥ groups and
## My external user didn't receive an email to redeem
-The invitee should check with their ISP or spam filter to ensure that the following address is allowed: Invites@microsoft.com
+The invitee should check with their ISP or spam filter to ensure that the following address is allowed: Invites@microsoft.com.
> [!NOTE] >
Let's say you inadvertently invite a guest user with an email address that match
## External access blocked by policy error on the login screen
-When you try to login to your tenant, you might see this error message: "Your network administrator has restricted what organizations can be accessed. Contact your IT department to unblock access." This error is related to tenant restriction settings. To resolve this issue, ask your IT team to follow the instructions in [this article](/azure/active-directory/manage-apps/tenant-restrictions).
+When you try to log in to your tenant, you might see this error message: "Your network administrator has restricted what organizations can be accessed. Contact your IT department to unblock access." This error is related to tenant restriction settings. To resolve this issue, ask your IT team to follow the instructions in [this article](/azure/active-directory/manage-apps/tenant-restrictions).
+
+## Invitation is blocked due to missing cross-tenant access settings
+
+You might see this message: "This invitation is blocked by cross-tenant access settings in your organization. Your administrator must configure cross-tenant access settings to allow this invitation." In this case, ask your administrator to check the cross-tenant access settings.
## Next steps
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md
na Previously updated : 01/25/2023 Last updated : 08/04/2023
This article describes the settings you can specify to govern access for externa
When using the [Azure AD B2B](../external-identities/what-is-b2b.md) invite experience, you must already know the email addresses of the external guest users you want to bring into your resource directory and work with. Directly inviting each user works great when you're working on a smaller or short-term project and you already know all the participants, but this process is harder to manage if you have lots of users you want to work with, or if the participants change over time. For example, you might be working with another organization and have one point of contact with that organization, but over time additional users from that organization will also need access.
-With entitlement management, you can define a policy that allows users from organizations you specify to be able to self-request an access package. That policy includes whether approval is required, whether access reviews are required, and an expiration date for the access. In most cases, you will want to require approval, in order to have appropriate oversight over which users are brought into your directory. If approval is required, then for major external organization partners, you might consider inviting one or more users from the external organization to your directory, designating them as sponsors, and configuring that sponsors are approvers - since they're likely to know which external users from their organization need access. Once you've configured the access package, obtain the access package's request link so you can send that link to your contact person (sponsor) at the external organization. That contact can share with other users in their external organization, and they can use this link to request the access package. Users from that organization who have already been invited into your directory can also use that link.
+With entitlement management, you can define a policy that allows users from organizations you specify to be able to self-request an access package. That policy includes whether approval is required, whether access reviews are required, and an expiration date for the access. In most cases, you'll want to require approval, in order to have appropriate oversight over which users are brought into your directory. If approval is required, then for major external organization partners, you might consider inviting one or more users from the external organization to your directory, designating them as sponsors, and configuring those sponsors as approvers, since they're likely to know which external users from their organization need access. Once you've configured the access package, obtain the access package's request link so you can send that link to your contact person (sponsor) at the external organization. That contact can share the link with other users in their external organization, and they can use this link to request the access package. Users from that organization who have already been invited into your directory can also use that link.
-You can also use entitlement management for bringing in users from organizations that do not have their own Azure AD directory. You can configure a federated identity provider for their domain, or use email-based authentication. You can also bring in users from social identity providers, including those with Microsoft accounts.
+You can also use entitlement management for bringing in users from organizations that don't have their own Azure AD directory. You can configure a federated identity provider for their domain, or use email-based authentication. You can also bring in users from social identity providers, including those with Microsoft accounts.
Typically, when a request is approved, entitlement management provisions the user with the necessary access. If the user isn't already in your directory, entitlement management will first invite the user. When the user is invited, Azure AD will automatically create a B2B guest account for them but won't send the user an email. An administrator may have previously limited which organizations are allowed for collaboration, by setting a [B2B allow or blocklist](../external-identities/allow-deny-list.md) to allow or block invites to other organization's domains. If the user's domain isn't allowed by those lists, then they won't be invited and can't be assigned access until the lists are updated.
The following diagram and steps provide an overview of how external users are gr
1. You check that the **Enabled for external users** setting is set to **Yes** on the catalog that contains the access package.
-1. You create an access package in your directory that includes a policy [For users not in your directory](entitlement-management-access-package-create.md#allow-users-not-in-your-directory-to-request-the-access-package) and specifies the connected organizations that can request, the approver and lifecycle settings. If you select in the policy the option of specific connected organizations or the option of all connected organizations, then only users from those organizations that have previously been configured can request. If you select in the policy the option of all users, then any user can request, including those which are not already part of your directory and not part of any connected organization.
+1. You create an access package in your directory that includes a policy [For users not in your directory](entitlement-management-access-package-create.md#allow-users-not-in-your-directory-to-request-the-access-package) and specifies the connected organizations that can request, the approvers, and the lifecycle settings. If you select in the policy the option of specific connected organizations or the option of all connected organizations, then only users from those previously configured organizations can request. If you select in the policy the option of all users, then any user can request, including users who aren't already part of your directory or of any connected organization.
-1. You check [the hidden setting on the access package](entitlement-management-access-package-edit.md#change-the-hidden-setting) to ensure the access package is hidden. If it is not hidden, then any user allowed by the policy settings in that access package can browse for the access package in the My Access portal for your tenant.
+1. You check [the hidden setting on the access package](entitlement-management-access-package-edit.md#change-the-hidden-setting) to ensure the access package is hidden. If it isn't hidden, then any user allowed by the policy settings in that access package can browse for the access package in the My Access portal for your tenant.
1. You send a [My Access portal link](entitlement-management-access-package-settings.md) to your contact at the external organization that they can share with their users to request the access package.
-1. An external user (**Requestor A** in this example) uses the My Access portal link to [request access](entitlement-management-request-access.md) to the access package. The My access portal will require that the user sign in as part of their connected organization. How the user signs in depends on the authentication type of the directory or domain that's defined in the connected organization and in the external users settings.
+1. An external user (**Requestor A** in this example) uses the My Access portal link to [request access](entitlement-management-request-access.md) to the access package. The My Access portal requires that the user sign in as part of their connected organization. How the user signs in depends on the authentication type of the directory or domain that's defined in the connected organization and in the external users settings.
1. An approver [approves the request](entitlement-management-request-approve.md) (assuming the policy requires approval).
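For automation, connected organizations can also be inspected through Microsoft Graph. The following is a minimal sketch that only builds the request (it doesn't send it), assuming the entitlement management `connectedOrganizations` endpoint and an already-acquired access token; the helper name is hypothetical:

```python
# Hypothetical helper: builds (does not send) the Microsoft Graph request
# for listing connected organizations. Sending it with any HTTP client
# requires an app or user token with the EntitlementManagement.Read.All scope.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def connected_orgs_request(access_token):
    url = f"{GRAPH_BASE}/identityGovernance/entitlementManagement/connectedOrganizations"
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers
```

The returned URL and headers can be passed to any HTTP client to retrieve the list of connected organizations configured in the tenant.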
To ensure people outside of your organization can request access packages and ge
![Edit catalog settings](./media/entitlement-management-shared/catalog-edit.png)
- If you are an administrator or catalog owner, you can view the list of catalogs currently enabled for external users in the Azure portal list of catalogs, by changing the filter setting for **Enabled for external users** to **Yes**. If any of those catalogs shown in that filtered view have a non-zero number of access packages, those access packages may have a policy [for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory) that allow external users to request.
+ If you're an administrator or catalog owner, you can view the list of catalogs currently enabled for external users in the Azure portal list of catalogs by changing the filter setting for **Enabled for external users** to **Yes**. If any of those catalogs shown in that filtered view have a non-zero number of access packages, those access packages may have a policy [for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory) that allows external users to request.
### Configure your Azure AD B2B external collaboration settings

- Allowing guests to invite other guests to your directory means that guest invites can occur outside of entitlement management. We recommend setting **Guests can invite** to **No** to only allow for properly governed invitations.
- If you have been previously using the B2B allowlist, you must either remove that list, or make sure all the domains of all the organizations you want to partner with using entitlement management are added to the list. Alternatively, if you're using the B2B blocklist, you must make sure no domain of any organization you want to partner with is present on that list.
-- If you create an entitlement management policy for **All users** (All connected organizations + any new external users), and a user doesn't belong to a connected organization in your directory, a connected organization will automatically be created for them when they request the package. However, any B2B [allow or blocklist](../external-identities/allow-deny-list.md) settings you have will take precedence. Therefore, you want to remove the allowlist, if you were using one, so that **All users** can request access, and exclude all authorized domains from your blocklist if you're using a blocklist.
+- If you create an entitlement management policy for **All users** (All connected organizations + any new external users), and a user doesn't belong to a connected organization in your directory, a connected organization is automatically created for them when they request the package. However, any B2B [allow or blocklist](../external-identities/allow-deny-list.md) settings you have take precedence. Therefore, remove the allowlist, if you were using one, so that **All users** can request access, and exclude all authorized domains from your blocklist if you're using a blocklist.
- If you want to create an entitlement management policy that includes **All users** (All connected organizations + any new external users), you must first enable email one-time passcode authentication for your directory. For more information, see [Email one-time passcode authentication](../external-identities/one-time-passcode.md).
- For more information about Azure AD B2B external collaboration settings, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
- ![Azure AD external collaboration settings](./media/entitlement-management-external-users/collaboration-settings.png)
+ [![Azure AD external collaboration settings](./media/entitlement-management-external-users/collaboration-settings.png)](./media/entitlement-management-external-users/collaboration-settings.png#lightbox)
> [!NOTE]
> If you create a connected organization for an Azure AD tenant from a different Microsoft cloud, you also need to configure cross-tenant access settings appropriately. For more information on how to configure these settings, see [Configure cross-tenant access settings](../external-identities/cross-cloud-settings.md).
To ensure people outside of your organization can request access packages and ge
- Make sure to exclude the Entitlement Management app from any Conditional Access policies that impact guest users. Otherwise, a Conditional Access policy could block them from accessing MyAccess or being able to sign in to your directory. For example, guests likely don't have a registered device, aren't in a known location, and don't want to re-register for multi-factor authentication (MFA), so adding these requirements in a Conditional Access policy will block guests from using entitlement management. For more information, see [What are conditions in Azure Active Directory Conditional Access?](../conditional-access/concept-conditional-access-conditions.md).
-- A common policy for Entitlement Management customers is to block all apps from guests except Entitlement Management for guests. This policy allows guests to enter My Access and request an access package. This package should contain a group (it is called Guests from My Access in the example below), which should be excluded from the block all apps policy. Once the package is approved, the guest will be in the directory. Given that the end user has the access package assignment and is part of the group, the end user will be able to access all other apps. Other common policies include excluding Entitlement Management app from MFA and compliant device.
+- A common policy for Entitlement Management customers is to block all apps from guests except Entitlement Management for guests. This policy allows guests to enter My Access and request an access package. This package should contain a group (it's called Guests from My Access in the following example), which should be excluded from the block all apps policy. Once the package is approved, the guest is in the directory. Given that the end user has the access package assignment and is part of the group, the end user can access all other apps. Other common policies include excluding the Entitlement Management app from MFA and compliant-device requirements.
:::image type="content" source="media/entitlement-management-external-users/exclude-app-guests.png" alt-text="Screenshot of exclude app options.":::
To ensure people outside of your organization can request access packages and ge
- If you want to include SharePoint Online sites in your access packages for external users, make sure that your organization-level external sharing setting is set to **Anyone** (users don't require sign in), or **New and existing guests** (guests must sign in or provide a verification code). For more information, see [Turn external sharing on or off](/sharepoint/turn-external-sharing-on-or-off#change-the-organization-level-external-sharing-setting).
-- If you want to restrict any external sharing outside of entitlement management, you can set the external sharing setting to **Existing guests**. Then, only new users that are invited through entitlement management will be able to gain access to these sites. For more information, see [Turn external sharing on or off](/sharepoint/turn-external-sharing-on-or-off#change-the-organization-level-external-sharing-setting).
+- If you want to restrict any external sharing outside of entitlement management, you can set the external sharing setting to **Existing guests**. Then, only new users that are invited through entitlement management are able to gain access to these sites. For more information, see [Turn external sharing on or off](/sharepoint/turn-external-sharing-on-or-off#change-the-organization-level-external-sharing-setting).
- Make sure that the site-level settings enable guest access (same option selections as previously listed). For more information, see [Turn external sharing on or off for a site](/sharepoint/change-external-sharing-site).
active-directory Entitlement Management Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-access.md
The first step is to sign in to the My Access portal where you can request acces
**Prerequisite role:** Requestor
-1. Look for an email or a message from the project or business manager you're working with. The email should include a link to the access package you'll need access to. The link starts with `myaccess`, includes a directory hint, and ends with an access package ID. (For US Government, the domain may be `https://myaccess.microsoft.us` instead.)
+1. Look for an email or a message from the project or business manager you're working with. The email should include a link to the access package you need access to. The link starts with `myaccess`, includes a directory hint, and ends with an access package ID. (For US Government, the domain may be `https://myaccess.microsoft.us` instead.)
`https://myaccess.microsoft.com/@<directory_hint>#/access-packages/<access_package_id>`
+ > [!NOTE]
+ > When you sign in to My Access via the directory hint link, you're required to reauthenticate with your sign-in credentials.
1. Open the link.
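The parts of such a link can be pulled apart programmatically. The following is a hedged sketch assuming only the two-part shape shown above (directory hint after `@`, access package ID after `#/access-packages/`); the helper name is illustrative:

```python
import re

def parse_myaccess_link(link):
    """Extract the directory hint and access package ID from a My Access
    request link of the form shown above; returns None if it doesn't match."""
    m = re.match(
        r"https://myaccess\.microsoft\.(?:com|us)/@(?P<hint>[^#]+)"
        r"#/access-packages/(?P<package_id>[0-9a-fA-F-]+)$",
        link,
    )
    return (m.group("hint"), m.group("package_id")) if m else None
```

This can be handy when auditing request links circulated to external partners, for example to confirm they point at the expected directory.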
When you request access to an access package, your request might be denied or yo
![Select access package and view link](./media/entitlement-management-request-access/resubmit-request-select-request-and-view.png)
- A pane will open to the right with the request history for the access package.
+ A pane opens to the right with the request history for the access package.
![Select resubmit button](./media/entitlement-management-request-access/resubmit-request-select-resubmit.png)
active-directory G Suite Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
Now any end user that was made eligible for the group in PIM can get JIT access
* 10/17/2020 - Added support for more G Suite user and group attributes.
* 10/17/2020 - Updated G Suite target attribute names to match what is defined [here](https://developers.google.com/admin-sdk/directory).
* 10/17/2020 - Updated default attribute mappings.
-* 03/18/2021 - Manager email is now synchronized instead of ID for all new users. For any existing users that were provisioned with a manager as an ID, you can do a restart through [Microsoft Graph](/graph/api/synchronization-synchronizationjob-restart?preserve-view=true&tabs=http&view=graph-rest-beta) with scope "full" to ensure that the email is provisioned. This change only impacts the GSuite provisioning job and not the older provisioning job beginning with Goov2OutDelta. Note, the manager email is provisioned when the user is first created or when the manager changes. The manager email isn't provisioned if the manager changes their email address.
## More resources
active-directory Litmos Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/litmos-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure SAP Litmos for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to SAP Litmos.
++
+writer: twimmers
+
+ms.assetid: 4e0d2a0b-2d22-4b21-8b29-64413549c5a5
+ Last updated : 08/03/2023
+# Tutorial: Configure SAP Litmos for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both SAP Litmos and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [SAP Litmos](http://www.litmos.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in SAP Litmos.
+> * Remove users in SAP Litmos when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and SAP Litmos.
+> * Provision groups and group memberships in SAP Litmos.
+> * [Single sign-on](litmos-tutorial.md) to SAP Litmos (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An SAP Litmos tenant.
+* A user account in SAP Litmos with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and SAP Litmos](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure SAP Litmos to support provisioning with Azure AD
+Contact SAP Litmos support to configure SAP Litmos to support provisioning with Azure AD.
+
+## Step 3. Add SAP Litmos from the Azure AD application gallery
+
+Add SAP Litmos from the Azure AD application gallery to start managing provisioning to SAP Litmos. If you have previously set up SAP Litmos for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
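Conceptually, an attribute-based scoping filter evaluates clauses against each user. The following is a toy model for illustration only, with AND semantics within a clause group and just two of the operators available in the portal; the function name and clause shape are assumptions:

```python
def in_scope(user, clauses):
    """Toy model of an attribute-based scoping filter: a user is in
    scope only if every (attribute, operator, value) clause matches."""
    ops = {
        "EQUALS": lambda value, target: value == target,
        "NOT EQUALS": lambda value, target: value != target,
    }
    return all(ops[op](user.get(attr), target) for attr, op, target in clauses)
```

A user with `department = "Sales"` would, for example, be in scope for the clause `("department", "EQUALS", "Sales")` but not for `("department", "EQUALS", "HR")`.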
++
+## Step 5. Configure automatic user provisioning to SAP Litmos
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in SAP Litmos based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for SAP Litmos in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **SAP Litmos**.
+
+ ![Screenshot of the SAP Litmos link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your SAP Litmos Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to SAP Litmos. If the connection fails, ensure your SAP Litmos account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to SAP Litmos**.
+
+1. Review the user attributes that are synchronized from Azure AD to SAP Litmos in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SAP Litmos for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the SAP Litmos API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by SAP Litmos|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |title|String||
+ |emails[type eq "work"].value|String||
+ |preferredLanguage|String||
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |addresses[type eq "work"].streetAddress|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].country|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |timezone|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField1|String||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField2|String||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField3|String||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField4|String||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField5|String||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField6|String||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField7|String||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField8|String||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField9|String||
+ |urn:ietf:params:scim:schemas:extension:Litmos:2.0:User:CustomField:CustomField10|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to SAP Litmos**.
+
+1. Review the group attributes that are synchronized from Azure AD to SAP Litmos in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in SAP Litmos for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by SAP Litmos|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for SAP Litmos, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to SAP Litmos by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
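The user attribute mappings in step 9 correspond to SCIM 2.0 payloads shaped roughly like the following sketch. Attribute values are placeholders and only a subset of the mapped attributes is shown; this is illustrative, not an exact wire capture:

```python
# Illustrative SCIM 2.0 user payload matching a subset of the SAP Litmos
# attribute mappings above; all values are placeholders, not real data.
litmos_user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
    ],
    "userName": "alice@contoso.com",   # matching attribute
    "active": True,                    # required by SAP Litmos
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "emails": [{"type": "work", "value": "alice@contoso.com"}],
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
        "employeeNumber": "E1234",
        "organization": "Contoso",
    },
}
```

The enterprise extension attributes (`employeeNumber`, `organization`, `manager`) live under their extension schema URN, as do the Litmos custom fields listed in the table.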
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Tailscale Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tailscale-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Tailscale for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Tailscale.
++
+writer: twimmers
+
+ms.assetid: 9bf5ef32-c9ba-4fef-acab-3c16f976af5c
+ Last updated : 08/03/2023
+# Tutorial: Configure Tailscale for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Tailscale and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users to [Tailscale](https://tailscale.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Tailscale.
+> * Remove users in Tailscale when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Tailscale.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Tailscale (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Tailscale with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Tailscale](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Tailscale to support provisioning with Azure AD
+Contact Tailscale support to configure Tailscale to support provisioning with Azure AD.
+
+## Step 3. Add Tailscale from the Azure AD application gallery
+
+Add Tailscale from the Azure AD application gallery to start managing provisioning to Tailscale. If you have previously set up Tailscale for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who is in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who is provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Tailscale
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Tailscale based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for Tailscale in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Tailscale**.
+
+ ![Screenshot of the Tailscale link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Tailscale Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Tailscale. If the connection fails, ensure your Tailscale account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Tailscale**.
+
+1. Review the user attributes that are synchronized from Azure AD to Tailscale in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Tailscale for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Tailscale API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Tailscale|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean|&check;|
+ |displayName|String|&check;|
+ |preferredLanguage|String|&check;|
+ |name.givenName|String|&check;|
+ |name.familyName|String|&check;|
+ |name.formatted|String|&check;|
+ |emails[type eq "work"].value|String|&check;|
+ |externalId|String|&check;|&check;
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Tailscale, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to Tailscale by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
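When provisioning runs, the user attribute mappings configured above translate into SCIM user resources sent to Tailscale. The following is a rough, hypothetical sketch of the payload shape implied by the attribute table (all values are placeholders; the exact payload Azure AD emits may include additional metadata):

```python
# Hypothetical SCIM user resource matching the Tailscale attribute table above.
# Values are placeholders for illustration only.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@contoso.com",   # matching attribute (required)
    "externalId": "placeholder-id",    # required by Tailscale
    "active": True,
    "displayName": "Alice Example",
    "preferredLanguage": "en-US",
    "name": {
        "givenName": "Alice",
        "familyName": "Example",
        "formatted": "Alice Example",
    },
    "emails": [{"type": "work", "value": "alice@contoso.com"}],
}

# The two attributes marked "Required by Tailscale" must both be present.
assert {"userName", "externalId"} <= scim_user.keys()
```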
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
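Beyond the portal, provisioning jobs can also be inspected through the Microsoft Graph synchronization API. A minimal sketch that only builds the request URL (the service principal ID is a placeholder; acquiring a bearer token with the appropriate Graph permission is assumed and omitted):

```python
# Sketch: build the Microsoft Graph v1.0 request that lists the synchronization
# (provisioning) jobs on an enterprise application's service principal.
# Calling it requires an access token, which is not shown here.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def sync_jobs_url(service_principal_id: str) -> str:
    """Return the Graph endpoint that exposes an app's provisioning jobs."""
    return f"{GRAPH_BASE}/servicePrincipals/{service_principal_id}/synchronization/jobs"

# Example with a placeholder service principal ID:
url = sync_jobs_url("00000000-0000-0000-0000-000000000000")
```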
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Tanium Sso Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-sso-provisioning-tutorial.md
# Tutorial: Configure Tanium SSO for automatic user provisioning
-This tutorial describes the steps you need to perform in both Tanium SSO and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Tanium SSO](https://www.tanium.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both Tanium SSO and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Tanium SSO](https://www.tanium.com/) using the Azure AD Provisioning service. These capabilities are supported only for Tanium Cloud customers. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Supported capabilities
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
* A user account in Tanium SSO with Admin permissions.

## Step 1. Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
1. Determine what data to [map between Azure AD and Tanium SSO](../app-provisioning/customize-application-attributes.md).
-## Step 2. Configure Tanium SSO to support provisioning with Azure AD
-Contact Tanium SSO support to configure Tanium SSO to support provisioning with Azure AD.
+## Step 2. Enable SCIM Provisioning in the Tanium Cloud Management Portal (CMP)
+
+* Follow the steps in the [Tanium Cloud Deployment Guide: Configure SCIM Provisioning](https://docs.tanium.com/cloud/cloud/configuring_identity_providers.html#configure_scim) to enable automatic user provisioning in Tanium Cloud.
+* Retain the **Token** and **SCIM API URL** values for later use in configuring Tanium SSO. Copy the entire token string, formatted like `token-\<58 alphanumeric characters\>`.
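Because a partially pasted token is a common cause of failed connection tests later on, a quick shape check can help. The following is a small sketch based only on the `token-<58 alphanumeric characters>` format described above (it validates the shape, not whether the token is actually valid):

```python
import re

# The Tanium SCIM token is documented above as "token-" followed by
# 58 alphanumeric characters; this checks only that shape.
TOKEN_RE = re.compile(r"^token-[A-Za-z0-9]{58}$")

def looks_like_tanium_token(value: str) -> bool:
    """Return True if the string matches the documented token format."""
    return TOKEN_RE.fullmatch(value) is not None

assert looks_like_tanium_token("token-" + "a" * 58)
assert not looks_like_tanium_token("a" * 58)             # missing prefix
assert not looks_like_tanium_token("token-" + "a" * 10)  # truncated paste
```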
## Step 3. Add Tanium SSO from the Azure AD application gallery
-Add Tanium SSO from the Azure AD application gallery to start managing provisioning to Tanium SSO. If you have previously setup Tanium SSO for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add Tanium SSO from the Azure AD application gallery to start managing provisioning to Tanium SSO. If you have previously set up Tanium SSO for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group.
* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.

## Step 5. Configure automatic user provisioning to Tanium SSO
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Tanium based on user and/or group assignments in Azure AD.
### To configure automatic user provisioning for Tanium SSO in Azure AD:
![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, input your Tanium SSO Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Tanium SSO. If the connection fails, ensure your Tanium SSO account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your Tanium SSO **Tenant URL** and **Secret Token** that you previously retrieved from the Tanium CMP. Click **Test Connection** to ensure Azure AD can connect to Tanium SSO. If the connection fails, ensure that you entered the complete token value, including the `token-` prefix.
- ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
active-directory Vbrick Rev Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vbrick-rev-cloud-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Vbrick Rev Cloud for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Vbrick Rev Cloud.
++
+writer: twimmers
+
+ms.assetid: 4e8d8508-10c8-4b23-9699-af010030f9c3
+Last updated: 08/03/2023
+# Tutorial: Configure Vbrick Rev Cloud for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Vbrick Rev Cloud and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Vbrick Rev Cloud](https://vbrick.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Vbrick Rev Cloud.
+> * Remove users in Vbrick Rev Cloud when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Vbrick Rev Cloud.
+> * Provision groups and group memberships in Vbrick Rev Cloud.
+> * [Single sign-on](vbrick-rev-cloud-tutorial.md) to Vbrick Rev Cloud (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Vbrick Rev Cloud tenant.
+* A user account in Vbrick Rev Cloud with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Vbrick Rev Cloud](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Vbrick Rev Cloud to support provisioning with Azure AD
+Contact Vbrick Rev Cloud support to configure Vbrick Rev Cloud to support provisioning with Azure AD.
+
+## Step 3. Add Vbrick Rev Cloud from the Azure AD application gallery
+
+Add Vbrick Rev Cloud from the Azure AD application gallery to start managing provisioning to Vbrick Rev Cloud. If you have previously set up Vbrick Rev Cloud for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Vbrick Rev Cloud
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Vbrick Rev Cloud based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Vbrick Rev Cloud in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Vbrick Rev Cloud**.
+
+ ![Screenshot of the Vbrick Rev Cloud link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Vbrick Rev Cloud Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Vbrick Rev Cloud. If the connection fails, ensure your Vbrick Rev Cloud account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Vbrick Rev Cloud**.
+
+1. Review the user attributes that are synchronized from Azure AD to Vbrick Rev Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Vbrick Rev Cloud for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Vbrick Rev Cloud API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Vbrick Rev Cloud|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |title|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||&check;
+ |name.formatted|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |externalId|String||&check;
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Vbrick Rev Cloud**.
+
+1. Review the group attributes that are synchronized from Azure AD to Vbrick Rev Cloud in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Vbrick Rev Cloud for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Vbrick Rev Cloud|
+ |||||
+ |displayName|String|&check;|&check;
+ |externalId|String||&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Vbrick Rev Cloud, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Vbrick Rev Cloud by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
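The attribute tables above mark `userName` and `displayName` as supported for filtering because the provisioning service locates existing objects in Vbrick Rev Cloud with a SCIM filter query before deciding between create and update. A sketch of the lookup implied by that matching step (the path prefix is a generic SCIM convention, not a documented Vbrick endpoint):

```python
from urllib.parse import quote

def user_lookup_path(user_name: str) -> str:
    """Build the SCIM query path used to match an existing user by userName."""
    # SCIM filter syntax per RFC 7644: attribute eq "value"
    scim_filter = f'userName eq "{user_name}"'
    return "/scim/v2/Users?filter=" + quote(scim_filter)

path = user_lookup_path("alice@contoso.com")
```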
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Vmware Identity Service Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vmware-identity-service-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure VMware Identity Service for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to VMware Identity Service.
++
+writer: twimmers
+
+ms.assetid: 4ad9db26-2354-4e47-9dc3-2deb39222c87
+Last updated: 08/03/2023
+# Tutorial: Configure VMware Identity Service for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both VMware Identity Service and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [VMware Identity Service](https://www.vmware.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in VMware Identity Service.
+> * Remove users in VMware Identity Service when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and VMware Identity Service.
+> * Provision groups and group memberships in VMware Identity Service.
+> * [Single sign-on](vmware-identity-service-tutorial.md) to VMware Identity Service (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A VMware Identity Service tenant.
+* A user account in VMware Identity Service with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and VMware Identity Service](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure VMware Identity Service to support provisioning with Azure AD
+Contact VMware Identity Service support to configure VMware Identity Service to support provisioning with Azure AD.
+
+## Step 3. Add VMware Identity Service from the Azure AD application gallery
+
+Add VMware Identity Service from the Azure AD application gallery to start managing provisioning to VMware Identity Service. If you have previously set up VMware Identity Service for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to VMware Identity Service
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in VMware Identity Service based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for VMware Identity Service in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **VMware Identity Service**.
+
+ ![Screenshot of the VMware Identity Service link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your VMware Identity Service Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to VMware Identity Service. If the connection fails, ensure your VMware Identity Service account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to VMware Identity Service**.
+
+1. Review the user attributes that are synchronized from Azure AD to VMware Identity Service in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in VMware Identity Service for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the VMware Identity Service API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by VMware Identity Service|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |externalId|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |addresses[type eq \"work\"].country|String||
+ |addresses[type eq \"work\"].postalCode|String||
+ |addresses[type eq \"work\"].region|String||
+ |addresses[type eq \"work\"].locality|String||
+ |addresses[type eq \"work\"].streetAddress|String||
+ |profileUrl|String||
+ |title|String||
+ |nickName|String||
+ |displayName|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:adSourceAnchor|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute1|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute2|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute3|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute4|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute5|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:distinguishedName|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:domain|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:userPrincipalName|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to VMware Identity Service**.
+
+1. Review the group attributes that are synchronized from Azure AD to VMware Identity Service in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in VMware Identity Service for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by VMware Identity Service|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+ |externalId|String||&check;
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:Group:description|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:Group:distinguishedName|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:Group:domain|String||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for VMware Identity Service, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to VMware Identity Service by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
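Attributes in the mapping tables above with `urn:ietf:params:scim:schemas:...` prefixes travel in extension schema blocks of the SCIM payload rather than in the core user object. A rough sketch of how the enterprise and `ws1b` extension attributes nest (all values are placeholders; the actual payload Azure AD sends may differ):

```python
# Hypothetical SCIM payload illustrating how extension attributes from the
# VMware Identity Service mapping table nest under their schema URNs.
ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
WS1B = "urn:ietf:params:scim:schemas:extension:ws1b:2.0:User"

scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE, WS1B],
    "userName": "alice@contoso.com",
    "active": True,
    # Extension attributes live under their schema URN, not the core object:
    ENTERPRISE: {"department": "Engineering", "employeeNumber": "12345"},
    WS1B: {"domain": "contoso.com", "userPrincipalName": "alice@contoso.com"},
}

# A flattened mapping name such as "urn:...:enterprise:2.0:User:department"
# corresponds to scim_user[ENTERPRISE]["department"].
assert scim_user[ENTERPRISE]["department"] == "Engineering"
```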
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Services Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/services-partners.md
If you're a Services Partner and would like to be considered into Entra Verified
| Services partner | Website |
|:-|:--|
-| ![Screenshot of Affinitiquest logo](media/services-partners/affinitiquest.png) | [Secure Personally Identifiable Information | AffinitiQuest](https://affinitiquest.io/) |
-| ![Screenshot of Avanade logo](media/services-partners/avanade.png) | [Avanade Entra Verified ID Consulting Services](https://appsource.microsoft.com/marketplace/consulting-services/avanadeinc.ava_entra_verified_id_fy23?exp=ubp8) |
-| ![Screenshot of Credivera logo](media/services-partners/credivera.png) | [Credivera: Digital Identity Solutions | Verifiable Credentials](https://www.credivera.com/) |
-| ![Screenshot of Condatis logo](media/services-partners/condatis.png) | [Decentralized Identity | Condatis](https://condatis.com/technology/decentralized-identity/) |
-| ![Screenshot of DXC logo](media/services-partners/dxc.png) | [Digital Identity - Connect with DXC](https://dxc.com/us/en/services/security/digital-identity) |
-| ![Screenshot of CTC logo](media/services-partners/ctc.png) | [CTC's SELMID offering](https://ctc-insight.com/selmid) |
-| ![Screenshot of Kocho logo](media/services-partners/kocho.png) | [Connect with Kocho. See Verified Identity in Action](https://kocho.co.uk/contact-us/)<br/>[See Verified Identity in Action](https://kocho.co.uk/verified-id-in-action/) |
-| ![Screenshot of Oxford logo](media/services-partners/oxford.png) | [Microsoft Entra Verified ID - Oxford Computer Group](https://oxfordcomputergroup.com/microsoft-entra-verified-id-overview/) |
-| ![Screenshot of Predica logo](media/services-partners/predica.png) | [Verified ID - Predica Group](https://www.predicagroup.com/en/verified-id/) |
-| ![Screenshot of Sphereon logo](media/services-partners/sphereon.png) | [Sphereon supports customers on Microsoft's Entra Verified ID](https://sphereon.com/sphereon-supports-microsofts-entra-verified-id/) |
-| ![Screenshot of Unify logo](media/services-partners/unify.png) | [Microsoft Entra Verified ID - UNIFY Solutions](https://unifysolutions.net/entra/verified-id/) |
-| ![Screenshot of Whoiam logo](media/services-partners/whoiam.png) | [Microsoft Entra Verified ID - WhoIAM](https://www.whoiam.ai/product/microsoft-entra-verified-id/#:~:text=Verifiable%20credentials%20are%20identity%20attestations%2C%20such%20as%20proof,obtain%20and%20manage%20their%20verified%20credentials.%20Let%E2%80%99s%20Talk) |
+| ![Screenshot of Affinitiquest logo.](media/services-partners/affinitiquest.png) | [Secure Personally Identifiable Information | AffinitiQuest](https://affinitiquest.io/) |
+| ![Screenshot of Avanade logo.](media/services-partners/avanade.png) | [Avanade Entra Verified ID Consulting Services](https://appsource.microsoft.com/marketplace/consulting-services/avanadeinc.ava_entra_verified_id_fy23?exp=ubp8) |
+| ![Screenshot of Credivera logo.](media/services-partners/credivera.png) | [Credivera: Digital Identity Solutions | Verifiable Credentials](https://www.credivera.com/) |
+| ![Screenshot of Condatis logo.](media/services-partners/condatis.png) | [Decentralized Identity | Condatis](https://condatis.com/technology/decentralized-identity/) |
+| ![Screenshot of DXC logo.](media/services-partners/dxc.png) | [Digital Identity - Connect with DXC](https://dxc.com/us/en/services/security/digital-identity) |
+| ![Screenshot of CTC logo.](media/services-partners/ctc.png) | [CTC's SELMID offering](https://ctc-insight.com/selmid) |
+| ![Screenshot of Formula5 logo.](media/services-partners/formula5.png) | [Verified ID - Formula5](https://formula5.com/accelerator-for-microsoft-entra-verified-id/)<br/>[Azure Marketplace Verified ID offering](https://azuremarketplace.microsoft.com/marketplace/consulting-services/formulaconsultingllc1668008672143.verifiable_credentials_formula5-preview?tab=Overview&flightCodes=d12a14cf40204b39840e5c0f114c1366) |
+| ![Screenshot of Kocho logo.](media/services-partners/kocho.png) | [Connect with Kocho. See Verified Identity in Action](https://kocho.co.uk/contact-us/)<br/>[See Verified Identity in Action](https://kocho.co.uk/verified-id-in-action/) |
+| ![Screenshot of Predica logo.](media/services-partners/predica.png) | [Verified ID - Predica Group](https://www.predicagroup.com/en/verified-id/) |
+| ![Screenshot of Sphereon logo.](media/services-partners/sphereon.png) | [Sphereon supports customers on Microsoft's Entra Verified ID](https://sphereon.com/sphereon-supports-microsofts-entra-verified-id/) |
+| ![Screenshot of Unify logo.](media/services-partners/unify.png) | [Microsoft Entra Verified ID - UNIFY Solutions](https://unifysolutions.net/entra/verified-id/) |
+| ![Screenshot of Whoiam logo.](media/services-partners/whoiam.png) | [Microsoft Entra Verified ID - WhoIAM](https://www.whoiam.ai/product/microsoft-entra-verified-id/#:~:text=Verifiable%20credentials%20are%20identity%20attestations%2C%20such%20as%20proof,obtain%20and%20manage%20their%20verified%20credentials.%20Let%E2%80%99s%20Talk) |
## Next steps
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Clust
Some of the subnets for this cluster's node pools are full and can't accept any more worker nodes. The Azure CNI plugin requires reserving IP addresses for each node and all of the node's pods at provisioning time. If there isn't enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster can't be upgraded if the node subnet is full.
-Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](../aks/use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet).
+Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](../aks/create-node-pools.md#add-a-node-pool-with-a-unique-subnet).
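The IP-address math behind this recommendation can be sketched in a few lines. This is a simplified model, assuming the documented Azure CNI rule of one IP per node plus one per pod slot, and Azure's five reserved addresses per subnet; the function names are illustrative, not part of any Azure SDK:

```python
AZURE_RESERVED_IPS = 5  # Azure reserves 5 addresses in every subnet


def required_ips(node_count: int, max_pods_per_node: int) -> int:
    """IPs Azure CNI reserves at provisioning time: one per node plus one per pod slot."""
    return node_count * (1 + max_pods_per_node)


def subnet_capacity(prefix_length: int) -> int:
    """Usable addresses in a /prefix IPv4 subnet after Azure's reservations."""
    return 2 ** (32 - prefix_length) - AZURE_RESERVED_IPS


def subnet_is_full(node_count: int, max_pods_per_node: int, prefix_length: int) -> bool:
    """True when the node pool can't fit in the subnet, i.e. no more nodes can be added."""
    return required_ips(node_count, max_pods_per_node) > subnet_capacity(prefix_length)
```

For example, 10 nodes with the default 30 pods per node need 310 addresses, which doesn't fit in a /24, so scaling or upgrading would fail under this model.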
### Disable the Application Routing Addon
ai-services Concept Business Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-business-card.md
See how data, including name, job title, address, email, and company name, is ex
* **en-gb**
* **en-in**
-### Migration guide and REST API v3.0
+### Migration guide and REST API v3.1
-* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows.
::: moniker-end
ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md
Custom classification models require a minimum of five samples per class to train.
## Training a model
-Custom classification models are only available in the [v3.1 API](v3-migration-guide.md) starting with API version ```2023-02-28-preview```. [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier.
+Custom classification models are only available in the [v3.1 API](v3-1-migration-guide.md) starting with API version ```2023-02-28-preview```. [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) provides a no-code user interface to interactively train a custom classifier.
When using the REST API, if you've organized your documents by folders, you can use the ```azureBlobSource``` property of the request to train a classification model.
ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md
Values in training cases should be diverse and representative. For example, if a
## Training a model
-Custom neural models are available in the [v3.0 and v3.1 APIs](v3-migration-guide.md).
+Custom neural models are available in the [v3.0 and v3.1 APIs](v3-1-migration-guide.md).
| Document Type | REST API | SDK | Label and Test Models|
|--|--|--|--|
ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom.md
The following table lists the supported languages for print text by the most rec
### Try signature detection

* **Custom model v3.1 and v3.0 APIs** support signature detection for custom forms. When you train custom models, you can specify certain fields as signatures. When a document is analyzed with your custom model, it indicates whether a signature was detected.
-* [Document Intelligence v3.0 migration guide](v3-migration-guide.md): This guide shows you how to use the v3.0 version in your applications and workflows.
+* [Document Intelligence v3.1 migration guide](v3-1-migration-guide.md): This guide shows you how to use the v3.1 version in your applications and workflows.
* [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument): This API shows you more about the v3.0 version and new capabilities.

1. Build your training dataset.
ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-general-document.md
monikerRange: '>=doc-intel-3.0.0'
[!INCLUDE [applies to v3.1 and v3.0](includes/applies-to-v3-1-v3-0.md)]
-The General document v3.0 model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is only available with the v3.0 API. For more information on using the v3.0 API, see our [migration guide](v3-migration-guide.md).
+The General document v3.0 model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is only available with v3.0 and later API versions. For more information on using the v3.0 API, see our [migration guide](v3-1-migration-guide.md).
> [!NOTE]
> The ```2023-07-31``` (GA) version of the general document model adds support for **normalized keys**.
The General document v3.0 model combines powerful Optical Character Recognition
The general document API supports most form types; it analyzes your documents and extracts keys and associated values. It's ideal for extracting common key-value pairs from documents. You can use the general document model as an alternative to training a custom model without labels.
-### Key normalization (common name)
-
-When the service analyzes documents with variations in key names like ```Social Security Number```, ```Social Security Nbr```, ```SSN```, the output normalizes the key variations to a single common name, ```SocialSecurityNumber```. This normalization simplifies downstream processing for documents where you no longer need to account for variations in the key name.
## Development options

Document Intelligence v3.0 supports the following tools:
Keys can also exist in isolation when the model detects that a key exists, with
## Next steps
-* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows.
* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
ai-services Concept Id Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-id-document.md
The following are the fields extracted per document type. The Document Intellige
### Migration guide
-* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows.
::: moniker-end
ai-services Concept Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-invoice.md
See how data, including customer information, vendor details, and line items, is
| Supported languages | Details |
|:-|:|
-| &bullet; English (en) | United States (us), Australia (-au), Canada (-ca), United Kingdom (-uk), India (-in)|
-| &bullet; Spanish (es) |Spain (es)|
-| &bullet; German (de) | Germany (de)|
-| &bullet; French (fr) | France (fr) |
-| &bullet; Italian (it) | Italy (it)|
-| &bullet; Portuguese (pt) | Portugal (pt), Brazil (br)|
-| &bullet; Dutch (nl) | Netherlands (nl)|
-| &bullet; Czech (cs) | Czechoslovakia (cz)|
-| &bullet; Danish (da) | Denmark (dk)|
-| &bullet; Estonian (et) | Estonia (ee)|
-| &bullet; Finnish (fi) | Finland (fl)|
-| &bullet; Croation (hr) | Bosnia and Herzegovina (ba), Croatia (hr), Serbia (rs)|
-| &bullet; Hungarian (hu) | Hungary (hu)|
-| &bullet; Icelandic (is) | Iceland (is)|
-| &bullet; Japanese (ja) | Japan (ja)|
-| &bullet; Korean (ko) | Korea (kr)|
-| &bullet; Lithuanian (lt) | Lithuania (lt)|
-| &bullet; Latvian (lv) | Latvia (lv)|
-| &bullet; Malay (ms) | Malasia (ms)|
-| &bullet; Norwegian (nb) | Norway (no)|
-| &bullet; Polish (pl) | Poland (pl)|
-| &bullet; Romanian (ro) | Romania (ro)|
-| &bullet; Slovak (sk) | Slovakia (sv)|
-| &bullet; Slovenian (sl) | Slovenia (sl)|
+| &bullet; English (`en`) | United States (`us`), Australia (`au`), Canada (`ca`), United Kingdom (`uk`), India (`in`)|
+| &bullet; Spanish (`es`) |Spain (`es`)|
+| &bullet; German (`de`) | Germany (`de`)|
+| &bullet; French (`fr`) | France (`fr`) |
+| &bullet; Italian (`it`) | Italy (`it`)|
+| &bullet; Portuguese (`pt`) | Portugal (`pt`), Brazil (`br`)|
+| &bullet; Dutch (`nl`) | Netherlands (`nl`)|
+| &bullet; Czech (`cs`) | Czechoslovakia (`cz`)|
+| &bullet; Danish (`da`) | Denmark (`dk`)|
+| &bullet; Estonian (`et`) | Estonia (`ee`)|
+| &bullet; Finnish (`fi`) | Finland (`fl`)|
+| &bullet; Croatian (`hr`) | Bosnia and Herzegovina (`ba`), Croatia (`hr`), Serbia (`rs`)|
+| &bullet; Hungarian (`hu`) | Hungary (`hu`)|
+| &bullet; Icelandic (`is`) | Iceland (`is`)|
+| &bullet; Japanese (`ja`) | Japan (`ja`)|
+| &bullet; Korean (`ko`) | Korea (`kr`)|
+| &bullet; Lithuanian (`lt`) | Lithuania (`lt`)|
+| &bullet; Latvian (`lv`) | Latvia (`lv`)|
+| &bullet; Malay (`ms`) | Malaysia (`ms`)|
+| &bullet; Norwegian (`nb`) | Norway (`no`)|
+| &bullet; Polish (`pl`) | Poland (`pl`)|
+| &bullet; Romanian (`ro`) | Romania (`ro`)|
+| &bullet; Slovak (`sk`) | Slovakia (`sv`)|
+| &bullet; Slovenian (`sl`) | Slovenia (`sl`)|
| &bullet; Serbian (sr-Latn) | Serbia (latn-rs)|
-| &bullet; Albanian (sq) | Albania (al)|
-| &bullet; Swedish (sv) | Sweden (se)|
+| &bullet; Albanian (`sq`) | Albania (`al`)|
+| &bullet; Swedish (`sv`) | Sweden (`se`)|
| &bullet; Chinese (simplified (zh-hans)) | China (zh-hans-cn)|
| &bullet; Chinese (traditional (zh-hant)) | Hong Kong (zh-hant-hk), Taiwan (zh-hant-tw)|

| Supported Currency Codes | Details |
|:-|:|
-| &bullet; ARS | United States (us) |
-| &bullet; AUD | Australia (au) |
-| &bullet; BRL | United States (us) |
-| &bullet; CAD | Canada (ca) |
-| &bullet; CLP | United States (us) |
-| &bullet; CNY | United States (us) |
-| &bullet; COP | United States (us) |
-| &bullet; CRC | United States (us) |
-| &bullet; CZK | United States (us) |
-| &bullet; DKK | United States (us) |
-| &bullet; EUR | United States (us) |
-| &bullet; GBP | United Kingdom (uk) |
-| &bullet; HUF | United States (us) |
-| &bullet; IDR | United States (us) |
-| &bullet; INR | United States (us) |
-| &bullet; ISK | United States (us) |
-| &bullet; JPY | Japan (jp) |
-| &bullet; KRW | United States (us) |
-| &bullet; NOK | United States (us) |
-| &bullet; PAB | United States (us) |
-| &bullet; PEN | United States (us) |
-| &bullet; PLN | United States (us) |
-| &bullet; RON | United States (us) |
-| &bullet; RSD | United States (us) |
-| &bullet; SEK | United States (us) |
-| &bullet; TWD | United States (us) |
-| &bullet; USD | United States (us) |
+| &bullet; ARS | United States (`us`) |
+| &bullet; AUD | Australia (`au`) |
+| &bullet; BRL | United States (`us`) |
+| &bullet; CAD | Canada (`ca`) |
+| &bullet; CLP | United States (`us`) |
+| &bullet; CNY | United States (`us`) |
+| &bullet; COP | United States (`us`) |
+| &bullet; CRC | United States (`us`) |
+| &bullet; CZK | United States (`us`) |
+| &bullet; DKK | United States (`us`) |
+| &bullet; EUR | United States (`us`) |
+| &bullet; GBP | United Kingdom (`uk`) |
+| &bullet; HUF | United States (`us`) |
+| &bullet; IDR | United States (`us`) |
+| &bullet; INR | United States (`us`) |
+| &bullet; ISK | United States (`us`) |
+| &bullet; JPY | Japan (`jp`) |
+| &bullet; KRW | United States (`us`) |
+| &bullet; NOK | United States (`us`) |
+| &bullet; PAB | United States (`us`) |
+| &bullet; PEN | United States (`us`) |
+| &bullet; PLN | United States (`us`) |
+| &bullet; RON | United States (`us`) |
+| &bullet; RSD | United States (`us`) |
+| &bullet; SEK | United States (`us`) |
+| &bullet; TWD | United States (`us`) |
+| &bullet; USD | United States (`us`) |
## Field extraction
The JSON output has three parts:
## Migration guide
-* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows.
::: moniker-end
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
A composed model is created by taking a collection of custom models and assignin
### Version migration
-Learn how to use Document Intelligence v3.0 in your applications by following our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md)
+Learn how to use Document Intelligence v3.1 in your applications by following our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md).
::: moniker-end
A composed model is created by taking a collection of custom models and assignin
### Version migration
- You can learn how to use Document Intelligence v3.0 in your applications by following our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md)
+You can learn how to use Document Intelligence v3.1 in your applications by following our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md).
::: moniker-end
ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-w2.md
Try extracting data from W-2 forms using the Document Intelligence Studio. You n
* | W2FormVariant | | String | The variants of W-2 forms, including *W-2*, *W-2AS*, *W-2CM*, *W-2GU*, *W-2VI* | W-2 |
-### Migration guide and REST API v3.0
+### Migration guide and REST API v3.1
-* Follow our [**Document Intelligence v3.0 migration guide**](v3-migration-guide.md) to learn how to use the v3.0 version in your applications and workflows.
+* Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows.
* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) to learn more about the v3.0 version and new capabilities.
ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/deploy-label-tool.md
monikerRange: 'doc-intel-2.1.0'
>
> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+> * You can refer to the [API migration guide](v3-1-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with the v3.0 version.

> [!NOTE]
ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/label-tool.md
monikerRange: 'doc-intel-2.1.0'
>
> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+> * You can refer to the [API migration guide](v3-1-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with v3.0.

In this article, you use the Document Intelligence REST API with the Sample Labeling tool to train a custom model with manually labeled data.
ai-services Try Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md
To label for signature detection: (Custom form only)
## Next steps
-* Follow our [**Document Intelligence v3.0 migration guide**](../v3-migration-guide.md) to learn the differences from the previous version of the REST API.
+* Follow our [**Document Intelligence v3.1 migration guide**](../v3-1-migration-guide.md) to learn the differences from the previous version of the REST API.
* Explore our [**v3.0 SDK quickstarts**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to try the v3.0 features in your applications using the new SDKs.
* Refer to our [**v3.0 REST API quickstarts**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to try the v3.0 features using the new REST API.
ai-services Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/studio-overview.md
monikerRange: '>=doc-intel-3.0.0'
Document Intelligence Studio is an online tool to visually explore, understand, train, and integrate features from the Document Intelligence service into your applications. The studio provides a platform for you to experiment with the different Document Intelligence models and sample returned data in an interactive manner without the need to write code.
-The studio supports Document Intelligence v3.0 models and v3.0 model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+The studio supports Document Intelligence v3.0 models and v3.0 model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-1-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
## Get started using Document Intelligence Studio
ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/supervised-table-tags.md
monikerRange: 'doc-intel-2.1.0'
>
> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
+> * You can refer to the [API migration guide](v3-1-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
> * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with v3.0.

In this article, you'll learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs, such as extracting information from forms with complex hierarchical structures or handling items that aren't automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model.
ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-migration-guide.md
-Title: "How-to: Migrate your application from Document Intelligence v2.1 to v3.0."
-description: This how-to guide specifies the differences between Document Intelligence API v2.1 and v3.0. You also learn how to move to the newer version of the API.
-Previously updated : 07/18/2023
-monikerRange: '<=doc-intel-3.1.0'
---
-# Document Intelligence v3.0 migration
-
-> [!IMPORTANT]
->
-> Document Intelligence REST API v3.0 introduces breaking changes in the REST API request and analyze response JSON.
-
-## Migrating from a v3.0 preview API version
-
-Preview APIs are periodically deprecated. If you're using a preview API version, update your application to target the GA API version. To migrate from the 2021-09-30-preview, 2022-01-30-preview or the 2022-06-30-preview API versions to the `2022-08-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview.md).
-
-> [!IMPORTANT]
->
-> Preview API versions 2021-09-30-preview, 2022-01-30-preview and 2022-06-30-preview are being retired July 31st 2023. All analyze requests that use these API versions will fail. Custom neural models trained with any of these API versions will no longer be usable once the API versions are deprecated. All custom neural models trained with preview API versions will need to be retrained with the GA API version.
-
-The `2022-08-31` (GA) API has a few updates from the preview API versions:
-
-* Field rename: boundingBox to polygon, to support non-quadrilateral polygon regions.
-* Field deleted: entities removed from the result of the general document model.
-* Field rename: documentLanguage.languageCode to locale
-* Added support for HEIF format
-* Added paragraph detection, with role classification for layout and general document models.
-* Added support for parsed address fields.
-
-## Migrating from v2.1
-
-Document Intelligence v3.0 introduces several new features and capabilities:
-
-* [Document Intelligence REST API](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) has been redesigned for better usability.
-* [**General document (v3.0)**](concept-general-document.md) model is a new API that extracts text, tables, structure, and key-value pairs, from forms and documents.
-* [**Custom neural model (v3.0)**](concept-custom-neural.md) is a new custom model type to extract fields from structured and unstructured documents.
-* [**Receipt (v3.0)**](concept-receipt.md) model supports single-page hotel receipt processing.
-* [**ID document (v3.0)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
-* [**Custom model API (v3.0)**](concept-custom.md) supports signature detection for custom template models.
-* [**Custom model API (v3.0)**](overview.md) supports analysis of all the newly added prebuilt models. For a complete list of prebuilt models, see the [overview](overview.md) page.
-
-In this article, we show you the differences between Document Intelligence v2.1 and v3.0 and demonstrate how to move to the newer version of the API.
-
-> [!CAUTION]
->
-> * REST API **2022-08-31** release includes a breaking change in the REST API analyze response JSON.
-> * The `boundingBox` property is renamed to `polygon` in each instance.
-
-## Changes to the REST API endpoints
-
- The v3.0 REST API combines the analysis operations for layout analysis, prebuilt models, and custom models into a single pair of operations by assigning **`documentModels`** and **`modelId`** to the layout analysis (prebuilt-layout) and prebuilt models.
-
-### POST request
-
-```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31
-
-```
-
-### GET request
-
-```http
-https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}/AnalyzeResult/{resultId}?api-version=2022-08-31
-```
-
-### Analyze operation
-
-* The request payload and call pattern remain unchanged.
-* The Analyze operation specifies the input document and content-specific configurations; it returns the analyze result URL via the Operation-Location header in the response.
-* Poll this Analyze Result URL, via a GET request to check the status of the analyze operation (minimum recommended interval between requests is 1 second).
-* Upon success, status is set to succeeded and [analyzeResult](#changes-to-analyze-result) is returned in the response body. If errors are encountered, status is set to `failed`, and an error is returned.
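The call pattern above can be sketched as a minimal polling loop. This is an illustrative sketch, not SDK code: `get_status` is a placeholder for the real HTTP GET against the Operation-Location URL, injected here so the loop logic stands alone:

```python
import time


def poll_analyze_result(get_status, interval_seconds=1.0, timeout_seconds=120.0):
    """Poll until the analyze operation reports 'succeeded' or 'failed'.

    get_status: callable returning the parsed JSON body of a GET on the
    Operation-Location URL (a placeholder for a real HTTP call).
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        body = get_status()
        status = body.get("status")
        if status == "succeeded":
            return body["analyzeResult"]
        if status == "failed":
            raise RuntimeError(str(body.get("error", "analyze operation failed")))
        # Minimum recommended interval between polls is 1 second.
        time.sleep(interval_seconds)
    raise TimeoutError("analyze operation did not complete in time")
```

A real client would wrap `requests.get(operation_location, headers=...).json()` in the callable; stubbing it out also makes the loop easy to unit test.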
-
-| Model | v2.1 | v3.0 |
-|:--| :--| :--|
-| **Request URL prefix**| **https://{your-form-recognizer-endpoint}/formrecognizer/v2.1** | **https://{your-form-recognizer-endpoint}/formrecognizer** |
-| **General document**|N/A|`/documentModels/prebuilt-document:analyze` |
-| **Layout**| /layout/analyze |`/documentModels/prebuilt-layout:analyze`|
-|**Custom**| /custom/{modelId}/analyze |`/documentModels/{modelId}:analyze` |
-| **Invoice** | /prebuilt/invoice/analyze | `/documentModels/prebuilt-invoice:analyze` |
-| **Receipt** | /prebuilt/receipt/analyze | `/documentModels/prebuilt-receipt:analyze` |
-| **ID document** | /prebuilt/idDocument/analyze | `/documentModels/prebuilt-idDocument:analyze` |
-|**Business card**| /prebuilt/businessCard/analyze| `/documentModels/prebuilt-businessCard:analyze`|
-|**W-2**| /prebuilt/w-2/analyze| `/documentModels/prebuilt-w-2:analyze`|
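Because the v3.0 scheme in the table above is uniform, every analyze URL derives from just the endpoint and a model ID. A small helper (hypothetical, for illustration) makes the pattern explicit:

```python
def analyze_url(endpoint: str, model_id: str, api_version: str = "2022-08-31") -> str:
    """Build the v3.0-style analyze URL for any prebuilt or custom model."""
    return (f"{endpoint.rstrip('/')}/formrecognizer/documentModels/"
            f"{model_id}:analyze?api-version={api_version}")
```

For example, `analyze_url(endpoint, "prebuilt-invoice")` replaces the v2.1 `/prebuilt/invoice/analyze` path, and a custom model just substitutes its own `modelId`.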
-
-### Analyze request body
-
-The content to be analyzed is provided via the request body. Either a URL or base64 encoded data can be used to construct the request.
-
- To specify a publicly accessible web URL, set  Content-Type to **application/json** and send the following JSON body:
-
- ```json
- {
- "urlSource": "{urlPath}"
- }
- ```
-
-Base 64 encoding is also supported in Document Intelligence v3.0:
-
-```json
-{
- "base64Source": "{base64EncodedContent}"
-}
-```
-
-### Additionally supported parameters
-
-Parameters that continue to be supported:
-
-* `pages` : Analyze only a specific subset of pages in the document. List of page numbers indexed from the number `1` to analyze. Ex. "1-3,5,7-9"
-* `locale` : Locale hint for text recognition and document analysis. Value may contain only the language code (ex. `en`, `fr`) or BCP 47 language tag (ex. "en-US").
-
-Parameters no longer supported:
-
-* includeTextDetails
-
-The new response format is more compact and the full output is always returned.
-
-## Changes to analyze result
-
-Analyze response has been refactored to the following top-level results to support multi-page elements.
-
-* `pages`
-* `tables`
-* `keyValuePairs`
-* `entities`
-* `styles`
-* `documents`
-
-> [!NOTE]
->
-> The analyzeResult response includes a number of changes, such as elements moving up from a property of `pages` to a top-level property within `analyzeResult`.
-
-```json
-
-{
-// Basic analyze result metadata
-"apiVersion": "2022-08-31", // REST API version used
-"modelId": "prebuilt-invoice", // ModelId used
-"stringIndexType": "textElements", // Character unit used for string offsets and lengths:
-// textElements, unicodeCodePoint, utf16CodeUnit
-// Concatenated content in global reading order across pages.
-// Words are generally delimited by space, except CJK (Chinese, Japanese, Korean) characters.
-// Lines and selection marks are generally delimited by newline character.
-// Selection marks are represented in Markdown emoji syntax (:selected:, :unselected:).
-"content": "CONTOSO LTD.\nINVOICE\nContoso Headquarters...", "pages": [ // List of pages analyzed
-{
-// Basic page metadata
-"pageNumber": 1, // 1-indexed page number
-"angle": 0, // Orientation of content in clockwise direction (degree)
-"width": 0, // Page width
-"height": 0, // Page height
-"unit": "pixel", // Unit for width, height, and polygon coordinates
-"spans": [ // Parts of top-level content covered by page
-{
-"offset": 0, // Offset in content
-"length": 7 // Length in content
-}
-], // List of words in page
-"words": [
-{
-"text": "CONTOSO", // Equivalent to $.content.Substring(span.offset, span.length)
-"boundingBox": [ ... ], // Position in page
-"confidence": 0.99, // Extraction confidence
-"span": { ... } // Part of top-level content covered by word
-}, ...
-], // List of selectionMarks in page
-"selectionMarks": [
-{
-"state": "selected", // Selection state: selected, unselected
-"boundingBox": [ ... ], // Position in page
-"confidence": 0.95, // Extraction confidence
-"span": { ... } // Part of top-level content covered by selection mark
-}, ...
-], // List of lines in page
-"lines": [
-{
-"content": "CONTOSO LTD.", // Concatenated content of line (may contain both words and selectionMarks)
-"boundingBox": [ ... ], // Position in page
-"spans": [ ... ], // Parts of top-level content covered by line
-}, ...
-]
-}, ...
-], // List of extracted tables
-"tables": [
-{
-"rowCount": 1, // Number of rows in table
-"columnCount": 1, // Number of columns in table
-"boundingRegions": [ // Polygons or Bounding boxes potentially across pages covered by table
-{
-"pageNumber": 1, // 1-indexed page number
-"polygon": [ ... ], // Previously Bounding box, renamed to polygon in the 2022-08-31 API
-}
-],
-"spans": [ ... ], // Parts of top-level content covered by table // List of cells in table
-"cells": [
-{
-"kind": "stub", // Cell kind: content (default), rowHeader, columnHeader, stub, description
-"rowIndex": 0, // 0-indexed row position of cell
-"columnIndex": 0, // 0-indexed column position of cell
-"rowSpan": 1, // Number of rows spanned by cell (default=1)
-"columnSpan": 1, // Number of columns spanned by cell (default=1)
-"content": "SALESPERSON", // Concatenated content of cell
-"boundingRegions": [ ... ], // Bounding regions covered by cell
-"spans": [ ... ] // Parts of top-level content covered by cell
-}, ...
-]
-}, ...
-], // List of extracted key-value pairs
-"keyValuePairs": [
-{
-"key": { // Extracted key
-"content": "INVOICE:", // Key content
-"boundingRegions": [ ... ], // Key bounding regions
-"spans": [ ... ] // Key spans
-},
-"value": { // Extracted value corresponding to key, if any
-"content": "INV-100", // Value content
-"boundingRegions": [ ... ], // Value bounding regions
-"spans": [ ... ] // Value spans
-},
-"confidence": 0.95 // Extraction confidence
-}, ...
-],
-"styles": [
-{
-"isHandwritten": true, // Is content in this style handwritten?
-"spans": [ ... ], // Spans covered by this style
-"confidence": 0.95 // Detection confidence
-}, ...
-], // List of extracted documents
-"documents": [
-{
-"docType": "prebuilt-invoice", // Classified document type (model dependent)
-"boundingRegions": [ ... ], // Document bounding regions
-"spans": [ ... ], // Document spans
-"confidence": 0.99, // Document splitting/classification confidence // List of extracted fields
-"fields": {
-"VendorName": { // Field name (docType dependent)
-"type": "string", // Field value type: string, number, array, object, ...
-"valueString": "CONTOSO LTD.",// Normalized field value
-"content": "CONTOSO LTD.", // Raw extracted field content
-"boundingRegions": [ ... ], // Field bounding regions
-"spans": [ ... ], // Field spans
-"confidence": 0.99 // Extraction confidence
-}, ...
-}
-}, ...
-]
-}
-
-```
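As the inline comments note, each `span` maps an element back into the top-level `content` string. A minimal sketch of that relationship, using a trimmed-down sample response:

```python
# Trimmed-down, illustrative analyzeResult; real responses carry many more fields.
analyze_result = {
    "content": "CONTOSO LTD.\nINVOICE",
    "pages": [
        {
            "pageNumber": 1,
            "words": [
                {"text": "CONTOSO", "span": {"offset": 0, "length": 7}},
                {"text": "LTD.", "span": {"offset": 8, "length": 4}},
            ],
        }
    ],
}

def span_text(result: dict, span: dict) -> str:
    """Resolve a span back into the top-level content string."""
    return result["content"][span["offset"]: span["offset"] + span["length"]]

# Each word's text equals the slice of content its span points at.
for page in analyze_result["pages"]:
    for word in page["words"]:
        assert span_text(analyze_result, word["span"]) == word["text"]
```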
-
-## Build or train model
-
-The model object has three updates in the new API:
-
-* ```modelId``` is now a property that can be set on a model to give it a human-readable name.
-* ```modelName``` has been renamed to ```description```
-* ```buildMode``` is a new property with values of ```template``` for custom form models or ```neural``` for custom neural models.
-
-The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset, and it returns the result via the Operation-Location header in the response. Poll this model operation URL via a GET request to check the status of the build operation (the minimum recommended interval between requests is 1 second). Unlike v2.1, this URL isn't the resource location of the model. Instead, the model URL can be constructed from the given modelId, which is also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and the result contains the custom model info. If errors are encountered, status is set to ```failed``` and the error is returned.
-
-The following code is a sample build request using a SAS token. Note the trailing slash when setting the prefix or folder path.
-
-```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:build?api-version=2022-08-31
-
-{
- "modelId": {modelId},
- "description": "Sample model",
- "buildMode": "template",
- "azureBlobSource": {
- "containerUrl": "https://{storageAccount}.blob.core.windows.net/{containerName}?{sasToken}",
- "prefix": "{folderName/}"
- }
-}
-```
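The polling pattern described above can be sketched as follows; `get_status` is a hypothetical stand-in for the GET request to the Operation-Location URL, not an SDK function:

```python
import time

def poll_operation(get_status, interval_seconds=1.0, timeout_seconds=30.0):
    """Poll a build operation until it reaches a terminal state.

    `get_status` stands in for a GET request to the Operation-Location URL
    returned by the build call; it should return the operation JSON.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        operation = get_status()
        # succeeded and failed are the terminal states described above.
        if operation["status"] in ("succeeded", "failed"):
            return operation
        time.sleep(interval_seconds)
    raise TimeoutError("build operation did not complete in time")

# Stubbed example: the operation succeeds on the third poll.
responses = iter([
    {"status": "notStarted"},
    {"status": "running"},
    {"status": "succeeded", "result": {"modelId": "my-model"}},
])
result = poll_operation(lambda: next(responses), interval_seconds=0)
print(result["status"])  # succeeded
```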
-
-## Changes to compose model
-
-Model compose is now limited to a single level of nesting. Composed models are now consistent with custom models, with the addition of ```modelId``` and ```description``` properties.
-
-```json
-POST https://{your-form-recognizer-endpoint}/formrecognizer/documentModels:compose?api-version=2022-08-31
-{
- "modelId": "{composedModelId}",
- "description": "{composedModelDescription}",
- "componentModels": [
- { "modelId": "{modelId1}" },
-    { "modelId": "{modelId2}" }
- ]
-}
-
-```
-
-## Changes to copy model
-
-The call pattern for copy model remains unchanged:
-
-* Authorize the copy operation with the target resource by calling ```authorizeCopy```, now a POST request.
-* Submit the authorization to the source resource to copy the model by calling ```copyTo```.
-* Poll the returned operation to validate that the operation completed successfully.
-
-The only changes to the copy model function are:
-
-* HTTP action on the ```authorizeCopy``` is now a POST request.
-* The authorization payload contains all the information needed to submit the copy request.
-
-***Authorize the copy***
-
-```json
-POST https://{targetHost}/formrecognizer/documentModels:authorizeCopy?api-version=2022-08-31
-{
- "modelId": "{targetModelId}",
-  "description": "{targetModelDescription}"
-}
-```
-
-Use the response body from the authorize action to construct the request for the copy.
-
-```json
-POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copyTo?api-version=2022-08-31
-{
- "targetResourceId": "{targetResourceId}",
- "targetResourceRegion": "{targetResourceRegion}",
- "targetModelId": "{targetModelId}",
- "targetModelLocation": "https://{targetHost}/formrecognizer/documentModels/{targetModelId}",
- "accessToken": "{accessToken}",
- "expirationDateTime": "2021-08-02T03:56:11Z"
-}
-```
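The two-step flow can be sketched with stubbed helpers standing in for the two POST requests; the field names mirror the samples above, while the host and model names are illustrative:

```python
def authorize_copy(target_host: str, target_model_id: str) -> dict:
    # Stands in for POST {targetHost}/formrecognizer/documentModels:authorizeCopy.
    # The returned payload contains everything needed for the copy request.
    return {
        "targetResourceId": "example-resource-id",
        "targetResourceRegion": "westus",
        "targetModelId": target_model_id,
        "targetModelLocation": f"https://{target_host}/formrecognizer/documentModels/{target_model_id}",
        "accessToken": "example-access-token",
        "expirationDateTime": "2021-08-02T03:56:11Z",
    }

def copy_to(source_host: str, source_model_id: str, authorization: dict) -> dict:
    # Stands in for POST {sourceHost}/formrecognizer/documentModels/{sourceModelId}:copyTo;
    # the authorization payload is forwarded unchanged as the request body.
    return {"status": "succeeded", "copiedTo": authorization["targetModelLocation"]}

auth = authorize_copy("target.cognitiveservices.azure.com", "copied-model")
result = copy_to("source.cognitiveservices.azure.com", "source-model", auth)
print(result["status"])  # succeeded
```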
-
-## Changes to list models
-
-The list models operation has been extended to return both prebuilt and custom models. All prebuilt model names start with ```prebuilt-```. Only models with a status of ```succeeded``` are returned. To list models that either failed or are in progress, see [List Operations](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModels).
-
-***Sample list models request***
-
-```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels?api-version=2022-08-31
-```
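A sketch of separating prebuilt from custom models in the list response, relying on the `prebuilt-` naming convention noted above; the response shape shown here is illustrative:

```python
# Illustrative list-models response body (trimmed).
response = {
    "value": [
        {"modelId": "prebuilt-invoice"},
        {"modelId": "prebuilt-receipt"},
        {"modelId": "my-custom-model"},
    ]
}

# Prebuilt model names all start with the "prebuilt-" prefix.
prebuilt = [m for m in response["value"] if m["modelId"].startswith("prebuilt-")]
custom = [m for m in response["value"] if not m["modelId"].startswith("prebuilt-")]

print(len(prebuilt), len(custom))  # 2 1
```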
-
-## Change to get model
-
-Because get model now includes prebuilt models, the get operation returns a ```docTypes``` dictionary. Each document type description includes name, optional description, field schema, and optional field confidence. The field schema describes the list of fields potentially returned with the document type.
-
-```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-08-31
-```
-
-## New get info operation
-
-The ```info``` operation on the service returns the custom model count and custom model limit.
-
-```json
-GET https://{your-form-recognizer-endpoint}/formrecognizer/info?api-version=2022-08-31
-```
-
-***Sample response***
-
-```json
-{
- "customDocumentModels": {
- "count": 5,
- "limit": 100
- }
-}
-```
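One possible use of this response is a headroom check before training another custom model (a minimal sketch, assuming the response shape shown above):

```python
# Sample info response from the service, as shown above.
info = {"customDocumentModels": {"count": 5, "limit": 100}}

models = info["customDocumentModels"]
# Remaining capacity before the custom model limit is reached.
remaining = models["limit"] - models["count"]

print(remaining)  # 95
```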
-
-## Next steps
-
-In this migration guide, you've learned how to upgrade your existing Document Intelligence application to use the v3.0 APIs.
-
-* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)
-* [What is Document Intelligence?](overview.md)
-* [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md
monikerRange: '<=doc-intel-3.1.0'
Document Intelligence service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and our newest documentation.

>[!NOTE]
-> With the 2022-08-31 API general availability (GA) release, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview, the 2022-01-30-preview or he 2022-06-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved, for more information, _see_ the [migration guide](v3-migration-guide.md).
+> With the 2022-08-31 API general availability (GA) release, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview, the 2022-01-30-preview, or the 2022-06-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved; for more information, _see_ the [migration guide](v3-1-migration-guide.md).
## July 2023
Document Intelligence service is updated on an ongoing basis. Bookmark this page
The Document Intelligence version 3.1 API is now generally available (GA)! The API version corresponds to ```2023-07-31```. The v3.1 API introduces new and updated capabilities:
-* Document Intelligence APIs are now more modular, with support for optional features, you can now customize the output to specifically include the features you need. Learn more about the [optional parameters](v3-migration-guide.md).
+* Document Intelligence APIs are now more modular, with support for optional features. You can now customize the output to include only the features you need. Learn more about the [optional parameters](v3-1-migration-guide.md).
* Document classification API for splitting a single file into individual documents. [Learn more](concept-custom-classifier.md) about document classification.
* [Prebuilt contract model](concept-contract.md)
* [Prebuilt US tax form 1098 model](concept-tax-document.md)
The v3.1 API introduces new and updated capabilities:
* Support for [high resolution documents](concept-add-on-capabilities.md)
* Custom neural models now require a single labeled sample to train
* Custom neural models language expansion. Train a neural model for documents in 30 languages. See [language support](language-support.md) for the complete list of supported languages
-* Prebuilt invoice locale expansion
-* Prebuilt receipt updates
+* 🆕 [Prebuilt health insurance card model](concept-insurance-card.md).
+* [Prebuilt invoice model locale expansion](concept-invoice.md#supported-languages-and-locales).
+* [Prebuilt receipt model language and locale expansion](concept-receipt.md#supported-languages-and-locales) with more than 100 languages supported.
+* [Prebuilt ID model](concept-id-document.md#supported-document-types) now supports European IDs.
+ **Document Intelligence Studio UX Updates**
The v3.1 API introduces new and updated capabilities:
* [**Font extraction**](concept-add-on-capabilities.md#font-property-extraction) is now recognized with the ```2023-02-28-preview``` API.
* [**Formula extraction**](concept-add-on-capabilities.md#formula-extraction) is now recognized with the ```2023-02-28-preview``` API.
* [**High resolution extraction**](concept-add-on-capabilities.md#high-resolution-extraction) is now recognized with the ```2023-02-28-preview``` API.
-* [**Common name key normalization**](concept-general-document.md#key-normalization-common-name) capabilities are added to the General Document model to improve processing forms with variations in key names.
* [**Custom extraction model updates**](concept-custom.md)
* [**Custom neural model**](concept-custom-neural.md) now supports added languages for training and analysis. Train neural models for Dutch, French, German, Italian and Spanish.
* [**Custom template model**](concept-custom-template.md) now has an improved signature detection capability.
This release introduces the Document Intelligence 2.0. In the next sections, you
* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Multi Service Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md
Previously updated : 7/18/2023 Last updated : 08/02/2023 zone_pivot_groups: programming-languages-portal-cli-sdk
You can access Azure AI services through two different resources: A multi-servic
    * Consolidates billing from the services you use.
* Single-service resource:
    * Access a single Azure AI service with a unique key and endpoint for each service created.
- * Most Azure AI servives offer a free tier to try it out.
+ * Most Azure AI services offer a free tier to try it out.
Azure AI services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md) that you create under your Azure subscription. After you create a resource, you can use the keys and endpoint generated to authenticate your applications.
Azure AI services are represented by Azure [resources](../azure-resource-manager
## Next steps
-* Explore [Azure AI services](./what-are-ai-services.md) and choose a service to get started.
+* Now that you have a resource, you can authenticate your API requests to the following Azure AI services. Use these links to find quickstart articles, samples, and more to start using your resource.
+ * [Content Moderator](./content-moderator/index.yml) (retired)
+ * [Custom Vision](./custom-vision-service/index.yml)
+ * [Document Intelligence](./document-intelligence/index.yml)
+ * [Face](./computer-vision/overview-identity.md)
+ * [Language](./language-service/index.yml)
+ * [Speech](./speech-service/index.yml)
+ * [Translator](./translator/index.yml)
+ * [Vision](./computer-vision/index.yml)
ai-services Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/quota.md
Quota provides the flexibility to actively manage the allocation of rate limits
## Prerequisites

> [!IMPORTANT]
-> Viewing quota and deploying models requires the **Cognitive Services Usages Reader** role. This role provides the minimal access necessary to view quota usage across an Azure subscription.
+> Viewing quota and deploying models requires the **Cognitive Services Usages Reader** role. This role provides the minimal access necessary to view quota usage across an Azure subscription. To learn more about this role and the other roles you will need to access Azure OpenAI, consult our [Azure role-based access (Azure RBAC) guide](./role-based-access-control.md).
> > This role can be found in the Azure portal under **Subscriptions** > **Access control (IAM)** > **Add role assignment** > search for **Cognitive Services Usages Reader**. This role **must be applied at the subscription level**; it does not exist at the resource level. >
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/role-based-access-control.md
+
+ Title: Role-based access control for Azure OpenAI
+
+description: Learn how to use Azure RBAC for managing individual access to Azure OpenAI resources.
++++++ Last updated : 08/02/2022+
+recommendations: false
++
+# Role-based access control for Azure OpenAI Service
+
+Azure OpenAI Service supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions based on their needs for a given project. For more information, see the [Azure RBAC documentation](../../../role-based-access-control/index.yml).
+
+## Add role assignment to an Azure OpenAI resource
+
+You can assign Azure RBAC roles to an Azure OpenAI resource. To grant access to an Azure resource, you add a role assignment.
+1. In the [Azure portal](https://portal.azure.com/), search for **Azure OpenAI**.
+1. Select **Azure OpenAI**, and navigate to your specific resource.
+ > [!NOTE]
+ > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
+
+1. Select **Access control (IAM)** on the left navigation pane.
+1. Select **Add**, then select **Add role assignment**.
+1. On the **Role** tab on the next screen, select a role you want to add.
+1. On the **Members** tab, select a user, group, service principal, or managed identity.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
+
+## Azure OpenAI roles
+
+- **Cognitive Services OpenAI User**
+- **Cognitive Services OpenAI Contributor**
+- **Cognitive Services Contributor**
+- **Cognitive Services Usages Reader**
+
+> [!NOTE]
+> Subscription level *Owner* and *Contributor* roles are inherited and take priority over the custom Azure OpenAI roles applied at the Resource Group level.
+
+This section covers common tasks that different accounts and combinations of accounts are able to perform for Azure OpenAI resources. To view the full list of available **Actions** and **DataActions** that an individual role grants, go to your Azure OpenAI resource and select **Access control (IAM)** > **Roles**. Then, under the **Details** column for the role you're interested in, select **View**. By default, the **Actions** radio button is selected. You need to examine both **Actions** and **DataActions** to understand the full scope of capabilities assigned to a role.
+
+### Cognitive Services OpenAI User
+
+If a user were granted role-based access to only this role for an Azure OpenAI resource, they would be able to perform the following common tasks:
+
+✅ View the resource in [Azure portal](https://portal.azure.com) <br>
+✅ View the resource endpoint under **Keys and Endpoint** <br>
+✅ View the resource and associated model deployments in Azure OpenAI Studio <br>
+✅ View what models are available for deployment in Azure OpenAI Studio <br>
+✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource.
+
+A user with only this role assigned would be unable to:
+
+❌ Create new Azure OpenAI resources <br>
+❌ View/Copy/Regenerate keys under **Keys and Endpoint** <br>
+❌ Create new model deployments or edit existing model deployments <br>
+❌ Create/deploy custom fine-tuned models <br>
+❌ Upload datasets for fine-tuning <br>
+❌ Access quota <br>
+❌ Create customized content filters <br>
+❌ Add a data source for the use your data feature
+
+### Cognitive Services OpenAI Contributor
+
+This role has all the permissions of Cognitive Services OpenAI User and is also able to perform additional tasks like:
+
+✅ Create custom fine-tuned models <br>
+✅ Upload datasets for fine-tuning <br>
+
+A user with only this role assigned would be unable to:
+
+❌ Create new Azure OpenAI resources <br>
+❌ View/Copy/Regenerate keys under **Keys and Endpoint** <br>
+❌ Create new model deployments or edit existing model deployments <br>
+❌ Access quota <br>
+❌ Create customized content filters <br>
+❌ Add a data source for the use your data feature
+
+### Cognitive Services Contributor
+
+This role is typically granted access at the resource group level for a user in conjunction with additional roles. By itself this role would allow a user to perform the following tasks.
+
+✅ Create new Azure OpenAI resources within the assigned resource group. <br>
+✅ View resources in the assigned resource group in the [Azure portal](https://portal.azure.com). <br>
+✅ View the resource endpoint under **Keys and Endpoint** <br>
+✅ View/Copy/Regenerate keys under **Keys and Endpoint** <br>
+✅ View what models are available for deployment in Azure OpenAI Studio <br>
+✅ Use the Chat, Completions, and DALL-E (preview) playground experiences to generate text and images with any models that have already been deployed to this Azure OpenAI resource <br>
+✅ Create customized content filters <br>
+✅ Add a data source for the use your data feature <br>
+
+A user with only this role assigned would be unable to:
+
+❌ Create new model deployments or edit existing model deployments <br>
+❌ Access quota <br>
+❌ Create custom fine-tuned models <br>
+❌ Upload datasets for fine-tuning
+
+### Cognitive Services Usages Reader
+
+Viewing quota requires the **Cognitive Services Usages Reader** role. This role provides the minimal access necessary to view quota usage across an Azure subscription.
+
+This role can be found in the Azure portal under **Subscriptions** > **Access control (IAM)** > **Add role assignment** > search for **Cognitive Services Usages Reader**. The role must be applied at the subscription level; it does not exist at the resource level.
+
+If you don't wish to use this role, the subscription **Reader** role provides equivalent access, but it also grants read access beyond the scope of what is needed for viewing quota. Model deployment via the Azure OpenAI Studio is also partially dependent on the presence of this role.
+
+This role provides little value by itself and is instead typically assigned in combination with one or more of the previously described roles.
+
+#### Cognitive Services Usages Reader + Cognitive Services OpenAI User
+
+All the capabilities of Cognitive Services OpenAI User plus the ability to:
+
+✅ View quota allocations in Azure OpenAI Studio
+
+#### Cognitive Services Usages Reader + Cognitive Services OpenAI Contributor
+
+All the capabilities of Cognitive Services OpenAI Contributor plus the ability to:
+
+✅ View quota allocations in Azure OpenAI Studio
+
+#### Cognitive Services Usages Reader + Cognitive Services Contributor
+
+All the capabilities of Cognitive Services Contributor plus the ability to:
+
+✅ View & edit quota allocations in Azure OpenAI Studio <br>
+✅ Create new model deployments or edit existing model deployments <br>
+
+## Common Issues
+
+### Unable to view Azure Cognitive Search option in Azure OpenAI Studio
+
+**Issue:**
+
+When you select an existing Cognitive Search resource, the search indices don't load, and the loading wheel spins continuously. In Azure OpenAI Studio, go to **Playground Chat** > **Add your data (preview)** under Assistant setup. Selecting **Add a data source** opens a modal that allows you to add a data source through either Azure Cognitive Search or Blob Storage. Selecting the Azure Cognitive Search option and an existing Cognitive Search resource should load the available Azure Cognitive Search indices to select from.
+
+**Root cause**
+
+To make a generic API call for listing Azure Cognitive Search services, the following call is made:
+
+`https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Search/searchServices?api-version=2021-04-01-Preview`
+
+Replace `{subscriptionId}` with your actual subscription ID.
+
+For this API call, you need a **subscription-level scope** role. You can use the **Reader** role for read-only access or the **Contributor** role for read-write access. If you only need access to Azure Cognitive Search services, you can use the **Azure Cognitive Search Service Contributor** or **Azure Cognitive Search Service Reader** roles.
+
+**Solution options**
+
+- Contact your subscription administrator or owner: Reach out to the person managing your Azure subscription and request the appropriate access. Explain your requirements and the specific role you need (for example, Reader, Contributor, Azure Cognitive Search Service Contributor, or Azure Cognitive Search Service Reader).
+
+- Request subscription-level or resource group-level access: If you need access to specific resources, ask the subscription owner to grant you access at the appropriate level (subscription or resource group). This enables you to perform the required tasks without having access to unrelated resources.
+
+- Use API keys for Azure Cognitive Search: If you only need to interact with the Azure Cognitive Search service, you can request the admin keys or query keys from the subscription owner. These keys allow you to make API calls directly to the search service without needing an Azure RBAC role. Keep in mind that using API keys will **bypass** the Azure RBAC access control, so use them cautiously and follow security best practices.
+
+### Unable to upload files in Azure OpenAI Studio for on your data
+
+**Symptom:** Unable to access storage for the **on your data** feature using Azure OpenAI Studio.
+
+**Root cause:**
+
+Insufficient subscription-level access for the user attempting to access the blob storage in Azure OpenAI Studio. The user may **not** have the necessary permissions to call the Azure Management API endpoint: `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/listAccountSas?api-version=2022-09-01`
+
+Public access to the blob storage is disabled by the owner of the Azure subscription for security reasons.
+
+Permissions needed for the API call:
+**`Microsoft.Storage/storageAccounts/listAccountSas/action`**: This permission allows the user to list the Shared Access Signature (SAS) tokens for the specified storage account.
+
+Possible reasons why the user may **not** have permissions:
+
+- The user is assigned a limited role in the Azure subscription, which does not include the necessary permissions for the API call.
+- The user's role has been restricted by the subscription owner or administrator due to security concerns or organizational policies.
+- The user's role has been recently changed, and the new role does not grant the required permissions.
+
+**Solution options**
+
+- Verify and update access rights: Ensure the user has the appropriate subscription-level access, including the necessary permissions for the API call (Microsoft.Storage/storageAccounts/listAccountSas/action). If required, request the subscription owner or administrator to grant the necessary access rights.
+- Request assistance from the owner or admin: If the solution above is not feasible, consider asking the subscription owner or administrator to upload the data files on your behalf. This approach can help import the data into Azure OpenAI Studio without the user requiring subscription-level access or public access to the blob storage.
+
+## Next steps
+
+- Learn more about [Azure-role based access control (Azure RBAC)](../../../role-based-access-control/index.yml).
+- Also check out [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md
AKS is a managed service offering unique capabilities with lower management over
We recommend using AKS clusters backed by [Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) and the [Azure Standard Load Balancer](./load-balancer-standard.md) to ensure you get the following features:
-* [Multiple node pools](./use-multiple-node-pools.md),
+* [Multiple node pools](./create-node-pools.md),
* [Availability zones](../reliability/availability-zones-overview.md), * [Authorized IP ranges](./api-server-authorized-ip-ranges.md), * [Cluster autoscaler](./cluster-autoscaler.md),
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
To further help improve cluster resource utilization and free up CPU and memory
<!-- LINKS - internal --> [aks-faq-node-resource-group]: faq.md#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group
-[aks-multiple-node-pools]: use-multiple-node-pools.md
+[aks-multiple-node-pools]: create-node-pools.md
[aks-scale-apps]: tutorial-kubernetes-scale.md [aks-view-master-logs]: ../azure-monitor/containers/monitor-kubernetes.md#configure-monitoring [azure-cli-install]: /cli/azure/install-azure-cli
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Nodes of the same configuration are grouped together into *node pools*. A Kubern
You scale or upgrade an AKS cluster against the default node pool. You can choose to scale or upgrade a specific node pool. For upgrade operations, running containers are scheduled on other nodes in the node pool until all the nodes are successfully upgraded.
-For more information about how to use multiple node pools in AKS, see [Create and manage multiple node pools for a cluster in AKS][use-multiple-node-pools].
+For more information about how to use multiple node pools in AKS, see [Create multiple node pools for a cluster in AKS][use-multiple-node-pools].
### Node selectors
This article covers some of the core Kubernetes components and how they apply to
[aks-helm]: kubernetes-helm.md [operator-best-practices-cluster-security]: operator-best-practices-cluster-security.md [operator-best-practices-scheduler]: operator-best-practices-scheduler.md
-[use-multiple-node-pools]: use-multiple-node-pools.md
+[use-multiple-node-pools]: create-node-pools.md
[operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md [reservation-discounts]:../cost-management-billing/reservations/save-compute-costs-reservations.md [configure-nrg]: ./cluster-configuration.md#fully-managed-resource-group-preview
aks Concepts Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-vulnerability-management.md
Microsoft identifies and patches vulnerabilities and missing security updates fo
## AKS Container Images
-While the [Cloud Native Computing Foundation][cloud-native-computing-foundation] (CNCF) owns and maintains most of the code running in AKS, Microsoft takes responsibility for building the open-source packages that we deploy on AKS. With that responsibility, it includes having complete ownership of the build, scan, sign, validate, and hotfix process and control over the binaries in container images. By us having responsibility for building the open-source packages deployed on AKS, it enables us to both establish a software supply chain over the binary, and patch the software as needed.  
+While the [Cloud Native Computing Foundation][cloud-native-computing-foundation] (CNCF) owns and maintains most of the code AKS runs, Microsoft takes responsibility for building the open-source packages we deploy on AKS. That responsibility includes complete ownership of the build, scan, sign, validate, and hotfix process, and control over the binaries in container images. Having responsibility for building the open-source packages deployed on AKS enables us both to establish a software supply chain over the binary and to patch the software as needed.
Microsoft is active in the broader Kubernetes ecosystem to help build the future of cloud-native compute in the wider CNCF community. This work not only ensures the quality of every Kubernetes release for the world, but also enables AKS to quickly get new Kubernetes releases out into production, in some cases months ahead of other cloud providers. Microsoft collaborates with other industry partners in the Kubernetes security organization. For example, the Security Response Committee (SRC) receives, prioritizes, and patches embargoed security vulnerabilities before they're announced to the public. This commitment ensures Kubernetes is secure for everyone, and enables AKS to patch and respond to vulnerabilities faster to keep our customers safe. In addition to Kubernetes, Microsoft has signed up to receive pre-release notifications for software vulnerabilities for products such as Envoy, container runtimes, and many other open-source projects.
Each evening, Linux nodes in AKS receive security patches through their distribu
Nightly, we apply security updates to the OS on the node, but the node image used to create nodes for your cluster remains unchanged. If a new Linux node is added to your cluster, the original image is used to create the node. This new node receives all the security and kernel updates available during the automatic assessment performed every night, but remains unpatched until all checks and restarts are complete. You can use node image upgrade to check for and update node images used by your cluster. For more information on node image upgrade, see [Azure Kubernetes Service (AKS) node image upgrade][aks-node-image-upgrade].
-For AKS clusters on the [OS auto upgrade][aks-node-image-upgrade] channel, the unattended upgrade process is disabled, and the OS nodes will receive security updates through the weekly node image upgrade.
+For AKS clusters on the [OS auto upgrade][aks-node-image-upgrade] channel, the unattended upgrade process is disabled, and the OS nodes receive security updates through the weekly node image upgrade.
### Windows Server nodes
The following table describes vulnerability severity categories:
## How vulnerabilities are updated
-AKS patches CVE's that has a *vendor fix* every week. CVE's without a fix are waiting on a *vendor fix* before it can be remediated. The fixed container images are cached in the next corresponding Virtual Hard Disk (VHD) build, which also contains the updated Ubuntu/Azure Linux/Windows patched CVE's. As long as you're running the updated VHD, you shouldn't be running any container image CVE's with a vendor fix that is over 30 days old.
+AKS patches CVEs that have a *vendor fix* every week. CVEs without a fix are waiting on a *vendor fix* before they can be remediated. The fixed container images are cached in the next corresponding Virtual Hard Disk (VHD) build, which also contains the updated Ubuntu/Azure Linux/Windows patched CVEs. As long as you're running the updated VHD, you shouldn't be running any container image CVEs with a vendor fix that is over 30 days old.
-For the OS-based vulnerabilities in the VHD, AKS uses **Unattended Update** by default, so any security updates should be applied to the existing VHD's daily. If **Unattended Update** is disabled, then it's a recommended best practice that you apply a Node Image update on a regular cadence to ensure the latest OS and Image security updates are applied.
+For the OS-based vulnerabilities in the VHD, AKS uses **Unattended Update** by default, so any security updates should be applied to the existing VHDs daily. If **Unattended Update** is disabled, then it's a recommended best practice that you apply a Node Image update on a regular cadence to ensure the latest OS and Image security updates are applied.
## Update release timelines
See the overview about [Upgrading Azure Kubernetes Service clusters and node poo
[microsoft-azure-fedramp-high]: ../azure-government/compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope [apply-security-kernel-updates-to-aks-nodes]: node-updates-kured.md [aks-node-image-upgrade]: node-image-upgrade.md
-[upgrade-node-pool-in-aks]: use-multiple-node-pools.md#upgrade-a-node-pool
+[upgrade-node-pool-in-aks]: manage-node-pools.md#upgrade-a-single-node-pool
[aks-node-image-upgrade]: auto-upgrade-node-image.md <!-- LINKS - external -->
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
Learn more about networking in AKS in the following articles:
[az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register [network-policy]: use-network-policies.md
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool
[network-comparisons]: concepts-network.md#compare-network-models [system-node-pools]: use-system-pools.md [prerequisites]: configure-azure-cni.md#prerequisites
aks Create Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md
+
+ Title: Create node pools in Azure Kubernetes Service (AKS)
+description: Learn how to create multiple node pools for a cluster in Azure Kubernetes Service (AKS).
++ Last updated : 07/18/2023++
+# Create node pools for a cluster in Azure Kubernetes Service (AKS)
+
+In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. When you create an AKS cluster, you define the initial number of nodes and their size (SKU), which creates a [*system node pool*][use-system-pool].
+
+To support applications that have different compute or storage demands, you can create *user node pools*. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and `konnectivity`. User node pools serve the primary purpose of hosting your application pods. For example, use more user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage. However, if you wish to have only one pool in your AKS cluster, you can schedule application pods on system node pools.
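As an illustrative sketch (not part of this article's steps), application pods can be steered to a specific user node pool by selecting on the `agentpool` label that AKS applies to each node. The pool name *gpunodepool* below is a hypothetical example:

```yaml
# Hypothetical example: pin a workload to a user node pool named "gpunodepool"
# by selecting on the agentpool label AKS applies to each node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-app
  template:
    metadata:
      labels:
        app: gpu-app
    spec:
      nodeSelector:
        agentpool: gpunodepool
      containers:
      - name: gpu-app
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
```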
+
+> [!NOTE]
+> This feature enables more control over creating and managing multiple node pools and requires separate commands for create/update/delete operations. Previously, cluster operations through `az aks create` or `az aks update` used the managedCluster API and were the only options to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and requires use of the `az aks nodepool` command set to execute operations on an individual node pool.
+
+This article shows you how to create one or more node pools in an AKS cluster.
+
+## Before you begin
+
+* You need the Azure CLI version 2.2.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* Review [Storage options for applications in Azure Kubernetes Service][aks-storage-concepts] to plan your storage configuration.
+
+## Limitations
+
+The following limitations apply when you create AKS clusters that support multiple node pools:
+
+* See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)](quotas-skus-regions.md).
+* You can delete system node pools if you have another system node pool to take its place in the AKS cluster.
+* System pools must contain at least one node, and user node pools may contain zero or more nodes.
+* The AKS cluster must use the Standard SKU load balancer to use multiple node pools. The feature isn't supported with Basic SKU load balancers.
+* The AKS cluster must use Virtual Machine Scale Sets for the nodes.
+* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter.
+ * For Linux node pools, the length must be between 1-12 characters.
+ * For Windows node pools, the length must be between 1-6 characters.
+* All node pools must reside in the same virtual network.
+* When you create multiple node pools at cluster creation time, the Kubernetes versions for the node pools must match the version set for the control plane.
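As a quick illustrative check (not an official tool), the Linux node pool naming rules above can be expressed as a regular expression. The helper `validate_linux_pool_name` is a hypothetical sketch:

```bash
# Hypothetical helper: check a proposed Linux node pool name against the rules
# above (lowercase alphanumeric, starts with a lowercase letter, 1-12 characters).
validate_linux_pool_name() {
  echo "$1" | grep -Eq '^[a-z][a-z0-9]{0,11}$' && echo "valid" || echo "invalid"
}

validate_linux_pool_name mynodepool       # valid
validate_linux_pool_name 1badpool         # invalid (starts with a digit)
validate_linux_pool_name toolongpoolname  # invalid (over 12 characters)
```

For Windows node pools, the same pattern applies with a maximum length of 6 characters.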
+
+## Create an AKS cluster
+
+> [!IMPORTANT]
+> If you run a single system node pool for your AKS cluster in a production environment, we recommend you use at least three nodes for the node pool. If one node goes down, you lose control plane resources and redundancy is compromised. You can mitigate this risk by having more control plane nodes.
+
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location eastus
+ ```
+
+2. Create an AKS cluster with a single node pool using the [`az aks create`][az-aks-create] command.
+
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --vm-set-type VirtualMachineScaleSets \
+ --node-count 2 \
+ --generate-ssh-keys \
+ --load-balancer-sku standard
+ ```
+
+ It takes a few minutes to create the cluster.
+
+3. When the cluster is ready, get the cluster credentials using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+## Add a node pool
+
+The cluster created in the previous step has a single node pool. In this section, we add a second node pool to the cluster.
+
+1. Create a new node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command. The following example creates a node pool named *mynodepool* that runs *three* nodes:
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool \
+ --node-count 3
+ ```
+
+2. Check the status of your node pools using the [`az aks nodepool list`][az-aks-nodepool-list] command and specify your resource group and cluster name.
+
+ ```azurecli-interactive
+ az aks nodepool list --resource-group myResourceGroup --cluster-name myAKSCluster
+ ```
+
+ The following example output shows *mynodepool* has been successfully created with three nodes. When the AKS cluster was created in the previous step, a default *nodepool1* was created with a node count of *2*.
+
+ ```output
+ [
+ {
+ ...
+ "count": 3,
+ ...
+ "name": "mynodepool",
+ "orchestratorVersion": "1.15.7",
+ ...
+ "vmSize": "Standard_DS2_v2",
+ ...
+ },
+ {
+ ...
+ "count": 2,
+ ...
+ "name": "nodepool1",
+ "orchestratorVersion": "1.15.7",
+ ...
+ "vmSize": "Standard_DS2_v2",
+ ...
+ }
+ ]
+ ```
+
+## ARM64 node pools
+
+The ARM64 processor provides low power compute for your Kubernetes workloads. To create an ARM64 node pool, you need to choose a [Dpsv5][arm-sku-vm1], [Dplsv5][arm-sku-vm2] or [Epsv5][arm-sku-vm3] series Virtual Machine.
+
+### Limitations
+
+* ARM64 node pools aren't supported on Defender-enabled clusters.
+* FIPS-enabled node pools aren't supported with ARM64 SKUs.
+
+### Add an ARM64 node pool
+
+* Add an ARM64 node pool into your existing cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name armpool \
+ --node-count 3 \
+ --node-vm-size Standard_D2pds_v5
+ ```
+
+## Azure Linux node pools
+
+The Azure Linux container host for AKS is an open-source Linux distribution available as an AKS container host. It provides high reliability, security, and consistency. It only includes the minimal set of packages needed for running container workloads, which improves boot times and overall performance.
+
+### Add an Azure Linux node pool
+
+* Add an Azure Linux node pool into your existing cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command and specify `--os-sku AzureLinux`.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name azurelinuxpool \
+ --os-sku AzureLinux
+ ```
+
+### Migrate Ubuntu nodes to Azure Linux nodes
+
+1. [Add an Azure Linux node pool into your existing cluster](#add-an-azure-linux-node-pool).
+
+ > [!NOTE]
+ > When adding a new Azure Linux node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing Ubuntu node pool.
+
+2. [Cordon the existing Ubuntu nodes](resize-node-pool.md#cordon-the-existing-nodes).
+3. [Drain the existing Ubuntu nodes](resize-node-pool.md#drain-the-existing-nodes).
+4. Remove the existing Ubuntu nodes using the [`az aks nodepool delete`][az-aks-nodepool-delete] command.
+
+ ```azurecli-interactive
+ az aks nodepool delete \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool
+ ```
+
+## Node pools with unique subnets
+
+A workload may require splitting cluster nodes into separate pools for logical isolation. Separate subnets dedicated to each node pool in the cluster can help support this isolation, which can address requirements such as having noncontiguous virtual network address space to split across node pools.
+
+> [!NOTE]
+> Make sure to use Azure CLI version `2.35.0` or later.
+
+### Limitations
+
+* All subnets assigned to node pools must belong to the same virtual network.
+* System pods must have access to all nodes and pods in the cluster to provide critical functionality, such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
+* If you expand your VNET after creating the cluster, you must update your cluster before adding a subnet outside the original CIDR block. While AKS errors out on the agent pool add, the `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state.
+* In clusters with Kubernetes version < 1.23.3, kube-proxy SNATs traffic from new subnets, which can cause Azure Network Policy to drop the packets.
+* Windows nodes SNAT traffic to the new subnets until the node pool is reimaged.
+* Internal load balancers default to one of the node pool subnets.
+
+### Add a node pool with a unique subnet
+
+* Add a node pool with a unique subnet into your existing cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command and specify the `--vnet-subnet-id`.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool \
+ --node-count 3 \
+ --vnet-subnet-id <YOUR_SUBNET_RESOURCE_ID>
+ ```
+
+## FIPS-enabled node pools
+
+For more information on enabling Federal Information Process Standard (FIPS) for your AKS cluster, see [Enable Federal Information Process Standard (FIPS) for Azure Kubernetes Service (AKS) node pools][enable-fips-nodes].
+
+## Windows Server node pools with `containerd`
+
+Beginning with Kubernetes version 1.20, you can specify `containerd` as the container runtime for Windows Server 2019 node pools. Starting with Kubernetes 1.23, `containerd` is the default and only container runtime for Windows.
+
+> [!IMPORTANT]
+> When using `containerd` with Windows Server 2019 node pools:
+>
+> * Both the control plane and Windows Server 2019 node pools must use Kubernetes version 1.20 or greater.
+> * When you create or update a node pool to run Windows Server containers, the default value for `--node-vm-size` is *Standard_D2s_v3*, which was the minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the `--node-vm-size` parameter, check the list of [restricted VM sizes][restricted-vm-sizes].
+> * We highly recommend using [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd`, and tolerations or node selectors with your deployments, to guarantee your workloads are scheduled correctly.
+
+### Add a Windows Server node pool with `containerd`
+
+* Add a Windows Server node pool with `containerd` into your existing cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+
+ > [!NOTE]
+ > If you don't specify the `WindowsContainerRuntime=containerd` custom header, the node pool still uses `containerd` as the container runtime by default.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --os-type Windows \
+ --name npwcd \
+ --node-vm-size Standard_D4s_v3 \
+ --kubernetes-version 1.20.5 \
+ --aks-custom-headers WindowsContainerRuntime=containerd \
+ --node-count 1
+ ```
+
+### Upgrade a specific existing Windows Server node pool to `containerd`
+
+* Upgrade a specific node pool from Docker to `containerd` using the [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] command.
+
+ ```azurecli-interactive
+ az aks nodepool upgrade \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name npwd \
+ --kubernetes-version 1.20.7 \
+ --aks-custom-headers WindowsContainerRuntime=containerd
+ ```
+
+### Upgrade all existing Windows Server node pools to `containerd`
+
+* Upgrade all node pools from Docker to `containerd` using the [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] command.
+
+ ```azurecli-interactive
+ az aks nodepool upgrade \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --kubernetes-version 1.20.7 \
+ --aks-custom-headers WindowsContainerRuntime=containerd
+ ```
+
+## Delete a node pool
+
+If you no longer need a node pool, you can delete it and remove the underlying VM nodes.
+
+> [!CAUTION]
+> When you delete a node pool, AKS doesn't perform cordon and drain, and there are no recovery options for data loss that may occur when you delete a node pool. If pods can't be scheduled on other node pools, those applications become unavailable. Make sure you don't delete a node pool when in-use applications don't have data backups or the ability to run on other node pools in your cluster. To minimize the disruption of rescheduling pods currently running on the node pool you want to delete, perform a cordon and drain on all nodes in the node pool before deleting.
+
+* Delete a node pool using the [`az aks nodepool delete`][az-aks-nodepool-delete] command and specify the node pool name.
+
+ ```azurecli-interactive
+ az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name mynodepool --no-wait
+ ```
+
+ It takes a few minutes to delete the nodes and the node pool.
+
+## Next steps
+
+In this article, you learned how to create multiple node pools in an AKS cluster. To learn about how to manage multiple node pools, see [Manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)](manage-node-pools.md).
+
+<!-- LINKS -->
+[aks-storage-concepts]: concepts-storage.md
+[arm-sku-vm1]: ../virtual-machines/dpsv5-dpdsv5-series.md
+[arm-sku-vm2]: ../virtual-machines/dplsv5-dpldsv5-series.md
+[arm-sku-vm3]: ../virtual-machines/epsv5-epdsv5-series.md
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-delete]: /cli/azure/aks#az_aks_delete
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-aks-nodepool-list]: /cli/azure/aks/nodepool#az_aks_nodepool_list
+[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade
+[az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az_aks_nodepool_delete
+[az-group-create]: /cli/azure/group#az_group_create
+[enable-fips-nodes]: enable-fips-nodes.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[use-system-pool]: use-system-pools.md
+[restricted-vm-sizes]: ../virtual-machines/sizes.md
+[aks-taints]: manage-node-pools.md#set-node-pool-taints
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
As you work with the node resource group, keep in mind that you can't:
## Can I modify tags and other properties of the AKS resources in the node resource group?
-You might get unexpected scaling and upgrading errors if you modify or delete Azure-created tags and other resource properties in the node resource group. AKS allows you to create and modify custom tags created by end users, and you can add those tags when [creating a node pool](use-multiple-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool). You might want to create or modify custom tags, for example, to assign a business unit or cost center. Another option is to create Azure Policies with a scope on the managed resource group.
+You might get unexpected scaling and upgrading errors if you modify or delete Azure-created tags and other resource properties in the node resource group. AKS allows you to create and modify custom tags created by end users, and you can add those tags when [creating a node pool](manage-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool). You might want to create or modify custom tags, for example, to assign a business unit or cost center. Another option is to create Azure Policies with a scope on the managed resource group.
However, modifying any **Azure-created tags** on resources under the node resource group in the AKS cluster is an unsupported action, which breaks the service-level objective (SLO). For more information, see [Does AKS offer a service-level agreement?](#does-aks-offer-a-service-level-agreement)
The issue has been resolved with Kubernetes version 1.20. For more information,
## Can I use FIPS cryptographic libraries with deployments on AKS?
-FIPS-enabled nodes are now supported on Linux-based node pools. For more information, see [Add a FIPS-enabled node pool](use-multiple-node-pools.md#add-a-fips-enabled-node-pool).
+FIPS-enabled nodes are now supported on Linux-based node pools. For more information, see [Add a FIPS-enabled node pool](create-node-pools.md#fips-enabled-node-pools).
## Can I configure NSGs with AKS?
The extension **doesn't require additional outbound access** to any URLs, IP add
[aks-preview-cli]: /cli/azure/aks [az-aks-create]: /cli/azure/aks#az-aks-create [aks-rm-template]: /azure/templates/microsoft.containerservice/2022-09-01/managedclusters
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool
[aks-windows-limitations]: ./windows-faq.md [reservation-discounts]:../cost-management-billing/reservations/save-compute-costs-reservations.md [api-server-authorized-ip-ranges]: ./api-server-authorized-ip-ranges.md
-[multi-node-pools]: ./use-multiple-node-pools.md
+[multi-node-pools]: ./create-node-pools.md
[availability-zones]: ./availability-zones.md [private-clusters]: ./private-clusters.md [supported-kubernetes-versions]: ./supported-kubernetes-versions.md
aks Howto Deploy Java Quarkus App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-quarkus-app.md
Last updated 07/26/2023-
-#CustomerIntent: As a developer, I want deploy a simple CRUD Quarkus app on AKS so that can start iterating it into a proper LOB app.
+ # external contributor: danieloh30
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
To learn more about Kubernetes services, see the [Kubernetes services documentat
[aks-sp]: kubernetes-service-principal.md#delegate-access-to-other-azure-resources [different-subnet]: #specify-a-different-subnet [aks-vnet-subnet]: configure-kubenet.md#create-a-virtual-network-and-subnet
-[unique-subnet]: use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet
+[unique-subnet]: create-node-pools.md#add-a-node-pool-with-a-unique-subnet
[az-network-vnet-subnet-list]: /cli/azure/network/vnet/subnet#az-network-vnet-subnet-list [get-azvirtualnetworksubnetconfig]: /powershell/module/az.network/get-azvirtualnetworksubnetconfig
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
This article covers integration with a public load balancer on AKS. For internal
## Before you begin
-* Azure Load Balancer is available in two SKUs: *Basic* and *Standard*. The *Standard* SKU is used by default when you create an AKS cluster. The *Standard* SKU gives you access to added functionality, such as a larger backend pool, [multiple node pools](use-multiple-node-pools.md), [Availability Zones](availability-zones.md), and is [secure by default][azure-lb]. It's the recommended load balancer SKU for AKS. For more information on the *Basic* and *Standard* SKUs, see [Azure Load Balancer SKU comparison][azure-lb-comparison].
+* Azure Load Balancer is available in two SKUs: *Basic* and *Standard*. The *Standard* SKU is used by default when you create an AKS cluster. The *Standard* SKU gives you access to added functionality, such as a larger backend pool, [multiple node pools](create-node-pools.md), [Availability Zones](availability-zones.md), and is [secure by default][azure-lb]. It's the recommended load balancer SKU for AKS. For more information on the *Basic* and *Standard* SKUs, see [Azure Load Balancer SKU comparison][azure-lb-comparison].
* For a full list of the supported annotations for Kubernetes services with type `LoadBalancer`, see [LoadBalancer annotations][lb-annotations]. * This article assumes you have an AKS cluster with the *Standard* SKU Azure Load Balancer. If you need an AKS cluster, you can create one [using Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [the Azure portal][aks-quickstart-portal]. * AKS manages the lifecycle and operations of agent nodes. Modifying the IaaS resources associated with the agent nodes isn't supported. An example of an unsupported operation is making manual changes to the load balancer resource group.
Deploy the public service manifest using [`kubectl apply`][kubectl-apply] and sp
kubectl apply -f public-svc.yaml ```
-The Azure Load Balancer will be configured with a new public IP that will front the new service. Since the Azure Load Balancer can have multiple frontend IPs, each new service that you deploy will get a new dedicated frontend IP to be uniquely accessed.
+The Azure Load Balancer is configured with a new public IP that fronts the new service. Since the Azure Load Balancer can have multiple frontend IPs, each new service that you deploy gets a new dedicated frontend IP to be uniquely accessed.
Confirm your service is created and the load balancer is configured using the following command.
default public-svc LoadBalancer 10.0.39.110 52.156.88.187 80:320
When you view the service details, the public IP address created for this service on the load balancer is shown in the *EXTERNAL-IP* column. It may take a few minutes for the IP address to change from *\<pending\>* to an actual public IP address.
-For more detailed information about your service using the following command.
+For more detailed information about your service, use the following command.
```azurecli-interactive kubectl describe service public-svc
You can customize different settings for your standard public load balancer at c
### Change the inbound pool type (PREVIEW)
-AKS nodes can be referenced in the load balancer backend pools by either their IP configuration (VMSS based membership) or by their IP address only. Utilizing the IP address based backend pool membership provides higher efficiencies when updating services and provisioning load balancers, especially at high node counts. Provisioning new clusters with IP based backend pools and converting existing clusters is now supported. When combined with NAT Gateway or user-defined routing egress types, provisioning of new nodes and services will be more performant.
+AKS nodes can be referenced in the load balancer backend pools by either their IP configuration (Azure Virtual Machine Scale Sets based membership) or by their IP address only. Utilizing the IP address based backend pool membership provides higher efficiencies when updating services and provisioning load balancers, especially at high node counts. Provisioning new clusters with IP based backend pools and converting existing clusters is now supported. When combined with NAT Gateway or user-defined routing egress types, provisioning of new nodes and services is more performant.
Two different pool membership types are available: -- `nodeIPConfiguration` - legacy VMSS IP configuration based pool membership type
+- `nodeIPConfiguration` - legacy Virtual Machine Scale Sets IP configuration based pool membership type
- `nodeIP` - IP-based membership type #### Requirements
az aks create \
#### Update an existing AKS cluster to use IP-based inbound pool membership > [!WARNING]
-> This operation will cause a temporary disruption to incoming service traffic in the cluster. The impact time will increase with larger clusters that have many nodes.
+> This operation causes a temporary disruption to incoming service traffic in the cluster. The impact time increases with larger clusters that have many nodes.
```azurecli-interactive az aks update \
Use the following command to update an existing cluster. You can also set this p
> [!IMPORTANT] > We don't recommend using the Azure portal to make any outbound rule changes. When making these changes, you should go through the AKS cluster and not directly on the Load Balancer resource. >
-> Outbound rule changes made directly on the Load Balancer resource will be removed whenever the cluster is reconciled, such as when it's stopped, started, upgraded, or scaled.
+> Outbound rule changes made directly on the Load Balancer resource are removed whenever the cluster is reconciled, such as when it's stopped, started, upgraded, or scaled.
> > Use the Azure CLI, as shown in the examples. Outbound rule changes made using `az aks` CLI commands are permanent across cluster downtime. >
At cluster creation time, you can also set the initial number of managed outboun
When you use a *Standard* SKU load balancer, the AKS cluster automatically creates a public IP in the AKS-managed infrastructure resource group and assigns it to the load balancer outbound pool by default.
-A public IP created by AKS is an AKS-managed resource, meaning the lifecycle of that public IP is intended to be managed by AKS and requires no user action directly on the public IP resource. Alternatively, you can assign your own custom public IP or public IP prefix at cluster creation time. Your custom IPs can also be updated on an existing cluster's load balancer properties.
+A public IP created by AKS is an AKS-managed resource, meaning AKS manages the lifecycle of that public IP and doesn't require user action directly on the public IP resource. Alternatively, you can assign your own custom public IP or public IP prefix at cluster creation time. Your custom IPs can also be updated on an existing cluster's load balancer properties.
Requirements for using your own public IP or prefix include:
-* Custom public IP addresses must be created and owned by the user. Managed public IP addresses created by AKS can't be reused as a "bring your own custom IP" as it can cause management conflicts.
+* Users must create and own custom public IP addresses. Managed public IP addresses created by AKS can't be reused as a "bring your own custom IP" as it can cause management conflicts.
* You must ensure the AKS cluster identity (Service Principal or Managed Identity) has permissions to access the outbound IP, as per the [required public IP permissions list](kubernetes-service-principal.md#networking).
-* Make sure you meet the [pre-requisites and constraints](../virtual-network/ip-services/public-ip-address-prefix.md#limitations) necessary to configure outbound IPs or outbound IP prefixes.
+* Make sure you meet the [prerequisites and constraints](../virtual-network/ip-services/public-ip-address-prefix.md#limitations) necessary to configure outbound IPs or outbound IP prefixes.
#### Update the cluster with your own outbound public IP
When calculating the number of outbound ports and IPs and setting the values, keep the following information in mind:
* Adding more IPs doesn't add more ports to any node, but it provides capacity for more nodes in the cluster.
* You must account for nodes that may be added as part of upgrades, including the count of nodes specified via [maxSurge values][maxsurge].
+The following examples show how the values you set affect the number of outbound ports and IP addresses:
+* If the default values are used and the cluster has 48 nodes, each node has 1024 ports available.
+* If the default values are used and the cluster scales from 48 to 52 nodes, each node is updated from 1024 ports available to 512 ports available.
* If the number of outbound ports is set to 1,000 and the outbound IP count is set to 2, then the cluster can support a maximum of 128 nodes: `64,000 ports per IP / 1,000 ports per node * 2 IPs = 128 nodes`.
* If the number of outbound ports is set to 1,000 and the outbound IP count is set to 7, then the cluster can support a maximum of 448 nodes: `64,000 ports per IP / 1,000 ports per node * 7 IPs = 448 nodes`.
* If the number of outbound ports is set to 4,000 and the outbound IP count is set to 2, then the cluster can support a maximum of 32 nodes: `64,000 ports per IP / 4,000 ports per node * 2 IPs = 32 nodes`.
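The arithmetic in these examples can be checked with a few lines of code. This is an illustrative sketch only — the helper name `max_nodes` is made up for this example, and the 64,000 ports-per-IP figure comes from the examples above:

```python
# Each outbound public IP provides 64,000 SNAT ports to share across nodes.
PORTS_PER_IP = 64_000

def max_nodes(ports_per_node: int, ip_count: int) -> int:
    """Maximum node count a given AllocatedOutboundPorts setting supports."""
    return PORTS_PER_IP * ip_count // ports_per_node

print(max_nodes(1000, 2))  # 128
print(max_nodes(1000, 7))  # 448
print(max_nodes(4000, 2))  # 32
```

When you size these values, remember to include surge nodes created during upgrades (the maxSurge value) in the node count you plan for.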
If you expect to have numerous short-lived connections and no long-lived connect
> TCP RST is only sent during a TCP connection in the ESTABLISHED state. Read more about it [here](../load-balancer/load-balancer-tcp-reset.md).
+When setting *IdleTimeoutInMinutes* to a different value than the default of 30 minutes, consider how long your workloads need an outbound connection. Also consider that the default timeout value for a *Standard* SKU load balancer used outside of AKS is 4 minutes. An *IdleTimeoutInMinutes* value that more accurately reflects your specific AKS workload can help decrease SNAT exhaustion caused by tying up connections no longer being used.
> [!WARNING]
> Altering the values for *AllocatedOutboundPorts* and *IdleTimeoutInMinutes* may significantly change the behavior of the outbound rule for your load balancer and shouldn't be done lightly. Check the [SNAT Troubleshooting section][troubleshoot-snat] and review the [Load Balancer outbound rules][azure-lb-outbound-rules-overview] and [outbound connections in Azure][azure-lb-outbound-connections] before updating these values to fully understand the impact of your changes.
spec:
  - MY_EXTERNAL_IP_RANGE
```
+This example updates the rule to allow inbound external traffic only from the `MY_EXTERNAL_IP_RANGE` range. If you replace `MY_EXTERNAL_IP_RANGE` with the internal subnet IP address, traffic is restricted to only cluster internal IPs. If traffic is restricted to cluster internal IPs, clients outside your Kubernetes cluster are unable to access the load balancer.
> [!NOTE]
> Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer.

## Maintain the client's IP on inbound connections
+By default, a service of type `LoadBalancer` [in Kubernetes](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer) and in AKS doesn't persist the client's IP address on the connection to the pod. The source IP on the packet that's delivered to the pod becomes the private IP of the node. To maintain the client's IP address, you must set `service.spec.externalTrafficPolicy` to `local` in the service definition. The following manifest shows an example.
```yaml
apiVersion: v1
The following annotations are supported for Kubernetes services with type `LoadBalancer`:
| -- | -- | -- |
| `service.beta.kubernetes.io/azure-load-balancer-internal` | `true` or `false` | Specify whether the load balancer should be internal. If not set, it defaults to public. |
| `service.beta.kubernetes.io/azure-load-balancer-internal-subnet` | Name of the subnet | Specify which subnet the internal load balancer should be bound to. If not set, it defaults to the subnet configured in cloud config file. |
+| `service.beta.kubernetes.io/azure-dns-label-name` | Name of the DNS label on Public IPs | Specify the DNS label name for the **public** service. If it's set to an empty string, the DNS entry in the Public IP isn't used.
| `service.beta.kubernetes.io/azure-shared-securityrule` | `true` or `false` | Specify that the service should be exposed using an Azure security rule that may be shared with another service. Trade specificity of rules for an increase in the number of services that can be exposed. This annotation relies on the Azure [Augmented Security Rules](../virtual-network/network-security-groups-overview.md#augmented-security-rules) feature of Network Security groups. |
| `service.beta.kubernetes.io/azure-load-balancer-resource-group` | Name of the resource group | Specify the resource group of load balancer public IPs that aren't in the same resource group as the cluster infrastructure (node resource group). |
| `service.beta.kubernetes.io/azure-allowed-service-tags` | List of allowed service tags | Specify a list of allowed [service tags][service-tags] separated by commas. |
The root cause of SNAT exhaustion is frequently an anti-pattern for how outbound
Take advantage of connection reuse and connection pooling whenever possible. These patterns help you avoid resource exhaustion problems and result in predictable behavior. Primitives for these patterns can be found in many development libraries and frameworks.
+* Atomic requests (one request per connection) generally aren't a good design choice. Such anti-patterns limit scale, reduce performance, and decrease reliability. Instead, reuse HTTP/S connections to reduce the number of connections and associated SNAT ports. The application scale increases and performance improves because of reduced handshakes, overhead, and cryptographic operation cost when using TLS.
* If you're using out of cluster/custom DNS, or custom upstream servers on coreDNS, keep in mind that DNS can introduce many individual flows at volume when the client isn't caching the DNS resolver's result. Make sure to customize coreDNS first instead of using custom DNS servers and to define a good caching value.
* UDP flows (for example, DNS lookups) allocate SNAT ports during the idle timeout. The longer the idle timeout, the higher the pressure on SNAT ports. Use a short idle timeout (for example, 4 minutes).
* Use connection pools to shape your connection volume.
+* Never silently abandon a TCP flow and rely on TCP timers to clean up flow. If you don't let TCP explicitly close the connection, state remains allocated at intermediate systems and endpoints, and it makes SNAT ports unavailable for other connections. This pattern can trigger application failures and SNAT exhaustion.
+* Don't change OS-level TCP close-related timer values without expert knowledge of the impact. While the TCP stack recovers, your application performance can be negatively affected when the endpoints of a connection have mismatched expectations. Wishing to change timers is usually a sign of an underlying design problem. Review the following recommendations.
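To make the connection-reuse recommendation concrete, here's a minimal, self-contained sketch using only the Python standard library and a throwaway local server (the server and handler here are stand-ins for any HTTP backend, not part of AKS):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class OkHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, so one connection serves many requests

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Throwaway local server standing in for any HTTP backend.
server = HTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One persistent connection handles all three requests, so a single local
# port (and a single SNAT port) is consumed instead of one per request.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
responses = []
for _ in range(3):
    conn.request("GET", "/")
    responses.append(conn.getresponse().read())
conn.close()
server.shutdown()
print(responses)  # [b'ok', b'ok', b'ok']
```

Higher-level HTTP clients expose the same idea as connection pools (for example, a shared session object reused across requests), which is the pattern to prefer over opening one connection per request.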
## Moving from a *Basic* SKU load balancer to *Standard* SKU
The following limitations apply when you create and manage AKS clusters that support a *Standard* SKU load balancer:
* At least one public IP or IP prefix is required for allowing egress traffic from the AKS cluster. The public IP or IP prefix is required to maintain connectivity between the control plane and agent nodes and to maintain compatibility with previous versions of AKS. You have the following options for specifying public IPs or IP prefixes with a *Standard* SKU load balancer:
  * Provide your own public IPs.
  * Provide your own public IP prefixes.
+ * Specify a number up to 100 to allow the AKS cluster to create that many *Standard* SKU public IPs in the same resource group as the AKS cluster. This resource group is usually named with *MC_* at the beginning. AKS assigns the public IP to the *Standard* SKU load balancer. By default, one public IP is automatically created in the same resource group as the AKS cluster if no public IP, public IP prefix, or number of IPs is specified. You also must allow public addresses and avoid creating any Azure policies that ban IP creation.
+* A public IP created by AKS can't be reused as a custom bring your own public IP address. Users must create and manage all custom IP addresses.
* Defining the load balancer SKU can only be done when you create an AKS cluster. You can't change the load balancer SKU after an AKS cluster has been created.
* You can only use one type of load balancer SKU (*Basic* or *Standard*) in a single cluster.
* *Standard* SKU load balancers only support *Standard* SKU IP addresses.
aks Manage Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-node-pools.md
+
+ Title: Manage node pools in Azure Kubernetes Service (AKS)
+description: Learn how to manage node pools for a cluster in Azure Kubernetes Service (AKS).
+ Last updated : 07/19/2023
+# Manage node pools for a cluster in Azure Kubernetes Service (AKS)
+
+In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. When you create an AKS cluster, you define the initial number of nodes and their size (SKU). As application demands change, you may need to change the settings on your node pools. For example, you may need to scale the number of nodes in a node pool or upgrade the Kubernetes version of a node pool.
+
+This article shows you how to manage one or more node pools in an AKS cluster.
+
+## Before you begin
+
+* Review [Create node pools for a cluster in Azure Kubernetes Service (AKS)][create-node-pools] to learn how to create node pools for your AKS clusters.
+* You need the Azure CLI version 2.2.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* Review [Storage options for applications in Azure Kubernetes Service][aks-storage-concepts] to plan your storage configuration.
+
+## Limitations
+
+The following limitations apply when you create and manage AKS clusters that support multiple node pools:
+
+* See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions].
+* [System pools][use-system-pool] must contain at least one node, and user node pools may contain zero or more nodes.
+* You can't change the VM size of a node pool after you create it.
+* When you create multiple node pools at cluster creation time, all Kubernetes versions used by node pools must match the version set for the control plane. You can make updates after provisioning the cluster using per node pool operations.
+* You can't simultaneously run upgrade and scale operations on a cluster or node pool. If you attempt to run them at the same time, you receive an error. Each operation type must complete on the target resource prior to the next request on that same resource. For more information, see the [troubleshooting guide](./troubleshooting.md#im-receiving-errors-when-trying-to-upgrade-or-scale-that-state-my-cluster-is-being-upgraded-or-has-failed-upgrade).
+
+## Upgrade a single node pool
+
+> [!NOTE]
+> The node pool OS image version is tied to the Kubernetes version of the cluster. You only get OS image upgrades following a cluster upgrade.
+
+In this example, we upgrade the *mynodepool* node pool. Since there are two node pools, we must use the [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] command to upgrade.
+
+1. Check for any available upgrades using the [`az aks get-upgrades`][az-aks-get-upgrades] command.
+
+ ```azurecli-interactive
+ az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+2. Upgrade the *mynodepool* node pool using the [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] command.
+
+ ```azurecli-interactive
+ az aks nodepool upgrade \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool \
+ --kubernetes-version KUBERNETES_VERSION \
+ --no-wait
+ ```
+
+3. List the status of your node pools using the [`az aks nodepool list`][az-aks-nodepool-list] command.
+
+ ```azurecli-interactive
+ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+ ```
+
+ The following example output shows *mynodepool* is in the *Upgrading* state:
+
+ ```output
+ [
+ {
+ ...
+ "count": 3,
+ ...
+ "name": "mynodepool",
+ "orchestratorVersion": "KUBERNETES_VERSION",
+ ...
+ "provisioningState": "Upgrading",
+ ...
+ "vmSize": "Standard_DS2_v2",
+ ...
+ },
+ {
+ ...
+ "count": 2,
+ ...
+ "name": "nodepool1",
+ "orchestratorVersion": "1.15.7",
+ ...
+ "provisioningState": "Succeeded",
+ ...
+ "vmSize": "Standard_DS2_v2",
+ ...
+ }
+ ]
+ ```
+
+ It takes a few minutes to upgrade the nodes to the specified version.
+
+As a best practice, you should upgrade all node pools in an AKS cluster to the same Kubernetes version. The default behavior of [`az aks upgrade`][az-aks-upgrade] is to upgrade all node pools together with the control plane to achieve this alignment. The ability to upgrade individual node pools lets you perform a rolling upgrade and schedule pods between node pools to maintain application uptime within the constraints mentioned above.
+
+## Upgrade a cluster control plane with multiple node pools
+
+> [!NOTE]
+> Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme. The version number is expressed as *x.y.z*, where *x* is the major version, *y* is the minor version, and *z* is the patch version. For example, in version *1.12.6*, 1 is the major version, 12 is the minor version, and 6 is the patch version. The Kubernetes version of the control plane and the initial node pool are set during cluster creation. Other node pools have their Kubernetes version set when they are added to the cluster. The Kubernetes versions may differ between node pools and between a node pool and the control plane.
+
+An AKS cluster has two cluster resource objects with Kubernetes versions associated to them:
+
+1. The cluster control plane Kubernetes version, and
+2. A node pool with a Kubernetes version.
+
+The control plane maps to one or many node pools. The behavior of an upgrade operation depends on which Azure CLI command you use.
+
+* [`az aks upgrade`][az-aks-upgrade] upgrades the control plane and all node pools in the cluster to the same Kubernetes version.
+* [`az aks upgrade`][az-aks-upgrade] with the `--control-plane-only` flag upgrades only the cluster control plane and leaves all node pools unchanged.
+* [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] upgrades only the target node pool with the specified Kubernetes version.
+
+### Validation rules for upgrades
+
+Kubernetes upgrades for a cluster control plane and node pools are validated using the following sets of rules:
+
+* **Rules for valid versions to upgrade node pools**:
+ * The node pool version must have the same *major* version as the control plane.
+ * The node pool *minor* version must be within two *minor* versions of the control plane version.
+ * The node pool version can't be greater than the control plane `major.minor.patch` version.
+
+* **Rules for submitting an upgrade operation**:
+ * You can't downgrade the control plane or a node pool Kubernetes version.
+ * If a node pool Kubernetes version isn't specified, the behavior depends on the client. In Resource Manager templates, declaration falls back to the existing version defined for the node pool. If nothing is set, it falls back to the control plane version.
+ * You can't simultaneously submit multiple operations on a single control plane or node pool resource. You can either upgrade or scale a control plane or a node pool at a given time.
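The version rules above can be sketched as a small validation function. This is an illustration of the stated rules, not the actual AKS validation logic; the function names are made up for this example:

```python
def parse(version: str) -> tuple[int, int, int]:
    """Split a Kubernetes version such as '1.15.7' into (major, minor, patch)."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def can_upgrade_node_pool(control_plane: str, current: str, target: str) -> bool:
    cp, cur, tgt = parse(control_plane), parse(current), parse(target)
    if tgt < cur:
        return False  # downgrades aren't allowed
    if tgt[0] != cp[0]:
        return False  # node pool major version must match the control plane
    if cp[1] - tgt[1] > 2:
        return False  # minor version must be within two of the control plane
    if tgt > cp:
        return False  # can't be greater than the control plane version
    return True

print(can_upgrade_node_pool("1.17.3", "1.15.7", "1.16.0"))  # True
print(can_upgrade_node_pool("1.17.3", "1.15.7", "1.18.0"))  # False: ahead of control plane
print(can_upgrade_node_pool("1.17.3", "1.16.0", "1.15.7"))  # False: downgrade
```

In practice, the API server performs these checks for you and rejects invalid upgrade requests; the sketch only shows why a given version combination is accepted or rejected.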
+
+## Scale a node pool manually
+
+As your application workload demands change, you may need to scale the number of nodes in a node pool. The number of nodes can be scaled up or down.
+
+1. Scale the number of nodes in a node pool using the [`az aks nodepool scale`][az-aks-nodepool-scale] command.
+
+ ```azurecli-interactive
+ az aks nodepool scale \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool \
+ --node-count 5 \
+ --no-wait
+ ```
+
+2. List the status of your node pools using the [`az aks nodepool list`][az-aks-nodepool-list] command.
+
+ ```azurecli-interactive
+ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+ ```
+
+ The following example output shows *mynodepool* is in the *Scaling* state with a new count of five nodes:
+
+ ```output
+ [
+ {
+ ...
+ "count": 5,
+ ...
+ "name": "mynodepool",
+ "orchestratorVersion": "1.15.7",
+ ...
+ "provisioningState": "Scaling",
+ ...
+ "vmSize": "Standard_DS2_v2",
+ ...
+ },
+ {
+ ...
+ "count": 2,
+ ...
+ "name": "nodepool1",
+ "orchestratorVersion": "1.15.7",
+ ...
+ "provisioningState": "Succeeded",
+ ...
+ "vmSize": "Standard_DS2_v2",
+ ...
+ }
+ ]
+ ```
+
+ It takes a few minutes for the scale operation to complete.
+
+## Scale a specific node pool automatically using the cluster autoscaler
+
+AKS offers the [cluster autoscaler](cluster-autoscaler.md) to automatically scale node pools. You can enable this feature with unique minimum and maximum scale counts per node pool.
+
+For more information, see [use the cluster autoscaler](cluster-autoscaler.md#use-the-cluster-autoscaler-with-multiple-node-pools-enabled).
+
+## Associate capacity reservation groups to node pools (preview)
+
+As your workload demands change, you can associate existing capacity reservation groups to node pools to guarantee allocated capacity for your node pools.
+
+For more information, see [capacity reservation groups][capacity-reservation-groups].
+
+### Register preview feature
+
+1. Install the `aks-preview` extension using the [`az extension add`][az-extension-add] command.
+
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
+
+2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
+
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
+
+3. Register the `CapacityReservationGroupPreview` feature flag using the [`az feature register`][az-feature-register] command.
+
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview"
+ ```
+
+ It takes a few minutes for the status to show *Registered*.
+
+4. Verify the registration status using the [`az feature show`][az-feature-show] command.
+
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview"
+ ```
+
+5. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
+
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
+
+### Manage capacity reservations
+
+> [!NOTE]
+> The capacity reservation group must already exist; otherwise, the node pool is added to the cluster with a warning and no capacity reservation group gets associated.
+
+#### Associate an existing capacity reservation group to a node pool
+
+* Associate an existing capacity reservation group to a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command and specify a capacity reservation group with the `--capacityReservationGroup` flag.
+
+ ```azurecli-interactive
+ az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG
+ ```
+
+#### Associate an existing capacity reservation group to a system node pool
+
+* Associate an existing capacity reservation group to a system node pool using the [`az aks create`][az-aks-create] command.
+
+ ```azurecli-interactive
+ az aks create -g MyRG --cluster-name MyMC --capacityReservationGroup myCRG
+ ```
+
+> [!NOTE]
+> Deleting a node pool implicitly dissociates that node pool from any associated capacity reservation group before the node pool is deleted. Deleting a cluster implicitly dissociates all node pools in that cluster from their associated capacity reservation groups.
+
+## Specify a VM size for a node pool
+
+You may need to create node pools with different VM sizes and capabilities. For example, you may create a node pool that contains nodes with large amounts of CPU or memory or a node pool that provides GPU support. In the next section, you [use taints and tolerations](#set-node-pool-taints) to tell the Kubernetes scheduler how to limit access to pods that can run on these nodes.
+
+In the following example, we create a GPU-based node pool that uses the *Standard_NC6* VM size. These VMs are powered by the NVIDIA Tesla K80 card. For more information, see [Available sizes for Linux virtual machines in Azure][vm-sizes].
+
+1. Create a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command. Specify the name *gpunodepool* and use the `--node-vm-size` parameter to specify the *Standard_NC6* size.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name gpunodepool \
+ --node-count 1 \
+ --node-vm-size Standard_NC6 \
+ --no-wait
+ ```
+
+2. Check the status of the node pool using the [`az aks nodepool list`][az-aks-nodepool-list] command.
+
+ ```azurecli-interactive
+ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+ ```
+
+ The following example output shows the *gpunodepool* node pool is *Creating* nodes with the specified *VmSize*:
+
+ ```output
+ [
+ {
+ ...
+ "count": 1,
+ ...
+ "name": "gpunodepool",
+ "orchestratorVersion": "1.15.7",
+ ...
+ "provisioningState": "Creating",
+ ...
+ "vmSize": "Standard_NC6",
+ ...
+ },
+ {
+ ...
+ "count": 2,
+ ...
+ "name": "nodepool1",
+ "orchestratorVersion": "1.15.7",
+ ...
+ "provisioningState": "Succeeded",
+ ...
+ "vmSize": "Standard_DS2_v2",
+ ...
+ }
+ ]
+ ```
+
+ It takes a few minutes for the *gpunodepool* to be successfully created.
+
+## Specify a taint, label, or tag for a node pool
+
+When creating a node pool, you can add taints, labels, or tags to it. When you add a taint, label, or tag, all nodes within that node pool also get that taint, label, or tag.
+
+> [!IMPORTANT]
+> Adding taints, labels, or tags to nodes should be done for the entire node pool using `az aks nodepool`. We don't recommend using `kubectl` to apply taints, labels, or tags to individual nodes in a node pool.
+
+### Set node pool taints
+
+1. Create a node pool with a taint using the [`az aks nodepool add`][az-aks-nodepool-add] command. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name taintnp \
+ --node-count 1 \
+ --node-taints sku=gpu:NoSchedule \
+ --no-wait
+ ```
+
+2. Check the status of the node pool using the [`az aks nodepool list`][az-aks-nodepool-list] command.
+
+ ```azurecli-interactive
+ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
+ ```
+
+ The following example output shows that the *taintnp* node pool is *Creating* nodes with the specified *nodeTaints*:
+
+ ```output
+ [
+ {
+ ...
+ "count": 1,
+ ...
+ "name": "taintnp",
+ "orchestratorVersion": "1.15.7",
+ ...
+ "provisioningState": "Creating",
+ ...
+ "nodeTaints": [
+ "sku=gpu:NoSchedule"
+ ],
+ ...
+ },
+ ...
+ ]
+ ```
+
+The taint information is visible in Kubernetes for handling scheduling rules for nodes. The Kubernetes scheduler can use taints and tolerations to restrict what workloads can run on nodes.
+
+* A **taint** is applied to a node to indicate that only specific pods can be scheduled on it.
+* A **toleration** is then applied to a pod, allowing it to *tolerate* a node's taint.
+
+For more information on how to use advanced Kubernetes scheduler features, see [Best practices for advanced scheduler features in AKS][taints-tolerations].
+
+### Set node pool tolerations
+
+In the previous step, you applied the *sku=gpu:NoSchedule* taint when creating your node pool. The following example YAML manifest uses a toleration to allow the Kubernetes scheduler to run an NGINX pod on a node in that node pool.
+
+1. Create a file named `nginx-toleration.yaml` and copy in the following example YAML.
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: mypod
+ spec:
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine
+ name: mypod
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 1
+ memory: 2G
+ tolerations:
+ - key: "sku"
+ operator: "Equal"
+ value: "gpu"
+ effect: "NoSchedule"
+ ```
+
+2. Schedule the pod using the `kubectl apply` command.
+
+ ```azurecli-interactive
+ kubectl apply -f nginx-toleration.yaml
+ ```
+
+ It takes a few seconds to schedule the pod and pull the NGINX image.
+
+3. Check the status using the [`kubectl describe pod`][kubectl-describe] command.
+
+ ```azurecli-interactive
+ kubectl describe pod mypod
+ ```
+
+ The following condensed example output shows the *sku=gpu:NoSchedule* toleration is applied. In the events section, the scheduler assigned the pod to the *aks-taintnp-28993262-vmss000000* node:
+
+ ```output
+ [...]
+ Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
+ node.kubernetes.io/unreachable:NoExecute for 300s
+ sku=gpu:NoSchedule
+ Events:
+ Type Reason Age From Message
+ - - - -
+ Normal Scheduled 4m48s default-scheduler Successfully assigned default/mypod to aks-taintnp-28993262-vmss000000
+ Normal Pulling 4m47s kubelet pulling image "mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine"
+ Normal Pulled 4m43s kubelet Successfully pulled image "mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine"
+ Normal Created 4m40s kubelet Created container
+ Normal Started 4m40s kubelet Started container
+ ```
+
+ Only pods that have this toleration applied can be scheduled on nodes in *taintnp*. Any other pods are scheduled in the *nodepool1* node pool. If you create more node pools, you can use taints and tolerations to limit what pods can be scheduled on those node resources.
+
+### Setting node pool labels
+
+For more information, see [Use labels in an Azure Kubernetes Service (AKS) cluster][use-labels].
+
+### Setting node pool Azure tags
+
+For more information, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
+
+## Manage node pools using a Resource Manager template
+
+When you use an Azure Resource Manager template to create and manage resources, you can change settings in your template and redeploy it to update resources. With AKS node pools, you can't update the initial node pool profile once the AKS cluster has been created. This behavior means you can't update an existing Resource Manager template, make a change to the node pools, and then redeploy the template. Instead, you must create a separate Resource Manager template that updates the node pools for the existing AKS cluster.
+
+1. Create a template, such as `aks-agentpools.json`, and paste in the following example manifest. Make sure to edit the values as needed. This example template configures the following settings:
+
+ * Updates the *Linux* node pool named *myagentpool* to run three nodes.
+ * Sets the nodes in the node pool to run Kubernetes version *1.15.7*.
+ * Defines the node size as *Standard_DS2_v2*.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "metadata": {
+ "description": "The name of your existing AKS cluster."
+ }
+ },
+ "location": {
+ "type": "string",
+ "metadata": {
+ "description": "The location of your existing AKS cluster."
+ }
+ },
+ "agentPoolName": {
+ "type": "string",
+ "defaultValue": "myagentpool",
+ "metadata": {
+ "description": "The name of the agent pool to create or update."
+ }
+ },
+ "vnetSubnetId": {
+ "type": "string",
+ "defaultValue": "",
+ "metadata": {
+ "description": "The Vnet subnet resource ID for your existing AKS cluster."
+ }
+ }
+ },
+ "variables": {
+ "apiVersion": {
+ "aks": "2020-01-01"
+ },
+ "agentPoolProfiles": {
+ "maxPods": 30,
+ "osDiskSizeGB": 0,
+ "agentCount": 3,
+ "agentVmSize": "Standard_DS2_v2",
+ "osType": "Linux",
+ "vnetSubnetId": "[parameters('vnetSubnetId')]"
+ }
+ },
+ "resources": [
+ {
+ "apiVersion": "2020-01-01",
+ "type": "Microsoft.ContainerService/managedClusters/agentPools",
+ "name": "[concat(parameters('clusterName'),'/', parameters('agentPoolName'))]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "maxPods": "[variables('agentPoolProfiles').maxPods]",
+ "osDiskSizeGB": "[variables('agentPoolProfiles').osDiskSizeGB]",
+ "count": "[variables('agentPoolProfiles').agentCount]",
+ "vmSize": "[variables('agentPoolProfiles').agentVmSize]",
+ "osType": "[variables('agentPoolProfiles').osType]",
+ "storageProfile": "ManagedDisks",
+ "type": "VirtualMachineScaleSets",
+ "vnetSubnetID": "[variables('agentPoolProfiles').vnetSubnetId]",
+ "orchestratorVersion": "1.15.7"
+ }
+ }
+ ]
+ }
+ ```
+
+2. Deploy the template using the [`az deployment group create`][az-deployment-group-create] command.
+
+ ```azurecli-interactive
+ az deployment group create \
+ --resource-group myResourceGroup \
+ --template-file aks-agentpools.json
+ ```
+
+ > [!TIP]
+ > You can add a tag to your node pool by adding the *tag* property in the template, as shown in the following example:
+ >
+ > ```json
+ > ...
+ > "resources": [
+ > {
+ > ...
+ > "properties": {
+ > ...
+ > "tags": {
+ > "name1": "val1"
+ > },
+ > ...
+ > }
+ > }
+ > ...
+ > ```
+
+ It may take a few minutes to update your AKS cluster depending on the node pool settings and operations you define in your Resource Manager template.
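+
+   After the deployment finishes, you can confirm the updated node count and orchestrator version. This verification step is a suggestion using the same example names as above:
+
+   ```azurecli-interactive
+   az aks nodepool list \
+       --resource-group myResourceGroup \
+       --cluster-name myAKSCluster \
+       --output table
+   ```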
+
+## Next steps
+
+* For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
+* Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your AKS applications.
+* Use [instance-level public IP addresses](use-node-public-ips.md) to enable your nodes to directly serve traffic.
+
+<!-- EXTERNAL LINKS -->
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[capacity-reservation-groups]:/azure/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set
+
+<!-- INTERNAL LINKS -->
+[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
+[aks-storage-concepts]: concepts-storage.md
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-aks-nodepool-list]: /cli/azure/aks/nodepool#az_aks_nodepool_list
+[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade
+[az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az_aks_nodepool_scale
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create
+[install-azure-cli]: /cli/azure/install-azure-cli
+[operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md
+[quotas-skus-regions]: quotas-skus-regions.md
+[taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations
+[vm-sizes]: ../virtual-machines/sizes.md
+[use-system-pool]: use-system-pools.md
+[reduce-latency-ppg]: reduce-latency-ppg.md
+[use-tags]: use-tags.md
+[use-labels]: use-labels.md
+[cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes
+[internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet
+[drain-nodes]: resize-node-pool.md#drain-the-existing-nodes
+[create-node-pools]: create-node-pools.md
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az_extension_update
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
az aks nodepool show \
- See the [AKS release notes](https://github.com/Azure/AKS/releases) for information about the latest node images. - Learn how to upgrade the Kubernetes version with [Upgrade an AKS cluster][upgrade-cluster]. - [Automatically apply cluster and node pool upgrades with GitHub Actions][github-schedule].-- Learn more about multiple node pools and how to upgrade node pools with [Create and manage multiple node pools][use-multiple-node-pools].
+- Learn more about multiple node pools with [Create multiple node pools][use-multiple-node-pools].
<!-- LINKS - external --> [kubernetes-json-path]: https://kubernetes.io/docs/reference/kubectl/jsonpath/
az aks nodepool show \
<!-- LINKS - internal --> [upgrade-cluster]: upgrade-cluster.md [github-schedule]: node-upgrade-github-actions.md
-[use-multiple-node-pools]: use-multiple-node-pools.md
+[use-multiple-node-pools]: create-node-pools.md
[max-surge]: upgrade-cluster.md#customize-node-surge-upgrade [auto-upgrade-node-image]: auto-upgrade-node-image.md [az-aks-nodepool-get-upgrades]: /cli/azure/aks/nodepool#az_aks_nodepool_get_upgrades
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
# Azure Kubernetes Service (AKS) node pool snapshot
-AKS releases a new node image weekly and every new cluster, new node pool, or upgrade cluster will always receive the latest image that can make it hard to maintain your environments consistent and to have repeatable environments.
+AKS releases a new node image weekly. Every new cluster, new node pool, or upgrade cluster always receives the latest image, which can make it hard to maintain consistency and have repeatable environments.
Node pool snapshots allow you to take a configuration snapshot of your node pool and then create new node pools or new clusters based off that snapshot for as long as that configuration and Kubernetes version is supported. For more information on the supportability windows, see [Supported Kubernetes versions in AKS][supported-versions].
-The snapshot is an Azure resource that will contain the configuration information from the source node pool such as the node image version, kubernetes version, OS type, and OS SKU. You can then reference this snapshot resource and the respective values of its configuration to create any new node pool or cluster based off of it.
+The snapshot is an Azure resource that contains the configuration information from the source node pool, such as the node image version, Kubernetes version, OS type, and OS SKU. You can then reference this snapshot resource and the respective values of its configuration to create any new node pool or cluster based on it.
## Before you begin
This article assumes that you have an existing AKS cluster. If you need an AKS c
- Any node pool or cluster created from a snapshot must use a VM from the same virtual machine family as the snapshot. For example, you can't create a new N-Series node pool based off a snapshot captured from a D-Series node pool because the node images in those cases are structurally different. - Snapshots must be created in the same region as the source node pool, but they can be used to create or update clusters and node pools in other regions. - ## Take a node pool snapshot
-In order to take a snapshot from a node pool first you'll need the node pool resource ID, which you can get from the command below:
+In order to take a snapshot from a node pool, you need the node pool resource ID, which you can get from the following command:
```azurecli-interactive NODEPOOL_ID=$(az aks nodepool show --name nodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --query id -o tsv)
NODEPOOL_ID=$(az aks nodepool show --name nodepool1 --cluster-name myAKSCluster
> Your AKS node pool must be created or upgraded after Nov 10th, 2021 in order for a snapshot to be taken from it. > If you are using the `aks-preview` Azure CLI extension version `0.5.59` or newer, the commands for node pool snapshot have changed. For updated commands, see the [Node Pool Snapshot CLI reference][az-aks-nodepool-snapshot].
+Now, to take a snapshot from the previous node pool, you use the `az aks nodepool snapshot create` CLI command.
+Now, to take a snapshot from the previous node pool, you use the `az aks snapshot` CLI command.
```azurecli-interactive az aks nodepool snapshot create --name MySnapshot --resource-group MyResourceGroup --nodepool-id $NODEPOOL_ID --location eastus
az aks nodepool snapshot create --name MySnapshot --resource-group MyResourceGro
## Create a node pool from a snapshot
-First you'll need the resource ID from the snapshot that was previously created, which you can get from the command below:
+First, you need the resource ID from the snapshot that was previously created, which you can get from the following command:
```azurecli-interactive SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv) ```
-Now, we can use the command below to add a new node pool based off of this snapshot.
+Now, we can use the following command to add a new node pool based on this snapshot.
```azurecli-interactive az aks nodepool add --name np2 --cluster-name myAKSCluster --resource-group myResourceGroup --snapshot-id $SNAPSHOT_ID
az aks nodepool add --name np2 --cluster-name myAKSCluster --resource-group myRe
You can upgrade a node pool to a snapshot configuration as long as the snapshot Kubernetes version and node image version are more recent than the versions in the current node pool.
-First you'll need the resource ID from the snapshot that was previously created, which you can get from the command below:
+First, you need the resource ID from the snapshot that was previously created, which you can get from the following command:
```azurecli-interactive SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
az aks nodepool upgrade --name nodepool1 --cluster-name myAKSCluster --resource-
``` > [!NOTE]
-> Your node pool image version will be the same contained in the snapshot and will remain the same throughout every scale operation. However, if this node pool is upgraded or a node image upgrade is performed without providing a snapshot-id the node image will be upgraded to latest.
+> Your node pool image version is the same as the version contained in the snapshot and remains the same throughout every scale operation. However, if this node pool is upgraded or a node image upgrade is performed without providing a snapshot-id, the node image is upgraded to the latest version.
> [!NOTE] > To upgrade only the node version for your node pool, use the `--node-image-only` flag. This is required when upgrading the node image version for a node pool based on a snapshot with an identical Kubernetes version. ## Create a cluster from a snapshot
-When you create a cluster from a snapshot, the cluster original system pool will be created from the snapshot configuration.
+When you create a cluster from a snapshot, the cluster's original system node pool is created from the snapshot configuration.
-First you'll need the resource ID from the snapshot that was previously created, which you can get from the command below:
+First, you need the resource ID from the snapshot that was previously created, which you can get from the following command:
```azurecli-interactive SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group myResourceGroup --query id -o tsv)
az aks create --name myAKSCluster2 --resource-group myResourceGroup --snapshot-i
- See the [AKS release notes](https://github.com/Azure/AKS/releases) for information about the latest node images. - Learn how to upgrade the Kubernetes version with [Upgrade an AKS cluster][upgrade-cluster].-- Learn how to upgrade you node image version with [Node Image Upgrade][node-image-upgrade]-- Learn more about multiple node pools and how to upgrade node pools with [Create and manage multiple node pools][use-multiple-node-pools].
+- Learn how to upgrade your node image version with [Node Image Upgrade][node-image-upgrade]
+- Learn more about multiple node pools with [Create multiple node pools][use-multiple-node-pools].
<!-- LINKS - internal --> [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
az aks create --name myAKSCluster2 --resource-group myResourceGroup --snapshot-i
[upgrade-cluster]: upgrade-cluster.md [node-image-upgrade]: node-image-upgrade.md [github-schedule]: node-upgrade-github-actions.md
-[use-multiple-node-pools]: use-multiple-node-pools.md
+[use-multiple-node-pools]: create-node-pools.md
[max-surge]: upgrade-cluster.md#customize-node-surge-upgrade [az-extension-add]: /cli/azure/extension#az_extension_add [az-aks-nodepool-snapshot]:/cli/azure/aks/nodepool#az-aks-nodepool-add
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
For AKS clusters that use Windows Server nodes, see [Upgrade a node pool in AKS]
[DaemonSet]: concepts-clusters-workloads.md#statefulsets-and-daemonsets [aks-ssh]: ssh.md [aks-upgrade]: upgrade-cluster.md
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool
[node-image-upgrade]: node-image-upgrade.md
aks Node Upgrade Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md
jobs:
- See the [AKS release notes](https://github.com/Azure/AKS/releases) for information about the latest node images. - Learn how to upgrade the Kubernetes version with [Upgrade an AKS cluster][cluster-upgrades-article].-- Learn more about multiple node pools and how to upgrade node pools with [Create and manage multiple node pools][use-multiple-node-pools].
+- Learn more about multiple node pools with [Create multiple node pools][use-multiple-node-pools].
- Learn more about [system node pools][system-pools] - To learn how to save costs using Spot instances, see [add a spot node pool to AKS][spot-pools]
jobs:
[cluster-upgrades-article]: upgrade-cluster.md [system-pools]: use-system-pools.md [spot-pools]: spot-node-pool.md
-[use-multiple-node-pools]: use-multiple-node-pools.md
+[use-multiple-node-pools]: create-node-pools.md
[auto-upgrade-node-image]: auto-upgrade-node-image.md [azure-built-in-roles]: ../role-based-access-control/built-in-roles.md [azure-rbac-scope-levels]: ../role-based-access-control/scope-overview.md#scope-format
aks Operator Best Practices Advanced Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-advanced-scheduler.md
The Kubernetes scheduler uses taints and tolerations to restrict what workloads
* Apply a **taint** to a node to indicate only specific pods can be scheduled on them. * Then apply a **toleration** to a pod, allowing them to *tolerate* a node's taint.
-When you deploy a pod to an AKS cluster, Kubernetes only schedules pods on nodes whose taint aligns with the toleration. Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node, marking the the node so that it does not accept any pods that do not tolerate the taints.
+When you deploy a pod to an AKS cluster, Kubernetes only schedules pods on nodes whose taint aligns with the toleration. Taints and tolerations work together to ensure that pods aren't scheduled onto inappropriate nodes. One or more taints are applied to a node, marking the node so that it doesn't accept any pods that don't tolerate the taints.
For example, assume you added a node pool in your AKS cluster for nodes with GPU support. You define a name, such as *gpu*, and a value for scheduling. Setting this value to *NoSchedule* restricts the Kubernetes scheduler from scheduling pods with undefined toleration on the node.
az aks nodepool add \
--no-wait ```
-With a taint applied to nodes in the node pool, you'll define a toleration in the pod specification that allows scheduling on the nodes. The following example defines the `sku: gpu` and `effect: NoSchedule` to tolerate the taint applied to the node pool in the previous step:
+With a taint applied to nodes in the node pool, you define a toleration in the pod specification that allows scheduling on the nodes. The following example defines the `sku: gpu` and `effect: NoSchedule` to tolerate the taint applied to the node pool in the previous step:
```yaml kind: Pod
When this pod is deployed using `kubectl apply -f gpu-toleration.yaml`, Kubernet
When you apply taints, work with your application developers and owners to allow them to define the required tolerations in their deployments.
-For more information about how to use multiple node pools in AKS, see [Create and manage multiple node pools for a cluster in AKS][use-multiple-node-pools].
+For more information about how to use multiple node pools in AKS, see [Create multiple node pools for a cluster in AKS][use-multiple-node-pools].
### Behavior of taints and tolerations in AKS When you upgrade a node pool in AKS, taints and tolerations follow a set pattern as they're applied to new nodes:
-#### Default clusters that use VM scale sets
+#### Default clusters that use Azure Virtual Machine Scale Sets
You can [taint a node pool][taint-node-pool] from the AKS API to have newly scaled out nodes receive API specified node taints.
Let's assume:
1. You begin with a two-node cluster: *node1* and *node2*. 1. You upgrade the node pool.
-1. Two additional nodes are created: *node3* and *node4*.
+1. Two other nodes are created: *node3* and *node4*.
1. The taints are passed on respectively. 1. The original *node1* and *node2* are deleted.
-#### Clusters without VM scale set support
+#### Clusters without Virtual Machine Scale Sets support
Again, let's assume: 1. You have a two-node cluster: *node1* and *node2*. 1. You upgrade the node pool.
-1. An additional node is created: *node3*.
+1. An extra node is created: *node3*.
1. The taints from *node1* are applied to *node3*. 1. *node1* is deleted. 1. A new *node1* is created to replace to original *node1*. 1. The *node2* taints are applied to the new *node1*. 1. *node2* is deleted.
-In essence *node1* becomes *node3*, and *node2* becomes the new *node1*.
+In essence, *node1* becomes *node3*, and *node2* becomes the new *node1*.
-When you scale a node pool in AKS, taints and tolerations do not carry over by design.
+When you scale a node pool in AKS, taints and tolerations don't carry over by design.
## Control pod scheduling using node selectors and affinity
Alternatively, you can use node selectors. For example, you label nodes to indic
Unlike tolerations, pods without a matching node selector can still be scheduled on labeled nodes. This behavior allows unused resources on the nodes to be consumed, but prioritizes pods that define the matching node selector.
-Let's look at an example of nodes with a high amount of memory. These nodes prioritize pods that request a high amount of memory. To ensure the resources don't sit idle, they also allow other pods to run. The following example command adds a node pool with the label *hardware=highmem* to the *myAKSCluster* in the *myResourceGroup*. All nodes in that node pool will have this label.
+Let's look at an example of nodes with a high amount of memory. These nodes prioritize pods that request a high amount of memory. To ensure the resources don't sit idle, they also allow other pods to run. The following example command adds a node pool with the label *hardware=highmem* to the *myAKSCluster* in the *myResourceGroup*. All nodes in that node pool have this label.
```azurecli-interactive az aks nodepool add \
This article focused on advanced Kubernetes scheduler features. For more informa
[aks-best-practices-scheduler]: operator-best-practices-scheduler.md [aks-best-practices-identity]: operator-best-practices-identity.md [aks-best-practices-isolation]: operator-best-practices-cluster-isolation.md
-[use-multiple-node-pools]: use-multiple-node-pools.md
-[taint-node-pool]: use-multiple-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool
+[use-multiple-node-pools]: create-node-pools.md
+[taint-node-pool]: manage-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool
[use-gpus-aks]: gpu-cluster.md
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
You can then upgrade your AKS cluster using the [Set-AzAksCluster][set-azaksclus
>[!IMPORTANT] > Test new minor versions in a dev test environment and validate that your workload remains healthy with the new Kubernetes version. >
-> Kubernetes may deprecate APIs (like in version 1.16) that your workloads rely on. When bringing new versions into production, consider using [multiple node pools on separate versions](use-multiple-node-pools.md) and upgrade individual pools one at a time to progressively roll the update across a cluster. If running multiple clusters, upgrade one cluster at a time to progressively monitor for impact or changes.
+> Kubernetes may deprecate APIs (like in version 1.16) that your workloads rely on. When bringing new versions into production, consider using [multiple node pools on separate versions](create-node-pools.md) and upgrade individual pools one at a time to progressively roll the update across a cluster. If running multiple clusters, upgrade one cluster at a time to progressively monitor for impact or changes.
> > ### [Azure CLI](#tab/azure-cli) >
aks Operator Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-network.md
For the specific details around limits and sizing for these address ranges, see
### Kubenet networking
-Although kubenet doesn't require you to set up the virtual networks before deploying the cluster, there are disadvantages to waiting, such as:
+Although kubenet doesn't require you to configure the virtual networks before deploying the cluster, there are disadvantages to waiting, such as:
* Since nodes and pods are placed on different IP subnets, User Defined Routing (UDR) and IP forwarding routes traffic between pods and nodes. This extra routing may reduce network performance. * Connections to existing on-premises networks or peering to other Azure virtual networks can be complex.
To get started with policies, see [Secure traffic between pods using network pol
> > Don't expose remote connectivity to your AKS nodes. Create a bastion host, or jump box, in a management virtual network. Use the bastion host to securely route traffic into your AKS cluster to remote management tasks.
-You can complete most operations in AKS using the Azure management tools or through the Kubernetes API server. AKS nodes are only available on a private network and aren't connected to the public internet. To connect to nodes and provide maintenance and support, route your connections through a bastion host, or jump box. Verify this host lives in a separate, securely-peered management virtual network to the AKS cluster virtual network.
+You can complete most operations in AKS using the Azure management tools or through the Kubernetes API server. AKS nodes are only available on a private network and aren't connected to the public internet. To connect to nodes and provide maintenance and support, route your connections through a bastion host, or jump box. Verify this host lives in a separate, securely peered management virtual network to the AKS cluster virtual network.
![Connect to AKS nodes using a bastion host, or jump box](media/operator-best-practices-network/connect-using-bastion-host-simplified.png)
This article focused on network connectivity and security. For more information
[advanced-networking]: configure-azure-cni.md [aks-configure-kubenet-networking]: configure-kubenet.md [concepts-node-selectors]: concepts-clusters-workloads.md#node-selectors
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quotas-skus-regions.md
You can increase certain default limits and quotas. If your resource supports an
<!-- LINKS - Internal --> [vm-skus]: ../virtual-machines/sizes.md
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
After resizing a node pool by cordoning and draining, learn more about [using mu
[empty-dir]: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir [specify-disruption-budget]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ [disruptions]: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
-[use-multiple-node-pools]: use-multiple-node-pools.md
+[use-multiple-node-pools]: create-node-pools.md
aks Spot Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/spot-node-pool.md
In this article, you learned how to add a Spot node pool to an AKS cluster. For
[pricing-windows]: https://azure.microsoft.com/pricing/details/virtual-machine-scale-sets/windows/ [spot-toleration]: #verify-the-spot-node-pool [taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations
-[use-multiple-node-pools]: use-multiple-node-pools.md
+[use-multiple-node-pools]: create-node-pools.md
[vmss-spot]: ../virtual-machine-scale-sets/use-spot.md [upgrade-cluster]: upgrade-cluster.md
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
If you prefer to see this information visually, here's a Gantt chart with all th
## AKS Components Breaking Changes by Version
-Note important changes to make, before you upgrade to any of the available minor versions per below.
+Note the following important changes to make before you upgrade to any of the available minor versions:
|Kubernetes Version | AKS Managed Addons | AKS Components | OS components | Breaking Changes | Notes |--||-||-||
Note important changes to make, before you upgrade to any of the available minor
> [!NOTE] > Alias minor version requires Azure CLI version 2.37 or above as well as API version 20220401 or above. Use `az upgrade` to install the latest version of the CLI.
-AKS allows you to create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*.
+AKS allows you to create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster runs the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster runs **`1.21.7`**, which is the latest GA patch version of *1.21*.
When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` doesn't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` triggers an upgrade to the latest GA `1.15` patch.
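For example, upgrading a cluster by alias minor version looks like the following sketch. The resource names and version are examples; the target version must be one reported as available by `az aks get-upgrades`:

```azurecli-interactive
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version 1.15
```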
To see what patch you're on, run the `az aks show --resource-group myResourceGro
AKS defines a generally available (GA) version as a version available in all regions and enabled in all SLO or SLA measurements. AKS supports three GA minor versions of Kubernetes:
-* The latest GA minor version released in AKS (which we'll refer to as *N*).
+* The latest GA minor version released in AKS (which we refer to as *N*).
* Two previous minor versions. * Each supported minor version also supports a maximum of two stable patches.
New Supported Version List
## Platform support policy
-Platform support policy is a reduced support plan for certain unsupported kubernetes versions. During platform support, customers will only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components will not be supported.
+Platform support policy is a reduced support plan for certain unsupported Kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform-related issues. Any issues related to Kubernetes functionality and components aren't supported.
-Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, kubernetes v1.25 will be considered platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then be auto-upgraded to v1.26.
+Platform support policy applies to clusters in an n-3 version (where n is the latest supported AKS GA minor version), before the cluster drops to n-4. For example, Kubernetes v1.25 is considered under platform support when v1.28 is the latest GA version. However, during the v1.29 GA release, v1.25 will then auto-upgrade to v1.26.
-AKS relies on the releases and patches from [kubernetes](https://kubernetes.io/releases/), which is an Open Source project that only supports a sliding window of 3 minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since there's no more patches being produced upstream, AKS can either leave those versions unpatched or fork. Due to this limitation, platform support will not support anything from relying on kubernetes upstream.
+AKS relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an open source project that only supports a sliding window of three minor versions. AKS can only guarantee [full support](#kubernetes-version-support-policy) while those versions are being serviced upstream. Since no more patches are produced upstream, AKS can either leave those versions unpatched or fork them. Due to this limitation, platform support doesn't cover anything that relies on Kubernetes upstream.
This table outlines support guidelines for Community Support compared to Platform support.
This table outlines support guidelines for Community Support compared to Platfor
| Platform (Azure) availability | Supported | Supported| | Node pool scaling| Supported | Supported| | VM availability| Supported | Supported|
-| Storage, Networking related issues| Supported | Supported with the exception of bug fixes and retired components |
+| Storage, Networking related issues| Supported | Supported except for bug fixes and retired components |
| Start/stop | Supported | Supported|
| Rotate certificates | Supported | Supported|
| Infrastructure SLA| Supported | Supported|
| Node image upgrade| Supported | Not supported|

> [!NOTE]
- > The above table is subject to change and outlines common support scenarios. Any scenarios related to Kubernetes functionality and components will not be supported for N-3. For further support, see [Support and troubleshooting for AKS](./aks-support-help.md).
+ > The above table is subject to change and outlines common support scenarios. Any scenarios related to Kubernetes functionality and components aren't supported for N-3. For further support, see [Support and troubleshooting for AKS](./aks-support-help.md).
### Supported `kubectl` versions
For new **minor** versions of Kubernetes:
For new **patch** versions of Kubernetes:
* Because of the urgent nature of patch versions, they can be introduced into the service as they become available. Once available, patches have a two-month minimum lifecycle.
-* In general, AKS doesn't broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify you to upgrade to the newly available patch.
+* In general, AKS doesn't broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS notifies you to upgrade to the newly available patch.
* You have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support. However, you'll **no longer be able to create clusters or node pools once the version is deprecated/removed.**

### Supported versions policy exceptions
Get-AzAksVersion -Location eastus
### How does Microsoft notify me of new Kubernetes versions?
-The AKS team publishes announcements with planned dates of the new Kubernetes versions in our documentation, our [GitHub](https://github.com/Azure/AKS/releases), and in emails to subscription administrators who own clusters that are going to fall out of support. AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to alert you inside the Azure portal if you're out of support and inform you of deprecated APIs that will affect your application or development process.
+The AKS team publishes announcements with planned dates of the new Kubernetes versions in our documentation, [GitHub](https://github.com/Azure/AKS/releases), and in emails to subscription administrators who own clusters that are going to fall out of support. AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to alert you inside the Azure portal if you're out of support and inform you of deprecated APIs that can affect your application or development process.
### How often should I expect to upgrade Kubernetes versions to stay in support?
If a cluster has been out of support for more than three (3) minor versions and
### What version does the control plane support if the node pool isn't in one of the supported AKS versions?
-The control plane must be within a window of versions from all node pools. For details on upgrading the control plane or node pools, visit documentation on [upgrading node pools](use-multiple-node-pools.md#upgrade-a-cluster-control-plane-with-multiple-node-pools).
+The control plane must be within a window of versions from all node pools. For details on upgrading the control plane or node pools, visit documentation on [upgrading node pools](manage-node-pools.md#upgrade-a-cluster-control-plane-with-multiple-node-pools).
### Can I skip multiple AKS versions during cluster upgrade?
To upgrade from *1.12.x* -> *1.14.x*:
Skipping multiple versions can only be done when upgrading from an unsupported version back into the minimum supported version. For example, you can upgrade from an unsupported *1.10.x* to a supported *1.15.x* if *1.15* is the minimum supported minor version.
-When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster.
+When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, we recommend that you re-create the cluster.
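The minor-version arithmetic behind this rule can be expressed in a few lines of shell. This is an illustrative sketch only, not an AKS tool; the version strings are examples:

```bash
# Illustrative helper: succeeds (exit 0) when upgrading from $1 to $2
# would skip one or more minor versions (a jump of more than one minor).
skips_minor() {
  IFS=. read -r _ from_minor _ <<< "$1"
  IFS=. read -r _ to_minor _ <<< "$2"
  [ $((to_minor - from_minor)) -gt 1 ]
}

# 1.12.x -> 1.14.x skips 1.13, so it must be done in two steps.
skips_minor 1.12.7 1.14.1 && echo "upgrade in two steps: 1.12 -> 1.13 -> 1.14"
```

A direct *1.12.x* to *1.13.x* jump would not trip this check, matching the supported one-minor-at-a-time upgrade path.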
### Can I create a new 1.xx.x cluster during its 30 day support window?
No. Once a version is deprecated/removed, you can't create a cluster with that v
### I'm on a freshly deprecated version, can I still add new node pools? Or will I have to upgrade?
-No. You won't be allowed to add node pools of the deprecated version to your cluster. You can add node pools of a new version, but this may require you to update the control plane first.
+No. You aren't allowed to add node pools of the deprecated version to your cluster. You can add node pools of a new version, but it may require you to update the control plane first.
### How often do you update patches?
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[az-feature-list]: /cli/azure/feature#az_feature_list
[az-feature-register]: /cli/azure/feature#az_feature_register
[az-provider-register]: /cli/azure/provider#az_provider_register
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool
[upgrade-cluster]: #upgrade-an-aks-cluster
[planned-maintenance]: planned-maintenance.md
[aks-auto-upgrade]: auto-upgrade-cluster.md
aks Use Azure Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-linux.md
The Azure Linux container host has the following limitations:
[azurelinux-doc]: https://microsoft.github.io/CBL-Mariner/docs/#cbl-mariner-linux
[azurelinux-capabilities]: https://microsoft.github.io/CBL-Mariner/docs/#key-capabilities-of-cbl-mariner-linux
[azurelinux-cluster-config]: cluster-configuration.md#azure-linux-container-host-for-aks
-[azurelinux-node-pool]: use-multiple-node-pools.md#add-an-azure-linux-node-pool
-[ubuntu-to-azurelinux]: use-multiple-node-pools.md#migrate-ubuntu-nodes-to-azure-linux
+[azurelinux-node-pool]: create-node-pools.md#add-an-azure-linux-node-pool
+[ubuntu-to-azurelinux]: create-node-pools.md#migrate-ubuntu-nodes-to-azure-linux-nodes
[auto-upgrade-aks]: auto-upgrade-cluster.md
[kured]: node-updates-kured.md
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
- Title: Use multiple node pools in Azure Kubernetes Service (AKS)
-description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)
-- Previously updated : 06/27/2023--
-# Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)
-
-In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) is defined when you create an AKS cluster, which creates a [system node pool][use-system-pool]. To support applications that have different compute or storage demands, you can create more *user node pools*. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and `konnectivity`. User node pools serve the primary purpose of hosting your application pods. However, application pods can be scheduled on system node pools if you wish to only have one pool in your AKS cluster. User node pools are where you place your application-specific pods. For example, use more user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage.
-
-> [!NOTE]
-> This feature enables higher control over how to create and manage multiple node pools. As a result, separate commands are required for create/update/delete. Previously, cluster operations through `az aks create` or `az aks update` used the managedCluster API and were the only options to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and requires use of the `az aks nodepool` command set to execute operations on an individual node pool.
-
-This article shows you how to create and manage one or more node pools in an AKS cluster.
-
-## Before you begin
-
-* You need the Azure CLI version 2.2.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-* Review [Storage options for applications in Azure Kubernetes Service][aks-storage-concepts] to plan your storage configuration.
-
-## Limitations
-
-The following limitations apply when you create and manage AKS clusters that support multiple node pools:
-
-* See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions].
-* You can delete system node pools, provided you have another system node pool to take its place in the AKS cluster.
-* System pools must contain at least one node, and user node pools may contain zero or more nodes.
-* The AKS cluster must use the Standard SKU load balancer to use multiple node pools; the feature isn't supported with Basic SKU load balancers.
-* The AKS cluster must use Virtual Machine Scale Sets for the nodes.
-* You can't change the VM size of a node pool after you create it.
-* The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters; for Windows node pools, between 1 and 6 characters.
-* All node pools must reside in the same virtual network.
-* When creating multiple node pools at cluster create time, all Kubernetes versions used by node pools must match the version set for the control plane. This can be updated after the cluster has been provisioned by using per node pool operations.
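The naming constraints above are easy to get wrong. This hypothetical pre-flight check (not part of the Azure CLI) mirrors them in shell; the function name is illustrative:

```bash
# Hypothetical check mirroring the AKS node pool naming rules:
# lowercase alphanumeric only, starting with a lowercase letter,
# max 12 characters for Linux pools and 6 for Windows pools.
valid_pool_name() {
  local name="$1" os="$2" max=12
  [ "$os" = "windows" ] && max=6
  printf '%s' "$name" | grep -Eq "^[a-z][a-z0-9]{0,$((max - 1))}$"
}

valid_pool_name mynodepool linux && echo "ok"
```

Running such a check before `az aks nodepool add` avoids a round trip to the API just to hit a validation error.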
-
-## Create an AKS cluster
-
-> [!IMPORTANT]
-> If you run a single system node pool for your AKS cluster in a production environment, we recommend you use at least three nodes for the node pool. If one node goes down, you lose control plane resources and redundancy is compromised. You can mitigate this risk by having more control plane nodes.
-
-To get started, create an AKS cluster with a single node pool. The following example uses the [az group create][az-group-create] command to create a resource group named *myResourceGroup* in the *eastus* region. An AKS cluster named *myAKSCluster* is then created using the [`az aks create`][az-aks-create] command.
-
-> [!NOTE]
-> The *Basic* load balancer SKU is **not supported** when using multiple node pools. By default, AKS clusters are created with the *Standard* load balancer SKU from the Azure CLI and Azure portal.
-
-```azurecli-interactive
-# Create a resource group in East US
-az group create --name myResourceGroup --location eastus
-
-# Create a basic single-node pool AKS cluster
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --vm-set-type VirtualMachineScaleSets \
- --node-count 2 \
- --generate-ssh-keys \
- --load-balancer-sku standard
-```
-
-It takes a few minutes to create the cluster.
-
-> [!NOTE]
-> To ensure your cluster operates reliably, you should run at least two nodes in the default node pool, as essential system services are running across this node pool.
-
-When the cluster is ready, use the [`az aks get-credentials`][az-aks-get-credentials] command to get the cluster credentials for use with `kubectl`:
-
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
-
-## Add a node pool
-
-The cluster created in the previous step has a single node pool. Let's add a second node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command. The following example creates a node pool named *mynodepool* that runs *3* nodes:
-
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name mynodepool \
- --node-count 3
-```
-
-> [!NOTE]
-> The name of a node pool must start with a lowercase letter and can only contain alphanumeric characters. For Linux node pools, the length must be between 1 and 12 characters; for Windows node pools, between 1 and 6 characters.
-
-To see the status of your node pools, use the [`az aks nodepool list`][az-aks-nodepool-list] command and specify your resource group and cluster name:
-
-```azurecli-interactive
-az aks nodepool list --resource-group myResourceGroup --cluster-name myAKSCluster
-```
-
-The following example output shows that *mynodepool* has been successfully created with three nodes in the node pool. When the AKS cluster was created in the previous step, a default *nodepool1* was created with a node count of *2*.
-
-```output
-[
- {
- ...
- "count": 3,
- ...
- "name": "mynodepool",
- "orchestratorVersion": "1.15.7",
- ...
- "vmSize": "Standard_DS2_v2",
- ...
- },
- {
- ...
- "count": 2,
- ...
- "name": "nodepool1",
- "orchestratorVersion": "1.15.7",
- ...
- "vmSize": "Standard_DS2_v2",
- ...
- }
-]
-```
-
-> [!TIP]
-> If no *VmSize* is specified when you add a node pool, the default size is *Standard_D2s_v3* for Windows node pools and *Standard_DS2_v2* for Linux node pools. If no *OrchestratorVersion* is specified, it defaults to the same version as the control plane.
-
-### Add an ARM64 node pool
-
-The ARM64 processor provides low power compute for your Kubernetes workloads. To create an ARM64 node pool, you need to choose a [Dpsv5][arm-sku-vm1], [Dplsv5][arm-sku-vm2] or [Epsv5][arm-sku-vm3] series Virtual Machine.
-
-#### Limitations
-
-* ARM64 node pools aren't supported on Defender-enabled clusters
-* FIPS-enabled node pools aren't supported with ARM64 SKUs
-
-Use `az aks nodepool add` command to add an ARM64 node pool.
-
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name armpool \
- --node-count 3 \
- --node-vm-size Standard_D2pds_v5
-```
-
-### Add an Azure Linux node pool
-
-The Azure Linux container host for AKS is an open-source Linux distribution available as an AKS container host. It provides high reliability, security, and consistency. It only includes the minimal set of packages needed for running container workloads, which improves boot times and overall performance.
-
-You can add an Azure Linux node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku AzureLinux`.
-
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name azurelinuxpool \
- --os-sku AzureLinux
-```
-
-### Migrate Ubuntu nodes to Azure Linux
-
-Use the following instructions to migrate your Ubuntu nodes to Azure Linux nodes.
-
-1. Add an Azure Linux node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku AzureLinux`.
-
-> [!NOTE]
-> When adding a new Azure Linux node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing Ubuntu node pool.
-
-2. [Cordon the existing Ubuntu nodes][cordon-and-drain].
-3. [Drain the existing Ubuntu nodes][drain-nodes].
-4. Remove the existing Ubuntu nodes using the `az aks nodepool delete` command.
-
-```azurecli-interactive
-az aks nodepool delete \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name myNodePool
-```
-
-### Add a node pool with a unique subnet
-
-A workload may require splitting a cluster's nodes into separate pools for logical isolation. This isolation can be supported with separate subnets dedicated to each node pool in the cluster. This can address requirements such as having non-contiguous virtual network address space to split across node pools.
-
-> [!NOTE]
-> Make sure to use Azure CLI version `2.35.0` or later.
-
-#### Limitations
-
-* All subnets assigned to node pools must belong to the same virtual network.
-* System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
-* If you expand your VNET after creating the cluster, you must update your cluster (perform any managed cluster operations, but node pool operations don't count) before adding a subnet outside the original CIDR block. While AKS errors-out on the agent pool add, the `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state.
-* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets.
-* Windows nodes will SNAT traffic to the new subnets until the node pool is reimaged.
-* Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation). To override this behavior, you can [specify the load balancer's subnet explicitly using an annotation][internal-lb-different-subnet].
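The subnet resource ID passed to `--vnet-subnet-id` follows a fixed path format. This sketch assembles one from placeholder names (all values below are placeholders to substitute); you can also read the ID directly with `az network vnet subnet show --query id -o tsv`:

```bash
# Placeholder values -- substitute your own subscription, resource group,
# virtual network, and subnet names.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
VNET_RG="myResourceGroup"
VNET_NAME="myVnet"
SUBNET_NAME="myNodePoolSubnet"

# Assemble the full Azure resource ID for the subnet.
SUBNET_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${VNET_RG}/providers/Microsoft.Network/virtualNetworks/${VNET_NAME}/subnets/${SUBNET_NAME}"
echo "$SUBNET_ID"
```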
-
-To create a node pool with a dedicated subnet, pass the subnet resource ID as another parameter when creating a node pool.
-
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name mynodepool \
- --node-count 3 \
- --vnet-subnet-id <YOUR_SUBNET_RESOURCE_ID>
-```
-
-## Upgrade a node pool
-
-> [!NOTE]
-> Upgrade and scale operations on a cluster or node pool cannot occur simultaneously; if attempted, an error is returned. Instead, each operation type must complete on the target resource prior to the next request on that same resource. Read more about this on our [troubleshooting guide](./troubleshooting.md#im-receiving-errors-when-trying-to-upgrade-or-scale-that-state-my-cluster-is-being-upgraded-or-has-failed-upgrade).
-
-The commands in this section explain how to upgrade a single specific node pool. The relationship between upgrading the Kubernetes version of the control plane and the node pool is explained in the [Upgrade a cluster control plane with multiple node pools](#upgrade-a-cluster-control-plane-with-multiple-node-pools) section.
-
-> [!NOTE]
-> The node pool OS image version is tied to the Kubernetes version of the cluster. You only get OS image upgrades following a cluster upgrade.
-
-Since there are two node pools in this example, we must use [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] to upgrade a node pool. To see the available upgrades, use [`az aks get-upgrades`][az-aks-get-upgrades].
-
-```azurecli-interactive
-az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
-```
-
-Let's upgrade the *mynodepool*. Use the [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] command to upgrade the node pool, as shown in the following example:
-
-```azurecli-interactive
-az aks nodepool upgrade \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name mynodepool \
- --kubernetes-version KUBERNETES_VERSION \
- --no-wait
-```
-
-List the status of your node pools again using the [`az aks nodepool list`][az-aks-nodepool-list] command. The following example shows that *mynodepool* is in the *Upgrading* state to *KUBERNETES_VERSION*:
-
-```azurecli
-az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
-```
-
-```output
-[
- {
- ...
- "count": 3,
- ...
- "name": "mynodepool",
- "orchestratorVersion": "KUBERNETES_VERSION",
- ...
- "provisioningState": "Upgrading",
- ...
- "vmSize": "Standard_DS2_v2",
- ...
- },
- {
- ...
- "count": 2,
- ...
- "name": "nodepool1",
- "orchestratorVersion": "1.15.7",
- ...
- "provisioningState": "Succeeded",
- ...
- "vmSize": "Standard_DS2_v2",
- ...
- }
-]
-```
-
-It takes a few minutes to upgrade the nodes to the specified version.
-
-As a best practice, you should upgrade all node pools in an AKS cluster to the same Kubernetes version. The default behavior of `az aks upgrade` is to upgrade all node pools together with the control plane to achieve this alignment. The ability to upgrade individual node pools lets you perform a rolling upgrade and schedule pods between node pools to maintain application uptime within the constraints mentioned above.
-
-## Upgrade a cluster control plane with multiple node pools
-
-> [!NOTE]
-> Kubernetes uses the standard [Semantic Versioning](https://semver.org/) scheme. The version number is expressed as *x.y.z*, where *x* is the major version, *y* is the minor version, and *z* is the patch version. For example, in version *1.12.6*, 1 is the major version, 12 is the minor version, and 6 is the patch version. The Kubernetes version of the control plane and the initial node pool are set during cluster creation. Other node pools have their Kubernetes version set when they're added to the cluster. The Kubernetes versions may differ between node pools as well as between a node pool and the control plane.
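Splitting a version string into its semantic components is a simple dotted-field parse. A minimal shell sketch using the example version from the note:

```bash
# Parse "major.minor.patch" into components (example version: 1.12.6).
version="1.12.6"
IFS=. read -r major minor patch <<< "$version"
echo "major=$major minor=$minor patch=$patch"
```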
-
-An AKS cluster has two cluster resource objects with Kubernetes versions associated.
-
-1. A cluster control plane Kubernetes version.
-2. A node pool with a Kubernetes version.
-
-A control plane maps to one or many node pools. The behavior of an upgrade operation depends on which Azure CLI command is used.
-
-Upgrading an AKS control plane requires using `az aks upgrade`. This command upgrades the control plane version and all node pools in the cluster.
-
-Issuing the `az aks upgrade` command with the `--control-plane-only` flag upgrades only the cluster control plane. None of the associated node pools in the cluster are changed.
-
-Upgrading individual node pools requires using `az aks nodepool upgrade`. This command upgrades only the target node pool with the specified Kubernetes version.
-
-### Validation rules for upgrades
-
-Kubernetes upgrades for a cluster's control plane and node pools are validated using the following sets of rules.
-
-* Rules for valid versions to upgrade node pools:
- * The node pool version must have the same *major* version as the control plane.
- * The node pool *minor* version must be within two *minor* versions of the control plane version.
- * The node pool version can't be greater than the control plane `major.minor.patch` version.
-
-* Rules for submitting an upgrade operation:
- * You can't downgrade the control plane or a node pool Kubernetes version.
- * If a node pool Kubernetes version isn't specified, behavior depends on the client. In Resource Manager templates, the operation falls back to the existing version defined for the node pool; if none is set, the control plane version is used.
- * You can either upgrade or scale a control plane or a node pool at a given time; you can't submit multiple operations on a single control plane or node pool resource simultaneously.
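The node pool rules above can be summarized as a small validator. This is a simplified sketch, not AKS's actual validation code: it compares minor versions only and omits patch-level comparison for brevity:

```bash
# Simplified validator for the node pool version rules above.
# Succeeds (exit 0) when node pool version $2 is acceptable
# for control plane version $1.
nodepool_version_ok() {
  local cp_major cp_minor np_major np_minor
  IFS=. read -r cp_major cp_minor _ <<< "$1"
  IFS=. read -r np_major np_minor _ <<< "$2"
  [ "$np_major" -eq "$cp_major" ] || return 1    # same major version
  [ "$np_minor" -le "$cp_minor" ] || return 1    # not newer than the control plane
  [ $((cp_minor - np_minor)) -le 2 ]             # within two minor versions
}

nodepool_version_ok 1.17.0 1.15.7 && echo "allowed"
```

For example, a *1.15.x* node pool is acceptable under a *1.17.x* control plane, but a *1.14.x* node pool is too far behind and a *1.18.x* node pool is too far ahead.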
-
-## Scale a node pool manually
-
-As your application workload demands change, you may need to scale the number of nodes in a node pool. The number of nodes can be scaled up or down.
-
-<!--If you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications.-->
-
-To scale the number of nodes in a node pool, use the [`az aks nodepool scale`][az-aks-nodepool-scale] command. The following example scales the number of nodes in *mynodepool* to *5*:
-
-```azurecli-interactive
-az aks nodepool scale \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name mynodepool \
- --node-count 5 \
- --no-wait
-```
-
-List the status of your node pools again using the [`az aks nodepool list`][az-aks-nodepool-list] command. The following example shows that *mynodepool* is in the *Scaling* state with a new count of *5* nodes:
-
-```azurecli-interactive
-az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
-```
-
-```output
-[
- {
- ...
- "count": 5,
- ...
- "name": "mynodepool",
- "orchestratorVersion": "1.15.7",
- ...
- "provisioningState": "Scaling",
- ...
- "vmSize": "Standard_DS2_v2",
- ...
- },
- {
- ...
- "count": 2,
- ...
- "name": "nodepool1",
- "orchestratorVersion": "1.15.7",
- ...
- "provisioningState": "Succeeded",
- ...
- "vmSize": "Standard_DS2_v2",
- ...
- }
-]
-```
-
-It takes a few minutes for the scale operation to complete.
-
-## Scale a specific node pool automatically by enabling the cluster autoscaler
-
-AKS offers a separate feature to automatically scale node pools with a feature called the [cluster autoscaler](cluster-autoscaler.md). This feature can be enabled per node pool with unique minimum and maximum scale counts per node pool. Learn how to [use the cluster autoscaler per node pool](cluster-autoscaler.md#use-the-cluster-autoscaler-with-multiple-node-pools-enabled).
-
-## Delete a node pool
-
-If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the [`az aks nodepool delete`][az-aks-nodepool-delete] command and specify the node pool name. The following example deletes the *mynodepool* created in the previous steps:
-
-> [!CAUTION]
-> When you delete a node pool, AKS doesn't perform cordon and drain, and there are no recovery options for data loss that may occur when you delete a node pool. If pods can't be scheduled on other node pools, those applications become unavailable. Make sure you don't delete a node pool when in-use applications don't have data backups or the ability to run on other node pools in your cluster. To minimize the disruption of rescheduling pods currently running on the node pool you are going to delete, perform a cordon and drain on all nodes in the node pool before deleting. For more information, see [cordon and drain node pools][cordon-and-drain].
-
-```azurecli-interactive
-az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name mynodepool --no-wait
-```
-
-The following example output from the [`az aks nodepool list`][az-aks-nodepool-list] command shows that *mynodepool* is in the *Deleting* state:
-
-```azurecli-interactive
-az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
-```
-
-```output
-[
- {
- ...
- "count": 5,
- ...
- "name": "mynodepool",
- "orchestratorVersion": "1.15.7",
- ...
- "provisioningState": "Deleting",
- ...
- "vmSize": "Standard_DS2_v2",
- ...
- },
- {
- ...
- "count": 2,
- ...
- "name": "nodepool1",
- "orchestratorVersion": "1.15.7",
- ...
- "provisioningState": "Succeeded",
- ...
- "vmSize": "Standard_DS2_v2",
- ...
- }
-]
-```
-
-It takes a few minutes to delete the nodes and the node pool.
-
-## Associate capacity reservation groups to node pools (preview)
-
-As your application workload demands change, you can associate node pools with capacity reservation groups you've already created. This ensures guaranteed capacity is allocated for your node pools.
-
-For more information on the capacity reservation groups, review [Capacity Reservation Groups][capacity-reservation-groups].
-
-### Register preview feature
--
-To install the aks-preview extension, run the following command:
-
-```azurecli-interactive
-az extension add --name aks-preview
-```
-
-Run the following command to update to the latest version of the extension released:
-
-```azurecli-interactive
-az extension update --name aks-preview
-```
-
-Register the `CapacityReservationGroupPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
-
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview"
-```
-
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
-
-### Manage capacity reservations
-
-Associating a node pool with an existing capacity reservation group can be done using the [`az aks nodepool add`][az-aks-nodepool-add] command and specifying a capacity reservation group with the `--crg-id` flag. The capacity reservation group should already exist; otherwise, the node pool is added to the cluster with a warning and no capacity reservation group gets associated.
-
-```azurecli-interactive
-az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --crg-id {crg_id}
-```
-
-Associating a system node pool with an existing capacity reservation group can be done using the [`az aks create`][az-aks-create] command. If the capacity reservation group specified doesn't exist, a warning is issued and the cluster gets created without any capacity reservation group association.
-
-```azurecli-interactive
-az aks create -g MyRG --cluster-name MyMC --crg-id {crg_id}
-```
-
-Deleting a node pool implicitly dissociates that node pool from any associated capacity reservation group before the node pool is deleted.
-
-```azurecli-interactive
-az aks nodepool delete -g MyRG --cluster-name MyMC -n myAP
-```
-
-Deleting a cluster implicitly dissociates all node pools in the cluster from their associated capacity reservation groups.
-
-```azurecli-interactive
-az aks delete -g MyRG --cluster-name MyMC
-```
-
-## Specify a VM size for a node pool
-
-In the previous examples to create a node pool, a default VM size was used for the nodes created in the cluster. A more common scenario is for you to create node pools with different VM sizes and capabilities. For example, you may create a node pool that contains nodes with large amounts of CPU or memory, or a node pool that provides GPU support. In the next step, you [use taints and tolerations](#setting-node-pool-taints) to tell the Kubernetes scheduler how to limit access to pods that can run on these nodes.
-
-In the following example, you create a GPU-based node pool that uses the *Standard_NC6* VM size. These VMs are powered by the NVIDIA Tesla K80 card. For information on available VM sizes, see [Sizes for Linux virtual machines in Azure][vm-sizes].
-
-Create a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command again. This time, specify the name *gpunodepool*, and use the `--node-vm-size` parameter to specify the *Standard_NC6* size:
-
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name gpunodepool \
- --node-count 1 \
- --node-vm-size Standard_NC6 \
- --no-wait
-```
-
-The following example output from the [`az aks nodepool list`][az-aks-nodepool-list] command shows that *gpunodepool* is *Creating* nodes with the specified *VmSize*:
-
-```azurecli-interactive
-az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
-```
-
-```output
-[
- {
- ...
- "count": 1,
- ...
- "name": "gpunodepool",
- "orchestratorVersion": "1.15.7",
- ...
- "provisioningState": "Creating",
- ...
- "vmSize": "Standard_NC6",
- ...
- },
- {
- ...
- "count": 2,
- ...
- "name": "nodepool1",
- "orchestratorVersion": "1.15.7",
- ...
- "provisioningState": "Succeeded",
- ...
- "vmSize": "Standard_DS2_v2",
- ...
- }
-]
-```
-
-It takes a few minutes for the *gpunodepool* to be successfully created.
-
-## Specify a taint, label, or tag for a node pool
-
-When creating a node pool, you can add taints, labels, or tags to that node pool. When you add a taint, label, or tag, all nodes within that node pool also get that taint, label, or tag.
-
-> [!IMPORTANT]
-> Adding taints, labels, or tags to nodes should be done for the entire node pool using `az aks nodepool`. Applying taints, labels, or tags to individual nodes in a node pool using `kubectl` is not recommended.
-
-### Setting node pool taints
-
-To create a node pool with a taint, use [`az aks nodepool add`][az-aks-nodepool-add]. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint.
-
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name taintnp \
- --node-count 1 \
- --node-taints sku=gpu:NoSchedule \
- --no-wait
-```
-
-The following example output from the [`az aks nodepool list`][az-aks-nodepool-list] command shows that *taintnp* is *Creating* nodes with the specified *nodeTaints*:
-
-```azurecli-interactive
-az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
-```
-
-```output
-[
- {
- ...
- "count": 1,
- ...
- "name": "taintnp",
- "orchestratorVersion": "1.15.7",
- ...
- "provisioningState": "Creating",
- ...
- "nodeTaints": [
- "sku=gpu:NoSchedule"
- ],
- ...
- },
- ...
-]
-```
-
-The taint information is visible to Kubernetes for handling scheduling rules for nodes. The Kubernetes scheduler uses taints and tolerations to restrict the workloads that can run on nodes.
-
-* A **taint** is applied to a node to indicate that only specific pods can be scheduled on it.
-* A **toleration** is then applied to a pod to allow it to *tolerate* a node's taint.
-
-For more information on how to use advanced Kubernetes scheduler features, see [Best practices for advanced scheduler features in AKS][taints-tolerations].
-
-In the previous step, you applied the *sku=gpu:NoSchedule* taint when you created your node pool. The following basic example YAML manifest uses a toleration to allow the Kubernetes scheduler to run an NGINX pod on a node in that node pool.
-
-Create a file named `nginx-toleration.yaml` and copy in the following example YAML:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: mypod
-spec:
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine
- name: mypod
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 1
- memory: 2G
- tolerations:
- - key: "sku"
- operator: "Equal"
- value: "gpu"
- effect: "NoSchedule"
-```
-
-Schedule the pod using the `kubectl apply -f nginx-toleration.yaml` command:
-
-```bash
-kubectl apply -f nginx-toleration.yaml
-```
-
-It takes a few seconds to schedule the pod and pull the NGINX image. Use the [kubectl describe pod][kubectl-describe] command to view the pod status. The following condensed example output shows the *sku=gpu:NoSchedule* toleration is applied. In the events section, the scheduler has assigned the pod to the *aks-taintnp-28993262-vmss000000* node:
-
-```bash
-kubectl describe pod mypod
-```
-
-```output
-[...]
-Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
- node.kubernetes.io/unreachable:NoExecute for 300s
- sku=gpu:NoSchedule
-Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 4m48s default-scheduler Successfully assigned default/mypod to aks-taintnp-28993262-vmss000000
- Normal Pulling 4m47s kubelet pulling image "mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine"
- Normal Pulled 4m43s kubelet Successfully pulled image "mcr.microsoft.com/oss/nginx/nginx:1.15.9-alpine"
- Normal Created 4m40s kubelet Created container
- Normal Started 4m40s kubelet Started container
-```
-
-Only pods that have this toleration applied can be scheduled on nodes in *taintnp*. Any other pod would be scheduled in the *nodepool1* node pool. If you create more node pools, you can use taints and tolerations to limit what pods can be scheduled on those node resources.
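The matching rule the scheduler applies here can be sketched in a few lines. The following is an illustrative model only, not Kubernetes scheduler code, assuming the standard `Equal`/`Exists` toleration operator semantics:

```python
# Illustrative model of how a pod toleration matches a node taint.
# Simplified sketch for the sku=gpu:NoSchedule example above.

def tolerates(toleration: dict, taint: dict) -> bool:
    """Return True if the pod's toleration matches the node's taint."""
    if toleration.get("key") != taint["key"]:
        return False
    # An empty effect on the toleration matches any taint effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        return True  # Exists matches any value for the key
    return toleration.get("value") == taint["value"]

taint = {"key": "sku", "value": "gpu", "effect": "NoSchedule"}
pod_toleration = {"key": "sku", "operator": "Equal", "value": "gpu", "effect": "NoSchedule"}

print(tolerates(pod_toleration, taint))  # True: this pod can land on taintnp
print(tolerates({"key": "sku", "operator": "Equal", "value": "cpu"}, taint))  # False
```

A pod whose tolerations all fail this check against `sku=gpu:NoSchedule` is kept off the *taintnp* nodes, which is exactly why untolerated pods land in *nodepool1*.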
-
-### Setting node pool labels
-
-For more information on using labels with node pools, see [Use labels in an Azure Kubernetes Service (AKS) cluster][use-labels].
-
-### Setting node pool Azure tags
-
-For more information on using Azure tags with node pools, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
-
-## Add a FIPS-enabled node pool
-
-For more information on enabling Federal Information Processing Standard (FIPS) for your AKS cluster, see [Enable Federal Information Processing Standard (FIPS) for Azure Kubernetes Service (AKS) node pools][enable-fips-nodes].
-
-## Manage node pools using a Resource Manager template
-
-When you use an Azure Resource Manager template to create and manage resources, you can typically update the settings in your template and redeploy to update the resource. With node pools in AKS, the initial node pool profile can't be updated once the AKS cluster has been created. This behavior means that you can't update an existing Resource Manager template, make a change to the node pools, and redeploy. Instead, you must create a separate Resource Manager template that updates only the node pools for an existing AKS cluster.
-
-Create a template such as `aks-agentpools.json` and paste the following example manifest. This example template configures the following settings:
-
-* Updates the *Linux* node pool named *myagentpool* to run three nodes.
-* Sets the nodes in the node pool to run Kubernetes version *1.15.7*.
-* Defines the node size as *Standard_DS2_v2*.
-
-Edit these values as needed to update, add, or delete node pools:
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "clusterName": {
- "type": "string",
- "metadata": {
- "description": "The name of your existing AKS cluster."
- }
- },
- "location": {
- "type": "string",
- "metadata": {
- "description": "The location of your existing AKS cluster."
- }
- },
- "agentPoolName": {
- "type": "string",
- "defaultValue": "myagentpool",
- "metadata": {
- "description": "The name of the agent pool to create or update."
- }
- },
- "vnetSubnetId": {
- "type": "string",
- "defaultValue": "",
- "metadata": {
- "description": "The Vnet subnet resource ID for your existing AKS cluster."
- }
- }
- },
- "variables": {
- "apiVersion": {
- "aks": "2020-01-01"
- },
- "agentPoolProfiles": {
- "maxPods": 30,
- "osDiskSizeGB": 0,
- "agentCount": 3,
- "agentVmSize": "Standard_DS2_v2",
- "osType": "Linux",
- "vnetSubnetId": "[parameters('vnetSubnetId')]"
- }
- },
- "resources": [
- {
- "apiVersion": "2020-01-01",
- "type": "Microsoft.ContainerService/managedClusters/agentPools",
- "name": "[concat(parameters('clusterName'),'/', parameters('agentPoolName'))]",
- "location": "[parameters('location')]",
- "properties": {
- "maxPods": "[variables('agentPoolProfiles').maxPods]",
- "osDiskSizeGB": "[variables('agentPoolProfiles').osDiskSizeGB]",
- "count": "[variables('agentPoolProfiles').agentCount]",
- "vmSize": "[variables('agentPoolProfiles').agentVmSize]",
- "osType": "[variables('agentPoolProfiles').osType]",
- "storageProfile": "ManagedDisks",
- "type": "VirtualMachineScaleSets",
- "vnetSubnetID": "[variables('agentPoolProfiles').vnetSubnetId]",
- "orchestratorVersion": "1.15.7"
- }
- }
- ]
-}
-```
-
-Deploy this template using the [`az deployment group create`][az-deployment-group-create] command, as shown in the following example. You're prompted for the existing AKS cluster name and location:
-
-```azurecli-interactive
-az deployment group create \
- --resource-group myResourceGroup \
- --template-file aks-agentpools.json
-```
-
-> [!TIP]
-> You can add a tag to your node pool by adding the *tags* property in the template, as shown in the following example.
->
-> ```json
-> ...
-> "resources": [
-> {
-> ...
-> "properties": {
-> ...
-> "tags": {
-> "name1": "val1"
-> },
-> ...
-> }
-> }
-> ...
-> ```
-
-It may take a few minutes to update your AKS cluster depending on the node pool settings and operations you define in your Resource Manager template.
-
-## Clean up resources
-
-In this article, you created an AKS cluster that includes GPU-based nodes. To reduce unnecessary cost, you may want to delete the *gpunodepool*, or the whole AKS cluster.
-
-To delete the GPU-based node pool, use the [`az aks nodepool delete`][az-aks-nodepool-delete] command as shown in the following example:
-
-```azurecli-interactive
-az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name gpunodepool
-```
-
-To delete the cluster itself, use the [`az group delete`][az-group-delete] command to delete the AKS resource group:
-
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
-
-You can also delete the other cluster you created for the node public IP scenario.
-
-```azurecli-interactive
-az group delete --name myResourceGroup2 --yes --no-wait
-```
-
-## Next steps
-
-* Learn more about [system node pools][use-system-pool].
-
-* In this article, you learned how to create and manage multiple node pools in an AKS cluster. For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
-
-* To create and use Windows Server container node pools, see [Create a Windows Server container in AKS][aks-quickstart-windows-cli].
-
-* Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your AKS applications.
-
-* Use [instance-level public IP addresses](use-node-public-ips.md) to make your nodes able to serve traffic directly.
-
-<!-- EXTERNAL LINKS -->
-[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-[capacity-reservation-groups]:/azure/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set
-
-<!-- INTERNAL LINKS -->
-[aks-storage-concepts]: concepts-storage.md
-[arm-sku-vm1]: ../virtual-machines/dpsv5-dpdsv5-series.md
-[arm-sku-vm2]: ../virtual-machines/dplsv5-dpldsv5-series.md
-[arm-sku-vm3]: ../virtual-machines/epsv5-epdsv5-series.md
-[aks-quickstart-windows-cli]: ./learn/quick-windows-container-deploy-cli.md
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[az-aks-create]: /cli/azure/aks#az_aks_create
-[az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
-[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
-[az-aks-nodepool-list]: /cli/azure/aks/nodepool#az_aks_nodepool_list
-[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade
-[az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az_aks_nodepool_scale
-[az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az_aks_nodepool_delete
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[az-group-create]: /cli/azure/group#az_group_create
-[az-group-delete]: /cli/azure/group#az_group_delete
-[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create
-[enable-fips-nodes]: enable-fips-nodes.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md
-[quotas-skus-regions]: quotas-skus-regions.md
-[taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations
-[vm-sizes]: ../virtual-machines/sizes.md
-[use-system-pool]: use-system-pools.md
-[reduce-latency-ppg]: reduce-latency-ppg.md
-[use-tags]: use-tags.md
-[use-labels]: use-labels.md
-[cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes
-[internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet
-[drain-nodes]: resize-node-pool.md#drain-the-existing-nodes
aks Use Node Public Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-node-public-ips.md
Containers:
## Next steps
-* Learn about [using multiple node pools in AKS](use-multiple-node-pools.md).
+* Learn about [using multiple node pools in AKS](create-node-pools.md).
* Learn about [using standard load balancers in AKS](load-balancer-standard.md)
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
In this article, you learned how to create and manage system node pools in an AK
[kubernetes-label-syntax]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set <!-- INTERNAL LINKS -->
-[aks-taints]: use-multiple-node-pools.md#setting-node-pool-taints
+[aks-taints]: manage-node-pools.md#set-node-pool-taints
[aks-windows]: windows-container-cli.md [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials [az-aks-create]: /cli/azure/aks#az-aks-create
In this article, you learned how to create and manage system node pools in an AK
[tag-limitation]: ../azure-resource-manager/management/tag-resources.md [taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations [vm-sizes]: ../virtual-machines/sizes.md
-[use-multiple-node-pools]: use-multiple-node-pools.md
+[use-multiple-node-pools]: create-node-pools.md
[maximum-pods]: configure-azure-cni.md#maximum-pods-per-node [update-node-pool-mode]: use-system-pools.md#update-existing-cluster-system-and-user-node-pools [start-stop-nodepools]: ./start-stop-nodepools.md
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
To get started with Windows Server containers in AKS, see [Create a node pool th
<!-- LINKS - internal --> [azure-network-models]: concepts-network.md#azure-virtual-networks [configure-azure-cni]: configure-azure-cni.md
-[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool
[windows-node-cli]: ./learn/quick-windows-container-deploy-cli.md [aks-support-policies]: support-policies.md [aks-faq]: faq.md [upgrade-cluster]: upgrade-cluster.md
-[upgrade-cluster-cp]: use-multiple-node-pools.md#upgrade-a-cluster-control-plane-with-multiple-node-pools
+[upgrade-cluster-cp]: manage-node-pools.md#upgrade-a-cluster-control-plane-with-multiple-node-pools
[azure-outbound-traffic]: ../load-balancer/load-balancer-outbound-connections.md#defaultsnat
-[nodepool-limitations]: use-multiple-node-pools.md#limitations
+[nodepool-limitations]: create-node-pools.md#limitations
[windows-container-compat]: /virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-2019%2Cwindows-10-1909 [maximum-number-of-pods]: configure-azure-cni.md#maximum-pods-per-node [azure-monitor]: ../azure-monitor/containers/container-insights-overview.md#what-does-azure-monitor-for-containers-provide
analysis-services Analysis Services Datasource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-datasource.md
Connecting to on-premises data sources from an Azure Analysis Services server re
|IBM Informix |Yes | No | |
|JSON document | Yes | No | <sup>[6](#tab1400b)</sup> |
|Lines from binary | Yes | No | <sup>[6](#tab1400b)</sup> |
-|MySQL Database | Yes | No | |
+|MySQL Database | Yes | No | <sup>[13](#mysql)</sup> |
|OData Feed | Yes | No | <sup>[6](#tab1400b)</sup> |
|ODBC query | Yes | No | |
|OLE DB | Yes | No | |
Connecting to on-premises data sources from an Azure Analysis Services server re
<a name="instgw">8</a> - If specifying MSOLEDBSQL as the data provider, it may be necessary to download and install the [Microsoft OLE DB Driver for SQL Server](/sql/connect/oledb/oledb-driver-for-sql-server) on the same computer as the On-premises data gateway.
<a name="oracle">9</a> - For tabular 1200 models, or as a *provider* data source in tabular 1400+ models, specify Oracle Data Provider for .NET. If specified as a structured data source, be sure to [enable Oracle managed provider](#enable-oracle-managed-provider).
<a name="teradata">10</a> - For tabular 1200 models, or as a *provider* data source in tabular 1400+ models, specify Teradata Data Provider for .NET.
-<a name="filesSP">11</a> - Files in on-premises SharePoint are not supported.
-<a name="tds">12</a> - Azure Analysis Services does not support direct connections to the Dynamics 365 [Dataverse TDS endpoint](/power-apps/developer/data-platform/dataverse-sql-query). When connecting to this data source from Azure Analysis Services, you must use an On-premises Data Gateway, and refresh the tokens manually.
+<a name="filesSP">11</a> - Files in on-premises SharePoint aren't supported.
+<a name="tds">12</a> - Azure Analysis Services doesn't support direct connections to the Dynamics 365 [Dataverse TDS endpoint](/power-apps/developer/data-platform/dataverse-sql-query). When connecting to this data source from Azure Analysis Services, you must use an On-premises Data Gateway and refresh the tokens manually.
+<a name="mysql">13</a> - Azure Analysis Services doesn't support direct connections to MySQL databases. When connecting to this data source from Azure Analysis Services, you must use an On-premises Data Gateway and refresh the tokens manually.
## Understanding providers
api-management Api Management Howto Api Inspector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-api-inspector.md
In this tutorial, you learn how to:
> * Trace an example call
> * Review request processing steps

## Prerequisites
To trace request processing, you must enable the **Allow tracing** setting for t
1. Navigate to your API Management instance and select **Subscriptions** to review the settings.
- :::image type="content" source="media/api-management-howto-api-inspector/allow-tracing-1.png" alt-text="Allow tracing for subscription":::
+ :::image type="content" source="media/api-management-howto-api-inspector/allow-tracing-1.png" alt-text="Screenshot showing how to allow tracing for subscription." lightbox="media/api-management-howto-api-inspector/allow-tracing-1.png":::
1. If tracing isn't enabled for the subscription you're using, select the subscription and enable **Allow tracing**. [!INCLUDE [api-management-tracing-alert](../../includes/api-management-tracing-alert.md)]
To trace request processing, you must enable the **Allow tracing** setting for t
* If your subscription doesn't already allow tracing, you're prompted to enable it if you want to trace the call.
* You can also choose to send the request without tracing.
- :::image type="content" source="media/api-management-howto-api-inspector/06-debug-your-apis-01-trace-call-1.png" alt-text="Screenshot showing configure API tracing.":::
+ :::image type="content" source="media/api-management-howto-api-inspector/06-debug-your-apis-01-trace-call-1.png" alt-text="Screenshot showing configure API tracing." lightbox="media/api-management-howto-api-inspector/06-debug-your-apis-01-trace-call-1.png":::
## Review trace information
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
For information about when this file is updated and when the IP addresses change
In the Developer, Basic, Standard, and Premium tiers of API Management, the public IP address or addresses (VIP) and private VIP addresses (if configured in the internal VNet mode) are static for the lifetime of a service, with the following exceptions: * The API Management service is deleted and then re-created.
-* The service subscription is [suspended](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) or [warned](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) (for example, for nonpayment) and then reinstated.
+* The service subscription is disabled or warned (for example, for nonpayment) and then reinstated. [Learn more about subscription states](/azure/cost-management-billing/manage/subscription-states).
* (Developer and Premium tiers) Azure Virtual Network is added to or removed from the service. * (Developer and Premium tiers) API Management service is switched between external and internal VNet deployment mode. * (Developer and Premium tiers) API Management service is moved to a different subnet.
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
If you already have a private certificate from a third-party provider, you can u
We recommend using Azure Key Vault to [manage your certificates](../key-vault/certificates/about-certificates.md) and setting them to `autorenew`.
-If you use Azure Key Vault to manage a custom domain TLS certificate, make sure the certificate is inserted into Key Vault [as a _certificate_](/rest/api/keyvault/certificates/create-certificate/create-certificate), not a _secret_.
+If you use Azure Key Vault to manage a custom domain TLS certificate, make sure the certificate is inserted into Key Vault [as a _certificate_](/rest/api/keyvault/certificates/create-certificate/create-certificate), not a _secret_.
+
+> [!CAUTION]
+> When using a key vault certificate in API Management, be careful not to delete the certificate, key vault, or managed identity used to access the key vault.
To fetch a TLS/SSL certificate, API Management must have the list and get secrets permissions on the Azure Key Vault containing the certificate. * When you use the Azure portal to import the certificate, all the necessary configuration steps are completed automatically.
API Management offers a free, managed TLS certificate for your domain, if you do
* Does not support root domain names (for example, `contoso.com`). Requires a fully qualified name such as `api.contoso.com`.
* Can only be configured when updating an existing API Management instance, not when creating an instance.

## Set a custom domain name - portal
Choose the steps according to the [domain certificate](#domain-certificate-optio
> [!NOTE]
> The process of assigning the certificate may take 15 minutes or more depending on the size of the deployment. The Developer tier has downtime, while Basic and higher tiers do not.

## DNS configuration
You can also get a domain ownership identifier by calling the [Get Domain Owners
[Upgrade and scale your service](upgrade-and-scale.md)
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
The minimum size of the subnet in which API Management can be deployed is /29, w
* **/25 subnet**: 128 possible IP addresses - 5 reserved Azure IP addresses - 2 API Management IP addresses for one instance - 1 IP address for internal load balancer, if used in internal mode = 120 remaining IP addresses left for sixty scale-out units (2 IP addresses/scale-out unit) for a total of sixty-one units. This is an extremely large, theoretical number of scale-out units.
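The subnet arithmetic above can be reproduced with a short calculation. This is a rough sketch of the sizing formula described in this section (5 Azure-reserved addresses per subnet, 2 addresses for the first API Management unit, 1 address for the internal load balancer in internal mode, and 2 addresses per additional scale-out unit); `apim_max_units` is an illustrative helper, not an Azure API:

```python
def apim_max_units(prefix_len: int, internal_mode: bool = True) -> int:
    """Estimate total API Management units that fit in a subnet of the given prefix length."""
    total_ips = 2 ** (32 - prefix_len)
    available = total_ips - 5          # Azure reserves 5 IP addresses per subnet
    available -= 2                     # the first API Management unit uses 2 IPs
    if internal_mode:
        available -= 1                 # internal load balancer IP
    scale_out_units = available // 2   # 2 IPs per additional scale-out unit
    return 1 + scale_out_units

print(apim_max_units(29))  # /29 -> 1 unit (the minimum subnet size)
print(apim_max_units(25))  # /25 -> 61 units, matching the /25 example above
```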
+> [!IMPORTANT]
+> The private IP addresses of the internal load balancer and API Management units are assigned dynamically, so you can't predict the private IP address of the API Management instance before deployment. Additionally, moving the instance to a different subnet and back may change the private IP address.
### Routing

See the Routing guidance when deploying your API Management instance into an [external VNet](./api-management-using-with-vnet.md#routing) or [internal VNet](./api-management-using-with-internal-vnet.md#routing).
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
description: Learn how to attach custom network share in Azure App Service. Sha
Previously updated : 6/29/2022 Last updated : 8/4/2023 zone_pivot_groups: app-service-containers-code # Mount Azure Storage as a local share in App Service
-> [!NOTE]
-> When using VNET integration on your web app, the mounted drive will use an RFC 1918 IP address and not an IP address from your VNET.
->
-
-This guide shows how to mount Azure Storage Files as a network share in Windows code (non-container) in App Service. Only [Azure Files Shares](../storage/files/storage-how-to-use-files-portal.md) and [Premium Files Shares](../storage/files/storage-how-to-create-file-share.md) are supported. The benefits of custom-mounted storage include:
-- Configure persistent storage for your App Service app and manage the storage separately.
-- Make static content like video and images readily available for your App Service app.
-- Write application log files or archive older application logs to Azure File shares.
-- Share content across multiple apps or with other Azure services.
-The following features are supported for Windows code:
-- Secured access to storage accounts with [private endpoints](../storage/common/storage-private-endpoints.md) and [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) (when [VNET integration](./overview-vnet-integration.md) is used).
-- Azure Files (read/write).
-- Up to five mount points per app.
-- Mount Azure Storage file shares using "/mounts/`<path-name>`".
-This guide shows how to mount Azure Storage Files as a network share in a Windows container in App Service. Only [Azure Files Shares](../storage/files/storage-how-to-use-files-portal.md) and [Premium Files Shares](../storage/files/storage-how-to-create-file-share.md) are supported. The benefits of custom-mounted storage include:
-- Configure persistent storage for your App Service app and manage the storage separately.
-- Make static content like video and images readily available for your App Service app.
-- Write application log files or archive older application logs to Azure File shares.
-- Share content across multiple apps or with other Azure services.
-- Mount Azure Storage in a Windows container, including Isolated ([App Service environment v3](environment/overview.md)).
-The following features are supported for Windows containers:
-- Secured access to storage accounts with [private endpoints](../storage/common/storage-private-endpoints.md) and [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) (when [VNET integration](./overview-vnet-integration.md) is used).
-- Azure Files (read/write).
-- Up to five mount points per app.
-- Drive letter assignments (`C:` to `Z:`).
-This guide shows how to mount Azure Storage as a network share in a built-in Linux container or a custom Linux container in App Service. See the video [how to mount Azure Storage as a local share](https://www.youtube.com/watch?v=OJkvpWYr57Y). For using Azure Storage in an ARM template, see [Bring your own storage](https://github.com/Azure/app-service-linux-docs/blob/master/BringYourOwnStorage/BYOS_azureFiles.json). The benefits of custom-mounted storage include:
-- Configure persistent storage for your App Service app and manage the storage separately.
-- Make static content like video and images readily available for your App Service app.
-- Write application log files or archive older application logs to Azure File shares.
-- Share content across multiple apps or with other Azure services.
-The following features are supported for Linux containers:
-- Secured access to storage accounts with [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) and [private links](../storage/common/storage-private-endpoints.md) (when [VNET integration](./overview-vnet-integration.md) is used).
-- Azure Files (read/write).
-- Azure Blobs (read-only).
-- Up to five mount points per app.
-## Prerequisites
::: zone pivot="code-windows"
-- [An existing Windows code app in App Service](quickstart-dotnetcore.md)
-- [Create Azure file share](../storage/files/storage-how-to-use-files-portal.md)
-- [Upload files to Azure File share](../storage/files/storage-how-to-create-file-share.md)
::: zone-end
::: zone pivot="container-windows"
-- [An existing Windows container app in App Service](quickstart-custom-container.md)
-- [Create Azure file share](../storage/files/storage-how-to-use-files-portal.md)
-- [Upload files to Azure File share](../storage/files/storage-how-to-create-file-share.md)
::: zone-end
::: zone pivot="container-linux"
-- An existing [App Service on Linux app](index.yml).
-- An [Azure Storage Account](../storage/common/storage-account-create.md?tabs=azure-cli)
-- An [Azure file share and directory](../storage/files/storage-how-to-use-files-portal.md).
-> [!NOTE]
-> Azure Storage is non-default storage for App Service and billed separately, not included with App Service.
->
-
-## Limitations
-- [Storage firewall](../storage/common/storage-network-security.md) is supported only through [private endpoints](../storage/common/storage-private-endpoints.md) and [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) (when [VNET integration](./overview-vnet-integration.md) is used).
-- Azure blobs are not supported when configuring Azure storage mounts for Windows code apps deployed to App Service.
-- FTP/FTPS access to mounted storage is not supported (use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)).
-- Mapping `/mounts`, `mounts/foo/bar`, `/`, and `/mounts/foo.bar/` to custom-mounted storage is not supported (you can only use `/mounts/pathname` for mounting custom storage to your web app).
-- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.

-- Azure blobs are not supported.
-- [Storage firewall](../storage/common/storage-network-security.md) is supported only through [private endpoints](../storage/common/storage-private-endpoints.md) and [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) (when [VNET integration](./overview-vnet-integration.md) is used).
-- FTP/FTPS access to mounted storage is not supported (use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)).
-- Mapping `[C-Z]:\`, `[C-Z]:\home`, `/`, and `/home` to custom-mounted storage is not supported.
-- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
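As a quick illustration of the `/mounts/pathname` rule in the Windows code limitations above, a hypothetical validation helper can make the constraint concrete (`is_valid_mount_path` is not an App Service API, just a sketch of the stated rule):

```python
import re

def is_valid_mount_path(path: str) -> bool:
    # Hypothetical helper: only a single path segment directly under /mounts
    # is allowed, per the Windows code limitations listed above.
    return re.fullmatch(r"/mounts/[^/]+", path) is not None

print(is_valid_mount_path("/mounts/media"))    # True
print(is_valid_mount_path("/mounts"))          # False: /mounts itself can't be mapped
print(is_valid_mount_path("/mounts/foo/bar"))  # False: nested paths aren't supported
print(is_valid_mount_path("/"))                # False: the root can't be mapped
```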
-> [!NOTE]
-> Ensure ports 80 and 445 are open when using Azure Files with VNET integration.
->
-- [Storage firewall](../storage/common/storage-network-security.md) is supported only through [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) and [private endpoints](../storage/common/storage-private-endpoints.md) (when [VNET integration](./overview-vnet-integration.md) is used).
-- FTP/FTPS access to custom-mounted storage is not supported (use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)).
-- Azure CLI, Azure PowerShell, and Azure SDK support is in preview.
-- Mapping `/` or `/home` to custom-mounted storage is not supported.
-- Don't map the custom storage mount to `/tmp` or its subdirectories as this may cause a timeout during app startup.
-- Azure Storage is not supported with [Docker Compose scenarios](configure-custom-container.md?pivots=container-linux#docker-compose-options).
-- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
-- Only Azure Files [SMB](../storage/files/files-smb-protocol.md) shares are supported. Azure Files [NFS](../storage/files/files-nfs-protocol.md) is not currently supported for Linux App Services.
-> [!NOTE]
-> When VNET integration is used, ensure the following ports are open:
-> * Azure Files: 80 and 445.
-> * Azure Blobs: 80 and 443.
->
-
-## Mount storage to Windows code
-## Mount storage to Windows container
-## Mount storage to Linux container
-
-# [Azure portal](#tab/portal)
-
-1. In the [Azure portal](https://portal.azure.com), navigate to the app.
-1. From the left navigation, click **Configuration** > **Path Mappings** > **New Azure Storage Mount**.
-1. Configure the storage mount according to the following table. When finished, click **OK**.
-
- ::: zone pivot="code-windows"
- | Setting | Description |
- |-|-|
- | **Name** | Name of the mount configuration. Spaces are not allowed. |
- | **Configuration options** | Select **Basic** if the storage account is not using [private endpoints](../storage/common/storage-private-endpoints.md). Otherwise, select **Advanced**. |
- | **Storage accounts** | Azure Storage account. It must contain an Azure Files share. |
- | **Share name** | Files share to mount. |
- | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. |
- | **Mount path** | Directory inside your app service that you want to mount. Only `/mounts/pathname` is supported.|
- | **Deployment slot setting** | When checked, the storage mount settings also apply to deployment slots.|
- ::: zone-end
- ::: zone pivot="container-windows"
- | Setting | Description |
- |-|-|
- | **Name** | Name of the mount configuration. Spaces are not allowed. |
- | **Configuration options** | Select **Basic** if the storage account is not using [private endpoints](../storage/common/storage-private-endpoints.md). Otherwise, select **Advanced**. |
- | **Storage accounts** | Azure Storage account. It must contain an Azure Files share. |
- | **Share name** | Files share to mount. |
- | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. |
- | **Mount path** | Directory inside your Windows container that you want to mount. Do not use a root directory (`[C-Z]:\` or `/`) or the `home` directory (`[C-Z]:\home`, or `/home`) as it's not supported.|
- | **Deployment slot setting** | When checked, the storage mount settings also apply to deployment slots.|
- ::: zone-end
- ::: zone pivot="container-linux"
- | Setting | Description |
- |-|-|
- | **Name** | Name of the mount configuration. Spaces are not allowed. |
- | **Configuration options** | Select **Basic** if the storage account is not using [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) or [private endpoints](../storage/common/storage-private-endpoints.md). Otherwise, select **Advanced**. |
- | **Storage accounts** | Azure Storage account. |
- | **Storage type** | Select the type based on the storage you want to mount. Azure Blobs only supports read-only access. |
- | **Storage container** or **Share name** | Files share or Blobs container to mount. |
- | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. |
- | **Mount path** | Directory inside the Linux container to mount to Azure Storage. Do not use `/` or `/home`.|
- | **Deployment slot setting** | When checked, the storage mount settings also apply to deployment slots.|
- ::: zone-end
-
-# [Azure CLI](#tab/cli)
-
-Use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az-webapp-config-storage-account-add) command. For example:
-
-```azurecli-interactive
-az webapp config storage-account add --resource-group <group-name> --name <app-name> --custom-id <custom-id> --storage-type AzureFiles --share-name <share-name> --account-name <storage-account-name> --access-key "<access-key>" --mount-path <mount-path-directory>
-```
--
-- `--storage-type` must be `AzureFiles` for Windows containers.
-- `mount-path-directory` must be in the form `/path/to/dir` or `[C-Z]:\path\to\dir`.
-- `--storage-type` can be `AzureBlob` or `AzureFiles`. `AzureBlob` is read-only.
-- `--mount-path` is the directory inside the Linux container to mount to Azure Storage. Do not use `/` (the root directory).
-
-Verify your storage is mounted by running the following command:
-
-```azurecli-interactive
-az webapp config storage-account list --resource-group <resource-group> --name <app-name>
-```
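As an illustrative sketch (not part of the Azure CLI itself), the parameter constraints listed in this article can be checked locally before calling `az webapp config storage-account add`. The `platform` label below is a hypothetical stand-in for the app type, not a real CLI parameter:

```python
def validate_mount(storage_type: str, mount_path: str, platform: str) -> list:
    """Return a list of problems with a proposed storage mount.

    The rules mirror the limitations described above:
    - Windows containers support only AzureFiles.
    - Linux containers must not mount to '/' or '/home'.
    - Mounting under /tmp may cause timeouts during app startup.
    """
    problems = []
    if storage_type not in ("AzureFiles", "AzureBlob"):
        problems.append("--storage-type must be AzureFiles or AzureBlob")
    if platform == "windows-container" and storage_type != "AzureFiles":
        problems.append("--storage-type must be AzureFiles for Windows containers")
    if platform == "linux-container":
        if mount_path in ("/", "/home"):
            problems.append("mapping '/' or '/home' is not supported")
        if mount_path == "/tmp" or mount_path.startswith("/tmp/"):
            problems.append("mounting under /tmp may cause timeouts during app startup")
    return problems
```

For example, `validate_mount("AzureFiles", "/path/to/dir", "windows-container")` returns an empty list, while an `AzureBlob` mount on a Windows container is flagged.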
---
-> [!NOTE]
-> Adding, editing, or deleting a storage mount causes the app to be restarted.
--
-## Test the mounted storage
-
-To validate that the Azure Storage is mounted successfully for the app:
-
-1. [Open an SSH session](configure-linux-open-ssh-session.md) into the container.
-1. In the SSH terminal, execute the following command:
-
- ```bash
-    df -h
- ```
-1. Check if the storage share is mounted. If it's not present, there's an issue with mounting the storage share.
-1. Check latency or general reachability of the storage mount with the following command:
-
- ```bash
- tcpping Storageaccount.file.core.windows.net
- ```
--
-## Best practices
---
-- Azure Storage mounts can be configured as a virtual directory to serve static content. To configure the virtual directory, in the left navigation click **Configuration** > **Path Mappings** > **New Virtual Application or Directory**. Set the **Physical path** to the **Mount path** defined on the Azure Storage mount.
-
-- To avoid potential issues related to latency, place the app and the Azure Storage account in the same Azure region. Note, however, that if the app and Azure Storage account are in the same Azure region, and you grant access from App Service IP addresses in the [Azure Storage firewall configuration](../storage/common/storage-network-security.md), then these IP restrictions are not honored.
-
-- In the Azure Storage account, avoid [regenerating the access key](../storage/common/storage-account-keys-manage.md) that's used to mount the storage in the app. The storage account contains two different keys. Azure App Service stores the Azure Storage account key. Use a stepwise approach to ensure that the storage mount remains available to the app during key regeneration. For example, assuming that you used **key1** to configure the storage mount in your app:
- 1. Regenerate **key2**.
- 1. In the storage mount configuration, update the access key to use the regenerated **key2**.
- 1. Regenerate **key1**.
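The stepwise rotation above can be sketched as pseudocode. The `StorageAccount` class and the mount dictionary here are hypothetical stand-ins, purely for illustrating the ordering of the steps:

```python
class StorageAccount:
    """Toy stand-in for a storage account with two access keys."""

    def __init__(self):
        self.keys = {"key1": "key1-v1", "key2": "key2-v1"}

    def regenerate(self, name):
        """Replace a key with a new value and return it."""
        self.keys[name] = f"{name}-v2"
        return self.keys[name]


def rotate_mount_key(storage, mount):
    """Rotate both keys without breaking a mount that currently uses key1."""
    mount["access_key"] = storage.regenerate("key2")  # steps 1 and 2
    storage.regenerate("key1")                        # step 3
    return mount
```

The mount is never pointed at a key that is about to be regenerated, so it stays available throughout the rotation.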
--
-- If you delete an Azure Storage account, container, or share, remove the corresponding storage mount configuration in the app to avoid possible error scenarios.
-
-- The mounted Azure Storage account can be either Standard or Premium performance tier. Based on the app capacity and throughput requirements, choose the appropriate performance tier for the storage account. See the [scalability and performance targets for Files](../storage/files/storage-files-scale-targets.md).
-
-- If your app [scales to multiple instances](../azure-monitor/autoscale/autoscale-get-started.md), all the instances connect to the same mounted Azure Storage account. To avoid performance bottlenecks and throughput issues, choose the appropriate performance tier for the storage account.
-
-- It's not recommended to use storage mounts for local databases (such as SQLite) or for any other applications and components that rely on file handles and locks.
-
-- If you [initiate a storage failover](../storage/common/storage-initiate-account-failover.md) and the storage account is mounted to the app, the mount will fail to connect until you either restart the app or remove and add the Azure Storage mount.
-
-- When VNET integration is used, ensure app setting, `WEBSITE_CONTENTOVERVNET` is set to `1` and the following ports are open:
- - Azure Files: 80 and 445
--
-- The mounted Azure Storage account can be either Standard or Premium performance tier. Based on the app capacity and throughput requirements, choose the appropriate performance tier for the storage account. See the [scalability and performance targets for Files](../storage/files/storage-files-scale-targets.md).
-
-- To avoid potential issues related to latency, place the app and the Azure Storage account in the same Azure region. Note, however, that if the app and Azure Storage account are in the same Azure region, and you grant access from App Service IP addresses in the [Azure Storage firewall configuration](../storage/common/storage-network-security.md), then these IP restrictions are not honored.
-
-- In the Azure Storage account, avoid [regenerating the access key](../storage/common/storage-account-keys-manage.md) that's used to mount the storage in the app. The storage account contains two different keys. Azure App Service stores the Azure Storage account key. Use a stepwise approach to ensure that the storage mount remains available to the app during key regeneration. For example, assuming that you used **key1** to configure the storage mount in your app:
- 1. Regenerate **key2**.
- 1. In the storage mount configuration, update the access key to use the regenerated **key2**.
- 1. Regenerate **key1**.
--
-- If you delete an Azure Storage account, container, or share, remove the corresponding storage mount configuration in the app to avoid possible error scenarios.
-
-- The mounted Azure Storage account can be either Standard or Premium performance tier. Based on the app capacity and throughput requirements, choose the appropriate performance tier for the storage account. See the [scalability and performance targets for Files](../storage/files/storage-files-scale-targets.md).
-
-- If your app [scales to multiple instances](../azure-monitor/autoscale/autoscale-get-started.md), all the instances connect to the same mounted Azure Storage account. To avoid performance bottlenecks and throughput issues, choose the appropriate performance tier for the storage account.
-
-- It's not recommended to use storage mounts for local databases (such as SQLite) or for any other applications and components that rely on file handles and locks.
-
-- If you [initiate a storage failover](../storage/common/storage-initiate-account-failover.md) and the storage account is mounted to the app, the mount will fail to connect until you either restart the app or remove and add the Azure Storage mount.
----
-- To avoid potential issues related to latency, place the app and the Azure Storage account in the same Azure region. Note, however, that if the app and Azure Storage account are in the same Azure region, and you grant access from App Service IP addresses in the [Azure Storage firewall configuration](../storage/common/storage-network-security.md), then these IP restrictions are not honored.
-
-- The mount directory in the custom container should be empty. Any content stored at this path is deleted when the Azure Storage is mounted (if you specify a directory under `/home`, for example). If you are migrating files for an existing app, make a backup of the app and its content before you begin.
-
-- Mounting the storage to `/home` is not recommended because it may result in performance bottlenecks for the app.
-
-- In the Azure Storage account, avoid [regenerating the access key](../storage/common/storage-account-keys-manage.md) that's used to mount the storage in the app. The storage account contains two different keys. Azure App Service stores the Azure Storage account key. Use a stepwise approach to ensure that the storage mount remains available to the app during key regeneration. For example, assuming that you used **key1** to configure the storage mount in your app:
-
- 1. Regenerate **key2**.
- 1. In the storage mount configuration, update the access key to use the regenerated **key2**.
- 1. Regenerate **key1**.
--
-- If you delete an Azure Storage account, container, or share, remove the corresponding storage mount configuration in the app to avoid possible error scenarios.
-
-- The mounted Azure Storage account can be either Standard or Premium performance tier. Based on the app capacity and throughput requirements, choose the appropriate performance tier for the storage account. See the scalability and performance targets that correspond to the storage type:
- - [For Files](../storage/files/storage-files-scale-targets.md)
- - [For Blobs](../storage/blobs/scalability-targets.md)
--
-- If your app [scales to multiple instances](../azure-monitor/autoscale/autoscale-get-started.md), all the instances connect to the same mounted Azure Storage account. To avoid performance bottlenecks and throughput issues, choose the appropriate performance tier for the storage account.
-
-- It's not recommended to use storage mounts for local databases (such as SQLite) or for any other applications and components that rely on file handles and locks.
-
-- If you [initiate a storage failover](../storage/common/storage-initiate-account-failover.md) and the storage account is mounted to the app, the mount will fail to connect until you either restart the app or remove and add the Azure Storage mount.
---
-## Next steps
---
-- [Migrate .NET apps to Azure App Service](app-service-asp-net-migration.md).
----
-- [Migrate custom software to Azure App Service using a custom container](tutorial-custom-container.md?pivots=container-windows).
----
-- [Configure a custom container](configure-custom-container.md?pivots=platform-linux).
-- [Video: How to mount Azure Storage as a local share](https://www.youtube.com/watch?v=OJkvpWYr57Y).
- ::: zone-end
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
application-gateway Quickstart Create Application Gateway For Containers Byo Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-create-application-gateway-for-containers-byo-deployment.md
+ Last updated 07/24/2023
application-gateway Quickstart Create Application Gateway For Containers Managed By Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md
+ Last updated 07/24/2023
application-gateway Quickstart Deploy Application Gateway For Containers Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md
+ Last updated 07/25/2023
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
automation Change Tracking Data Collection Rule Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/change-tracking-data-collection-rule-creation.md
This script helps you to create a data collection rule in Change tracking and in
Save the above script on your machine as *CtDcrCreation.json*. For more information, see [Enable Change Tracking and Inventory using Azure Monitoring Agent (Preview)](enable-vms-monitoring-agent.md#enable-change-tracking-at-scale-using-azure-monitoring-agent).
+> [!NOTE]
+> A reference JSON script to configure Windows file settings:
+> ```json
+> "fileSettings": {
+> "fileCollectionFrequency": 2700,
+> "fileinfo": [
+> {
+> "name": "ChangeTrackingCustomPath_witems1",
+> "enabled": true,
+> "description": "",
+> "path": "D:\\testing\\*",
+> "recurse": true,
+> "maxContentsReturnable": 5000000,
+> "maxOutputSize": 500000,
+> "checksum": "sha256",
+> "pathType": "File",
+> "groupTag": "Custom"
+> },
+> {
+> "name": "ChangeTrackingCustomPath_witems2",
+> "enabled": true,
+> "description": "",
+> "path": "E:\\test1",
+> "recurse": false,
+> "maxContentsReturnable": 5000000,
+> "maxOutputSize": 500000,
+> "checksum": "sha256",
+> "pathType": "File",
+> "groupTag": "Custom"
+> }
+> ]
+> }
+>```
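As a quick sanity check before pasting a fragment like the one above into *CtDcrCreation.json*, its shape can be validated with a short script. This is an illustrative sketch, not an official tool; the "required" field set is simply the set of fields each `fileinfo` entry in the sample carries:

```python
import json

# Fields each "fileinfo" entry in the sample above carries; treated
# as required for the purpose of this sketch.
REQUIRED_ITEM_FIELDS = {
    "name", "enabled", "path", "recurse", "maxContentsReturnable",
    "maxOutputSize", "checksum", "pathType", "groupTag",
}

def validate_file_settings(fragment: dict) -> list:
    """Return a list of problems found in a fileSettings fragment."""
    problems = []
    if not isinstance(fragment.get("fileCollectionFrequency"), int):
        problems.append("fileCollectionFrequency must be an integer (seconds)")
    for i, item in enumerate(fragment.get("fileinfo", [])):
        missing = REQUIRED_ITEM_FIELDS - item.keys()
        if missing:
            problems.append(f"fileinfo[{i}] is missing fields: {sorted(missing)}")
    return problems

# The first entry from the sample fragment above (backslashes escaped for JSON).
sample = json.loads("""
{
  "fileCollectionFrequency": 2700,
  "fileinfo": [
    {
      "name": "ChangeTrackingCustomPath_witems1",
      "enabled": true,
      "description": "",
      "path": "D:\\\\testing\\\\*",
      "recurse": true,
      "maxContentsReturnable": 5000000,
      "maxOutputSize": 500000,
      "checksum": "sha256",
      "pathType": "File",
      "groupTag": "Custom"
    }
  ]
}
""")
```

A well-formed fragment yields an empty problem list; a fragment with a string frequency or an incomplete `fileinfo` entry is flagged.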
+ ## Next steps
+
+[Learn more](manage-change-tracking-monitoring-agent.md) about managing change tracking and inventory using Azure Monitoring Agent (Preview).
automation Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md
Title: Migrate from a Run As account to Managed identities
description: This article describes how to migrate from a Run As account to managed identities in Azure Automation. Previously updated : 06/06/2023 Last updated : 08/04/2023
The following steps include an example to show how a graphical runbook that uses
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Open the Automation account, and then select **Process Automation** > **Runbooks**.
-1. Select a runbook. For example, select the **Start Azure V2 VMs** runbook from the list, and then select **Edit**. Or go to **Browse Gallery** and select **Start Azure V2 VMs**.
+1. Select a runbook. For example, select the **Start Azure V2 VMs** runbook from the list, and then select **Edit** or go to **Browse Gallery** and select **Start Azure V2 VMs**.
:::image type="content" source="./media/migrate-run-as-account-managed-identity/edit-graphical-runbook-inline.png" alt-text="Screenshot of editing a graphical runbook." lightbox="./media/migrate-run-as-account-managed-identity/edit-graphical-runbook-expanded.png"::: 1. Replace the Run As connection that uses `AzureRunAsConnection` and the connection asset that internally uses the PowerShell `Get-AutomationConnection` cmdlet with the `Connect-AzAccount` cmdlet.
-1. Add identity support for use in the runbook by adding a new code activity as mentioned in the following step, which leverages the `Connect-AzAccount` cmdlet to connect to the Managed Identity.
+1. Select **Delete** to delete the `Get Run As Connection` and `Connect to Azure` activities.
+
+ :::image type="content" source="./media/migrate-run-as-account-managed-identity/connect-azure-graphical-runbook-inline.png" alt-text="Screenshot to connect to the Azure activities." lightbox="./media/migrate-run-as-account-managed-identity/connect-azure-graphical-runbook-expanded.png":::
+
+
+1. In the left panel, under **RUNBOOK CONTROL**, select **Code** and then select **Add to canvas**.
- :::image type="content" source="./media/migrate-run-as-account-managed-identity/add-functionality-inline.png" alt-text="Screenshot of adding functionality to a graphical runbook." lightbox="./media/migrate-run-as-account-managed-identity/add-functionality-expanded.png":::
+ :::image type="content" source="./media/migrate-run-as-account-managed-identity/add-canvas-graphical-runbook-inline.png" alt-text="Screenshot to select code and add it to the canvas." lightbox="./media/migrate-run-as-account-managed-identity/add-canvas-graphical-runbook-expanded.png":::
-1. Select **Code**, and then enter the following code to pass the identity:
+1. Edit the code activity, assign any appropriate label name, and select **Author activity logic**.
- ```powershell-interactive
+ :::image type="content" source="./media/migrate-run-as-account-managed-identity/author-activity-log-graphical-runbook-inline.png" alt-text="Screenshot to edit code activity." lightbox="./media/migrate-run-as-account-managed-identity/author-activity-log-graphical-runbook-expanded.png":::
+
+1. In the **Code Editor** page, enter the following PowerShell code and select **OK**.
+
+ ```powershell-interactive
try { Write-Output ("Logging in to Azure...")
The following steps include an example to show how a graphical runbook that uses
throw $_.Exception } ```
+
+1. Connect the new activity to the activities that were connected by **Connect to Azure** earlier and save the runbook.
-For example, in the runbook **Start Azure V2 VMs** in the runbook gallery, you must replace the `Get Run As Connection` and `Connect to Azure` activities with the `Connect-AzAccount` cmdlet activity.
+ :::image type="content" source="./media/migrate-run-as-account-managed-identity/connect-activities-graphical-runbook-inline.png" alt-text="Screenshot to connect new activity to activities." lightbox="./media/migrate-run-as-account-managed-identity/connect-activities-graphical-runbook-expanded.png":::
+For example, in the runbook **Start Azure V2 VMs** in the runbook gallery, you must replace the `Get Run As Connection` and `Connect to Azure` activities with the code activity that uses the `Connect-AzAccount` cmdlet, as described above.
For more information, see the sample runbook name **AzureAutomationTutorialWithIdentityGraphical** that's created with the Automation account. > [!NOTE]
-> AzureRM PowerShell modules are retiring on 29 February 2024. If you are using AzureRM PowerShell modules in Graphical runbooks, you must upgrade them to use Az PowerShell modules. [Learn more](/powershell/azure/migrate-from-azurerm-to-az?view=azps-9.4.0&preserve-view=true).
+> AzureRM PowerShell modules are retiring on **29 February 2024**. If you are using AzureRM PowerShell modules in Graphical runbooks, you must upgrade them to use Az PowerShell modules. [Learn more](/powershell/azure/migrate-from-azurerm-to-az?view=azps-9.4.0&preserve-view=true).
## Next steps
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
azure-app-configuration Concept Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-snapshots.md
+
+ Title: Snapshots in Azure App Configuration (preview)
+description: Details of Snapshots in Azure App Configuration
++++ Last updated : 05/16/2023++
+# Snapshots (preview)
+
+A snapshot is a named, immutable subset of an App Configuration store's key-values. The key-values that make up a snapshot are chosen at creation time by using key and label filters. Once a snapshot is created, the key-values within it are guaranteed to remain unchanged.
+
+## Deploy safely with snapshots
+
+Snapshots are designed to safely deploy configuration changes. Deploying faulty configuration changes into a running environment can cause issues such as service disruption and data loss. In order to avoid such issues, it's important to be able to vet configuration changes before moving into production environments. If such an issue does occur, it's important to be able to roll back any faulty configuration changes in order to restore service. Snapshots are created for managing these scenarios.
+
+Configuration changes should be deployed in a controlled, consistent way. Developers can use snapshots to perform controlled rollout. The only change needed in an application to begin a controlled rollout is to update the name of the snapshot the application is referencing. As the application moves into production, there's a guarantee that the configuration in the referenced snapshot remains unchanged. This guarantee against any change in a snapshot protects against unexpected settings making their way into production. The immutability and ease-of-reference of snapshots make it simple to ensure that the right set of configuration changes are rolled out safely.
+
+## Scenarios for using snapshots
+
+* **Controlled rollout**: Snapshots are well suited for supporting controlled rollout due to their immutable nature. When developers utilize snapshots for configuration, they can be confident that the configuration remains unchanged as the release progresses through different phases of the rollout.
+
+* **Last Known Good (LKG) configuration**: Snapshots can be used to support safe deployment practices for configuration. With snapshots, developers can ensure that a Last Known Good (LKG) configuration is available for rollback if there's any issue during deployment.
+
+* **Configuration versioning**: Snapshots can be used to create a version history of configuration settings to sync with release versions. Settings captured in each snapshot can be compared to identify changes between versions.
+
+* **Auditing**: Snapshots can be used for auditing and compliance purposes. Developers can maintain a record of configuration changes in between releases by using the snapshots for the releases.
+
+* **Testing and Staging environments**: Snapshots can be used to create consistent testing and staging environments. Developers can ensure that the same configuration is used across different environments, by using the same snapshot, which can help with debugging and testing.
+
+* **Simplified Client Configuration composition**: Usually, clients of App Configuration need only a subset of the key-values from the App Configuration instance. Without snapshots, the query logic to retrieve that subset must be written in code. Because snapshots support filters at creation time, clients can simply refer to the set of key-values they require by name, which simplifies client composition.
+
+## Snapshot operations
+
+As snapshots are immutable entities, they can only be created and archived; deleting, purging, or editing is not possible.
+
+* **Create snapshot**: Snapshots can be created by defining the key and label filters to capture the required key-values from App Configuration instance. The filtered key-values are stored as a snapshot with the name provided during creation.
+
+* **Archive snapshot**: Archiving a snapshot puts it in an archived state. While a snapshot is archived, it's still fully functional. When the snapshot is archived, an expiration time is set based on the retention period configured during the snapshot's creation. If the snapshot remains in the archived state up until the expiration time, then it automatically disappears from the system when the expiration time passes. Archival is used for phasing out snapshots that are no longer in use.
+
+* **Recover snapshot**: Recovering a snapshot puts it back in an active state. At this point, the snapshot is no longer subject to expiration based on its configured retention period. Recovery is only possible in the retention period after archival.
+
+> [!NOTE]
+> The retention period can only be set during the creation of a snapshot. The default value for retention period is 30 days for Standard stores and 7 days for Free stores.
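The archive and expiration behavior above can be sketched as a small helper. This is illustrative, not service code; the default retention values come from the note above:

```python
from datetime import datetime, timedelta, timezone

# Default retention periods from the note above, in days.
DEFAULT_RETENTION_DAYS = {"Standard": 30, "Free": 7}

def expiration_time(archived_at, tier="Standard", retention_days=None):
    """When an archived snapshot disappears if it isn't recovered first.

    Recovery is only possible before this time; after it, the snapshot
    is automatically removed from the system.
    """
    days = retention_days if retention_days is not None else DEFAULT_RETENTION_DAYS[tier]
    return archived_at + timedelta(days=days)
```

For example, a snapshot in a Standard store archived on 2023-08-01 expires on 2023-08-31 unless it's recovered first.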
+
+## Requirements for snapshot operations
+
+The following sections detail the permissions required to perform snapshot related operations with Azure AD and HMAC authentication.
+
+### Create a snapshot
+
+To create a snapshot in stores using Azure Active Directory (Azure AD) authentication, the following permissions are required. The App Configuration Data Owner role already has these permissions.
+- `Microsoft.AppConfiguration/configurationStores/keyvalues/read`
+- `Microsoft.AppConfiguration/configurationStores/snapshots/write`
+
+To create a snapshot using HMAC authentication, a read-write access key must be used.
+
+### Archive and recover a snapshot
+
+To archive and/or recover a snapshot using Azure AD authentication, the following permission is needed. The App Configuration Data Owner role already has this permission.
+- `Microsoft.AppConfiguration/configurationStores/snapshots/archive/action`
+
+To archive and/or recover a snapshot using HMAC authentication, a read-write access key must be used.
+
+### Read and list snapshots
+
+To list all snapshots, or to get all the key-values in an individual snapshot by name, the following permission is needed for stores utilizing Azure AD authentication. The built-in Data Owner and Data Reader roles already have this permission.
+- `Microsoft.AppConfiguration/configurationStores/snapshots/read`
+
+For stores that use HMAC authentication, both the "read snapshot" operation (to read the key-values from a snapshot) and the "list snapshots" operation can be performed using either the read-write access keys or the read-only access keys.
+
+## Billing considerations and limits
+
+The storage quota for snapshots is detailed in the "storage per resource" section of the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/). There's no extra charge for snapshots until the included snapshot storage quota is exhausted.
+
+App Configuration has two tiers, Free and Standard. Check the following details for snapshot quotas in each tier.
+
+* **Free tier**: This tier has a snapshot storage quota of 10 MB. You can create any number of snapshots as long as the total storage size of all active and archived snapshots is less than 10 MB.
+
+* **Standard tier**: This tier has a snapshot storage quota of 1 GB. You can create any number of snapshots as long as the total storage size of all active and archived snapshots is less than 1 GB.
+
+The maximum size for a snapshot is 1 MB.
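Putting the limits above together (10 MB quota for Free, 1 GB for Standard, 1 MB per snapshot), a quota check might look like this sketch. The strict "less than" comparison follows the wording of the tier descriptions above:

```python
# Quotas from the tiers described above, in MB (1 GB treated as 1024 MB).
QUOTA_MB = {"Free": 10, "Standard": 1024}
MAX_SNAPSHOT_MB = 1

def can_create_snapshot(existing_total_mb, new_mb, tier="Standard"):
    """True if a new snapshot fits both the per-snapshot and tier limits."""
    if new_mb > MAX_SNAPSHOT_MB:
        return False  # a single snapshot may not exceed 1 MB
    return existing_total_mb + new_mb < QUOTA_MB[tier]
```

For example, a 1 MB snapshot fits in a Free store holding 8 MB of snapshots, but not in one already holding 9 MB.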
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create a snapshot](./howto-create-snapshots.md)
azure-app-configuration Howto Create Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-create-snapshots.md
+
+ Title: How to create snapshots (preview) in Azure App Configuration
+description: How to create and manage snapshots in an Azure App Configuration store.
++++ Last updated : 05/16/2023++
+# Use snapshots (preview)
+
+In this article, learn how to create and manage snapshots in Azure App Configuration. A snapshot is a set of App Configuration settings stored in an immutable state.
+
+## Prerequisites
+
+- An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
+- The "App Configuration Data Owner" role in the App Configuration store. See the details on the required [roles and permissions for snapshots](./concept-snapshots.md).
+
+### Add key-values to the App Configuration store
+
+In your App Configuration store, go to **Operations** > **Configuration explorer** and add the following key-values. Leave **Content Type** with its default value. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
+
+| Key | Value | Label |
+|-|-|-|
+| *app2/bgcolor* | *Light Gray* | *label2* |
+| *app1/color* | *Black* | No label |
+| *app1/color* | *Blue* | *label1* |
+| *app1/color* | *Green* | *label2* |
+| *app1/color* | *Yellow* | *label3* |
+| *app1/message* | *Hello* | *label1* |
+| *app1/message* | *Hi!* | *label2* |
+| *app2/message* | *Good morning!*| *label1* |
+
+## Create a snapshot
+
+ > [!IMPORTANT]
+ > You may see an error "You are not authorized to view this configuration store data" when you switch to the Snapshots blade in the Azure portal if you opt to use Azure AD authentication in the Configuration explorer or the Feature manager blades. This is a known issue in the Azure portal, and we are working on addressing it. It affects only accessing snapshots with Azure AD authentication in the Azure portal; no other scenarios are affected.
+
+As a temporary workaround, you can switch to using Access keys authentication from either the Configuration explorer or the Feature manager blades. You should then see the Snapshot blade displayed properly, assuming you have permission for the access keys.
+
+Under **Operations** > **Snapshots (preview)**, select **Create a new snapshot**.
+
+1. Enter a **snapshot name** and, optionally, add **Tags**.
+1. Under **Choose the composition type**, keep the default value **Key (default)**.
+    - With the *Key* composition type, if your store has identical keys with different labels, only the key-value selected by the last applicable filter is included in the snapshot. Identical key-values with other labels are left out of the snapshot.
+    - With the *Key-Label* composition type, if your store has identical keys with different labels, every key-value whose label matches one of the specified filters is included in the snapshot.
+1. Select **Add filters** to select the key-values for your snapshot. For both keys and labels, you can filter with **Equals**, **Starts with**, **Any of**, or **All**. You can enter between one and three filters.
+    1. Add the first filter:
+       - Under **Key**, select **Starts with** and enter *app1*.
+       - Under **Label**, select **Equals** and select *label2* from the drop-down menu.
+    1. Add the second filter:
+       - Under **Key**, select **Starts with** and enter *app1*.
+       - Under **Label**, select **Equals** and select *label1* from the drop-down menu.
+
+1. If you archive a snapshot, it's retained for 30 days after archival by default. Optionally, under **Recovery options**, decrease the number of days the snapshot remains available after it's archived.
+
+ > [!NOTE]
+ > The duration of the retention period can't be updated once the snapshot has been created.
+
+1. Select **Create** to generate the snapshot. In this example, the created snapshot has the **Key** composition type and the following filters:
+    - keys that start with *app1* and have the label *label2*
+    - keys that start with *app1* and have the label *label1*
+
+ :::image type="content" source="./media/howto-create-snapshots/create-snapshot.png" alt-text="Screenshot of the Create form with data filled as above steps and Create button highlighted.":::
+
+1. Check the table to understand which key-values from the configuration store end up in the snapshot, based on the provided parameters.
+
+    | Key | Value | Label | Included in snapshot |
+    |-|-|-|-|
+    | *app2/bgcolor* | *Light Gray* | *label2* | No: the key doesn't start with *app1*. |
+    | *app1/color* | *Black* | No label | No: the key-value doesn't have the label *label1* or *label2*. |
+    | *app1/color* | *Blue* | *label1* | Yes: the key-value is selected by the last applicable filter (*label1*). |
+    | *app1/color* | *Green* | *label2* | No: although it has the selected label *label2*, it's overridden by the same key with the label *label1*, which is selected by the second filter. |
+    | *app1/color* | *Yellow* | *label3* | No: the key-value doesn't have the label *label1* or *label2*. |
+    | *app1/message* | *Hello* | *label1* | Yes: the key-value is selected by the last applicable filter (*label1*). |
+    | *app1/message* | *Hi!* | *label2* | No: although it has the selected label *label2*, it's overridden by the same key with the label *label1*, which is selected by the second filter. |
+    | *app2/message* | *Good morning!* | *label1* | No: the key doesn't start with *app1*. |
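The *Key* composition rule in the table above can be simulated in a few lines of Python. This is only a toy sketch of the selection logic, not the App Configuration service or its SDK; the key-values and filters mirror the example in this article.

```python
# Toy simulation of the "Key" composition type: filters are applied in order,
# and for identical keys the last applicable filter wins (labels don't
# distinguish entries in the resulting snapshot).
key_values = [
    ("app2/bgcolor", "Light Gray", "label2"),
    ("app1/color", "Black", None),
    ("app1/color", "Blue", "label1"),
    ("app1/color", "Green", "label2"),
    ("app1/color", "Yellow", "label3"),
    ("app1/message", "Hello", "label1"),
    ("app1/message", "Hi!", "label2"),
    ("app2/message", "Good morning!", "label1"),
]

def snapshot_key_composition(key_values, filters):
    snapshot = {}
    for key_prefix, label in filters:              # filters applied in order
        for key, value, kv_label in key_values:
            if key.startswith(key_prefix) and kv_label == label:
                snapshot[key] = (value, kv_label)  # later filters override earlier ones
    return snapshot

# The example's filters: "starts with app1 + label2", then "starts with app1 + label1"
result = snapshot_key_composition(key_values, [("app1", "label2"), ("app1", "label1")])
print(result)
# {'app1/color': ('Blue', 'label1'), 'app1/message': ('Hello', 'label1')}
```

Running it reproduces the "Yes" rows of the table: *app1/color* resolves to *Blue* and *app1/message* to *Hello*, both taken from the last applicable filter.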
+
+## Create sample snapshots
+
+To create sample snapshots and see how the snapshots feature works, use the snapshot sandbox. The sandbox contains sample data you can experiment with to better understand how snapshot composition types and filters work.
+
+1. In **Operations** > **Snapshots (preview)** > **Active snapshots**, select **Test in sandbox**.
+1. Review the sample data and practice creating snapshots by filling out the form with a composition type and one or more filters.
+1. Select **Create** to generate the sample snapshot.
+1. Check the result under **Generated sample snapshot**. The sample snapshot displays all the keys it includes, based on your selections.
+
+## Manage active snapshots
+
+The page under **Operations** > **Snapshots (preview)** displays two tabs: **Active snapshots** and **Archived snapshots**. Select **Active snapshots** to view the list of all active snapshots in an App Configuration store.
+
+ :::image type="content" source="./media/howto-create-snapshots/snapshots-view-list.png" alt-text="Screenshot of the list of active snapshots.":::
+
+### View existing snapshot
+
+In the **Active snapshots** tab, select the ellipsis **...** on the right of an existing snapshot and select **View** to view a snapshot. This action opens a Snapshot details page that displays the snapshot's settings and the key-values included in the snapshot.
+
+ :::image type="content" source="./media/howto-create-snapshots/snapshot-details-view.png" alt-text="Screenshot of the detailed view of an active snapshot.":::
+
+### Archive a snapshot
+
+In the **Active snapshots** tab, select the ellipsis **...** on the right of an existing snapshot and select **Archive** to archive a snapshot. Confirm archival by selecting **Yes** or cancel with **No**. Once a snapshot has been archived, a notification appears to confirm the operation and the list of active snapshots is updated.
+
+ :::image type="content" source="./media/howto-create-snapshots/archive-snapshot.png" alt-text="Screenshot of the archive option in the active snapshots.":::
+
+## Manage archived snapshots
+
+Go to **Operations** > **Snapshots (preview)** > **Archived snapshots** to view the list of all archived snapshots in an App Configuration store. Archived snapshots remain accessible for the retention period that was selected during their creation.
+
+ :::image type="content" source="./media/howto-create-snapshots/archived-snapshots.png" alt-text="Screenshot of the list of archived snapshots.":::
+
+### View archived snapshot
+
+A detailed view is also available for archived snapshots. In the **Archived snapshots** tab, select the ellipsis **...** on the right of an existing snapshot and select **View** to view it. This action opens a Snapshot details page that displays the snapshot's settings and the key-values included in the snapshot.
+
+ :::image type="content" source="./media/howto-create-snapshots/archived-snapshots-details.png" alt-text="Screenshot of the detailed view of an archived snapshot.":::
+
+### Recover an archived snapshot
+
+In the **Archived snapshots** tab, select the ellipsis **...** on the right of an archived snapshot and select **Recover** to recover a snapshot. Confirm App Configuration snapshot recovery by selecting **Yes** or cancel with **No**. Once a snapshot has been recovered, a notification appears to confirm the operation and the list of archived snapshots is updated.
+
+ :::image type="content" source="./media/howto-create-snapshots/recover-snapshots.png" alt-text="Screenshot of the recover option in the archived snapshots.":::
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Snapshots in Azure App Configuration](./concept-snapshots.md)
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis fro
- If you use a private Marketplace, it must contain the Redis Inc. Enterprise offer. > [!IMPORTANT]
-> Azure Cache for Redis Enterprise requires standard network Load Balancers that are charged separately from cache instances themselves. For more information, see [Load Balancer pricing](https://azure.microsoft.com/pricing/details/load-balancer/).
+> Azure Cache for Redis Enterprise requires standard network Load Balancers that are charged separately from cache instances themselves. Currently, these charges are absorbed by Azure Cache for Redis and not passed on to customers. This may change in the future. For more information, see [Load Balancer pricing](https://azure.microsoft.com/pricing/details/load-balancer/).
>
-> If an Enterprise cache is configured for multiple Availability Zones, data transfer is billed at the [standard network bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/)
-> starting from July 1, 2022.
+> If an Enterprise cache is configured for multiple Availability Zones, data transfer charges are absorbed by Azure Cache for Redis and not passed on to customers. This may change in the future, in which case data transfer would be billed at the [standard network bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
> > In addition, data persistence adds Managed Disks. The use of these resources is free during the public preview of Enterprise data persistence. This might change when the feature becomes generally available.
azure-cache-for-redis Cache Redis Cache Bicep Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-cache-bicep-provision.md
Title: Deploy Azure Cache for Redis using Bicep description: Learn how to use Bicep to deploy an Azure Cache for Redis resource.--++
azure-cache-for-redis Cache Web App Arm With Redis Cache Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-arm-with-redis-cache-provision.md
Title: Provision Web App with Azure Cache for Redis
description: Use Azure Resource Manager template to deploy web app with Azure Cache for Redis. --+ Last updated 01/06/2017 + # Create a Web App plus Azure Cache for Redis using a template [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
azure-cache-for-redis Cache Web App Bicep With Redis Cache Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-bicep-with-redis-cache-provision.md
Title: Provision Web App that uses Azure Cache for Redis using Bicep description: Use Bicep to deploy web app with Azure Cache for Redis.--++ Last updated 05/24/2022-+ + # Create a Web App plus Azure Cache for Redis using Bicep In this article, you use Bicep to deploy an Azure Web App that uses Azure Cache for Redis, as well as an App Service plan.
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
Last updated 02/13/2023
+zone_pivot_groups: df-languages
#Customer intent: As a < type of user >, I want < what? > so that < why? >.
Durable Functions is designed to work with all Azure Functions programming langu
| PowerShell | Functions 3.0+ | PowerShell 7+ | 2.x bundles | | Java | Functions 4.0+ | Java 8+ | 4.x bundles | > [!NOTE]
-> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, this new experience is designed to be more flexible and intuitive for Node developers. Learn more about the differences between the models in the [Node.js upgrade guide](../functions-node-upgrade-v4.md).
->
-> In the following code snippets, Python (PM2) denotes programming model V2, and JavaScript (PM4) denotes programming model V4, the new experiences.
+> The new programming model for authoring Functions in Node.js (V4) is currently in preview. Compared to the current model, the new experience is designed to be more flexible and intuitive for JavaScript/TypeScript developers. Learn more about the differences between the models in the [Node.js upgrade guide](../functions-node-upgrade-v4.md).
++ Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md).
You can use Durable Functions to implement the function chaining pattern concise
In this example, the values `F1`, `F2`, `F3`, and `F4` are the names of other functions in the same function app. You can implement control flow by using normal imperative coding constructs. Code executes from the top down. The code can involve existing language control flow semantics, like conditionals and loops. You can include error handling logic in `try`/`catch`/`finally` blocks.
-# [C# (InProc)](#tab/csharp-inproc)
+
+# [C# (InProc)](#tab/in-process)
```csharp [FunctionName("Chaining")]
public static async Task<object> Run(
You can use the `context` parameter to invoke other functions by name, pass parameters, and return function output. Each time the code calls `await`, the Durable Functions framework checkpoints the progress of the current function instance. If the process or virtual machine recycles midway through the execution, the function instance resumes from the preceding `await` call. For more information, see the next section, Pattern #2: Fan out/fan in.
-# [C# (Isolated)](#tab/csharp-isolated)
+# [C# (Isolated)](#tab/isolated-process)
```csharp [Function("Chaining")]
public static async Task<object> Run(
You can use the `context` parameter to invoke other functions by name, pass parameters, and return function output. Each time the code calls `await`, the Durable Functions framework checkpoints the progress of the current function instance. If the process or virtual machine recycles midway through the execution, the function instance resumes from the preceding `await` call. For more information, see the next section, Pattern #2: Fan out/fan in.
-# [JavaScript (PM3)](#tab/javascript-v3)
+
+# [V3 model](#tab/v3-model)
```javascript const df = require("durable-functions");
You can use the `context.df` object to invoke other functions by name, pass para
> [!NOTE] > The `context` object in JavaScript represents the entire [function context](../functions-reference-node.md#context-object). Access the Durable Functions context using the `df` property on the main context.
-# [JavaScript (PM4)](#tab/javascript-v4)
+# [V4 model](#tab/v4-model)
```javascript const df = require("durable-functions");
You can use the `context.df` object to invoke other functions by name, pass para
> [!NOTE] > The `context` object in JavaScript represents the entire [function context](../functions-reference-node.md#context-object). Access the Durable Functions context using the `df` property on the main context.
-# [Python](#tab/python)
+
+# [Python](#tab/v1-model)
```python import azure.functions as func
You can use the `context` object to invoke other functions by name, pass paramet
> [!NOTE] > The `context` object in Python represents the orchestration context. Access the main Azure Functions context using the `function_context` property on the orchestration context.
-# [Python (PM2)](#tab/python-v2)
+# [Python (V2 model)](#tab/v2-model)
```python import azure.functions as func
You can use the `context` object to invoke other functions by name, pass paramet
> [!NOTE] > The `context` object in Python represents the orchestration context. Access the main Azure Functions context using the `function_context` property on the orchestration context.
-# [PowerShell](#tab/powershell)
```PowerShell param($Context)
Invoke-DurableActivity -FunctionName 'F4' -Input $Z
You can use the `Invoke-DurableActivity` command to invoke other functions by name, pass parameters, and return function output. Each time the code calls `Invoke-DurableActivity` without the `NoWait` switch, the Durable Functions framework checkpoints the progress of the current function instance. If the process or virtual machine recycles midway through the execution, the function instance resumes from the preceding `Invoke-DurableActivity` call. For more information, see the next section, Pattern #2: Fan out/fan in.
-# [Java](#tab/java)
```java @FunctionName("Chaining")
public double functionChaining(
You can use the `ctx` object to invoke other functions by name, pass parameters, and return function output. The output of these method calls is a `Task<V>` object where `V` is the type of data returned by the invoked function. Each time you call `Task<V>.await()`, the Durable Functions framework checkpoints the progress of the current function instance. If the process unexpectedly recycles midway through the execution, the function instance resumes from the preceding `Task<V>.await()` call. For more information, see the next section, Pattern #2: Fan out/fan in. - ### <a name="fan-in-out"></a>Pattern #2: Fan out/fan in
With normal functions, you can fan out by having the function send multiple mess
The Durable Functions extension handles this pattern with relatively simple code:
-# [C# (InProc)](#tab/csharp-inproc)
+
+# [C# (InProc)](#tab/in-process)
```csharp [FunctionName("FanOutFanIn")]
The fan-out work is distributed to multiple instances of the `F2` function. The
The automatic checkpointing that happens at the `await` call on `Task.WhenAll` ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
-# [C# (Isolated)](#tab/csharp-isolated)
+# [C# (Isolated)](#tab/isolated-process)
```csharp [Function("FanOutFanIn")]
The fan-out work is distributed to multiple instances of the `F2` function. The
The automatic checkpointing that happens at the `await` call on `Task.WhenAll` ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
-# [JavaScript (PM3)](#tab/javascript-v3)
+
+# [V3 model](#tab/v3-model)
```javascript const df = require("durable-functions");
The fan-out work is distributed to multiple instances of the `F2` function. The
The automatic checkpointing that happens at the `yield` call on `context.df.Task.all` ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
-# [JavaScript (PM4)](#tab/javascript-v4)
+# [V4 model](#tab/v4-model)
```javascript const df = require("durable-functions");
The fan-out work is distributed to multiple instances of the `F2` function. The
The automatic checkpointing that happens at the `yield` call on `context.df.Task.all` ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
-# [Python](#tab/python)
+# [Python](#tab/v1-model)
```python import azure.durable_functions as df
The fan-out work is distributed to multiple instances of the `F2` function. The
The automatic checkpointing that happens at the `yield` call on `context.task_all` ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
-# [Python (PM2)](#tab/python-v2)
+# [Python (V2 model)](#tab/v2-model)
```python import azure.functions as func
The fan-out work is distributed to multiple instances of the `F2` function. The
The automatic checkpointing that happens at the `yield` call on `context.task_all` ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
-# [PowerShell](#tab/powershell)
```PowerShell param($Context)
The fan-out work is distributed to multiple instances of the `F2` function. Plea
The automatic checkpointing that happens at the `Wait-ActivityFunction` call ensures that a potential midway crash or reboot doesn't require restarting an already completed task.
-# [Java](#tab/java)
```java @FunctionName("FanOutFanIn")
The fan-out work is distributed to multiple instances of the `F2` function. The
The automatic checkpointing that happens at the `.await()` call on `ctx.allOf(parallelTasks)` ensures that an unexpected process recycle doesn't require restarting any already completed tasks. - > [!NOTE] > In rare circumstances, it's possible that a crash could happen in the window after an activity function completes but before its completion is saved into the orchestration history. If this happens, the activity function would re-run from the beginning after the process recovers.
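The checkpoint-and-replay behavior described above can be illustrated with a small Python toy. This is not the Durable Functions runtime; the "history" here is just a dict, and the point is only that results recorded for completed activities are replayed rather than re-executed after a restart.

```python
# Toy illustration of replay-based checkpointing: completed activity results
# are read back from history on replay, so finished work never re-runs.
history = {}          # activity call index -> recorded result (the "checkpoint")
calls_executed = []   # tracks real executions across simulated "restarts"

def call_activity(idx, fn, arg):
    if idx in history:
        return history[idx]          # replayed from history, no re-run
    result = fn(arg)
    calls_executed.append(idx)
    history[idx] = result            # checkpoint the result
    return result

def orchestrator():
    x = call_activity(0, str.upper, "f1")
    y = call_activity(1, str.upper, x.lower() + "-f2")
    return y

first = orchestrator()     # both activities actually execute
second = orchestrator()    # simulated restart: replays entirely from history
assert first == second == "F1-F2"
assert calls_executed == [0, 1]      # each activity ran exactly once
```

The real extension persists this history durably (for example, in Azure Storage) and replays the orchestrator code deterministically, which is why orchestrator functions must avoid nondeterministic calls.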
In a few lines of code, you can use Durable Functions to create multiple monitor
The following code implements a basic monitor:
-# [C# (InProc)](#tab/csharp-inproc)
+
+# [C# (InProc)](#tab/in-process)
```csharp [FunctionName("MonitorJobStatus")]
public static async Task Run(
} ```
-# [C# (Isolated)](#tab/csharp-isolated)
+# [C# (Isolated)](#tab/isolated-process)
```csharp [Function("MonitorJobStatus")]
public static async Task Run(
} ```
-# [JavaScript (PM3)](#tab/javascript-v3)
+
+# [V3 model](#tab/v3-model)
```javascript const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
}); ```
-# [JavaScript (PM4)](#tab/javascript-v4)
+# [V4 model](#tab/v4-model)
```javascript const df = require("durable-functions");
df.app.orchestration("monitorDemo", function* (context) {
}); ```
-# [Python](#tab/python)
+
+# [Python](#tab/v1-model)
```python import azure.durable_functions as df
def orchestrator_function(context: df.DurableOrchestrationContext):
main = df.Orchestrator.create(orchestrator_function) ```
-# [Python (PM2)](#tab/python-v2)
+# [Python (V2 model)](#tab/v2-model)
```python import json
def orchestrator_function(context: df.DurableOrchestrationContext):
# Perform more work here, or let the orchestration end. ```
-# [PowerShell](#tab/powershell)
```powershell param($Context)
while ($Context.CurrentUtcDateTime -lt $expiryTime) {
$output ```
-# [Java](#tab/java)
```java @FunctionName("Monitor")
public String monitorOrchestrator(
} ``` -+ When a request is received, a new orchestration instance is created for that job ID. The instance polls a status until either a condition is met or until a timeout expires. A durable timer controls the polling interval. Then, more work can be performed, or the orchestration can end.
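As a rough illustration, the poll-until-done-or-timeout loop of the monitor pattern can be sketched in plain Python. This is not the Durable runtime: `get_job_status` is a hypothetical stand-in for an activity function, and `time.sleep` stands in for the durable timer.

```python
# Toy polling monitor: poll a job's status until it completes or a deadline
# passes. In Durable Functions this loop lives in an orchestrator and the
# sleep is a durable timer, so no compute is consumed while waiting.
import time

def monitor(get_job_status, poll_interval=0.01, timeout=1.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_job_status() == "Completed":
            return "job finished"        # condition met: do more work or end
        time.sleep(poll_interval)        # durable-timer equivalent
    return "timed out"

statuses = iter(["Running", "Running", "Completed"])
print(monitor(lambda: next(statuses)))   # → job finished
```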
You can implement the pattern in this example by using an orchestrator function.
These examples create an approval process to demonstrate the human interaction pattern:
-# [C# (InProc)](#tab/csharp-inproc)
+
+# [C# (InProc)](#tab/in-process)
```csharp [FunctionName("ApprovalWorkflow")]
public static async Task Run(
To create the durable timer, call `context.CreateTimer`. The notification is received by `context.WaitForExternalEvent`. Then, `Task.WhenAny` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
-# [C# (Isolated)](#tab/csharp-isolated)
+# [C# (Isolated)](#tab/isolated-process)
```csharp [Function("ApprovalWorkflow")]
public static async Task Run(
To create the durable timer, call `context.CreateTimer`. The notification is received by `context.WaitForExternalEvent`. Then, `Task.WhenAny` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
-# [JavaScript (PM3)](#tab/javascript-v3)
+
+# [V3 model](#tab/v3-model)
```javascript const df = require("durable-functions");
module.exports = df.orchestrator(function*(context) {
To create the durable timer, call `context.df.createTimer`. The notification is received by `context.df.waitForExternalEvent`. Then, `context.df.Task.any` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
-# [JavaScript (PM4)](#tab/javascript-v4)
+# [V4 model](#tab/v4-model)
```javascript const df = require("durable-functions");
df.app.orchestration("humanInteractionDemo", function* (context) {
To create the durable timer, call `context.df.createTimer`. The notification is received by `context.df.waitForExternalEvent`. Then, `context.df.Task.any` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
-# [Python](#tab/python)
+# [Python](#tab/v1-model)
```python import azure.durable_functions as df
main = df.Orchestrator.create(orchestrator_function)
To create the durable timer, call `context.create_timer`. The notification is received by `context.wait_for_external_event`. Then, `context.task_any` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
-# [Python (PM2)](#tab/python-v2)
+# [Python (V2 model)](#tab/v2-model)
```python import json
def orchestrator_function(context: df.DurableOrchestrationContext):
To create the durable timer, call `context.create_timer`. The notification is received by `context.wait_for_external_event`. Then, `context.task_any` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
-# [PowerShell](#tab/powershell)
```powershell param($Context)
$output
``` To create the durable timer, call `Start-DurableTimer`. The notification is received by `Start-DurableExternalEventListener`. Then, `Wait-DurableTask` is called to decide whether to escalate (timeout happens first) or process the approval (the approval is received before timeout).
-# [Java](#tab/java)
```java @FunctionName("ApprovalWorkflow")
public void approvalWorkflow(
The `ctx.waitForExternalEvent(...).await()` method call pauses the orchestration until it receives an event named `ApprovalEvent`, which has a `boolean` payload. If the event is received, an activity function is called to process the approval result. However, if no such event is received before the `timeout` (72 hours) expires, a `TaskCanceledException` is raised and the `Escalate` activity function is called. - > [!NOTE] > There is no charge for time spent waiting for external events when running in the Consumption plan.
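The timeout-versus-approval race at the heart of this pattern can be sketched with `asyncio` as an analogy. The real pattern uses durable timers and external events that survive process recycles; the names and timings below are illustrative only.

```python
# Toy sketch of the approval-vs-timeout race: whichever task finishes first
# decides whether to process the approval or escalate.
import asyncio

async def approval_workflow(approval_event: asyncio.Event, timeout: float) -> str:
    timer = asyncio.create_task(asyncio.sleep(timeout))
    approval = asyncio.create_task(approval_event.wait())
    done, pending = await asyncio.wait(
        {timer, approval}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()                    # cancel whichever task lost the race
    await asyncio.gather(*pending, return_exceptions=True)
    return "process approval" if approval in done else "escalate"

async def demo() -> str:
    event = asyncio.Event()
    # The "approval" arrives well before the 0.5 s timeout.
    asyncio.get_running_loop().call_later(0.01, event.set)
    return await approval_workflow(event, timeout=0.5)

print(asyncio.run(demo()))   # → process approval
```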
curl -d "true" http://localhost:7071/runtime/webhooks/durabletask/instances/{ins
An event can also be raised using the durable orchestration client from another function in the same function app:
-# [C# (InProc)](#tab/csharp-inproc)
+
+# [C# (InProc)](#tab/in-process)
```csharp [FunctionName("RaiseEventToOrchestration")]
public static async Task Run(
} ```
-# [C# (Isolated)](#tab/csharp-isolated)
+# [C# (Isolated)](#tab/isolated-process)
```csharp [Function("RaiseEventToOrchestration")]
public static async Task Run(
} ```
-# [JavaScript (PM3)](#tab/javascript-v3)
+
+# [V3 model](#tab/v3-model)
```javascript const df = require("durable-functions");
module.exports = async function (context) {
}; ```
-# [JavaScript (PM4)](#tab/javascript-v4)
+# [V4 model](#tab/v4-model)
```javascript const df = require("durable-functions");
app.get("raiseEventToOrchestration", async function (request, context) {
}); ```
-# [Python](#tab/python)
+
+# [Python](#tab/v1-model)
```python import azure.functions as func
async def main(client):
```
-# [Python (PM2)](#tab/python-v2)
+# [Python (V2 model)](#tab/v2-model)
```python import azure.functions as func
async def main(client: str):
await client.raise_event(instance_id, "ApprovalEvent", is_approved) ```
-# [PowerShell](#tab/powershell)
```powershell
Send-DurableExternalEvent -InstanceId $InstanceId -EventName "ApprovalEvent" -Ev
```
-# [Java](#tab/java)
```java @FunctionName("RaiseEventToOrchestration")
public void raiseEventToOrchestration(
} ``` - ### <a name="aggregator"></a>Pattern #6: Aggregator (stateful entities)
The tricky thing about trying to implement this pattern with normal, stateless f
You can use [Durable entities](durable-functions-entities.md) to easily implement this pattern as a single function.
-# [C# (InProc)](#tab/csharp-inproc)
+
+# [C# (InProc)](#tab/in-process)
```csharp [FunctionName("Counter")]
public class Counter
} ```
-# [C# (Isolated)](#tab/csharp-isolated)
+# [C# (Isolated)](#tab/isolated-process)
Durable entities are currently not supported in the .NET-isolated worker.
-# [JavaScript (PM3)](#tab/javascript-v3)
+
+# [V3 model](#tab/v3-model)
```javascript const df = require("durable-functions");
module.exports = df.entity(function(context) {
}); ```
-# [JavaScript (PM4)](#tab/javascript-v4)
+# [V4 model](#tab/v4-model)
```javascript const df = require("durable-functions");
df.app.entity("entityDemo", function (context) {
}); ```
-# [Python](#tab/python)
+
+# [Python](#tab/v1-model)
```python import azure.functions as func
def entity_function(context: df.DurableOrchestrationContext):
main = df.Entity.create(entity_function) ```
-# [Python (PM2)](#tab/python-v2)
+# [Python (V2 model)](#tab/v2-model)
```python import azure.functions as func
def entity_function(context: df.DurableOrchestrationContext):
context.set_state(current_value) ```
-# [PowerShell](#tab/powershell)
-Durable entities are currently not supported in PowerShell.
+> [!NOTE]
+> Durable entities are currently not supported in PowerShell.
-# [Java](#tab/java)
-Durable entities are currently not supported in Java.
+> [!NOTE]
+> Durable entities are currently not supported in Java.
- Clients can enqueue *operations* for (also known as "signaling") an entity function using the [entity client binding](durable-functions-bindings.md#entity-client).
-# [C# (InProc)](#tab/csharp-inproc)
+
+# [C# (InProc)](#tab/in-process)
```csharp [FunctionName("EventHubTriggerCSharp")]
public static async Task Run(
> [!NOTE] > Dynamically generated proxies are also available in .NET for signaling entities in a type-safe way. And in addition to signaling, clients can also query for the state of an entity function using [type-safe methods](durable-functions-dotnet-entities.md#accessing-entities-through-interfaces) on the orchestration client binding.
-# [C# (Isolated)](#tab/csharp-isolated)
+# [C# (Isolated)](#tab/isolated-process)
Durable entities are currently not supported in the .NET-isolated worker.
-# [JavaScript (PM3)](#tab/javascript-v3)
+
+# [V3 model](#tab/v3-model)
```javascript const df = require("durable-functions");
module.exports = async function (context) {
}; ```
-# [JavaScript (PM4)](#tab/javascript-v4)
+# [V4 model](#tab/v4-model)
```javascript const df = require("durable-functions");
app.get("signalEntityDemo", async function (request, context) {
}); ```
-# [Python](#tab/python)
+
+# [Python](#tab/v1-model)
```python import azure.functions as func
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
return func.HttpResponse("Entity signaled") ```
-# [Python (PM2)](#tab/python-v2)
+# [Python (V2 model)](#tab/v2-model)
```python import azure.functions as func
async def main(req: func.HttpRequest, client) -> func.HttpResponse:
return func.HttpResponse("Entity signaled") ```
-# [PowerShell](#tab/powershell)
-Durable entities are currently not supported in PowerShell.
+> [!NOTE]
+> Durable entities are currently not supported in PowerShell.
-# [Java](#tab/java)
-Durable entities are currently not supported in Java.
+> [!NOTE]
+> Durable entities are currently not supported in Java.
- Entity functions are available in [Durable Functions 2.0](durable-functions-versions.md) and above for C#, JavaScript, and Python.
azure-functions Azfd0006 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0006.md
+
+ Title: "AZFD0006: SAS Token Expiring"
+
+description: "AZFD0006: SAS token expiring"
+++ Last updated : 08/03/2023+++
+# AZFD0006: SAS Token Expiring Warning
++
+This event occurs when a SAS token in an application setting is set to expire within 45 days.
++
+| | Value |
+|-|-|
+| **Event ID** |AZFD0006|
+| **Category** |[Usage]|
+| **Severity** |Warning|
+
+## Event description
+
+This warning event occurs when a SAS token within a URI is set to expire within 45 days. If the token has already expired, an error event is triggered instead.
++
+Functions currently checks only the following application settings for expiring SAS tokens:
++ [`WEBSITE_RUN_FROM_PACKAGE`](../../functions-app-settings.md#website_run_from_package)
++ [`AzureWebJobsStorage`](../../functions-app-settings.md#azurewebjobsstorage)
++ [`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](../../functions-app-settings.md#website_contentazurefileconnectionstring)
+The warning message specifies which application setting contains the SAS token that's about to expire and how much time is left.
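As an illustration only (not the actual Functions host check), the following Python sketch classifies a SAS URI by its `se` (signed expiry) query parameter using the same 45-day window; the example URI is hypothetical:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

WARNING_WINDOW = timedelta(days=45)

def sas_expiry_status(sas_uri: str, now: datetime) -> str:
    """Classify a SAS URI by its 'se' (signed expiry) parameter:
    'error' if expired, 'warning' if expiring within 45 days, else 'ok'."""
    query = parse_qs(urlparse(sas_uri).query)
    # The 'se' value is an ISO 8601 timestamp such as 2023-09-01T00:00:00Z.
    expiry = datetime.fromisoformat(query["se"][0].replace("Z", "+00:00"))
    if expiry <= now:
        return "error"    # an expired token raises an error event instead
    if expiry - now <= WARNING_WINDOW:
        return "warning"  # this is the condition AZFD0006 warns about
    return "ok"

# Hypothetical package URI; only the 'se' parameter matters here.
uri = "https://example.blob.core.windows.net/pkg/app.zip?sv=2022-11-02&se=2023-09-01T00:00:00Z&sig=REDACTED"
print(sas_expiry_status(uri, datetime(2023, 8, 3, tzinfo=timezone.utc)))  # prints "warning"
```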
+++
+## How to resolve the event
+
+There are two ways to resolve this event:
+
++ Renew the SAS token so that it doesn't expire within the next 45 days, and set the new value in application settings. For more information, see [Manually uploading a package to Blob Storage](../../run-functions-from-deployment-package.md#manually-uploading-a-package-to-blob-storage).
++ Switch to using managed identities instead of relying on a SAS URI. For more information, see [Fetch a package from Azure Blob Storage using a managed identity](../../run-functions-from-deployment-package.md#fetch-a-package-from-azure-blob-storage-using-a-managed-identity).
+
+## When to suppress the event
+
+This event shouldn't be suppressed.
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
Title: Azure SQL trigger for Functions description: Learn to use the Azure SQL trigger in Azure Functions.-+ Previously updated : 4/14/2023- Last updated : 8/04/2023+ zone_pivot_groups: programming-languages-set-functions-lang-workers
The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https:
|||
| **TableName** | Required. The name of the table monitored by the trigger. |
| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
+| **LeasesTableName** | Optional. The name of the table used to store leases. If not specified, the leases table name defaults to `Leases_{FunctionId}_{TableId}`. For more information on how this name is generated, see the [SQL trigger binding documentation](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/TriggerBinding.md#az_funcleasestablename). |
::: zone-end
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
| **name** | Required. The name of the parameter that the trigger binds to. |
| **tableName** | Required. The name of the table monitored by the trigger. |
| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
+| **LeasesTableName** | Optional. The name of the table used to store leases. If not specified, the leases table name defaults to `Leases_{FunctionId}_{TableId}`. For more information on how this name is generated, see the [SQL trigger binding documentation](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/TriggerBinding.md#az_funcleasestablename). |
::: zone-end
The following table explains the binding configuration properties that you set i
| **direction** | Required. Must be set to `in`. |
| **tableName** | Required. The name of the table monitored by the trigger. |
| **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
+| **LeasesTableName** | Optional. The name of the table used to store leases. If not specified, the leases table name defaults to `Leases_{FunctionId}_{TableId}`. For more information on how this name is generated, see the [SQL trigger binding documentation](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/TriggerBinding.md#az_funcleasestablename). |
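As a quick illustration of the default naming, the following Python sketch applies the pattern; the example IDs are hypothetical, and how the real `FunctionId` and `TableId` values are generated is covered in the linked extension documentation:

```python
def default_leases_table_name(function_id: str, table_id: str) -> str:
    # Default pattern used when LeasesTableName isn't specified; the IDs
    # themselves come from the SQL extension (see the trigger docs).
    return f"Leases_{function_id}_{table_id}"

# Hypothetical IDs, for illustration only.
print(default_leases_table_name("af607ef2", "901578250"))  # prints "Leases_af607ef2_901578250"
```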
::: zone-end

## Optional Configuration
azure-functions Functions Bindings Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-example.md
Title: Azure Functions trigger and binding example description: Learn to configure Azure Function bindings -+ ms.devlang: csharp, javascript Last updated 02/08/2022
azure-functions Functions Bindings Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-return-value.md
Title: Using return value from an Azure Function
description: Learn to manage return values for Azure Functions ms.devlang: csharp, fsharp, java, javascript, powershell, python-+ Last updated 07/25/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
azure-functions Functions Container Apps Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md
Title: Azure Container Apps hosting of Azure Functions description: Learn about how you can use Azure Container Apps to host containerized function apps in Azure Functions. Previously updated : 05/04/2023 Last updated : 07/30/2023 # Customer intent: As a cloud developer, I want to learn more about hosting my function apps in Linux containers by using Azure Container Apps.
Keep in mind the following considerations when deploying your function app conta
+ Australia East + Central US + East US
- + East Us 2
+ + East US 2
+ North Europe + South Central US + UK South + West Europe
- + West US3
+ + West US 3
+ When your container is hosted in a [Consumption + Dedicated plan structure](../container-apps/plans.md#consumption-dedicated), only the default Consumption plan is currently supported. Dedicated plans in this structure aren't yet supported for Functions. When running functions on Container Apps, you're charged only for the Container Apps usage. For more information, see the [Azure Container Apps pricing page](https://azure.microsoft.com/pricing/details/container-apps/). + While all triggers can be used, only the following triggers can dynamically scale (from zero instances) when running on Container Apps: + HTTP
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md
Title: Azure Functions Core Tools reference description: Reference documentation that supports the Azure Functions Core Tools (func.exe). Previously updated : 07/13/2021 Last updated : 07/30/2023 # Azure Functions Core Tools reference
Core Tools commands are organized into the following contexts, each providing a
| -- | -- |
| [`func`](#func-init) | Commands used to create and run functions on your local computer. |
| [`func azure`](#func-azure-functionapp-fetch-app-settings) | Commands for working with Azure resources, including publishing. |
+| [`func azurecontainerapps`](#func-azurecontainerapps-deploy) | Deploy containerized function app to Azure Container Apps. |
| [`func durable`](#func-durable-delete-task-hub) | Commands for working with [Durable Functions](./durable/durable-functions-overview.md). |
| [`func extensions`](#func-extensions-install) | Commands for installing and managing extensions. |
| [`func kubernetes`](#func-kubernetes-deploy) | Commands for working with Kubernetes and Azure Functions. |
func host start
| **`--timeout`** | The timeout for the Functions host to start, in seconds. Default: 20 seconds.|
| **`--useHttps`** | Bind to `https://localhost:{port}` rather than to `http://localhost:{port}`. By default, this option creates a trusted certificate on your computer.|
-In version 1.x, you can also use the [`func run` command](#func-run) to run a specific function and pass test data to it.
+In version 1.x, you can also use the [`func run`](#func-run) command to run a specific function and pass test data to it.
To learn more, see [Enable streaming logs](functions-run-local.md#enable-streami
Deploys a Functions project to an existing function app resource in Azure. ```command
-func azure functionapp publish <FunctionAppName>
+func azure functionapp publish <APP_NAME>
``` For more information, see [Deploy project files](functions-run-local.md#project-file-deployment).
The following publish options apply, based on version:
| Option | Description | | | -- |
+| **`--access-token`** | Lets you use a specific access token when performing authenticated Azure actions. |
+| **`--access-token-stdin`** | Reads a specific access token from standard input. Use this option when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). |
| **`--additional-packages`** | List of packages to install when building native dependencies. For example: `python3-dev libevent-dev`. |
| **`--build`**, **`-b`** | Performs build action when deploying to a Linux function app. Accepts: `remote` and `local`. |
| **`--build-native-deps`** | Skips generating the `.wheels` folder when publishing Python function apps. |
| **`--csx`** | Publish a C# script (.csx) project. |
+| **`--dotnet-cli-params`** | When publishing compiled C# (.csproj) functions, the core tools calls `dotnet build --output bin/publish`. Any parameters passed to this are appended to the command line. |
| **`--force`** | Ignore prepublishing verification in certain scenarios. |
-| **`--dotnet-cli-params`** | When publishing compiled C# (.csproj) functions, the core tools calls `dotnet build --output bin/publish`. Any parameters passed to this will be appended to the command line. |
|**`--list-ignored-files`** | Displays a list of files that are ignored during publishing, which is based on the `.funcignore` file. |
| **`--list-included-files`** | Displays a list of files that are published, which is based on the `.funcignore` file. |
+| **`--management-url`** | Sets the management URL for your cloud. Use this when running in a sovereign cloud. |
| **`--no-build`** | Project isn't built during publishing. For Python, `pip install` isn't performed. |
| **`--nozip`** | Turns the default `Run-From-Package` mode off. |
| **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used.|
| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using a [local storage emulator](functions-develop-local.md#local-storage-emulator), first change the app setting to an [actual storage connection](functions-run-local.md#get-your-storage-connection-strings). |
| **`--publish-settings-only`**, **`-o`** | Only publish settings and skip the content. Default is prompt. |
| **`--slot`** | Optional name of a specific slot to which to publish. |
+| **`--subscription`** | Sets the default subscription to use. |
+ # [v1.x](#tab/v1)
Gets the connection string for the specified Azure Storage account.
func azure storage fetch-connection-string <STORAGE_ACCOUNT_NAME>
```
+## func azurecontainerapps deploy
+
+Deploys a containerized function app to an Azure Container Apps environment. Both the storage account used by the function app and the environment must already exist. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
+
+```command
+func azurecontainerapps deploy --name <APP_NAME> --environment <ENVIRONMENT_NAME> --storage-account <STORAGE_CONNECTION> --resource-group <RESOURCE_GROUP> --image-name <IMAGE_NAME> --registry-server <REGISTRY_SERVER> --registry-username <USERNAME> --registry-password <PASSWORD>
+
+```
+
+The following deployment options apply:
+
+| Option | Description |
+| | -- |
+| **`--access-token`** | Lets you use a specific access token when performing authenticated Azure actions. |
+| **`--access-token-stdin`** | Reads a specific access token from standard input. Use this option when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). |
+| **`--environment`** | The name of an existing Container Apps environment.|
+| **`--image-build`** | When set to `true`, skips the local Docker build. |
+| **`--image-name`** | The image name of an existing container in a container registry. The image name includes the tag name. |
+| **`--location`** | Region for the deployment. Ideally, this is the same region as the environment and storage account resources. |
+| **`--management-url`** | Sets the management URL for your cloud. Use this when running in a sovereign cloud. |
+| **`--name`** | The name used for the function app deployment in the Container Apps environment. This same name is also used when managing the function app in the portal. The name should be unique in the environment. |
+| **`--registry`** | When set, a Docker build is run and the image is pushed to the registry set in `--registry`. You can't use `--registry` with `--image-name`. For Docker Hub, also use `--registry-username`.|
+| **`--registry-password`** | The password or token used to retrieve the image from a private registry.|
+| **`--registry-username`** | The username used to retrieve the image from a private registry.|
+| **`--resource-group`** | The resource group in which to create the functions-related resources.|
+| **`--storage-account`** | The connection string for the storage account to be used by the function app.|
+| **`--subscription`** | Sets the default subscription to use. |
+| **`--worker-runtime`** | Sets the runtime language of the function app. This parameter is only used with `--image-name` and `--image-build`, otherwise the language is determined during the local build. Supported values are: `dotnet`, `dotnetIsolated`, `node`, `python`, `powershell`, and `custom` (for custom handlers). |
++
+> [!IMPORTANT]
+> Storage connection strings and other service credentials are important secrets. Make sure to securely store any script files that call `func azurecontainerapps deploy`, and don't commit them to publicly accessible source control.
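One way to keep these secrets out of committed scripts is to read them from the environment (or a secret store) at deploy time. The following Python sketch assembles the command from a settings mapping such as `os.environ`; the variable names and example values are hypothetical, not Core Tools conventions:

```python
import shlex

def build_deploy_command(settings: dict) -> str:
    """Assemble a `func azurecontainerapps deploy` invocation from a
    settings mapping so that connection strings and registry credentials
    never get hard-coded into a script checked into source control."""
    args = [
        "func", "azurecontainerapps", "deploy",
        "--name", settings["APP_NAME"],
        "--environment", settings["ACA_ENVIRONMENT"],
        "--storage-account", settings["STORAGE_CONNECTION"],
        "--resource-group", settings["RESOURCE_GROUP"],
        "--image-name", settings["IMAGE_NAME"],
    ]
    # shlex.join quotes values containing shell metacharacters (like ';'
    # in a connection string) so the command is safe to paste into a shell.
    return shlex.join(args)

cmd = build_deploy_command({
    "APP_NAME": "myfuncapp",
    "ACA_ENVIRONMENT": "myenv",
    "STORAGE_CONNECTION": "DefaultEndpointsProtocol=https;AccountName=examplestorage;AccountKey=REDACTED",
    "RESOURCE_GROUP": "myrg",
    "IMAGE_NAME": "myacr.azurecr.io/funcapp:v1",
})
print(cmd)
```

In practice you would pass `os.environ` (populated by your CI system's secret store) instead of a literal dictionary.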
+
## func deploy

The `func deploy` command is deprecated. Instead use [`func kubernetes deploy`](#func-kubernetes-deploy).
Deploys a Functions project as a custom docker container to a Kubernetes cluster
func kubernetes deploy ```
-This command builds your project as a custom container and publishes it to a Kubernetes cluster. Custom containers must have a Dockerfile. To create an app with a Dockerfile, use the `--dockerfile` option with the [`func init` command](#func-init).
+This command builds your project as a custom container and publishes it to a Kubernetes cluster. Custom containers must have a Dockerfile. To create an app with a Dockerfile, use the `--dockerfile` option with the [`func init`](#func-init) command.
The following Kubernetes deployment options are available:
The following Kubernetes deployment options are available:
| **`--service-type`** | Sets the type of Kubernetes Service. Supported values are: `ClusterIP`, `NodePort`, and `LoadBalancer` (default). |
| **`--use-config-map`** | Use a `ConfigMap` object (v1) instead of a `Secret` object (v1) to configure [function app settings](functions-how-to-use-azure-function-app-settings.md#settings). The map name is set using `--config-map-name`.|
-Core Tools uses the local Docker CLI to build and publish the image.
-
-Make sure your Docker is already installed locally. Run the `docker login` command to connect to your account.
+Core Tools uses the local Docker CLI to build and publish the image. Make sure your Docker is already installed locally. Run the `docker login` command to connect to your account.
To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
azure-functions Functions How To Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-custom-container.md
Title: Working with Azure Functions in containers description: Learn how to work with function apps running in Linux containers. Previously updated : 06/14/2023 Last updated : 07/30/2023 zone_pivot_groups: functions-container-hosting
zone_pivot_groups: functions-container-hosting
# Working with containers and Azure Functions
-This article demonstrates the support that Azure Functions provides for working with function apps running in Linux containers. Choose the hosting environment for your containerized function app at the top of the article.
+This article demonstrates the support that Azure Functions provides for working with containerized function apps running in an Azure Container Apps environment. Support for hosting function app containers in Container Apps is currently in preview. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
+This article demonstrates the support that Azure Functions provides for working with function apps running in Linux containers.
+
+Choose the hosting environment for your containerized function app at the top of the article.
If you want to jump right in, the following article shows you how to create your first function running in a Linux container and deploy the image from a container registry to a supported Azure hosting service:
az functionapp config container set --image <IMAGE_NAME> --registry-password <SE
In this example, `<IMAGE_NAME>` is the full name of the new image with version. Private registries require you to supply a username and password. Store these credentials securely. You should also consider [enabling continuous deployment](#enable-continuous-deployment-to-azure).

## Azure portal create using containers
-When you create a function app in the [Azure portal](https://portal.azure.com), you can also create a deployment of the function app from an existing container image. The following steps create and deploy a function app from an [existing container image](#creating-your-function-app-in-a-container).
+When you create a function app in the [Azure portal](https://portal.azure.com), you can choose to deploy the function app from an image in a container registry. To learn how to create a containerized function app in a container registry, see [Creating your function app in a container](#creating-your-function-app-in-a-container).
+
+The following steps create and deploy an existing containerized function app from a container registry.
1. From the Azure portal menu or the **Home** page, select **Create a resource**.
When you create a function app in the [Azure portal](https://portal.azure.com),
| | - | -- |
| **Subscription** | Your subscription | The subscription in which you create your function app. |
| **[Resource Group](../azure-resource-manager/management/overview.md)** | *myResourceGroup* | Name for the new resource group in which you create your function app. You should create a resource group because there are [known limitations when creating new function apps in an existing resource group](functions-scale.md#limitations-for-creating-new-function-apps-in-an-existing-resource-group).|
- | **Function App name** | Globally unique name | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. |
+ | **Function App name** | Unique name<sup>*</sup> | Name that identifies your new function app. Valid characters are `a-z` (case insensitive), `0-9`, and `-`. |
| **Do you want to deploy code or container image?**| Container image | Deploy a containerized function app from a registry. To create a function app in registry, see [Create a function app in a local container](functions-create-container-registry.md). |
- |**Region**| Preferred region | Select a [region](https://azure.microsoft.com/regions/) that's near you or near other services that your functions can access. |
+ |**Region**| Preferred region | Select a [region](https://azure.microsoft.com/regions/) that's near you or near other services that your functions can access. |
+
+ <sup>*</sup>App name must be globally unique among all Azure Functions hosted apps.
+ 4. In **[Hosting options and plans](functions-scale.md)**, choose **Functions Premium**.
- :::image type="content" source="media/functions-how-to-custom-container/function-app-create-container-functions-premium.png" alt-text="Screenshot of the Basics tab in the Azure portal when creating a function app for hosting a container in a Functions Premium plan.":::
+ :::image type="content" source="media/functions-how-to-custom-container/function-app-create-container-functions-premium.png" alt-text="Screenshot of the Basics tab in the Azure portal when creating a function app for hosting a container in a Functions Premium plan.":::
This creates a function app hosted by Azure Functions in the [Premium plan](functions-premium-plan.md), which supports dynamic scaling. You can also choose to run in an **App Service plan**, but in this kind of dedicated plan you must manage the [scaling of your function app](functions-scale.md).
+ <sup>*</sup>App name must be unique within the Azure Container Apps environment. Not all regions are supported in the preview. For more information, see [Considerations for Container Apps hosting](functions-container-apps-hosting.md#considerations-for-container-apps-hosting).
+ 4. In **[Hosting options and plans](functions-scale.md)**, choose **Azure Container Apps Environment plan**. :::image type="content" source="media/functions-how-to-custom-container/function-app-create-container-apps-hosting.png" alt-text="Portal create Basics tab for a containerized function app hosted in Azure Container Apps.":::
- This creates a new **Azure Container Apps Environment** resource to host your function app container. By default, the environment is created in a Consumption plan without zone redundancy, to minimize costs. You can also choose an existing Container Apps environment. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
+ This creates a new **Azure Container Apps Environment** resource to host your function app container. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
+
+ By default, the environment is created in a Consumption plan without zone redundancy, to minimize costs. You can also choose an existing Container Apps environment. To learn about environments, see [Azure Container Apps environments](../container-apps/environment.md).
::: zone-end :::zone pivot="azure-functions,container-apps" 5. Accept the default options of creating a new storage account on the **Storage** tab and a new Application Insight instance on the **Monitoring** tab. You can also choose to use an existing storage account or Application Insights instance.
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
Title: Work with Azure Functions Core Tools
description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you run them on Azure Functions. ms.assetid: 242736be-ec66-4114-924b-31795fd18884 Previously updated : 06/26/2023 Last updated : 07/30/2023 zone_pivot_groups: programming-languages-set-functions
The following considerations apply to Core Tools versions:
The recommended way to install Core Tools depends on the operating system of your local development computer.
-# [Windows](#tab/windows)
+### [Windows](#tab/windows)
The following steps use a Windows installer (MSI) to install Core Tools v4.x. For more information about other package-based installers, see the [Core Tools readme](https://github.com/Azure/azure-functions-core-tools/blob/v4.x/README.md#windows).
If you previously used Windows installer (MSI) to install Core Tools on Windows,
If you need to install version 1.x of the Core Tools, see the [GitHub repository](https://github.com/Azure/azure-functions-core-tools/blob/v1.x/README.md#installing) for more information.
-# [macOS](#tab/macos)
+### [macOS](#tab/macos)
[!INCLUDE [functions-x86-emulation-on-arm64-note](../../includes/functions-x86-emulation-on-arm64-note.md)]
The following steps use Homebrew to install the Core Tools on macOS.
# if upgrading on a machine that has 2.x or 3.x installed:
brew link --overwrite azure-functions-core-tools@4
```
-# [Linux](#tab/linux)
+### [Linux](#tab/linux)
The following steps use [APT](https://wiki.debian.org/Apt) to install Core Tools on your Ubuntu/Debian Linux distribution. For other Linux distributions, see the [Core Tools readme](https://github.com/Azure/azure-functions-core-tools/blob/v4.x/README.md#linux).
For compiled C# project, add references to the specific NuGet packages for the b
::: zone pivot="programming-language-java,programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-typescript" Functions provides _extension bundles_ to make it easy to work with binding extensions in your project. Extension bundles, which are versioned and defined in the host.json file, install a complete set of compatible binding extension packages for your app. Your host.json should already have extension bundles enabled. If for some reason you need to add or update the extension bundle in the host.json file, see [Extension bundles](functions-bindings-register.md#extension-bundles).
-If you must use a binding extension or an extension version not in a supported bundle, you'll need to manually install extension. For such rare scenarios, see [Install extensions](#install-extensions).
+If you must use a binding extension or an extension version not in a supported bundle, you need to manually install extensions. For such rare scenarios, see [Install extensions](#install-extensions).
::: zone-end [!INCLUDE [functions-local-settings-file](../../includes/functions-local-settings-file.md)]
When no valid storage connection string is set for [`AzureWebJobsStorage`] and a
Even when using the [Azurite storage emulator](functions-develop-local.md#local-storage-emulator) for development, you may want to run locally with an actual storage connection. Assuming you have already [created a storage account](../storage/common/storage-account-create.md), you can get a valid storage connection string in one of several ways:
-# [Portal](#tab/portal)
+#### [Portal](#tab/portal)
1. From the [Azure portal], search for and select **Storage accounts**.
Even when using the [Azurite storage emulator](functions-develop-local.md#local-
![Copy connection string from Azure portal](./media/functions-run-local/copy-storage-connection-portal.png)
-# [Core Tools](#tab/azurecli)
+#### [Core Tools](#tab/azurecli)
From the project root, use one of the following commands to download the connection string from Azure:
From the project root, use one of the following commands to download the connect
When you aren't already signed in to Azure, you're prompted to do so. These commands overwrite any existing settings in the local.settings.json file. To learn more, see the [`func azure functionapp fetch-app-settings`](functions-core-tools-reference.md#func-azure-functionapp-fetch-app-settings) and [`func azure storage fetch-connection-string`](functions-core-tools-reference.md#func-azure-storage-fetch-connection-string) commands.
-# [Storage Explorer](#tab/storageexplorer)
+#### [Storage Explorer](#tab/storageexplorer)
1. Run [Azure Storage Explorer](https://storageexplorer.com/).
This example creates a Queue Storage trigger named `MyQueueTrigger`:
func new --template "Azure Queue Storage Trigger" --name MyQueueTrigger
```
-To learn more, see the [`func new` command](functions-core-tools-reference.md#func-new).
+To learn more, see the [`func new`](functions-core-tools-reference.md#func-new) command.
## <a name="start"></a>Run functions locally
func start
::: zone-end ::: zone pivot="programming-language-csharp,programming-language-javascript" The way you start the host depends on your runtime version:
-# [v4.x](#tab/v2)
+### [v4.x](#tab/v2)
```
func start
```
-# [v1.x](#tab/v1)
+### [v1.x](#tab/v1)
```
func host start
```
Keep in mind the following considerations when running your functions locally:
+ By default, authorization isn't enforced locally for HTTP endpoints. This means that all local HTTP requests are handled as `authLevel = "anonymous"`. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys). You can use the `--enableAuth` option to require authorization when running locally. For more information, see [`func start`](./functions-core-tools-reference.md?tabs=v2#func-start)
-+ While there is local storage emulation available, it's often best to validate your triggers and bindings against live services in Azure. You can maintain the connections to these services in the local.settings.json project file. For more information, see [Local settings file](functions-develop-local.md#local-settings-file). Make sure to keep test and production data separate when testing against live Azure services.
++ While there's local storage emulation available, it's often best to validate your triggers and bindings against live services in Azure. You can maintain the connections to these services in the local.settings.json project file. For more information, see [Local settings file](functions-develop-local.md#local-settings-file). Make sure to keep test and production data separate when testing against live Azure services.
+ You can trigger non-HTTP functions locally without connecting to a live service. For more information, see [Non-HTTP triggered functions](#non-http-triggered-functions).
curl --get http://localhost:7071/api/MyHttpTrigger?name=Azure%20Rocks
The following example is the same function called from a POST request passing _name_ in the request body:
-# [Bash](#tab/bash)
+##### [Bash](#tab/bash)
```bash
curl --request POST http://localhost:7071/api/MyHttpTrigger --data '{"name":"Azure Rocks"}'
```
-# [Cmd](#tab/cmd)
+##### [Cmd](#tab/cmd)
```cmd
curl --request POST http://localhost:7071/api/MyHttpTrigger --data "{'name':'Azure Rocks'}"
```
To pass test data to the administrator endpoint of a function, you must supply t
The `<trigger_input>` value contains data in a format expected by the function. The following cURL example is a POST to a `QueueTrigger` function. In this case, the input is a string that is equivalent to the message expected to be found in the queue.
-# [Bash](#tab/bash)
+##### [Bash](#tab/bash)
```bash
curl --request POST -H "Content-Type:application/json" --data '{"input":"sample queue data"}' http://localhost:7071/admin/functions/QueueTrigger
```
-# [Cmd](#tab/cmd)
+##### [Cmd](#tab/cmd)
```cmd
curl --request POST -H "Content-Type:application/json" --data "{'input':'sample queue data'}" http://localhost:7071/admin/functions/QueueTrigger
```
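The admin-endpoint request shown in the cURL examples can be assembled (without sending it) in a short Python sketch; the function name and port match the `QueueTrigger` example above:

```python
import json
import urllib.request

# Sketch: build the same POST request to the local admin endpoint that the
# cURL examples issue. The request is constructed but not sent here.
def build_admin_request(function_name, trigger_input, host="localhost", port=7071):
    url = f"http://{host}:{port}/admin/functions/{function_name}"
    body = json.dumps({"input": trigger_input}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )

req = build_admin_request("QueueTrigger", "sample queue data")
```

Sending it with `urllib.request.urlopen(req)` requires the Functions host to be running locally.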
The Azure Functions Core Tools supports the following types of deployment:
| Deployment type | Command | Description |
| -- | -- | -- |
| Project files | [`func azure functionapp publish`](functions-core-tools-reference.md#func-azure-functionapp-publish) | Deploys function project files directly to your function app using [zip deployment](functions-deployment-technologies.md#zip-deploy). |
+| Azure Container Apps | `func azurecontainerapps deploy` | Deploys a containerized function app to an existing Container Apps environment. |
| Kubernetes cluster | `func kubernetes deploy` | Deploys your Linux function app as a custom Docker container to a Kubernetes cluster. |
-### Before you publish
+### Authenticating with Azure
->[!IMPORTANT]
->You must have the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) installed locally to be able to publish to Azure from Core Tools.
-
-A project folder may contain language-specific files and directories that shouldn't be published. Excluded items are listed in a .funcignore file in the root project folder.
-
-You must have already [created a function app in your Azure subscription](functions-cli-samples.md#create), to which you can deploy your code. Projects that require compilation should be built so that the binaries can be deployed.
-
-To learn how to create a function app from the command prompt or terminal window using the Azure CLI or Azure PowerShell, see [Create a Function App for serverless execution](./scripts/functions-cli-create-serverless.md).
-
->[!IMPORTANT]
-> When you create a function app in the Azure portal, it uses version 4.x of the Function runtime by default. To make the function app use version 1.x of the runtime, follow the instructions in [Run on version 1.x](functions-versions.md#creating-1x-apps).
-> You can't change the runtime version for a function app that has existing functions.
+You must have either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) installed locally to be able to publish to Azure from Core Tools. By default, Core Tools uses these tools to authenticate with your Azure account.
+If you don't have these tools installed, you need to instead [get a valid access token](/cli/azure/account#az-account-get-access-token) to use during deployment. You can present an access token using the `--access-token` option in the deployment commands.
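As a sketch of the token-based deployment, the publish invocation can be composed as an argument list; the app name and token value here are placeholders, not values from this article:

```python
# Compose the Core Tools publish command with an explicit access token, as an
# argument list suitable for subprocess.run. Placeholder values only; obtain a
# real token from your identity provider or pipeline.
def publish_command(app_name, access_token):
    return [
        "func", "azure", "functionapp", "publish", app_name,
        "--access-token", access_token,
    ]

cmd = publish_command("MyFunctionApp", "<ACCESS_TOKEN>")
print(cmd)
```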
### <a name="project-file-deployment"></a>Deploy project files
-To publish your local code to a function app in Azure, use the `publish` command:
+To publish your local code to a function app in Azure, use the [`func azure functionapp publish`](./functions-core-tools-reference.md#func-azure-functionapp-publish) command, as in the following example:
```
func azure functionapp publish <FunctionAppName>
```
+This command publishes project files from the current directory to the `<FunctionAppName>` as a .zip deployment package. If the project requires compilation, it's done remotely during deployment.
+For Java, you use Maven rather than Core Tools to publish your local project to Azure. Use the following Maven command to publish your project:
+
+```
+mvn azure-functions:deploy
+```
+
+When you run this command, Azure resources are created during the initial deployment based on the settings in your _pom.xml_ file. For more information, see [Deploy the function project to Azure](create-first-function-cli-java.md#deploy-the-function-project-to-azure).
The following considerations apply to this kind of deployment:
-+ Publishing overwrites existing files in the function app.
++ Publishing overwrites existing files in the remote function app deployment.
-+ Use the [`--publish-local-settings` option][func azure functionapp publish] to automatically create app settings in your function app based on values in the local.settings.json file.
++ You must have already [created a function app in your Azure subscription](functions-cli-samples.md#create). Core Tools deploys your project code to this function app resource. To learn how to create a function app from the command prompt or terminal window using the Azure CLI or Azure PowerShell, see [Create a Function App for serverless execution](./scripts/functions-cli-create-serverless.md). You can also [create these resources in the Azure portal](./functions-create-function-app-portal.md#create-a-function-app). You get an error when you try to publish to a `<FunctionAppName>` that doesn't exist in your subscription.
+
++ A project folder may contain language-specific files and directories that shouldn't be published. Excluded items are listed in a .funcignore file in the root project folder.
+
++ By default, your project is deployed so that it [runs from the deployment package](run-functions-from-deployment-package.md). To disable this recommended deployment mode, use the [`--nozip` option][func azure functionapp publish].
+
+ A [remote build](functions-deployment-technologies.md#remote-build) is performed on compiled projects. This can be controlled by using the [`--no-build` option][func azure functionapp publish].
-+ Your project is deployed such that it [runs from the deployment package](run-functions-from-deployment-package.md). To disable this recommended deployment mode, use the [`--nozip` option][func azure functionapp publish].
++ Use the [`--publish-local-settings` option][func azure functionapp publish] to automatically create app settings in your function app based on values in the local.settings.json file.
+
++ To publish to a specific named slot in your function app, use the [`--slot` option](functions-core-tools-reference.md#func-azure-functionapp-publish).
-+ Java uses Maven to publish your local project to Azure. Instead, use the following command to publish to Azure: `mvn azure-functions:deploy`. Azure resources are created during initial deployment.
+### Azure Container Apps deployment
-+ You get an error when you try to publish to a `<FunctionAppName>` that doesn't exist in your subscription.
+Functions lets you deploy a [containerized function app](functions-create-container-registry.md) to an Azure Container Apps environment. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md). Use the following [`func azurecontainerapps deploy`](./functions-core-tools-reference.md#func-azurecontainerapps-deploy) command to deploy an existing container image to a Container Apps environment:
-### Kubernetes cluster
+```command
+func azurecontainerapps deploy --name <APP_NAME> --environment <ENVIRONMENT_NAME> --storage-account <STORAGE_CONNECTION> --resource-group <RESOURCE_GROUP> --image-name <IMAGE_NAME> [--registry-password] [--registry-server] [--registry-username]
+
+When deploying to an Azure Container Apps environment, the environment and storage account must already exist. You don't need to create a separate function app resource. The storage account connection string you provide is used by the deployed function app.
-Functions also lets you define your Functions project to run in a Docker container. Use the [`--docker` option][func init] of `func init` to generate a Dockerfile for your specific language. This file is then used when creating a container to deploy. For more information, see [Working with containers and Azure Functions](functions-how-to-custom-container.md).
+> [!IMPORTANT]
+> Storage connection strings and other service credentials are important secrets. Make sure to securely store any script files that call `func azurecontainerapps deploy`, and don't store them in publicly accessible source control systems.
-Core Tools can be used to deploy your project as a custom container image to a Kubernetes cluster.
+### Kubernetes cluster
-The following command uses the Dockerfile to generate a container and deploy it to a Kubernetes cluster.
+Core Tools can also be used to deploy a [containerized function app](functions-create-container-registry.md) to a Kubernetes cluster that you manage. The following [`func kubernetes deploy`](./functions-core-tools-reference.md#func-kubernetes-deploy) command uses the Dockerfile to generate a container in the specified registry and deploy it to the default Kubernetes cluster.
```command
func kubernetes deploy --name <DEPLOYMENT_NAME> --registry <REGISTRY_USERNAME>
```
-To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
+Azure Functions on Kubernetes using KEDA is an open-source effort that you can use free of charge. Best-effort support is provided by contributors and by the community. To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
## Install extensions
Use the following command to install a specific extension package at a specific
func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
```
-You can use this command to install any compatible NuGet package. To learn more, see the [`func extensions install` command](functions-core-tools-reference.md#func-extensions-install).
+You can use this command to install any compatible NuGet package. To learn more, see the [`func extensions install`](functions-core-tools-reference.md#func-extensions-install) command.
## Monitoring functions
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
The following considerations apply when using target-based scaling:
+ Target-based scaling is enabled by default for function apps on the Consumption plan or for Premium plans, but you can [opt-out](#opting-out). Event-driven scaling isn't supported when running on Dedicated (App Service) plans.
+ Your [function app runtime version](set-runtime-version.md) must be 4.3.0 or a later version.
++ Target-based scaling is enabled by default on function app runtime 4.19.0 or a later version.
+ When using target-based scaling, the `functionAppScaleLimit` site setting is still honored. For more information, see [Limit scale out](event-driven-scaling.md#limit-scale-out).
+ To achieve the most accurate scaling based on metrics, use only one target-based triggered function per function app.
+ When multiple functions in the same function app are all requesting to scale out at the same time, a sum across those functions is used to determine the change in desired instances. Functions requesting to scale out override functions requesting to scale in.
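One plausible reading of the last consideration can be sketched in Python. This is an illustration of the described aggregation only, not the actual scale controller logic:

```python
# Illustrative only: combine per-function scale votes as described above.
# Scale-out requests override scale-in requests, and the applied change is a
# sum across the functions requesting to scale out.
def desired_instance_change(votes):
    """votes: per-function requested change in instance count (+/-)."""
    scale_out = [v for v in votes if v > 0]
    if scale_out:  # any scale-out request wins over scale-in requests
        return sum(scale_out)
    return sum(v for v in votes if v < 0)

print(desired_instance_change([2, -1, 1]))  # 3
```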
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
The following example is a .csproj project file that uses .NET 6 on version 4.x:
Use one of the following procedures to update this XML file to run in the isolated worker model:
-# [.NET 6 (isolated)](#tab/net6-isolated)
+# [.NET 6](#tab/net6-isolated)
[!INCLUDE [functions-dotnet-migrate-project-v4-isolated](../../includes/functions-dotnet-migrate-project-v4-isolated.md)]
Use one of the following procedures to update this XML file to run in the isolat
When migrating to run in an isolated worker process, you must add the following program.cs file to your project:
-# [.NET 6 (isolated)](#tab/net6-isolated)
+# [.NET 6](#tab/net6-isolated)
:::code language="csharp" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/Program.cs" range="23-29":::
The differences between in-process and isolated worker process can be seen in HT
The HTTP trigger template for the migrated version looks like the following example:
-# [.NET 6 (isolated)](#tab/net6-isolated)
+# [.NET 6](#tab/net6-isolated)
:::code language="csharp" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-CSharp-Isolated/HttpTriggerCSharp.cs":::
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
description: This article shows you how to upgrade your existing function apps r
Last updated 07/31/2023-+ zone_pivot_groups: programming-languages-set-functions
azure-large-instances Find Your Subscription Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-large-instances/find-your-subscription-id.md
Last updated 06/01/2023
This article explains how to find your Azure Large Instances subscription ID. A *Subscription ID* is a unique identifier for your service in Azure.
-You need it when interacting with the Microsoft Support team. To find your subscription ID, follow these steps:
+You need it when interacting with the Microsoft Support team.
+To find your subscription ID, follow these steps:
1. Go to [Azure support portal](https://portal.Azure.Com)
-2. From the left pane, select **Subscriptions**.
-3. A new blade called "Subscriptions" will open to display your subscriptions.
-
-1. Choose the subscription you have used for Azure Large Instances.
----
+2. In **Azure services**, select **Subscriptions**.
+The **Subscriptions** blade appears, displaying your subscriptions.
+3. Choose the subscription you have used for Azure Large Instances.
azure-large-instances Quality Checks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-large-instances/quality-checks.md
Last updated 06/01/2023
# Quality checks for Azure Large Instances This article provides an overview of Azure Large Instances for Epic<sup>®</sup> quality checks.
-The Microsoft operations team performs a series of extensive quality checks to ensure that customers' requests to run Azure Large Instances for Epic<sup>®</sup> are fulfilled accurately, and that infrastructure is healthy before handover.
+The Microsoft operations team performs a series of extensive quality checks to ensure that customers' requests to run Azure Large Instances for Epic are fulfilled accurately, and that infrastructure is healthy before handover.
However, customers are advised to perform their own checks to ensure services have been provided as requested, including the following: * Basic connectivity
azure-maps How To Create Data Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md
Now that you have your storage account with the desired files linked to your Azu
> [!NOTE]
> The maximum size of a file that can be registered with an Azure Maps datastore is one gigabyte.
+
To create a data registry:
+# [system-assigned](#tab/System-assigned)
+
+1. Provide the information needed to reference the storage account that is being added to the data registry in the body of your HTTP request. The information must be in JSON format and contain the following fields:
+
+ ```json
+ {
+ "kind": "AzureBlob",
+ "azureBlob": {
+ "dataFormat": "geojson",
+ "linkedResource": "{datastore ID}",
+ "blobUrl": "https://teststorageaccount.blob.core.windows.net/testcontainer/test.geojson"
+ }
+ }
+ ```
+
+ > [!NOTE]
+ > When using System-assigned managed identities, you will get an error if you provide a value for the msiClientId property in your HTTP request.
+
+ For more information on the properties required in the HTTP request body, see [Data registry properties](#data-registry-properties).
+
+1. Once you have the body of your HTTP request ready, execute the following **HTTP PUT request**:
+
+ ```http
+ https://us.atlas.microsoft.com/dataRegistries/{udid}?api-version=2023-06-01&subscription-key={Your-Azure-Maps-Subscription-key}
+
+ ```
+
+ For more information on the `udid` property, see [The user data ID](#the-user-data-id).
+
+1. Copy the value of the **Operation-Location** key from the response header.
+
+# [user-assigned](#tab/User-assigned)
+
1. Provide the information needed to reference the storage account that is being added to the data registry in the body of your HTTP request. The information must be in JSON format and contain the following fields:

    ```json
To create a data registry:
    }
    ```
+ > [!NOTE]
+ > When using User-assigned managed identities, you will get an error if you don't provide a value for the msiClientId property in your HTTP request.
+
   For more information on the properties required in the HTTP request body, see [Data registry properties](#data-registry-properties).

1. Once you have the body of your HTTP request ready, execute the following **HTTP PUT request**:
To create a data registry:
1. Copy the value of the **Operation-Location** key from the response header.

+
+
> [!TIP]
> If the contents of a previously registered file are modified, the file will fail its [data validation](#data-validation) and won't be usable in Azure Maps until it's re-registered. To re-register a file, rerun the register request, passing in the same [AzureBlob](#the-azureblob) used to create the original registration.

The value of the **Operation-Location** key is the status URL that you'll use to check the status of the data registry creation in the next section; it contains the operation ID used by the [Get operation][Get operation] API.
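The **HTTP PUT request** URL described above can also be assembled programmatically. A brief Python sketch; the `udid` and subscription key values here are placeholders for illustration:

```python
from urllib.parse import urlencode

# Build the data registry PUT URL from the pieces described above.
# The udid and subscription key passed in below are placeholders.
def data_registry_url(udid, subscription_key, api_version="2023-06-01"):
    base = f"https://us.atlas.microsoft.com/dataRegistries/{udid}"
    return base + "?" + urlencode(
        {"api-version": api_version, "subscription-key": subscription_key}
    )

url = data_registry_url("my-udid", "my-key")
print(url)
```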
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
description: Recommendations for reducing costs in Azure Monitor.
Previously updated : 03/29/2023 Last updated : 08/03/2023
This article describes [Cost optimization](/azure/architecture/framework/cost/)
| Recommendation | Benefit |
|:|:|
| Configure agent collection to remove unneeded data. | Analyze the data collected by Container insights as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) and adjust your configuration to stop collection of data in ContainerLogs you don't need. |
-| Modify settings for collection of metric data. | You can reduce your costs by modifying the default collection settings Container insights uses for the collection of metric data. See [Enable cost optimization settings (preview)](containers/container-insights-cost-config.md) for details on modifying both the frequency that metric data is collected and the namespaces that are collected. |
+| Modify settings for collection of metric data. | You can reduce your costs by modifying the default collection settings Container insights uses for the collection of metric data. See [Enable cost optimization settings](containers/container-insights-cost-config.md) for details on modifying both the frequency that metric data is collected and the namespaces that are collected. |
| Limit Prometheus metrics collected. | If you configured Prometheus metric scraping, then follow the recommendations at [Controlling ingestion to reduce cost](containers/container-insights-cost.md#prometheus-metrics-scraping) to optimize your data collection for cost. |
| Configure Basic Logs. | [Convert your schema to ContainerLogV2](containers/container-insights-logging-v2.md) which is compatible with Basic logs and can provide significant cost savings as described in [Controlling ingestion to reduce cost](containers/container-insights-cost.md#configure-basic-logs). |
This article describes [Cost optimization](/azure/architecture/framework/cost/)
## Next step

- [Get best practices for a complete deployment of Azure Monitor](best-practices.md).
+
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
Use the following procedure if you're not using managed identity authentication.
3. Configure private link by following the instructions at [Configure your private link](../logs/private-link-configure.md). Set ingestion access to public and then set to private after the private endpoint is created but before monitoring is enabled. The private link resource region must be the same as the AKS cluster region.
-
4. Enable monitoring for the AKS cluster.

    ```cli
Use the following procedure if you're not using managed identity authentication.
- When you enable managed identity authentication, a data collection rule is created with the name *MSCI-\<cluster-region\>-<\cluster-name\>*. Currently, this name can't be modified.
+- You must be on a machine on the same private network to access live logs from a private cluster.
+
## Next steps

* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md).
* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
+
+
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Metric alerts in Azure Monitor proactively identify issues related to system resources of your Azure resources, including monitored Kubernetes clusters. Container insights provides preconfigured alert rules so that you don't have to create your own. This article describes the different types of alert rules you can create and how to enable and configure them.

> [!IMPORTANT]
-> Container insights in Azure Monitor now supports alerts based on Prometheus metrics, and metric rules will be retired on March 14, 2026. If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts.
-
+> Container insights in Azure Monitor now supports alerts based on Prometheus metrics, and metric rules will be retired on March 14, 2026. If you already use alerts based on custom metrics, you should migrate to Prometheus alerts and disable the equivalent custom metric alerts. As of August 15, 2023, you will no longer be able to configure new custom metric recommended alerts using the portal.
## Types of metric alert rules

There are two types of metric rules used by Container insights based on either Prometheus metrics or custom metrics. See a list of the specific alert rules for each at [Alert rule details](#alert-rule-details).
The methods currently available for creating Prometheus alert rules are Azure Re
1. To deploy community and recommended alerts, use this [template](https://aka.ms/azureprometheus-alerts-bicep) and follow the README.md file in the same folder for how to deploy.

+
### Edit Prometheus alert rules
The configuration change can take a few minutes to finish before it takes effect
## Metric alert rules

> [!IMPORTANT]
-> Metric alerts (preview) are retiring and no longer recommended. Please refer to the migration guidance at [Migrate from Container insights recommended alerts to Prometheus recommended alert rules (preview)](#migrate-from-metric-rules-to-prometheus-rules-preview).
-
+> Metric alerts (preview) are retiring and no longer recommended. As of August 15, 2023, you will no longer be able to configure new custom metric recommended alerts using the portal. Please refer to the migration guidance at [Migrate from Container insights recommended alerts to Prometheus recommended alert rules (preview)](#migrate-from-metric-rules-to-prometheus-rules-preview).
### Prerequisites

- You might need to enable collection of custom metrics for your cluster. See [Metrics collected by Container insights](container-insights-custom-metrics.md).
To disable custom alert rules, use the same ARM template to create the rule, but
+
## Migrate from metric rules to Prometheus rules (preview)

If you're using metric alert rules to monitor your Kubernetes cluster, you should transition to Prometheus recommended alert rules (preview) before March 14, 2026 when metric alerts are retired.
View fired alerts for your cluster from **Alerts** in the **Monitor** menu in th
- Read about the [different alert rule types in Azure Monitor](../alerts/alerts-types.md).
- Read about [alerting rule groups in Azure Monitor managed service for Prometheus](../essentials/prometheus-rule-groups.md).
+
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
Azure NetApp Files backup is supported for the following regions:
Backup vaults are organizational units to manage backups. You must create a backup vault before you can create a backup.
+Although it's possible to create multiple backup vaults in your Azure NetApp Files account, it's recommended you have only one backup vault.
>[!IMPORTANT]
>If you have existing backups on Azure NetApp Files, you must migrate the backups to a backup vault before you can perform any operation with the backup. To learn how to migrate, see [Manage backup vaults](backup-vault-manage.md#migrate-backups-to-a-backup-vault).
azure-netapp-files Backup Vault Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-vault-manage.md
Backup vaults store the backups for your Azure NetApp Files subscription.
+Although it's possible to create multiple backup vaults in your Azure NetApp Files account, it's recommended you have only one backup vault.
>[!IMPORTANT]
>If you have existing backups on Azure NetApp Files, you must migrate the backups to a backup vault before you can perform any operation with the backup.
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
azure-resource-manager Control Plane And Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/control-plane-and-data-plane.md
Azure Resource Manager handles all control plane requests. It automatically appl
* [Management Locks](lock-resources.md) * [Activity Logs](../../azure-monitor/essentials/activity-log.md)
-After authenticating the request, Azure Resource Manager sends it to the resource provider, which completes the operation.
+After authenticating the request, Azure Resource Manager sends it to the resource provider, which completes the operation. Even during periods of unavailability for the control plane, you can still access the data plane of your Azure resources. For instance, you can continue to access and operate on data in your storage account resource via its separate storage URI `https://myaccount.blob.core.windows.net` even when `https://management.azure.com` is not available.
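The control plane versus data plane distinction above boils down to which endpoint serves the request; a minimal Python sketch classifying a URL by host, using the two example URIs from this section:

```python
from urllib.parse import urlsplit

# Sketch: classify a request as control plane (handled by Azure Resource
# Manager) or data plane by its host name, per the distinction described above.
def is_control_plane(url):
    return urlsplit(url).hostname == "management.azure.com"

print(is_control_plane("https://management.azure.com/subscriptions/xxx"))       # True
print(is_control_plane("https://myaccount.blob.core.windows.net/container/b"))  # False
```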
The control plane includes two scenarios for handling requests - "green field" and "brown field". Green field refers to new resources. Brown field refers to existing resources. As you deploy resources, Azure Resource Manager understands when to create new resources and when to update existing resources. You don't have to worry that identical resources will be created.
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md
There are some important factors to consider when defining your resource group:
The Azure Resource Manager service is designed for resiliency and continuous availability. Resource Manager and control plane operations (requests sent to `management.azure.com`) in the REST API are:
-* Distributed across regions. Although Azure Resource Manager is distributed across regions, some services are regional. This distinction means that while the initial handling of the control plane operation is resilient, the request may be susceptible to regional outages when forwarded to the service.
+* Distributed across regions. Azure Resource Manager has a separate instance in each region of Azure, meaning that a failure of the Azure Resource Manager instance in one region won't impact the availability of Azure Resource Manager or other Azure services in another region. Although Azure Resource Manager is distributed across regions, some services are regional. This distinction means that while the initial handling of the control plane operation is resilient, the request may be susceptible to regional outages when forwarded to the service.
-* Distributed across Availability Zones (and regions) in locations that have multiple Availability Zones.
+* Distributed across Availability Zones (and regions) in locations that have multiple Availability Zones. This distribution ensures that when a region loses one or more zones, Azure Resource Manager can either fail over to another zone or to another region to continue to provide control plane capability for the resources.
* Not dependent on a single logical data center.
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/25/2023 Last updated : 08/03/2023
azure-resource-manager Resource Providers And Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-providers-and-types.md
Title: Resource providers and resource types
description: Describes the resource providers that support Azure Resource Manager. It describes their schemas, available API versions, and the regions that can host the resources. Last updated 07/14/2023 -+ content_well_notification: - AI-contribution
West US
* To learn about creating Resource Manager templates, see [Authoring Azure Resource Manager templates](../templates/syntax.md). * To view the resource provider template schemas, see [Template reference](/azure/templates/). * For a list that maps resource providers to Azure services, see [Resource providers for Azure services](azure-services-resource-providers.md).
-* To view the operations for a resource provider, see [Azure REST API](/rest/api/).
+* To view the operations for a resource provider, see [Azure REST API](/rest/api/).
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
Title: Template functions - resources description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve values about resources. Previously updated : 05/22/2023 Last updated : 08/02/2023
Resource Manager provides the following functions for getting resource values in
* [pickZones](#pickzones) * [providers (deprecated)](#providers) * [reference](#reference)
+* [references](#references)
* [resourceId](#resourceid) * [subscriptionResourceId](#subscriptionresourceid) * [managementGroupResourceId](#managementgroupresourceid)
The possible uses of `list*` are shown in the following table.
| Microsoft.Relay/namespaces/WcfRelays/authorizationRules | [listkeys](/rest/api/relay/wcfrelays/listkeys) | | Microsoft.Search/searchServices | [listAdminKeys](/rest/api/searchmanagement/2021-04-01-preview/admin-keys/get) | | Microsoft.Search/searchServices | [listQueryKeys](/rest/api/searchmanagement/2021-04-01-preview/query-keys/list-by-search-service) |
-| Microsoft.ServiceBus/namespaces/authorizationRules | [listKeys](/rest/api/servicebus/stable/namespaces-authorization-rules/list-keys) |
-| Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/authorizationRules | [listKeys](/rest/api/servicebus/stable/disasterrecoveryconfigs/listkeys) |
-| Microsoft.ServiceBus/namespaces/queues/authorizationRules | [listKeys](/rest/api/servicebus/stable/queues-authorization-rules/list-keys) |
-| Microsoft.ServiceBus/namespaces/topics/authorizationRules | [listKeys](/rest/api/servicebus/stable/topics%20%E2%80%93%20authorization%20rules/list-keys) |
+| Microsoft.ServiceBus/namespaces/authorizationRules | [listKeys](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/list-keys) |
+| Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/authorizationRules | [listKeys](/rest/api/servicebus/controlplane-stable/disaster-recovery-configs/list-keys) |
+| Microsoft.ServiceBus/namespaces/queues/authorizationRules | [listKeys](/rest/api/servicebus/controlplane-stable/queues-authorization-rules/list-keys) |
+| Microsoft.ServiceBus/namespaces/topics/authorizationRules | [listKeys](/rest/api/servicebus/controlplane-stable/topics%20%E2%80%93%20authorization%20rules/list-keys) |
| Microsoft.SignalRService/SignalR | [listKeys](/rest/api/signalr/signalr/listkeys) | | Microsoft.Storage/storageAccounts | [listAccountSas](/rest/api/storagerp/storageaccounts/listaccountsas) | | Microsoft.Storage/storageAccounts | [listKeys](/rest/api/storagerp/storageaccounts/listkeys) |
The following example template references a storage account that isn't deployed
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/functions/resource/reference.json":::
+## references
+
+`references(symbolic name of a resource collection, ['Full', 'Properties'])`
+
+The `references` function works similarly to [`reference`](#reference). Instead of returning an object representing a single resource's runtime state, the `references` function returns an array of objects representing the runtime states of a collection of resources. This function requires ARM template language version `1.10-experimental` with [symbolic names](../bicep/file.md#resources) enabled:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "1.10-experimental",
+ "contentVersion": "1.0.0.0",
+ ...
+}
+```
+
+In Bicep, there's no explicit `references` function. Instead, you use a symbolic collection directly, and during code generation Bicep translates it to an ARM template that uses the ARM template `references` function. This translation of symbolic collections is planned for a forthcoming Bicep release.
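+As a sketch of that direct symbolic usage (the resource, parameter, and output names here are illustrative, not from the article), a Bicep file can declare a collection with a for-loop and then reference the whole collection; Bicep would compile the `map` over the collection into an ARM template `references` expression:
+
+```bicep
+param numWorkers int = 4
+
+// A resource collection declared with a for-loop.
+resource workers 'Microsoft.ContainerInstance/containerGroups@2022-09-01' = [for i in range(0, numWorkers): {
+  name: 'worker-${i}'
+  location: resourceGroup().location
+  properties: {
+    containers: [
+      {
+        name: 'worker-container-${i}'
+        properties: {
+          image: 'mcr.microsoft.com/azuredocs/aci-helloworld'
+          resources: {
+            requests: {
+              cpu: 1
+              memoryInGB: 2
+            }
+          }
+        }
+      }
+    ]
+    osType: 'Linux'
+    ipAddress: {
+      type: 'Public'
+      ports: [
+        {
+          port: 80
+          protocol: 'TCP'
+        }
+      ]
+    }
+  }
+}]
+
+// Referencing the whole collection symbolically; this is what compiles
+// to the ARM template references function.
+output workerIps array = map(workers, w => w.properties.ipAddress.ip)
+```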
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|:--- |:--- |:--- |:--- |
+| Symbolic name of a resource collection |Yes |string |Symbolic name of a resource collection that is defined in the current template. The `references` function does not support referencing resources external to the current template. |
+| 'Full', 'Properties' |No |string |Value that specifies whether to return an array of the full resource objects. The default value is `'Properties'`. If you don't specify `'Full'`, only the properties objects of the resources are returned. The full object includes values such as the resource ID and location. |
+
+### Return value
+
+An array of objects representing the resource collection's runtime states. Each resource type returns different properties for the `reference` function, and the returned value also differs based on the value of the `'Full'` argument. For more information, see [reference](#reference).
+
+The output order of `references` is always ascending by copy index: the resource at index 0 comes first, followed by index 1, and so on. For example, *[worker-0, worker-1, worker-2, ...]*.
+
+In the preceding example, if *worker-0* and *worker-2* are deployed but *worker-1* isn't because of a false condition, the output of `references` omits the non-deployed resource and lists the deployed ones in order: *[worker-0, worker-2, ...]*. If all of the resources are omitted, the function returns an empty array.
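+A minimal sketch of that scenario (the symbolic name `workers` and the condition expression are illustrative): a copy loop whose `condition` skips index 1, so `references('workers')` would return only the members that actually deploy.
+
+```json
+"resources": {
+  "workers": {
+    "copy": {
+      "name": "workers",
+      "count": 3
+    },
+    "condition": "[not(equals(copyIndex(), 1))]",
+    "type": "Microsoft.ContainerInstance/containerGroups",
+    "apiVersion": "2022-09-01",
+    "name": "[format('worker-{0}', copyIndex())]",
+    ...
+  }
+}
+```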
+
+### Valid uses
+
+The `references` function can't be used within [resource copy loops](./copy-resources.md) or [Bicep for-loops](../bicep/loops.md). For example, `references` isn't allowed in the following scenario:
+
+```json
+{
+  "resources": {
+ "resourceCollection": {
+ "copy": { ... },
+ "properties": {
+ "prop": "[references(...)]"
+ }
+ }
+ }
+}
+```
+
+To use the `references` function or any `list*` function in the outputs section of a nested template, you must set `expressionEvaluationOptions` to use [inner scope](linked-templates.md#expression-evaluation-scope-in-nested-templates) evaluation, or use a linked template instead of a nested template.
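+For example, a nested deployment that uses `references` in its outputs must opt into inner-scope evaluation; a sketch of that shape (the deployment name is illustrative, and the inner resources are elided):
+
+```json
+{
+  "type": "Microsoft.Resources/deployments",
+  "apiVersion": "2022-09-01",
+  "name": "nestedDeployment",
+  "properties": {
+    "expressionEvaluationOptions": {
+      "scope": "inner"
+    },
+    "mode": "Incremental",
+    "template": {
+      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+      "languageVersion": "1.10-experimental",
+      "contentVersion": "1.0.0.0",
+      "resources": { ... },
+      "outputs": {
+        "workerIps": {
+          "type": "array",
+          "value": "[references('containerWorkers')]"
+        }
+      }
+    }
+  }
+}
+```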
+
+### Implicit dependency
+
+By using the `references` function, you implicitly declare that one resource depends on another resource. You don't need to also use the `dependsOn` property. The function isn't evaluated until the referenced resource has completed deployment.
+
+### References example
+
+The following example deploys a resource collection and references that collection.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "languageVersion": "1.10-experimental",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ },
+ "numWorkers": {
+ "type": "int",
+ "defaultValue": 4,
+ "metadata": {
+ "description": "The number of workers"
+ }
+ }
+ },
+ "resources": {
+ "containerWorkers": {
+ "copy": {
+ "name": "containerWorkers",
+ "count": "[length(range(0, parameters('numWorkers')))]"
+ },
+ "type": "Microsoft.ContainerInstance/containerGroups",
+ "apiVersion": "2022-09-01",
+ "name": "[format('worker-{0}', range(0, parameters('numWorkers'))[copyIndex()])]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "containers": [
+ {
+ "name": "[format('worker-container-{0}', range(0, parameters('numWorkers'))[copyIndex()])]",
+ "properties": {
+ "image": "mcr.microsoft.com/azuredocs/aci-helloworld",
+ "ports": [
+ {
+ "port": 80,
+ "protocol": "TCP"
+ }
+ ],
+ "resources": {
+ "requests": {
+ "cpu": 1,
+ "memoryInGB": 2
+ }
+ }
+ }
+ }
+ ],
+ "osType": "Linux",
+ "restartPolicy": "Always",
+ "ipAddress": {
+ "type": "Public",
+ "ports": [
+ {
+ "port": 80,
+ "protocol": "TCP"
+ }
+ ]
+ }
+ }
+ },
+ "containerController": {
+ "type": "Microsoft.ContainerInstance/containerGroups",
+ "apiVersion": "2022-09-01",
+ "name": "controller",
+ "location": "[parameters('location')]",
+ "properties": {
+ "containers": [
+ {
+ "name": "controller-container",
+ "properties": {
+ "command": [
+ "echo",
+ "[format('Worker IPs are {0}', join(map(references('containerWorkers', 'full'), lambda('w', lambdaVariables('w').properties.ipAddress.ip)), ','))]"
+ ],
+ "image": "mcr.microsoft.com/azuredocs/aci-helloworld",
+ "ports": [
+ {
+ "port": 80,
+ "protocol": "TCP"
+ }
+ ],
+ "resources": {
+ "requests": {
+ "cpu": 1,
+ "memoryInGB": 2
+ }
+ }
+ }
+ }
+ ],
+ "osType": "Linux",
+ "restartPolicy": "Always",
+ "ipAddress": {
+ "type": "Public",
+ "ports": [
+ {
+ "port": 80,
+ "protocol": "TCP"
+ }
+ ]
+ }
+ },
+ "dependsOn": [
+ "containerWorkers"
+ ]
+ }
+ },
+ "outputs": {
+ "workerIpAddresses": {
+ "type": "string",
+ "value": "[join(map(references('containerWorkers', 'full'), lambda('w', lambdaVariables('w').properties.ipAddress.ip)), ',')]"
+ },
+ "containersFull": {
+ "type": "array",
+ "value": "[references('containerWorkers', 'full')]"
+ },
+    "containers": {
+ "type": "array",
+ "value": "[references('containerWorkers')]"
+ }
+ }
+}
+```
+
+The preceding example returns the following three outputs.
+
+```json
+"outputs": {
+ "workerIpAddresses": {
+ "type": "String",
+ "value": "20.66.74.26,20.245.100.10,13.91.86.58,40.83.249.30"
+ },
+ "containersFull": {
+ "type": "Array",
+ "value": [
+ {
+ "apiVersion": "2022-09-01",
+ "condition": true,
+ "copyContext": {
+ "copyIndex": 0,
+ "copyIndexes": {
+ "": 0,
+ "containerWorkers": 0
+ },
+ "name": "containerWorkers"
+ },
+ "copyLoopSymbolicName": "containerWorkers",
+ "deploymentResourceLineInfo": {
+ "lineNumber": 30,
+ "linePosition": 25
+ },
+ "existing": false,
+ "isAction": false,
+ "isConditionTrue": true,
+ "isTemplateResource": true,
+ "location": "westus",
+ "properties": {
+ "containers": [
+ {
+ "name": "worker-container-0",
+ "properties": {
+ "environmentVariables": [],
+ "image": "mcr.microsoft.com/azuredocs/aci-helloworld",
+ "instanceView": {
+ "currentState": {
+ "detailStatus": "",
+ "startTime": "2023-07-31T19:25:31.996Z",
+ "state": "Running"
+ },
+ "restartCount": 0
+ },
+ "ports": [
+ {
+ "port": 80,
+ "protocol": "TCP"
+ }
+ ],
+ "resources": {
+ "requests": {
+ "cpu": 1.0,
+ "memoryInGB": 2.0
+ }
+ }
+ }
+ }
+ ],
+ "initContainers": [],
+ "instanceView": {
+ "events": [],
+ "state": "Running"
+ },
+ "ipAddress": {
+ "ip": "20.66.74.26",
+ "ports": [
+ {
+ "port": 80,
+ "protocol": "TCP"
+ }
+ ],
+ "type": "Public"
+ },
+ "isCustomProvisioningTimeout": false,
+ "osType": "Linux",
+ "provisioningState": "Succeeded",
+ "provisioningTimeoutInSeconds": 1800,
+ "restartPolicy": "Always",
+ "sku": "Standard"
+ },
+ "provisioningOperation": "Create",
+ "references": [],
+ "resourceGroupName": "demoRg",
+ "resourceId": "Microsoft.ContainerInstance/containerGroups/worker-0",
+ "scope": "",
+ "subscriptionId": "",
+ "symbolicName": "containerWorkers[0]"
+ },
+ ...
+ ]
+ },
+ "containers": {
+ "type": "Array",
+ "value": [
+ {
+ "containers": [
+ {
+ "name": "worker-container-0",
+ "properties": {
+ "environmentVariables": [],
+ "image": "mcr.microsoft.com/azuredocs/aci-helloworld",
+ "instanceView": {
+ "currentState": {
+ "detailStatus": "",
+ "startTime": "2023-07-31T19:25:31.996Z",
+