Updates from: 10/22/2022 01:06:13
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
Previously updated: 08/04/2022. Last updated: 10/06/2022. Added frontmatter: `zone_pivot_groups: b2c-policy-type`
If you disabled the strong [password complexity](password-complexity.md) requirement, update the password policy to [DisableStrongPassword](user-profile-attributes.md#password-policy-attribute):
> [!NOTE]
> After the user resets their password, the `passwordPolicies` value is changed back to `DisablePasswordExpiration`.
```http
PATCH https://graph.microsoft.com/v1.0/users/<user-object-ID>
Content-type: application/json
```
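A minimal request body for this PATCH might look like the following sketch. It assumes you want to keep password expiration disabled while relaxing complexity; `passwordPolicies` takes a comma-separated list of values:

```http
{
  "passwordPolicies": "DisablePasswordExpiration, DisableStrongPassword"
}
```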
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
Previously updated: 06/16/2021. Last updated: 10/21/2022.
A claim is information that an identity provider states about a user inside the token they issue for that user. Claims customization is used by tenant admins to customize the claims emitted in tokens for a specific application in their tenant. You can use claims-mapping policies to:

- Select which claims are included in tokens.
- Create claim types that don't already exist.
- Choose or change the source of data emitted in specific claims.
Claims customization supports configuring claim-mapping policies for the WS-Fed, SAML, OAuth, and OpenID Connect protocols.
This feature replaces and supersedes the [claims customization](active-directory-saml-claims-customization.md) offered through the Azure portal. On the same application, if you customize claims using the portal in addition to the Microsoft Graph/PowerShell method detailed in this document, tokens issued for that application will ignore the configuration in the portal. Configurations made through the methods detailed in this document won't be reflected in the portal.
In this article, we walk through a few common scenarios that can help you understand how to use the [claims-mapping policy type](reference-claims-mapping-policy-type.md).

## Get started
In the following examples, you create, update, link, and delete policies for service principals. Claims-mapping policies can only be assigned to service principal objects. If you're new to Azure Active Directory (Azure AD), we recommend that you [learn about how to get an Azure AD tenant](quickstart-create-new-tenant.md) before you proceed with these examples.
When creating a claims-mapping policy, you can also emit a claim from a directory extension attribute in tokens. Use _ExtensionID_ for the extension attribute instead of _ID_ in the `ClaimsSchema` element. For more info on extension attributes, see [Using directory extension attributes](active-directory-schema-extensions.md).
The [Azure AD PowerShell Module public preview release](https://www.powershellgallery.com/packages/AzureADPreview) is required to configure claims-mapping policies. The PowerShell module is in preview, while the claims mapping and token creation runtime in Azure is generally available. Updates to the preview PowerShell module could require you to update or change your configuration scripts.
To get started, do the following steps:

1. Download the latest [Azure AD PowerShell Module public preview release](https://www.powershellgallery.com/packages/AzureADPreview).
1. Run the [Connect-AzureAD](/powershell/module/azuread/connect-azuread?view=azureadps-2.0-preview&preserve-view=true) command to sign in to your Azure AD admin account. Run this command each time you start a new session.
   ```powershell
   Connect-AzureAD -Confirm
   ```

1. To see all policies that have been created in your organization, run the following command. We recommend that you run this command after most operations in the following scenarios, to check that your policies are being created as expected.
   ```powershell
   Get-AzureADPolicy
   ```
Next, create a claims mapping policy and assign it to a service principal. See these examples for common scenarios:

- [Omit the basic claims from tokens](#omit-the-basic-claims-from-tokens)
- [Include the EmployeeID and TenantCountry as claims in tokens](#include-the-employeeid-and-tenantcountry-as-claims-in-tokens)
- [Use a claims transformation in tokens](#use-a-claims-transformation-in-tokens)
After creating a claims mapping policy, configure your application to acknowledge that tokens will contain customized claims. For more information, read [security considerations](#security-considerations).
## Omit the basic claims from tokens

In this example, you create a policy that removes the [basic claim set](reference-claims-mapping-policy-type.md#claim-sets) from tokens issued to linked service principals.

1. Create a claims-mapping policy. This policy, linked to specific service principals, removes the basic claim set from tokens.
   1. To create the policy, run this command:
      ```powershell
      New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"false"}}') -DisplayName "OmitBasicClaims" -Type "ClaimsMappingPolicy"
      ```

   2. To see your new policy, and to get the policy ObjectId, run the following command:
      ```powershell
      Get-AzureADPolicy
      ```

1. Assign the policy to your service principal. You also need to get the ObjectId of your service principal.
   1. To see all your organization's service principals, you can [query the Microsoft Graph API](/graph/traverse-the-graph). Or, in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), sign in to your Azure AD account.
   2. When you have the ObjectId of your service principal, run the following command:
      ```powershell
      Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
      ```
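For readability, the policy definition passed to `New-AzureADPolicy` in the `OmitBasicClaims` example above expands to the following JSON:

```json
{
  "ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "false"
  }
}
```

Setting `IncludeBasicClaimSet` to `"false"` is what removes the basic claim set from issued tokens.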
## Include the EmployeeID and TenantCountry as claims in tokens
In this example, you create a policy that adds the EmployeeID and TenantCountry to tokens issued to linked service principals. The EmployeeID is emitted as the name claim type in both SAML tokens and JWTs. The TenantCountry is emitted as the country/region claim type in both SAML tokens and JWTs. In this example, we continue to include the basic claims set in the tokens.

1. Create a claims-mapping policy. This policy, linked to specific service principals, adds the EmployeeID and TenantCountry claims to tokens.
   1. To create the policy, run the following command:
      ```powershell
      New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema": [{"Source":"user","ID":"employeeid","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/employeeid","JwtClaimType":"employeeid"},{"Source":"company","ID":"tenantcountry","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/country","JwtClaimType":"country"}]}}') -DisplayName "ExtraClaimsExample" -Type "ClaimsMappingPolicy"
      ```
      When you define a claims mapping policy for a directory extension attribute, use the `ExtensionID` property instead of the `ID` property within the body of the `ClaimsSchema` array.
2. To see your new policy, and to get the policy ObjectId, run the following command:
      ```powershell
      Get-AzureADPolicy
      ```

1. Assign the policy to your service principal. You also need to get the ObjectId of your service principal.
   1. To see all your organization's service principals, you can [query the Microsoft Graph API](/graph/traverse-the-graph). Or, in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), sign in to your Azure AD account.
   2. When you have the ObjectId of your service principal, run the following command:
      ```powershell
      Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
      ```
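Formatted for readability, the `ExtraClaimsExample` policy definition used above is:

```json
{
  "ClaimsMappingPolicy": {
    "Version": 1,
    "IncludeBasicClaimSet": "true",
    "ClaimsSchema": [
      {
        "Source": "user",
        "ID": "employeeid",
        "SamlClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/employeeid",
        "JwtClaimType": "employeeid"
      },
      {
        "Source": "company",
        "ID": "tenantcountry",
        "SamlClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/country",
        "JwtClaimType": "country"
      }
    ]
  }
}
```

Each `ClaimsSchema` entry names a source attribute and the claim types under which it's emitted in SAML tokens and JWTs.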
## Use a claims transformation in tokens
In this example, you create a policy that emits a custom claim "JoinedData" to JWTs issued to linked service principals. This claim contains a value created by joining the data stored in the extensionattribute1 attribute on the user object with ".sandbox". In this example, we exclude the basic claims set in the tokens.

1. Create a claims-mapping policy. This policy, linked to specific service principals, emits the JoinedData claim in tokens.
   1. To create the policy, run the following command:
      ```powershell
      New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema":[{"Source":"user","ID":"extensionattribute1"},{"Source":"transformation","ID":"DataJoin","TransformationId":"JoinTheData","JwtClaimType":"JoinedData"}],"ClaimsTransformations":[{"ID":"JoinTheData","TransformationMethod":"Join","InputClaims":[{"ClaimTypeReferenceId":"extensionattribute1","TransformationClaimType":"string1"}], "InputParameters": [{"ID":"string2","Value":"sandbox"},{"ID":"separator","Value":"."}],"OutputClaims":[{"ClaimTypeReferenceId":"DataJoin","TransformationClaimType":"outputClaim"}]}]}}') -DisplayName "TransformClaimsExample" -Type "ClaimsMappingPolicy"
      ```

   2. To see your new policy, and to get the policy ObjectId, run the following command:
      ```powershell
      Get-AzureADPolicy
      ```

1. Assign the policy to your service principal. You also need to get the ObjectId of your service principal.
   1. To see all your organization's service principals, you can [query the Microsoft Graph API](/graph/traverse-the-graph). Or, in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), sign in to your Azure AD account.
   2. When you have the ObjectId of your service principal, run the following command:
      ```powershell
      Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
      ```

## Security considerations
Applications that receive tokens rely on the fact that the claim values are authoritatively issued by Azure AD and can't be tampered with. However, when you modify the token contents through claims-mapping policies, these assumptions may no longer be correct. Applications must explicitly acknowledge that tokens have been modified by the creator of the claims-mapping policy to protect themselves from claims-mapping policies created by malicious actors. This can be done in one of the following ways:
- [Configure a custom signing key](#configure-a-custom-signing-key)
- Or, [update the application manifest](#update-the-application-manifest) to accept mapped claims.

Without this, Azure AD will return an [`AADSTS50146` error code](reference-aadsts-error-codes.md#aadsts-error-codes).

### Configure a custom signing key
For multi-tenant apps, a custom signing key should be used. Don't set `acceptMappedClaims` in the app manifest. If you set up an app in the Azure portal, you get an app registration object and a service principal in your tenant. That app uses the Azure global sign-in key, which can't be used for customizing claims in tokens. To get custom claims in tokens, create a custom sign-in key from a certificate and add it to the service principal. For testing purposes, you can use a self-signed certificate. After configuring the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key).
Add the following information to the service principal:
Extract the private and public keys, base-64 encoded, from the PFX file export of your certificate.
#### Request
The following shows the format of the HTTP PATCH request to add a custom signing key to a service principal. The "key" value in the `keyCredentials` property is shortened for readability. The value is base-64 encoded. For the private key, the property usage is "Sign". For the public key, the property usage is "Verify".
```
PATCH https://graph.microsoft.com/v1.0/servicePrincipals/f47a6776-bca7-4f2e-bc6c-eec59d058e3e
Authorization: Bearer {token}
```
Use PowerShell to [instantiate an MSAL Public Client Application](msal-net-initializing-client-applications.md#initializing-a-public-client-application-from-code) and use the [Authorization Code Grant](v2-oauth2-auth-code-flow.md) flow to obtain a delegated permission access token for Microsoft Graph. Use the access token to call Microsoft Graph and configure a custom signing key for the service principal. After configuring the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key).
To run this script, you need:

1. The object ID of your application's service principal, found in the **Overview** pane of your application's entry in [Enterprise Applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/) in the Azure portal.
2. An app registration to sign in a user and get an access token to call Microsoft Graph. Get the application (client) ID of this app in the **Overview** pane of the application's entry in [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal. The app registration should have the following configuration:
   - A redirect URI of "http://localhost" listed in the **Mobile and desktop applications** platform configuration
   - In **API permissions**, Microsoft Graph delegated permissions **Application.ReadWrite.All** and **User.Read** (make sure you grant Admin consent to these permissions)
3. A user who signs in to get the Microsoft Graph access token. The user should have one of the following Azure AD administrative roles (required to update the service principal):
   - Cloud Application Administrator
   - Application Administrator
   - Global Administrator
4. A certificate to configure as a custom signing key for our application. You can either create a self-signed certificate or obtain one from your trusted certificate authority. The following certificate components are used in the script:
   - public key (typically a .cer file)
   - private key in PKCS#12 format (in .pfx file)
   - password for the private key (pfx file)
The private key must be in PKCS#12 format since Azure AD doesn't support other format types. Using the wrong format can result in the error "Invalid certificate: Key value is invalid certificate" when using Microsoft Graph to PATCH the service principal with a `keyCredentials` containing the certificate info.
```powershell
$pwdSecure = ConvertTo-SecureString -String $pwd -Force -AsPlainText
$path = 'cert:\currentuser\my\' + $cert.Thumbprint
$cerFile = $location + "\\" + $fqdn + ".cer"
$pfxFile = $location + "\\" + $fqdn + ".pfx"
# Export the public and private keys
Export-PfxCertificate -cert $path -FilePath $pfxFile -Password $pwdSecure
Export-Certificate -cert $path -FilePath $cerFile
$ClientID = "<app-id>"
$loginURL = "https://login.microsoftonline.com"
$tenantdomain = "fourthcoffeetest.onmicrosoft.com"
$redirectURL = "http://localhost" # this reply URL is needed for PowerShell Core
[string[]] $Scopes = "https://graph.microsoft.com/.default"
$pfxpath = $pfxFile # path to pfx file
$cerpath = $cerFile # path to cer file
$SPOID = "<service-principal-id>"
$graphuri = "https://graph.microsoft.com/v1.0/serviceprincipals/$SPOID"
$password = $pwd # password for the pfx file
# choose the correct folder name for MSAL based on PowerShell version 5.1 (.Net) or PowerShell Core (.Net Core)
if ($PSVersionTable.PSVersion.Major -gt 5)
{
    $core = $true
    $foldername = "netcoreapp2.1"
}
else
{
    $core = $false
    $foldername = "net45"
}
# Load the MSAL/microsoft.identity/client assembly -- needed once per PowerShell session
[System.Reflection.Assembly]::LoadFrom((Get-ChildItem C:/Users/<username>/.nuget/packages/microsoft.identity.client/4.32.1/lib/$foldername/Microsoft.Identity.Client.dll).fullname) | out-null
$global:app = $null
$ClientApplicationBuilder = [Microsoft.Identity.Client.PublicClientApplicationBuilder]::Create($ClientID)
[void]$ClientApplicationBuilder.WithAuthority($("$loginURL/$tenantdomain"))
[void]$ClientApplicationBuilder.WithRedirectUri($redirectURL)
$global:app = $ClientApplicationBuilder.Build()
Function Get-GraphAccessTokenFromMSAL {
    [Microsoft.Identity.Client.AuthenticationResult] $authResult = $null
    $AquireTokenParameters = $global:app.AcquireTokenInteractive($Scopes)
    try {
        # execute the interactive token request (reconstructed step; the excerpt elides it)
        $authResult = $AquireTokenParameters.ExecuteAsync().GetAwaiter().GetResult()
    }
    catch {
        $ErrorMessage = $_.Exception.Message
        Write-Host $ErrorMessage
    }
    return $authResult
}
$myvar = Get-GraphAccessTokenFromMSAL
if ($myvar)
{
    $GraphAccessToken = $myvar.AccessToken
    Write-Host "Access Token: " $myvar.AccessToken
    #$GraphAccessToken = "eyJ0eXAiOiJKV1QiL ... iPxstltKQ"
    # this is for PowerShell Core
    $Secure_String_Pwd = ConvertTo-SecureString $password -AsPlainText -Force
    # reading certificate files and creating Certificate Object
    if ($core)
    {
        # PowerShell Core reads raw bytes with -AsByteStream (reconstructed branch; the excerpt elides it)
        $pfx_cert = Get-Content $pfxpath -AsByteStream -Raw
        $cer_cert = Get-Content $cerpath -AsByteStream -Raw
    }
    else
    {
        # Windows PowerShell 5.1 uses -Encoding Byte instead (reconstructed branch)
        $pfx_cert = Get-Content $pfxpath -Encoding Byte
        $cer_cert = Get-Content $cerpath -Encoding Byte
    }
    # $cert = Get-PfxCertificate -FilePath $pfxpath
    $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($pfxpath, $password)
    # base 64 encode the private key and public key
    $base64pfx = [System.Convert]::ToBase64String($pfx_cert)
    $base64cer = [System.Convert]::ToBase64String($cer_cert)
    # getting id for the keyCredential object
    $guid1 = New-Guid
    $guid2 = New-Guid
    # get the custom key identifier from the certificate thumbprint:
    $hasher = [System.Security.Cryptography.HashAlgorithm]::Create('sha256')
    $hash = $hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($cert.Thumbprint))
    $customKeyIdentifier = [System.Convert]::ToBase64String($hash)
    # get end date and start date for our keycredentials
    $endDateTime = ($cert.NotAfter).ToUniversalTime().ToString( "yyyy-MM-ddTHH:mm:ssZ" )
    $startDateTime = ($cert.NotBefore).ToUniversalTime().ToString( "yyyy-MM-ddTHH:mm:ssZ" )
    # building our json payload
    $object = [ordered]@{
        keyCredentials = @(
            [ordered]@{
                customKeyIdentifier = $customKeyIdentifier
                endDateTime = $endDateTime
                keyId = $guid1
                startDateTime = $startDateTime
                type = "X509CertAndPassword"
                usage = "Sign"
                key = $base64pfx
                displayName = "CN=fourthcoffeetest"
            },
            [ordered]@{
                customKeyIdentifier = $customKeyIdentifier
                endDateTime = $endDateTime
                keyId = $guid2
                startDateTime = $startDateTime
                type = "AsymmetricX509Cert"
                usage = "Verify"
                key = $base64cer
                displayName = "CN=fourthcoffeetest"
            }
        )
        passwordCredentials = @(
            [ordered]@{
                customKeyIdentifier = $customKeyIdentifier
                keyId = $guid1
                endDateTime = $endDateTime
                startDateTime = $startDateTime
                secretText = $password
            }
        )
    }
    $json = $object | ConvertTo-Json -Depth 99
    Write-Host "JSON Payload:"
    Write-Output $json
    # Request Header
    $Header = @{}
    $Header.Add("Authorization","Bearer $($GraphAccessToken)")
    $Header.Add("Content-Type","application/json")
    try
    {
        Invoke-RestMethod -Uri $graphuri -Method "PATCH" -Headers $Header -Body $json
    }
    catch
    {
        # Dig into the exception to get the Response details.
        # Note that value__ is not a typo.
        Write-Host "StatusCode:" $_.Exception.Response.StatusCode.value__
        Write-Host "StatusDescription:" $_.Exception.Response.StatusDescription
    }
    Write-Host "Complete Request"
}
else
```

#### Validate token signing key

Apps that have claims mapping enabled must validate their token signing keys by appending `appid={client_id}` to their [OpenID Connect metadata requests](v2-protocols-oidc.md#fetch-the-openid-configuration-document). Below is the format of the OpenID Connect metadata document you should use:

```
https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration?appid={client_id}
```
### Update the application manifest
For single tenant apps, you can set the `acceptMappedClaims` property to `true` in the [application manifest](reference-app-manifest.md). As documented on the [apiApplication resource type](/graph/api/resources/apiapplication#properties), this allows an application to use claims mapping without specifying a custom signing key.
Don't set the `acceptMappedClaims` property to `true` for multi-tenant apps, which can allow malicious actors to create claims-mapping policies for your app.
This does require the requested token audience to use a verified domain name of your Azure AD tenant, which means you should make sure to set the `Application ID URI` (represented by `identifierUris` in the application manifest), for example, to `https://contoso.com/my-api` or (simply using the default tenant name) `https://contoso.onmicrosoft.com/my-api`.
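Put together, the relevant application manifest entries might look like this sketch, using the example values above:

```json
{
  "acceptMappedClaims": true,
  "identifierUris": [ "https://contoso.onmicrosoft.com/my-api" ]
}
```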
If you're not using a verified domain, Azure AD will return an `AADSTS501461` error code with the message _"AcceptMappedClaims is only supported for a token audience matching the application GUID or an audience within the tenant's verified domains. Either change the resource identifier, or use an application-specific signing key."_
## Next steps
active-directory Msal Error Handling Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-dotnet.md
Title: Handle errors and exceptions in MSAL.NET description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL.NET.
Last updated: 11/26/2020
active-directory Msal Error Handling Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-js.md
Title: Handle errors and exceptions in MSAL.js description: Learn how to handle errors and exceptions, Conditional Access claims challenges, and retries in MSAL.js applications.
Last updated: 11/26/2020
active-directory Msal Logging Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-dotnet.md
Title: Logging errors and exceptions in MSAL.NET description: Learn how to log errors and exceptions in MSAL.NET
Previously updated: 01/25/2021. Last updated: 10/21/2022.
## Configure logging in MSAL.NET
In MSAL, logging is set at application creation using the `.WithLogging` builder modifier. This method takes optional parameters:
- `Level` enables you to decide which level of logging you want. Setting it to Errors will only get errors.
- `PiiLoggingEnabled` enables you to log personal and organizational data (PII) if set to true. By default, this is set to false, so that your application doesn't log personal data.
- `LogCallback` is set to a delegate that does the logging. If `PiiLoggingEnabled` is true, this method will receive messages that can have PII, in which case the `containsPii` flag will be set to true.
- `DefaultLoggingEnabled` enables the default logging for the platform. By default, it's false. If you set it to true, it uses Event Tracing in Desktop/UWP applications, NSLog on iOS, and logcat on Android.
active-directory Msal Logging Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-js.md
Title: Logging errors and exceptions in MSAL.js description: Learn how to log errors and exceptions in MSAL.js
Last updated: 12/21/2021
active-directory Msal Net Aad B2c Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-aad-b2c-considerations.md
Title: Azure AD B2C and MSAL.NET description: Considerations when using Azure AD B2C with the Microsoft Authentication Library for .NET (MSAL.NET).
active-directory Msal Net Acquire Token Silently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-acquire-token-silently.md
Title: Acquire a token from the cache (MSAL.NET) description: Learn how to acquire an access token silently (from the token cache) using the Microsoft Authentication Library for .NET (MSAL.NET).
Last updated: 07/16/2019. #Customer intent: As an application developer, I want to learn how to use the AcquireTokenSilent method so I can acquire tokens from the cache.
active-directory Msal Net Adfs Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-adfs-support.md
Title: AD FS support in MSAL.NET description: Learn about Active Directory Federation Services (AD FS) support in the Microsoft Authentication Library for .NET (MSAL.NET).
Last updated: 03/22/2022. #Customer intent: As an application developer, I want to learn about AD FS support in MSAL.NET so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Net Clear Token Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-clear-token-cache.md
Title: Clear the token cache (MSAL.NET) description: Learn how to clear the token cache using the Microsoft Authentication Library for .NET (MSAL.NET).
Last updated: 05/07/2019. #Customer intent: As an application developer, I want to learn how to clear the token cache.
active-directory Msal Net Initializing Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-initializing-client-applications.md
Title: Initialize MSAL.NET client applications description: Learn about initializing public client and confidential client applications using the Microsoft Authentication Library for .NET (MSAL.NET). -+
Last updated 09/18/2019-+ #Customer intent: As an application developer, I want to learn about initializing client applications so I can decide if this platform meets my application development needs and requirements.
active-directory Msal Net Instantiate Confidential Client Config Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-instantiate-confidential-client-config-options.md
Title: Instantiate a confidential client app (MSAL.NET) description: Learn how to instantiate a confidential client application with configuration options using the Microsoft Authentication Library for .NET (MSAL.NET). -+
Last updated 04/30/2019-+ #Customer intent: As an application developer, I want to learn how to use application config options so I can instantiate a confidential client app.
active-directory Msal Net Instantiate Public Client Config Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-instantiate-public-client-config-options.md
Title: Instantiate a public client app (MSAL.NET) description: Learn how to instantiate a public client application with configuration options using the Microsoft Authentication Library for .NET (MSAL.NET). -+
Last updated 04/30/2019-+ #Customer intent: As an application developer, I want to learn how to use application config options so I can instantiate a public client app.
active-directory Scenario Desktop Acquire Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token.md
Title: Acquire a token to call a web API (desktop app) description: Learn how to build a desktop app that calls web APIs to acquire a token for the app -+ - Previously updated : 08/25/2021-- Last updated : 10/21/2022+++ #Customer intent: As an application developer, I want to know how to write a desktop app that calls web APIs by using the Microsoft identity platform.
After you've built an instance of the public client application, you'll use it to acquire tokens.
## Recommended pattern
-The web API is defined by its `scopes`. Whatever the experience you provide in your application, the pattern to use is:
+The web API is defined by its *scopes*. Whatever the experience you provide in your application, the pattern to use is:
- Systematically attempt to get a token from the token cache by calling `AcquireTokenSilent`. - If this call fails, use the `AcquireToken` flow that you want to use, which is represented here by `AcquireTokenXX`.
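The recommended pattern above can be sketched in Python-like terms. This is a minimal analogue with a hypothetical cache and flow callback standing in for MSAL.NET's `AcquireTokenSilent` and `AcquireTokenXX` calls, not the real MSAL API:

```python
# Sketch of the recommended token-acquisition pattern (hypothetical helpers,
# not the real MSAL.NET API): try the cache first, and fall back to the
# chosen AcquireTokenXX flow only when the silent attempt fails.

class TokenCache:
    def __init__(self):
        self._tokens = {}

    def acquire_token_silent(self, scopes):
        # Return a cached token for the requested scopes, or None on a miss.
        return self._tokens.get(tuple(scopes))

    def store(self, scopes, token):
        self._tokens[tuple(scopes)] = token


def acquire_token(cache, scopes, fallback_flow):
    token = cache.acquire_token_silent(scopes)
    if token is None:
        # Cache miss: run the fallback flow, then cache the result.
        token = fallback_flow(scopes)
        cache.store(scopes, token)
    return token
```

On the first call the cache misses and the fallback flow runs; subsequent calls for the same scopes are served silently from the cache.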
active-directory Documo Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/documo-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the following prerequisites in place.
## Step 2. Configure Documo to support provisioning with Azure AD
-1. [Generate an API key](https://help.documo.com/support/solutions/articles/72000513690-how-to-enable-api-and-retrieve-api-key) to use for Azure AD provisioning.
+1. [Generate an API key](https://help.documo.com/hc/en-us/articles/7789630698011-How-to-Enable-and-Retrieve-API-Keys) to use for Azure AD provisioning.
1. Find and remember your API URL. The default API URL is `https://api.documo.com`. If you have a custom Documo API domain, you can reference it in the domain tab of the Documo branding settings page. ## Step 3. Add Documo from the Azure AD application gallery
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-headers-easy-button.md
+
+ Title: 'Tutorial: Azure AD SSO integration with F5's BIG-IP Easy Button for header-based SSO'
+description: Learn how to configure SSO between Azure AD and F5's BIG-IP Easy Button for header-based SSO.
++++++++ Last updated : 10/17/2022+++
+# Tutorial: Configure SSO between Azure AD and F5's BIG-IP Easy Button for header-based SSO
+
+In this tutorial, you'll learn how to integrate F5 with Azure Active Directory (Azure AD). When you integrate F5 with Azure AD, you can:
+
+* Control in Azure AD who has access to F5.
+* Enable your users to be automatically signed-in to F5 with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+> [!NOTE]
+> F5 BIG-IP APM is available for purchase: [Purchase Now](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5-big-ip-best?tab=Overview).
+
+## Scenario description
+
+This scenario looks at the classic legacy application using **HTTP authorization headers** to manage access to protected content.
+
+Being legacy, the application lacks the modern protocols needed for a direct integration with Azure AD. The application could be modernized, but doing so is costly, requires careful planning, and introduces the risk of downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) bridges the gap between the legacy application and the modern ID control plane through protocol transitioning.
+
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and headers-based SSO, significantly improving the overall security posture of the application.
+
+> [!NOTE]
+> Organizations can also gain remote access to this type of application with [Azure AD Application Proxy](../app-proxy/application-proxy.md).
+
+## Scenario architecture
+
+The secure hybrid access (SHA) solution for this scenario is made up of:
+
+**Application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the backend application.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+ ![Screenshot of Secure hybrid access - SP initiated flow.](./media/f5-big-ip-headers-easy-button/sp-initiated-flow.png)
+
+| Steps| Description |
+| - |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP injects Azure AD attributes as headers in request to the application |
+| 6| Application authorizes request and returns payload |
+
+## Prerequisites
+
+Prior BIG-IP experience isn't necessary, but you'll need:
+
+* An Azure AD free subscription or above.
+
+* An existing BIG-IP, or [deploy a BIG-IP Virtual Edition (VE) in Azure](../manage-apps/f5-bigip-deployment-guide.md).
+
+* Any of the following F5 BIG-IP license SKUs:
+
+ * F5 BIG-IP® Best bundle.
+
+ * F5 BIG-IP Access Policy Manager™ (APM) standalone license.
+
+ * F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM).
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD.
+
+* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator).
+
+* An [SSL Web certificate](../manage-apps/f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing.
+
+* An existing header-based application, or [set up a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing.
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers the Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. Deployment and policy management are handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+> [!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+
+This first step creates a tenant app registration that will be used to authorize **Easy Button** access to Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for the published application and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrator rights.
+2. From the left navigation pane, select the **Azure Active Directory** service.
+3. Under Manage, select **App registrations > New registration**.
+4. Enter a display name for your application. For example, `F5 BIG-IP Easy Button`.
+5. Specify who can use the application > **Accounts in this organizational directory only**.
+6. Select **Register** to complete the initial app registration.
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization.
+9. In the **Certificates & Secrets** blade, generate a new **client secret** and note it down.
+10. From the **Overview** blade, note the **Client ID** and **Tenant ID**.
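With the **Client ID**, **Tenant ID**, and client secret in hand, the Easy Button obtains Graph access via the OAuth2 client-credentials grant. The sketch below only builds that token request for illustration, with placeholder values; nothing is sent over the network:

```python
# Build (but don't send) the OAuth2 client-credentials token request used
# to obtain a Microsoft Graph access token. All values are placeholders.

def build_token_request(tenant_id, client_id, client_secret):
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # .default requests the application permissions granted to the app.
        "scope": "https://graph.microsoft.com/.default",
    }
    return url, body
```

The `.default` scope asks Azure AD to issue a token carrying the application permissions that were admin-consented in the previous step.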
+
+## Configure Easy Button
+
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
+
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+
+ ![Screenshot for Configure Easy Button- Install the template.](./media/f5-big-ip-headers-easy-button/easy-button-template.png)
+
+2. Review the list of configuration steps and select **Next**.
+
+ ![Screenshot for Configure Easy Button - List configuration steps.](./media/f5-big-ip-headers-easy-button/configuration-steps.png)
+
+3. Follow the sequence of steps required to publish your application.
+
+ ![Screenshot of Configuration steps flow.](./media/f5-big-ip-headers-easy-button/configuration-steps-flow.png#lightbox)
++
+### Configuration Properties
+
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. The **Azure Service Account Details** section represents the client application you registered earlier in your Azure AD tenant. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
+
+Some of these are global settings, so they can be reused for publishing more applications, further reducing deployment time and effort.
+
+1. Enter a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations.
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**.
+
+3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** you noted when registering the Easy Button client in your tenant.
+
+4. Confirm the BIG-IP can successfully connect to your tenant, and then select **Next**.
+
+ ![Screenshot for Configuration General and Service Account properties.](./media/f5-big-ip-headers-easy-button/configuration-properties.png)
+
+### Service Provider
+
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured.
+
+2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token.
+
+ ![Screenshot for Service Provider settings.](./media/f5-big-ip-headers-easy-button/service-provider.png)
+
+ The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content of tokens can't be intercepted, and that personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**.
+
+ ![Screenshot for Configure Easy Button- Create New import.](./media/f5-big-ip-headers-easy-button/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab.
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab.
+
+ ![Screenshot for Configure Easy Button- Import new cert.](./media/f5-big-ip-headers-easy-button/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**.
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions.
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions.
+
+ ![Screenshot for Service Provider security settings.](./media/f5-big-ip-headers-easy-button/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **F5 BIG-IP APM Azure AD Integration > Add**.
+
+ ![Screenshot for Azure configuration add BIG-IP application.](./media/f5-big-ip-headers-easy-button/azure-configuration-add-app.png)
+
+#### Azure Configuration
+
+1. Enter the **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and choose the icon that users will see on the [MyApps portal](https://myapplications.microsoft.com/).
+
+2. To enable IdP-initiated sign-on, leave **Sign On URL (optional)** empty.
+
+ ![Screenshot for Azure configuration add display info.](./media/f5-big-ip-headers-easy-button/azure-configuration-properties.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier.
+
+4. Enter the certificate's password in **Signing Key Passphrase**.
+
+5. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD.
+
+ ![Screenshot for Azure configuration - Add signing certificates info.](./media/f5-big-ip-headers-easy-button/azure-configuration-sign-certificates.png)
+
+6. **Users and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing; otherwise, all access will be denied.
+
+ ![Screenshot for Azure configuration - Add users and groups.](./media/f5-big-ip-headers-easy-button/azure-configuration-add-user-groups.png)
+
+#### User Attributes & Claims
+
+When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
+
+For this example, you can include one more attribute:
+
+1. Enter **Header Name** as `employeeid`.
+
+2. Enter **Source Attribute** as `user.employeeid`.
+
+ ![Screenshot for user attributes and claims.](./media/f5-big-ip-headers-easy-button/user-attributes-claims.png)
+
+#### Additional User Attributes
+
+In the **Additional User Attributes** tab, you can enable session augmentation, which is required by a variety of distributed systems such as Oracle, SAP, and other Java-based implementations that need attributes stored in other directories. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, and so on.
+
+ ![Screenshot for additional user attributes.](./media/f5-big-ip-headers-easy-button/additional-user-attributes.png)
+
+>[!NOTE]
+>This feature has no correlation to Azure AD but is another source of attributes.
+
+#### Conditional Access Policy
+
+Conditional Access (CA) policies are enforced after Azure AD pre-authentication to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, lists all CA policies that don't include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting **All cloud apps**. These policies can't be deselected or moved to the **Available Policies** list because they're enforced at the tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list.
+2. Select the right arrow and move it to the **Selected Policies** list.
+
+Selected policies should have either the **Include** or the **Exclude** option checked. If both options are checked, the policy isn't enforced.
+
+ ![Screenshot for CA policies.](./media/f5-big-ip-headers-easy-button/conditional-access-policy.png)
+
+> [!NOTE]
+> The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
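The Include/Exclude rule above reduces to a simple predicate. This sketch (not BIG-IP code) assumes a policy with neither box checked is likewise not enforced:

```python
# A selected policy is enforced only when exactly one of Include/Exclude is
# checked; checking both (or, we assume, neither) leaves it unenforced.

def policy_enforced(include_checked, exclude_checked):
    return include_checked != exclude_checked
```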
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object, represented by a virtual IP address, that listens for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is fine for testing.
+
+2. Enter **Service Port** as *443* for HTTPS.
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS.
+
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites or leave the default whilst testing.
+
+ ![Screenshot for Virtual server.](./media/f5-big-ip-headers-easy-button/virtual-server.png)
+
+### Pool Properties
+
+The **Application Pool tab** details the services behind a BIG-IP that are represented as a pool, containing one or more application servers.
+
+1. From **Select a Pool**, create a new pool or select an existing one.
+
+2. Choose the **Load Balancing Method** as `Round Robin`.
+
+3. For **Pool Servers** select an existing node or specify an IP and port for the server hosting the header-based application.
+
+ ![Screenshot for Application pool.](./media/f5-big-ip-headers-easy-button/application-pool.png)
+
+The backend application in this example listens on HTTP port 80; if yours uses HTTPS, switch the port to 443.
+
+#### Single Sign-On & HTTP Headers
+
+Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO; we'll enable the last of these and configure it as follows.
+
+* **Header Operation:** Insert
+* **Header Name:** upn
+* **Header Value:** %{session.saml.last.identity}
+
+* **Header Operation:** Insert
+* **Header Name:** employeeid
+* **Header Value:** %{session.saml.last.attr.name.employeeid}
+
+ ![Screenshot for SSO and HTTP headers.](./media/f5-big-ip-headers-easy-button/sso-http-headers.png)
+
+>[!NOTE]
+>APM session variables defined within curly brackets are case-sensitive. For example, entering `OrclGUID` when the Azure AD attribute name is defined as `orclguid` causes an attribute mapping failure.
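To make the substitution behavior concrete, here's a small Python sketch (not APM code) of how `%{session...}` header values resolve against a session's variable table, including the case-sensitivity pitfall noted above:

```python
import re

# Sketch of APM-style header value substitution: %{name} placeholders are
# resolved from a session variable table. Lookups are case-sensitive, so a
# wrongly-cased variable name resolves to an empty value.

HEADER_CONFIG = [
    ("upn", "%{session.saml.last.identity}"),
    ("employeeid", "%{session.saml.last.attr.name.employeeid}"),
]

def render_headers(session_vars, config=HEADER_CONFIG):
    def substitute(template):
        return re.sub(r"%\{([^}]+)\}",
                      lambda m: session_vars.get(m.group(1), ""),
                      template)
    return {name: substitute(value) for name, value in config}
```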
+
+### Session Management
+
+The BIG-IP's session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5's docs](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+
+Along with this, the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP-initiated sign-outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out of the application.
+
+If the BIG-IP webtop portal is used to access published applications, a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But consider a scenario where the BIG-IP webtop portal isn't used: the user then has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this is to add an SLO function to your application's sign-out button, so that it can redirect your client to either the Azure AD SAML or BIG-IP sign-out endpoint. The URL for the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+
+If changing the app isn't an option, consider having the BIG-IP listen for the application's sign-out call and trigger SLO upon detecting the request. Refer to the [Oracle PeopleSoft SLO guidance](../manage-apps/f5-big-ip-oracle-peoplesoft-easy-button.md#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
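If you do add such an SLO redirect, the tenant SAML sign-out endpoint typically follows a predictable pattern. This hypothetical helper sketches its construction with a placeholder tenant ID; always confirm the actual value under **App Registrations > Endpoints**:

```python
# Hypothetical helper: build the Azure AD SAML sign-out URL an application's
# sign-out button could redirect to. The tenant ID is a placeholder; confirm
# the real endpoint under App Registrations > Endpoints in the Azure portal.

def saml_signout_url(tenant_id):
    return f"https://login.microsoftonline.com/{tenant_id}/saml2"
```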
+
+## Summary
+
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of **Enterprise applications**.
+
+Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals.
+
+## Next steps
+
+From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapplications.microsoft.com/). After authenticating against Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+The following screenshot shows the injected headers displayed by the headers-based application.
+
+ ![Screenshot for App views.](./media/f5-big-ip-headers-easy-button/app-view.png)
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](../manage-apps/f5-big-ip-header-advanced.md).
+
+Alternatively, the BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your application's configs.
+
+ ![Screenshot for Configure Easy Button - Strict Management.](./media/f5-big-ip-headers-easy-button/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+Failure to access an SHA protected application can be due to any number of factors. BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**.
+
+2. Select the row for your published application then **Edit > Access System Logs**.
+
+3. Select **Debug** from the SSO list then **OK**.
+
+Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data.
+
+If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**.
+
+2. Run the report for the last hour to see if the logs provide any clues. The **View session variables** link for your session will also help you understand whether the APM is receiving the expected claims from Azure AD.
+
+If you don't see a BIG-IP error page, the issue is probably more related to the backend request, or to SSO from the BIG-IP to the application.
+
+1. Head to **Access Policy > Overview > Active Sessions** and select the link for your active session.
+
+2. The **View Variables** link in this location may also help you determine the root cause of SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source.
+
+For more information, see the F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in the F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
active-directory Github Enterprise Managed User Oidc Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning service.
5. Under the **Admin Credentials** section, input your GitHub Enterprise Managed User (OIDC) Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to GitHub Enterprise Managed User (OIDC). If the connection fails, ensure your GitHub Enterprise Managed User (OIDC) account has created the secret token as an enterprise owner and try again.
- For "Tenant URL", type https://api.github.com/scim/v2/enterprises/YOUR_ENTERPRISE, replacing YOUR_ENTERPRISE with the name of your enterprise account.
+ For "Tenant URL", type `https://api.github.com/scim/v2/enterprises/YOUR_ENTERPRISE`, replacing YOUR_ENTERPRISE with the name of your enterprise account.
For example, if your enterprise account's URL is `https://github.com/enterprises/octo-corp`, the name of the enterprise account is `octo-corp`.
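The Tenant URL derivation described above can be sketched as follows (the helper name is hypothetical; only the URL shapes come from the text):

```python
# Derive the SCIM Tenant URL from an enterprise account URL, per the rule
# above: the last path segment of https://github.com/enterprises/<name>
# is the enterprise account name.

def scim_tenant_url(enterprise_account_url):
    name = enterprise_account_url.rstrip("/").rsplit("/", 1)[-1]
    return f"https://api.github.com/scim/v2/enterprises/{name}"
```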
active-directory Headerf5 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/headerf5-tutorial.md
- Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with F5 | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and F5.
-------- Previously updated : 02/09/2021---
-# Tutorial: Configure single sign-on (SSO) between Azure Active Directory and F5
-
-In this tutorial, you'll learn how to integrate F5 with Azure Active Directory (Azure AD). When you integrate F5 with Azure AD, you can:
-
-* Control in Azure AD who has access to F5.
-* Enable your users to be automatically signed-in to F5 with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-> [!NOTE]
-> F5 BIG-IP APM [Purchase Now](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5-big-ip-best?tab=Overview).
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-
-* F5 single sign-on (SSO) enabled subscription.
-
-* Deploying the joint solution requires the following license:
-
- * F5 BIG-IP® Best bundle (or)
-
- * F5 BIG-IP Access Policy Manager™ (APM) standalone license
-
- * F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM).
-
- * In addition to the above license, the F5 system may also be licensed with:
-
- * A URL Filtering subscription to use the URL category database
-
- * An F5 IP Intelligence subscription to detect and block known attackers and malicious traffic
-
- * A network hardware security module (HSM) to safeguard and manage digital keys for strong authentication
-
-* F5 BIG-IP system is provisioned with APM modules (LTM is optional)
-
-* Although optional, it is highly recommended to Deploy the F5 systems in a [sync/failover device group](https://techdocs.f5.com/content/techdocs/en-us/bigip-14-1-0/big-ip-device-service-clustering-administration-14-1-0.html) (S/F DG), which includes the active standby pair, with a floating IP address for high availability (HA). Further interface redundancy can be achieved using the Link Aggregation Control Protocol (LACP). LACP manages the connected physical interfaces as a single virtual interface (aggregate group) and detects any interface failures within the group.
-
-* For Kerberos applications, an on-premises AD service account for constrained delegation. Refer to the [F5 Documentation](https://support.f5.com/csp/article/K43063049) for creating an AD delegation account.
-
-## Access guided configuration
-
-* Access guided configuration is supported on F5 TMOS version 13.1.0.8 and above. If your BIG-IP system is running a version earlier than 13.1.0.8, refer to the **Advanced configuration** section.
-
-* Access guided configuration presents a completely new and streamlined user experience. This workflow-based architecture provides intuitive, re-entrant configuration steps tailored to the selected topology.
-
-* Before proceeding to the configuration, upgrade the guided configuration by downloading the latest use case pack from [downloads.f5.com](https://login.f5.com/resource/login.jsp?ctx=719748). To upgrade, follow the procedure below.
-
- >[!NOTE]
- >The screenshots below are for the latest released version (BIG-IP 15.0 with AGC version 5.0). The configuration steps below are valid for this use case from 13.1.0.8 through the latest BIG-IP version.
-
-1. On the F5 BIG-IP Web UI, click **Access >> Guided Configuration**.
-
-1. On the **Guided Configuration** page, click **Upgrade Guided Configuration** in the top left-hand corner.
-
- ![Screenshot shows the Guided Configuration page with the Update Guided Configuration link.](./media/headerf5-tutorial/configure14.png)
-
-1. On the **Upgrade Guided Configuration** pop-up screen, select **Choose File** to upload the downloaded use case pack, and then click the **Upload and Install** button.
-
- ![Screenshot shows the Upgrade Guided Configuration dialog box with Choose File selected.](./media/headerf5-tutorial/configure15.png)
-
-1. When the upgrade is complete, click the **Continue** button.
-
- ![Screenshot shows the Upgrade Guided Configuration dialog box with a completion message.](./media/headerf5-tutorial/configure16.png)
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-F5 SSO can be configured in three different ways:
-
-- [Configure F5 single sign-on for Header Based application](#configure-f5-single-sign-on-for-header-based-application)
-- [Configure F5 single sign-on for Kerberos application](kerbf5-tutorial.md)
-- [Configure F5 single sign-on for Advanced Kerberos application](advance-kerbf5-tutorial.md)
-### Key Authentication Scenarios
-
-Apart from Azure Active Directory's native support for modern authentication protocols like OpenID Connect, SAML, and WS-Fed, F5 extends secure access to legacy authentication apps for both internal and external access with Azure AD, enabling modern scenarios (for example, passwordless access) for these applications. These include:
-
-* Header-based authentication apps
-
-* Kerberos authentication apps
-
-* Anonymous authentication or no inbuilt authentication apps
-
-* NTLM authentication apps (protection with dual prompts for the user)
-
-* Forms Based Application (protection with dual prompts for the user)
-
-## Adding F5 from the gallery
-
-To configure the integration of F5 into Azure AD, you need to add F5 from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **F5** in the search box.
-1. Select **F5** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-
-## Configure and test Azure AD SSO for F5
-
-Configure and test Azure AD SSO with F5 using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in F5.
-
-To configure and test Azure AD SSO with F5, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure F5 SSO](#configure-f5-sso)** - to configure the single sign-on settings on the application side.
- 1. **[Create F5 test user](#create-f5-test-user)** - to have a counterpart of B.Simon in F5 that is linked to the Azure AD representation of the user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **F5** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
-
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<YourCustomFQDN>.f5.com/`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<YourCustomFQDN>.f5.com/`
-
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<YourCustomFQDN>.f5.com/`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. Contact the [F5 Client support team](https://support.f5.com/csp/knowledge-center/software/BIG-IP?module=BIG-IP%20APM45) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
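All three values above follow the same `https://<YourCustomFQDN>.f5.com/` pattern. As an illustrative sanity check before pasting values into the portal (the validation rule and sample FQDN below are assumptions for illustration, not an official check):

```python
from urllib.parse import urlparse

def looks_like_f5_saml_url(url: str) -> bool:
    """Rough check that a candidate Identifier/Reply/Sign-on URL
    follows the https://<YourCustomFQDN>.f5.com/ pattern."""
    parsed = urlparse(url)
    return (
        parsed.scheme == "https"
        and parsed.hostname is not None
        and parsed.hostname.endswith(".f5.com")
    )

# Placeholder FQDN, for illustration only
print(looks_like_f5_saml_url("https://headerapp.contoso.f5.com/"))  # True
print(looks_like_f5_saml_url("http://headerapp.contoso.f5.com/"))   # False
```

A check like this only catches obvious typos; the authoritative values still come from your F5 configuration.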
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-1. On the **Set up F5** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to F5.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **F5**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure F5 SSO
-- [Configure F5 single sign-on for Kerberos application](kerbf5-tutorial.md)
-- [Configure F5 single sign-on for Advanced Kerberos application](advance-kerbf5-tutorial.md)
-### Configure F5 single sign-on for Header Based application
-
-### Guided Configuration
-
-1. Open a new web browser window, sign in to your F5 (Header Based) company site as an administrator, and perform the following steps:
-
-1. Navigate to **System > Certificate Management > Traffic Certificate Management > SSL Certificate List**. Select **Import** from the right-hand corner. Specify a **Certificate Name** (it will be referenced later in the config). For the **Certificate Source**, select **Upload File** and specify the certificate downloaded from Azure while configuring SAML single sign-on. Click **Import**.
-
- ![Screenshot shows the S S L Certificate List where you select the Certificate Name and Certificate Source.](./media/headerf5-tutorial/configure12.png)
-
-1. Additionally, you will require an SSL certificate for the application hostname. Navigate to **System > Certificate Management > Traffic Certificate Management > SSL Certificate List**. Select **Import** from the right-hand corner. The **Import Type** will be **PKCS 12(IIS)**. Specify a **Key Name** (it will be referenced later in the config) and specify the PFX file. Specify the **Password** for the PFX. Click **Import**.
-
- >[!NOTE]
- >In this example our app name is `Headerapp.superdemo.live`; we are using a wildcard certificate, and our key name is `WildCard-SuperDemo.live`.
-
- ![Screenshot shows the S S L Certificate/Key Source page.](./media/headerf5-tutorial/configure13.png)
-
-1. We will use the guided experience to set up the Azure AD federation and application access. Go to F5 BIG-IP **Main** and select **Access > Guided Configuration > Federation > SAML Service Provider**. Click **Next**, then click **Next** again to begin the configuration.
-
- ![Screenshot shows the Guided Configuration page with Federation selected.](./media/headerf5-tutorial/configure01.png)
-
- ![Screenshot shows the SAML Service Provider page.](./media/headerf5-tutorial/configure02.png)
-
-1. Provide a **Configuration Name**. Specify the **Entity ID** (the same value you configured in the Azure AD application configuration). Specify the **Host name**. Add a **Description** for reference. Accept the remaining default entries, and then click **Save & Next**.
-
- ![Screenshot shows the Service Provider Properties page.](./media/headerf5-tutorial/configure03.png)
-
-1. In this example we are creating a new virtual server as 192.168.30.20 with port 443. Specify the virtual server IP address in the **Destination Address**. For the Client **SSL Profile**, select **Create new**. Specify the previously uploaded application certificate (the wildcard certificate in this example) and the associated key, and then click **Save & Next**.
-
- >[!NOTE]
- >In this example our internal web server is running on port 888 and we want to publish it on 443.
-
- ![Screenshot shows the Virtual Server Properties page.](./media/headerf5-tutorial/configure04.png)
-
-1. Under **Select method to configure your IdP connector**, specify **Metadata**, click **Choose File**, and upload the metadata XML file downloaded earlier from Azure AD. Specify a unique **Name** for the SAML IdP connector. Choose the **Metadata Signing Certificate** that was uploaded earlier. Click **Save & Next**.
-
- ![Screenshot shows the External Identity Provider Connector Settings page.](./media/headerf5-tutorial/configure05.png)
-
-1. Under **Select a Pool**, specify **Create New** (or select a pool if one already exists). Leave the other values at their defaults. Under Pool Servers, type the IP address under **IP Address/Node Name**. Specify the **Port**. Click **Save & Next**.
-
- ![Screenshot shows the Pool Properties page.](./media/headerf5-tutorial/configure06.png)
-
-1. On the Single Sign-On Settings screen, select **Enable Single Sign-On**. Under Selected Single Sign-On Type, choose **HTTP header-based**. Replace **session.saml.last.Identity** with **session.saml.last.attr.name.Identity** under Username Source (this variable is set using claims mapping in Azure AD). Under SSO Headers:
-
- * **HeaderName : MyAuthorization**
-
- * **Header Value : %{session.saml.last.attr.name.Identity}**
-
- * Click **Save & Next**
-
- Refer to the Appendix for the complete list of variables and values. You can add more headers as required.
-
- >[!NOTE]
- >Account Name is the F5 delegation account created earlier (check the F5 documentation).
-
- ![Screenshot shows the Single Sign-On Settings page.](./media/headerf5-tutorial/configure07.png)
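With the SSO header configured as above, BIG-IP injects the user's identity into every proxied request as the `MyAuthorization` header, and the backend application simply reads it. A minimal sketch of the backend side (the plain dict stands in for real framework request headers):

```python
from typing import Optional

def get_authenticated_user(headers: dict) -> Optional[str]:
    """Return the identity injected by BIG-IP APM via the
    MyAuthorization SSO header, or None if the header is absent."""
    # Header name matches the SSO Headers configuration above
    return headers.get("MyAuthorization")

# Simulated request headers as BIG-IP would forward them to the backend
request_headers = {"MyAuthorization": "B.Simon@contoso.com"}
print(get_authenticated_user(request_headers))  # B.Simon@contoso.com
```

Because the header is set by the proxy, the backend should only trust it on traffic that arrives through BIG-IP.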
-
-1. For the purposes of this guidance, we will skip endpoint checks. Refer to the F5 documentation for details. Select **Save & Next**.
-
- ![Screenshot shows the Endpoint Checks Properties page.](./media/headerf5-tutorial/configure08.png)
-
-1. Accept the defaults and click **Save & Next**. Refer to the F5 documentation for details regarding SAML session management settings.
-
- ![Screenshot shows the Timeout Settings page.](./media/headerf5-tutorial/configure09.png)
-
-1. Review the summary screen and select **Deploy** to configure the BIG-IP. Click **Finish**.
-
- ![Screenshot shows the Your application is ready to be deployed page.](./media/headerf5-tutorial/configure10.png)
-
- ![Screenshot shows the Your application is deployed page.](./media/headerf5-tutorial/configure11.png)
-
-## Advanced Configuration
-
-This section is intended for cases where you cannot use the guided configuration or would like to add or modify additional parameters. You will require a TLS/SSL certificate for the application hostname.
-
-1. Navigate to **System > Certificate Management > Traffic Certificate Management > SSL Certificate List**. Select **Import** from the right-hand corner. The **Import Type** will be **PKCS 12(IIS)**. Specify a **Key Name** (it will be referenced later in the config) and specify the PFX file. Specify the **Password** for the PFX. Click **Import**.
-
- >[!NOTE]
- >In this example our app name is `Headerapp.superdemo.live`; we are using a wildcard certificate, and our key name is `WildCard-SuperDemo.live`.
-
- ![Screenshot shows the S S L Certificate/Key Source page for Advanced Configuration.](./media/headerf5-tutorial/configure17.png)
-
-### Adding a new Web Server to BigIP-F5
-
-1. Click **Main > iApps > Application Services > Application > Create**.
-
-1. Provide the **Name** and under **Template** choose **f5.http**.
-
- ![Screenshot shows the Application Services page with Template Selection.](./media/headerf5-tutorial/configure18.png)
-
-1. We will publish our HeaderApp2 externally over HTTPS. In this case, for **How should the BIG-IP system handle SSL traffic?**, we specify **Terminate SSL from Client, Plaintext to servers (SSL Offload)**. Specify your certificate and key under **Which SSL certificate do you want to use?** and **Which SSL private key do you want to use?**. Specify the virtual server IP under **What IP Address do you want to use for the Virtual Server?**.
-
- * **Specify other details**:
-
- * FQDN
-
- * Specify an existing app pool or create a new one.
-
- * If creating a new app server, specify the **internal IP Address** and **port number**.
-
- ![Screenshot shows the pane where you can specify these details.](./media/headerf5-tutorial/configure19.png)
-
-1. Click **Finished**.
-
- ![Screenshot shows the page after completion.](./media/headerf5-tutorial/configure20.png)
-
-1. Ensure the App Properties can be modified. Click **Main > iApps > Application Services**, and select the **Properties** tab.
-
- ![Screenshot shows the Application Services page with the Properties tab selected.](./media/headerf5-tutorial/configure21.png)
-
-1. At this point, you should be able to browse to the virtual server.
-
-### Configuring F5 as SP and Azure as IDP
-
-1. Click **Access > Federation > SAML Service Provider > Local SP Service**, then click **Create** or the **+** sign.
-
- ![Screenshot shows the About this BIG I P page. ](./media/headerf5-tutorial/configure22.png)
-
-1. Specify the details for the Service Provider service. Specify a **Name** representing the F5 SP configuration. Specify the **Entity ID** (generally the same as the application URL).
-
- ![Screenshot shows the SAML Service Provider page with the Create New SAML S P Service dialog box.](./media/headerf5-tutorial/configure23.png)
-
- ![Screenshot shows the Create New SAML S P Service dialog box with Endpoint Settings selected.](./media/headerf5-tutorial/configure24.png)
-
- ![Screenshot shows the Create New SAML S P Service dialog box with Security Settings selected.](./media/headerf5-tutorial/configure25.png)
-
- ![Screenshot shows the Create New SAML S P Service dialog box with Authentication Context selected.](./media/headerf5-tutorial/configure26.png)
-
- ![Screenshot shows the Create New SAML S P Service dialog box with Requested Attributes selected.](./media/headerf5-tutorial/configure27.png)
-
- ![Screenshot shows the Edit SAML S P Service dialog box with Advanced Settings selected.](./media/headerf5-tutorial/configure28.png)
-
-### Create IdP Connector
-
-1. Click the **Bind/Unbind IdP Connectors** button, select **Create New IdP Connector**, choose the **From Metadata** option, and then perform the following steps:
-
- ![Screenshot shows the Edit SAML I d Ps that use this S P dialog box with Create New I d P Connector selected.](./media/headerf5-tutorial/configure29.png)
-
- a. Browse to the metadata.xml file downloaded from Azure AD and specify an **Identity Provider Name**.
-
- b. Click **OK**.
-
- c. The connector is created, and the certificate is read automatically from the metadata XML file.
-
- ![Screenshot shows the Create New SAML I d P Connector dialog box.](./media/headerf5-tutorial/configure30.png)
-
- d. Configure F5 BIG-IP to send all requests to Azure AD.
-
- e. Click **Add New Row** and choose **AzureIDP** (as created in the previous steps), then specify:
-
- f. **Matching Source = %{session.server.landinguri}**
-
- g. **Matching Value = /***
-
- h. Click **Update**.
-
- i. Click **OK**.
-
- j. The SAML IdP setup is completed.
-
- ![Screenshot shows the Edit SAML I d Ps that user this S P dialog box.](./media/headerf5-tutorial/configure31.png)
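The Matching Source/Matching Value pair above routes requests to an IdP connector by landing URI; the `/*` value matches every URI, so all requests go to AzureIDP. Conceptually, the rule behaves like a wildcard match (a rough sketch using glob matching, not BIG-IP's actual matcher):

```python
import fnmatch

def routes_to_azure_idp(landing_uri: str, pattern: str = "/*") -> bool:
    """Rough glob-style sketch of the IdP connector matching rule:
    the '/*' pattern matches every landing URI."""
    return fnmatch.fnmatch(landing_uri, pattern)

print(routes_to_azure_idp("/app/home"))  # True
print(routes_to_azure_idp("/"))          # True
```

With multiple rows, a narrower pattern could route different paths to different IdP connectors; with the single `/*` row here, Azure AD handles everything.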
-
-### Configure F5 Policy to redirect users to Azure SAML IDP
-
-1. To configure F5 Policy to redirect users to Azure SAML IDP, perform the following steps:
-
- a. Click **Main > Access > Profile/Policies > Access Profiles**.
-
- b. Click on the **Create** button.
-
- ![Screenshot shows the Access Profiles page.](./media/headerf5-tutorial/configure32.png)
-
- c. Specify a **Name** (HeaderAppAzureSAMLPolicy in this example).
-
- d. You can customize other settings; refer to the F5 documentation.
-
- ![Screenshot shows the General Properties page.](./media/headerf5-tutorial/configure33.png)
-
- ![Screenshot shows the General Properties page continued.](./media/headerf5-tutorial/configure34.png)
-
- e. Click **Finished**.
-
- f. Once the policy creation is completed, click the policy and go to the **Access Policy** tab.
-
- ![Screenshot shows the Access Policy tab with General Properties.](./media/headerf5-tutorial/configure35.png)
-
- g. In the **Visual Policy editor**, click the edit link for **Access Policy for Profile**.
-
- h. Click the **+** sign in the Visual Policy editor and choose **SAML Auth**.
-
- ![Screenshot shows an Access Policy.](./media/headerf5-tutorial/configure36.png)
-
- ![Screenshot shows a search dialog box with SAML Auth selected.](./media/headerf5-tutorial/configure37.png)
-
- i. Click **Add Item**.
-
- j. Under **Properties**, specify a **Name**, and under **AAA Server** select the previously configured SP. Click **Save**.
-
- ![Screenshot shows the Properties of the item including its A A A server.](./media/headerf5-tutorial/configure38.png)
-
- k. The basic policy is ready. You can customize the policy to incorporate additional sources/attribute stores.
-
- ![Screenshot shows the customized policy.](./media/headerf5-tutorial/configure39.png)
-
- l. Ensure you click the **Apply Access Policy** link at the top.
-
-### Apply Access Profile to the Virtual Server
-
-1. Assign the access profile to the virtual server so that F5 BIG-IP APM applies the profile settings to incoming traffic and runs the previously defined access policy.
-
- a. Click **Main** > **Local Traffic** > **Virtual Servers**.
-
- ![Screenshot shows the Virtual Servers List page.](./media/headerf5-tutorial/configure40.png)
-
- b. Click the virtual server, scroll to the **Access Policy** section, and in the **Access Profile** drop-down select the SAML policy created earlier (HeaderAppAzureSAMLPolicy in this example).
-
- c. Click **Update**.
-
- ![Screenshot shows the Access Policy pane.](./media/headerf5-tutorial/configure41.png)
-
- d. Create an F5 BIG-IP iRule® to extract the custom SAML attributes from the incoming assertion and pass them as HTTP headers to the backend test application. Click **Main > Local Traffic > iRules > iRule List**, then click **Create**.
-
- ![Screenshot shows the Local Traffic iRule List.](./media/headerf5-tutorial/configure42.png)
-
- e. Paste the F5 BIG-IP iRule text below into the Definition window.
-
- ![Screenshot shows the New iRule page.](./media/headerf5-tutorial/configure43.png)
-
-    when RULE_INIT {
-        # Set to 1 to log the extracted attribute values
-        set static::debug 0
-    }
-    when ACCESS_ACL_ALLOWED {
-        # Read each SAML claim from the APM session and insert it
-        # as an HTTP header if that header is not already present.
-        set AZUREAD_USERNAME [ACCESS::session data get "session.saml.last.attr.name.http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"]
-        if { $static::debug } { log local0. "AZUREAD_USERNAME = $AZUREAD_USERNAME" }
-        if { !([HTTP::header exists "AZUREAD_USERNAME"]) } {
-            HTTP::header insert "AZUREAD_USERNAME" $AZUREAD_USERNAME
-        }
-
-        set AZUREAD_DISPLAYNAME [ACCESS::session data get "session.saml.last.attr.name.http://schemas.microsoft.com/identity/claims/displayname"]
-        if { $static::debug } { log local0. "AZUREAD_DISPLAYNAME = $AZUREAD_DISPLAYNAME" }
-        if { !([HTTP::header exists "AZUREAD_DISPLAYNAME"]) } {
-            HTTP::header insert "AZUREAD_DISPLAYNAME" $AZUREAD_DISPLAYNAME
-        }
-
-        set AZUREAD_EMAILADDRESS [ACCESS::session data get "session.saml.last.attr.name.http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
-        if { $static::debug } { log local0. "AZUREAD_EMAILADDRESS = $AZUREAD_EMAILADDRESS" }
-        if { !([HTTP::header exists "AZUREAD_EMAILADDRESS"]) } {
-            HTTP::header insert "AZUREAD_EMAILADDRESS" $AZUREAD_EMAILADDRESS
-        }
-    }
-
- **Sample output below**
-
- ![Screenshot shows the sample output.](./media/headerf5-tutorial/configure44.png)
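The iRule above reads three SAML claim URIs from the APM session and inserts each as an HTTP header when that header is absent. The same claim-to-header mapping logic, sketched in Python (the session dict stands in for `ACCESS::session data get`; the claim URIs are taken from the iRule):

```python
# Claim URI -> HTTP header name, mirroring the iRule above
CLAIM_TO_HEADER = {
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "AZUREAD_USERNAME",
    "http://schemas.microsoft.com/identity/claims/displayname": "AZUREAD_DISPLAYNAME",
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "AZUREAD_EMAILADDRESS",
}

def headers_from_session(session: dict, existing: dict) -> dict:
    """Insert a header for each claim found in the session,
    unless that header is already present on the request."""
    headers = dict(existing)
    for claim_uri, header in CLAIM_TO_HEADER.items():
        value = session.get(f"session.saml.last.attr.name.{claim_uri}")
        if value is not None and header not in headers:
            headers[header] = value
    return headers

# Simulated APM session containing only the name claim
session = {
    "session.saml.last.attr.name.http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "B.Simon@contoso.com",
}
print(headers_from_session(session, {}))  # {'AZUREAD_USERNAME': 'B.Simon@contoso.com'}
```

Only claims present in the assertion produce headers, and pre-existing headers are never overwritten, matching the `HTTP::header exists` guard in the iRule.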
-
-### Create F5 test user
-
-In this section, you create a user called B.Simon in F5. Work with [F5 Client support team](https://support.f5.com/csp/knowledge-center/software/BIG-IP?module=BIG-IP%20APM45) to add the users in the F5 platform. Users must be created and activated before you use single sign-on.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with the following options.
-
-#### SP initiated:
-
-* Click **Test this application** in the Azure portal. This redirects you to the F5 Sign-on URL, where you can initiate the login flow.
-
-* Go to the F5 Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click **Test this application** in the Azure portal, and you should be automatically signed in to the F5 instance for which you set up SSO.
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the F5 tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the F5 instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-> [!NOTE]
-> You can purchase F5 BIG-IP APM on the [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5-big-ip-best?tab=Overview).
-
-## Next steps
-
-Once you configure F5 you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Introdus Pre And Onboarding Platform Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/introdus-pre-and-onboarding-platform-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). * An introdus subscription, that includes Single Sign-On (SSO)
-* A valid introdus API Token. A guide on how to generate Token, can be found [here](https://help.introdus.dk/en/articles/2011815-introdus-open-api).
+* A valid introdus API Token. A guide on how to generate Token, can be found [here](https://api.introdus.dk/docs/#api-OpenAPI).
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
active-directory Lawvu Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lawvu-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure LawVu for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to LawVu.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: 37a258fe-b435-4bd8-88a8-8e93bb6f6b6b
+++
+ms.devlang: na
+ Last updated : 10/17/2022+++
+# Tutorial: Configure LawVu for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both LawVu and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [LawVu](https://lawvu.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in LawVu.
+> * Remove users in LawVu when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and LawVu.
+> * [Single sign-on](lawvu-tutorial.md) to LawVu (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* The Tenant URL and Secret Token.
+* Global Administrative rights for the Active Directory.
+* Access rights to set up Enterprise applications.
+* An active LawVu account.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and LawVu](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure LawVu to support provisioning with Azure AD
+Your contact at LawVu will send you a LawVu Tenant URL and corresponding Secret Token.
++
+## Step 3. Add LawVu from the Azure AD application gallery
+
+Add LawVu from the Azure AD application gallery to start managing provisioning to LawVu. If you have previously set up LawVu for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5. Configure automatic user provisioning to LawVu
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in LawVu based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for LawVu in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **LawVu**.
+
+ ![Screenshot of the LawVu link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your LawVu Tenant URL and corresponding Secret Token. Click **Test Connection** to ensure Azure AD can connect to LawVu.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to LawVu**.
+
+1. Review the user attributes that are synchronized from Azure AD to LawVu in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in LawVu for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the LawVu API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by LawVu|
+ |||||
+ |userName|String|&check;|&check;
+ |externalId|String|&check;|&check;
+ |active|Boolean|||
+ |title|String|||
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |phoneNumbers[type eq "work"].value|String|||
+ |phoneNumbers[type eq "mobile"].value|String|||
+   |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|||
+
+ >[!NOTE]
+   >The LawVu app supports **Schema Discovery**. The `/schemas` request is made by the Azure AD provisioning service every time someone saves the provisioning configuration in the Azure portal, or every time a user lands on the edit provisioning page in the Azure portal. Any other attributes discovered are surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only adds target attributes; it doesn't remove them.
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for LawVu, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to LawVu by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
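During each synchronization cycle, the service provisions users by sending SCIM requests to the LawVu tenant URL. The following is an illustrative sketch only; the endpoint path and attribute values are placeholders, not documented LawVu specifics:

```http
POST https://<lawvu-tenant-url>/scim/v2/Users
Content-type: application/scim+json

{
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
  ],
  "userName": "B.Simon@contoso.com",
  "externalId": "B.Simon",
  "active": true,
  "name": { "givenName": "B", "familyName": "Simon" },
  "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
    "department": "Legal"
  }
}
```

Each attribute in the request corresponds to a row in the attribute-mapping table above.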
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Oracle Peoplesoft Protected By F5 Big Ip Apm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial.md
- Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Oracle PeopleSoft - Protected by F5 BIG-IP APM | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Oracle PeopleSoft - Protected by F5 BIG-IP APM.
-------- Previously updated : 03/22/2021----
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Oracle PeopleSoft - Protected by F5 BIG-IP APM
-
-In this tutorial, you'll learn how to integrate Oracle PeopleSoft - Protected by F5 BIG-IP APM with Azure Active Directory (Azure AD). When you integrate Oracle PeopleSoft - Protected by F5 BIG-IP APM with Azure AD, you can:
-
-* Control in Azure AD who has access to Oracle PeopleSoft - Protected by F5 BIG-IP APM.
-* Enable your users to be automatically signed-in to Oracle PeopleSoft - Protected by F5 BIG-IP APM with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-* This tutorial covers instructions for Oracle PeopleSoft ELM.
-
-## Prerequisites
-
-To get started, you need the following items:
-
-1. An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-1. Oracle PeopleSoft - Protected by F5 BIG-IP APM single sign-on (SSO) enabled subscription.
-
-1. Deploying the joint solution requires the following license:
-
- 1. F5 BIG-IP® Best bundle (or)
-    2. F5 BIG-IP Access Policy Manager™ (APM) standalone license
- 3. F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM).
- 4. In addition to the above license, the F5 system may also be licensed with:
- * A URL Filtering subscription to use the URL category database.
- * An F5 IP Intelligence subscription to detect and block known attackers and malicious traffic.
- * A network hardware security module (HSM) to safeguard and manage digital keys for strong authentication.
-1. F5 BIG-IP system is provisioned with APM modules (LTM is optional).
-1. Although optional, it is highly recommended to deploy the F5 systems in a [sync/failover device group](https://techdocs.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/big-ip-device-service-clustering-administration-14-1-0.html) (S/F DG), which includes the active-standby pair with a floating IP address, for high availability (HA). Further interface redundancy can be achieved using the Link Aggregation Control Protocol (LACP). LACP manages the connected physical interfaces as a single virtual interface (aggregate group) and detects any interface failures within the group.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-* Oracle PeopleSoft - Protected by F5 BIG-IP APM supports **SP and IDP** initiated SSO.
-
-## Add Oracle PeopleSoft - Protected by F5 BIG-IP APM from the gallery
-
-To configure the integration of Oracle PeopleSoft - Protected by F5 BIG-IP APM into Azure AD, you need to add Oracle PeopleSoft - Protected by F5 BIG-IP APM from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Oracle PeopleSoft - Protected by F5 BIG-IP APM** in the search box.
-1. Select **Oracle PeopleSoft - Protected by F5 BIG-IP APM** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-   Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
--
-## Configure and test Azure AD SSO for Oracle PeopleSoft - Protected by F5 BIG-IP APM
-
-Configure and test Azure AD SSO with Oracle PeopleSoft - Protected by F5 BIG-IP APM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Oracle PeopleSoft - Protected by F5 BIG-IP APM.
-
-To configure and test Azure AD SSO with Oracle PeopleSoft - Protected by F5 BIG-IP APM, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Oracle PeopleSoft-Protected by F5 BIG-IP APM SSO](#configure-oracle-peoplesoft-protected-by-f5-big-ip-apm-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Oracle PeopleSoft-Protected by F5 BIG-IP APM test user](#create-oracle-peoplesoft-protected-by-f5-big-ip-apm-test-user)** - to have a counterpart of B.Simon in Oracle PeopleSoft - Protected by F5 BIG-IP APM that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **Oracle PeopleSoft - Protected by F5 BIG-IP APM** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
-
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<FQDN>.peoplesoft.f5.com`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<FQDN>.peoplesoft.f5.com/saml/sp/profile/post/acs`
-
- c. In the **Logout URL** text box, type a URL using the following pattern:
- `https://<FQDN>.peoplesoft.f5.com/saml/sp/profile/redirect/slr`
-
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<FQDN>.peoplesoft.f5.com/`
-
- > [!NOTE]
- >These values are not real. Update these values with the actual Sign-On URL, Identifier, Reply URL and Logout URL. Contact [Oracle PeopleSoft - Protected by F5 BIG-IP APM Client support team](https://support.f5.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-1. The Oracle PeopleSoft - Protected by F5 BIG-IP APM application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
-
- ![image](common/default-attributes.png)
-
-1. In addition to the above, the Oracle PeopleSoft - Protected by F5 BIG-IP APM application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated, but you can review them as per your requirements.
-
- | Name | Source Attribute|
- | | |
- | EMPLID | user.employeeid |
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, download the **Federation Metadata XML** and the **Certificate (Base64)** and save them on your computer.
-
- ![The Certificate download link](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/both-certificate.png)
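With the EMPLID mapping above, the SAML assertion issued by Azure AD carries the employee ID as an attribute. A sketch of the relevant assertion fragment, with illustrative values:

```xml
<AttributeStatement>
    <Attribute Name="EMPLID">
        <AttributeValue>E0012345</AttributeValue>
    </Attribute>
</AttributeStatement>
```

The BIG-IP APM can then consume this attribute when populating the header that PeopleSoft expects.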
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Oracle PeopleSoft - Protected by F5 BIG-IP APM.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Oracle PeopleSoft - Protected by F5 BIG-IP APM**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Oracle PeopleSoft-Protected by F5 BIG-IP APM SSO
-
-### F5 SAML SP Configuration
-
-Import the metadata certificate into the F5; it will be used later in the setup process. Navigate to **System > Certificate Management > Traffic Certificate Management > SSL Certificate List**. Select **Import** from the right-hand corner.
-
-![F5 SAML SP Configuration](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/sp-configuration.png)
-
-#### Setup the SAML IDP Connector
-
-1. Navigate to **Access > Federation > SAML: Service Provider > External Idp Connectors** and click **Create > From Metadata**.
-
- ![F5 SAML IDP Connector](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/saml-idp-connector.png)
-
-1. On the following page, click **Browse** to upload the XML file.
-
-1. Give a valid name in the **Identity Provider Name** textbox and then click on **OK**.
-
- ![new SAML IDP Connector](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/new-saml-idp.png)
-
-1. Perform the required steps in the **Security Settings** tab and then click on **OK**.
-
- ![edit SAML IDP Connector](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/edit-saml-idp.png)
-
-#### Setup the SAML SP
-
-1. Navigate to **Access > Federation > SAML Service Provider > Local SP Services** and click **Create**. Complete the following information and click **OK**.
-
- * Name: `<Name>`
- * Entity ID: `https://<FQDN>.peoplesoft.f5.com`
- * SP Name Settings
- * Scheme: `https`
- * Host: `<FQDN>.peoplesoft.f5.com`
- * Description: `<Description>`
-
- ![new SAML SP services](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/new-saml-sp-service.png)
-
-1. Select the SP configuration, PeopleSoftAppSSO, and click **Bind/UnBind IdP Connectors**.
-Click **Add New Row**, select the **External IdP connector** created in the previous step, click **Update**, and then click **OK**.
-
- ![create SAML SP services](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/edit-saml-idp-use-sp.png)
-
-## Configuring Application
-
-### Create a new Pool
-1. Navigate to **Local Traffic > Pools > Pool List**, click **Create**, complete the following information and click **Finished**.
-
- * Name: `<Name>`
- * Description: `<Description>`
- * Health Monitors: `http`
- * Address: `<Address>`
- * Service Port: `<Service Port>`
-
- ![new pool creation](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/create-pool.png)
--
-### Create a new Client SSL profile
-
-Navigate to **Local Traffic > Profiles > SSL > Client > +**, complete the following information and click **Finished**.
-
-* Name: `<Name>`
-* Certificate: `<Certificate>`
-* Key: `<Key>`
-
-![new Client SSL profile](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/client-ssl-profile.png)
-
-### Create a new Virtual Server
-
-1. Navigate to **Local Traffic > Virtual Servers > Virtual Server List > +**, complete the following information and click **Finished**.
- * Name: `<Name>`
- * Destination Address/Mask: `<Address>`
- * Service Port: Port 443 HTTPS
- * HTTP Profile (Client): http
-
- ![Create a new Virtual Server](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/virtual-server-list.png)
-
-1. Fill in the following values on the page:
-
- * SSL Profile (Client): `<SSL Profile>`
- * Source Address Translation: Auto Map
- * Access Profile: `<Access Profile>`
- * Default Pool: `<Pool>`
--
- ![Create a new Virtual Server peoplesoft pool](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/virtual-server-people-soft.png)
-
-## Setting up the PeopleSoft application to support F5 BIG-IP APM as the single sign-on solution
-
->[!Note]
-> Reference https://docs.oracle.com/cd/E12530_01/oam.1014/e10356/people.htm
-
-1. Log on to the PeopleSoft console `https://<FQDN>.peoplesoft.f5.com:8000/psp/ps/?cmd=start` using Admin credentials (for example: PS/PS).
-
- ![Manager self services](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/people-soft-console.png)
-
-1. In the PeopleSoft application, create **OAMPSFT** as a new user profile and associate a low-security role such as **PeopleSoft User**.
-Navigate to **PeopleTools > Security > User Profiles > User Profiles** to create the new user profile (for example, **OAMPSFT**) and add the **PeopleSoft User** role.
-
- ![Peoplesoft User](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/user-profile.png)
-
-1. Access the web profile and enter **OAMPSFT** as the public access **user ID**.
-
- ![User Profiles](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/web-profile-configuration.png)
-
-1. From the **PeopleTools Application Designer**, open the **FUNCLIB_LDAP** record.
-
- ![web profile configuration](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/funclib.png)
-
-1. Update the user header with **PS_SSO_UID** for the **OAMSSO_AUTHENTICATION** function.
-In the **getWWWAuthConfig()** function, replace the value assigned to **&defaultUserId** with the **OAMPSFT** profile that we defined in the web profile. Save the record definition.
-
- ![OAMSSO_AUTHENTICATION](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/record.png)
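Conceptually, the edit to **getWWWAuthConfig()** assigns the public access profile as the default user ID. A minimal sketch; the surrounding PeopleCode is omitted and the exact code varies by PeopleTools release:

```text
/* Inside getWWWAuthConfig() in the FUNCLIB_LDAP record (sketch) */
&defaultUserId = "OAMPSFT";
```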
-
-1. Access the **Signon PeopleCode** page (PeopleTools, Security, Security Objects, Signon PeopleCode) and enable the **OAMSSO_AUTHENTICATION** function, the Signon PeopleCode for Oracle Access Manager single sign-on.
-
- ![OAMSSO_AUTHENTICATION ](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/sign-on-people-soft.png)
-
-## Setting up F5 BIG-IP APM to populate the "PS_SSO_UID" HTTP header with the PeopleSoft User ID
-
-### Configuring Per-Request Policy
-1. Navigate to **Access > Profile/Policies > Per-Request Policies**, click **Create**, complete the following information and click **Finished**.
-
- * Name: `<Name>`
- * Profile Type: All
- * Languages: `<Language>`
-
- ![Configuring Per-Request Policy ](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/per-request.png)
-
-1. Click **Edit** Per-Request Policy `<Name>`
- ![edit Per-Request Policy PeopleSoftSSO ](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/people-soft-sso.png)
-
- `Header Name: <Header Name>`
- `Header Value: <Header Value>`
-
-### Assign Per-Request Policy to the Virtual Server
-
-Navigate to **Local Traffic > Virtual Servers > Virtual Server List > PeopleSoftApp**
-Specify `<Name>` as Per-Request Policy
-
-![PeopleSoftSSO as Per-Request Policy ](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/people-soft-sso-1.png)
-
-## Setting up F5 BIG-IP APM to support single logout from the PeopleSoft application
-
-To add single logout support for all PeopleSoft users, please follow these steps:
-
-1. Determine the correct logout URL for PeopleSoft portal
- * To determine the address that the PeopleSoft application uses to end a user session, you need to open the portal using any web browser and enable browser debug tools, as shown in the example below:
-
- ![logout URL for PeopleSoft portal ](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/sign-out.png)
-
- * Find the element with the `PT_LOGOUT_MENU` id and save the URL path with the query parameters. In our example, we got the following value: `/psp/ps/?cmd=logout`
-
-1. Create an LTM iRule that redirects the user to the APM logout URL: `/my.logout.php3`
-
- * Navigate to **Local Traffic > iRule**, click **Create**, complete the following information and click **Finished**.
-
-    ```text
-    Name: <Name>
-    Definition:
-    when HTTP_REQUEST {
-        switch -glob -- [HTTP::URI] {
-            "/psp/ps/?cmd=logout" {
-                HTTP::redirect "/my.logout.php3"
-            }
-        }
-    }
-    ```
-
-1. Assign the created iRule to the Virtual Server
-
- * Navigate to **Local Traffic > Virtual Servers > Virtual Server List > PeopleSoftApp > Resources**. Click the **Manage…** button:
-
- * Specify `<Name>` as Enabled iRule and click **Finished**.
-
- ![_iRule_PeopleSoftApp ](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/irule-people-soft.png)
-
- * Give the **Name** textbox value as `<Name>`
-
- ![_iRule_PeopleSoftApp finished](./media/oracle-peoplesoft-protected-by-f5-big-ip-apm-tutorial/common-irule.png)
--
-### Create Oracle PeopleSoft-Protected by F5 BIG-IP APM test user
-
-In this section, you create a user called B.Simon in Oracle PeopleSoft-Protected by F5 BIG-IP APM. Work with [Oracle PeopleSoft-Protected by F5 BIG-IP APM support team](https://support.f5.com) to add the users in the Oracle PeopleSoft-Protected by F5 BIG-IP APM platform. Users must be created and activated before you use single sign-on.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with the following options.
-
-#### SP initiated:
-
-* Click **Test this application** in the Azure portal. This redirects to the Oracle PeopleSoft-Protected by F5 BIG-IP APM sign-on URL, where you can initiate the login flow.
-
-* Go to Oracle PeopleSoft-Protected by F5 BIG-IP APM Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Oracle PeopleSoft-Protected by F5 BIG-IP APM for which you set up the SSO.
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the Oracle PeopleSoft-Protected by F5 BIG-IP APM tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the Oracle PeopleSoft-Protected by F5 BIG-IP APM for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Next steps
-
-Once you configure Oracle PeopleSoft-Protected by F5 BIG-IP APM, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
The following screenshot from the Azure portal shows an example of configuring t
## Dynamic allocation of IPs and enhanced subnet support
-A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allotting pod IPs from a subnet separate from the subnet hosting the AKS cluster. It offers the following benefits:
+A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allocating pod IPs from a subnet separate from the subnet hosting the AKS cluster. It offers the following benefits:
* **Better IP utilization**: IPs are dynamically allocated to cluster Pods from the Pod subnet. This leads to better utilization of IPs in the cluster compared to the traditional CNI solution, which does static allocation of IPs for every node.
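For illustration, dynamic allocation is enabled by giving the cluster a pod subnet separate from the node subnet at creation time. A sketch with placeholder resource IDs (verify parameter names against the current Azure CLI):

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <node-subnet-resource-id> \
    --pod-subnet-id <pod-subnet-resource-id>
```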
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
Title: Create WebAssembly System Interface(WASI) node pools in Azure Kubernetes
description: Learn how to create a WebAssembly System Interface(WASI) node pool in Azure Kubernetes Service (AKS) to run your WebAssembly(WASM) workload on Kubernetes. Previously updated : 10/12/2021 Last updated : 10/19/2022 # Create WebAssembly System Interface (WASI) node pools in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload (preview)
-[WebAssembly (WASM)][wasm] is a binary format that is optimized for fast download and maximum execution speed in a WASM runtime. A WASM runtime is designed to run on a target architecture and execute WebAssemblies in a sandbox, isolated from the host computer, at near-native performance. By default, WebAssemblies can't access resources on the host outside of the sandbox unless it is explicitly allowed, and they can't communicate over sockets to access things environment variables or HTTP traffic. The [WebAssembly System Interface (WASI)][wasi] standard defines an API for WASM runtimes to provide access to WebAssemblies to the environment and resources outside the host using a capabilities model. [Krustlet][krustlet] is an open-source project that allows WASM modules to be run on Kubernetes. Krustlet creates a kubelet that runs on nodes with a WASM/WASI runtime. AKS allows you to create node pools that run WASM assemblies using nodes with WASM/WASI runtimes and Krustlets.
+[WebAssembly (WASM)][wasm] is a binary format that is optimized for fast download and maximum execution speed in a WASM runtime. A WASM runtime is designed to run on a target architecture and execute WebAssemblies in a sandbox, isolated from the host computer, at near-native performance. By default, WebAssemblies can't access resources on the host outside of the sandbox unless explicitly allowed, and they can't communicate over sockets to access things like environment variables or HTTP traffic. The [WebAssembly System Interface (WASI)][wasi] standard defines an API for WASM runtimes to provide WebAssemblies with access to the environment and resources outside the host using a capabilities model.
+
+> [!IMPORTANT]
+> WASI node pools now use [containerd shims][wasm-containerd-shims] to run WASM workloads. Previously, AKS used [Krustlet][krustlet] to run WASM modules on Kubernetes. If you are still using Krustlet-based WASI node pools, you can migrate to containerd shims by creating a new WASI node pool and migrating your workloads to it.
## Before you begin
WASM/WASI node pools are currently in preview.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-This article uses [Helm 3][helm] to install the *nginx* chart on a supported version of Kubernetes. Make sure that you are using the latest release of Helm and have access to the *bitnami* Helm repository. The steps outlined in this article may not be compatible with previous versions of the Helm chart or Kubernetes.
-
-You must also have the following resource installed:
-
-* The latest version of the Azure CLI.
-* The `aks-preview` extension version 0.5.34 or later
+You must also have the latest version of the Azure CLI and `aks-preview` extension installed.
### Register the `WasmNodePoolPreview` preview feature
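Registering the feature typically follows the standard preview-feature pattern; a sketch (verify the flag names against the current Azure CLI):

```azurecli
az feature register --namespace "Microsoft.ContainerService" --name "WasmNodePoolPreview"

# When the state shows "Registered", refresh the resource provider registration
az feature show --namespace "Microsoft.ContainerService" --name "WasmNodePoolPreview" --query properties.state
az provider register --namespace Microsoft.ContainerService
```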
az extension update --name aks-preview
### Limitations
-* You can't run WebAssemblies and containers in the same node pool.
-* Only the WebAssembly(WASI) runtime is available, using the Wasmtime provider.
+* Currently, there are only containerd shims available for [spin][spin] and [slight][slight] applications, which use the [wasmtime][wasmtime] runtime. In addition to wasmtime runtime applications, you can also run containers on WASI/WASM node pools.
+* You can run containers and wasm modules on the same node, but you can't run containers and wasm modules on the same pod.
* The WASM/WASI node pools can't be used as a system node pool.
* The *os-type* for WASM/WASI node pools must be Linux.
-* Krustlet doesn't work with Azure CNI at this time. For more information, see the [CNI Support for Kruslet GitHub issue][krustlet-cni-support].
-* Krustlet doesn't provide networking configuration for WebAssemblies. The WebAssembly manifest must provide the networking configuration, such as IP address.
+* You can't use the Azure portal to create WASM/WASI node pools.
## Add a WASM/WASI node pool to an existing AKS Cluster
az aks nodepool add \
--cluster-name myAKSCluster \ --name mywasipool \ --node-count 1 \
- --workload-runtime wasmwasi
+ --workload-runtime WasmWasi
``` > [!NOTE]
az aks nodepool add \
Verify the *workloadRuntime* value using `az aks nodepool show`. For example: ```azurecli-interactive
-az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n mywasipool
+az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n mywasipool --query workloadRuntime
``` The following example output shows the *mywasipool* has the *workloadRuntime* type of *WasmWasi*. ```output
-{
- ...
- "name": "mywasipool",
- ..
- "workloadRuntime": "WasmWasi"
-}
+$ az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n mywasipool --query workloadRuntime
+"WasmWasi"
```
-For a WASM/WASI node pool, verify the taint is set to `kubernetes.io/arch=wasm32-wagi:NoSchedule` and `kubernetes.io/arch=wasm32-wagi:NoExecute`, which will prevent container pods from being scheduled on this node pool. Also, you should see nodeLabels to be `kubernetes.io/arch: wasm32-wasi`, which prevents WASM pods from being scheduled on regular container(OCI) node pools.
-
-> [!NOTE]
-> The taints for a WASI node pool are not visible using `az aks nodepool list`. Use `kubectl` to verify the taints are set on the nodes in the WASI node pool.
- Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command: ```azurecli
Use `kubectl get nodes` to display the nodes in your cluster.
```output $ kubectl get nodes -o wide
-NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
-aks-mywasipool-12456878-vmss000000 Ready agent 9m 1.0.0-alpha.1 WASINODE_IP <none> <unknown> <unknown> mvp
-aks-nodepool1-12456878-vmss000000 Ready agent 13m v1.20.9 NODE1_IP <none> Ubuntu 18.04.6 LTS 5.4.0-1059-azure containerd://1.4.9+azure
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+aks-mywasipool-12456878-vmss000000 Ready agent 123m v1.23.12 <WASINODE_IP> <none> Ubuntu 22.04.1 LTS 5.15.0-1020-azure containerd://1.5.11+azure-2
+aks-nodepool1-12456878-vmss000000 Ready agent 133m v1.23.12 <NODE_IP> <none> Ubuntu 22.04.1 LTS 5.15.0-1020-azure containerd://1.5.11+azure-2
```
-Save the value of *WASINODE_IP* as it is used in later step.
-
-Use `kubectl describe node` to show the labels and taints on a node in the WASI node pool. The following example shows the details of *aks-mywasipool-12456878-vmss000000*.
+Use `kubectl describe node` to show the labels on a node in the WASI node pool. The following example shows the details of *aks-mywasipool-12456878-vmss000000*.
```output $ kubectl describe node aks-mywasipool-12456878-vmss000000
Name: aks-mywasipool-12456878-vmss000000
Roles:  agent
Labels: agentpool=mywasipool
        ...
- kubernetes.io/arch=wasm32-wagi
+ kubernetes.azure.com/wasmtime-slight-v1=true
+ kubernetes.azure.com/wasmtime-spin-v1=true
...
-Taints: kubernetes.io/arch=wasm32-wagi:NoExecute
- kubernetes.io/arch=wasm32-wagi:NoSchedule
```
-## Running WASM/WASI Workload
-
-To run a workload on a WASM/WASI node pool, add a node selector and tolerations to your deployment. For example:
+Add a `RuntimeClass` for running [spin][spin] and [slight][slight] applications. Create a file named *wasm-runtimeclass.yaml* with the following content:
```yml
-...
-spec:
+apiVersion: node.k8s.io/v1
+kind: RuntimeClass
+metadata:
+ name: "wasmtime-slight-v1"
+handler: "slight"
+scheduling:
nodeSelector:
- kubernetes.io/arch: "wasm32-wagi"
- tolerations:
- - key: "node.kubernetes.io/network-unavailable"
- operator: "Exists"
- effect: "NoSchedule"
- - key: "kubernetes.io/arch"
- operator: "Equal"
- value: "wasm32-wagi"
- effect: "NoExecute"
- - key: "kubernetes.io/arch"
- operator: "Equal"
- value: "wasm32-wagi"
- effect: "NoSchedule"
-...
-```
-
-To run a sample deployment, create a `wasi-example.yaml` file using the following YAML definition:
-
-```yml
-apiVersion: v1
-kind: Pod
+ "kubernetes.azure.com/wasmtime-slight-v1": "true"
+---
+apiVersion: node.k8s.io/v1
+kind: RuntimeClass
metadata:
- name: krustlet-wagi-demo
- labels:
- app: krustlet-wagi-demo
- annotations:
- alpha.wagi.krustlet.dev/default-host: "0.0.0.0:3001"
- alpha.wagi.krustlet.dev/modules: |
- {
- "krustlet-wagi-demo-http-example": {"route": "/http-example", "allowed_hosts": ["https://api.brigade.sh"]},
- "krustlet-wagi-demo-hello": {"route": "/hello/..."},
- "krustlet-wagi-demo-error": {"route": "/error"},
- "krustlet-wagi-demo-log": {"route": "/log"},
- "krustlet-wagi-demo-index": {"route": "/"}
- }
-spec:
- hostNetwork: true
+ name: "wasmtime-spin-v1"
+handler: "spin"
+scheduling:
nodeSelector:
- kubernetes.io/arch: wasm32-wagi
- containers:
- - image: webassembly.azurecr.io/krustlet-wagi-demo-http-example:v1.0.0
- imagePullPolicy: Always
- name: krustlet-wagi-demo-http-example
- - image: webassembly.azurecr.io/krustlet-wagi-demo-hello:v1.0.0
- imagePullPolicy: Always
- name: krustlet-wagi-demo-hello
- - image: webassembly.azurecr.io/krustlet-wagi-demo-index:v1.0.0
- imagePullPolicy: Always
- name: krustlet-wagi-demo-index
- - image: webassembly.azurecr.io/krustlet-wagi-demo-error:v1.0.0
- imagePullPolicy: Always
- name: krustlet-wagi-demo-error
- - image: webassembly.azurecr.io/krustlet-wagi-demo-log:v1.0.0
- imagePullPolicy: Always
- name: krustlet-wagi-demo-log
- tolerations:
- - key: "node.kubernetes.io/network-unavailable"
- operator: "Exists"
- effect: "NoSchedule"
- - key: "kubernetes.io/arch"
- operator: "Equal"
- value: "wasm32-wagi"
- effect: "NoExecute"
- - key: "kubernetes.io/arch"
- operator: "Equal"
- value: "wasm32-wagi"
- effect: "NoSchedule"
+ "kubernetes.azure.com/wasmtime-spin-v1": "true"
```
-Use `kubectl` to run your example deployment:
+Use `kubectl` to create the `RuntimeClass` objects.
```azurecli-interactive
-kubectl apply -f wasi-example.yaml
+kubectl apply -f wasm-runtimeclass.yaml
```
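A `RuntimeClass` routes pods to matching nodes through its `scheduling.nodeSelector`: when a pod sets `runtimeClassName`, Kubernetes merges that selector into the pod's own node selector and rejects the pod on a conflict. A minimal Python sketch of that merge rule (illustrative only; the real logic lives in the RuntimeClass admission controller):

```python
def merge_runtimeclass_selector(pod_selector, rc_selector):
    """Merge a RuntimeClass scheduling.nodeSelector into a pod's nodeSelector.

    Kubernetes rejects the pod if the two selectors disagree on a key.
    """
    merged = dict(pod_selector)
    for key, value in rc_selector.items():
        if key in merged and merged[key] != value:
            raise ValueError(f"conflicting node selector key: {key!r}")
        merged[key] = value
    return merged

# A pod with runtimeClassName: wasmtime-slight-v1 picks up the WASM node label,
# so it can only schedule onto the WASM/WASI node pool.
slight_selector = {"kubernetes.azure.com/wasmtime-slight-v1": "true"}
assert merge_runtimeclass_selector({}, slight_selector) == slight_selector
```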
-> [!NOTE]
-> The pod for the example deployment may stay in the *Registered* status. This behavior is expected, and you can proceed to the next step.
+## Running WASM/WASI Workload
-Create `values.yaml` using the example yaml below, replacing *WASINODE_IP* with the value from the earlier step.
+Create a file named *slight.yaml* with the following content:
```yml
-serverBlock: |-
- server {
- listen 0.0.0.0:8080;
- location / {
- proxy_pass http://WASINODE_IP:3001;
- }
- }
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: wasm-slight
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: wasm-slight
+ template:
+ metadata:
+ labels:
+ app: wasm-slight
+ spec:
+ runtimeClassName: wasmtime-slight-v1
+ containers:
+ - name: testwasm
+ image: ghcr.io/deislabs/containerd-wasm-shims/examples/slight-rust-hello:latest
+ command: ["/"]
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: wasm-slight
+spec:
+ type: LoadBalancer
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ selector:
+ app: wasm-slight
```
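The `wasm-slight` Service finds its backend pods purely by label: every key/value pair in the Service's `selector` must appear in a pod's labels. A small illustrative sketch of that matching rule (not the actual endpoint-controller code):

```python
def selector_matches(selector, pod_labels):
    """True when every selector key/value pair is present in the pod's labels."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

service_selector = {"app": "wasm-slight"}

# Pods created by the Deployment carry the template labels plus a hash label,
# which is fine: extra labels on the pod do not affect the match.
assert selector_matches(service_selector,
                        {"app": "wasm-slight", "pod-template-hash": "abc12"})
assert not selector_matches(service_selector, {"app": "something-else"})
```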
-Using [Helm][helm], add the *bitnami* repository and install the *nginx* chart with the `values.yaml` file you created in the previous step. Installing NGINX with the above `values.yaml` creates a reverse proxy to the example deployment, allowing you to access it using an external IP address.
+> [!NOTE]
+> When developing applications, modules should be built against the `wasm32-wasi` target. For more details, see the [spin][spin] and [slight][slight] documentation.
->[!NOTE]
-> The following example pulls a public container image from Docker Hub. We recommend that you set up a pull secret to authenticate using a Docker Hub account instead of making an anonymous pull request. To improve reliability when working with public content, import and manage the image in a private Azure container registry. [Learn more about working with public images.][dockerhub-callout]
+Use `kubectl` to run your example deployment:
-```console
-helm repo add bitnami https://charts.bitnami.com/bitnami
-helm repo update
-helm install hello-wasi bitnami/nginx -f values.yaml
+```azurecli-interactive
+kubectl apply -f slight.yaml
```
-Use `kubectl get service` to display the external IP address of the *hello-wasi-ngnix* service.
+Use `kubectl get svc` to get the external IP address of the service.
```output
-$ kubectl get service
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-hello-wasi-nginx LoadBalancer 10.0.58.239 EXTERNAL_IP 80:32379/TCP 112s
-kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 145m
-```
-
-Verify the example deployment is running by the `curl` command against the `/hello` path of *EXTERNAL_IP*.
-
-```azurecli-interactive
-curl EXTERNAL_IP/hello
+$ kubectl get svc
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10m
+wasm-slight LoadBalancer 10.0.133.247 <EXTERNAL-IP> 80:30725/TCP 2m47s
```
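Instead of reading the EXTERNAL-IP column by eye, you can parse it out of `kubectl get svc wasm-slight -o json`. A sketch with a trimmed, hypothetical sample of that JSON inlined (the real object has many more fields):

```python
import json

# Trimmed sample of `kubectl get svc wasm-slight -o json` output.
SAMPLE = """
{
  "status": {
    "loadBalancer": {
      "ingress": [{"ip": "20.0.0.1"}]
    }
  }
}
"""

def external_ip(service_obj):
    """Return the first LoadBalancer ingress IP, or None while still pending."""
    ingress = service_obj.get("status", {}).get("loadBalancer", {}).get("ingress", [])
    return ingress[0].get("ip") if ingress else None

print(external_ip(json.loads(SAMPLE)))  # 20.0.0.1
```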
-The follow example output confirms the example deployment is running.
+Access the example application at `http://EXTERNAL-IP/hello`. The following example uses `curl`.
```output
-$ curl EXTERNAL_IP/hello
-hello world
+$ curl http://EXTERNAL-IP/hello
+hello
```
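The LoadBalancer endpoint can take a few minutes to start answering, so the first requests may fail. A small, hypothetical retry helper you could wrap around the request (`fetch_hello` and the IP below are placeholders, not part of the sample app):

```python
import time
import urllib.request

def poll(fetch, attempts=30, delay=10):
    """Call fetch() until it succeeds, sleeping between failed attempts."""
    last_error = None
    for _ in range(attempts):
        try:
            return fetch()
        except Exception as error:  # retry on connection or HTTP errors
            last_error = error
            time.sleep(delay)
    raise TimeoutError(f"service never became ready: {last_error}")

def fetch_hello(ip):
    """GET http://<ip>/hello and return the response body as text."""
    with urllib.request.urlopen(f"http://{ip}/hello", timeout=5) as response:
        return response.read().decode()

# body = poll(lambda: fetch_hello("<EXTERNAL-IP>"))  # substitute your service IP
```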
-> [!NOTE]
-> To publish the service on your own domain, see [Azure DNS][azure-dns-zone] and the [external-dns][external-dns] project.
+> [!NOTE]
+> If your request times out, use `kubectl get pods` and `kubectl describe pod <POD_NAME>` to check the status of the pod. If the pod is not running, use `kubectl get rs` and `kubectl describe rs <REPLICA_SET_NAME>` to see if the replica set is having issues creating a new pod.
## Clean up
-To remove NGINX, use `helm delete`.
-
-```console
-helm delete hello-wasi
-```
To remove the example deployment, use `kubectl delete`.

```azurecli-interactive
-kubectl delete -f wasi-example.yaml
+kubectl delete -f slight.yaml
```

To remove the WASM/WASI node pool, use `az aks nodepool delete`.

```azurecli-interactive
az aks nodepool delete --name mywasipool -g myresourcegroup --cluster-name myaks
```
[wasi]: https://wasi.dev/
[azure-dns-zone]: https://azure.microsoft.com/services/dns/
[external-dns]: https://github.com/kubernetes-sigs/external-dns
+[wasm-containerd-shims]: https://github.com/deislabs/containerd-wasm-shims
+[spin]: https://spin.fermyon.dev/
+[slight]: https://github.com/deislabs/spiderlightning#spiderlightning-or-slight
+[wasmtime]: https://wasmtime.dev/
<!-- INTERNAL LINKS -->
[az-aks-create]: /cli/azure/aks#az_aks_create
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Previously updated : 10/10/2022 Last updated : 10/20/2022 recommendations: false # Composed custom models +++ **Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. When a document is submitted for analysis using a composed model, the service performs a classification to decide which custom model best represents the submitted document. With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
With composed models, you can assign multiple custom models to a composed model
* The limit for the maximum number of custom models that can be composed is 100.

## Development options

::: moniker range="form-recog-3.0.0"
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Previously updated : 08/22/2022 Last updated : 10/20/2022 recommendations: false- +
+<!-- markdownlint-disable MD024 -->
<!-- markdownlint-disable MD033 --> # Form Recognizer models ++ Azure Form Recognizer supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt document analysis or domain specific model or train a custom model tailored to your specific business needs and use cases. Form Recognizer can be used with the REST API or Python, C#, Java, and JavaScript SDKs. ## Model overview + | **Model** | **Description** | | | | |**Document analysis**||
The Read API analyzes and extracts text lines, words, their locations, detected l
> [!div class="nextstepaction"] > [Learn more: read model](concept-read.md)
-### W-2
+### W-2
[:::image type="icon" source="media/studio/w2.png":::](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)
The W-2 model analyzes and extracts key information reported in each box on a W-
> [!div class="nextstepaction"] > [Learn more: W-2 model](concept-w2.md)
-### General document
+### General document
[:::image type="icon" source="media/studio/general-document.png":::](https://formrecognizer.appliedai.azure.com/studio/document)
A composed model is created by taking a collection of custom models and assignin
| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** | |:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
+| [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
| [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
| [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | |
-| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
-| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
-| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
-| [prebuilt-idDocument](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
-| [prebuilt-businessCard](concept-business-card.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
-| [Custom](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |
+| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
+| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
+| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
+| [prebuilt-idDocument](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+| [prebuilt-businessCard](concept-business-card.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+| [Custom](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |
## Input requirements
A composed model is created by taking a collection of custom models and assignin
Learn how to use Form Recognizer v3.0 in your applications by following our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md)
+| **Model** | **Description** |
+| | |
+|**Document analysis**||
+| [Layout](#layout) | Extract text and layout information from documents.|
+|**Prebuilt**||
+| [Invoice](#invoice) | Extract key information from English and Spanish invoices. |
+| [Receipt](#receipt) | Extract key information from English receipts. |
+| [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
+| [Business card](#business-card) | Extract key information from English business cards. |
+|**Custom**||
+| [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
+| [Composed](#composed-custom-model) | Compose a collection of custom models and assign them to a single model built from your form types.
+
+### Layout
+
+The Layout API analyzes and extracts text, tables and headers, selection marks, and structure information from documents.
+
+***Sample document processed using the [sample labeling tool](https://fott-2-1.azurewebsites.net/layout-analyze)***:
++
+> [!div class="nextstepaction"]
+>
+> [Learn more: layout model](concept-layout.md)
+
+### Invoice
+
+The invoice model analyzes and extracts key information from sales invoices. The API analyzes invoices in various formats and extracts key information such as customer name, billing address, due date, and amount due.
+
+***Sample invoice processed using the [sample labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: invoice model](concept-invoice.md)
+
+### Receipt
+
+* The receipt model analyzes and extracts key information from printed and handwritten sales receipts.
+
+***Sample receipt processed using [sample labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: receipt model](concept-receipt.md)
+
+### ID document
+
+ The ID document model analyzes and extracts key information from the following documents:
+
+* U.S. Driver's Licenses (all 50 states and District of Columbia)
+
+* Biographical pages from international passports (excluding visa and other travel documents). The API analyzes identity documents and extracts key information.
+
+***Sample U.S. Driver's License processed using the [sample labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: identity document model](concept-id-document.md)
+
+### Business card
+
+The business card model analyzes and extracts key information from business card images.
+
+***Sample business card processed using the [sample labeling tool](https://fott-2-1.azurewebsites.net/prebuilts-analyze)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: business card model](concept-business-card.md)
+
+### Custom
+
+* Custom models analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
+
+***Sample custom model processing using the [sample labeling tool](https://fott-2-1.azurewebsites.net/)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: custom model](concept-custom.md)
+
+#### Composed custom model
+
+A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 100 trained custom models to a single composed model.
+
+***Composed model dialog window using the [sample labeling tool](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: custom model](concept-custom.md)
+
+## Model data extraction
+
+| **Model** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** |
+|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| [Layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
+| [Invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
+| [Receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
+| [ID Document](concept-id-document.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+| [Business Card](concept-business-card.md#field-extractions) | ✓ | | | | ✓ | | | ✓ |
+| [Custom Form](concept-custom.md#compare-model-features) | ✓ | | ✓ | ✓ | ✓ | | | ✓ |
+
+## Input requirements
++
+> [!NOTE]
+> The [Sample Labeling tool](https://fott-2-1.azurewebsites.net/) does not support the BMP file format. This is a limitation of the tool, not the Form Recognizer service.
+
+### Version migration
+
+ You can learn how to use Form Recognizer v3.0 in your applications by following our [**Form Recognizer v3.0 migration guide**](v3-migration-guide.md)
++ ## Next steps
-* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with our [Form Recognizer sample tool](https://fott-2-1.azurewebsites.net/)
+
+* [Learn how to process your own forms and documents](quickstarts/try-v3-form-recognizer-studio.md) with the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+++
+* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Form Recognizer sample labeling tool](https://fott-2-1.azurewebsites.net/)
+
+* Complete a [Form Recognizer quickstart](quickstarts/get-started-sdks-rest-api.md?view=form-recog-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
-* Complete a [Form Recognizer quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) and get started creating a document processing app in the development language of your choice.
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
Previously updated : 08/22/2022 Last updated : 10/20/2022
+monikerRange: '>=form-recog-2.1.0'
+recommendations: false
# Configure Form Recognizer containers
applied-ai-services Form Recognizer Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-image-tags.md
Previously updated : 06/23/2022 Last updated : 10/20/2022
-keywords: Docker, container, images
+monikerRange: '>=form-recog-2.1.0'
+recommendations: false
# Form Recognizer container image tags and release notes
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Previously updated : 12/16/2021 Last updated : 10/20/2022
-keywords: on-premises, Docker, container, identify
+monikerRange: '>=form-recog-2.1.0'
+recommendations: false
# Install and run Form Recognizer v2.1-preview containers
applied-ai-services Create A Form Recognizer Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-a-form-recognizer-resource.md
Previously updated : 08/22/2022 Last updated : 10/20/2022
+monikerRange: '>=form-recog-2.1.0'
recommendations: false #Customer intent: I want to learn how to use create a Form Recognizer service in the Azure portal. # Create a Form Recognizer resource + Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. Here, you'll learn how to create a Form Recognizer resource in the Azure portal. ## Visit the Azure portal
applied-ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/create-sas-tokens.md
Previously updated : 05/27/2022 Last updated : 10/20/2022
+monikerRange: '>=form-recog-2.1.0'
recommendations: false # Create SAS tokens for storage containers + In this article, you'll learn how to create user delegation, shared access signature (SAS) tokens, using the Azure portal or Azure Storage Explorer. User delegation SAS tokens are secured with Azure AD credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account. At a high level, here's how SAS tokens work:
applied-ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/encrypt-data-at-rest.md
Previously updated : 08/28/2020 Last updated : 10/20/2022
+monikerRange: '>=form-recog-2.1.0'
+recommendations: false
# Form Recognizer encryption of data at rest + Azure Form Recognizer automatically encrypts your data when persisting it to the cloud. Form Recognizer encryption protects your data to help you to meet your organizational security and compliance commitments. [!INCLUDE [cognitive-services-about-encryption](../../cognitive-services/includes/cognitive-services-about-encryption.md)]
applied-ai-services Form Recognizer Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/form-recognizer-studio-overview.md
Previously updated : 10/14/2022 Last updated : 10/20/2022 monikerRange: 'form-recog-3.0.0' recommendations: false
applied-ai-services Estimate Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/estimate-cost.md
Previously updated : 07/14/2022 Last updated : 10/20/2022
+monikerRange: '>=form-recog-2.1.0'
recommendations: false # Check my Form Recognizer usage and estimate the price + In this guide, you'll learn how to use the metrics dashboard in the Azure portal to view how many pages were processed by Azure Form Recognizer. You'll also learn how to estimate the cost of processing those pages using the Azure pricing calculator. ## Check how many pages were processed
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
Previously updated : 10/10/2022 Last updated : 10/20/2022
The following table lists the supported languages for print text by the most rec
|:--|:-:|:--|:-:| |Afrikaans|`af`|Khasi | `kha` | |Albanian |`sq`|K'iche' | `quc` |
-|Angika (Devanagiri) | `anp`| Korean | `ko` |
+|Angika (Devanagari) | `anp`| Korean | `ko` |
|Arabic | `ar` | Korku | `kfq`| |Asturian |`ast`| Koryak | `kpy`|
-|Awadhi-Hindi (Devanagiri) | `awa`| Kosraean | `kos`|
+|Awadhi-Hindi (Devanagari) | `awa`| Kosraean | `kos`|
|Azerbaijani (Latin) | `az`| Kumyk (Cyrillic) | `kum`| |Bagheli | `bfy`| Kurdish (Arabic) | `ku-arab`| |Basque |`eu`| Kurdish (Latin) | `ku-latn`
-|Belarusian (Cyrillic) | `be`, `be-cyrl`|Kurukh (Devanagiri) | `kru`|
+|Belarusian (Cyrillic) | `be`, `be-cyrl`|Kurukh (Devanagari) | `kru`|
|Belarusian (Latin) | `be`, `be-latn`| Kyrgyz (Cyrillic) | `ky`
-|Bhojpuri-Hindi (Devanagiri) | `bho`| Lakota | `lkt` |
+|Bhojpuri-Hindi (Devanagari) | `bho`| Lakota | `lkt` |
|Bislama |`bi`| Latin | `la` |
-|Bodo (Devanagiri) | `brx`| Lithuanian | `lt` |
+|Bodo (Devanagari) | `brx`| Lithuanian | `lt` |
|Bosnian (Latin) | `bs`| Lower Sorbian | `dsb` | |Brajbha | `bra`|Lule Sami | `smj`| |Breton |`br`|Luxembourgish | `lb` |
-|Bulgarian | `bg`|Mahasu Pahari (Devanagiri) | `bfz`|
+|Bulgarian | `bg`|Mahasu Pahari (Devanagari) | `bfz`|
|Bundeli | `bns`|Malay (Latin) | `ms` | |Buryat (Cyrillic) | `bua`|Maltese | `mt`
-|Catalan |`ca`|Malto (Devanagiri) | `kmj`
+|Catalan |`ca`|Malto (Devanagari) | `kmj`
|Cebuano |`ceb`|Manx | `gv` | |Chamling | `rab`|Maori | `mi`| |Chamorro |`ch`|Marathi | `mr`|
-|Chhattisgarhi (Devanagiri)| `hne`| Mongolian (Cyrillic) | `mn`|
+|Chhattisgarhi (Devanagari)| `hne`| Mongolian (Cyrillic) | `mn`|
|Chinese Simplified | `zh-Hans`|Montenegrin (Cyrillic) | `cnr-cyrl`| |Chinese Traditional | `zh-Hant`|Montenegrin (Latin) | `cnr-latn`| |Cornish |`kw`|Neapolitan | `nap` |
The following table lists the supported languages for print text by the most rec
|Czech | `cs` |Northern Sami (Latin) | `sme`| |Danish | `da` |Norwegian | `no` | |Dari | `prs`|Occitan | `oc` |
-|Dhimal (Devanagiri) | `dhi`| Ossetic | `os`|
-|Dogri (Devanagiri) | `doi`|Pashto | `ps`|
+|Dhimal (Devanagari) | `dhi`| Ossetic | `os`|
+|Dogri (Devanagari) | `doi`|Pashto | `ps`|
|Dutch | `nl` |Persian | `fa`| |English | `en` |Polish | `pl` | |Erzya (Cyrillic) | `myv`|Portuguese | `pt` |
The following table lists the supported languages for print text by the most rec
|Fijian |`fj`|Romanian | `ro` | |Filipino |`fil`|Romansh | `rm` | |Finnish | `fi` | Russian | `ru` |
-|French | `fr` |Sadri (Devanagiri) | `sck` |
+|French | `fr` |Sadri (Devanagari) | `sck` |
|Friulian | `fur` | Samoan (Latin) | `sm` |
|Gagauz (Latin) | `gag`|Sanskrit (Devanagari) | `sa`|
|Galician | `gl` |Santali (Devanagari) | `sat` |
|German | `de` | Scots | `sco` |
|Gilbertese | `gil` | Scottish Gaelic | `gd` |
-|Gondi (Devanagiri) | `gon`| Serbian (Latin) | `sr`, `sr-latn`|
-|Greenlandic | `kl` | Sherpa (Devanagiri) | `xsr` |
-|Gurung (Devanagiri) | `gvr`| Sirmauri (Devanagiri) | `srx`|
+|Gondi (Devanagari) | `gon`| Serbian (Latin) | `sr`, `sr-latn`|
+|Greenlandic | `kl` | Sherpa (Devanagari) | `xsr` |
+|Gurung (Devanagari) | `gvr`| Sirmauri (Devanagari) | `srx`|
|Haitian Creole | `ht` | Skolt Sami | `sms` |
-|Halbi (Devanagiri) | `hlb`| Slovak | `sk`|
+|Halbi (Devanagari) | `hlb`| Slovak | `sk`|
|Hani | `hni` | Slovenian | `sl` | |Haryanvi | `bgc`|Somali (Arabic) | `so`| |Hawaiian | `haw`|Southern Sami | `sma`
The following table lists the supported languages for print text by the most rec
|Irish | `ga` |Turkmen (Latin) | `tk`| |Italian | `it` |Tuvan | `tyv`| |Japanese | `ja` |Upper Sorbian | `hsb` |
-|Jaunsari (Devanagiri) | `Jns`|Urdu | `ur`|
+|Jaunsari (Devanagari) | `Jns`|Urdu | `ur`|
|Javanese | `jv` |Uyghur (Arabic) | `ug`| |Kabuverdianu | `kea` |Uzbek (Arabic) | `uz-arab`| |Kachin (Latin) | `kac` |Uzbek (Cyrillic) | `uz-cyrl`|
-|Kangri (Devanagiri) | `xnr`|Uzbek (Latin) | `uz` |
+|Kangri (Devanagari) | `xnr`|Uzbek (Latin) | `uz` |
|Karachay-Balkar | `krc`|Volapük | `vo` |
|Kara-Kalpak (Cyrillic) | `kaa-cyrl`|Walser | `wae` |
|Kara-Kalpak (Latin) | `kaa` |Welsh | `cy` |
applied-ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md
Previously updated : 05/23/2022 Last updated : 10/20/2022
+monikerRange: '>=form-recog-2.1.0'
+recommendations: false
# Configure secure access with managed identities and private endpoints + This how-to guide will walk you through the process of enabling secure connections for your Form Recognizer resource. You can secure the following connections: * Communication between a client application within a Virtual Network (VNET) and your Form Recognizer Resource.
That's it! You can now configure secure access for your Form Recognizer resource
* **AccessDenied**:
- :::image type="content" source="media/managed-identities/access-denied.png" alt-text="Screenshot of a access denied error.":::
+ :::image type="content" source="media/managed-identities/access-denied.png" alt-text="Screenshot of an access denied error.":::
**Resolution**: Check to make sure there's connectivity between the computer accessing the form recognizer studio and the form recognizer service. For example, you may need to add the client IP address to the Form Recognizer service's networking tab.
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
Previously updated : 05/23/2022 Last updated : 10/20/2022 -
+monikerRange: '>=form-recog-2.1.0'
+recommendations: false
# Managed identities for Form Recognizer + Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources: * You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Unlike security keys and authentication tokens, managed identities eliminate the need for developers to manage credentials.
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Previously updated : 10/12/2022 Last updated : 10/20/2022 recommendations: false
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
Previously updated : 10/14/2022 Last updated : 10/20/2022 recommendations: false
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
Previously updated : 10/14/2022 Last updated : 10/20/2022 monikerRange: '>=form-recog-2.1.0' recommendations: false
recommendations: false
# Form Recognizer service quotas and limits <!-- markdownlint-disable MD033 -->
+**Applies to:** ![Form Recognizer v3.0 checkmark](media/yes-icon.png) **Form Recognizer v3.0**. ![Form Recognizer v2.1 checkmark](media/yes-icon.png) **Form Recognizer v2.1**.
-This article contains a quick reference and the **detailed description** of Azure Form Recognizer service Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/form-recognizer/). It also contains some best practices to avoid request throttling.
+* This article contains a quick reference and the **detailed description** of Azure Form Recognizer service Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/form-recognizer/). It also contains some best practices to avoid request throttling.
-For the usage with [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [Form Recognizer REST API](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md) and [Sample Labeling Tool](https://fott-2-1.azurewebsites.net/).
+* These limits apply to usage with the [Form Recognizer SDK](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [Form Recognizer REST API](quickstarts/get-started-sdks-rest-api.md?view=form-recog-3.0.0&preserve-view=true), [Form Recognizer Studio](quickstarts/try-v3-form-recognizer-studio.md), and the [Sample Labeling Tool](https://fott-2-1.azurewebsites.net/):
| Quota | Free (F0)<sup>1</sup> | Standard (S0) | |--|--|--|
applied-ai-services V3 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/v3-migration-guide.md
Previously updated : 10/10/2022 Last updated : 10/20/2022
+monikerRange: '>=form-recog-2.1.0'
recommendations: false
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 10/10/2022 Last updated : 10/20/2022
+monikerRange: '>=form-recog-2.1.0'
+recommendations: false
<!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD036 -->
# What's new in Azure Form Recognizer + Form Recognizer service is updated on an ongoing basis. Bookmark this page to stay up to date with release notes, feature enhancements, and documentation updates. ## October 2022
-With the latest preview release, Form Recognizer's Read (OCR), Layout, and Custom template models support 134 new languages including Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages, making it a total of 299 supported languages across the most recent GA and the new preview versions. Please refer to the [supported languages](language-support.md) page to see all supported languages.
+With the latest preview release, Form Recognizer's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages. Form Recognizer now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages.
Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications. ## September 2022
-### Region expansion for training custom neural models
+### Region expansion for training custom neural models
+
+Training custom neural models is now supported in six new regions.
-Training custom neural models is now supported in six additional regions.
* Australia East * Central US * East Asia
Training custom neural models is now supported in six additional regions.
* UK South * West US2
-For a complete list of regions where training is supported see [custom neural models](concept-custom-neural.md).
+For a complete list of regions where training is supported, see [custom neural models](concept-custom-neural.md).
#### Form Recognizer SDK version 4.0.0 GA release
This release includes the following updates:
* **AI quality improvements**
- * [**prebuilt-read**](concept-read.md). Enhanced support for single characters, handwritten dates, amounts, names, other entities commonly found in receipts and invoices as well as improved processing of digital PDF documents.
+ * [**prebuilt-read**](concept-read.md). Enhanced support for single characters, handwritten dates, amounts, names, and other entities commonly found in receipts and invoices, as well as improved processing of digital PDF documents.
* [**prebuilt-layout**](concept-layout.md). Support for better detection of cropped tables, borderless tables, and improved recognition of long spanning cells.
automation Automation Dsc Create Composite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-create-composite.md
keywords: dsc,powershell,configuration,setup
Last updated 08/08/2019+ # Convert configurations to composite resources
-> Applies To: Windows PowerShell 5.1
+> **Applies to:** :heavy_check_mark: Windows PowerShell 5.1
-Once you get started authoring configurations,
-you can quickly create "scenarios" that manage
-groups of settings.
-Examples would be:
+> [!IMPORTANT]
+> This article refers to a solution that is maintained by the Open Source community and support is only available in the form of GitHub collaboration, not from Microsoft.
-- create a web server-- create a DNS server-- create a server that runs SharePoint-- configure a SQL cluster-- manage firewall settings-- manage password settings
+This article explains how to create **scenarios** that manage groups of settings after you get started with authoring configurations. Here are a few examples:
-If you are interested in sharing this work with others,
-the best option is to package the configuration as a
-[Composite Resource](/powershell/dsc/resources/authoringresourcecomposite).
-Creating composite resources for the first time can be overwhelming.
+- Create a web server
+- Create a DNS server
+- Create a server that runs SharePoint
+- Configure a SQL cluster
+- Manage firewall settings
+- Manage password settings
-> [!NOTE]
-> This article refers to a solution that is maintained by the Open Source community.
-> Support is only available in the form of GitHub collaboration, not from Microsoft.
+We recommend that you package the configuration as a [Composite Resource](/powershell/dsc/resources/authoringresourcecomposite) before you share it with others, because creating composite resources for the first time can be challenging.
+
+## Community project - CompositeResource
+
+[CompositeResource](https://github.com/microsoft/compositeresource) is a community-maintained solution that
+was created to resolve this challenge. CompositeResource automates the process of creating a new module from your configuration.
-## Community project: CompositeResource
-A community maintained solution named
-[CompositeResource](https://github.com/microsoft/compositeresource)
-has been created to resolve this challenge.
+## Create a composite resource module
-CompositeResource automates the process of creating a new module from your configuration.
-You start by
-[dot sourcing](https://devblogs.microsoft.com/scripting/how-to-reuse-windows-powershell-functions-in-scripts/)
-the configuration script on your workstation (or build server)
-so it is loaded in memory.
-Next, rather than running the configuration to generate a MOF file,
-use the function provided by the CompositeResource module to automate a conversion.
-The cmdlet will load the contents of your configuration,
-get the list of parameters,
-and generate a new module with everything you need.
+Follow these steps to create a composite resource module:
-Once you have generated a module,
-you can increment the version and add release notes each time you make changes
-and publish it to your own
+1. Begin by [dot sourcing](https://devblogs.microsoft.com/scripting/how-to-reuse-windows-powershell-functions-in-scripts/) the configuration script on your workstation (or build server) to ensure that it is loaded in memory.
+1. Use the function provided by the *CompositeResource* module to automate the conversion instead of running the configuration to generate a *MOF* file.
+   The cmdlet loads the contents of your configuration, gets the list of parameters, and generates a new module with everything you need.
+1. After you generate a module, you can increment the version and add release notes each time you make changes and publish it to your own
[PowerShellGet repository](https://powershellexplained.com/2018-03-03-Powershell-Using-a-NuGet-server-for-a-PSRepository/?utm_source=blog&utm_medium=blog&utm_content=psscriptrepo).
+1. Use the module in the [Composable Authoring Experience](./compose-configurationwithcompositeresources.md) in Azure, or add them to [DSC Configuration scripts](/powershell/dsc/configurations/configurations) to generate MOF files and [upload the MOF files to Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
+1. Register your servers from either [on-premises](./automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines) or [in Azure](./automation-dsc-onboarding.md#enable-azure-vms) to pull configurations.
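+
+As a sketch of steps 1 and 2, the conversion might look like the following. The `ConvertTo-CompositeResource` cmdlet name and parameters are based on the CompositeResource project documentation, and `MyConfiguration.ps1` is a placeholder; verify both against the module version you install.
+
+```powershell
+# Step 1: load the configuration into memory by dot sourcing it
+. .\MyConfiguration.ps1
+
+# Step 2: convert the loaded configuration into a composite resource module,
+# instead of compiling it to a MOF file. 'MyConfiguration' stands in for the
+# name of your own configuration.
+ConvertTo-CompositeResource -ConfigurationName 'MyConfiguration' -Author 'Your Name' -Description 'Example composite resource'
+```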
-Once you have created a composite resource module containing your configuration
-(or multiple configurations),
-you can use them in the
-[Composable Authoring Experience](./compose-configurationwithcompositeresources.md)
-in Azure,
-or add them to
-[DSC Configuration scripts](/powershell/dsc/configurations/configurations)
-to generate MOF files
-and
-[upload the MOF files to Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
-Then register your servers from either
-[on-premises](./automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines)
-or [in Azure](./automation-dsc-onboarding.md#enable-azure-vms)
-to pull configurations.
-The latest update to the project has also published
-[runbooks](https://www.powershellgallery.com/packages?q=DscGallerySamples)
-for Azure Automation to automate the process of importing configurations
-from the PowerShell Gallery.
+> [!NOTE]
+> The latest update to the project has also published [runbooks](https://www.powershellgallery.com/packages?q=DscGallerySamples) for Azure Automation to automate the process of importing configurations from the PowerShell Gallery.
-To try out automating creation of composite resources for DSC, visit the
-[PowerShell Gallery](https://www.powershellgallery.com/packages/compositeresource/)
-and download the solution or click "Project Site"
-to view the
-[documentation](https://github.com/microsoft/compositeresource).
+To try automating the creation of composite resources for DSC, go to the [PowerShell Gallery](https://www.powershellgallery.com/packages/compositeresource/) and download the solution, or select **Project Site** to view the [documentation](https://github.com/microsoft/compositeresource).
## Next steps
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/set-runtime-version.md
The following table provides an example of [`linuxFxVersion`] values required to
| [Hosting plan](functions-scale.md) | [`linuxFxVersion` value][`linuxFxVersion`] | | | |
-| Consumption | `DOCKER\|mcr.microsoft.com/azure-functions/mesh:4.11.2-node18` |
-| Premium/Dedicated | `DOCKER\|mcr.microsoft.com/azure-functions/node:4.11.2-node18-appservice` |
+| Consumption | `DOCKER\|mcr.microsoft.com/azure-functions/mesh:4.11.2-node18` |
+| Premium/Dedicated | `DOCKER\|mcr.microsoft.com/azure-functions/node:4.11.2-node18-appservice` |
When needed, a support professional can provide you with a valid base image URI for your application.
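For example, assuming you have the Azure CLI installed, a sketch of setting this value with `az functionapp config set` (`<APP-NAME>` and `<RESOURCE-GROUP>` are placeholders for your own resources) might look like this. Quote the value so the shell doesn't interpret the pipe character:

```azurecli
az functionapp config set \
  --name <APP-NAME> \
  --resource-group <RESOURCE-GROUP> \
  --linux-fx-version "DOCKER|mcr.microsoft.com/azure-functions/node:4.11.2-node18-appservice"
```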
The function app restarts after the change is made to the site config.
> [!div class="nextstepaction"] > [See Release notes for runtime versions](https://github.com/Azure/azure-webjobs-sdk-script/releases)
-[`linuxFxVersion`]: functions-app-settings.md#linuxfxversion
+[`linuxFxVersion`]: functions-app-settings.md#linuxfxversion
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
You might have a limited number of Azure app actions per action group.
### Email
-Ensure that your email filtering is configured appropriately. Emails are sent from the following email addresses:
+Ensure that your email filtering and any malware/spam prevention services are configured appropriately. Emails are sent from the following email addresses:
- azure-noreply@microsoft.com - azureemail-noreply@microsoft.com
For source IP address ranges, see [Action group IP addresses](../app/ip-addresse
- Learn more about [ITSM Connector](./itsmc-overview.md). - Learn more about [rate limiting](./alerts-rate-limiting.md) on alerts. - Get an [overview of activity log alerts](./alerts-overview.md), and learn how to receive alerts.-- Learn how to [configure alerts whenever a Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
+- Learn how to [configure alerts whenever a Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|PEBytesIn|Yes|Bytes In|Count|Total|Total number of Bytes Out|No Dimensions|
|PEBytesIn|Yes|Bytes In|Count|Total|Total number of Bytes In|No Dimensions|
|PEBytesOut|Yes|Bytes Out|Count|Total|Total number of Bytes Out|No Dimensions|
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-managed-identity.md
+
+ Title: Remote-write in Azure Monitor Managed Service for Prometheus (preview)
+description: Describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication.
++ Last updated : 10/20/2022++
+# Azure Monitor managed service for Prometheus remote write - managed identity (preview)
+Azure Monitor managed service for Prometheus is intended to be a replacement for self-managed Prometheus so you don't need to manage a Prometheus server in your Kubernetes clusters. You may also choose to use the managed service to centralize data from self-managed Prometheus clusters for long-term data retention and to create a centralized view across your clusters. In this case, you can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from your self-managed Prometheus into our managed service.
+
+This article describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster using managed identity authentication. You either use an existing identity created by AKS or [create one of your own](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Both options are described here.
+
+## Architecture
+Azure Monitor provides a reverse proxy container (Azure Monitor sidecar container) that provides an abstraction for ingesting Prometheus remote write metrics and helps authenticate packets. The Azure Monitor sidecar container currently supports user-assigned identity and Azure Active Directory (Azure AD) based authentication to ingest Prometheus remote write metrics into an Azure Monitor workspace.
++
+## Cluster configurations
+This article applies to the following cluster configurations:
+
+- Azure Kubernetes service (AKS)
+- Azure Arc-enabled Kubernetes cluster
+
+## Prerequisites
+
+- You must have self-managed Prometheus running on your AKS cluster. For example, see [Using Azure Kubernetes Service with Grafana and Prometheus](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/using-azure-kubernetes-service-with-grafana-and-prometheus/ba-p/3020459).
+- You used [Kube-Prometheus Stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) when you set up Prometheus on your AKS cluster.
++
+## Create Azure Monitor workspace
+Data for Azure Monitor managed service for Prometheus is stored in an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). You must [create a new workspace](../essentials/azure-monitor-workspace-overview.md#create-an-azure-monitor-workspace) if you don't already have one.
++
+## Locate AKS node resource group
+The node resource group of the AKS cluster contains resources that you will require for other steps in this process. This resource group has the name `MC_<AKS-RESOURCE-GROUP>_<AKS-CLUSTER-NAME>_<REGION>`. You can locate it from the **Resource groups** menu in the Azure portal. Start by making sure that you can locate this resource group since other steps below will refer to it.
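+
+As an alternative to the portal, a sketch of looking up the node resource group with the Azure CLI:
+
+```azurecli
+# Print the node resource group name for the cluster
+az aks show --resource-group <AKS-RESOURCE-GROUP> --name <AKS-CLUSTER-NAME> --query nodeResourceGroup -o tsv
+```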
++
+## Get the client ID of the user assigned identity
+You will require the client ID of the identity that you're going to use. Note this value for use in later steps in this process.
+
+Get the **Client ID** from the **Overview** page of your [managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
++
+Instead of creating your own ID, you can use one of the identities created by AKS, which are listed in [Use a managed identity in Azure Kubernetes Service](../../aks/use-managed-identity.md). This article uses the `Kubelet` identity. The name of this identity is `<AKS-CLUSTER-NAME>-agentpool`, and it's located in the node resource group of the AKS cluster.
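+
+If you use the `Kubelet` identity, a sketch of retrieving its client ID with the Azure CLI (using the `identityProfile.kubeletidentity.clientId` query path documented for AKS managed identities):
+
+```azurecli
+# Print the client ID of the kubelet identity
+az aks show --resource-group <AKS-RESOURCE-GROUP> --name <AKS-CLUSTER-NAME> --query identityProfile.kubeletidentity.clientId -o tsv
+```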
++++
+## Assign managed identity the Monitoring Metrics Publisher role on the data collection rule
+The managed identity requires the *Monitoring Metrics Publisher* role on the data collection rule associated with your Azure Monitor workspace.
+
+1. From the menu of your Azure Monitor Workspace account, click the **Data collection rule** to open the **Overview** page for the data collection rule.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot showing data collection rule used by Azure Monitor workspace." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png":::
+
+2. Click on **Access control (IAM)** in the **Overview** page for the data collection rule.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png" alt-text="Screenshot showing Access control (IAM) menu item on the data collection rule Overview page." lightbox="media/prometheus-remote-write-managed-identity/azure-monitor-account-access-control.png":::
+
+3. Click **Add** and then **Add role assignment**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot showing adding a role assignment on Access control pages." lightbox="media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png":::
+
+4. Select **Monitoring Metrics Publisher** role and click **Next**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot showing list of role assignments." lightbox="media/prometheus-remote-write-managed-identity/add-role-assignment.png":::
+
+5. Select **Managed Identity** and then click **Select members**. Choose the subscription the user assigned identity is located in and then select **User-assigned managed identity**. Select the User Assigned Identity that you're going to use and click **Select**.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/select-managed-identity.png" alt-text="Screenshot showing selection of managed identity." lightbox="media/prometheus-remote-write-managed-identity/select-managed-identity.png":::
+
+6. Click **Review + assign** to complete the role assignment.
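+
+The portal steps above can also be scripted. A sketch with the Azure CLI, where `<DCR-RESOURCE-ID>` is the full Azure resource ID of the data collection rule:
+
+```azurecli
+# Grant the managed identity the Monitoring Metrics Publisher role on the data collection rule
+az role assignment create --assignee <MANAGED-IDENTITY-CLIENT-ID> --role "Monitoring Metrics Publisher" --scope <DCR-RESOURCE-ID>
+```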
++
+## Grant AKS cluster access to the identity
+This step isn't required if you're using an AKS identity since it will already have access to the cluster.
+
+> [!IMPORTANT]
+> You must have owner/user access administrator access on the cluster.
+
+1. Identify the virtual machine scale sets in the [node resource group](#locate-aks-node-resource-group) for your AKS cluster.
+
+ :::image type="content" source="media/prometheus-remote-write-managed-identity/resource-group-details-virtual-machine-scale-sets.png" alt-text="Screenshot showing virtual machine scale sets in the node resource group." lightbox="media/prometheus-remote-write-managed-identity/resource-group-details-virtual-machine-scale-sets.png":::
+
+2. Run the following command in Azure CLI for each virtual machine scale set.
+
+ ```azurecli
+ az vmss identity assign -g <AKS-NODE-RESOURCE-GROUP> -n <AKS-VMSS-NAME> --identities <USER-ASSIGNED-IDENTITY-RESOURCE-ID>
+ ```
++
+## Deploy the sidecar and configure remote write on the Prometheus server
+
+1. Copy the YAML below and save it to a file. This YAML assumes you're using 8081 as your listening port. Modify that value if you use a different port.
+
+ ```yml
+ prometheus:
+ prometheusSpec:
+ externalLabels:
+ cluster: <AKS-CLUSTER-NAME>
+
+ ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
+ ##
+ remoteWrite:
+ - url: "http://localhost:8081/api/v1/write"
+
+ containers:
+ - name: prom-remotewrite
+ image: <CONTAINER-IMAGE-VERSION>
+ imagePullPolicy: Always
+ ports:
+ - name: rw-port
+ containerPort: 8081
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: rw-port
+ readinessProbe:
+ httpGet:
+ path: /ready
+ port: rw-port
+ env:
+ - name: INGESTION_URL
+ value: "<INGESTION-URL>"
+ - name: LISTENING_PORT
+ value: "8081"
+ - name: IDENTITY_TYPE
+ value: "userAssigned"
+ - name: AZURE_CLIENT_ID
+ value: "<MANAGED-IDENTITY-CLIENT-ID>"
+ # Optional parameters
+ - name: CLUSTER
+ value: "<CLUSTER-NAME>"
+ ```
++
+2. Replace the following values in the YAML.
+
+ | Value | Description |
+ |:|:|
+ | `<AKS-CLUSTER-NAME>` | Name of your AKS cluster |
+ | `<CONTAINER-IMAGE-VERSION>` | `mcr.microsoft.com/azuremonitor/prometheus/promdev/prom-remotewrite:prom-remotewrite-20221012.2`<br>This is the remote write container image version. |
+ | `<INGESTION-URL>` | **Metrics ingestion endpoint** from the **Overview** page for the Azure Monitor workspace |
+ | `<MANAGED-IDENTITY-CLIENT-ID>` | **Client ID** from the **Overview** page for the managed identity |
+ | `<CLUSTER-NAME>` | Name of the cluster Prometheus is running on |
+
+
+++
+3. Open Azure Cloud Shell and upload the YAML file.
+4. Use helm to apply the YAML file to update your Prometheus configuration with the following CLI commands.
+
+ ```azurecli
+ # set context to your cluster
+ az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
+
+ # use helm to update your remote write config
+ helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus pod resides>
+ ```
+++
+## Next steps
+
+- [Learn more about Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md).
backup Backup Azure Mars Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md
We recommend that you check the following before you start troubleshooting Micro
| Error | Recommended actions | | | |
-|Failed to download the vault credential file. (ID: 403) | <ul><li> Try downloading the vault credentials by using a different browser, or take these steps: <ul><li> Start Internet Explorer. Select F12. </li><li> Go to the **Network** tab and clear the cache and cookies. </li> <li> Refresh the page.<br></li></ul> <li> Check if the subscription is disabled/expired.<br></li> <li> Check if any firewall rule is blocking the download. <br></li> <li> Ensure you haven't exhausted the limit on the vault (50 machines per vault).<br></li> <li> Ensure the user has the Azure Backup permissions that are required to download vault credentials and register a server with the vault. See [Use Azure role-based access control to manage Azure Backup recovery points](backup-rbac-rs-vault.md).</li></ul> |
+|Failed to download the vault credential file. (ID: 403) | - Try downloading the vault credentials by using a different browser, or follow these steps: <br><br> a. Start Internet Explorer. Select F12. <br> b. Go to the **Network** tab and clear the cache and cookies. <br> c. Refresh the page. <br><br> - Check if the subscription is disabled/expired.<br><br> - Check if any firewall rule is blocking the download. <br><br> - Ensure you haven't exhausted the limit on the vault (50 machines per vault). <br><br> - Ensure the user has the Azure Backup permissions that are required to download vault credentials and register a server with the vault. See [Use Azure role-based access control to manage Azure Backup recovery points](backup-rbac-rs-vault.md). |
## The Microsoft Azure Recovery Service Agent was unable to connect to Microsoft Azure Backup
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 10/14/2022 Last updated : 10/21/2022
The resource health check functions in following conditions:
| Resource health check | Details | | | |
-| **Supported Resources** | Recovery Services vault |
-| **Supported Regions** | East US, East US 2, Central US, South Central US, North Central US, West Central US, West US, West US 2, West US 3, Canada East, Canada Central, North Europe, West Europe, UK West, UK South, France Central, France South, Sweden Central, Sweden South, East Asia, South East Asia, Japan East, Japan West, Korea Central, Korea South, Australia East, Australia Central, Australia Central 2, Australia South East, South Africa North, South Africa West, UAE North, UAE Central, Brazil South East, Brazil South, Switzerland North, Switzerland West, Norway East, Norway West, Germany North, Germany West Central, West India, Central India, South India, Jio India West, Jio India Central. |
+| **Supported Resources** | Recovery Services vault, Backup vault |
+| **Supported Regions** | - Recovery Services vault: Supported in all Azure public regions, US Sovereign cloud, and China Sovereign cloud. <br><br> - Backup vault: Supported in all Azure public regions, except Sovereign clouds. |
| **For unsupported regions** | The resource health status is shown as "Unknown". | ## Zone-redundant storage support
backup Monitoring And Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/monitoring-and-alerts-overview.md
Title: Monitoring and reporting solutions for Azure Backup description: Learn about different monitoring and reporting solutions provided by Azure Backup. Previously updated : 09/14/2022 Last updated : 10/21/2022
The following table provides a summary of the different monitoring and reporting
| Scenario | Solutions available | | | | | Monitor backup jobs and backup instances | - **Built-in monitoring**: You can monitor backup jobs and backup instances in real time via the [Backup center](./backup-center-overview.md) dashboard. <br><br> - **Customized monitoring dashboards**: Azure Backup allows you to use non-portal clients, such as [PowerShell](./backup-azure-vms-automation.md), [CLI](./create-manage-azure-services-using-azure-command-line-interface.md), and [REST API](./backup-azure-arm-userestapi-managejobs.md), to query backup monitoring data for use in your custom dashboards. In addition, you can query your backups at scale (across vaults, subscriptions, regions, and Lighthouse tenants) using [Azure Resource Graph (ARG)](./query-backups-using-azure-resource-graph.md). [Backup Explorer](./monitor-azure-backup-with-backup-explorer.md) is one sample monitoring workbook, which uses data in ARG that you can use as a reference to create your own dashboards. |
-| Monitor overall backup health | - **Resource Health**: You can monitor the health of your Recovery Services vault and troubleshoot events causing the resource health issues. [Learn more](../service-health/resource-health-overview.md). You can view the health history and identify events affecting the health of your resource. You can also trigger alerts related to the resource health events. <br><br> - **Azure Monitor Metrics**: Azure Backup also offers the above health metrics via Azure Monitor, which provides you more granular details about the health of your backups. This also allows you to configure alerts and notifications on these metrics. [Learn more](./metrics-overview.md). |
+| Monitor overall backup health | - **Resource Health**: You can monitor the health of your Recovery Services vault and Backup vaults, and troubleshoot events causing the resource health issues. [Learn more](../service-health/resource-health-overview.md). You can view the health history and identify events affecting the health of your resource. You can also trigger alerts related to the resource health events. <br><br> - **Azure Monitor Metrics**: Azure Backup also offers the above health metrics via Azure Monitor, which provides you more granular details about the health of your backups. This also allows you to configure alerts and notifications on these metrics. [Learn more](./metrics-overview.md). |
| Get alerted to critical backup incidents | - **Built-in alerts using Azure Monitor**: Azure Backup provides an [alerting solution based on Azure Monitor](./backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup) for scenarios, such as deletion of backup data, disabling of soft-delete, backup failures, and restore failures. You can view and manage these alerts via Backup center. To [configure notifications](./backup-azure-monitoring-built-in-monitor.md#configuring-notifications-for-alerts) for these alerts (for example, emails), you can use Azure Monitor's [Action rules](../azure-monitor/alerts/alerts-action-rules.md?tabs=portal) and [Action groups](../azure-monitor/alerts/action-groups.md) to route alerts to a wide range of notification channels. <br><br> - **Azure Backup Metric Alerts using Azure Monitor (preview)**: You can write custom alert rules using Azure Monitor metrics to monitor the health of your backup items across different KPIs. [Learn more](./metrics-overview.md) <br><br> - **Classic Alerts**: This is the older alerting solution, which you can [access using the Backup Alerts tab](./backup-azure-monitoring-built-in-monitor.md#backup-alerts-in-recovery-services-vault) in the Recovery Services vault blade. These alerts don't appear in Backup center. If you're using classic alerts, we recommend to start using one or more of the Azure Monitor based alert solutions (described above) as it's the forward-looking solution for alerting. <br><br> - **Custom log alerts**: If you've scenarios where an alert needs to be generated based on custom logic, you can use [Log Analytics based alerts](./backup-azure-monitoring-use-azuremonitor.md#create-alerts-by-using-log-analytics) for such scenarios, provided you've configured your vaults to send diagnostics data to a Log Analytics (LA) workspace. 
Due to the current [frequency at which data in an LA workspace is updated](./backup-azure-monitoring-use-azuremonitor.md#diagnostic-data-update-frequency), this solution is typically used for scenarios where it's acceptable to have a short time difference between the occurrence of the actual incident and the generation of the alert. | | Analyze historical trends | - **Built-in reports**: You can use [Backup Reports](./configure-reports.md) (based on Azure Monitor Logs) to analyze historical trends related to job success and backup usage, and discover optimization opportunities for your backups. You can also [configure periodic emails](./backup-reports-email.md) of these reports. <br><br> - **Customized reporting dashboards**: You can also query the data in Azure Monitor Logs (LA) using the documented [system functions](./backup-reports-system-functions.md) to create your own dashboards to analyze historical information related to your backups. | | Audit user triggered actions on vaults | **Activity Logs**: You can use standard [Activity Logs](../azure-monitor/essentials/activity-log.md) for your vaults to view information on various user-triggered actions, such as modification of backup policies, restoration of a backup item, and so on. You can also configure alerts on Activity Logs, or export these logs to a Log Analytics workspace for long-term retention. |
cognitive-services Concept Generating Thumbnails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-generating-thumbnails.md
# Smart-cropped thumbnails
-A thumbnail is a reduced-size representation of an image. Thumbnails are used to represent images and other data in a more economical, layout-friendly way. The Computer Vision API uses smart cropping to create intuitive image thumbnails that include the key objects in the image.
+A thumbnail is a reduced-size representation of an image. Thumbnails are used to represent images and other data in a more economical, layout-friendly way. The Computer Vision API uses smart cropping to create intuitive image thumbnails that include the most important regions of an image with priority given to any detected faces.
#### [Version 3.2](#tab/3-2)

The Computer Vision thumbnail generation algorithm works as follows:
The following table illustrates thumbnails defined by smart-cropping for the exa
#### [Version 4.0](#tab/4-0)
-The Computer Vision smart-cropping utility takes a given aspect ratio (or several) and returns the bounding box coordinates (in pixels) of the region(s) identified. Your app can then crop and return the image using those coordinates.
+The Computer Vision smart-cropping utility takes one or more aspect ratios in the range [0.75, 1.80] and returns the bounding box coordinates (in pixels) of the region(s) identified. Your app can then crop and return the image using those coordinates.
> [!IMPORTANT]
> This feature uses face detection to help determine important regions in the image. The detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face).
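As a hedged sketch of calling the service, the v3.2 `generateThumbnail` REST operation can be invoked with curl. The resource endpoint, key, and image URL below are placeholders, not values from this article; the `width`, `height`, and `smartCropping` query parameters follow the v3.2 operation shape.

```shell
# Placeholder resource endpoint and key — substitute your own Computer Vision resource.
ENDPOINT="https://myresource.cognitiveservices.azure.com"
KEY="<subscription-key>"

# v3.2 generateThumbnail: request a 500x500 thumbnail with smart cropping enabled.
URL="${ENDPOINT}/vision/v3.2/generateThumbnail?width=500&height=500&smartCropping=true"
echo "$URL"

# Only call the service when a real key has been supplied.
if [ "$KEY" != "<subscription-key>" ]; then
  curl -s -X POST "$URL" \
    -H "Ocp-Apim-Subscription-Key: ${KEY}" \
    -H "Content-Type: application/json" \
    -d '{"url": "https://example.com/photo.jpg"}' \
    -o thumbnail.jpg
fi
```

The binary thumbnail is written to `thumbnail.jpg`; for the v4.0 smart-cropping utility, the service instead returns bounding-box coordinates that your app uses to crop the image itself.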
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
In this tutorial:
> [!NOTE]
> The tutorial uses early release versions of notation and notation plugins.
-1. Install notation with plugin support from the [release version](https://github.com/notaryproject/notation/releases/)
+1. Install notation 0.11.0-alpha.4 with plugin support on a Linux environment. You can also download the package for other environments from the [release page](https://github.com/notaryproject/notation/releases/tag/v0.11.0-alpha.4).
```bash
# Download, extract and install
- curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v0.9.0-alpha.1/notation_0.9.0-alpha.1_linux_amd64.tar.gz
+ curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v0.11.0-alpha.4/notation_0.11.0-alpha.4_linux_amd64.tar.gz
tar xvzf notation.tar.gz
# Copy the notation cli to the desired bin directory in your PATH
cp ./notation /usr/local/bin
```
-2. Install the notation Azure Key Vault plugin for remote signing and verification
+2. Install the notation Azure Key Vault plugin for remote signing and verification.
> [!NOTE]
> The plugin directory varies depending upon the operating system being used. The directory path below assumes Ubuntu.
```bash
# Download the plugin
curl -Lo notation-azure-kv.tar.gz \
- https://github.com/Azure/notation-azure-kv/releases/download/v0.3.1-alpha.1/notation-azure-kv_0.3.1-alpha.1_Linux_amd64.tar.gz
+ https://github.com/Azure/notation-azure-kv/releases/download/v0.4.0-alpha.4/notation-azure-kv_0.4.0-alpha.4_Linux_amd64.tar.gz
# Extract to the plugin directory
tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv
```
-3. List the available plugins and verify that the plugin is available
+3. List the available plugins and verify that the plugin is available.
```bash
notation plugin ls
```
> [!NOTE]
> For easy execution of commands in the tutorial, provide values for the Azure resources to match the existing ACR and AKV resources.
-1. Configure AKV resource names
+1. Configure AKV resource names.
```bash
# Name of the existing Azure Key Vault used to store the signing keys
CERT_PATH=./${KEY_NAME}.pem
```
-2. Configure ACR and image resource names
+2. Configure ACR and image resource names.
```bash
# Name of the existing registry example: myregistry.azurecr.io
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
### Create a self-signed certificate (Azure CLI)
-1. Create a certificate policy file
+1. Create a certificate policy file.
Once the certificate policy file is executed as below, it creates a valid signing certificate compatible with **notation** in AKV. The EKU listed is for code-signing, but isn't required for notation to sign artifacts.
EOF
```
-1. Create the certificate
+1. Create the certificate.
```azure-cli
az keyvault certificate create -n $KEY_NAME --vault-name $AKV_NAME -p @my_policy.json
```
-1. Get the Key ID for the certificate
+1. Get the Key ID for the certificate.
```bash
KEY_ID=$(az keyvault certificate show -n $KEY_NAME --vault-name $AKV_NAME --query 'kid' -o tsv)
```
-4. Download public certificate
+4. Download the public certificate.
```bash
CERT_ID=$(az keyvault certificate show -n $KEY_NAME --vault-name $AKV_NAME --query 'id' -o tsv)
az keyvault certificate download --file $CERT_PATH --id $CERT_ID --encoding PEM
```
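Before registering the downloaded PEM with notation, it can be useful to sanity-check its subject and validity window. A minimal sketch using openssl follows; a throwaway self-signed certificate with the hypothetical subject `example-signer` stands in for the AKV-issued one.

```shell
# Generate a throwaway self-signed certificate as a stand-in for the downloaded PEM.
# "example-signer" is a hypothetical subject, not a value from this tutorial.
CERT_PATH=./example-signer.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout ./example-signer.key -out "$CERT_PATH" \
  -subj "/CN=example-signer"

# Inspect the subject and validity window before running 'notation cert add'.
openssl x509 -in "$CERT_PATH" -noout -subject -dates
```

If the subject or expiry looks wrong here, fix the certificate policy and re-create the certificate before continuing.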
-5. Add the Key ID to the keys and certs
+5. Add the Key ID to the keys and certs.
```bash
notation key add --name $KEY_NAME --plugin azure-kv --id $KEY_ID
notation cert add --name $KEY_NAME $CERT_PATH
```
-6. List the keys and certs to confirm
+6. List the keys and certs to confirm.
```bash
notation key ls
```
## Build and sign a container image
-1. Build and push a new image with ACR Tasks
+1. Build and push a new image with ACR Tasks.
```azure-cli
az acr build -r $ACR_NAME -t $IMAGE $IMAGE_SOURCE
```
-2. Authenticate with your individual Azure AD identity to use an ACR token
+2. Authenticate with your individual Azure AD identity to use an ACR token.
```azure-cli
export USER_NAME="00000000-0000-0000-0000-000000000000"
export NOTATION_PASSWORD=$PASSWORD
```
-3. Sign the container image
+3. Choose [COSE](https://datatracker.ietf.org/doc/html/rfc8152) or JWS signature envelope to sign the container image.
+ - Sign the container image with the COSE signature envelope:
+
+ ```bash
+ notation sign --envelope-type cose --key $KEY_NAME $IMAGE
+ ```
+
+ - Sign the container image with the default JWS signature envelope:
+
```bash
notation sign --key $KEY_NAME $IMAGE
```
-
+
## View the graph of artifacts with the ORAS CLI
-ACR support for ORAS artifacts enables a linked graph of supply chain artifacts that can be viewed through the ORAS CLI or the Azure CLI
+ACR support for ORAS artifacts enables a linked graph of supply chain artifacts that can be viewed through the ORAS CLI or the Azure CLI.
-1. Signed images can be view with the ORAS CLI
+1. View signed images with the ORAS CLI.
```bash
oras login -u $USER_NAME -p $PASSWORD $REGISTRY
```
## View the graph of artifacts with the Azure CLI
-1. List the manifest details for the container image
+1. List the manifest details for the container image.
```azure-cli
az acr manifest show-metadata $IMAGE -o jsonc
```
notation verify $IMAGE
## Next steps
-[Enforce policy to only deploy signed container images to Azure Kubernetes Service (AKS) utilizing **ratify** and **gatekeeper**.](https://github.com/Azure/notation-azure-kv/blob/main/docs/nv2-sign-verify-aks.md)
+See [Enforce policy to only deploy signed container images to Azure Kubernetes Service (AKS) utilizing **ratify** and **gatekeeper**](https://github.com/Azure/notation-azure-kv/blob/main/docs/nv2-sign-verify-aks.md).
cost-management-billing Cost Management Billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md
description: You use Cost Management + Billing features to conduct billing admin
keywords: Previously updated : 09/01/2022+ Last updated : 10/20/2022 -+
-# What is Cost Management + Billing?
+# What is Microsoft Cost Management and Billing?
-By using the Microsoft cloud, you can significantly improve the technical performance of your business workloads. It can also reduce your costs and the overhead required to manage organizational assets. However, the business opportunity creates a risk because of the potential for waste and inefficiencies that are introduced into your cloud deployments. Cost Management + Billing is a suite of tools provided by Microsoft that help you analyze, manage, and optimize the costs of your workloads. Using the suite helps ensure that your organization is taking advantage of the benefits provided by the cloud.
+Microsoft Cost Management is a suite of tools that help organizations monitor, allocate, and optimize the cost of their Microsoft Cloud workloads. Cost Management is available to anyone with access to a billing or resource management scope, from the cloud finance team with access to the billing account to DevOps teams managing resources in subscriptions and resource groups.
-You can think of your Azure workloads like the lights in your home. When you leave to go out for the day, are you leaving the lights on? Could you use different bulbs that are more efficient to help reduce your monthly energy bill? Do you have more lights in one room than are needed? You can use Cost Management + Billing to apply a similar thought process to the workloads used by your organization.
+Billing is where you can manage your accounts, invoices, and payments. Billing is available to anyone with access to a billing account or other billing scope, like billing profiles and invoice sections. The cloud finance team and organizational leaders are typically included.
-With Azure products and services, you only pay for what you use. As you create and use Azure resources, you're charged for the resources. Because of the deployment ease for new resources, the costs of your workloads can jump significantly without proper analysis and monitoring. You use Cost Management + Billing features to:
+Together, Cost Management and Billing are your gateway to the Microsoft Commerce system, which is available to everyone throughout the journey: from initial sign-up and billing account management, to the purchase and management of Microsoft and third-party Marketplace offers, to financial operations (FinOps) tools.
-- Conduct billing administrative tasks such as paying your bill
-- Manage billing access to costs
-- Download cost and usage data that was used to generate your monthly invoice
-- Proactively apply data analysis to your costs
-- Set spending thresholds
-- Identify opportunities for workload changes that can optimize your spending
+A few examples of what you can do in Cost Management and Billing include:
-To learn more about how to approach cost management as an organization, take a look at the [Cost Management best practices](./costs/cost-mgt-best-practices.md) article.
+- Report on and analyze costs in the Azure portal, Microsoft 365 admin center, or externally by exporting data.
+- Monitor costs proactively with budget, anomaly, and scheduled alerts.
+- Split shared costs with cost allocation rules.
+- Create and organize subscriptions to customize invoices.
+- Configure payment options and pay invoices.
+- Manage your billing information, such as legal entity, tax information, and agreements.
-![Diagram of the Cost Management + Billing optimization process.](./media/cost-management-optimization-process.png)
+## How charges are processed
-## Understand Azure Billing
+To understand how Cost Management and Billing works, you should first understand the Commerce system. At its core, Microsoft Commerce is a data pipeline that underpins all Microsoft commercial transactions, whether consumer or commercial. There are many inputs and connections to the pipeline. It includes the sign-up and Marketplace purchase experiences. However, we'll focus on the pieces that make up your cloud billing account and how charges are processed within the system.
-Azure Billing features are used to review your invoiced costs and manage access to billing information. In larger organizations, procurement and finance teams usually conduct billing tasks.
-A billing account is created when you sign up to use Azure. You use your billing account to manage your invoices, payments, and track costs. You can have access to multiple billing accounts. For example, you might have signed up for Azure for your personal projects. So, you might have an individual Azure subscription with a billing account. You could also have access through your organization's Enterprise Agreement or Microsoft Customer Agreement. For each scenario, you would have a separate billing account.
+On the left side of the diagram, your Azure, Microsoft 365, Dynamics 365, and Power Platform services all push data into the Commerce data pipeline. Each service publishes data on a different cadence. In general, if data for one service is slower than another, it's due to how frequently those services publish their usage and charges.
-### Billing accounts
+As the data makes its way through the pipeline, the rating system applies discounts based on your specific price sheet and generates *rated usage*, which includes price and quantity for each cost record. It's the basis for what you see in Cost Management, but we'll cover that later. At the end of the month, credits are applied and the invoice is published. The process starts 72 hours after your billing period ends, which is usually the last day of the calendar month for most accounts. For example, if your billing period ends on March 31, charges will be finalized on April 4 at midnight.
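The 72-hour window above can be sanity-checked with a quick date calculation. This sketch assumes GNU coreutils `date` (Linux); the March 31 example date is illustrative.

```shell
# A billing period ending March 31 closes at April 1 00:00 UTC;
# charges finalize 72 hours later (GNU date relative-date syntax).
date -u -d "2023-04-01 00:00 UTC + 72 hours" "+%Y-%m-%d %H:%M"
# → 2023-04-04 00:00
```

Until that point, charges for the closing period remain open and may still change.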
-The Azure portal currently supports the following types of billing accounts:
+> [!IMPORTANT]
+> Credits are applied like a gift card or other payment instrument before the invoice is generated. While credit status is tracked as new charges flow into the data pipeline, credits aren't explicitly applied to these charges until the end of the month.
-- **Microsoft Online Services Program**: An individual billing account for a Microsoft Online Services Program is created when you sign up for Azure through the Azure website. For example, when you sign up for an [Azure Free Account](./manage/create-free-services.md), account with pay-as-you-go rates or as a Visual studio subscriber.
+Everything up to this point makes up the billing process. It's where charges are finalized, discounts are applied, and invoices are published. Billing account and billing profile owners may be familiar with this process as part of the Billing experience within the Azure portal or Microsoft 365 admin center. The Billing experience allows you to review credits, manage your billing address and payment methods, pay invoices, and more – everything related to managing your billing relationship with Microsoft.
-- **Enterprise Agreement**: A billing account for an Enterprise Agreement is created when your organization signs an Enterprise Agreement (EA) to use Azure.
+After discounts are applied, cost details then flow into Cost Management, where:
-- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an account with pay-as-you-go rates or upgrade their [Azure Free Account](./manage/create-free-services.md) may have a billing account for a Microsoft Customer Agreement as well.
+- The [anomaly detection](./understand/analyze-unexpected-charges.md) model identifies anomalies daily based on normalized usage (not rated usage).
+- The cost allocation engine applies tag inheritance and [splits shared costs](./costs/allocate-costs.md).
+- AWS cost and usage reports are pulled based on any [connectors for AWS](./costs/aws-integration-manage.md) you may have configured.
+- Azure Advisor cost recommendations are pulled in to enable cost savings insights for subscriptions and resource groups.
+- Cost alerts are sent out for [budgets](./costs/tutorial-acm-create-budgets.md), [anomalies](./understand/analyze-unexpected-charges.md#create-an-anomaly-alert), [scheduled alerts](./costs/save-share-views.md#subscribe-to-cost-alerts), and more based on the configured settings.
-## Understand Cost Management
+Lastly, cost details are made available from [cost analysis](./costs/quick-acm-cost-analysis.md) in the Azure portal and published to your storage account via [scheduled exports](./costs/tutorial-export-acm-data.md).
-Although related, billing differs from cost management. Billing is the process of invoicing customers for goods or services and managing the commercial relationship.
+## How Cost Management and Billing relate
-Cost Management shows organizational cost and usage patterns with advanced analytics. Reports in Cost Management show the usage-based costs consumed by Azure services and third-party Marketplace offerings. Costs are based on negotiated prices and factor in reservation and Azure Hybrid Benefit discounts. Collectively, the reports show your internal and external costs for usage and Azure Marketplace charges. **Other charges, such as support and taxes aren't yet shown in reports**. The reports help you understand your spending and resource use and can help find spending anomalies. Predictive analytics are also available. Cost Management uses Azure management groups, budgets, and recommendations to show clearly how your expenses are organized and how you might reduce costs.
+[Cost Management](https://portal.azure.com/#view/Microsoft_Azure_CostManagement/Menu) is a set of FinOps tools that enable you to analyze, manage, and optimize your costs.
-You can use the Azure portal or various APIs for export automation to integrate cost data with external systems and processes. Automated billing data export and scheduled reports are also available.
+[Billing](https://portal.azure.com/#view/Microsoft_Azure_GTM/ModernBillingMenuBlade) provides all the tools you need to manage your billing account and pay invoices.
-Watch the Cost Management overview video for a quick overview about how Cost Management can help you save money in Azure. To watch other videos, visit the [Cost Management YouTube channel](https://www.youtube.com/c/AzureCostManagement).
+Cost Management is available from within the Billing experience. It's also available from every subscription, resource group, and management group in the Azure portal, so that everyone has full visibility into the costs they're responsible for and can optimize their workloads to maximize efficiency. Cost Management is also available independently to streamline the process for managing cost across multiple billing accounts, subscriptions, resource groups, and management groups.
->[!VIDEO https://www.youtube.com/embed/el4yN5cHsJ0]
-### Plan and control expenses
-The ways that Cost Management help you plan for and control your costs include: Cost analysis, budgets, recommendations, and exporting cost management data.
+## What data is included in Cost Management and Billing?
-You use cost analysis to explore and analyze your organizational costs. You can view aggregated costs by organization to understand where costs are accrued and to identify spending trends. And you can see accumulated costs over time to estimate monthly, quarterly, or even yearly cost trends against a budget.
+Within the Billing experience, you can manage all the products, subscriptions, and recurring purchases you use; review your credits and commitments; and view and pay your invoices. Invoices are available online or as PDFs and include all billed charges and any applicable taxes. Credits are applied to the total invoice amount when invoices are generated. This invoicing process happens in parallel to Cost Management data processing, which means Cost Management doesn't include credits, taxes, and some purchases, like support charges in non-Microsoft Customer Agreement (MCA) accounts.
-Budgets help you plan for and meet financial accountability in your organization. They help prevent cost thresholds or limits from being surpassed. Budgets can also help you inform others about their spending to proactively manage costs. And with them, you can see how spending progresses over time.
+The classic Cloud Solution Provider (CSP) and sponsorship subscriptions aren't supported in Cost Management. These subscriptions will be supported after they transition to MCA.
-Recommendations show how you can optimize and improve efficiency by identifying idle and underutilized resources. Or, they can show less expensive resource options. When you act on the recommendations, you change the way you use your resources to save money. To act, you first view cost optimization recommendations to view potential usage inefficiencies. Next, you act on a recommendation to modify your Azure resource use to a more cost-effective option. Then you verify the action to make sure that the change you make is successful.
+For more information about supported offers, what data is included, or how data is refreshed and retained in Cost Management, see [Understand Cost Management data](./costs/understand-cost-mgt-data.md).
-If you use external systems to access or review cost management data, you can easily export the data from Azure. And you can set a daily scheduled export in CSV format and store the data files in Azure storage. Then, you can access the data from your external system.
+## Manage your billing account and invoices
-### Additional Azure tools
+Microsoft has several types of billing accounts. Each type has a slightly different experience to support the unique aspects of the billing account. To learn more, see [Billing accounts and scopes](./manage/view-all-accounts.md).
-Azure has other tools that aren't a part of the Cost Management + Billing feature set. However, they play an important role in the cost management process. To learn more about these tools, see the following links.
+You use billing account management tasks to:
-- [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) - Use this tool to estimate your up-front cloud costs.
-- [Azure Migrate](../migrate/migrate-services-overview.md) - Assess your current datacenter workload for insights about what's needed from an Azure replacement solution.
-- [Azure Advisor](../advisor/advisor-overview.md) - Identify unused VMs and receive recommendations about Azure reserved instance purchases.
-- [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) - Use your current on-premises Windows Server or SQL Server licenses for VMs in Azure to save.
+- View invoices and make payments.
+- Configure your billing address and PO numbers.
+- Create and organize subscriptions into departments or billing profiles.
+- Renew or cancel products you've purchased.
+- Enable access to Cost Management, Reservations, and Marketplace offers.
+- View agreements, credits, and commitments.
+
+Management for classic Cloud Solution Provider (CSP) and classic sponsorship subscriptions isn't available in Billing or Cost Management experiences because they're billed differently.
+
+## Report on and analyze costs
+
+Cost Management and Billing include several tools to help you understand, report on, and analyze your invoiced Microsoft Cloud and AWS costs.
+
+- [**Cost analysis**](./costs/quick-acm-cost-analysis.md) is a tool for ad-hoc cost exploration. Get quick answers with lightweight insights and analytics.
+- **Power BI** is an advanced solution to build more extensive dashboards and complex reports or combine costs with other data. Power BI is available for billing accounts and billing profiles.
+- [**Exports and the Cost Details API**](./automate/usage-details-best-practices.md) enable you to integrate cost details into external systems or business processes.
+- The **Credits** page shows your available credit or prepaid commitment balance. Credits aren't included in cost analysis.
+- The **Invoices** page provides a list of all previously invoiced charges and their payment status for your billing account.
+- **Connectors for AWS** enable you to ingest your AWS cost details into Azure to facilitate managing Azure and AWS costs together. Once configured, the connector also enables other capabilities, like budget and scheduled alerts.
+
+For more information, see [Get started with Cost Management and Billing reporting](./costs/reporting-get-started.md).
+
+## Organize and allocate costs
+
+Organizing and allocating costs are critical to ensuring invoices are routed to the correct business units and can be further split for internal billing, also known as *chargeback*. Cost Management and Billing offer the following options to organize resources and subscriptions:
+
+- MCA **billing profiles** and **invoice sections** are used to [group subscriptions into invoices](./manage/mca-section-invoice.md). Each billing profile represents a separate invoice that can be billed to a different business unit and each invoice section is segmented separately within those invoices. You can also view costs by billing profile or invoice section in cost analysis.
+- EA **departments** and **enrollment accounts** are conceptually similar to invoice sections, as groups of subscriptions, but they aren't represented within the invoice PDF. They're included within the cost details backing each invoice, however. You can also view costs by department or enrollment account in cost analysis.
+- **Management groups** also allow grouping subscriptions together, but offer a few key differences:
+ - Management group access is inherited down to the subscriptions and resources.
+ - Management groups can be layered into multiple levels and subscriptions can be placed at any level.
+ - Management groups aren't included in cost details.
+ - All historical costs are returned for management groups based on the subscriptions currently within that hierarchy. When a subscription moves, all historical cost moves.
+ - Management groups are supported by Azure Policy and can have rules assigned to automate compliance reporting for your cost governance strategy.
+- **Subscriptions** and **resource groups** are the lowest level at which you can organize your cloud solutions. At Microsoft, every product – sometimes even limited to a single region – is managed within its own subscription. It simplifies cost governance but requires more overhead for subscription management. Most organizations use subscriptions for business units and separating dev/test from production or other environments, then use resource groups for the products. It complicates cost management because resource group owners don't have a way to manage cost across resource groups. On the other hand, it's a straightforward way to understand who's responsible for most resource-based charges. Keep in mind that not all charges come from resources and some don't have resource groups or subscriptions associated with them. It also changes as you move to MCA billing accounts.
+- **Resource tags** are the only way to add your own business context to cost details and are perhaps the most flexible way to map resources to applications, business units, environments, owners, etc. For more information, see [How tags are used in cost and usage data](./costs/understand-cost-mgt-data.md#how-tags-are-used-in-cost-and-usage-data) for limitations and important considerations.
+
+In addition to organizing resources and subscriptions using the subscription hierarchy and metadata (tags), Cost Management also offers the ability to *move* or split shared costs via cost allocation rules. Cost allocation doesn't change the invoice. Cost allocation simply moves charges from one subscription, resource group, or tag to another subscription, resource group, or tag. The goal of cost allocation is to split and move shared costs to reduce overhead. And, to more accurately report on where charges are ultimately coming from (albeit indirectly), which should drive more complete accountability. For more information, see [Allocate Azure costs](./costs/allocate-costs.md).
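To make tag-based allocation concrete, here's a minimal sketch that sums cost per tag value over a toy exported cost file. The `envTag` column and three-column layout are illustrative assumptions; real scheduled exports have many more columns and different names.

```shell
# Toy cost-details file; real scheduled exports have many more columns.
cat > costs.csv <<'EOF'
resourceGroup,cost,envTag
rg-web,10.50,prod
rg-api,4.25,prod
rg-dev,2.00,dev
EOF

# Sum cost per tag value (column 3), skipping the header row.
awk -F, 'NR>1 { total[$3] += $2 } END { for (t in total) printf "%s %.2f\n", t, total[t] }' costs.csv | sort
# → dev 2.00
# → prod 14.75
```

The same grouping idea is what cost analysis does when you group by tag, except the service handles untagged resources and schema differences for you.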
+
+How you organize and allocate costs plays a huge role in how people within your organization can manage and optimize costs. Be sure to plan ahead and revisit your allocation strategy yearly.
+
+## Monitor costs with alerts
+
+Cost Management and Billing offer many different types of emails and alerts to keep you informed and help you proactively manage your account and incurred costs.
+
+- [**Budget alerts**](./costs/tutorial-acm-create-budgets.md) notify recipients when cost exceeds a predefined cost or forecast amount. Budgets can be visualized in cost analysis and are available on every scope supported by Cost Management. Subscription and resource group budgets can also be configured to notify an action group to take automated actions to reduce or even stop further charges.
+- [**Anomaly alerts**](./understand/analyze-unexpected-charges.md) notify recipients when an unexpected change in daily usage has been detected. It can be a spike or a dip. Anomaly detection is only available for subscriptions and can be viewed within the cost analysis preview. Anomaly alerts can be configured from the cost alerts page.
+- [**Scheduled alerts**](./costs/save-share-views.md#subscribe-to-cost-alerts) notify recipients about the latest costs on a daily, weekly, or monthly schedule based on a saved cost view. Alert emails include a visual chart representation of the view and can optionally include a CSV file. Views are configured in cost analysis, but recipients don't require access to cost in order to view the email, chart, or linked CSV.
+- **EA commitment balance alerts** are automatically sent to any notification contacts configured on the EA billing account when the balance is 90% or 100% used.
+- **Invoice alerts** can be configured for MCA billing profiles and Microsoft Online Services Program (MOSP) subscriptions. For details, see [View and download your Azure invoice](./understand/download-azure-invoice.md).
+
+For more information, see [Monitor usage and spending with cost alerts](./costs/cost-mgt-alerts-monitor-usage-spending.md).
+
+## Optimize costs
+
+Microsoft offers a wide range of tools for optimizing your costs. Some of these tools are available outside the Cost Management and Billing experience, but are included for completeness.
+
+- There are many [**free services**](https://azure.microsoft.com/pricing/free-services/) available in Azure. Be sure to pay close attention to the constraints. Different services are free indefinitely, for 12 months, or 30 days. Some are free up to a specific amount of usage and some may have dependencies on other services that aren't free.
+- The [**Azure pricing calculator**](https://azure.microsoft.com/pricing/calculator/) is the best place to start when planning a new deployment. You can tweak many aspects of the deployment to understand how you'll be charged for that service and identify which SKUs/options will keep you within your desired price range. For more information about pricing for each of the services you use, see [pricing details](https://azure.microsoft.com/pricing/).
+- [**Azure Advisor cost recommendations**](./costs/tutorial-acm-opt-recommendations.md) should be your first stop when interested in optimizing existing resources. Advisor recommendations are updated daily and are based on your usage patterns. Advisor is available for subscriptions and resource groups. Management group users can also see recommendations but will need to select the desired subscriptions. Billing users can only see recommendations for subscriptions they have resource access to.
+- [**Azure savings plans**](./savings-plan/index.yml) save you money when you have consistent usage of Azure compute resources. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices.
+- [**Azure reservations**](https://azure.microsoft.com/reservations/) help you save up to 72% compared to pay-as-you-go rates by pre-committing to specific usage amounts for a set time duration.
+- [**Azure Hybrid Benefit**](https://azure.microsoft.com/pricing/hybrid-benefit/) helps you significantly reduce costs by using on-premises Windows Server and SQL Server licenses or RedHat and SUSE Linux subscriptions on Azure.
+
+For other options, see [Azure benefits and incentives](https://azure.microsoft.com/pricing/offers/#cloud).
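As a rough illustration of what an "up to 65%" commitment discount means in practice, here's a small hypothetical calculation — the $0.40/hour rate is made up, and actual rates vary by SKU, region, and commitment:

```python
def discounted_rate(payg_hourly: float, discount_pct: float) -> float:
    """Effective hourly rate after a commitment discount such as a savings plan."""
    return payg_hourly * (1 - discount_pct / 100)

# A VM billed at $0.40/hour pay-as-you-go, with the maximum 65% savings-plan discount.
hourly = discounted_rate(0.40, 65)
monthly = hourly * 730  # roughly 730 hours in a month
```

The same arithmetic applies to reservations; only the discount percentage and commitment terms differ.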
## Next steps
cost-management-billing Avoid Charges Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/avoid-charges-free-account.md
You get a limited quantity of free services each month with your Azure free acco
## You used some services that aren't free
-Once you've upgraded your account, you get charged pay-as-you-go rates for using services that aren't included for free with your Azure free account. Only certain tiers within a service are included for free. To learn about services included with your free account, see the [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq/). You can check your service usage in the Azure portal. To learn more, see [Plan and control expenses](../cost-management-billing-overview.md#plan-and-control-expenses).
+Once you've upgraded your account, you get charged pay-as-you-go rates for using services that aren't included for free with your Azure free account. Only certain tiers within a service are included for free. To learn about services included with your free account, see the [Azure free account FAQ](https://azure.microsoft.com/free/free-account-faq/). You can check your service usage in the Azure portal. To learn more, see [Report on and analyze costs](../cost-management-billing-overview.md#report-on-and-analyze-costs).
## You reached the end of your free 12 months
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
Previously updated : 09/22/2022 Last updated : 10/19/2022 # Transform data by using the Script activity in Azure Data Factory or Synapse Analytics
Here is the JSON format for defining a Script activity:
... ] },
+ "scriptBlockExecutionTimeout": "<time>",
"logSettings": { "logDestination": "<ActivityOutput> or <ExternalStore>", "logLocationSettings":{
The following table describes these JSON properties:
|scripts.parameter.type |The data type of the parameter. The type is a logical type and follows the type mapping of each connector. |No | |scripts.parameter.direction |The direction of the parameter. It can be Input, Output, or InputOutput. The value is ignored if the direction is Output. The ReturnValue type is not supported; set the return value of the stored procedure to an output parameter to retrieve it. |No | |scripts.parameter.size |The max size of the parameter. Only applies to Output/InputOutput direction parameters of type string/byte[]. |No |
+|scriptBlockExecutionTimeout |The wait time for the script block execution operation to complete before it times out. |No |
|logSettings |The settings to store the output logs. If not specified, script log is disabled. |No | |logSettings.logDestination |The destination of log output. It can be ActivityOutput or ExternalStore. Default: ActivityOutput. |No | |logSettings.logLocationSettings |The settings of the target location if logDestination is ExternalStore. |No |
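Taken together, these properties might combine in a Script activity definition like the following sketch — the activity name, query text, timeout value, linked service name, and log path are all illustrative placeholders, not values from the article:

```json
{
    "name": "SampleScriptActivity",
    "type": "Script",
    "typeProperties": {
        "scripts": [
            { "type": "Query", "text": "SELECT 1" }
        ],
        "scriptBlockExecutionTimeout": "02:00:00",
        "logSettings": {
            "logDestination": "ExternalStore",
            "logLocationSettings": {
                "linkedServiceName": {
                    "referenceName": "MyStorageLinkedService",
                    "type": "LinkedServiceReference"
                },
                "path": "script-activity-logs"
            }
        }
    }
}
```

A timespan such as `02:00:00` (two hours) is the usual format for timeout properties in pipeline JSON.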
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
The following tables identify the services and tools you can use to plan for dat
| Source | Target | App Data Access<br/>Layer Assessment | Database<br/>Assessment | Performance<br/>Assessment | | | | | | |
-| SQL Server | Azure SQL DB | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL DB MI | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL DB | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL DB MI | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
+| SQL Server | Azure SQL VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
| SQL Server | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) | | |
-| RDS SQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
+| RDS SQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dma/dma-overview) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
| Oracle | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | | | Oracle | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | | | Oracle | Azure DB for PostgreSQL -<br/>Single server | | [Ora2Pg*](http://ora2pg.darold.net/start.html) | |
The following tables identify the services and tools you can use to plan for dat
| Source | Target | Schema | Data<br/>(Offline) | Data<br/>(Online) | | | | | | |
-| SQL Server | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| SQL Server | Azure SQL DB MI | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| SQL Server | Azure SQL VM | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL DB MI | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| SQL Server | Azure SQL VM | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/)<br>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| SQL Server | Azure Synapse Analytics | | | |
-| RDS SQL | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| RDS SQL | Azure SQL DB | [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| RDS SQL | Azure SQL DB MI | [DMS](https://azure.microsoft.com/services/database-migration/) | [DMS](https://azure.microsoft.com/services/database-migration/) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
-| RDS SQL | Azure SQL VM | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](/azure/dms/articles/dms/migration-using-azure-data-studio)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
+| RDS SQL | Azure SQL VM | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[DMA](/sql/dma/dma-overview)<br/>[DMS](https://azure.microsoft.com/services/database-migration/) | [Azure SQL Migration extension](/azure/dms/migration-using-azure-data-studio)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) |
| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) | | Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/solutions) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-for-microsoft-azure/) | | Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ispirer*](https://www.ispirer.com/solutions) | [Ispirer*](https://www.ispirer.com/solutions) | [Ora2Pg*](http://ora2pg.darold.net/start.html) |
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
To complete this tutorial, you need to:
- As an alternative to using the above built-in roles, you can assign a custom role as defined in [this article](resource-custom-roles-sql-database-ads.md). > [!IMPORTANT] > An Azure account is required only when configuring the migration steps; it isn't required for the assessment or Azure recommendation steps in the migration wizard.
-* Create a target [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).
+* Create a target [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).
* Ensure that the SQL Server login used to connect to the source SQL Server is a member of `db_datareader`, and that the login for the target SQL server is a member of `db_owner`. * Migrate the database schema from source to target using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio. * If you're using the Azure Database Migration Service for the first time, ensure that the Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider)
At this point, you've completed the migration to Azure SQL Database. We encour
## Next steps
-* For a tutorial showing you how to create an Azure SQL Database using the Azure portal, PowerShell, or AZ CLI commands, see [Create a single database - Azure SQL Database](/azure-sql/database/single-database-create-quickstart).
-* For information about Azure SQL Database, see [What is Azure SQL Database](/azure-sql/database/sql-database-paas-overview).
-* For information about connecting to Azure SQL Database, see [Connect applications](/azure-sql/database/connect-query-content-reference-guide).
+* For a tutorial showing you how to create an Azure SQL Database using the Azure portal, PowerShell, or AZ CLI commands, see [Create a single database - Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).
+* For information about Azure SQL Database, see [What is Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
+* For information about connecting to Azure SQL Database, see [Connect applications](/azure/azure-sql/database/connect-query-content-reference-guide).
education-hub Set Up Course Classroom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/set-up-course-classroom.md
# Quickstart: Set up a course and create a classroom
+> [!WARNING]
+> This is the legacy version of the Azure Education Hub.
+ This quickstart explains how to set up a course and classroom in the Microsoft Azure Education Hub, including subscription details. ## Prerequisites
education-hub Set Up Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/set-up-lab.md
+
+ Title: Set up a lab in Azure Education Hub
+description: This quickstart explains how to set up a lab in Azure Education Hub.
+++ Last updated : 10/19/2022+++++
+# Quickstart: Set up a lab
+
+This quickstart explains how to set up a lab in the Microsoft Azure Education Hub, including subscription details.
+
+## Prerequisites
+
+- An academic grant with an approved credit amount
+
+### Subscriptions
+
+Each student is given a subscription tied to a monetary cap of credit allocated by the professor. The term *monetary cap* describes the US dollar amount of an academic sponsorship. For example, a $1,000 monetary cap provides the recipient with $1,000 USD of Azure credit at [published WebDirect rates](https://azure.microsoft.com/pricing/calculator/).
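The cap mechanics are simple arithmetic; a hypothetical helper (not part of any Education Hub API) to track remaining credit might look like:

```python
def remaining_credit(monetary_cap: float, charges: list[float]) -> float:
    """Credit left under a monetary cap after a series of charges.

    Usage can't drive the balance below zero: once the cap is reached,
    no further credit is available on the subscription.
    """
    return max(monetary_cap - sum(charges), 0.0)
```

For example, a $1,000 cap with $495.75 of accumulated usage leaves $504.25 of credit.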
+
+The educator can choose to allocate a cap to the subscriptions to prevent
+unintended use, and then set an expiration date. For example:
+
+- **Flat amount per class**: Each student gets $*x* to manage for the entire quarter or
+semester.
+
+At the subscription level, you can increase or decrease the cap and changes will take effect with
+minimal latency. When the class or project ends, you can reallocate unused cap to other subscriptions prior to the expiration date.
+
+## Create a lab
+
+Follow these steps to create a lab:
+
+1. Select the **Labs** page in the Azure Education Hub to open the tool you use to create and manage courses. A table opens showing all your existing labs.
+
+ :::image type="content" source="media/set-up-lab/navigate-to-lab-blade.png" alt-text="Azure Education Hub Labs page" border="false":::
+
+1. Select the **+ Add** icon in the upper-left corner of the table to start the creation
+workflow.
+
+ :::image type="content" source="media/set-up-lab/create-a-lab-button.png" alt-text="Add a course to Azure Education Hub" border="false":::
+
+1. You can create a course roster by using two methods: by uploading a roster, or by using an invitation code.
+ - **Roster**: If you already have the names and logins of all students, you can populate and upload a roster file. To download a sample file of the .csv file needed to upload the roster, select the **Download sample file** link in the upper-right corner.
+ - **Invitation code**: If you choose to use an invitation code, decide how many codes can be redeemed and when they will expire. You'll send your students the following link to redeem the code: https://aka.ms/JoinEduLab.
+
+ :::image type="content" source="media/set-up-lab/create-a-lab.png" alt-text="Enter your invitation code in Azure Education Hub" border="false":::
+
+1. Select **Create** in the bottom-right corner. This might take a few
+moments to complete.
+
+ :::image type="content" source="media/set-up-lab/finalize-lab.png" alt-text="Create a classroom in Azure Education Hub" border="false":::
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create an assignment and allocate credit](create-assignment-allocate-credit.md)
iot-edge How To Configure Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-networking.md
EFLOW-Ext External
To use a specific virtual switch (**internal** or **external**), make sure you specify the correct parameters: `-vSwitchName` and `-vSwitchType`. For example, if you want to deploy the EFLOW VM with an **external switch** named **EFLOW-Ext**, then in an elevated PowerShell session use the following command: ```powershell
-Deploy-EflowVm -vSwitchType "External" -vSwitchName "EFLOW-Ext"
+Deploy-Eflow -vSwitchType "External" -vSwitchName "EFLOW-Ext"
```
By default, if no **static IP** address is set up, the EFLOW VM will try to allo
If you're using a **static IP**, you'll have to specify three parameters during EFLOW deployment: `-ip4Address`, `-ip4GatewayAddress` and `-ip4PrefixLength`. If any parameter is missing or incorrect, the EFLOW VM will fail to allocate an IP address and the installation will fail. For more information about EFLOW VM deployment, see [PowerShell functions for IoT Edge for Linux on Windows](./reference-iot-edge-for-linux-on-windows-functions.md#deploy-eflow). For example, if you want to deploy the EFLOW VM with an **external switch** named **EFLOW-Ext**, and a static IP configuration, with an IP address equal to **192.168.0.2**, gateway IP address equal to **192.168.0.1** and IP prefix length equal to **24**, then in an elevated PowerShell session use the following command: ```powershell
-Deploy-EflowVm -vSwitchType "External" -vSwitchName "EFLOW-Ext" -ip4Address "192.168.0.2" -ip4GatewayAddress "192.168.0.1" -ip4PrefixLength "24"
+Deploy-Eflow -vSwitchType "External" -vSwitchName "EFLOW-Ext" -ip4Address "192.168.0.2" -ip4GatewayAddress "192.168.0.1" -ip4PrefixLength "24"
``` >[!TIP]
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
If you are using IoT Edge for Linux on Windows, you need to use the SSH key loca
1. Open the IoT Edge security daemon config file: `/etc/aziot/config.toml`
+ >[!TIP]
+ >If the config file doesn't exist on your device yet, then use `/etc/aziot/config.toml.edge.template` as a template to create one.
+ 1. Find the `trust_bundle_cert` parameter at the beginning of the file. Uncomment this line, and provide the file URI to the root CA certificate on your device. ```toml
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
To disable v1_legacy_mode, use [Workspace.update](/python/api/azureml-core/azure
from azureml.core import Workspace ws = Workspace.from_config()
-ws.update(v1_legacy_mode=false)
+ws.update(v1_legacy_mode=False)
``` # [Azure CLI extension v1](#tab/azurecliextensionv1)
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
The code above defines a component with display name `Prep Data` using `@command_
* `display_name` is a friendly display name of the component in UI, which isn't unique. * `description` usually describes what task this component can complete. * `environment` specifies the run-time environment for this component. The environment of this component specifies a docker image and refers to the `conda.yaml` file.+
+ The `conda.yaml` file contains all packages used for the component, as follows:
+
+ :::code language="yaml" source="~/azureml-examples-v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/prep/conda.yaml":::
+ * The `prepare_data_component` function defines one input, `input_data`, and two outputs, `training_data` and `test_data`. `input_data` is the input data path; `training_data` and `test_data` are the output data paths for the training and test data. * This component converts the data from `input_data` into a training data CSV at `training_data` and a test data CSV at `test_data`.
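Stripped of the Azure ML decorator and input/output bindings, the core of such a prep component is just a train/test split written out as two CSV files. A stand-alone sketch of that idea — the function name, file layout, and 80/20 ratio here are illustrative, not taken from the sample code:

```python
import csv
from pathlib import Path


def prepare_data(input_csv: Path, training_csv: Path, test_csv: Path,
                 train_ratio: float = 0.8) -> None:
    """Split one input CSV into training and test CSVs, keeping the header in both."""
    with open(input_csv, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    cut = int(len(data) * train_ratio)  # first 80% of rows become training data
    for path, chunk in [(training_csv, data[:cut]), (test_csv, data[cut:])]:
        with open(path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(header)
            writer.writerows(chunk)
```

In the real component, the same logic runs inside the decorated function, with Azure ML supplying the input and output paths at execution time.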
Following is what a component looks like in the studio UI.
:::image type="content" source="./media/how-to-create-component-pipeline-python/prep-data-component.png" alt-text="Screenshot of the Prep Data component in the UI and code." lightbox ="./media/how-to-create-component-pipeline-python/prep-data-component.png":::
-#### Specify component run-time environment
-
-You'll need to modify the runtime environment in which your component runs.
--
-The above code creates an object of `Environment` class, which represents the runtime environment in which the component runs.
-
-The `conda.yaml` file contains all packages used for the component like following:
--- Now, you've prepared all source files for the `Prep Data` component.
The code above defines a component with display name `Train Image Classification
* The `keras_train_component` function defines one input `input_data` where training data comes from, one input `epochs` specifying epochs during training, and one output `output_model` where outputs the model file. The default value of `epochs` is 10. The execution logic of this component is from `train()` function in `train.py` above.
-#### Specify component run-time environment
- The train-model component has a slightly more complex configuration than the prep-data component. The `conda.yaml` is as follows: :::code language="yaml" source="~/azureml-examples-main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/conda.yaml":::
In this section, you'll learn to create a component specification in the valid Y
* This component has two inputs and one output. * Its source code path is defined in the `code` section; when the component runs in the cloud, all files from that path are uploaded as the snapshot of this component. * The `command` section specifies the command to execute while running this component.
-* The `environment` section contains a docker image and a conda yaml file.
-
-#### Specify component run-time environment
-
-The score component uses the same image and conda.yaml file as the train component. The source file is in the [sample repository](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/conda.yaml).
+* The `environment` section contains a docker image and a conda yaml file. The source file is in the [sample repository](https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/score/conda.yaml).
Now, you've got all source files for score-model component.
machine-learning Tutorial Designer Automobile Price Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-designer-automobile-price-deploy.md
You can update the online endpoint with new model trained in the designer. On th
## Limitations
-Due to datastore access limitation, if your inference pipeline contains **Import Data** or **Export Data** component, they'll be auto-removed when deploy to real-time endpoint.
+* Due to a datastore access limitation, if your inference pipeline contains an **Import Data** or **Export Data** component, it will be automatically removed when deployed to a real-time endpoint.
+
+* If you have datasets in the real-time inference pipeline and want to deploy it to a real-time endpoint, this flow currently supports only datasets registered from a **Blob** datastore. To use datasets from other datastore types, use Select Column to connect to your initial dataset with all columns selected, register the output of Select Column as a file dataset, and then replace the initial dataset in the real-time inference pipeline with this newly registered dataset.
+
+* If your inference graph contains an "Enter Data Manually" component that isn't connected to the same port as the "Web service Input" component, the "Enter Data Manually" component won't be executed during HTTP call processing. A workaround is to register the outputs of that "Enter Data Manually" component as a dataset, then in the inference pipeline draft, replace the "Enter Data Manually" component with the registered dataset.
+
+ :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/real-time-inferencepipeline-limitation.png" alt-text="Screenshot showing how to modify inference pipeline containing enter data manually component.":::
## Clean up resources
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-java.md
ms.devlang: java Previously updated : 01/16/2021 Last updated : 10/20/2022 # Use Java and JDBC with Azure Database for MySQL Flexible Server
This topic demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for MySQL Flexible Server](./index.yml).
-## Prerequisites
+JDBC is the standard Java API to connect to traditional relational databases.
+
+In this article, we'll include two authentication methods: Azure Active Directory (Azure AD) authentication and MySQL authentication. The **Passwordless** tab shows the Azure AD authentication and the **Password** tab shows the MySQL authentication.
+
+Azure AD authentication is a mechanism for connecting to Azure Database for MySQL using identities defined in Azure AD. With Azure AD authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management.
-- An Azure account with an active subscription.
+MySQL authentication uses accounts stored in MySQL. If you choose to use passwords as credentials for the accounts, these credentials will be stored in the `user` table. Because these passwords are stored in MySQL, you'll need to manage the rotation of the passwords by yourself.
+
+## Prerequisites
- [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
+- An Azure account with an active subscription.
+ [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
- [Azure Cloud Shell](../../cloud-shell/quickstart.md) or [Azure CLI](/cli/azure/install-azure-cli). We recommend Azure Cloud Shell so you'll be logged in automatically and have access to all the tools you'll need.
- A supported [Java Development Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 (included in Azure Cloud Shell).
- The [Apache Maven](https://maven.apache.org/) build tool.

## Prepare the working environment
-We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize the following configuration for your specific needs.
+First, use the following command to set up some environment variables.
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+```bash
+export AZ_RESOURCE_GROUP=database-workshop
+export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_LOCATION=<YOUR_AZURE_REGION>
+export AZ_MYSQL_AD_NON_ADMIN_USERNAME=demo-non-admin
+export AZ_USER_IDENTITY_NAME=<YOUR_USER_ASSIGNED_MANAGED_IDENTITY_NAME>
+export CURRENT_USERNAME=$(az ad signed-in-user show --query userPrincipalName -o tsv)
+export CURRENT_USER_OBJECTID=$(az ad signed-in-user show --query id -o tsv)
+```
+
+Replace the placeholders with the following values, which are used throughout this article:
+
+- `<YOUR_DATABASE_NAME>`: The name of your MySQL server, which should be unique across Azure.
+- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.
+- `<YOUR_USER_ASSIGNED_MANAGED_IDENTITY_NAME>`: The name of your user-assigned managed identity, which should be unique across Azure.
-Set up those environment variables by using the following commands:
+### [Password](#tab/password)
```bash
-AZ_RESOURCE_GROUP=database-workshop
-AZ_DATABASE_NAME= flexibleserverdb
-AZ_LOCATION=<YOUR_AZURE_REGION>
-AZ_MYSQL_USERNAME=demo
-AZ_MYSQL_PASSWORD=<YOUR_MYSQL_PASSWORD>
-AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
+export AZ_RESOURCE_GROUP=database-workshop
+export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_LOCATION=<YOUR_AZURE_REGION>
+export AZ_MYSQL_ADMIN_USERNAME=demo
+export AZ_MYSQL_ADMIN_PASSWORD=<YOUR_MYSQL_ADMIN_PASSWORD>
+export AZ_MYSQL_NON_ADMIN_USERNAME=demo-non-admin
+export AZ_MYSQL_NON_ADMIN_PASSWORD=<YOUR_MYSQL_NON_ADMIN_PASSWORD>
```

Replace the placeholders with the following values, which are used throughout this article:

-- `<YOUR_DATABASE_NAME>`: The name of your MySQL server. It should be unique across Azure.
-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can have the full list of available regions by entering `az account list-locations`.
-- `<YOUR_MYSQL_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
-- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to point your browser to [whatismyip.akamai.com](http://whatismyip.akamai.com/).
+- `<YOUR_DATABASE_NAME>`: The name of your MySQL server, which should be unique across Azure.
+- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.
+- `<YOUR_MYSQL_ADMIN_PASSWORD>` and `<YOUR_MYSQL_NON_ADMIN_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
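To make the password rule above concrete, here's a minimal, hypothetical Java helper (not part of this tutorial's code) that checks the stated policy: at least eight characters, drawn from at least three of the four character categories.

```java
public class PasswordPolicy {
    // Checks the policy described above: minimum eight characters, using
    // characters from at least three of four categories (English uppercase
    // letters, lowercase letters, digits, and non-alphanumeric characters).
    public static boolean isValid(String password) {
        if (password == null || password.length() < 8) {
            return false;
        }
        boolean upper = false, lower = false, digit = false, special = false;
        for (char c : password.toCharArray()) {
            if (Character.isUpperCase(c)) upper = true;
            else if (Character.isLowerCase(c)) lower = true;
            else if (Character.isDigit(c)) digit = true;
            else special = true;
        }
        int categories = (upper ? 1 : 0) + (lower ? 1 : 0)
                       + (digit ? 1 : 0) + (special ? 1 : 0);
        return categories >= 3;
    }
}
```

For example, `Abcdef1!` passes (all four categories), while `abcdefgh` fails (only one category).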
++

Next, create a resource group:
```azurecli
az group create \
    --name $AZ_RESOURCE_GROUP \
    --location $AZ_LOCATION \
- | jq
+ --output tsv
```
-> [!NOTE]
-> We use the `jq` utility, which is installed by default on [Azure Cloud Shell](https://shell.azure.com/) to display JSON data and make it more readable.
-> If you don't like that utility, you can safely remove the `| jq` part of all the commands we'll use.
-
## Create an Azure Database for MySQL instance
-The first thing we'll create is a managed MySQL server.
+### Create a MySQL server and set up admin user
+
+The first thing you'll create is a managed MySQL server.
> [!NOTE]
> You can read more detailed information about creating MySQL servers in [Create an Azure Database for MySQL server by using the Azure portal](./quickstart-create-server-portal.md).
-In [Azure Cloud Shell](https://shell.azure.com/), run the following script:
+#### [Passwordless connection (Recommended)](#tab/passwordless)
+
+If you're using Azure CLI, run the following command to make sure it has sufficient permission:
+
+```bash
+az login --scope https://graph.microsoft.com/.default
+```
+
+Run the following command to create the server:
+
+```azurecli
+az mysql flexible-server create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME \
+ --location $AZ_LOCATION \
+ --yes \
+ --output tsv
+```
+
+Run the following command to create a user-assigned managed identity:
+
+```azurecli
+az identity create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_USER_IDENTITY_NAME
+```
+
+> [!IMPORTANT]
+> After creating the user-assigned identity, ask your *Global Administrator* or *Privileged Role Administrator* to grant the following permissions for this identity: `User.Read.All`, `GroupMember.Read.All`, and `Application.Read.All`. For more information, see the [Permissions](/azure/mysql/flexible-server/concepts-azure-ad-authentication#permissions) section of [Active Directory authentication](/azure/mysql/flexible-server/concepts-azure-ad-authentication).
+
+Run the following command to assign the identity to the MySQL server for creating the Azure AD admin:
+
+```azurecli
+az mysql flexible-server identity assign \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --server-name $AZ_DATABASE_NAME \
+ --identity $AZ_USER_IDENTITY_NAME
+```
+
+Run the following command to set the Azure AD admin user:
+
+```azurecli
+az mysql flexible-server ad-admin create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --server-name $AZ_DATABASE_NAME \
+ --display-name $CURRENT_USERNAME \
+ --object-id $CURRENT_USER_OBJECTID \
+ --identity $AZ_USER_IDENTITY_NAME
+```
+
+> [!IMPORTANT]
+> When setting the administrator, a new user is added to the Azure Database for MySQL server with full administrator permissions. Only one Azure AD admin can be created per MySQL server, and selecting another one overwrites the existing Azure AD admin configured for the server.
+
+This command creates a small MySQL server and sets the Active Directory admin to the signed-in user.
+
+#### [Password](#tab/password)
```azurecli
az mysql flexible-server create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME \
    --location $AZ_LOCATION \
- --sku-name Standard_B1ms \
- --storage-size 5120 \
- --admin-user $AZ_MYSQL_USERNAME \
- --admin-password $AZ_MYSQL_PASSWORD \
- --public-access $AZ_LOCAL_IP_ADDRESS
- | jq
+ --admin-user $AZ_MYSQL_ADMIN_USERNAME \
+ --admin-password $AZ_MYSQL_ADMIN_PASSWORD \
+ --yes \
+ --output tsv
```
-Make sure your enter \<YOUR-IP-ADDRESS\> in order to access the server from your local machine. This command creates a Burstable Tier MySQL flexible server suitable for development.
+This command creates a small MySQL server.
++
-The MySQL server that you created has a empty database called **flexibleserverdb**. We will use this database for this article.
+The MySQL server that you created has an empty database called `flexibleserverdb`.
[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
+### Configure a firewall rule for your MySQL server
+
+Azure Database for MySQL instances are secured by default. They have a firewall that doesn't allow any incoming connection.
+
+You can skip this step if you're using Bash because the `flexible-server create` command already detected your local IP address and set it on MySQL server.
+
+If you're connecting to your MySQL server from Windows Subsystem for Linux (WSL) on a Windows computer, you'll need to add the WSL host ID to your firewall. Obtain the IP address of your host machine by running the following command in WSL:
+
+```bash
+cat /etc/resolv.conf
+```
+
+Copy the IP address following the term `nameserver`, then use the following command to set an environment variable for the WSL IP Address:
+
+```bash
+AZ_WSL_IP_ADDRESS=<the-copied-IP-address>
+```
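If you'd rather extract that address programmatically, this illustrative Java helper (a sketch, not part of the tutorial's code) picks out the value following the `nameserver` keyword in resolv.conf-style content:

```java
public class ResolvConf {
    // Returns the address that follows the "nameserver" keyword in
    // resolv.conf-style content, or null if no nameserver line is present.
    public static String nameserver(String content) {
        for (String line : content.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.startsWith("nameserver")) {
                return trimmed.substring("nameserver".length()).trim();
            }
        }
        return null;
    }
}
```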
+
+Then, use the following command to open the server's firewall to your WSL-based app:
+
+```azurecli
+az mysql flexible-server firewall-rule create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --name $AZ_DATABASE_NAME \
+ --start-ip-address $AZ_WSL_IP_ADDRESS \
+ --end-ip-address $AZ_WSL_IP_ADDRESS \
+ --rule-name allowiprange \
+ --output tsv
+```
+
+### Configure a MySQL database
+
+Create a new database called `demo` by using the following command:
+
+```azurecli
+az mysql flexible-server db create \
+ --resource-group $AZ_RESOURCE_GROUP \
+ --database-name demo \
+ --server-name $AZ_DATABASE_NAME \
+ --output tsv
+```
+
+### Create a MySQL non-admin user and grant permission
+
+Next, create a non-admin user and grant all permissions on the `demo` database to it.
+
+> [!NOTE]
+> You can read more detailed information about creating MySQL users in [Create users in Azure Database for MySQL](/azure/mysql/single-server/how-to-create-users).
+
+#### [Passwordless connection (Recommended)](#tab/passwordless)
+
+Create a SQL script called *create_ad_user.sql* for creating a non-admin user. Add the following contents and save it locally:
+
+```bash
+export AZ_MYSQL_AD_NON_ADMIN_USERID=$CURRENT_USER_OBJECTID
+
+cat << EOF > create_ad_user.sql
+SET aad_auth_validate_oids_in_tenant = OFF;
+
+CREATE AADUSER '$AZ_MYSQL_AD_NON_ADMIN_USERNAME' IDENTIFIED BY '$AZ_MYSQL_AD_NON_ADMIN_USERID';
+
+GRANT ALL PRIVILEGES ON demo.* TO '$AZ_MYSQL_AD_NON_ADMIN_USERNAME'@'%';
+
+FLUSH privileges;
+
+EOF
+```
+
+Then, use the following command to run the SQL script to create the Azure AD non-admin user:
+
+```bash
+mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com --user $CURRENT_USERNAME --enable-cleartext-plugin --password=$(az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken) < create_ad_user.sql
+```
+
+Now use the following command to remove the temporary SQL script file:
+
+```bash
+rm create_ad_user.sql
+```
+
+#### [Password](#tab/password)
+
+Create a SQL script called *create_user.sql* for creating a non-admin user. Add the following contents and save it locally:
+
+```bash
+cat << EOF > create_user.sql
+
+CREATE USER '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%' IDENTIFIED BY '$AZ_MYSQL_NON_ADMIN_PASSWORD';
+
+GRANT ALL PRIVILEGES ON demo.* TO '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%';
+
+FLUSH PRIVILEGES;
+
+EOF
+```
+
+Then, use the following command to run the SQL script to create the MySQL non-admin user:
+
+```bash
+mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com --user $AZ_MYSQL_ADMIN_USERNAME --enable-cleartext-plugin --password=$AZ_MYSQL_ADMIN_PASSWORD < create_user.sql
+```
+
+Now use the following command to remove the temporary SQL script file:
+
+```bash
+rm create_user.sql
+```
+++

### Create a new Java project
-Using your favorite IDE, create a new Java project, and add a `pom.xml` file in its root directory:
+Using your favorite IDE, create a new Java project, and add a *pom.xml* file in its root directory:
+
+#### [Passwordless connection (Recommended)](#tab/passwordless)
```xml
<?xml version="1.0" encoding="UTF-8"?>
<dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId>
- <version>8.0.20</version>
+ <version>8.0.30</version>
+ </dependency>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity-providers-jdbc-mysql</artifactId>
+ <version>1.0.0-beta.1</version>
        </dependency>
    </dependencies>
</project>
```
-This file is an [Apache Maven](https://maven.apache.org/) that configures our project to use:
+#### [Password](#tab/password)
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>com.example</groupId>
+ <artifactId>demo</artifactId>
+ <version>0.0.1-SNAPSHOT</version>
+ <name>demo</name>
+
+ <properties>
+ <java.version>1.8</java.version>
+ <maven.compiler.source>1.8</maven.compiler.source>
+ <maven.compiler.target>1.8</maven.compiler.target>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>mysql</groupId>
+ <artifactId>mysql-connector-java</artifactId>
+ <version>8.0.30</version>
+ </dependency>
+ </dependencies>
+</project>
+```
+++
+This file is an [Apache Maven](https://maven.apache.org/) file that configures your project to use:
- Java 8
- A recent MySQL driver for Java

### Prepare a configuration file to connect to Azure Database for MySQL
-Create a *src/main/resources/application.properties* file, and add:
+Run the following script in the project root directory to create a *src/main/resources/application.properties* file and add configuration details:
-```properties
-url=jdbc:mysql://$AZ_DATABASE_NAME.mysql.database.azure.com:3306/demo?serverTimezone=UTC
-user=demo
-password=$AZ_MYSQL_PASSWORD
+#### [Passwordless connection (Recommended)](#tab/passwordless)
+
+```bash
+mkdir -p src/main/resources && touch src/main/resources/application.properties
+
+cat << EOF > src/main/resources/application.properties
+url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
+user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}
+EOF
```

-- Replace the two `$AZ_DATABASE_NAME` variables with the value that you configured at the beginning of this article.
-- Replace the `$AZ_MYSQL_PASSWORD` variable with the value that you configured at the beginning of this article.
+#### [Password](#tab/password)
+
+```bash
+mkdir -p src/main/resources && touch src/main/resources/application.properties
+
+cat << EOF > src/main/resources/application.properties
+url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?useSSL=true&sslMode=REQUIRED&serverTimezone=UTC
+user=${AZ_MYSQL_NON_ADMIN_USERNAME}
+password=${AZ_MYSQL_NON_ADMIN_PASSWORD}
+EOF
+```
++

> [!NOTE]
-> We append `?serverTimezone=UTC` to the configuration property `url`, to tell the JDBC driver to use the UTC date format (or Coordinated Universal Time) when connecting to the database. Otherwise, our Java server would not use the same date format as the database, which would result in an error.
+> The configuration property `url` has `?serverTimezone=UTC` appended to tell the JDBC driver to use the UTC date format (or Coordinated Universal Time) when connecting to the database. Otherwise, your Java server would not use the same date format as the database, which would result in an error.
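As a sketch of how that `url` value is assembled (the server and database names here are placeholders), the connection string follows this pattern, with `serverTimezone=UTC` appended as described in the note:

```java
public class JdbcUrl {
    // Builds a connection string in the same shape as the url property above,
    // appending serverTimezone=UTC so the driver and the database agree on
    // the date format. The server and database names are placeholders.
    public static String build(String serverName, String database) {
        return "jdbc:mysql://" + serverName
                + ".mysql.database.azure.com:3306/" + database
                + "?serverTimezone=UTC";
    }
}
```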
### Create an SQL file to generate the database schema
-We will use a *src/main/resources/`schema.sql`* file in order to create a database schema. Create that file, with the following content:
+You'll use a *src/main/resources/schema.sql* file in order to create a database schema. Create that file, with the following content:
```sql
DROP TABLE IF EXISTS todo;
CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARC
Next, add the Java code that will use JDBC to store and retrieve data from your MySQL server.
-Create a *src/main/java/DemoApplication.java* file, that contains:
+Create a *src/main/java/DemoApplication.java* file and add the following contents:
```java
package com.example.demo;
public class DemoApplication {
statement.execute(scanner.nextLine()); }
- /*
- Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
+ /*
+ Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
        insertData(todo, connection);
        todo = readData(connection);
        todo.setDetails("congratulations, you have updated data!");
        updateData(todo, connection);
        deleteData(todo, connection);
- */
+ */
        log.info("Closing database connection");
        connection.close();
public class DemoApplication {
[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
-This Java code will use the *application.properties* and the *schema.sql* files that we created earlier, in order to connect to the MySQL server and create a schema that will store our data.
+This Java code will use the *application.properties* and the *schema.sql* files that you created earlier, in order to connect to the MySQL server and create a schema that will store your data.
-In this file, you can see that we commented methods to insert, read, update and delete data: we will code those methods in the rest of this article, and you will be able to uncomment them one after each other.
+In this file, you can see that the methods to insert, read, update, and delete data are commented out: you'll code those methods in the rest of this article, and you'll be able to uncomment them one after the other.
> [!NOTE]
> The database credentials are stored in the *user* and *password* properties of the *application.properties* file. Those credentials are used when executing `DriverManager.getConnection(properties.getProperty("url"), properties);`, as the properties file is passed as an argument.
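The pattern the note describes — loading the properties file once and handing the same `Properties` object to `DriverManager.getConnection` — can be sketched as follows; the inline string stands in for the *application.properties* file content:

```java
import java.io.StringReader;
import java.util.Properties;

public class PropertiesSketch {
    // Stand-in for loading src/main/resources/application.properties from
    // the classpath; here the file content is passed in directly.
    public static Properties load(String content) throws Exception {
        Properties properties = new Properties();
        properties.load(new StringReader(content));
        return properties;
    }

    public static void main(String[] args) throws Exception {
        Properties properties = load(
                "url=jdbc:mysql://example.mysql.database.azure.com:3306/demo?serverTimezone=UTC\n"
              + "user=demo\n"
              + "password=secret");
        // DriverManager.getConnection(properties.getProperty("url"), properties)
        // reads the user and password entries from this same Properties object.
        System.out.println(properties.getProperty("user"));
    }
}
```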
You can now execute this main class with your favorite tool:
The application should connect to the Azure Database for MySQL, create a database schema, and then close the connection, as you should see in the console logs:
-```
+```output
[INFO ] Loading application properties
[INFO ] Connecting to the database
[INFO ] Database connection test: demo
insertData(todo, connection);
Executing the main class should now produce the following output:
-```
+```output
[INFO ] Loading application properties
[INFO ] Connecting to the database
[INFO ] Database connection test: demo
Executing the main class should now produce the following output:
### Reading data from Azure Database for MySQL
-Let's read the data previously inserted, to validate that our code works correctly.
+Next, read the data previously inserted to validate that your code works correctly.
In the *src/main/java/DemoApplication.java* file, after the `insertData` method, add the following method to read data from the database:
todo = readData(connection);
Executing the main class should now produce the following output:
-```
+```output
[INFO ] Loading application properties
[INFO ] Connecting to the database
[INFO ] Database connection test: demo
Executing the main class should now produce the following output:
### Updating data in Azure Database for MySQL
-Let's update the data we previously inserted.
+Next, update the data you previously inserted.
Still in the *src/main/java/DemoApplication.java* file, after the `readData` method, add the following method to update data inside the database:
updateData(todo, connection);
Executing the main class should now produce the following output:
-```
+```output
[INFO ] Loading application properties
[INFO ] Connecting to the database
[INFO ] Database connection test: demo
Executing the main class should now produce the following output:
### Deleting data in Azure Database for MySQL
-Finally, let's delete the data we previously inserted.
+Finally, delete the data you previously inserted.
Still in the *src/main/java/DemoApplication.java* file, after the `updateData` method, add the following method to delete data inside the database:
deleteData(todo, connection);
Executing the main class should now produce the following output:
-```
+```output
[INFO ] Loading application properties
[INFO ] Connecting to the database
[INFO ] Database connection test: demo
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
EOF
Then, use the following command to run the SQL script to create the Azure AD non-admin user:

```bash
-mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com --user $CURRENT_USERNAME@$AZ_DATABASE_NAME --enable-cleartext-plugin --password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken` < create_ad_user.sql
+mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com --user $CURRENT_USERNAME@$AZ_DATABASE_NAME --enable-cleartext-plugin --password=$(az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken) < create_ad_user.sql
```

Now use the following command to remove the temporary SQL script file:
spring-apps Expose Apps Gateway End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/expose-apps-gateway-end-to-end-tls.md
This article explains how to expose applications to the internet using Applicati
## Configure Application Gateway for Azure Spring Apps
-We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken.
+We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken. For more information, see [Host name preservation](/azure/architecture/best-practices/host-name-preservation).
To configure Application Gateway in front of Azure Spring Apps, use the following steps.
spring-apps Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/expose-apps-gateway-tls-termination.md
When an Azure Spring Apps service instance is deployed in your virtual network (
## Configure Application Gateway for Azure Spring Apps
-We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken.
+We recommend that the domain name, as seen by the browser, is the same as the host name which Application Gateway uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using Application Gateway to expose applications hosted in Azure Spring Apps and residing in a virtual network. If the domain exposed by Application Gateway is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken. For more information, see [Host name preservation](/azure/architecture/best-practices/host-name-preservation).
To configure Application Gateway in front of Azure Spring Apps in a private VNET, use the following steps.
spring-apps How To Integrate Azure Load Balancers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-integrate-azure-load-balancers.md
Azure Spring Apps supports Spring applications on Azure. Increasing business can require multiple data centers with management of multiple instances of Azure Spring Apps.
-Azure already provides different load-balance solutions. There are three options to integrate Azure Spring Apps with Azure load-balance solutions:
+Azure already provides [different load-balance solutions](/azure/architecture/guide/technology-choices/load-balancing-overview). There are three common options to integrate Azure Spring Apps with Azure load-balance solutions:
1. Integrate Azure Spring Apps with Azure Traffic Manager
-2. Integrate Azure Spring Apps with Azure App Gateway
-3. Integrate Azure Spring Apps with Azure Front Door
+1. Integrate Azure Spring Apps with Azure App Gateway
+1. Integrate Azure Spring Apps with Azure Front Door
+
+In the examples below, we will load balance requests for a custom domain of `www.contoso.com` towards two deployments of Azure Spring Apps in two different regions: `eastus.azuremicroservices.io` and `westus.azuremicroservices.io`.
+
+We recommend that the domain name, as seen by the browser, is the same as the host name which the load balancer uses to direct traffic to the Azure Spring Apps back end. This recommendation provides the best experience when using a load balancer to expose applications hosted in Azure Spring Apps. If the domain exposed by the load balancer is different from the domain accepted by Azure Spring Apps, cookies and generated redirect URLs (for example) can be broken. For more information, see [Host name preservation](/azure/architecture/best-practices/host-name-preservation).
## Prerequisites
+* A custom domain to be used to access the application: [Tutorial: Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md)
* Azure Spring Apps: [How to create an Azure Spring Apps service](./quickstart.md)
* Azure Traffic
* Azure App Gateway: [How to create an application gateway](../application-gateway/quick-create-portal.md)
Add endpoints in traffic
To finish the configuration:

1. Sign in to the website of your domain provider, and create a CNAME record mapping from your custom domain to Traffic Manager's Azure default domain name.
-1. Follow instructions [How to add custom domain to Azure Spring Apps](./tutorial-custom-domain.md).
-1. Add above custom domain binding to traffic manager to Azure Spring Apps corresponding app service and upload SSL certificate there.
-
- ![Traffic Manager 3](media/spring-cloud-load-balancers/traffic-manager-3.png)
## Integrate Azure Spring Apps with Azure App Gateway
To integrate with Azure Spring Apps service, complete the following configuratio
### Add Custom Probe

1. Select **Health Probes**, then select **Add** to open the custom **Probe** dialog.
-1. The key point is to select *Yes* for **Pick host name from backend HTTP settings** option.
+1. The key point is to select *No* for the **Pick host name from backend HTTP settings** option and explicitly specify the host name. For more information, see [Application Gateway configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation#application-gateway).
![App Gateway 2](media/spring-cloud-load-balancers/app-gateway-2.png)
-### Configure Http Setting
+### Configure Backend Setting
-1. Select **Http Settings** then **Add** to add an HTTP setting.
-1. **Override with new host name:** select *Yes*.
-1. **Host name override**: select **Pick host name from backend target**.
+1. Select **Backend settings** then **Add** to add a backend setting.
+1. **Override with new host name:** select *No*.
1. **Use custom probe**: select *Yes* and pick the custom probe created above.

    ![App Gateway 3](media/spring-cloud-load-balancers/app-gateway-3.png)
-### Configure Rewrite Set
-
-1. Select **Rewrites** then **Rewrite set** to add a rewrite set.
-1. Select the routing rules that route requests to Azure Spring Apps public endpoints.
-1. On **Rewrite rule configuration** tab, select **Add rewrite rule**.
-1. **Rewrite type**: select **Request Header**
-1. **Action type**: select **Delete**
-1. **Header name**: select **Common header**
-1. **Common Header**: select **X-Forwarded-Proto**
-
- ![App Gateway 4](media/spring-cloud-load-balancers/app-gateway-4.png)
- ## Integrate Azure Spring Apps with Azure Front Door
-To integrate with Azure Spring Apps service and configure backend pool, use the following steps:
+To integrate with Azure Spring Apps service and configure an origin group, use the following steps:
-1. **Add backend pool**.
-1. Specify the backend endpoint by adding host.
+1. **Add origin group**.
+1. Specify the backend endpoints by adding origins for the different Azure Spring Apps instances.
![Front Door 1](media/spring-cloud-load-balancers/front-door-1.png)
-1. Specify **backend host type** as *custom host*.
-1. Input FQDN of your Azure Spring Apps public endpoints in **backend host name**.
-1. Accept the **backend host header** default, which is the same as **backend host name**.
+1. Specify **origin type** as *Azure Spring Apps*.
+1. Select your Azure Spring Apps instance for the **host name**.
+1. Keep the **origin host header** empty, so that the incoming host header will be used towards the backend. For more information, see [Azure Front Door configuration for host name preservation](/azure/architecture/best-practices/host-name-preservation.md#azure-front-door).
![Front Door 2](media/spring-cloud-load-balancers/front-door-2.png)
storage Object Replication Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-configure.md
Object replication requires that blob versioning is enabled for both the source
To configure an object replication policy for a storage account, you must be assigned the Azure Resource Manager **Contributor** role, scoped to the level of the storage account or higher. For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md) in the Azure role-based access control (Azure RBAC) documentation.
+Object replication is not yet supported in accounts that have a hierarchical namespace enabled.
+
## Configure object replication with access to both storage accounts

If you have access to both the source and destination storage accounts, then you can configure the object replication policy on both accounts. The following examples show how to configure object replication with the Azure portal, PowerShell, or Azure CLI.
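As a sketch of the policy document those examples produce (the shape below follows the object replication policy schema used by `az storage account or-policy create --policy`; the account and container names are placeholders, not values from this article), a policy pairs a source and destination account with one or more container rules:

```python
import json

# Placeholder account and container names -- substitute your own.
policy = {
    "sourceAccount": "sourceaccountname",
    "destinationAccount": "destaccountname",
    "rules": [
        {
            "sourceContainer": "source-container",
            "destinationContainer": "dest-container",
            # Optional filters: replicate only blobs with a matching prefix
            # that were created after the given time.
            "filters": {
                "prefixMatch": ["backups/"],
                "minCreationTime": "2022-01-01T00:00:00Z",
            },
        }
    ],
}

# Serialized form, suitable for saving to a policy.json file.
print(json.dumps(policy, indent=2))
```

This is illustrative only; the service assigns the policy and rule IDs when the policy is created.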
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
Currently, you must use either the Azure PowerShell module or Azure CLI to manag
> You can use the **subscription** parameter to retrieve the subnet ID for a virtual network belonging to another Azure AD tenant.

```azurecli
- az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+ az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls '{virtual-network-rules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default,action:Allow}]}'
```

- Remove a network rule. The following command removes the first network rule; modify it to remove the network rule you'd like to remove.
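The subnet resource ID embedded in the `--network-acls` argument follows a fixed format. A small sketch (the subscription, resource group, and network names are placeholders, matching those in the command above) that assembles the ID and the relaxed, unquoted JSON-like rule string the CLI accepts:

```python
# Placeholder identifiers -- substitute your own values.
subscription_id = "subscriptionID"
resource_group = "RGName"
vnet_name = "vnetName"
subnet_name = "default"

# Azure resource ID format for a virtual network subnet.
subnet_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Network/virtualNetworks/{vnet_name}/subnets/{subnet_name}"
)

# The CLI accepts this relaxed (unquoted) syntax for --network-acls.
network_acls = f"{{virtual-network-rules:[{{id:{subnet_id},action:Allow}}]}}"
print(network_acls)
```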
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
Previously updated : 09/27/2022 Last updated : 10/20/2022
Both share-level and file/directory level permissions are enforced when a user a
## Azure RBAC permissions
-The following table contains the Azure RBAC permissions related to this configuration:
+The following table contains the Azure RBAC permissions related to this configuration. If you're using Azure Storage Explorer, you'll also need the [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access) role in order to read/access the file share.
| Built-in role | NTFS permission | Resulting access |
| --- | --- | --- |
storage Veeam Solution Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/backup-archive-disaster-recovery/veeam/veeam-solution-guide.md
- Title: Back up your data to Azure with Veeam-
-description: Provides an overview of factors to consider and steps to follow to use Azure as a storage target and recovery location for Veeam Backup and Recovery
--- Previously updated : 05/12/2021-----
-# Backup to Azure with Veeam
-
-This article helps you integrate a Veeam infrastructure with Azure Blob storage. It includes prerequisites, considerations, implementation, and operational guidance. This article addresses using Azure as an offsite backup target and a recovery site if a disaster occurs, which prevents normal operation within your primary site.
-
-> [!NOTE]
-> Veeam also offers a lower recovery time objective (RTO) solution, Veeam Backup & Replication with support for Azure VMware Solution workloads. This solution lets you have a standby VM that can help you recover more quickly in the event of a disaster in an Azure production environment. Veeam also offers Direct Restore to Microsoft Azure and other dedicated tools to back up Azure and Office 365 resources. These capabilities are outside the scope of this document.
-
-## Reference architecture
-
-The following diagram provides a reference architecture for on-premises to Azure and in-Azure deployments.
-
-![Veeam to Azure reference architecture diagram.](../media/veeam-architecture.png)
-
-Your existing Veeam deployment can easily integrate with Azure by adding an Azure storage account, or multiple accounts, as a cloud backup repository. Veeam also allows you to recover backups from on-premises within Azure giving you a recovery-on-demand site in Azure.
-
-## Veeam interoperability matrix
-
-| Workload | GPv2 and Blob Storage | Cool tier support | Archive tier support | Data Box Family support |
-|--|--|--|-|-|
-| On-premises VMs/data | v10a | v10a | N/A | 10a<sup>*</sup> |
-| Azure VMs | v10a | v10a | N/A | 10a<sup>*</sup> |
-| Azure Blob | v10a | v10a | N/A | 10a<sup>*</sup> |
-| Azure Files | v10a | v10a | N/A | 10a<sup>*</sup> |
-
-Veeam has offered support for the above Azure features in older versions of its product as well; for an optimal experience, using the latest product version is strongly recommended.
-
-<sup>*</sup>Veeam Backup and Replication supports only the REST API for Azure Data Box. Therefore, Azure Data Box Disk is not supported. See details for Data Box support [here](https://helpcenter.veeam.com/docs/backup/hyperv/osr_adding_data_box.html?ver=110).
--
-## Before you begin
-
-A little upfront planning will help you use Azure as an offsite backup target and recovery site.
-
-### Get started with Azure
-
-Microsoft offers a framework to follow to get you started with Azure. The [Cloud Adoption Framework](/azure/architecture/cloud-adoption/) (CAF) is a detailed approach to enterprise digital transformation and comprehensive guide to planning a production grade cloud adoption. The CAF includes a step-by-step [Azure Setup Guide](/azure/cloud-adoption-framework/ready/azure-setup-guide/) to help you get up and running quickly and securely. You can find an interactive version in the [Azure portal](https://portal.azure.com/?feature.quickstart=true#blade/Microsoft_Azure_Resources/QuickstartCenterBlade). You'll find sample architectures, specific best practices for deploying applications, and free training resources to put you on the path to Azure expertise.
-
-### Consider the network between your location and Azure
-
-Whether using cloud resources to run production, test and development, or as a backup target and recovery site, it's important to understand your bandwidth needs for initial backup seeding and for ongoing day-to-day transfers.
-
-Azure Data Box provides a way to transfer your initial backup baseline to Azure without requiring more bandwidth. This is useful if the baseline transfer is estimated to take longer than you can tolerate. You can use the Data Transfer estimator when you create a storage account to estimate the time required to transfer your initial backup.
-
-![Shows the Azure Storage data transfer estimator in the portal.](../media/az-storage-transfer.png)
-
-Remember, you'll require enough network capacity to support daily data transfers within the required transfer window (backup window) without impacting production applications. This section outlines the tools and techniques that are available to assess your network needs.
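To make the headroom question concrete, here's a back-of-the-envelope calculation of the sustained bandwidth a daily transfer requires (the 2 TiB daily change rate and 8-hour backup window are illustrative assumptions, not recommendations):

```python
def required_mbps(daily_bytes: float, window_hours: float) -> float:
    """Sustained megabits per second needed to move daily_bytes in window_hours."""
    bits = daily_bytes * 8
    seconds = window_hours * 3600
    return bits / seconds / 1e6  # decimal megabits per second

# Assumed example: 2 TiB of changed data, 8-hour nightly backup window.
daily_change = 2 * 1024**4  # 2 TiB in bytes
mbps = required_mbps(daily_change, 8)
print(f"~{mbps:.0f} Mbps sustained")  # roughly 611 Mbps
```

If the result exceeds your measured headroom, either lengthen the backup window, reduce the change set, or seed with Azure Data Box as described above.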
-
-#### Determine how much bandwidth you'll need
-
-Multiple assessment options are available to determine change rate and total backup set size for the initial baseline transfer to Azure. Here are some examples of assessment and reporting tools:
--- [MiTrend](https://mitrend.com/)-- [Aptare IT Analytics](https://www.veritas.com/insights/aptare-it-analytics)-- [Datavoss](https://www.datavoss.com/)-
-#### Determine unutilized internet bandwidth
-
-It's important to know how much typically unutilized bandwidth (or *headroom*) you have available on a day-to-day basis. This helps you assess whether you can meet your goals for:
--- initial time to upload when you're not using Azure Data Box for offline seeding-- completing daily backups based on the change rate identified earlier and your backup window-
-Use the following methods to identify the bandwidth headroom that your backups to Azure are free to consume.
--- If you're an existing Azure ExpressRoute customer, view your [circuit usage](../../../../../expressroute/expressroute-monitoring-metrics-alerts.md#circuits-metrics) in the Azure portal.-- Contact your ISP. They should be able to share reports that show your existing daily and monthly utilization.-- There are several tools that can measure utilization by monitoring your network traffic at the router/switch level. These include:-
- - [Solarwinds Bandwidth Analyzer Pack](https://www.solarwinds.com/network-bandwidth-analyzer-pack?CMP=ORG-BLG-DNS)
- - [Paessler PRTG](https://www.paessler.com/bandwidth_monitoring)
- - [Cisco Network Assistant](https://www.cisco.com/c/en/us/products/cloud-systems-management/network-assistant/https://docsupdatetracker.net/index.html)
- - [WhatsUp Gold](https://www.whatsupgold.com/network-traffic-monitoring)
-
-### Choose the right storage options
-
-When you use Azure as a backup target, you'll make use of [Azure Blob storage](../../../../blobs/storage-blobs-introduction.md). Blob storage is Microsoft's object storage solution. Blob storage is optimized for storing massive amounts of unstructured data, which is data that does not adhere to any data model or definition. Additionally, Azure Storage is durable, highly available, secure, and scalable. You can select the right storage for your workload to provide the [level of resiliency](../../../../common/storage-redundancy.md) to meet your internal SLAs. Blob storage is a pay-per-use service. You're [charged monthly](../../../../blobs/access-tiers-overview.md#pricing-and-billing) for the amount of data stored, accessing that data, and in the case of cool and archive tiers, a minimum required retention period. The resiliency and tiering options applicable to backup data are summarized in the following tables.
-
-**Blob storage resiliency options:**
-
-| |Locally-redundant |Zone-redundant |Geo-redundant |Geo-zone-redundant |
-||||||
-|**Effective # of copies** | 3 | 3 | 6 | 6 |
-|**# of availability zones** | 1 | 3 | 2 | 4 |
-|**# of regions** | 1 | 1 | 2 | 2 |
-|**Manual failover to secondary region** | N/A | N/A | Yes | Yes |
-
-**Blob storage tiers:**
-
-| | Hot tier |Cool tier | Archive tier |
-| -- | -- | -- | -- |
-| **Availability** | 99.9% | 99% | Offline |
-| **Usage charges** | Higher storage costs, Lower access, and transaction costs | Lower storage costs, higher access, and transaction costs | Lowest storage costs, highest access, and transaction costs |
-| **Minimum data retention required** | NA | 30 days | 180 days |
-| **Latency (time to first byte)** | Milliseconds | Milliseconds | Hours |
-
-#### Sample backup to Azure cost model
-
-Pay-per-use billing can be daunting to customers who are new to the cloud. While you pay for only the capacity used, you also pay for transactions (reads and writes) and for [egress of data](https://azure.microsoft.com/pricing/details/bandwidth/) read back to your on-premises environment, unless an [Azure ExpressRoute Direct local or ExpressRoute unlimited data plan](https://azure.microsoft.com/pricing/details/expressroute/) is in use, in which case data egress from Azure is included. You can use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to perform "what if" analysis. You can base the analysis on list pricing or on [Azure Storage Reserved Capacity pricing](../../../../../cost-management-billing/reservations/save-compute-costs-reservations.md), which can deliver up to 38% savings. Here's an example pricing exercise to model the monthly cost of backing up to Azure. This is only an example. *Your pricing may vary due to activities not captured here.*
-
-|Cost factor |Monthly cost |
-|||
-|100 TB of backup data on cool storage |$1556.48 |
-|2 TB of new data written per day x 30 days |$42 in transactions |
-|Monthly estimated total |$1598.48 |
-|||
-|One time restore of 5 TB to on-premises over public internet | $527.26 |
-
-> [!Note]
-> This estimate was generated in the Azure Pricing Calculator using East US Pay-as-you-go pricing and is based on the Veeam default of 512 KB chunk size for WAN transfers. This example may not be applicable to your requirements.
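The estimate above reduces to simple arithmetic. The ~$0.0152 per GB-month cool-tier rate used here is back-calculated from the $1,556.48 figure and is an assumption; actual prices vary by region and change over time:

```python
# Assumed unit price (USD per GB-month, cool tier) back-calculated from the
# example table; verify current pricing before relying on it.
COOL_PRICE_PER_GB_MONTH = 0.0152

capacity_gb = 100 * 1024          # 100 TB of backup data, expressed in GiB
storage_cost = round(capacity_gb * COOL_PRICE_PER_GB_MONTH, 2)
transaction_cost = 42.0           # from the example: 2 TB/day of new writes x 30 days
monthly_total = round(storage_cost + transaction_cost, 2)

print(storage_cost, monthly_total)  # 1556.48 1598.48
```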
-
-## Implementation guidance
-
-This section provides a brief guide for how to add Azure Storage to an on-premises Veeam deployment. For detailed guidance and planning considerations, we recommend you take a look at the following Veeam Guidance for their [Capacity Tier](https://helpcenter.veeam.com/docs/backup/vsphere/capacity_tier.html?ver=110).
-
-1. Open the Azure portal, and search for **Storage Accounts**. You can also click on the default service icon.
-
- ![Shows adding a storage accounts in the Azure portal.](../media/azure-portal.png)
-
- ![Shows where you've typed storage in the search box of the Azure portal.](../media/locate-storage-account.png)
-
-2. Select **Create** to add an account. Select or create a resource group, provide a unique name, choose the region, select **Standard** performance, always leave account kind as **Storage V2**, choose the replication level which meets your SLAs, and the default tier your backup software will apply. An Azure Storage account makes hot, cool, and archive tiers available within a single account and Veeam policies allow you to use multiple tiers to effectively manage the lifecycle of your data.
-
- ![Shows storage account settings in the portal](../media/account-create-1.png)
-
-3. Keep the default networking and data protection options for now. Do **not** enable Soft Delete for Storage Accounts storing Veeam capacity tiers.
-
- 4. Next, we recommend the default settings from the **Advanced** screen for backup to Azure use cases.
-
- ![Shows Advanced settings tab in the portal.](../media/account-create-3.png)
-
-5. Add tags for organization if you use tagging, and create your account.
-
-6. Two quick steps are all that are now required before you can add the account to your Veeam environment. Navigate to the account you created in the Azure portal and select **Containers** under the **Blob service** menu. Add a container and choose a meaningful name. Then, navigate to the **Access keys** item under **Settings** and copy the **Storage account name** and one of the two access keys. You will need the container name, account name, and access key in the next steps.
-
- ![Shows container creation in the portal.](../media/container-b.png)
-
- ![Shows access key settings in the portal.](../media/access-key.png)
-
- > [!Note]
- > Veeam Backup and Replication offers additional options to connect to Azure. For the use case of this article, using Microsoft Azure Blob Storage as a backup target, using the above method is the recommended best practice.
-
-7. *(Optional)* You can add more layers of security to your deployment.
-
- 1. Configure role-based access to limit who can make changes to your storage account. For more information, see [Built-in roles for management operations](../../../../common/authorization-resource-provider.md#built-in-roles-for-management-operations).
-
- 1. Restrict access to the account to specific network segments with [storage firewall settings](../../../../common/storage-network-security.md) to prevent access attempts from outside your corporate network.
-
- ![Shows storage firewall settings in the portal.](../media/storage-firewall.png)
-
- 1. Set a [delete lock](../../../../../azure-resource-manager/management/lock-resources.md) on the account to prevent accidental deletion of the storage account.
-
- ![Resource Lock](../media/resource-lock.png)
-
- 1. Configure additional [security best practices](../../../../../storage/blobs/security-recommendations.md).
-
-8. In the Veeam Backup and Replication Management Console, navigate to **Backup Infrastructure** -> right-click in the overview pane and select **Add Backup Repository** to open the configuration wizard. In the dialog box, select **Object storage** -> **Microsoft Azure Blob Storage** -> **Azure Blob Storage**.
-
- ![Shows selecting object storage in the Veeam Repository Wizard.](../media/veeam-repo-a.png)
-
- ![Shows selecting Microsoft Azure Blob Storage in the Veeam Repository Wizard.](../media/veeam-repo-b.png)
-
- ![Shows selecting Azure Blob Storage in the Veeam Repository Wizard.](../media/veeam-repo-c.png)
-
-9. Next, specify a name and a description of your new Blob storage repository.
-
- ![Shows typing a name for the repository in the Veeam Repository Wizard.](../media/veeam-repo-d.png)
-
-10. In the next step, add the credentials to access your Azure storage account. Select **Microsoft Azure Storage Account** in the Cloud Credential Manager, and enter your storage account name and access key. Select **Azure Global** in the region selector, and any gateway server if applicable.
-
- ![Shows specifying an account in the Veeam Repository Wizard.](../media/veeam-repo-e.png)
-
- > [!Note]
- > If you choose not to use a Veeam gateway server, make sure that all scale-out repository extents have direct internet access.
-
-11. On the container register, select your Azure Storage container and select or create a folder to store your backups in. You can also define a soft limit on the overall storage capacity to be used by Veeam, which is recommended. Review the displayed information in the summary section and complete the configuration tool. You can now select the new repository in your backup job configuration.
-
- ![Shows specifying a container in the Veeam Repository Wizard.](../media/veeam-repo-f.png)
-
- ![Shows creating a folder in the Veeam Repository Wizard.](../media/veeam-repo-g.png)
-
-## Operational guidance
-
-### Azure alerts and performance monitoring
-
-It is advisable to monitor both your Azure resources and Veeam's ability to leverage them as you would with any storage target you rely on to store your backups. A combination of Azure Monitor and Veeam's monitoring capabilities (the **Statistics** tab in the **Jobs** node of the Veeam Management Console or more advanced options like Veeam One Reporter) will help you keep your environment healthy.
-
-#### Azure portal
-
-Azure provides a robust monitoring solution in the form of [Azure Monitor](../../../../../azure-monitor/essentials/monitor-azure-resource.md). You can [configure Azure Monitor](../../../../blobs/monitor-blob-storage.md) to track Azure Storage capacity, transactions, availability, authentication, and more. The full reference of metrics tracked may be found [here](../../../../blobs/monitor-blob-storage-reference.md). A few useful metrics to track are BlobCapacity - to make sure you remain below the maximum [storage account capacity limit](../../../../common/scalability-targets-standard-account.md), Ingress and Egress - to track the amount of data being written to and read from your Azure storage account, and SuccessE2ELatency - to track the roundtrip time for requests to and from Azure Storage and your MediaAgent.
-
-You can also [create log alerts](../../../../../service-health/alerts-activity-log-service-notifications-portal.md) to track Azure Storage service health and view the [Azure status dashboard](https://azure.status.microsoft/status) at any time.
-
-#### Veeam reporting
--- [Configure Veeam One Reporting](https://helpcenter.veeam.com/docs/one/reporter/configure_reporter.html?ver=100)-- [Veeam backup and replication alarms](https://helpcenter.veeam.com/docs/one/monitor/backup_alarms.html?ver=100)-
-### How to open support cases
-
-When you need help with your backup to Azure solution, you should open a case with both Veeam and Azure. This helps our support organizations to collaborate, if necessary.
-
-#### To open a case with Veeam
-
-On the [Veeam customer support site](https://www.veeam.com/support.html), sign in, and open a case.
-
-To understand the support options available to you by Veeam, see the [Veeam Customer Support Policy](https://www.veeam.com/veeam_software_support_policy_ds.pdf).
-
-You may also call to open a case: [Worldwide support numbers](https://www.veeam.com/contacts.html?ad=in-text-link#support-numbers)
-
-#### To open a case with Azure
-
-In the [Azure portal](https://portal.azure.com) search for **support** in the search bar at the top. Select **Help + support** -> **New Support Request**.
-
-> [!NOTE]
-> When you open a case, be specific that you need assistance with Azure Storage or Azure Networking. Do not specify Azure Backup. Azure Backup is the name of an Azure service and your case will be routed incorrectly.
-
-### Links to relevant Veeam documentation
-
-See the following Veeam documentation for further detail:
--- [Veeam User Guide](https://helpcenter.veeam.com/docs/backup/hyperv/overview.html?ver=100)-- [Veeam Architecture Guide](https://helpcenter.veeam.com/docs/backup/vsphere/backup_architecture.html?ver=100)-
-### Marketplace offerings
-
-You can continue to use the Veeam solution you know and trust to protect your workloads running on Azure. Veeam has made it easy to deploy their solution in Azure and protect Azure Virtual Machines and many other Azure services.
--- [Deploy Veeam Backup & Replication via the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.veeam-backup-replication?tab=overview)-- [Veeam's Azure backup and recovery website](https://www.veeam.com/backup-azure.html?ad=menu-products)-
-## Next steps
-
-See the following resources on the Veeam website for information about specialized usage scenarios:
--- [Veeam How to Videos](https://www.veeam.com/how-to-videos.html?ad=menu-resources)-- [Veeam Technical Documentations](https://www.veeam.com/documentation-guides-datasheets.html?ad=menu-resources)-- [Veeam Knowledge Base and FAQ](https://www.veeam.com/knowledge-base.html?ad=menu-resources)+
+ Title: Azure Data Protection with Veeam
+
+description: This article provides information for using Azure Blob storage with Veeam solutions, including details on how to get started and best practices.
+++ Last updated : 10/15/2022+++++
+# Azure Data Protection with Veeam
+
+This article provides information for using Azure Blob storage with Veeam solutions, including details on how to get started and best practices.
+
+Azure Blob storage can be used with many Veeam products to provide cost-effective data retention and recovery capabilities. Not all Azure Blob features are supported by each product; learn more about [using object storage with Veeam products](https://www.veeam.com/kb4241).
+## Back up on-premises workloads to Azure
+
+You can store backups of your on-premises workloads on Azure Blob using [Veeam Backup & Replication](https://helpcenter.veeam.com/docs/backup/hyperv/overview.html), allowing you to use Azure Storage's pay-per-use model to easily scale your backup infrastructure with durable, cost-effective storage. This includes support for virtual workloads, physical workloads, enterprise applications and unstructured data.
+
+## Restore on-premises workloads to Azure
+
+To restore your on-premises workloads directly to Azure, you can use [Veeam Backup & Replication](https://helpcenter.veeam.com/docs/backup/hyperv/overview.html), giving you the ability to use Azure as an on-demand recovery site or for migration purposes.
+
+## Protect Azure workloads with Veeam
+
+To protect Azure Virtual Machines, Azure Files, and Azure SQL workloads without agents, you can use [Veeam Backup for Microsoft Azure](https://helpcenter.veeam.com/docs/vbazure/guide/overview.html), allowing you to perform snapshot, backup, and recovery operations entirely within Azure.
+Azure VMware Solution and enterprise application-specific workloads such as Oracle and SAP HANA can be protected with [Veeam Backup & Replication](https://helpcenter.veeam.com/docs/backup/hyperv/overview.html). This includes support for agent-based backup.
+Azure Kubernetes Service (AKS) and Azure RedHat OpenShift (ARO) workloads are supported with [Kasten K10 by Veeam](https://docs.kasten.io/latest/), providing easy-to-use backup, restore, and application mobility.
+
+## Protect your data in Microsoft 365
+
+You can protect Exchange Online, SharePoint Online, OneDrive for Business, and Teams data with [Veeam Backup for Microsoft 365](https://helpcenter.veeam.com/docs/vbo365/guide/).
++
+This diagram provides an overview of these capabilities.
+
+![Veeam to Azure reference architecture diagram.](../media/veeam-architectures.png)
+
+## Before you begin
+
+As you plan your Azure Storage strategy with Veeam, we recommend reviewing the [Microsoft Cloud Adoption Framework](https://docs.microsoft.com/azure/cloud-adoption-framework/) for guidance on setting up your Azure environment. The [Azure Setup Guide](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/azure-setup-guide/) includes step-by-step details to help you establish a foundation for operating efficiently and securely within Azure.
+
+## Using Azure Blob Storage with Veeam
+
+### Veeam Backup & Replication
+
+Veeam Backup & Replication supports [object storage as a destination](https://helpcenter.veeam.com/docs/backup/hyperv/object_storage_repository.html) for long-term data storage or for [archiving purposes](https://helpcenter.veeam.com/docs/backup/hyperv/osr_adding_blob_storage_archive_tier.html). A Veeam Backup for Microsoft Azure repository can also be added as an external repository for more flexible restore options. Learn how to [configure Veeam Backup & Replication with Azure Blob](https://helpcenter.veeam.com/docs/backup/hyperv/new_object_repository_azure_type.html). Veeam manages the data lifecycle of stored objects; [review the considerations and limitations for using Azure Blob with Veeam products](https://www.veeam.com/kb4241).
+### Veeam Backup for Microsoft 365
+
+Veeam Backup for Microsoft 365 supports object storage as a primary repository, as well as Azure archive storage for long-term retention and backup archive. Learn how to configure an [Azure object storage repository in Veeam Backup for M365](https://helpcenter.veeam.com/docs/vbo365/guide/adding_azure_storage.html). Veeam manages the data lifecycle of stored objects; [review the considerations and limitations for using Azure Blob with Veeam products](https://www.veeam.com/kb4241).
+
+### Veeam Backup for Azure
+
+Veeam Backup for Microsoft Azure uses blob containers as target locations for image-level backups of Azure VMs and Azure SQL databases. Learn how to [add a repository in Veeam Backup for Microsoft Azure](https://helpcenter.veeam.com/docs/vbazure/guide/adding_repositories.html?ver=40). Veeam manages the data lifecycle of stored objects; [review the considerations and limitations for using Azure Blob with Veeam products](https://www.veeam.com/kb4241).
+
+### Kasten K10
+
+Kasten K10 supports the use of Azure Blob storage as a backup target. Learn how to [configure Azure Blob storage with Kasten K10](https://docs.kasten.io/latest/usage/configuration.html#azure-storage).
+
+## Resources
+
+- [Veeam Backup & Replication for VMware vSphere](https://helpcenter.veeam.com/docs/backup/vsphere/)
+- [Veeam Backup & Replication for Microsoft Hyper-V](https://helpcenter.veeam.com/docs/backup/hyperv/)
+- [Veeam Backup & Replication support for Azure VMware Solution](https://www.veeam.com/kb4012)
+- [Veeam Backup for Microsoft Azure User Guide](https://helpcenter.veeam.com/docs/vbazure/guide/)
+- [Veeam Backup for Microsoft 365](https://helpcenter.veeam.com/docs/vbo365/guide/)
+- [Kasten K10 by Veeam](https://docs.kasten.io/latest/)
+- [Veeam Plug-ins for Enterprise Applications User Guide](https://helpcenter.veeam.com/docs/backup/plugins/)
+- [Veeam Agent Management Guide](https://helpcenter.veeam.com/docs/backup/agents/)
+
+### Additional Veeam Resources
+
+- [Veeam How-To Videos](https://www.veeam.com/how-to-videos.html)
+- [Veeam Technical Documentation](https://www.veeam.com/documentation-guides-datasheets.html)
+- [Veeam Knowledge Base](https://www.veeam.com/knowledge-base.html)
+
+## Support
+If you have an issue using Azure Storage with Veeam, open a case with both Azure and Veeam. This helps our support organizations collaborate, if necessary.
+### Open a support case with Veeam
+
+On the [Veeam customer support site](https://www.veeam.com/support.html), sign in, and open a case.
+
+To understand the support options available to you from Veeam, review the [Veeam Customer Support Policy](https://www.veeam.com/support-policy.html).
+
+### Open a support case with Azure
+
+In the [Azure portal](https://portal.azure.com) search for **support** in the search bar at the top. Select **Help + support** -> **New Support Request**.
+
+> [!NOTE]
+> When you open a case, be specific that you need assistance with Azure Storage or Azure Networking. Do not specify Azure Backup. Azure Backup is the name of an Azure service and your case will be routed incorrectly.
+
+## Next steps
+
+You can continue to use the Veeam solution you know and trust to protect your workloads running on Azure. Veeam has made it easy to deploy their solutions in Azure and protect Azure Virtual Machines and many other Azure services.
+
+### Marketplace offerings
+
+- [Deploy Veeam Backup & Replication from the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.veeam-backup-replication?tab=overview)
+- [Deploy Veeam Backup for Microsoft Azure from the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.azure_backup)
+- [Deploy Veeam Backup for Microsoft 365 from the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.office365backup)
synapse-analytics Sql Data Warehouse Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot-connectivity.md
The status of your dedicated SQL pool (formerly SQL DW) will be shown here. If
![Service Available](./media/sql-data-warehouse-troubleshoot-connectivity/resource-health.png)
-If your Resource health shows that your dedicated SQL pool (formerly SQL DW) instance is paused or scaling, follow the guidance to resume your instance.
-
-![Screenshot shows an instance of dedicated SQL pool that is paused or scaling.](./media/sql-data-warehouse-troubleshoot-connectivity/resource-health-pausing.png)
Additional information about Resource Health can be found here.

## Check for paused or scaling operation
virtual-machines Key Vault Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/key-vault-setup.md
Title: Set up Azure Key Vault using CLI
description: How to set up Key Vault for virtual machine using the Azure CLI. - Previously updated : 02/24/2017 Last updated : 10/20/2022
To perform these steps, you need the latest [Azure CLI](/cli/azure/install-az-cl
## Create a Key Vault

Create a key vault and assign the deployment policy with [az keyvault create](/cli/azure/keyvault). The following example creates a key vault named `myKeyVault` in the `myResourceGroup` resource group:
-```azurecli
+```azurecli-interactive
az keyvault create -l westus -n myKeyVault -g myResourceGroup --enabled-for-deployment true
```

## Update a Key Vault for use with VMs

Set the deployment policy on an existing key vault with [az keyvault update](/cli/azure/keyvault). The following updates the key vault named `myKeyVault` in the `myResourceGroup` resource group:
-```azurecli
+```azurecli-interactive
az keyvault update -n myKeyVault -g myResourceGroup --set properties.enabledForDeployment=true
```
virtual-machines Expose Sap Process Orchestration On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-process-orchestration-on-azure.md
Non-HTTP protocols like FTP can't be addressed with Azure API Management, Applic
Files need to be stored before SAP can process them. We recommend that you use [SFTP](../../../storage/blobs/secure-file-transfer-protocol-support.md). Azure Blob Storage supports SFTP natively.
-> [!NOTE]
-> [The Azure Blob Storage SFTP feature](../../../storage/blobs/secure-file-transfer-protocol-support.md) is currently in preview.
- :::image type="content" source="media/expose-sap-process-orchestration-on-azure/file-blob-4.png" alt-text="Diagram that shows a file-based scenario with Azure Blob Storage and SAP Process Orchestration on Azure."::: Alternative SFTP options are available in Azure Marketplace if necessary.
This second example shows a setup where SAP RISE runs the whole integration chai
In this scenario, the SAP-managed Process Orchestration instance writes files to the customer-managed file share on Azure or to a workload sitting on-premises. The customer handles the breakout.
-> [!NOTE]
-> The [Azure Blob Storage SFTP feature](../../../storage/blobs/secure-file-transfer-protocol-support.md) is currently in preview.
- :::image type="content" source="media/expose-sap-process-orchestration-on-azure/rise-5b.png" alt-text="Diagram that shows a file share scenario with SAP Process Orchestration on Azure in the RISE context."::: ## Comparison of gateway setups
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 10/19/2022 Last updated : 10/20/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- October 20, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) and [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md) to indicate that we're de-emphasizing SAP reference architectures that utilize NFS clusters
- October 18, 2022: Clarify some considerations around using Azure Availability Zones in [SAP workload configurations with Azure Availability Zones](./sap-ha-availability-zones.md) - October 17, 2022: Change in [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) and [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to add guidance for setting up parameter `AUTOMATED_REGISTER` - September 29, 2022: Announcing HANA Large Instances being in sunset mode in [SAP workload on Azure virtual machine supported scenarios](./sap-planning-supported-configurations.md) and [What is SAP HANA on Azure (Large Instances)?](./hana-overview-architecture.md). Adding some statements around Azure VMware and Azure Active Directory support status in [SAP workload on Azure virtual machine supported scenarios](./sap-planning-supported-configurations.md)
virtual-machines High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs.md
vm-windows Previously updated : 01/24/2022 Last updated : 10/20/2022
[sap-hana-ha]:sap-hana-high-availability.md +
+> [!NOTE]
+> We recommend deploying one of the Azure first-party NFS
+ This article describes how to deploy the virtual machines, configure the virtual machines, install the cluster framework, and install a highly available NFS server that can be used to store the shared data of a highly available SAP system. This guide describes how to set up a highly available NFS server that is used by two SAP systems, NW1 and NW2. The names of the resources (for example virtual machines, virtual networks) in the example assume that you have used the [SAP file server template][template-file-server] with resource prefix **prod**.
virtual-machines High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md
vm-windows Previously updated : 03/25/2022 Last updated : 10/20/2022
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and th
## Setting up a highly available NFS server
+> [!NOTE]
+> We recommend deploying one of the Azure first-party NFS
+> The SAP configuration guides for a highly available SAP NW system with native NFS services are:
+> - [High availability SAP NW on Azure VMs with simple mount and NFS on SLES for SAP Applications](./high-availability-guide-suse-nfs-simple-mount.md)
+> - [High availability for SAP NW on Azure VMs with NFS on Azure Files on SLES for SAP Applications](./high-availability-guide-suse-nfs-azure-files.md)
+> - [High availability for SAP NW on Azure VMs with NFS on Azure NetApp Files on SLES for SAP Applications](./high-availability-guide-suse-netapp-files.md)
+ SAP NetWeaver requires shared storage for the transport and profile directory. Read [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server][nfs-ha] on how to set up an NFS server for SAP NetWeaver. ## Setting up (A)SCS
virtual-network Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-metrics.md
The dropped packets metric shows you the number of data packets dropped by NAT g
Use this metric to: -- Assess whether or not you're nearing or possibly experiencing SNAT exhaustion with a given NAT gateway resource. Check to see if periods of dropped packets coincide with periods of failed SNAT connections with the [Total SNAT Connection Count](#total-snat-connection-count) metric.
+- Assess whether or not you're nearing or possibly experiencing SNAT exhaustion with a given NAT gateway resource. Check to see if periods of dropped packets coincide with periods of failed SNAT connections with the [SNAT Connection Count](#snat-connection-count) metric.
- Help assess if you're experiencing a pattern of failed outbound connections.
Reasons why you may see dropped packets:
### SNAT connection count
-The SNAT connection count metric shows you the number of new SNAT connections within a specified time frame.
+The SNAT connection count metric shows you the number of new SNAT connections within a specified time frame. This metric can be broken out to view different connection states including: attempted, established, failed, closed, and timed out connections. A failed connection volume greater than zero may indicate SNAT port exhaustion.
Use this metric to: -- Evaluate the number of successful and failed attempts to make outbound connections.
+- Evaluate the health of your outbound connections.
-- Help assess if you're experiencing a pattern of failed outbound connections.
+- Assess whether or not you're nearing or possibly experiencing SNAT port exhaustion.
+
+- Evaluate whether your NAT gateway resource should be scaled out further by adding more public IPs.
-To view the number of attempted and failed connections:
+- Assess if you're experiencing a pattern of failed outbound connections.
+
+To view the connection state of your connections:
1. Select the NAT gateway resource you would like to monitor.
To view the number of attempted and failed connections:
:::image type="content" source="./media/nat-metrics/nat-metrics-3.png" alt-text="Screenshot of the metrics configuration.":::
-Reasons for why you may see failed connections:
--- If you're seeing a pattern of failed connections for your NAT gateway resource, there could be multiple possible reasons. See the NAT gateway [troubleshooting guide](./troubleshoot-nat.md) to help you further diagnose. - ### Total SNAT connection count The **Total SNAT connection count** metric shows you the total number of active SNAT connections over a period of time. You can use this metric to: -- Monitor SNAT port utilization on a given NAT gateway resource.
+- Assess if you're nearing the connection limit of your NAT gateway resource.
-- Analyze over a given time interval to provide insight on whether or not NAT gateway connectivity should be scaled out further by adding more public IPs.
+- Help assess if you're experiencing a pattern of failed outbound connections.
+
+Reasons why you may see failed connections:
-- Assess whether or not you're nearing or possibly experiencing SNAT exhaustion with a given NAT gateway resource.
+- If you're seeing a pattern of failed connections for your NAT gateway resource, there could be multiple possible reasons. See the NAT gateway [troubleshooting guide](./troubleshoot-nat.md) to help you further diagnose.
### Data path availability (Preview)
To set up a datapath availability alert, follow these steps:
>Aggregation granularity is the period of time over which the datapath availability is measured to determine if it has dropped below the threshold value. Setting the aggregation granularity to less than 5 minutes may trigger false positive alerts that detect noise in the datapath.
-### Alerts for SNAT port usage
+### Alerts for SNAT port exhaustion
-Use the total **SNAT connection count** metric and alerts for when you're nearing the limits of available SNAT ports.
+Use the **SNAT connection count** metric and alerts to help determine if you're experiencing SNAT port exhaustion. A failed connection volume greater than zero may indicate SNAT port exhaustion. You may need to investigate further to determine the root cause of these failures.
To create the alert, use the following steps:
To create the alert, use the following steps:
2. Select **Create alert rule**.
-3. From the signal list, select **Total SNAT Connection Count**.
+3. From the signal list, select **SNAT Connection Count**.
-4. From the **Operator** drop-down menu, select **Less than or equal to**.
+4. From the **Aggregation type** drop-down menu, select **Total**.
-5. From the **Aggregation type** drop-down menu, select **Total**.
+5. From the **Operator** drop-down menu, select **Greater than**.
-6. In the **Threshold value** box, enter a percentage value that the Total SNAT connection count must drop below before an alert is fired. When deciding what threshold value to use, keep in mind how much you've scaled out your NAT gateway outbound connectivity with public IP addresses. For more information, see [Scale NAT gateway](./nat-gateway-resource.md#scalability).
+6. From the **Unit** drop-down menu, select **Count**.
-7. From the **Unit** drop-down menu, select **Count**.
+7. In the **Threshold value** box, enter 0.
+
+8. In the **Split by dimensions** section, select **Connection State** under **Dimension name**.
+
+9. Under **Dimension values**, select **Failed** connections.
-8. From the **Aggregation granularity (Period)** drop-down menu, select a time period over which you would like the SNAT connection count to be measured.
+8. In the **When to evaluate** section, select **1 minute** from the **Check every** drop-down menu.
9. Create an **Action** for your alert by providing a name, notification type, and type of action that is performed when the alert is triggered.
To create the alert, use the following steps:
11. Select **Create** to create the alert rule. >[!NOTE]
->SNAT exhaustion on your NAT gateway resource is uncommon. If you see SNAT exhaustion, your NAT gateway's idle timeout timer may be holding on to SNAT ports too long or your may need to scale with additional public IPs. To troubleshoot these kinds of issues, refer to the NAT gateway [troubleshooting guide](./troubleshoot-nat.md).
+>SNAT port exhaustion on your NAT gateway resource is uncommon. If you see SNAT port exhaustion, your NAT gateway's idle timeout timer may be holding on to SNAT ports too long, or you may need to scale with additional public IPs. To troubleshoot these kinds of issues, refer to the [NAT gateway connectivity troubleshooting guide](/azure/virtual-network/nat-gateway/troubleshoot-nat-connectivity#snat-exhaustion-due-to-nat-gateway-configuration).
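The portal steps above can also be scripted. The following is a sketch using `az monitor metrics alert create`; the metric name `SNATConnectionCount`, the dimension name `ConnectionState`, and the resource-ID placeholder are assumptions inferred from the portal labels above, so verify them against your NAT gateway resource before relying on the alert:

```azurecli-interactive
# Sketch: alert when any SNAT connection in the window has the Failed state.
# SNATConnectionCount / ConnectionState are assumed metric and dimension names.
az monitor metrics alert create \
  --name snat-failed-connections \
  --resource-group myResourceGroup \
  --scopes <NAT-gateway-resource-ID> \
  --condition "total SNATConnectionCount > 0 where ConnectionState includes Failed" \
  --evaluation-frequency 1m \
  --window-size 5m \
  --description "Possible SNAT port exhaustion on NAT gateway"
```

Attach an action group with `--action` if you want notifications rather than just an alert record.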
## Network Insights
virtual-wan How To Routing Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-routing-policies.md
While Private Traffic includes both branch and Virtual Network address prefixes
* **Internet Traffic Routing Policy**: When an Internet Traffic Routing Policy is configured on a Virtual WAN hub, all branch (User VPN (Point-to-site VPN), Site-to-site VPN, and ExpressRoute) and Virtual Network connections to that Virtual WAN Hub will forward Internet-bound traffic to the Azure Firewall resource, Third-Party Security provider or **Network Virtual Appliance** specified as part of the Routing Policy.
- In other words, when Traffic Routing Policy is configured on a Virtual WAN hub, the Virtual WAN will advertise a **default** route to all spokes, Gateways and Network Virtual Appliances (deployed in the hub or spoke). This includes the **Network Virtual Appliance** that is the next hop for the Itnernet Traffic routing policy.
+ In other words, when Traffic Routing Policy is configured on a Virtual WAN hub, the Virtual WAN will advertise a **default** route to all spokes, Gateways and Network Virtual Appliances (deployed in the hub or spoke). This includes the **Network Virtual Appliance** that is the next hop for the Internet Traffic routing policy.
* **Private Traffic Routing Policy**: When a Private Traffic Routing Policy is configured on a Virtual WAN hub, **all** branch and Virtual Network traffic in and out of the Virtual WAN Hub including inter-hub traffic will be forwarded to the Next Hop Azure Firewall resource or Network Virtual Appliance resource that was specified in the Private Traffic Routing Policy.
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Yes. Virtual WAN prefers ExpressRoute over VPN for traffic egressing Azure. Howe
### When a Virtual WAN hub has an ExpressRoute circuit and a VPN site connected to it, what would cause a VPN connection route to be preferred over ExpressRoute?
-When an ExpressRoute circuit is connected to virtual hub, the Microsoft Edge routers are the first node for communication between on-premises and Azure. These edge routers communicate with the Virtual WAN ExpressRoute gateways that, in turn, learn routes from the virtual hub router that controls all routes between any gateways in Virtual WAN. The Microsoft Edge routers process virtual hub ExpressRoute routes with higher preference over routes learned from on-premises.
+When an ExpressRoute circuit is connected to a virtual hub, the Microsoft Edge routers are the first node for communication between on-premises and Azure. These edge routers communicate with the Virtual WAN ExpressRoute gateways that, in turn, learn routes from the virtual hub router that controls all routes between any gateways in Virtual WAN. The Microsoft Edge routers process virtual hub ExpressRoute routes with higher preference over routes learned from on-premises.
If, for any reason, the VPN connection becomes the primary medium for the virtual hub to learn routes from (for example, failover scenarios between ExpressRoute and VPN), then unless the VPN site has a longer AS Path length, the virtual hub will continue to share VPN-learned routes with the ExpressRoute gateway. This causes the Microsoft Edge routers to prefer VPN routes over on-premises routes.
The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub
* Contact the product team to take part in the gated public preview. In this preview, traffic between the 2 hubs traverses through the Azure Virtual WAN router in each hub and uses a hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft Edge routers/MSEE). To use this feature during preview, email **previewpreferh2h@microsoft.com** with the Virtual WAN IDs, Subscription ID, and the Azure region. Expect a response within 48 business hours (Monday-Friday) with confirmation that the feature is enabled.
-### When there's an ExpressRoute circuit connected as a bow-tie to a Virtual WAN hub and a non Virtual WAN (customer-managed) VNet, what is the path for the non Virtual WAN VNet to reach the Virtual WAN hub?
+### When there's an ExpressRoute circuit connected as a bow-tie to a Virtual WAN hub and a non Virtual WAN VNet, what is the path for the non Virtual WAN VNet to reach the Virtual WAN hub?
The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It is recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
-### Can hubs be created in different resource group in Virtual WAN?
+### Can hubs be created in different resource groups in Virtual WAN?
Yes. This option is currently available via PowerShell only. The Virtual WAN portal requires that the hubs are in the same resource group as the Virtual WAN resource itself.
You'll only be able to update your virtual hub router if all the resources (ga
If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup. >[!NOTE]
-> The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version.
+> The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource and subscription, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version.
### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway?
web-application-firewall Waf Front Door Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-create-portal.md
Previously updated : 04/20/2022 Last updated : 10/21/2022
First, create a basic WAF policy with managed Default Rule Set (DRS) by using th
1. On the top left-hand side of the screen, select **Create a resource** > search for **WAF** > select **Web Application Firewall (WAF)** > select **Create**.
-1. In the **Basics** tab of the **Create a WAF policy** page, enter or select the following information, accept the defaults for the remaining settings, and then select **Review + create**:
+1. In the **Basics** tab of the **Create a WAF policy** page, enter or select the following information, and accept the defaults for the remaining settings:
| Setting | Value | | | | | Policy for | Select **Global WAF (Front Door)**. |
- | Front Door SKU | Select between basic, standard and premium SKU. |
- | Subscription | Select your Front Door subscription name.|
+ | Front Door SKU | Select between **Classic**, **Standard** and **Premium** SKUs. |
+ | Subscription | Select your Azure subscription.|
| Resource group | Select your Front Door resource group name.| | Policy name | Enter a unique name for your WAF policy.| | Policy state | Set as **Enabled**. | :::image type="content" source="../media/waf-front-door-create-portal/basic.png" alt-text="Screenshot of the Create a W A F policy page, with a Review + create button and list boxes for the subscription, resource group, and policy name.":::
-1. In the **Association** tab of the **Create a WAF policy** page, select **+ Associate a Front Door profile**, enter the following settings, and then select **Add**:
+1. Select **Association**, and then select **+ Associate a Front door profile**, enter the following settings, and then select **Add**:
| Setting | Value | | | |
First, create a basic WAF policy with managed Default Rule Set (DRS) by using th
### Change mode
-When you create a WAF policy, by the default WAF policy is in **Detection** mode. In **Detection** mode, WAF doesn't block any requests, instead, requests matching the WAF rules are logged at WAF logs.
+When you create a WAF policy, by default it's in **Detection** mode. In **Detection** mode, WAF doesn't block any requests; instead, requests matching the WAF rules are logged in the WAF logs.
To see WAF in action, you can change the mode settings from **Detection** to **Prevention**. In **Prevention** mode, requests that match rules that are defined in Default Rule Set (DRS) are blocked and logged at WAF logs. :::image type="content" source="../media/waf-front-door-create-portal/policy.png" alt-text="Screenshot of the Policy settings section. The Mode toggle is set to Prevention.":::
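The same mode switch can be made from the command line. This is a sketch using `az network front-door waf-policy update`; the policy and resource group names are placeholders standing in for whatever you chose in the Basics tab:

```azurecli-interactive
# Switch an existing Front Door WAF policy from Detection to Prevention mode
az network front-door waf-policy update \
  --name myWafPolicy \
  --resource-group myResourceGroup \
  --mode Prevention
```

Switching back to `Detection` with the same command is a quick way to stop blocking while you tune rules.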
Below is an example of configuring a custom rule to block a request if the query
### Default Rule Set (DRS)
-Azure-managed Default Rule Set is enabled by default. Current default version is DefaultRuleSet_1.0. From WAF **Managed rules**, **Assign**, recently available ruleset Microsoft_DefaultRuleSet_1.1 is available in the drop-down list.
+Azure-managed Default Rule Set (DRS) is enabled by default. The current default version is Microsoft_DefaultRuleSet_2.0. From the **Managed rules** page, select **Assign** to assign a different DRS.
To disable an individual rule, select the **check box** in front of the rule number, and select **Disable** at the top of the page. To change actions types for individual rules within the rule set, select the check box in front of the rule number, and then select the **Change action** at the top of the page.
When no longer needed, remove the resource group and all related resources.
> [!div class="nextstepaction"] > [Learn more about Azure Front Door](../../frontdoor/front-door-overview.md)
-> [Learn more about Azure Front Door Standard/Premium](../../frontdoor/standard-premium/overview.md)
+> [Learn more about Azure Front Door tiers](../../frontdoor/standard-premium/tier-comparison.md)
web-application-firewall Application Gateway Web Application Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-web-application-firewall-portal.md
Title: 'Tutorial: Create using portal - Web Application Firewall'
+ Title: 'Tutorial: Create an application gateway with a Web Application Firewall using the Azure portal'
description: In this tutorial, you learn how to create an application gateway with a Web Application Firewall by using the Azure portal. Previously updated : 05/23/2022 Last updated : 10/20/2022 #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway with Web Application Firewall so I can protect my applications.
In this tutorial, you learn how to:
> * Create a storage account and configure diagnostics > * Test the application gateway
-![Web application firewall example](../media/application-gateway-web-application-firewall-portal/scenario-waf.png)
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
Select **OK** to close the **Create virtual network** window and save the virtual network settings.
- ![Create new application gateway: virtual network](../media/application-gateway-web-application-firewall-portal/application-gateway-create-vnet.png)
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-vnet.png" alt-text="Screenshot of Create new application gateway: Create virtual network.":::
3. On the **Basics** tab, accept the default values for the other settings and then select **Next: Frontends**.
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
2. Choose **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
- ![Create new application gateway: frontends](../media/application-gateway-web-application-firewall-portal/application-gateway-create-frontends.png)
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-frontends.png" alt-text="Screenshot of Create new application gateway: Frontends.":::
3. Select **Next: Backends**.
The backend pool is used to route requests to the backend servers that serve the
3. In the **Add a backend pool** window, select **Add** to save the backend pool configuration and return to the **Backends** tab.
- ![Create new application gateway: backends](../media/application-gateway-web-application-firewall-portal/application-gateway-create-backends.png)
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-backends.png" alt-text="Screenshot of Create new application gateway: Backends.":::
4. On the **Backends** tab, select **Next: Configuration**.
On the **Configuration** tab, you'll connect the frontend and backend pool you c
Accept the default values for the other settings on the **Listener** tab, then select the **Backend targets** tab to configure the rest of the routing rule.
- :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-listener.png" alt-text="Screenshot showing Create new application gateway: listener." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-listener.png":::
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-listener.png" alt-text="Screenshot showing Create new application gateway: listener." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-listener-expanded.png":::
4. On the **Backend targets** tab, select **myBackendPool** for the **Backend target**. 5. For the **Backend settings**, select **Add new** to create a new Backend setting. This setting determines the behavior of the routing rule. In the **Add Backend setting** window that opens, enter *myBackendSetting* for the **Backend settings name**. Accept the default values for the other settings in the window, then select **Add** to return to the **Add a routing rule** window.
- :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-backend-setting.png" alt-text="Screenshot showing Create new application gateway, Backend setting." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-backend-setting.png":::
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-backend-setting.png" alt-text="Screenshot showing Create new application gateway, Backend setting." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-backend-setting-expanded.png":::
6. On the **Add a routing rule** window, select **Add** to save the routing rule and return to the **Configuration** tab.
- :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-backends.png" alt-text="Screenshot showing Create new application gateway: routing rule." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-backends.png":::
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-backends.png" alt-text="Screenshot showing Create new application gateway: routing rule." lightbox="../media/application-gateway-web-application-firewall-portal/application-gateway-create-rule-backends-expanded.png":::
7. Select **Next: Tags** and then **Next: Review + create**.
To do this, you'll:
- **Resource group**: Select **myResourceGroupAG** for the resource group name. - **Virtual machine name**: Enter *myVM* for the name of the virtual machine.
- - **Username**: Enter a name for the administrator user name.
+ - **Username**: Enter a name for the administrator username.
- **Password**: Enter a password for the administrator password. - **Public inbound ports**: Select **None**. 4. Accept the other defaults and then select **Next: Disks**.
To do this, you'll:
6. On the **Networking** tab, verify that **myVNet** is selected for the **Virtual network** and the **Subnet** is set to **myBackendSubnet**. 1. For **Public IP**, select **None**. 1. Accept the other defaults and then select **Next: Management**.
-1. On the **Monitoring** tab, set **Boot diagnostics** to **Disable**. Accept the other defaults and then select **Review + create**.
+1. Select **Next: Monitoring** and set **Boot diagnostics** to **Disable**. Accept the other defaults, and then select **Review + create**.
1. On the **Review + create** tab, review the settings, correct any validation errors, and then select **Create**. 1. Wait for the virtual machine creation to complete before continuing.
In this example, you install IIS on the virtual machines only to verify Azure cr
1. Open [Azure PowerShell](../../cloud-shell/quickstart-powershell.md). To do so, select **Cloud Shell** from the top navigation bar of the Azure portal and then select **PowerShell** from the drop-down list.
- ![Install custom extension](../media/application-gateway-web-application-firewall-portal/application-gateway-extension.png)
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-extension.png" alt-text="Screenshot of accessing PowerShell from Portal Cloud shell.":::
2. Set the location parameter for your environment, and then run the following command to install IIS on the virtual machine:
In this example, you install IIS on the virtual machines only to verify Azure cr
Although IIS isn't required to create the application gateway, you installed it to verify whether Azure successfully created the application gateway. Use IIS to test the application gateway:
-1. Find the public IP address for the application gateway on its **Overview** page.![Record application gateway public IP address](../media/application-gateway-web-application-firewall-portal/application-gateway-record-ag-address.png)
+1. Find the public IP address for the application gateway on its **Overview** page.
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-record-ag-address.png" alt-text="Screenshot of Application Gateway public IP address on the Overview page.":::
Or, you can select **All resources**, enter *myAGPublicIPAddress* in the search box, and then select it in the search results. Azure displays the public IP address on the **Overview** page. 1. Copy the public IP address, and then paste it into the address bar of your browser. 1. Check the response. A valid response verifies that the application gateway was successfully created and it can successfully connect with the backend.
- ![Test application gateway](../media/application-gateway-web-application-firewall-portal/application-gateway-iistest.png)
+ :::image type="content" source="../media/application-gateway-web-application-firewall-portal/application-gateway-iistest.png" alt-text="Screenshot of testing the application gateway.":::
## Clean up resources