Updates from: 03/12/2021 04:09:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-api-connector.md
For Azure App Service and Azure Functions, see [configure TLS mutual authenticat
It's recommended that you set reminder alerts for when your certificate will expire. To upload a new certificate to an existing API connector, select the API connector under **API connectors (preview)** and select **Upload new certificate**. Azure AD B2C automatically uses the most recently uploaded certificate that is not expired and whose start date has passed.

### API Key
-Some services use an "API key" mechanism to make it harder to access your HTTP endpoints during development. For [Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>).
+Some services use an "API key" mechanism to obfuscate access to your HTTP endpoints during development. For [Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>.
This is not a mechanism that should be used alone in production. Therefore, configuration for basic or certificate authentication is always required. If you do not wish to implement any authentication method (not recommended) for development purposes, you can choose basic authentication and use temporary values for `username` and `password` that your API can disregard while you implement the authorization in your API.
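If you do choose basic authentication, your API must validate the `Authorization` header that Azure AD B2C sends with each request. A minimal sketch in Python follows; the helper name and the framework-agnostic `headers` dict are illustrative assumptions, not part of any Azure SDK:

```python
import base64

def check_basic_auth(headers, expected_user, expected_password):
    """Validate an HTTP Basic Authorization header (hypothetical helper)."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(auth[len("Basic "):]).decode("utf-8")
    except Exception:
        # Malformed base64 or non-UTF-8 payload: reject.
        return False
    user, _, password = decoded.partition(":")
    return user == expected_user and password == expected_password

# The header a caller would send for username "apiuser", password "s3cret".
token = base64.b64encode(b"apiuser:s3cret").decode("ascii")
print(check_basic_auth({"Authorization": "Basic " + token}, "apiuser", "s3cret"))
```

In production you would compare against securely stored credentials (for example with `hmac.compare_digest`) rather than plain string equality.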
active-directory-b2c Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/code-samples.md
The following tables provide links to samples for applications including iOS, An
| Sample | Description |
| -- | -- |
+| [ms-identity-javascript-react-tutorial](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c) | A single page application (SPA) calling a web API. Authentication is done with Azure AD B2C by using MSAL React. This sample uses the authorization code flow with PKCE. |
| [ms-identity-b2c-javascript-spa](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa) | A single page application (SPA) calling a web API. Authentication is done with Azure AD B2C by using MSAL.js. This sample uses the authorization code flow with PKCE. |
-| [javascript-nodejs-management](https://github.com/Azure-Samples/ms-identity-b2c-javascript-nodejs-management/tree/main/Chapter1) | A single page application (SPA) calling Microsoft Graph to manage users in a B2C directory. Authentication is done with Azure AD B2C by using MSAL.js. This sample uses the authorization code flow with PKCE.|
| [javascript-msal-singlepageapp](https://github.com/Azure-Samples/active-directory-b2c-javascript-msal-singlepageapp) | A single page application (SPA) calling a web API. Authentication is done with Azure AD B2C by using MSAL.js. This sample uses the implicit flow.|
+| [javascript-nodejs-management](https://github.com/Azure-Samples/ms-identity-b2c-javascript-nodejs-management/tree/main/Chapter1) | A single page application (SPA) calling Microsoft Graph to manage users in a B2C directory. Authentication is done with Azure AD B2C by using MSAL.js. This sample uses the authorization code flow with PKCE.|
## Console/Daemon apps
active-directory-b2c User Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-migration.md
Previously updated : 02/14/2020 Last updated : 03/11/2021
The seamless migration flow thus has two phases: *pre migration* and *set credentials*.
### Phase 1: Pre migration

1. Your migration application reads the user accounts from the old identity provider.
-1. The migration application creates corresponding user accounts in your Azure AD B2C directory, but *does not set passwords*.
+1. The migration application creates corresponding user accounts in your Azure AD B2C directory, but *sets random passwords* that you generate.
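The random-password step in phase 1 can be sketched as follows. This is an illustrative helper, not part of any migration SDK; the account creation itself would go through the Microsoft Graph users API. The password is never shared with the user, it only ensures the pre-migrated account can't be signed into before the user resets credentials in phase 2:

```python
import secrets
import string

SYMBOLS = "!#$%&*+-"

def random_password(length=24):
    """Generate a random throwaway password for a pre-migrated account."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    # Guarantee one character from each class to satisfy typical
    # complexity policies, then fill the rest randomly.
    chars = [
        secrets.choice(string.ascii_uppercase),
        secrets.choice(string.ascii_lowercase),
        secrets.choice(string.digits),
        secrets.choice(SYMBOLS),
    ]
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(random_password())
```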
### Phase 2: Set credentials
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/powershell-create-instance.md
Previously updated : 02/04/2021 Last updated : 03/10/2021
To complete this article, you need the following resources:
* You need *global administrator* privileges in your Azure AD tenant to enable Azure AD DS.
* You need *Contributor* privileges in your Azure subscription to create the required Azure AD DS resources.
+ > [!IMPORTANT]
+ > While the **Az.ADDomainServices** PowerShell module is in preview, you must install it separately
+ > using the `Install-Module` cmdlet.
+
+ ```azurepowershell-interactive
+ Install-Module -Name Az.ADDomainServices
+ ```
+## Create required Azure AD resources

Azure AD DS requires a service principal to authenticate and communicate, and an Azure AD group to define which users have administrative permissions in the managed domain.
Add-AzureADGroupMember -ObjectId $GroupObjectId.ObjectId -RefObjectId $UserObjec
First, register the Azure AD Domain Services resource provider using the [Register-AzResourceProvider][Register-AzResourceProvider] cmdlet:
-```powershell
+```azurepowershell-interactive
Register-AzResourceProvider -ProviderNamespace Microsoft.AAD
```

Next, create a resource group using the [New-AzResourceGroup][New-AzResourceGroup] cmdlet. In the following example, the resource group is named *myResourceGroup* and is created in the *westus* region. Use your own name and desired region:
-```powershell
+```azurepowershell-interactive
$ResourceGroupName = "myResourceGroup"
$AzureLocation = "westus"
Create the virtual network and subnets for Azure AD Domain Services. Two subnets
Create the subnets using the [New-AzVirtualNetworkSubnetConfig][New-AzVirtualNetworkSubnetConfig] cmdlet, then create the virtual network using the [New-AzVirtualNetwork][New-AzVirtualNetwork] cmdlet.
-```powershell
+```azurepowershell-interactive
$VnetName = "myVnet"

# Create the dedicated subnet for Azure AD Domain Services.
Azure AD DS needs a network security group to secure the ports needed for the ma
The following PowerShell cmdlets use [New-AzNetworkSecurityRuleConfig][New-AzNetworkSecurityRuleConfig] to create the rules, then [New-AzNetworkSecurityGroup][New-AzNetworkSecurityGroup] to create the network security group. The network security group and rules are then associated with the virtual network subnet using the [Set-AzVirtualNetworkSubnetConfig][Set-AzVirtualNetworkSubnetConfig] cmdlet.
-```powershell
+```azurepowershell-interactive
$NSGName = "aaddsNSG"

# Create a rule to allow inbound TCP port 3389 traffic from Microsoft secure access workstations for troubleshooting
Availability Zones are unique physical locations within an Azure region. Each zo
There's nothing for you to configure for Azure AD DS to be distributed across zones. The Azure platform automatically handles the zone distribution of resources. For more information and to see region availability, see [What are Availability Zones in Azure?][availability-zones].
-```powershell
+```azurepowershell-interactive
$AzureSubscriptionId = "YOUR_AZURE_SUBSCRIPTION_ID"
$ManagedDomainName = "aaddscontoso.com"

# Enable Azure AD Domain Services for the directory.
-New-AzResource -ResourceId "/subscriptions/$AzureSubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.AAD/DomainServices/$ManagedDomainName" `
- -ApiVersion "2017-06-01" `
- -Location $AzureLocation `
- -Properties @{"DomainName"=$ManagedDomainName; `
- "SubnetId"="/subscriptions/$AzureSubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/DomainServices"} `
- -Force -Verbose
+$replicaSetParams = @{
+ Location = $AzureLocation
+ SubnetId = "/subscriptions/$AzureSubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/DomainServices"
+}
+$replicaSet = New-AzADDomainServiceReplicaSetObject @replicaSetParams
+
+$domainServiceParams = @{
+ Name = $ManagedDomainName
+ ResourceGroupName = $ResourceGroupName
+ DomainName = $ManagedDomainName
+ ReplicaSet = $replicaSet
+}
+New-AzADDomainService @domainServiceParams
```

It takes a few minutes to create the resource and return control to the PowerShell prompt. The managed domain continues to be provisioned in the background, and can take up to an hour to complete the deployment. In the Azure portal, the **Overview** page for your managed domain shows the current status throughout this deployment stage.
The following complete PowerShell script combines all of the tasks shown in this
> [!NOTE] > To enable Azure AD DS, you must be a global administrator for the Azure AD tenant. You also need at least *Contributor* privileges in the Azure subscription.
-```powershell
+```azurepowershell-interactive
# Change the following values to match your deployment.
$AaddsAdminUserUpn = "admin@contoso.onmicrosoft.com"
$ResourceGroupName = "myResourceGroup"
$Vnet=New-AzVirtualNetwork `
  -Name $VnetName `
  -AddressPrefix 10.0.0.0/16 `
  -Subnet $AaddsSubnet,$WorkloadSubnet
-
+$NSGName = "aaddsNSG"

# Create a rule to allow inbound TCP port 3389 traffic from Microsoft secure access workstations for troubleshooting
Set-AzVirtualNetworkSubnetConfig -Name $SubnetName `
$vnet | Set-AzVirtualNetwork

# Enable Azure AD Domain Services for the directory.
-New-AzResource -ResourceId "/subscriptions/$AzureSubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.AAD/DomainServices/$ManagedDomainName" `
- -ApiVersion "2017-06-01" `
- -Location $AzureLocation `
- -Properties @{"DomainName"=$ManagedDomainName; `
- "SubnetId"="/subscriptions/$AzureSubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/DomainServices"} `
- -Force -Verbose
+$replicaSetParams = @{
+ Location = $AzureLocation
+ SubnetId = "/subscriptions/$AzureSubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/DomainServices"
+}
+$replicaSet = New-AzADDomainServiceReplicaSetObject @replicaSetParams
+
+$domainServiceParams = @{
+ Name = $ManagedDomainName
+ ResourceGroupName = $ResourceGroupName
+ DomainName = $ManagedDomainName
+ ReplicaSet = $replicaSet
+}
+New-AzADDomainService @domainServiceParams
```

It takes a few minutes to create the resource and return control to the PowerShell prompt. The managed domain continues to be provisioned in the background, and can take up to an hour to complete the deployment. In the Azure portal, the **Overview** page for your managed domain shows the current status throughout this deployment stage.
active-directory-domain-services Troubleshoot Account Lockout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/troubleshoot-account-lockout.md
AADDomainServicesAccountManagement
| sort by TimeGenerated asc
```
+**Note**
+
+In events 4776 and 4740, the **Source Workstation:** field may be empty. This happens when the bad password attempt was made over a network logon through another device, for example a RADIUS server that forwards authentication to Azure AD DS. To confirm the source, enable RDP to the domain controllers and configure Netlogon logging, which records entries such as:
+
+03/04 19:07:29 [LOGON] [10752] contoso: SamLogon: Transitive Network logon of contoso\Nagappan.Veerappan from (via LOB11-RADIUS) Entered
+
+03/04 19:07:29 [LOGON] [10752] contoso: SamLogon: Transitive Network logon of contoso\Nagappan.Veerappan from (via LOB11-RADIUS) Returns 0xC000006A
+
+03/04 19:07:35 [LOGON] [10753] contoso: SamLogon: Transitive Network logon of contoso\Nagappan.Veerappan from (via LOB11-RADIUS) Entered
+
+03/04 19:07:35 [LOGON] [10753] contoso: SamLogon: Transitive Network logon of contoso\Nagappan.Veerappan from (via LOB11-RADIUS) Returns 0xC000006A
+
+To enable RDP to your domain controllers, add the required inbound security rules to the network security group; see https://docs.microsoft.com/azure/active-directory-domain-services/alert-nsg#inbound-security-rules. If you have already modified the default NSG, follow the PowerShell steps in https://docs.microsoft.com/azure/active-directory-domain-services/network-considerations#port-3389management-using-remote-desktop.
+
+To enable Netlogon logging on any server, see https://docs.microsoft.com/troubleshoot/windows-client/windows-security/enable-debug-logging-netlogon-service.
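As a rough illustration of what to look for in the Netlogon output, the sketch below (a hypothetical helper, not a Microsoft tool) extracts the account, the forwarding device, and the NTSTATUS code from `SamLogon ... Returns` lines, and flags the wrong-password status `0xC000006A`:

```python
import re

# Match "Transitive Network logon of <account> from (via <device>) Returns <status>".
PATTERN = re.compile(
    r"Transitive Network logon of (?P<account>\S+) "
    r"from \(via (?P<via>[^)]+)\) Returns (?P<status>0x[0-9A-Fa-f]+)"
)

def failed_logons(lines):
    """Return (account, via, status) tuples for wrong-password failures."""
    hits = []
    for line in lines:
        m = PATTERN.search(line)
        if m and m.group("status").upper() == "0XC000006A":
            hits.append((m.group("account"), m.group("via"), m.group("status")))
    return hits

log = [
    r"03/04 19:07:29 [LOGON] [10752] contoso: SamLogon: Transitive Network logon of contoso\Nagappan.Veerappan from (via LOB11-RADIUS) Entered",
    r"03/04 19:07:29 [LOGON] [10752] contoso: SamLogon: Transitive Network logon of contoso\Nagappan.Veerappan from (via LOB11-RADIUS) Returns 0xC000006A",
]
print(failed_logons(log))
```

Repeated hits from the same `via` device point at the upstream system (here, the RADIUS server) that is replaying a stale credential.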
## Next steps

For more information on fine-grained password policies to adjust account lockout thresholds, see [Configure password and account lockout policies][configure-fgpp].
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/app-objects-and-service-principals.md
The application object is the *global* representation of your application for us
The application object serves as the template from which common and default properties are *derived* for use in creating corresponding service principal objects. An application object therefore has a 1:1 relationship with the software application, and a 1:many relationship with its corresponding service principal object(s).
-A service principal must be created in each tenant where the application is used, enabling it to establish an identity for sign-in and/or access to resources being secured by the tenant. A single-tenant application has only one service principal (in its home tenant), created and consented for use during application registration. A multi-tenant Web application/API also has a service principal created in each tenant where a user from that tenant has consented to its use.
+A service principal must be created in each tenant where the application is used, enabling it to establish an identity for sign-in and/or access to resources being secured by the tenant. A single-tenant application has only one service principal (in its home tenant), created and consented for use during application registration. A multi-tenant application also has a service principal created in each tenant where a user from that tenant has consented to its use.
-Any changes you make to your application object, including deletion, are reflected in its service principal object in the application's home tenant only (the tenant where it was registered). For multi-tenant applications, changes to the application object are not reflected in any consumer tenants' service principal objects, until the access is removed through the [Application Access Panel](https://myapps.microsoft.com) and granted again.
-
-Native applications are registered as multi-tenant by default.
+### Consequences of modifying and deleting applications
+Any changes that you make to your application object are also reflected in its service principal object in the application's home tenant only (the tenant where it was registered). This means that deleting an application object will also delete its home tenant service principal object. However, restoring that application object will not restore its corresponding service principal. For multi-tenant applications, changes to the application object are not reflected in any consumer tenants' service principal objects, until the access is removed through the [Application Access Panel](https://myapps.microsoft.com) and granted again.
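The relationships described above can be illustrated with a toy model (this is only a sketch of the bookkeeping, not the Microsoft Graph API): registering an app creates the app object plus its home-tenant service principal, each consenting tenant gets its own service principal, and deleting the app object removes only the home-tenant service principal.

```python
class Directory:
    """Toy model of app objects and service principals across tenants."""

    def __init__(self):
        self.app_objects = {}            # app_id -> home tenant
        self.service_principals = set()  # (app_id, tenant) pairs

    def register_app(self, app_id, home_tenant):
        # Registration creates the app object and its home-tenant SP.
        self.app_objects[app_id] = home_tenant
        self.service_principals.add((app_id, home_tenant))

    def consent(self, app_id, tenant):
        # A consumer tenant consenting creates its own SP (1:many).
        self.service_principals.add((app_id, tenant))

    def delete_app(self, app_id):
        # Deleting the app object removes only the home-tenant SP;
        # consumer tenants' SPs are untouched.
        home = self.app_objects.pop(app_id)
        self.service_principals.discard((app_id, home))

d = Directory()
d.register_app("app1", "contoso")
d.consent("app1", "fabrikam")
d.delete_app("app1")
print(("app1", "contoso") in d.service_principals)   # home-tenant SP is gone
print(("app1", "fabrikam") in d.service_principals)  # consumer-tenant SP remains
```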
## Example
active-directory Howto Restrict Your App To A Set Of Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md
description: Learn how to restrict access to your apps registered in Azure AD to
-
active-directory Msal Net Use Brokers With Xamarin Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-use-brokers-with-xamarin-apps.md
The forward-slash (`/`) in front of the signature in the `android:path` value is
    android:path="/hgbUYHVBYUTvuvT&Y6tr554365466="/>
```
-For more information about configuring your application for system browser and Android 11 support, see [Update the Android manifest for system browser support](msal-net-xamarin-android-considerations.md#update-the-android-manifest).
+For more information about configuring your application for system browser and Android 11 support, see [Update the Android manifest for system browser support](msal-net-xamarin-android-considerations.md#update-the-android-manifest-for-system-webview-support).
As an alternative, you can configure MSAL to fall back to the embedded browser, which doesn't rely on a redirect URI:
active-directory Quickstart Remove App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-remove-app.md
To delete an application, be listed as an owner of the application or have admin
1. Search for and select **Azure Active Directory**.
1. Under **Manage**, select **App registrations**, and then select the application that you want to configure. Once you've selected the app, you'll see the application's **Overview** page.
1. From the **Overview** page, select **Delete**.
-1. Select **Yes** to confirm that you want to delete the app.
+1. Read the deletion consequences. Check the box if one appears at the bottom of the pane.
+1. Select **Delete** to confirm that you want to delete the app.
## Remove an application authored by another organization
active-directory Reference V2 Libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-v2-libraries.md
A single-page application runs entirely on the browser surface and fetches page
Because a SPA's code runs entirely in the browser, it's considered a *public client* that's unable to store secrets securely.
-| Language / framework | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
-|-|--||:--:|:--:|::|::|
-| Angular | [MSAL Angular 2.0](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | [@azure/msal-angular](https://www.npmjs.com/package/@azure/msal-angular) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | Public preview |
-| Angular | [MSAL Angular](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/msal-angular-v1/lib/msal-angular) | [@azure/msal-angular](https://www.npmjs.com/package/@azure/msal-angular) | [Tutorial](tutorial-v2-angular.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| AngularJS | [MSAL AngularJS](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angularjs) | [@azure/msal-angularjs](https://www.npmjs.com/package/@azure/msal-angularjs) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | Public preview |
-| JavaScript | [MSAL.js 2.0](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) | [@azure/msal-browser](https://www.npmjs.com/package/@azure/msal-browser) | [Tutorial](tutorial-v2-javascript-auth-code.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| JavaScript | [MSAL.js 1.0](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core) | [@azure/msal-core](https://www.npmjs.com/package/@azure/msal-core) | [Tutorial](tutorial-v2-javascript-spa.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| React | [MSAL React](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react) | [@azure/msal-react](https://www.npmjs.com/package/@azure/msal-react) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | Public preview |
-<!--
-| Vue | [Vue MSAL]( https://github.com/mvertopoulos/vue-msal) | [vue-msal]( https://www.npmjs.com/package/vue-msal) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
-->
-<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
## Web application
A web application runs code on a server that generates and sends HTML, CSS, and
Because a web application's code runs on the web server, it's considered a *confidential client* that can store secrets securely.
-| Language / framework | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
-|-|--||:-:|:--:|::|::|
-| .NET | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | — | ![Library cannot request ID tokens for user sign-in.][n] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| ASP.NET Core | [ASP.NET Security](/aspnet/core/security/) | [Microsoft.AspNetCore.Authentication](https://www.nuget.org/packages/Microsoft.AspNetCore.Authentication/) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library cannot request access tokens for protected web APIs.][n] | GA |
-| ASP.NET Core | [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web) | [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Java | [MSAL4J](https://github.com/AzureAD/microsoft-authentication-library-for-java) | [msal4j](https://search.maven.org/artifact/com.microsoft.azure/msal4j) | [Quickstart](quickstart-v2-java-webapp.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Node.js | [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) | [msal-node](https://www.npmjs.com/package/@azure/msal-node) | [Quickstart](quickstart-v2-nodejs-webapp-msal.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Node.js | [Azure AD Passport](https://github.com/AzureAD/passport-azure-ad) | [passport-azure-ad](https://www.npmjs.com/package/passport-azure-ad) | [Quickstart](quickstart-v2-nodejs-webapp.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Python | [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | [msal](https://pypi.org/project/msal) | [Quickstart](quickstart-v2-python-webapp.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-<!--
-| Java | [ScribeJava](https://github.com/scribejava/scribejava) | [ScribeJava 3.2.0](https://github.com/scribejava/scribejava/releases/tag/scribejava-3.2.0) | ![X indicating no.][n] | ![X indicating no.][n] | ![Green check mark.][y] | -- |
-| Java | [Gluu oxAuth](https://github.com/GluuFederation/oxAuth) | [oxAuth 3.0.2](https://github.com/GluuFederation/oxAuth/releases/tag/3.0.2) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
-| Node.js | [openid-client](https://github.com/panva/node-openid-client/) | [openid-client 2.4.5](https://github.com/panva/node-openid-client/releases/tag/v2.4.5) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
-| PHP | [PHP League oauth2-client](https://github.com/thephpleague/oauth2-client) | [oauth2-client 1.4.2](https://github.com/thephpleague/oauth2-client/releases/tag/1.4.2) | ![X indicating no.][n] | ![X indicating no.][n] | ![Green check mark.][y] | -- |
-| Ruby | [OmniAuth](https://github.com/omniauth/omniauth) | [omniauth 1.3.1](https://github.com/omniauth/omniauth/releases/tag/v1.3.1)<br/>[omniauth-oauth2 1.4.0](https://github.com/intridea/omniauth-oauth2) | ![X indicating no.][n] | ![X indicating no.][n] | ![Green check mark.][y] | -- |
-->
-<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
## Desktop application
A desktop application is typically binary (compiled) code that surfaces a user i
Because a desktop application runs on the user's desktop, it's considered a *public client* that's unable to store secrets securely.
-| Language / framework | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
-|-|--||::|:--:|::|::|
-| Electron | [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) | [@azure/msal-node](https://www.npmjs.com/package/@azure/msal-node) | [Tutorial](tutorial-v2-nodejs-desktop.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Java | [MSAL4J](https://github.com/AzureAD/microsoft-authentication-library-for-java) | [msal4j](https://mvnrepository.com/artifact/com.microsoft.azure/msal4j) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| macOS (Swift/Obj-C) | [MSAL for iOS and macOS](https://github.com/AzureAD/microsoft-authentication-library-for-objc) | [MSAL](https://cocoapods.org/pods/MSAL) | [Tutorial](tutorial-v2-ios.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| UWP | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | [Tutorial](tutorial-v2-windows-uwp.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| WPF | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | [Tutorial](tutorial-v2-windows-desktop.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-<!--
-| Java | Scribe | [Scribe Java](https://mvnrepository.com/artifact/org.scribe/scribe) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
-| React Native | [React Native App Auth](https://github.com/FormidableLabs/react-native-app-auth/blob/main/docs/config-examples/azure-active-directory.md) | [react-native-app-auth](https://www.npmjs.com/package/react-native-app-auth) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
-->
-<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
## Mobile application
A mobile application is typically binary (compiled) code that surfaces a user in
Because a mobile application runs on the user's mobile device, it's considered a *public client* that's unable to store secrets securely.
-| Platform | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
-|-|||:--:|:--:|::|::|
-| Android (Java) | [MSAL Android](https://github.com/AzureAD/microsoft-authentication-library-for-android) | [MSAL](https://mvnrepository.com/artifact/com.microsoft.identity.client/msal) | [Quickstart](quickstart-v2-android.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Android (Kotlin) | [MSAL Android](https://github.com/AzureAD/microsoft-authentication-library-for-android) | [MSAL](https://mvnrepository.com/artifact/com.microsoft.identity.client/msal) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| iOS (Swift/Obj-C) | [MSAL for iOS and macOS](https://github.com/AzureAD/microsoft-authentication-library-for-objc) | [MSAL](https://cocoapods.org/pods/MSAL) | [Tutorial](tutorial-v2-ios.md) | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Xamarin (.NET) | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) | — | ![Library can request ID tokens for user sign-in.][y] | ![Library can request access tokens for protected web APIs.][y] | GA |
-<!--
-| React Native |[React Native App Auth](https://github.com/FormidableLabs/react-native-app-auth/blob/main/docs/config-examples/azure-active-directory.md) | [react-native-app-auth](https://www.npmjs.com/package/react-native-app-auth) | ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- |
-->
-<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
## Service / daemon
Services and daemons are commonly used for server-to-server and other unattended
A service or daemon that runs on a server is considered a *confidential client* that can store its secrets securely.
-| Language / framework | Project on<br/>GitHub | Package | Getting<br/>started | Sign in users | Access web APIs | Generally available (GA) *or*<br/>Public preview<sup>1</sup> |
-|-||-|::|:--:|::|::|
-| .NET | [MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client/) | [Quickstart](quickstart-v2-netcore-daemon.md) | ![Library cannot request ID tokens for user sign-in.][n] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Java | [MSAL4J](https://github.com/AzureAD/microsoft-authentication-library-for-java) | [msal4j](https://javadoc.io/doc/com.microsoft.azure/msal4j/latest/index.html) | — | ![Library cannot request ID tokens for user sign-in.][n] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Node | [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) | [msal-node](https://www.npmjs.com/package/@azure/msal-node) | [Quickstart](quickstart-v2-nodejs-console.md) | ![Library cannot request ID tokens for user sign-in.][n] | ![Library can request access tokens for protected web APIs.][y] | GA |
-| Python | [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | [msal-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | — | ![Library cannot request ID tokens for user sign-in.][n] | ![Library can request access tokens for protected web APIs.][y] | GA |
-<!--
-|PHP| [The PHP League oauth2-client](https://oauth2-client.thephpleague.com/usage/) | [League\OAuth2](https://oauth2-client.thephpleague.com/) | ![Green check mark.][n] | ![X indicating no.][n] | ![Green check mark.][y] | -- |
-->
-<sup>1</sup> [Supplemental terms of use for Microsoft Azure Previews][preview-tos] apply to libraries in *Public preview*.
## Next steps
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
These samples show how to write a single-page application secured with Microsoft
| ![This image shows the JavaScript logo](media/sample-v2-code/logo_js.png) [JavaScript (MSAL.js 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser) | SPA calls Microsoft Graph using Auth Code Flow w/ PKCE |[javascript-v2](https://github.com/Azure-Samples/ms-identity-javascript-v2) |
| ![This image shows the JavaScript logo](media/sample-v2-code/logo_js.png) [JavaScript (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-core) | SPA calls B2C |[b2c-javascript-msal-singlepageapp](https://github.com/Azure-Samples/active-directory-b2c-javascript-msal-singlepageapp) |
| ![This image shows the JavaScript logo](media/sample-v2-code/logo_js.png) [JavaScript (MSAL.js 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser) | SPA calls B2C using Auth Code Flow w/PKCE |[b2c-javascript-spa](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa) |
+| ![This image shows the JavaScript logo](media/sample-v2-code/logo_js.png) [JavaScript (MSAL.js 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser) | SPA calls custom web API which in turn calls Microsoft Graph | [ms-identity-javascript-tutorial-chapter4-obo](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/1-call-api-graph) |
| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular)| SPA calls Microsoft Graph | [active-directory-javascript-singlepageapp-angular](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-angular) |
| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular)| SPA calls Microsoft Graph using Auth Code Flow w/ PKCE | [ms-identity-javascript-angular-spa](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa) |
| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular)| SPA calls custom Web API | [ms-identity-javascript-angular-spa-aspnetcore-webapi](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-aspnetcore-webapi) |
| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | SPA calls B2C |[active-directory-b2c-javascript-angular-spa](https://github.com/Azure-Samples/active-directory-b2c-javascript-angular-spa) |
+| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | SPA calls custom Web API with App Roles and Security Groups |[ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups) |
| ![This image shows the React logo](media/sample-v2-code/logo_react.png) [React (MSAL React)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react)| SPA calls Microsoft Graph using Auth Code Flow w/ PKCE | [ms-identity-javascript-react-spa](https://github.com/Azure-Samples/ms-identity-javascript-react-spa) |
+| ![This image shows the React logo](media/sample-v2-code/logo_react.png) [React (MSAL React)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react)| SPA calls custom web API | [ms-identity-javascript-react-tutorial](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/1-call-api) |
| ![This image shows the React logo](media/sample-v2-code/logo_react.png) [React (MSAL.js 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-core)| SPA calls custom Web API which in turn calls Microsoft Graph | [ms-identity-javascript-react-spa-dotnetcore-webapi-obo](https://github.com/Azure-Samples/ms-identity-javascript-react-spa-dotnetcore-webapi-obo) |
-| ![This image shows the JavaScript logo](media/sample-v2-code/logo_js.png) [JavaScript (MSAL.js 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser) | SPA calls custom web API which in turn calls Microsoft Graph | [ms-identity-javascript-tutorial-chapter4-obo](https://github.com/Azure-Samples/ms-identity-javascript-tutorial/tree/main/4-AdvancedGrants/1-call-api-graph) |
-| ![This image shows the Angular logo](media/sample-v2-code/logo_angular.png) [Angular (MSAL Angular 2.0)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular) | SPA calls custom Web API with App Roles and Security Groups |[ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups) |
| ![This image shows the Blazor logo](media/sample-v2-code/logo-blazor.png) [Blazor WebAssembly (MSAL.js)](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser) | Blazor WebAssembly Tutorial to sign-in users and call APIs with Azure Active Directory |[ms-identity-blazor-wasm](https://github.com/Azure-Samples/ms-identity-blazor-wasm) |

## Web applications
active-directory Scenario Daemon App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-daemon-app-configuration.md
Learn how to configure the code for your daemon application that calls web APIs.
-## MSAL libraries that support daemon apps
+## Microsoft libraries supporting daemon apps
-These Microsoft libraries support daemon apps:
+The following Microsoft libraries support daemon apps:
- MSAL library | Description
- | -
- ![MSAL.NET](media/sample-v2-code/logo_NET.png) <br/> MSAL.NET | The .NET Framework and .NET Core platforms are supported for building daemon applications. (UWP, Xamarin.iOS, and Xamarin.Android aren't supported because those platforms are used to build public client applications.)
- ![Python](media/sample-v2-code/logo_python.png) <br/> MSAL Python | Support for daemon applications in Python.
- ![Java](media/sample-v2-code/logo_java.png) <br/> MSAL Java | Support for daemon applications in Java.
## Configure the authority
active-directory Scenario Desktop App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-app-configuration.md
Now that you've created your application, you'll learn how to configure the code with the application's coordinates.
-## Microsoft Authentication Libraries
+## Microsoft libraries supporting desktop apps
-The following Microsoft Authentication Libraries (MSALs) support desktop applications.
+The following Microsoft libraries support desktop apps:
- Microsoft Authentication Library | Description
- | -
- ![MSAL.NET](media/sample-v2-code/logo_NET.png) <br/> MSAL.NET | Supports building a desktop application in multiple platforms, such as Linux, Windows, and macOS.
- ![Python](media/sample-v2-code/logo_python.png) <br/> MSAL Python | Supports building a desktop application in multiple platforms.
- ![Java](media/sample-v2-code/logo_java.png) <br/> MSAL Java | Supports building a desktop application in multiple platforms.
- ![MSAL iOS](media/sample-v2-code/logo_iOS.png) <br/> MSAL iOS | Supports desktop applications that run on macOS only.
## Public client application
active-directory Scenario Mobile App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-mobile-app-configuration.md
After you create your application, you'll learn how to configure the code by using the app registration parameters. Mobile applications present some complexities related to fitting into their creation framework.
-## Find MSAL support for mobile apps
+## Microsoft libraries supporting mobile apps
-The following Microsoft Authentication Library (MSAL) types support mobile apps.
+The following Microsoft libraries support mobile apps:
-MSAL | Description
- | -
-![MSAL.NET](media/sample-v2-code/logo_NET.png) <br/> MSAL.NET | Used to develop portable applications. MSAL.NET supports the following platforms for building a mobile application: Universal Windows Platform (UWP), Xamarin.iOS, and Xamarin.Android.
-![MSAL.iOS](media/sample-v2-code/logo_iOS.png) <br/> MSAL.iOS | Used to develop native iOS applications by using Objective-C or Swift.
-![MSAL.Android](media/sample-v2-code/logo_android.png) <br/> MSAL.Android | Used to develop native Android applications in Java for Android.
## Instantiate the application
These tasks are necessary when you use MSAL for iOS and macOS:
If you use Xamarin.Android, do the following tasks:
- [Ensure control goes back to MSAL after the interactive portion of the authentication flow ends](msal-net-xamarin-android-considerations.md#ensure-that-control-returns-to-msal)
-- [Update the Android manifest](msal-net-xamarin-android-considerations.md#update-the-android-manifest)
+- [Update the Android manifest](msal-net-xamarin-android-considerations.md#update-the-android-manifest-for-system-webview-support)
- [Use the embedded web view (optional)](msal-net-xamarin-android-considerations.md#use-the-embedded-web-view-optional)
- [Troubleshoot as necessary](msal-net-xamarin-android-considerations.md#troubleshooting)
For information about enabling a broker on Android, see [Brokered authentication
## Next steps

Move on to the next article in this scenario,
-[Acquiring a token](scenario-mobile-acquire-token.md).
+[Acquiring a token](scenario-mobile-acquire-token.md).
active-directory Scenario Spa App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-app-configuration.md
Learn how to configure the code for your single-page application (SPA).
-## MSAL libraries for SPAs and supported authentication flows
+## Microsoft libraries supporting single-page apps
-The Microsoft identity platform provides the following Microsoft Authentication Library for JavaScript (MSAL.js) to support implicit flow and authorization code flow with PKCE by using industry-recommended security practices:
+The following Microsoft libraries support single-page apps:
-| MSAL library | Flow | Description |
-|--||-|
-| ![MSAL.js](media/sample-v2-code/logo_js.png) <br/> [MSAL.js (2.x)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser) | Authorization code flow (PKCE) | Plain JavaScript library for use in any client-side web app that's built through JavaScript or SPA frameworks such as Angular, Vue.js, and React.js. |
-| ![MSAL.js](media/sample-v2-code/logo_js.png) <br/> [MSAL.js (1.x)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core) | Implicit flow | Plain JavaScript library for use in any client-side web app that's built through JavaScript or SPA frameworks such as Angular, Vue.js, and React.js. |
-| ![MSAL Angular](medi) | Implicit flow | Wrapper of the core MSAL.js library to simplify use in single-page apps that are built through the Angular framework. |
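The authorization code flow with PKCE listed in the table protects the code exchange with a verifier/challenge pair. MSAL.js 2.x generates these for you; the sketch below only illustrates how an S256 code challenge is derived from a code verifier (per RFC 7636), it is not something a SPA needs to implement itself:

```python
import base64
import hashlib
import secrets

# Generate a random code verifier (base64url, no padding).
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# The S256 code challenge is the base64url-encoded SHA-256 hash of the verifier.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
print(code_challenge)
```

The client sends `code_challenge` on the authorize request and reveals `code_verifier` only when redeeming the authorization code, so an intercepted code is useless on its own.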
## Application code configuration
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
As shown in the [Web app that signs in users](scenario-web-app-sign-user-overvie
The [Web app that signs in users](scenario-web-app-sign-user-overview.md) scenarios covered only the first step. Here you learn how to modify your web app so that it not only signs users in but also now calls web APIs.
-## Libraries that support web-app scenarios
+## Microsoft libraries supporting web apps
-The following libraries in the Microsoft Authentication Library (MSAL) support the authorization code flow for web apps:
+The following Microsoft libraries support web apps:
-| MSAL library | Description |
-|--|-|
-| ![MSAL.NET](media/sample-v2-code/logo_NET.png) <br/> MSAL.NET | Support for .NET Framework and .NET Core platforms. Not supported are Universal Windows Platform (UWP), Xamarin.iOS, and Xamarin.Android, because those platforms are used to build public client applications. <br/><br/>For ASP.NET Core web apps and web APIs, MSAL.NET is encapsulated in a higher-level library named [Microsoft.Identity.Web](https://aka.ms/ms-identity-web). |
-| ![MSAL Python](media/sample-v2-code/logo_python.png) <br/> MSAL for Python | Support for Python web applications. |
-| ![MSAL Java](media/sample-v2-code/logo_java.png) <br/> MSAL for Java | Support for Java web applications. |
Select the tab for the platform you're interested in:
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
Learn how to configure the code for your web app that signs in users.
-## Libraries for protecting web apps
+## Microsoft libraries supporting web apps
<!-- This section can be in an include for web app and web APIs -->
-The libraries that are used to protect a web app (and a web API) are:
+The following Microsoft libraries are used to protect a web app (and a web API):
-| Platform | Library | Description |
-|-||-|
-| ![.NET](media/sample-v2-code/logo_NET.png) | [Identity Model Extensions for .NET](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/wiki) | Used directly by ASP.NET and ASP.NET Core, Microsoft Identity Model Extensions for .NET proposes a set of DLLs running on both .NET Framework and .NET Core. From an ASP.NET or ASP.NET Core web app, you can control token validation by using the **TokenValidationParameters** class (in particular, in some partner scenarios). In practice the complexity is encapsulated in the [Microsoft.Identity.Web](https://aka.ms/ms-identity-web) library |
-| ![Java](media/sample-v2-code/small_logo_java.png) | [MSAL Java](https://github.com/AzureAD/microsoft-authentication-library-for-java/wiki) | Support for Java web applications |
-| ![Python](media/sample-v2-code/small_logo_python.png) | [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python/wiki) | Support for Python web applications |
Select the tab that corresponds to the platform you're interested in:
active-directory Scenario Web App Sign User Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-sign-user-overview.md
If you develop with Python, try the following quickstart:
You add authentication to your web app so that it can sign in users. Adding authentication enables your web app to access limited profile information in order to customize the experience for users.
-Web apps authenticate a user in a web browser. In this scenario, the web app directs the user's browser to sign them in to Azure Active Directory (Azure AD). Azure AD returns a sign-in response through the user's browser, which contains claims about the user in a security token. Signing in users takes advantage of the [Open ID Connect](./v2-protocols-oidc.md) standard protocol, simplified by the use of middleware [libraries](scenario-web-app-sign-user-app-configuration.md#libraries-for-protecting-web-apps).
+Web apps authenticate a user in a web browser. In this scenario, the web app directs the user's browser to sign them in to Azure Active Directory (Azure AD). Azure AD returns a sign-in response through the user's browser, which contains claims about the user in a security token. Signing in users takes advantage of the [OpenID Connect](./v2-protocols-oidc.md) standard protocol, simplified by the use of middleware [libraries](scenario-web-app-sign-user-app-configuration.md#microsoft-libraries-supporting-web-apps).
![Web app signs in users](./media/scenario-webapp/scenario-webapp-signs-in-users.svg)
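The OpenID Connect sign-in described above starts with a redirect to the Azure AD authorize endpoint, which the middleware libraries construct for you. A rough sketch of that request URL, using placeholder registration values:

```python
from urllib.parse import urlencode

# Placeholder registration values; middleware builds this redirect for you.
params = {
    "client_id": "11111111-1111-1111-1111-111111111111",
    "response_type": "code",
    "redirect_uri": "https://localhost:5001/signin-oidc",
    "response_mode": "form_post",
    "scope": "openid profile",
    "state": "abc123",  # protects the round trip against CSRF
}
authorize_url = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
    + urlencode(params)
)
print(authorize_url)
```

Azure AD answers this request by posting a response containing the user's claims back to the `redirect_uri`, which the middleware validates before establishing the session.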
active-directory Self Service Sign Up Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md
To create a certificate, you can use [Azure Key Vault](../../key-vault/certifica
For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and validate the certificate from your API endpoint.
-It's recommended you set reminder alerts for when your certificate will expire. To upload a new certificate to an existing API connector, select the API connector under **All API connectors** and click on **Upload new connector**. The most recently uploaded certificate which is not expired and is past the start date will be used automatically by Azure Active Directory.
+It's recommended you set reminder alerts for when your certificate will expire. To upload a new certificate to an existing API connector, select the API connector under **All API connectors** and click on **Upload new certificate**. The most recently uploaded certificate which is not expired and is past the start date will be used automatically by Azure Active Directory.
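The selection rule described above (the most recently uploaded certificate that is currently within its validity window) can be modeled as follows; the data is illustrative only, not an API:

```python
from datetime import datetime

# Illustrative certificate records with upload time and validity window.
certificates = [
    {"uploaded": datetime(2021, 1, 10), "not_before": datetime(2021, 1, 1), "not_after": datetime(2022, 1, 1)},
    {"uploaded": datetime(2021, 3, 5),  "not_before": datetime(2021, 3, 1), "not_after": datetime(2023, 3, 1)},
    {"uploaded": datetime(2021, 3, 8),  "not_before": datetime(2021, 7, 1), "not_after": datetime(2023, 7, 1)},  # not yet valid
]
now = datetime(2021, 6, 1)

# Keep only certificates that are past their start date and not expired,
# then pick the most recently uploaded one.
eligible = [c for c in certificates if c["not_before"] <= now <= c["not_after"]]
active = max(eligible, key=lambda c: c["uploaded"])
print(active["uploaded"])
```

Note how the newest upload loses here because its start date is still in the future, which is why rotating certificates ahead of expiry is safe.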
### API Key
-Some services use an "API key" mechanism to make it harder to access your HTTP endpoints during development. For [Azure Functions](../../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>).
+Some services use an "API key" mechanism to obfuscate access to your HTTP endpoints during development. For [Azure Functions](../../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>).
This is not a mechanism that should be used alone in production. Therefore, configuration for basic or certificate authentication is always required. If you do not wish to implement any authentication method (not recommended) for development purposes, you can choose basic authentication and use temporary values for `username` and `password` that your API can disregard while you implement the authorization in your API.
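To make this concrete, the sketch below builds an endpoint URL with a hypothetical function key and shows the kind of HTTP basic authentication check your API could perform alongside it; all credential values are placeholders:

```python
import base64
from urllib.parse import urlencode, urlsplit, parse_qs

# Hypothetical function key appended as the `code` query parameter.
endpoint = "https://contoso.azurewebsites.net/api/endpoint?" + urlencode({"code": "0123456789"})

def check_basic_auth(authorization_header: str, username: str, password: str) -> bool:
    """Verify an HTTP Basic Authorization header against expected credentials."""
    expected = base64.b64encode(f"{username}:{password}".encode()).decode()
    return authorization_header == f"Basic {expected}"

# Simulated inbound header with placeholder credentials.
header = "Basic " + base64.b64encode(b"apiuser:s3cret").decode()
print(parse_qs(urlsplit(endpoint).query)["code"][0])
print(check_basic_auth(header, "apiuser", "s3cret"))
```

Because the `code` value travels in the URL, treat it only as a development convenience; the basic or certificate credential is what actually gates production traffic.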
Content-type: application/json
{ "email": "johnsmith@fabrikam.onmicrosoft.com",
- "identities": [ //Sent for Google and Facebook identity providers
+ "identities": [ // Sent for Google, Facebook, and Email One Time Passcode identity providers
{ "signInType":"federated", "issuer":"facebook.com",
Content-type: application/json
{ "email": "johnsmith@fabrikam.onmicrosoft.com",
- "identities": [ //Sent for Google and Facebook identity providers
+ "identities": [ // Sent for Google, Facebook, and Email One Time Passcode identity providers
{ "signInType":"federated", "issuer":"facebook.com",
Content-type: application/json
{ "email": "johnsmith@fabrikam.onmicrosoft.com",
- "identities": [ //Sent for Google and Facebook identity providers
+ "identities": [ // Sent for Google, Facebook, and Email One Time Passcode identity providers
{ "signInType":"federated", "issuer":"facebook.com",
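If your API connector endpoint needs to branch on the identity provider, it can read the `identities` array from the request body. A minimal parsing sketch using the example payload above:

```python
import json

# Payload shaped like the request body shown above (values from the example).
payload = json.loads("""
{
  "email": "johnsmith@fabrikam.onmicrosoft.com",
  "identities": [
    { "signInType": "federated", "issuer": "facebook.com" }
  ]
}
""")

# Collect the issuer of each identity; absent for local-account sign-ups,
# hence the .get() with a default.
issuers = [identity["issuer"] for identity in payload.get("identities", [])]
print(issuers)
```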
active-directory Self Service Sign Up Add Approvals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-approvals.md
Content-type: application/json
{ "email": "johnsmith@fabrikam.onmicrosoft.com",
- "identities": [ //Sent for Google and Facebook identity providers
+ "identities": [ // Sent for Google, Facebook, and Email One Time Passcode identity providers
{ "signInType":"federated", "issuer":"facebook.com",
Content-type: application/json
{ "email": "johnsmith@fabrikam.onmicrosoft.com",
- "identities": [ //Sent for Google and Facebook identity providers
+ "identities": [ // Sent for Google, Facebook, and Email One Time Passcode identity providers
{ "signInType":"federated", "issuer":"facebook.com",
active-directory Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/customize-branding.md
Your custom branding won't immediately appear when your users go to sites such a
- **Banner logo.** Select a .png or .jpg version of your logo to appear on the sign-in page after the user enters a username and on the **My Apps** portal page.
- The image can't be taller than 60 pixels or wider than 280 pixels. We recommend using a transparent image since the background might not match your logo background. We also recommend not adding padding around the image or it might make your logo look small.
+ The image can't be taller than 60 pixels or wider than 280 pixels, and the file shouldn't be larger than 10 KB. We recommend using a transparent image since the background might not match your logo background. We also recommend not adding padding around the image or it might make your logo look small.
- **Username hint.** Type the hint text that appears to users if they forget their username. This text must be Unicode, without links or code, and can't exceed 64 characters. If guests sign in to your app, we suggest not adding this hint.
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-create.md
You can also create an access package using Microsoft Graph. A user in an appro
- [Share link to request an access package](entitlement-management-access-package-settings.md) - [Change resource roles for an access package](entitlement-management-access-package-resources.md) - [Directly assign a user to the access package](entitlement-management-access-package-assignments.md)
+- [Create an access review for an access package](entitlement-management-access-reviews-create.md)
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-add-on-premises-application.md
Previously updated : 02/09/2021 Last updated : 02/17/2021
For high availability in your production environment, we recommend having more t
> The key can be set via PowerShell with the following command.
> ```
> Set-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp\' -Name EnableDefaultHTTP2 -Value 0
->
+> ```
#### Recommendations for the connector server
-1. Physically locate the connector server close to the application servers to optimize performance between the connector and the application. For more information, see [Network topology considerations](application-proxy-network-topology.md).
+1. Physically locate the connector server close to the application servers to optimize performance between the connector and the application. For more information, see [Optimize traffic flow with Azure Active Directory Application Proxy](application-proxy-network-topology.md).
1. The connector server and the web applications servers should belong to the same Active Directory domain or span trusting domains. Having the servers in the same domain or trusting domains is a requirement for using single sign-on (SSO) with Integrated Windows Authentication (IWA) and Kerberos Constrained Delegation (KCD). If the connector server and web application servers are in different Active Directory domains, you need to use resource-based delegation for single sign-on. For more information, see [KCD for single sign-on with Application Proxy](application-proxy-configure-single-sign-on-with-kcd.md).

> [!WARNING]
Start by enabling communication to Azure data centers to prepare your environmen
Open the following ports to **outbound** traffic.
- | Port number | How it's used |
- | | |
- | 80 | Downloading certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
- | 443 | All outbound communication with the Application Proxy service |
+| Port number | How it's used |
+| -- | |
+| 80 | Downloading certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
+| 443 | All outbound communication with the Application Proxy service |
If your firewall enforces traffic according to originating users, also open ports 80 and 443 for traffic from Windows services that run as a Network Service.
If your firewall enforces traffic according to originating users, also open port
Allow access to the following URLs:

| URL | Port | How it's used |
-| | | |
-| &ast;.msappproxy.net<br>&ast;.servicebus.windows.net | 443/HTTPS | Communication between the connector and the Application Proxy cloud service |
-| crl3.digicert.com<br>crl4.digicert.com<br>ocsp.digicert.com<br>crl.microsoft.com<br>oneocsp.microsoft.com<br>ocsp.msocsp.com<br> | 80/HTTP |The connector uses these URLs to verify certificates. |
-| login.windows.net<br>secure.aadcdn.microsoftonline-p.com<br>&ast;.microsoftonline.com<br>&ast;.microsoftonline-p.com<br>&ast;.msauth.net<br>&ast;.msauthimages.net<br>&ast;.msecnd.net<br>&ast;.msftauth.net<br>&ast;.msftauthimages.net<br>&ast;.phonefactor.net<br>enterpriseregistration.windows.net<br>management.azure.com<br>policykeyservice.dc.ad.msft.net<br>ctldl.windowsupdate.com<br>www.microsoft.com/pkiops | 443/HTTPS |The connector uses these URLs during the registration process. |
-| ctldl.windowsupdate.com | 80/HTTP |The connector uses this URL during the registration process. |
+| | | |
+| &ast;.msappproxy.net<br>&ast;.servicebus.windows.net | 443/HTTPS | Communication between the connector and the Application Proxy cloud service |
+| crl3.digicert.com<br>crl4.digicert.com<br>ocsp.digicert.com<br>crl.microsoft.com<br>oneocsp.microsoft.com<br>ocsp.msocsp.com<br> | 80/HTTP | The connector uses these URLs to verify certificates. |
+| login.windows.net<br>secure.aadcdn.microsoftonline-p.com<br>&ast;.microsoftonline.com<br>&ast;.microsoftonline-p.com<br>&ast;.msauth.net<br>&ast;.msauthimages.net<br>&ast;.msecnd.net<br>&ast;.msftauth.net<br>&ast;.msftauthimages.net<br>&ast;.phonefactor.net<br>enterpriseregistration.windows.net<br>management.azure.com<br>policykeyservice.dc.ad.msft.net<br>ctldl.windowsupdate.com<br>www.microsoft.com/pkiops | 443/HTTPS | The connector uses these URLs during the registration process. |
+| ctldl.windowsupdate.com | 80/HTTP | The connector uses this URL during the registration process. |
You can allow connections to &ast;.msappproxy.net, &ast;.servicebus.windows.net, and other URLs above if your firewall or proxy lets you configure access rules based on domain suffixes. If not, you need to allow access to the [Azure IP ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). The IP ranges are updated each week.
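If your firewall or proxy supports rules based on domain suffixes, the allow rule for these endpoints amounts to a suffix match. A simple illustration; the suffix list is taken from the table above and is not exhaustive:

```python
# Suffixes from the connectivity table above (partial list for illustration).
ALLOWED_SUFFIXES = (".msappproxy.net", ".servicebus.windows.net")

def is_allowed(host: str) -> bool:
    # Match any host under an allowed suffix, e.g. contoso.msappproxy.net.
    return host.endswith(ALLOWED_SUFFIXES)

print(is_allowed("contoso.msappproxy.net"))
print(is_allowed("evil.example.com"))
```

Firewalls without suffix rules must instead allow the published Azure IP ranges, which is why the document points to the weekly-updated service tags download.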
To install the connector:
1. Read the Terms of Service. When you're ready, select **Accept terms & Download**.
1. At the bottom of the window, select **Run** to install the connector. An install wizard opens.
1. Follow the instructions in the wizard to install the service. When you're prompted to register the connector with the Application Proxy for your Azure AD tenant, provide your application administrator credentials.
+
- For Internet Explorer (IE), if **IE Enhanced Security Configuration** is set to **On**, you may not see the registration screen. To get access, follow the instructions in the error message. Make sure that **Internet Explorer Enhanced Security Configuration** is set to **Off**.

### General remarks
If you've previously installed a connector, reinstall to get the latest version.
If you choose to have more than one Windows server for your on-premises applications, you'll need to install and register the connector on each server. You can organize the connectors into connector groups. For more information, see [Connector groups](application-proxy-connector-groups.md).
+If you have installed connectors in different regions, you can optimize traffic by selecting the closest Application Proxy cloud service region to use with each connector group. For more information, see [Optimize traffic flow with Azure Active Directory Application Proxy](application-proxy-network-topology.md).
+ If your organization uses proxy servers to connect to the internet, you need to configure them for Application Proxy. For more information, see [Work with existing on-premises proxy servers](application-proxy-configure-connectors-with-proxy-servers.md). For information about connectors, capacity planning, and how they stay up-to-date, see [Understand Azure AD Application Proxy connectors](application-proxy-connectors.md).
Now that you've prepared your environment and installed a connector, you're read
4. Select the **Add an on-premises application** button, which appears about halfway down the page in the **On-premises applications** section. Alternatively, you can select **Create your own application** at the top of the page and then select **Configure Application Proxy for secure remote access to an on-premise application**.
5. In the **Add your own on-premises application** section, provide the following information about your application:
- | Field | Description |
- | :- | :- |
+ | Field | Description |
+ | : | :-- |
| **Name** | The name of the application that will appear on My Apps and in the Azure portal. |
| **Internal URL** | The URL for accessing the application from inside your private network. You can provide a specific path on the backend server to publish, while the rest of the server is unpublished. In this way, you can publish different sites on the same server as different apps, and give each one its own name and access rules.<br><br>If you publish a path, make sure that it includes all the necessary images, scripts, and style sheets for your application. For example, if your app is at https:\//yourapp/app and uses images located at https:\//yourapp/media, then you should publish https:\//yourapp/ as the path. This internal URL doesn't have to be the landing page your users see. For more information, see [Set a custom home page for published apps](application-proxy-configure-custom-home-page.md). |
- | **External URL** | The address for users to access the app from outside your network. If you don't want to use the default Application Proxy domain, read about [custom domains in Azure AD Application Proxy](application-proxy-configure-custom-domain.md).|
+ | **External URL** | The address for users to access the app from outside your network. If you don't want to use the default Application Proxy domain, read about [custom domains in Azure AD Application Proxy](application-proxy-configure-custom-domain.md). |
| **Pre Authentication** | How Application Proxy verifies users before giving them access to your application.<br><br>**Azure Active Directory** - Application Proxy redirects users to sign in with Azure AD, which authenticates their permissions for the directory and application. We recommend keeping this option as the default so that you can take advantage of Azure AD security features like Conditional Access and Multi-Factor Authentication. **Azure Active Directory** is required for monitoring the application with Microsoft Cloud Application Security.<br><br>**Passthrough** - Users don't have to authenticate against Azure AD to access the application. You can still set up authentication requirements on the backend. |
- | **Connector Group** | Connectors process the remote access to your application, and connector groups help you organize connectors and apps by region, network, or purpose. If you don't have any connector groups created yet, your app is assigned to **Default**.<br><br>If your application uses WebSockets to connect, all connectors in the group must be version 1.5.612.0 or later.|
+ | **Connector Group** | Connectors process the remote access to your application, and connector groups help you organize connectors and apps by region, network, or purpose. If you don't have any connector groups created yet, your app is assigned to **Default**.<br><br>If your application uses WebSockets to connect, all connectors in the group must be version 1.5.612.0 or later. |
6. If necessary, configure **Additional settings**. For most applications, you should keep these settings in their default states. | Field | Description |
- | :- | :- |
+ | :-- | :-- |
| **Backend Application Timeout** | Set this value to **Long** only if your application is slow to authenticate and connect. At default, the backend application timeout has a length of 85 seconds. When set to long, the backend timeout is increased to 180 seconds. |
- | **Use HTTP-Only Cookie** | Set this value to **Yes** to have Application Proxy cookies include the HTTPOnly flag in the HTTP response header. If using Remote Desktop Services, set this value to **No**.|
+ | **Use HTTP-Only Cookie** | Set this value to **Yes** to have Application Proxy cookies include the HTTPOnly flag in the HTTP response header. If using Remote Desktop Services, set this value to **No**. |
| **Use Secure Cookie**| Set this value to **Yes** to transmit cookies over a secure channel such as an encrypted HTTPS request. | **Use Persistent Cookie**| Keep this value set to **No**. Only use this setting for applications that can't share cookies between processes. For more information about cookie settings, see [Cookie settings for accessing on-premises applications in Azure Active Directory](./application-proxy-configure-cookie-settings.md). | **Translate URLs in Headers** | Keep this value as **Yes** unless your application requires the original host header in the authentication request. |
active-directory Application Proxy Configure Single Sign On With Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-configure-single-sign-on-with-headers.md
Previously updated : 10/05/2020 Last updated : 02/22/2021
active-directory Application Proxy Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-network-topology.md
Title: Network topology considerations for Azure AD Application Proxy
-description: Covers network topology considerations when using Azure AD Application Proxy.
+ Title: Network topology considerations for Azure Active Directory Application Proxy
+description: Covers network topology considerations when using Azure Active Directory Application Proxy.
Previously updated : 07/22/2019 Last updated : 02/22/2021 ---+
-# Network topology considerations when using Azure Active Directory Application Proxy
+# Optimize traffic flow with Azure Active Directory Application Proxy
-This article explains network topology considerations when using Azure Active Directory (Azure AD) Application Proxy for publishing and accessing your applications remotely.
+This article explains how to optimize traffic flow and network topology considerations when using Azure Active Directory (Azure AD) Application Proxy for publishing and accessing your applications remotely.
## Traffic flow
When an application is published through Azure AD Application Proxy, traffic fro
1. The Application Proxy service connects to the Application Proxy connector 1. The Application Proxy connector connects to the target application
-![Diagram showing traffic flow from user to target application](./media/application-proxy-network-topology/application-proxy-three-hops.png)
-## Tenant location and Application Proxy service
+## Optimize connector groups to use closest Application Proxy cloud service (Preview)
-When you sign up for an Azure AD tenant, the region of your tenant is determined by the country/region you specify. When you enable Application Proxy, the Application Proxy service instances for your tenant are chosen or created in the same region as your Azure AD tenant, or the closest region to it.
+When you sign up for an Azure AD tenant, the region of your tenant is determined by the country/region you specify. When you enable Application Proxy, the **default** Application Proxy cloud service instances for your tenant are chosen in the same region as your Azure AD tenant, or the closest region to it.
-For example, if your Azure AD tenant's country or region is the United Kingdom, all your Application Proxy connectors use service instances in European data centers. When your users access published applications, their traffic goes through the Application Proxy service instances in this location.
+For example, if your Azure AD tenant's country or region is the United Kingdom, all your Application Proxy connectors will by default be assigned to use service instances in European data centers. When your users access published applications, their traffic goes through the Application Proxy cloud service instances in this location.
+
+If you have connectors installed in regions different from your default region, it may be beneficial to change the region your connector group is optimized for to improve performance when accessing these applications. Once a region is specified for a connector group, it connects to Application Proxy cloud services in the designated region.
+
+To optimize traffic flow and reduce latency, assign the connector group to the region closest to your connectors. To assign a region:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as an application administrator of the directory that uses Application Proxy. For example, if the tenant domain is contoso.com, the admin should be admin@contoso.com or any other admin alias on that domain.
+1. Select your username in the upper-right corner. Verify you're signed in to a directory that uses Application Proxy. If you need to change directories, select **Switch directory** and choose a directory that uses Application Proxy.
+1. In the left navigation panel, select **Azure Active Directory**.
+1. Under **Manage**, select **Application proxy**.
+1. Select **New Connector Group**, provide a **Name** for the connector group.
+1. Under **Advanced Settings**, select the drop-down under **Optimize for a specific region** and choose the region closest to the connectors.
+1. Select **Create**.
+
+ :::image type="content" source="./media/application-proxy-network-topology/geo-routing.png" alt-text="Configure a new connector group." lightbox="./media/application-proxy-network-topology/geo-routing.png":::
+
+1. Once the new connector group is created, you can select which connectors to assign to this connector group.
+ - You can only move connectors to your connector group if they are currently in a connector group that uses the default region. The best approach is to always start with your connectors in the "Default" group and then move them to the appropriate connector group.
+ - You can only change the region of a connector group if there are **no** connectors or apps assigned to it.
+1. Next, assign the connector group to your applications. When accessing the apps, traffic should now go to the Application Proxy cloud service in the region the connector group is optimized for.
## Considerations for reducing latency
If you have a dedicated VPN or ExpressRoute set up with private peering between
Latency is not compromised because traffic is flowing over a dedicated connection. You also get improved Application Proxy service-to-connector latency because the connector is installed in an Azure datacenter close to your Azure AD tenant location.
-![Diagram showing connector installed within an Azure datacenter](./media/application-proxy-network-topology/application-proxy-expressroute-private.png)
### Other approaches
For these scenarios, we call each connection a "hop" and number them for easier
This is a simple pattern. You optimize hop 3 by placing the connector near the app. This is also a natural choice, because the connector typically is installed with line of sight to the app and to the datacenter to perform KCD operations.
-![Diagram that shows users, proxy, connector, and app are all in the US](./media/application-proxy-network-topology/application-proxy-pattern1.png)
### Use case 2
This is a simple pattern. You optimize hop 3 by placing the connector near the a
Again, the common pattern is to optimize hop 3, where you place the connector near the app. Hop 3 is not typically expensive, if it is all within the same region. However, hop 1 can be more expensive depending on where the user is, because users across the world must access the Application Proxy instance in the US. It's worth noting that any proxy solution has similar characteristics regarding users being spread out globally.
-![Users are spread globally, but everything else is in the US](./media/application-proxy-network-topology/application-proxy-pattern2.png)
### Use case 3
First, place the connector as close as possible to the app. Then, the system aut
If the ExpressRoute link is using Microsoft peering, the traffic between the proxy and the connector flows over that link. Hop 2 has optimized latency.
-![Diagram showing ExpressRoute between the proxy and connector](./media/application-proxy-network-topology/application-proxy-pattern3.png)
### Use case 4
Place the connector in the Azure datacenter that is connected to the corporate n
The connector can be placed in the Azure datacenter. Since the connector still has a line of sight to the application and the datacenter through the private network, hop 3 remains optimized. In addition, hop 2 is optimized further.
-![Connector in Azure datacenter, ExpressRoute between connector and app](./media/application-proxy-network-topology/application-proxy-pattern4.png)
### Use case 5
-**Scenario:** The app is in an organization's network in Europe, with the Application Proxy instance and most users in the US.
+**Scenario:** The app is in an organization's network in Europe, the default tenant region is the US, and most users are in Europe.
+
+**Recommendation:** Place the connector near the app. Update the connector group so it is optimized to use Europe Application Proxy service instances. For steps, see [Optimize connector groups to use closest Application Proxy cloud service](application-proxy-network-topology.md#optimize-connector-groups-to-use-closest-application-proxy-cloud-service-preview).
+
+Because European users access an Application Proxy instance in the same region, hop 1 is not expensive. Hop 3 is optimized. Consider using ExpressRoute to optimize hop 2.
+
+### Use case 6
-**Recommendation:** Place the connector near the app. Because US users are accessing an Application Proxy instance that happens to be in the same region, hop 1 is not too expensive. Hop 3 is optimized. Consider using ExpressRoute to optimize hop 2.
+**Scenario:** The app is in an organization's network in Europe, the default tenant region is the US, and most users are in the US.
-![Diagram shows users and proxy in the US, connector and app in Europe](./media/application-proxy-network-topology/application-proxy-pattern5b.png)
+**Recommendation:** Place the connector near the app. Update the connector group so it is optimized to use Europe Application Proxy service instances. For steps, see [Optimize connector groups to use closest Application Proxy cloud service](application-proxy-network-topology.md#optimize-connector-groups-to-use-closest-application-proxy-cloud-service-preview). Hop 1 can be more expensive because all US users must access the Application Proxy instance in Europe.
-You can also consider using one other variant in this situation. If most users in the organization are in the US, then chances are that your network extends to the US as well. Place the connector in the US, and use the dedicated internal corporate network line to the application in Europe. This way hops 2 and 3 are optimized.
+You can also consider using one other variant in this situation. If most users in the organization are in the US, then chances are that your network extends to the US as well. Place the connector in the US, continue to use the default US region for your connector groups, and use the dedicated internal corporate network line to the application in Europe. This way hops 2 and 3 are optimized.
-![Diagram shows users, proxy, and connector in the US, app in Europe](./media/application-proxy-network-topology/application-proxy-pattern5c.png)
## Next steps - [Enable Application Proxy](application-proxy-add-on-premises-application.md) - [Enable single-sign on](application-proxy-configure-single-sign-on-with-kcd.md) - [Enable Conditional Access](application-proxy-integrate-with-sharepoint-server.md)-- [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
+- [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
active-directory Application Proxy Release Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-release-version-history.md
Title: 'Azure AD Application Proxy: Version release history'
-description: This article lists all releases of Azure AD Application Proxy and describes new features and fixed issues
+ Title: 'Azure Active Directory Application Proxy: Version release history'
+description: This article lists all releases of Azure Active Directory Application Proxy and describes new features and fixed issues.
ms.assetid:
Previously updated : 07/22/2020 Last updated : 02/17/2021 + # Azure AD Application Proxy: Version release history
We recommend making sure that auto-updates are enabled for your connectors to en
Here is a list of related resources:
-Resource | Details
- | |
-How to enable Application Proxy | Pre-requisites for enabling Application Proxy and installing and registering a connector are described in this [tutorial](application-proxy-add-on-premises-application.md).
-Understand Azure AD Application Proxy connectors | Find out more about [connector management](application-proxy-connectors.md) and how connectors [auto-upgrade](application-proxy-connectors.md#automatic-updates).
-Azure AD Application Proxy Connector Download | [Download the latest connector](https://download.msappproxy.net/subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/connector/download).
+| Resource | Details |
+| | |
+| How to enable Application Proxy | Prerequisites for enabling Application Proxy and installing and registering a connector are described in this [tutorial](application-proxy-add-on-premises-application.md). |
+| Understand Azure AD Application Proxy connectors | Find out more about [connector management](application-proxy-connectors.md) and how connectors [auto-upgrade](application-proxy-connectors.md#automatic-updates). |
+| Azure AD Application Proxy Connector Download | [Download the latest connector](https://download.msappproxy.net/subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/connector/download). |
## 1.5.1975.0
This version is only available for install via the download page. An auto-upgrad
- Improved support for Azure Government cloud environments. For steps on how to properly install the connector for Azure Government cloud review the [pre-requisites](../hybrid/reference-connect-government-cloud.md#allow-access-to-urls) and [installation steps](../hybrid/reference-connect-government-cloud.md#install-the-agent-for-the-azure-government-cloud). - Support for using the Remote Desktop Services web client with Application Proxy. See [Publish Remote Desktop with Azure AD Application Proxy](application-proxy-integrate-with-remote-desktop-services.md) for more details. - Improved websocket extension negotiations.
+- Support for optimized routing between connector groups and Application Proxy cloud services based on region. See [Optimize traffic flow with Azure Active Directory Application Proxy](application-proxy-network-topology.md) for more details.
### Fixed issues - Fixed a websocket issue that forced lowercase strings.
active-directory Powershell Export Apps With Expriring Secrets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-expriring-secrets.md
# Export apps with expiring secrets and certificates
-This PowerShell script example exports all apps with expiring secrets and certificates for the specified apps from your directory in a CSV file.
+This PowerShell script example exports all app registrations with expiring secrets and certificates, along with their owners, for the specified apps in your directory to a CSV file.
[!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/az
## Script explanation
+The script can be used directly without any modifications. The admin is asked for the expiration date and whether to include already expired secrets or certificates.
+ The "Add-Member" command is responsible for creating the columns in the CSV file.
+The "New-Object" command creates an object to be used for the columns in the CSV file export.
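To make the export format concrete, here is a minimal Python sketch of the same idea (the column names are hypothetical, not the script's exact output): each credential becomes one CSV row, and the header row plays the role the `Add-Member` columns play in the script.

```python
import csv
import io
from datetime import date

# Hypothetical rows mirroring the kind of columns the script exports
# (app name, credential type, expiration date, owner).
rows = [
    {"ApplicationName": "ContosoApi", "KeyType": "Secret",
     "EndDate": date(2021, 4, 1).isoformat(), "Owner": "admin@contoso.com"},
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["ApplicationName", "KeyType", "EndDate", "Owner"])
writer.writeheader()    # defines the columns, comparable to Add-Member
writer.writerows(rows)  # one line per expiring credential
csv_text = out.getvalue()
print(csv_text)
```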
If you'd prefer the export to be non-interactive, you can set the "$Path" variable directly in PowerShell to a CSV file path. | Command | Notes |
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
This PowerShell script example exports all app registrations secrets and certifi
[!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-This sample requires the [AzureAD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [AzureAD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
- ## Sample script [!code-azurepowershell[main](~/powershell_scripts/application-management/export-apps-with-secrets-beyond-required.ps1 "Exports all apps with secrets and certificates expiring beyond the required date for the specified apps in your directory.")]
The "Add-Member" command is responsible for creating the columns in the CSV file
| Command | Notes | |||
-| [Invoke-WebRequest](/powershell/module/azuread/Invoke-WebRequest?view=azureadps-2.0&preserve-view=true) | Sends HTTP and HTTPS requests to a web page or web service. It parses the response and returns collections of links, images, and other significant HTML elements. |
+| [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest?view=powershell-7.1) | Sends HTTP and HTTPS requests to a web page or web service. It parses the response and returns collections of links, images, and other significant HTML elements. |
## Next steps
active-directory Admin Units Add Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-add-manage-groups.md
Previously updated : 11/04/2020 Last updated : 03/10/2021
In the following example, use the `Add-AzureADMSAdministrativeUnitMember` cmdlet
```powershell
-$administrative unitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
+$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
$GroupObj = Get-AzureADGroup -Filter "displayname eq 'TestGroup'"
-Add-AzureADMSAdministrativeUnitMember -ObjectId $administrative unitObj.ObjectId -RefObjectId $GroupObj.ObjectId
+Add-AzureADMSAdministrativeUnitMember -ObjectId $adminUnitObj.ObjectId -RefObjectId $GroupObj.ObjectId
``` ### Use Microsoft Graph Run the following commands:
+Request
+ ```http
-Http request
-POST /administrativeUnits/{Admin Unit id}/members/$ref
+POST /administrativeUnits/{admin-unit-id}/members/$ref
+```
+
+Body
-Request body
+```http
{
-"@odata.id":"https://graph.microsoft.com/v1.0/groups/{id}"
+"@odata.id":"https://graph.microsoft.com/v1.0/groups/{group-id}"
} ```
-Example:
+Example
```http {
-"@odata.id":"https://graph.microsoft.com/v1.0/groups/ 871d21ab-6b4e-4d56-b257-ba27827628f3"
+"@odata.id":"https://graph.microsoft.com/v1.0/groups/871d21ab-6b4e-4d56-b257-ba27827628f3"
} ```
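The request above can also be composed programmatically. The following Python sketch (the helper name and admin-unit ID are illustrative, and nothing is sent without an access token) assembles the same URL and body:

```python
import json

GRAPH = "https://graph.microsoft.com/v1.0"

def add_group_request(admin_unit_id: str, group_id: str):
    """Build the Microsoft Graph call that adds a group to an
    administrative unit; send it as a POST with a Bearer token."""
    url = f"{GRAPH}/administrativeUnits/{admin_unit_id}/members/$ref"
    body = json.dumps({"@odata.id": f"{GRAPH}/groups/{group_id}"})
    return url, body

url, body = add_group_request("admin-unit-id", "871d21ab-6b4e-4d56-b257-ba27827628f3")
```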
Example:
To display a list of all the members of the administrative unit, run the following command: ```powershell
-$administrative unitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
-Get-AzureADMSAdministrativeUnitMember -ObjectId $administrative unitObj.ObjectId
+$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
+Get-AzureADMSAdministrativeUnitMember -ObjectId $adminUnitObj.ObjectId
``` To display all the groups that are members of the administrative unit, use the following code snippet:
-```http
-foreach ($member in (Get-AzureADMSAdministrativeUnitMember -ObjectId $administrative unitObj.ObjectId))
+```powershell
+foreach ($member in (Get-AzureADMSAdministrativeUnitMember -ObjectId $adminUnitObj.ObjectId))
{ if($member.ObjectType -eq "Group") {
Get-AzureADGroup -ObjectId $member.ObjectId
Run the following command:
+Request
+
+```http
+GET /directory/administrativeUnits/{admin-unit-id}/members/$/microsoft.graph.group
+```
+
+Body
+ ```http
-HTTP request
-GET /directory/administrativeUnits/{Admin id}/members/$/microsoft.graph.group
-Request body
{} ```
Get-AzureADMSAdministrativeUnit | where { Get-AzureADMSAdministrativeUnitMember
Run the following command: ```http
-https://graph.microsoft.com/v1.0/groups/<group-id>/memberOf/$/Microsoft.Graph.AdministrativeUnit
+https://graph.microsoft.com/v1.0/groups/{group-id}/memberOf/$/Microsoft.Graph.AdministrativeUnit
``` ## Remove a group from an administrative unit
You can remove a group from an administrative unit in the Azure portal in either
Run the following command: ```powershell
-Remove-AzureADMSAdministrativeUnitMember -ObjectId $auId -MemberId $memberGroupObjId
+Remove-AzureADMSAdministrativeUnitMember -ObjectId $adminUnitId -MemberId $memberGroupObjId
``` ### Use Microsoft Graph
Remove-AzureADMSAdministrativeUnitMember -ObjectId $auId -MemberId $memberGroupO
Run the following command: ```http
-https://graph.microsoft.com/v1.0/directory/AdministrativeUnits/<adminunit-id>/members/<group-id>/$ref
+https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/{group-id}/$ref
``` ## Next steps
active-directory Admin Units Add Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-add-manage-users.md
You can assign users to administrative units individually or as a bulk operation
In PowerShell, use the `Add-AzureADAdministrativeUnitMember` cmdlet in the following example to add the user to the administrative unit. The object ID of the administrative unit to which you want to add the user and the object ID of the user you want to add are taken as arguments. Change the highlighted section as required for your specific environment. ```powershell
-$administrativeunitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
-$UserObj = Get-AzureADUser -Filter "UserPrincipalName eq 'billjohn@fabidentity.onmicrosoft.com'"
-Add-AzureADMSAdministrativeUnitMember -Id $administrativeunitObj.ObjectId -RefObjectId $UserObj.ObjectId
+$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'Test administrative unit 2'"
+$userObj = Get-AzureADUser -Filter "UserPrincipalName eq 'bill@example.onmicrosoft.com'"
+Add-AzureADMSAdministrativeUnitMember -Id $adminUnitObj.ObjectId -RefObjectId $userObj.ObjectId
```
Add-AzureADMSAdministrativeUnitMember -Id $administrativeunitObj.ObjectId -RefOb
Replace the placeholder with test information and run the following command:
+Request
+
+```http
+POST /administrativeUnits/{admin-unit-id}/members/$ref
+```
+
+Body
+ ```http
-Http request
-POST /administrativeUnits/{Admin Unit id}/members/$ref
-Request body
{
- "@odata.id":"https://graph.microsoft.com/v1.0/users/{id}"
+ "@odata.id":"https://graph.microsoft.com/v1.0/users/{user-id}"
} ```
-Example:
+Example
```http {
- "@odata.id":"https://graph.microsoft.com/v1.0/users/johndoe@fabidentity.com"
+ "@odata.id":"https://graph.microsoft.com/v1.0/users/john@example.com"
} ```
Run the following command:
```powershell Get-AzureADMSAdministrativeUnit | where { Get-AzureADMSAdministrativeUnitMember -Id $_.ObjectId | where {$_.RefObjectId -eq $userObjId} } ```+ > [!NOTE] > By default, `Get-AzureADAdministrativeUnitMember` returns only 100 members of an administrative unit. To retrieve more members, you can add `"-All $true"`.
Get-AzureADMSAdministrativeUnit | where { Get-AzureADMSAdministrativeUnitMember
Replace the placeholder with test information and run the following command: ```http
-https://graph.microsoft.com/v1.0/users/{id}/memberOf/$/Microsoft.Graph.AdministrativeUnit
+https://graph.microsoft.com/v1.0/users/{user-id}/memberOf/$/Microsoft.Graph.AdministrativeUnit
``` ## Remove a single user from an administrative unit
You can remove a user from an administrative unit in either of two ways:
Run the following command: ```powershell
-Remove-AzureADMSAdministrativeUnitMember -Id $auId -MemberId $memberUserObjId
+Remove-AzureADMSAdministrativeUnitMember -Id $adminUnitId -MemberId $memberUserObjId
``` ### Use Microsoft Graph Replace the placeholders with test information and run the following command:
-`https://graph.microsoft.com/v1.0/directory/administrativeUnits/{adminunit-id}/members/{user-id}/$ref`
+```http
+https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/{user-id}/$ref
+```
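As a rough sketch, the same DELETE URL can be built in Python (placeholder IDs; issuing the request still needs an `Authorization: Bearer <token>` header):

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def remove_user_url(admin_unit_id: str, user_id: str) -> str:
    """URL of the DELETE request that removes a user from an
    administrative unit (send with a Bearer access token)."""
    return f"{GRAPH}/directory/administrativeUnits/{admin_unit_id}/members/{user_id}/$ref"

url = remove_user_url("admin-unit-id", "user-id")
```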
## Remove multiple users as a bulk operation
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-assign-roles.md
You can assign a scoped role by using the Azure portal, PowerShell, or Microsoft
### Use PowerShell ```powershell
-$AdminUser = Get-AzureADUser -ObjectId "Use the user's UPN, who would be an admin on this unit"
-$Role = Get-AzureADDirectoryRole | Where-Object -Property DisplayName -EQ -Value "User Account Administrator"
-$administrativeUnit = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'The display name of the unit'"
-$RoleMember = New-Object -TypeName Microsoft.Open.AzureAD.Model.RoleMemberInfo
-$RoleMember.ObjectId = $AdminUser.ObjectId
-Add-AzureADMSScopedRoleMembership -ObjectId $administrativeUnit.ObjectId -RoleObjectId $Role.ObjectId -RoleMemberInfo $RoleMember
+$adminUser = Get-AzureADUser -ObjectId "Use the user's UPN, who would be an admin on this unit"
+$role = Get-AzureADDirectoryRole | Where-Object -Property DisplayName -EQ -Value "User Account Administrator"
+$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'The display name of the unit'"
+$roleMember = New-Object -TypeName Microsoft.Open.AzureAD.Model.RoleMemberInfo
+$roleMember.ObjectId = $adminUser.ObjectId
+Add-AzureADMSScopedRoleMembership -ObjectId $adminUnitObj.ObjectId -RoleObjectId $role.ObjectId -RoleMemberInfo $roleMember
``` You can change the highlighted section as required for the specific environment. ### Use Microsoft Graph
+Request
+ ```http
-Http request
-POST /directory/administrativeUnits/{id}/scopedRoleMembers
+POST /directory/administrativeUnits/{admin-unit-id}/scopedRoleMembers
+```
-Request body
+Body
+
+```http
{ "roleId": "roleId-value", "roleMemberInfo": {
You can view all the role assignments created with an administrative unit scope
### Use PowerShell ```powershell
-$administrativeUnit = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'The display name of the unit'"
-Get-AzureADMSScopedRoleMembership -ObjectId $administrativeUnit.ObjectId | fl *
+$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'The display name of the unit'"
+Get-AzureADMSScopedRoleMembership -ObjectId $adminUnitObj.ObjectId | fl *
``` You can change the highlighted section as required for your specific environment. ### Use Microsoft Graph
+Request
+
+```http
+GET /directory/administrativeUnits/{admin-unit-id}/scopedRoleMembers
+```
+
+Body
+ ```http
-Http request
-GET /directory/administrativeUnits/{id}/scopedRoleMembers
-Request body
{} ```
active-directory Admin Units Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-manage.md
You can modify the values that are enclosed in quotation marks, as required.
### Use Microsoft Graph
+Request
+ ```http
-Http Request
POST /administrativeUnits
-Request body
+```
+
+Body
+
+```http
{ "displayName": "North America Operations", "description": "North America Operations administration"
In Azure AD, you can remove an administrative unit that you no longer need as a
### Use PowerShell ```powershell
-$delau = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'DeleteMe Admin Unit'"
-Remove-AzureADMSAdministrativeUnit -ObjectId $delau.ObjectId
+$adminUnitObj = Get-AzureADMSAdministrativeUnit -Filter "displayname eq 'DeleteMe Admin Unit'"
+Remove-AzureADMSAdministrativeUnit -ObjectId $adminUnitObj.ObjectId
``` You can modify the values that are enclosed in quotation marks, as required for the specific environment. ### Use the Graph API
+Request
+
+```http
+DELETE /administrativeUnits/{admin-unit-id}
+```
+
+Body
+ ```http
-HTTP request
-DELETE /administrativeUnits/{Admin id}
-Request body
{} ```
active-directory Custom Assign Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-assign-powershell.md
Connect to your Azure AD organization using a global administrator account to as
Install the Azure AD PowerShell module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureADPreview). Then import the Azure AD PowerShell preview module, using the following command: ``` PowerShell
-Import-Module AzureADPreview
+Import-Module -Name AzureADPreview
``` To verify that the module is ready to use, match the version returned by the following command to the one listed here: ``` PowerShell
-Get-Module AzureADPreview
+Get-Module -Name AzureADPreview
ModuleType Version Name ExportedCommands - - -
- Binary 2.0.0.115 azureadpreview {Add-AzureADMSAdministrati...}
+ Binary 2.0.0.115 AzureADPreview {Add-AzureADMSAdministrati...}
``` Now you can start using the cmdlets in the module. For a full description of the cmdlets in the Azure AD module, see the online reference documentation for [Azure AD preview module](https://www.powershellgallery.com/packages/AzureADPreview).
active-directory Custom Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-create.md
First, you must [download the Azure AD Preview PowerShell module](https://www.po
To install the Azure AD PowerShell module, use the following commands: ``` PowerShell
-install-module azureadpreview
-import-module azureadpreview
+Install-Module -Name AzureADPreview
+Import-Module -Name AzureADPreview
``` To verify that the module is ready to use, use the following command: ``` PowerShell
-get-module azureadpreview
+Get-Module -Name AzureADPreview
ModuleType Version Name ExportedCommands - - -
- Binary 2.0.0.115 azureadpreview {Add-AzureADAdministrati...}
+ Binary 2.0.0.115 AzureADPreview {Add-AzureADAdministrati...}
``` ### Connect to Azure
active-directory Custom Enterprise Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-enterprise-apps.md
For more detail, see [Create and assign a custom role](custom-create.md) and [As
First, install the Azure AD PowerShell module from [the PowerShell Gallery](https://www.powershellgallery.com/packages/AzureADPreview/2.0.0.17). Then import the Azure AD PowerShell preview module, using the following command: ```powershell
-PowerShell
-import-module azureadpreview
+Import-Module -Name AzureADPreview
``` To verify that the module is ready to use, match the version returned by the following command to the one listed here: ```powershell
-PowerShell
-get-module azureadpreview
+Get-Module -Name AzureADPreview
ModuleType Version Name ExportedCommands - - -
- Binary 2.0.0.115 azureadpreview {Add-AzureADAdministrati...}
+ Binary 2.0.0.115 AzureADPreview {Add-AzureADAdministrati...}
``` ### Create a custom role
active-directory Custom View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-view-assignments.md
First, you must [download the Azure AD preview PowerShell module](https://www.po
To install the Azure AD PowerShell module, use the following commands: ``` PowerShell
-install-module azureadpreview
-import-module azureadpreview
+Install-Module -Name AzureADPreview
+Import-Module -Name AzureADPreview
``` To verify that the module is ready to use, use the following command: ``` PowerShell
-get-module azuread
+Get-Module -Name AzureADPreview
ModuleType Version Name ExportedCommands - - -
- Binary 2.0.0.115 azuread {Add-AzureADAdministrati...}
+ Binary 2.0.0.115 AzureADPreview {Add-AzureADAdministrati...}
``` ### View the assignments of a role
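Viewing assignments can be sketched as follows — the role definition Id is a placeholder GUID you would replace with the output of `Get-AzureADMSRoleDefinition`:

```powershell
# Sketch only: placeholder GUID
Connect-AzureAD
Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '00000000-0000-0000-0000-000000000000'"
```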
active-directory Groups Create Eligible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-create-eligible.md
The group is created with any roles you might have assigned to it.
### Install the Azure AD preview module ```powershell
-install-module azureadpreview
-import-module azureadpreview
+Install-Module -Name AzureADPreview
+Import-Module -Name AzureADPreview
``` To verify that the module is ready to use, issue the following command: ```powershell
-get-module azureadpreview
+Get-Module -Name AzureADPreview
``` ### Create a group that can be assigned to role
For this type of group, `isPublic` will always be false and `isSecurityEnabled`
```powershell #Basic set up
-install-module azureadpreview
-import-module azureadpreview
-get-module azureadpreview
+Install-Module -Name AzureADPreview
+Import-Module -Name AzureADPreview
+Get-Module -Name AzureADPreview
#Connect to Azure AD. Sign in as Privileged Role Administrator or Global Administrator. Only these two roles can create a role-assignable group. Connect-AzureAD
Add-AzureADGroupMember -ObjectId $roleAssignablegroup.Id -RefObjectId $member.Ob
### Create a role-assignable group in Azure AD
-```powershell
+```http
POST https://graph.microsoft.com/beta/groups
{
  "description": "This group is assigned to Helpdesk Administrator built-in role of Azure AD.",
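The request body above is cut off in this change summary. A complete request of this shape — with illustrative values for the remaining required group properties — would look like:

```http
POST https://graph.microsoft.com/beta/groups
{
  "description": "This group is assigned to Helpdesk Administrator built-in role of Azure AD.",
  "displayName": "Contoso_Helpdesk_Administrators",
  "isAssignableToRole": true,
  "mailEnabled": false,
  "mailNickname": "contosohelpdeskadmins",
  "securityEnabled": true
}
```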
active-directory Groups Pim Eligible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-pim-eligible.md
This article describes how you can assign an Azure Active Directory (Azure AD) r
To install the Azure AD #PowerShell module, use the following cmdlets: ```powershell
-install-module azureadpreview
-import-module azureadpreview
+Install-Module -Name AzureADPreview
+Import-Module -Name AzureADPreview
``` To verify that the module is ready to use, use the following cmdlet: ```powershell
-get-module azureadpreview
+Get-Module -Name AzureADPreview
``` ### Assign a group as an eligible member of a role
Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId aadRoles -Schedule $sc
## Using Microsoft Graph API
-```powershell
+```http
POST
-https://graph.microsoft.com/beta/privilegedAccess/aadroles/roleAssignmentRequests
-
+https://graph.microsoft.com/beta/privilegedAccess/aadroles/roleAssignmentRequests
{
  "roleDefinitionId": {roleDefinitionId},
  "resourceId": {tenantId},
  "subjectId": {GroupId},
  "assignmentState": "Eligible",
  "type": "AdminAdd",
  "reason": "reason string",
  "schedule": {
    "startDateTime": {DateTime},
    "endDateTime": {DateTime},
    "type": "Once"
  }
}
```
active-directory Groups Remove Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-remove-assignment.md
Remove-AzureAdMSRoleAssignment -Id $roleAssignment.Id
### Create a group that can be assigned an Azure AD role
-```powershell
+```http
POST https://graph.microsoft.com/beta/groups
{
  "description": "This group is assigned to Helpdesk Administrator built-in role of Azure AD",
  "displayName": "Contoso_Helpdesk_Administrators",
POST https://graph.microsoft.com/beta/groups
### Get the role definition
-```powershell
-GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions?$filter = displayName eq ΓÇÿHelpdesk AdministratorΓÇÖ
+```http
+GET https://graph.microsoft.com/beta/roleManagement/directory/roleDefinitions?$filter=displayName+eq+'Helpdesk Administrator'
``` ### Create the role assignment
-```powershell
+```http
POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments {
-"principalId":"<Object Id of Group>",
-"roleDefinitionId":"<Id of role definition>",
+"principalId":"{object-id-of-group}",
+"roleDefinitionId":"{role-definition-id}",
"directoryScopeId":"/" } ``` ### Delete role assignment
-```powershell
-DELETE https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments/<Id of role assignment>
+```http
+DELETE https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments/{role-assignment-id}
``` ## Next steps
active-directory Groups View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-view-assignments.md
This section describes how the roles assigned to a group can be viewed using Azu
### Get object ID of the group ```powershell
-Get-AzureADMSGroup -SearchString ΓÇ£Contoso_Helpdesk_AdministratorsΓÇ¥
+Get-AzureADMSGroup -SearchString "Contoso_Helpdesk_Administrators"
``` ### View role assignment to a group
Get-AzureADMSRoleAssignment -Filter "principalId eq '<object id of group>'"
### Get object ID of the group
-```powershell
-GET https://graph.microsoft.com/beta/groups?$filter displayName eq ΓÇÿContoso_Helpdesk_AdministratorΓÇÖ
+```http
+GET https://graph.microsoft.com/beta/groups?$filter=displayName+eq+'Contoso_Helpdesk_Administrator'
``` ### Get role assignments to a group
-```powershell
+```http
GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=principalId eq ```
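The filter value in the query above is cut off in this change summary. With a placeholder group object Id (illustrative GUID), a complete request would look like:

```http
GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$filter=principalId eq '00000000-0000-0000-0000-000000000000'
```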
active-directory My Staff Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/my-staff-configure.md
Title: Use My Staff to delegate user management (preview) - Azure AD | Microsoft Docs
+ Title: Use My Staff to delegate user management - Azure AD | Microsoft Docs
description: Delegate user management using My Staff and administrative units documentationcenter: ''
Previously updated : 05/08/2020 Last updated : 03/11/2021
-# Manage your users with My Staff (preview)
+# Manage your users with My Staff
-My Staff enables you to delegate to a figure of authority, such as a store manager or a team lead, the permissions to ensure that their staff members are able access to their Azure AD accounts. Instead of relying on a central helpdesk, organizations can delegate common tasks such as resetting passwords or changing phone numbers to a team manager. With My Staff, a user who can't access their account can regain access in just a couple of clicks, with no helpdesk or IT staff required.
+My Staff enables you to delegate permissions to a figure of authority, such as a store manager or a team lead, to ensure that their staff members are able to access their Azure AD accounts. Instead of relying on a central helpdesk, organizations can delegate common tasks such as resetting passwords or changing phone numbers to a local team manager. With My Staff, a user who can't access their account can regain access in just a couple of clicks, with no helpdesk or IT staff required.
-Before you configure My Staff for your organization, we recommend that you review this documentation as well as the [user documentation](../user-help/my-staff-team-manager.md) to ensure you understand the functionality and impact of this feature on your users. You can leverage the user documentation to train and prepare your users for the new experience and help to ensure a successful rollout.
-
-SMS-based authentication for users is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+Before you configure My Staff for your organization, we recommend that you review this documentation as well as the [user documentation](../user-help/my-staff-team-manager.md) to ensure you understand how it works and how it impacts your users. You can leverage the user documentation to train and prepare your users for the new experience and help to ensure a successful rollout.
## How My Staff works
-My Staff is based on administrative units (AUs), which are a container of resources which can be used to restrict the scope of a role assignment's administrative control. In My Staff, AUs are used to define a subset of an organization's users such as a store or department. Then, for example, a team manager could be assigned to a role whose scope is one or more AUs. In the example below, the user has been granted the Authentication Administrative role, and the three AUs are the scope of the role. For more information about administrative units, see [Administrative units management in Azure Active Directory](administrative-units.md).
+My Staff is based on administrative units, which are a container of resources which can be used to restrict the scope of a role assignment's administrative control. For more information, see [Administrative units management in Azure Active Directory](administrative-units.md). In My Staff, administrative units can be used to contain a group of users in a store or department. A team manager can then be assigned to an administrative role at a scope of one or more units.
## Before you begin
To complete this article, you need the following resources and privileges:
* If needed, [create an Azure Active Directory tenant](../fundamentals/sign-up-organization.md) or [associate an Azure subscription with your account](../fundamentals/active-directory-how-subscriptions-associated-directory.md). * You need *Global administrator* privileges in your Azure AD tenant to enable SMS-based authentication.
-* Each user that's enabled in the text message authentication method policy must be licensed, even if they don't use it. Each enabled user must have one of the following Azure AD or Microsoft 365 licenses:
+* Each user who's enabled in the text message authentication method policy must be licensed, even if they don't use it. Each enabled user must have one of the following Azure AD or Microsoft 365 licenses:
* [Azure AD Premium P1 or P2](https://azure.microsoft.com/pricing/details/active-directory/) * [Microsoft 365 (M365) F1 or F3](https://www.microsoft.com/licensing/news/m365-firstline-workers)
To complete this article, you need the following resources and privileges:
## How to enable My Staff
-Once you have configured AUs, you can apply this scope to your users who access My Staff. Only users who are assigned an administrative role can access My Staff. To enable My Staff, complete the following steps:
+Once you have configured administrative units, you can apply this scope to your users who access My Staff. Only users who are assigned an administrative role can access My Staff. To enable My Staff, complete the following steps:
1. Sign into the Azure portal as a User administrator. 2. Browse to **Azure Active Directory** > **User settings** > **User feature previews** > **Manage user feature preview settings**.
Once you have configured AUs, you can apply this scope to your users who access
You can protect the My Staff portal using Azure AD Conditional Access policy. Use it for tasks like requiring multi-factor authentication before accessing My Staff.
-We strongly recommend that you protect My Staff using [Azure AD Conditional Access policies](../conditional-access/index.yml). To apply a Conditional Access policy to My Staff, you must manually create the My Staff service principal using PowerShell.
-
-### Apply a Conditional Access policy to My Staff
-
-1. Install the [Microsoft Graph Beta PowerShell cmdlets](https://github.com/microsoftgraph/msgraph-sdk-powershell/blob/dev/samples/0-InstallModule.ps1).
-1. Run the following commands:
+We strongly recommend that you protect My Staff using [Azure AD Conditional Access policies](../conditional-access/index.yml). To apply a Conditional Access policy to My Staff, you must first visit the My Staff site once and wait a few minutes so that the service principal is automatically provisioned in your tenant for use by Conditional Access.
- ```powershell
- Connect-Graph -Scopes "Directory.AccessAsUser.All"
- New-MgServicePrincipal -DisplayName "My Staff" -AppId "ba9ff945-a723-4ab5-a977-bd8c9044fe61"
- ```
-1. Create a Conditional Access policy that applies to the My Staff cloud application.
+You'll see the service principal when you create a Conditional Access policy that applies to the My Staff cloud application.
- ![Create a conditional access policy for the My Staff app](./media/my-staff-configure/conditional-access.png)
+![Create a conditional access policy for the My Staff app](./media/my-staff-configure/conditional-access.png)
## Using My Staff
-When a user goes to My Staff, they are shown the names of the [administrative units](administrative-units.md) over which they have administrative permissions. In the [My Staff user documentation](../user-help/my-staff-team-manager.md), we use the term "location" to refer to administrative units. If an administrator's permissions do not have an AU scope, the permissions apply across the organization. After My Staff has been enabled, the users who are enabled and have been assigned an administrative role can access it through [https://mystaff.microsoft.com](https://mystaff.microsoft.com). They can select an AU to view the users in that AU, and select a user to open their profile.
+When a user goes to My Staff, they are shown the names of the [administrative units](administrative-units.md) over which they have administrative permissions. In the [My Staff user documentation](../user-help/my-staff-team-manager.md), we use the term "location" to refer to administrative units. If an administrator's permissions do not have an administrative unit scope, the permissions apply across the organization. After My Staff has been enabled, the users who are enabled and have been assigned an administrative role can access it through [https://mystaff.microsoft.com](https://mystaff.microsoft.com). They can select an administrative unit to view the users in that unit, and select a user to open their profile.
## Reset a user's password
+Before you can reset passwords for on-premises users, you must meet the following prerequisites. For detailed instructions, see the [Enable self-service password reset](../authentication/tutorial-enable-sspr-writeback.md) tutorial.
+
+* Configure permissions for password writeback
+* Enable password writeback in Azure AD Connect
+* Enable password writeback in Azure AD self-service password reset (SSPR)
+ The following roles have permission to reset a user's password:
+* [Authentication administrator](permissions-reference.md#authentication-administrator)
+* [Privileged authentication administrator](permissions-reference.md#privileged-authentication-administrator)
+* [Global administrator](permissions-reference.md#global-administrator)
+* [Helpdesk administrator](permissions-reference.md#helpdesk-administrator)
+* [User administrator](permissions-reference.md#user-administrator)
+* [Password administrator](permissions-reference.md#password-administrator)
From **My Staff**, open a user's profile. Select **Reset password**.
+* If the user is cloud-only, you can see a temporary password that you can give to the user.
+* If the user is synced from on-premises Active Directory, you can enter a password that meets your on-premises AD policies. You can then give that password to the user.
![Password reset progress indicator and success notification](./media/my-staff-configure/reset-password.png)
The user is required to change their password the next time they sign in.
From **My Staff**, open a user's profile.
+* Select **Add phone number** section to add a phone number for the user
+* Select **Edit phone number** to change the phone number
+* Select **Remove phone number** to remove the phone number for the user
Depending on your settings, the user can then use the phone number you set up to sign in with SMS, perform multi-factor authentication, and perform self-service password reset. To manage a user's phone number, you must be assigned one of the following roles:
+* [Authentication administrator](permissions-reference.md#authentication-administrator)
+* [Privileged authentication administrator](permissions-reference.md#privileged-authentication-administrator)
+* [Global administrator](permissions-reference.md#global-administrator)
## Search
-You can search for AUs and users in your organization using the search bar in My Staff. You can search across all AUs and users in your organization, but you can only make changes to users who are in a AU over which you have been given admin permissions.
+You can search for administrative units and users in your organization using the search bar in My Staff. You can search across all administrative units and users in your organization, but you can only make changes to users who are in an administrative unit over which you have been given admin permissions.
-You can also search for a user within an AU. To do this, use the search bar at the top of the user list.
+You can also search for a user within an administrative unit. To do this, use the search bar at the top of the user list.
## Audit logs
You can view audit logs for actions taken in My Staff in the Azure Active Direct
## Next steps [My Staff user documentation](../user-help/my-staff-team-manager.md)
-[Administrative units documentation](administrative-units.md)
+[Administrative units documentation](administrative-units.md)
active-directory Quickstart App Registration Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/quickstart-app-registration-limits.md
There are two permissions available for granting the ability to create applicati
First, install the Azure AD PowerShell module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureADPreview/2.0.0.17). Then import the Azure AD PowerShell preview module, using the following command: ```powershell
-import-module azureadpreview
+Import-Module -Name AzureADPreview
``` To verify that the module is ready to use, match the version returned by the following command to the one listed here: ```powershell
-get-module azureadpreview
+Get-Module -Name AzureADPreview
ModuleType Version Name ExportedCommands - - -
- Binary 2.0.0.115 azureadpreview {Add-AzureADAdministrati...}
+ Binary 2.0.0.115 AzureADPreview {Add-AzureADAdministrati...}
``` ### Create the custom role in Azure AD PowerShell
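Creating the custom role can be sketched with the `New-AzureADMSRoleDefinition` cmdlet — the action string, display name, and description below are illustrative placeholders:

```powershell
# Sketch only: placeholder values; requires Connect-AzureAD with sufficient privileges
$allowedResourceActions = @("microsoft.directory/applications/createAsOwner")
$rolePermissions = @{ "allowedResourceActions" = $allowedResourceActions }

New-AzureADMSRoleDefinition -RolePermissions $rolePermissions `
    -DisplayName "Application Registration Creator" `
    -Description "Can create application registrations" `
    -IsEnabled $true
```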
active-directory View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/view-assignments.md
First, you must [download the Azure AD preview PowerShell module](https://www.po
To install the Azure AD PowerShell module, use the following commands: ``` PowerShell
-install-module azureadpreview
-import-module azureadpreview
+Install-Module -Name AzureADPreview
+Import-Module -Name AzureADPreview
``` To verify that the module is ready to use, use the following command: ``` PowerShell
-get-module azuread
+Get-Module -Name AzureADPreview
ModuleType Version Name ExportedCommands - - -
- Binary 2.0.0.115 azuread {Add-AzureADAdministrati...}
+ Binary 2.0.0.115 AzureADPreview {Add-AzureADAdministrati...}
``` ### View the assignments of a role
active-directory Github Enterprise Managed User Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/github-enterprise-managed-user-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure GitHub Enterprise Managed User for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to GitHub Enterprise Managed User.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 6aee39c7-08a1-4110-b936-4c85d129743b
+++
+ na
+ms.devlang: na
+ Last updated : 03/05/2021+++
+# Tutorial: Configure GitHub Enterprise Managed User for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both GitHub Enterprise Managed User and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to GitHub Enterprise Managed User using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in GitHub Enterprise Managed User
+> * Remove users in GitHub Enterprise Managed User when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and GitHub Enterprise Managed User
+> * Provision groups and group memberships in GitHub Enterprise Managed User
+> * Single sign-on to GitHub Enterprise Managed User (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A GitHub Enterprise account with Enterprise Managed Users enabled, configured to sign in with SAML SSO through your Azure AD tenant.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and GitHub Enterprise Managed User](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure GitHub Enterprise Managed User to support provisioning with Azure AD
+
+1. The Tenant URL is `https://api.github.com/scim/v2/enterprises/{enterprise}`. This value will be entered in the Tenant URL field in the Provisioning tab of your GitHub Enterprise Managed User application in the Azure portal.
+
+2. As a GitHub Enterprise Managed User administrator, click your profile photo in the upper-right corner, then click **Settings**.
+
+3. In the left sidebar, click **Developer settings**.
+
+4. In the left sidebar, click **Personal access tokens**.
+
+5. Click **Generate new token**.
+
+6. Select the **admin:enterprise** scope for this token.
+
+7. Click **Generate Token**.
+
+8. Copy and save the **secret token**. This value will be entered in the Secret Token field in the Provisioning tab of your GitHub Enterprise Managed User application in the Azure portal.
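For reference, requests to the SCIM endpoint authenticate with this token as a bearer credential — a hand-written illustration (the Azure AD provisioning service issues these requests for you; `{enterprise}` is your enterprise slug and the token value is a placeholder):

```http
GET https://api.github.com/scim/v2/enterprises/{enterprise}/Users
Authorization: Bearer {personal-access-token}
```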
+
+## Step 3. Add GitHub Enterprise Managed User from the Azure AD application gallery
+
+Add GitHub Enterprise Managed User from the Azure AD application gallery to start managing provisioning to GitHub Enterprise Managed User. If you have previously set up GitHub Enterprise Managed User for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to GitHub Enterprise Managed User, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to GitHub Enterprise Managed User
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in GitHub Enterprise Managed User based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for GitHub Enterprise Managed User in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **GitHub Enterprise Managed User**.
+
+ ![The GitHub Enterprise Managed User link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your GitHub Enterprise Managed User Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to GitHub Enterprise Managed User. If the connection fails, ensure your GitHub Enterprise Managed User account has created the secret token as an enterprise owner and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to GitHub Enterprise Managed User**.
+
+9. Review the user attributes that are synchronized from Azure AD to GitHub Enterprise Managed User in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in GitHub Enterprise Managed User for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the GitHub Enterprise Managed User API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ ||||
+ |externalId|String|&check;|
+ |userName|String|
+ |active|Boolean|
+ |roles|String|
+ |displayName|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |name.formatted|String|
+ |emails[type eq "work"].value|String|
+ |emails[type eq "home"].value|String|
+ |emails[type eq "other"].value|String|
+
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to GitHub Enterprise Managed User**.
+
+11. Review the group attributes that are synchronized from Azure AD to GitHub Enterprise Managed User in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in GitHub Enterprise Managed User for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ ||||
+ |externalId|String|&check;|
+ |displayName|String|
+ |members|Reference|
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for GitHub Enterprise Managed User, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to GitHub Enterprise Managed User by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Hcaptcha Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/hcaptcha-enterprise-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with hCaptcha Enterprise | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and hCaptcha Enterprise.
+Last updated: 03/10/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with hCaptcha Enterprise
+
+In this tutorial, you'll learn how to integrate hCaptcha Enterprise with Azure Active Directory (Azure AD). When you integrate hCaptcha Enterprise with Azure AD, you can:
+
+* Control in Azure AD who has access to hCaptcha Enterprise.
+* Enable your users to be automatically signed-in to hCaptcha Enterprise with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* hCaptcha Enterprise single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* hCaptcha Enterprise supports **SP and IDP** initiated SSO.
+* hCaptcha Enterprise supports **Just In Time** user provisioning.
+
+## Adding hCaptcha Enterprise from the gallery
+
+To configure the integration of hCaptcha Enterprise into Azure AD, you need to add hCaptcha Enterprise from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **hCaptcha Enterprise** in the search box.
+1. Select **hCaptcha Enterprise** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for hCaptcha Enterprise
+
+Configure and test Azure AD SSO with hCaptcha Enterprise using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in hCaptcha Enterprise.
+
+To configure and test Azure AD SSO with hCaptcha Enterprise, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure hCaptcha Enterprise SSO](#configure-hcaptcha-enterprise-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create hCaptcha Enterprise test user](#create-hcaptcha-enterprise-test-user)** - to have a counterpart of B.Simon in hCaptcha Enterprise that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **hCaptcha Enterprise** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://accounts.hcaptcha.com/org/<YOUR_SLUG>/saml/callback`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://accounts.hcaptcha.com/org/<YOUR_SLUG>/saml/callback`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://dashboard.hcaptcha.com/org/<YOUR_SLUG>/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [hCaptcha Enterprise Client support team](mailto:support@hcaptcha.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
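All three values share the organization slug. As a quick sanity check (a sketch only; `contoso-org` is a placeholder slug, and the URL patterns are the ones shown above, not verified endpoints), you can derive them programmatically:

```python
def hcaptcha_saml_urls(slug: str) -> dict:
    # Identifier and Reply URL use the same accounts callback pattern;
    # the Sign-on URL points at the dashboard login page.
    callback = f"https://accounts.hcaptcha.com/org/{slug}/saml/callback"
    return {
        "Identifier": callback,
        "Reply URL": callback,
        "Sign-on URL": f"https://dashboard.hcaptcha.com/org/{slug}/login",
    }

for name, url in hcaptcha_saml_urls("contoso-org").items():
    print(f"{name}: {url}")
```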
+
+1. The hCaptcha Enterprise application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the hCaptcha Enterprise application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review and update them per your requirements.
+
+ | Name | Source Attribute|
+ | -- | |
+ | groups | user.groups |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to hCaptcha Enterprise.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **hCaptcha Enterprise**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure hCaptcha Enterprise SSO
+
+To configure single sign-on on the **hCaptcha Enterprise** side, you need to send the **App Federation Metadata Url** to the [hCaptcha Enterprise support team](mailto:support@hcaptcha.com). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create hCaptcha Enterprise test user
+
+In this section, a user called Britta Simon is created in hCaptcha Enterprise. hCaptcha Enterprise supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in hCaptcha Enterprise, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the hCaptcha Enterprise Sign-on URL, where you can initiate the login flow.
+
+* Go to hCaptcha Enterprise Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the hCaptcha Enterprise instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the hCaptcha Enterprise tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode, you're automatically signed in to the hCaptcha Enterprise instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure hCaptcha Enterprise, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Lexonis Talentscape Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/lexonis-talentscape-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Lexonis TalentScape | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Lexonis TalentScape.
+Last updated: 03/10/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Lexonis TalentScape
+
+In this tutorial, you'll learn how to integrate Lexonis TalentScape with Azure Active Directory (Azure AD). When you integrate Lexonis TalentScape with Azure AD, you can:
+
+* Control in Azure AD who has access to Lexonis TalentScape.
+* Enable your users to be automatically signed-in to Lexonis TalentScape with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Lexonis TalentScape single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Lexonis TalentScape supports **SP and IDP** initiated SSO
+* Lexonis TalentScape supports **Just In Time** user provisioning
+
+## Adding Lexonis TalentScape from the gallery
+
+To configure the integration of Lexonis TalentScape into Azure AD, you need to add Lexonis TalentScape from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Lexonis TalentScape** in the search box.
+1. Select **Lexonis TalentScape** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Lexonis TalentScape
+
+Configure and test Azure AD SSO with Lexonis TalentScape using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Lexonis TalentScape.
+
+To configure and test Azure AD SSO with Lexonis TalentScape, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Lexonis TalentScape SSO](#configure-lexonis-talentscape-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Lexonis TalentScape test user](#create-lexonis-talentscape-test-user)** - to have a counterpart of B.Simon in Lexonis TalentScape that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Lexonis TalentScape** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.lexonis.com/`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.lexonis.com/saml2/acs`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_NAME>.lexonis.com/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Lexonis TalentScape Client support team](mailto:support@lexonis.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
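The Lexonis TalentScape values are all built from the customer name. As a small sketch (using a placeholder customer name; the patterns are the ones shown above, not verified endpoints):

```python
def lexonis_saml_urls(customer_name: str) -> dict:
    # Identifier and Sign-on URL are the tenant root;
    # the Reply URL appends the SAML assertion consumer service path.
    base = f"https://{customer_name}.lexonis.com/"
    return {
        "Identifier": base,
        "Reply URL": f"{base}saml2/acs",
        "Sign-on URL": base,
    }

print(lexonis_saml_urls("contoso")["Reply URL"])
```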
+
+1. The Lexonis TalentScape application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Lexonis TalentScape application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review and update them per your requirements.
+
+ | Name | Source Attribute|
+ | -- | |
+ | jobtitle | user.jobtitle |
+ | roles | user.assignedroles |
+
+ > [!NOTE]
+ > Lexonis TalentScape expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Lexonis TalentScape.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Lexonis TalentScape**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Lexonis TalentScape SSO
+
+To configure single sign-on on the **Lexonis TalentScape** side, you need to send the **App Federation Metadata Url** to the [Lexonis TalentScape support team](mailto:support@lexonis.com). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Lexonis TalentScape test user
+
+In this section, a user called Britta Simon is created in Lexonis TalentScape. Lexonis TalentScape supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Lexonis TalentScape, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Lexonis TalentScape Sign-on URL, where you can initiate the login flow.
+
+* Go to Lexonis TalentScape Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Lexonis TalentScape instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Lexonis TalentScape tile in My Apps, if it's configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode, you're automatically signed in to the Lexonis TalentScape instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Lexonis TalentScape, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Onshape Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/onshape-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Onshape** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
-
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.onshape.com`
-
- > [!NOTE]
- > The value is not real. Update the value with the actual Sign-on URL. Contact [Onshape Client support team](mailto:support@onshape.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-1. Click **Save**.
-
-1. Onshape application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. If prompted to save your single sign-on setting, select **Yes**.
+1. The Onshape application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/default-attributes.png)
-1. In addition to above, Onshape application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. In addition to the above, the Onshape application expects a few more attributes, shown below, to be passed to it in the SAML response. These attributes are also pre-populated, but you can review them per your requirements.
| Name | Source Attribute|
| -- | |
Follow these steps to enable Azure AD SSO in the Azure portal.
| companyName | <COMPANY_NAME> |

> [!NOTE]
- > Edit the value of the ΓÇ£companyNameΓÇ¥ claim with the ΓÇ£domain prefixΓÇ¥. E.g., if the customer accesses the Onshape application using a URL like https://acme.onshape.com, then their domain prefix is ΓÇ£acmeΓÇ¥. The attribute value must be only the prefix, not the entire DNS name.
+ > You _must_ change the value of the **companyName** attribute to the *domain prefix* of your Onshape enterprise. For example, if you access the Onshape application by using a URL like `https://acme.onshape.com`, your domain prefix is *acme*. The attribute value must be only the prefix, not the entire DNS name.
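To derive the domain prefix from the URL you use to reach Onshape, a small helper like the following works (a sketch; `acme` is the placeholder from the note above):

```python
from urllib.parse import urlparse

def domain_prefix(onshape_url: str) -> str:
    # "https://acme.onshape.com" -> "acme". Only the subdomain prefix,
    # never the full DNS name, belongs in the companyName claim.
    host = urlparse(onshape_url).hostname or ""
    return host.split(".")[0]

print(domain_prefix("https://acme.onshape.com"))  # acme
```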
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
![The Certificate download link](common/metadataxml.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Onshape SSO
-To configure single sign-on on **Onshape** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Onshape support team](mailto:support@onshape.com). They set this setting to have the SAML SSO connection set properly on both sides.
+For information about how to configure single sign-on on the **Onshape** side, see [Integrating with Microsoft Azure AD](https://cad.onshape.com/help/Content/MS_AzureAD.htm).
### Create Onshape test user
active-directory Sapient Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sapient-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Sapient | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Sapient.
+Last updated: 03/10/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Sapient
+
+In this tutorial, you'll learn how to integrate Sapient with Azure Active Directory (Azure AD). When you integrate Sapient with Azure AD, you can:
+
+* Control in Azure AD who has access to Sapient.
+* Enable your users to be automatically signed-in to Sapient with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Sapient single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Sapient supports **SP** initiated SSO
+
+* Sapient supports **Just In Time** user provisioning
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
++
+## Adding Sapient from the gallery
+
+To configure the integration of Sapient into Azure AD, you need to add Sapient from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Sapient** in the search box.
+1. Select **Sapient** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Sapient
+
+Configure and test Azure AD SSO with Sapient using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Sapient.
+
+To configure and test Azure AD SSO with Sapient, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Sapient SSO](#configure-sapient-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Sapient test user](#create-sapient-test-user)** - to have a counterpart of B.Simon in Sapient that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Sapient** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMERNAME>.app.sapient.industries`
+
+ > [!NOTE]
+ > The value is not real. Update the value with the actual Sign-On URL. Contact [Sapient Client support team](mailto:help@sapient.industries) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Sapient.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Sapient**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Sapient SSO
+
+To configure single sign-on on the **Sapient** side, you need to send the **App Federation Metadata Url** to the [Sapient support team](mailto:help@sapient.industries). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Sapient test user
+
+In this section, a user called Britta Simon is created in Sapient. Sapient supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Sapient, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the Sapient Sign-on URL, where you can initiate the login flow.
+
+* Go to Sapient Sign-on URL directly and initiate the login flow from there.
+
+* You can also use Microsoft My Apps. When you click the Sapient tile in My Apps, you're redirected to the Sapient Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Sapient, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Syncplicity Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/syncplicity-tutorial.md
To learn more about SaaS app integration with Azure AD, see [What is application
To get started, you need the following items:
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
+* An Azure AD subscription. If you don't have a subscription, you can get a 12-month free trial [here](https://azure.microsoft.com/free/).
* Syncplicity single sign-on (SSO) enabled subscription.

## Scenario description
To configure the integration of Syncplicity into Azure AD, you need to add Syncp
1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Syncplicity** in the search box.
-1. Select **Syncplicity** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. Under **Create**, click **Enterprise Application**.
+1. In the **Browse Azure AD gallery** section, type **Syncplicity** in the search box.
+1. Select **Syncplicity** from results panel and then click **Create** to add the app. Wait a few seconds while the app is added to your tenant.
## Configure and test Azure AD SSO
To configure and test Azure AD SSO with Syncplicity, complete the following buil
4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
5. **[Create Syncplicity test user](#create-syncplicity-test-user)** - to have a counterpart of B.Simon in Syncplicity that is linked to the Azure AD representation of user.
6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+7. **[Update SSO](#update-sso)** - to make the necessary changes in Syncplicity if you have changed the SSO settings in Azure AD.
### Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Syncplicity** application integration page, find the **Manage** section and select **Single sign-on**.
-1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. In the [Azure portal](https://portal.azure.com/), on the **Syncplicity** application integration page, find the **Getting Started** section and select **Set up single sign-on**.
+2. On the **Select a Single sign-on method** page, select **SAML**.
+3. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** page, enter the values for the following fields:
+4. In the **Basic SAML Configuration** section, enter the values for the following fields:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<companyname>.syncplicity.com`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<companyname>.syncplicity.com/sp`
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<companyname>.syncplicity.com`
+
+ c. In the **Reply URL (Assertion Consumer Service URL)** text box, type a URL using the following pattern:
+ `https://<companyname>.syncplicity.com/Auth/AssertionConsumerService.aspx`
   > [!NOTE]
   > These values are not real. Update them with the actual Identifier, Sign on URL, and Reply URL. Contact the [Syncplicity Client support team](https://www.syncplicity.com/contact-us) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
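All three values follow one pattern keyed on your company name, so they can be derived together. A small Python sketch (illustrative only; the authoritative values come from Syncplicity support):

```python
def syncplicity_saml_urls(company: str) -> dict:
    """Build the Basic SAML Configuration values from the tutorial's
    documented patterns; use the real values from Syncplicity support."""
    base = f"https://{company}.syncplicity.com"
    return {
        "identifier": f"{base}/sp",
        "sign_on_url": base,
        "reply_url": f"{base}/Auth/AssertionConsumerService.aspx",
    }

urls = syncplicity_saml_urls("contoso")
# urls["identifier"] is "https://contoso.syncplicity.com/sp"
```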
-1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Edit**. Then in the dialog click the ellipsis button next to your active certificate and select **PEM certificate download**.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Syncplicity** section, copy the appropriate URL(s) based on your requirement.
+ > [!NOTE]
+ > You need the PEM certificate, as Syncplicity does not accept certificates in CER format.
+
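The PEM format is simply the binary (DER/`.cer`) certificate bytes, base64-encoded between BEGIN/END markers. If you only have a binary export, the Python standard library can re-encode it, as this sketch shows (the byte string below is a placeholder, not a real certificate):

```python
import ssl

# Placeholder bytes standing in for a binary DER/.cer certificate export;
# ssl.DER_cert_to_PEM_cert only re-encodes, it does not validate X.509.
der_bytes = b"not-a-real-certificate"

pem_text = ssl.DER_cert_to_PEM_cert(der_bytes)
assert pem_text.startswith("-----BEGIN CERTIFICATE-----")

# Round-trip back to DER to show the two encodings carry the same bytes.
assert ssl.PEM_cert_to_DER_cert(pem_text) == der_bytes
```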
+6. On the **Set up Syncplicity** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Sign in to your **Syncplicity** tenant.
-1. In the menu on the top, click **admin**, select **settings**, and then click **Custom domain and single sign-on**.
+1. In the menu on the top, click **Admin**, select **Settings**, and then click **Custom domain and single sign-on**.
![Syncplicity](./media/syncplicity-tutorial/ic769545.png "Syncplicity")
Follow these steps to enable Azure AD SSO in the Azure portal.
c. In the **Entity Id** textbox, paste the **Identifier (Entity ID)** value that you used in the **Basic SAML Configuration** section in the Azure portal.
- d. In the **Sign-in page URL** textbox, Paste the **Login URL** which you have copied from Azure portal.
+ d. In the **Sign-in page URL** textbox, paste the **Sign on URL** value that you copied from the Azure portal.
- e. In the **Logout page URL** textbox, Paste the **Logout URL** which you have copied from Azure portal.
+ e. In the **Logout page URL** textbox, paste the **Logout URL** value that you copied from the Azure portal.
f. In **Identity Provider Certificate**, click **Choose file**, and then upload the certificate that you downloaded from the Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
In this section, you'll create a test user called B.Simon in the Azure portal.

1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
+2. Select **New user** at the top of the screen.
+3. In the **User** properties, follow these steps:
+
+ a. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+
+ b. In the **Name** field, enter `B.Simon`.
+
+ c. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+
+ d. Click **Create**.
### Assign the Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![The "Users and groups" link](common/users-groups-blade.png)
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. Select **Add user/group**.
   ![The Add User link](common/add-assign-user.png)
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** page select **Users**.
+1. In the **Users** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog, click the **Assign** button.
+1. In the **Add Assignment** page, click the **Assign** button.
### Create Syncplicity test user
For Azure AD users to be able to sign in, they must be provisioned to Syncplicit
1. Sign in to your **Syncplicity** tenant (for example: `https://company.Syncplicity.com`).
-1. Click **admin** and select **user accounts** and then click **ADD A USER**.
+1. Click **Admin** and select **User Accounts**, then click **Add a User**.
![Manage Users](./media/syncplicity-tutorial/ic769764.png "Manage Users")
-1. Type the **Email addresses** of an Azure AD account you want to provision, select **User** as **Role**, and then click **NEXT**.
+1. Type the **Email addresses** of an Azure AD account you want to provision, select **User** as **Role**, and then click **Next**.
    ![Account Information](./media/syncplicity-tutorial/ic769765.png "Account Information")

    > [!NOTE]
- > The Azure AD account holder gets an email including a link to confirm and activate the account.
+ > The Azure AD account holder gets an email including a link to confirm and activate the account.
-1. Select a group in your company that your new user should become a member of, and then click **NEXT**.
+1. Select a group in your company that your new user should become a member of, and then click **Next**.
    ![Group Membership](./media/syncplicity-tutorial/ic769772.png "Group Membership")

    > [!NOTE]
- > If there are no groups listed, click **NEXT**.
+ > If there are no groups listed, click **Next**.
-1. Select the folders you would like to place under Syncplicity's control on the user's computer, and then click **NEXT**.
+1. Select the folders you would like to place under Syncplicity's control on the user's computer, and then click **Next**.
![Syncplicity Folders](./media/syncplicity-tutorial/ic769773.png "Syncplicity Folders")
For Azure AD users to be able to sign in, they must be provisioned to Syncplicit
When you select the Syncplicity tile in the Access Panel, you should be automatically signed in to the Syncplicity instance for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+### Update SSO
+
+Whenever you need to make changes to the SSO, you need to check the **SAML Signing Certificate** being used. If the certificate has changed, make sure to upload the new one to Syncplicity as described in **[Configure Syncplicity SSO](#configure-syncplicity-sso)**.
+
+If you are using the Syncplicity mobile app, contact Syncplicity Customer Support (support@syncplicity.com) for assistance.
+ ## Additional Resources

- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Templafy Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/templafy-provisioning-tutorial.md
- Title: 'Tutorial: Configure Templafy for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to Templafy.
--
-writer: zchia
----- Previously updated : 07/26/2019---
-# Tutorial: Configure Templafy for automatic user provisioning
-
-The objective of this tutorial is to demonstrate the steps to be performed in Templafy and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to Templafy.
-
-> [!NOTE]
-> This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
->
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Prerequisites
-
-The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-
-* An Azure AD tenant.
-* [A Templafy tenant](https://www.templafy.com/pricing/).
-* A user account in Templafy with Admin permissions.
-
-## Assigning users to Templafy
-
-Azure Active Directory uses a concept called *assignments* to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users and/or groups that have been assigned to an application in Azure AD are synchronized.
-
-Before configuring and enabling automatic user provisioning, you should decide which users and/or groups in Azure AD need access to Templafy. Once decided, you can assign these users and/or groups to Templafy by following the instructions here:
-* [Assign a user or group to an enterprise app](../manage-apps/assign-user-or-group-access-portal.md)
-
-## Important tips for assigning users to Templafy
-
-* It is recommended that a single Azure AD user is assigned to Templafy to test the automatic user provisioning configuration. Additional users and/or groups may be assigned later.
-
-* When assigning a user to Templafy, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning.
-
-## Set up Templafy for provisioning
-
-Before configuring Templafy for automatic user provisioning with Azure AD, you will need to enable SCIM provisioning on Templafy.
-
-1. Sign in to your Templafy Admin Console. Click on **Administration**.
-
- ![Templafy Admin Console](media/templafy-provisioning-tutorial/image00.png)
-
-2. Click on **Authentication Method**.
-
- ![Screenshot of the Templafy administration section with the Authentication method option called out.](media/templafy-provisioning-tutorial/image01.png)
-
-3. Copy the **SCIM Api Key** value. This value will be entered in the **Secret Token** field in the Provisioning tab of your Templafy application in the Azure portal.
-
- ![A screenshot of the S C I M A P I key.](media/templafy-provisioning-tutorial/image02.png)
-
-## Add Templafy from the gallery
-
-To configure Templafy for automatic user provisioning with Azure AD, you need to add Templafy from the Azure AD application gallery to your list of managed SaaS applications.
-
-**To add Templafy from the Azure AD application gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, in the left navigation panel, select **Azure Active Directory**.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Go to **Enterprise applications**, and then select **All applications**.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add a new application, select the **New application** button at the top of the pane.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, enter **Templafy**, select **Templafy** in the results panel, and then click the **Add** button to add the application.
-
- ![Templafy in the results list](common/search-new-app.png)
-
-## Configuring automatic user provisioning to Templafy
-
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Templafy based on user and/or group assignments in Azure AD.
-
-> [!TIP]
-> You may also choose to enable SAML-based single sign-on for Templafy, following the instructions provided in the [Templafy Single sign-on tutorial](templafy-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features complement each other.
-
-### To configure automatic user provisioning for Templafy in Azure AD:
-
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Templafy**.
-
- ![The Templafy link in the Applications list](common/all-applications.png)
-
-3. Select the **Provisioning** tab.
-
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-
-4. Set the **Provisioning Mode** to **Automatic**.
-
- ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-
-5. Under the **Admin Credentials** section, input `https://scim.templafy.com/scim` in **Tenant URL**. Input the **SCIM API Key** value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Templafy. If the connection fails, ensure your Templafy account has Admin permissions and try again.
-
- ![Tenant URL + Token](common/provisioning-testconnection-tenanturltoken.png)
-
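Under the hood, a connection test of this kind is typically just an authenticated SCIM request. A hedged Python sketch of how a client might probe the tenant URL with the secret token as a bearer credential (the token value and the choice of the standard `/ServiceProviderConfig` endpoint are illustrative assumptions, not Azure AD's exact behavior):

```python
import urllib.request

TENANT_URL = "https://scim.templafy.com/scim"
SECRET_TOKEN = "<SCIM-Api-Key-from-Templafy>"  # placeholder; never commit real keys

# SCIM clients authenticate every call with the secret token as a bearer
# credential; ServiceProviderConfig is a standard read-only SCIM endpoint
# that a connection test can probe without side effects.
req = urllib.request.Request(
    f"{TENANT_URL}/ServiceProviderConfig",
    headers={
        "Authorization": f"Bearer {SECRET_TOKEN}",
        "Accept": "application/scim+json",
    },
)
# urllib.request.urlopen(req) would perform the actual check.
```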
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications, and select the **Send an email notification when a failure occurs** check box.
-
- ![Notification Email](common/provisioning-notification-email.png)
-
-7. Click **Save**.
-
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Templafy**.
-
- ![Templafy User Mappings](media/templafy-provisioning-tutorial/usermapping.png)
-
-9. Review the user attributes that are synchronized from Azure AD to Templafy in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Templafy for update operations. Select the **Save** button to commit any changes.
-
- ![Templafy User Attributes](media/templafy-provisioning-tutorial/userattribute.png)
-
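The attribute mappings translate Azure AD user properties into SCIM user attributes. Purely as an illustration (the exact attributes depend on the mappings you keep in step 9), a provisioning request body might look like the following, with `userName` acting as a typical matching property:

```python
import json

# Illustrative SCIM 2.0 user resource of the kind a provisioning service
# sends; attribute names and values here are examples only.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "b.simon@contoso.com",   # typical matching property
    "name": {"givenName": "B.", "familyName": "Simon"},
    "active": True,
}
payload = json.dumps(scim_user)
```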
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Templafy**.
-
- ![Templafy Group Mappings](media/templafy-provisioning-tutorial/groupmapping.png)
-
-11. Review the group attributes that are synchronized from Azure AD to Templafy in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Templafy for update operations. Select the **Save** button to commit any changes.
-
- ![Templafy Group Attributes](media/templafy-provisioning-tutorial/groupattribute.png)
-
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-13. To enable the Azure AD provisioning service for Templafy, change the **Provisioning Status** to **On** in the **Settings** section.
-
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-
-14. Define the users and/or groups that you would like to provision to Templafy by choosing the desired values in **Scope** in the **Settings** section.
-
- ![Provisioning Scope](common/provisioning-scope.png)
-
-15. When you are ready to provision, click **Save**.
-
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-
- This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Templafy.
-
- For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
-
-## Additional resources
-
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-
-## Next steps
-
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Thrive Lxp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/thrive-lxp-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Thrive LXP | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Thrive LXP.
++++++++ Last updated : 03/10/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Thrive LXP
+
+In this tutorial, you'll learn how to integrate Thrive LXP with Azure Active Directory (Azure AD). When you integrate Thrive LXP with Azure AD, you can:
+
+* Control in Azure AD who has access to Thrive LXP.
+* Enable your users to be automatically signed-in to Thrive LXP with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Thrive LXP single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Thrive LXP supports **SP** initiated SSO.
+
+## Adding Thrive LXP from the gallery
+
+To configure the integration of Thrive LXP into Azure AD, you need to add Thrive LXP from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Thrive LXP** in the search box.
+1. Select **Thrive LXP** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Thrive LXP
+
+Configure and test Azure AD SSO with Thrive LXP using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Thrive LXP.
+
+To configure and test Azure AD SSO with Thrive LXP, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Thrive LXP SSO](#configure-thrive-lxp-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Thrive LXP test user](#create-thrive-lxp-test-user)** - to have a counterpart of B.Simon in Thrive LXP that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Thrive LXP** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `urn:amazon:cognito:sp:<THRIVE_LXP_IDENTIFIER>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>-lxp.auth.eu-west-2.amazoncognito.com/saml2/idpresponse`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.learn.link`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Thrive LXP Client support team](mailto:support@thrivelearning.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
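The Federation Metadata XML you download bundles the signing certificate and endpoint URLs. As a rough illustration of what a service provider does with it, this Python sketch parses a trimmed-down stand-in document (the URLs are placeholders) and pulls out the single sign-on endpoint:

```python
import xml.etree.ElementTree as ET

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}

# Trimmed-down stand-in for the downloaded Federation Metadata XML;
# the real file also carries the signing certificate and more endpoints.
metadata = """
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  entityID="https://sts.windows.net/tenant-id/">
  <IDPSSODescriptor>
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/tenant-id/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>
"""

root = ET.fromstring(metadata)
sso = root.find(".//md:SingleSignOnService", NS)
login_url = sso.get("Location")
# login_url is "https://login.microsoftonline.com/tenant-id/saml2"
```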
+1. On the **Set up Thrive LXP** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Thrive LXP.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Thrive LXP**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Thrive LXP SSO
+
+To configure single sign-on on the **Thrive LXP** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Thrive LXP support team](mailto:support@thrivelearning.com). They use them to set up the SAML SSO connection properly on both sides.
+
+### Create Thrive LXP test user
+
+In this section, you create a user called Britta Simon in Thrive LXP. Work with [Thrive LXP support team](mailto:support@thrivelearning.com) to add the users in the Thrive LXP platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This redirects to the Thrive LXP Sign-on URL, where you can initiate the login flow.
+
+* Go to the Thrive LXP Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Thrive LXP tile in My Apps, you are redirected to the Thrive LXP Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure Thrive LXP, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Truechoice Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/truechoice-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with TrueChoice | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and TrueChoice.
++++++++ Last updated : 03/11/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with TrueChoice
+
+In this tutorial, you'll learn how to integrate TrueChoice with Azure Active Directory (Azure AD). When you integrate TrueChoice with Azure AD, you can:
+
+* Control in Azure AD who has access to TrueChoice.
+* Enable your users to be automatically signed-in to TrueChoice with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* TrueChoice single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* TrueChoice supports **SP** initiated SSO.
+
+* TrueChoice supports **Just In Time** user provisioning.
+
+## Adding TrueChoice from the gallery
+
+To configure the integration of TrueChoice into Azure AD, you need to add TrueChoice from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **TrueChoice** in the search box.
+1. Select **TrueChoice** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for TrueChoice
+
+Configure and test Azure AD SSO with TrueChoice using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TrueChoice.
+
+To configure and test Azure AD SSO with TrueChoice, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure TrueChoice SSO](#configure-truechoice-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create TrueChoice test user](#create-truechoice-test-user)** - to have a counterpart of B.Simon in TrueChoice that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **TrueChoice** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `urn:amazon:cognito:sp:<TRUECHOICE_APPID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<APP>.auth.us-east-2.amazoncognito.com/saml2/idpresponse`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<APP>.auth.us-east-2.amazoncognito.com/login?response_type=code&client_id=<ID>&redirect_uri=https://<APP_ID>.amplifyapp.com/auth/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [TrueChoice Client support team](mailto:helpdesk@truechoice.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
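As a quick illustration, the three URL patterns above can be assembled from placeholder values in a shell. All values here (`TRUECHOICE_APPID`, `APP`, `CLIENT_ID`, `AMPLIFY_APP_ID`) are assumptions for the sketch; replace them with the actual values from TrueChoice support:

```bash
#!/usr/bin/env bash
# Placeholder values -- replace with the actual values from TrueChoice support.
TRUECHOICE_APPID="example-app-id"
APP="contoso-app"
CLIENT_ID="0123456789abcdef"
AMPLIFY_APP_ID="d1a2b3c4"

# Assemble the three values for the Basic SAML Configuration section.
IDENTIFIER="urn:amazon:cognito:sp:${TRUECHOICE_APPID}"
REPLY_URL="https://${APP}.auth.us-east-2.amazoncognito.com/saml2/idpresponse"
SIGNON_URL="https://${APP}.auth.us-east-2.amazoncognito.com/login?response_type=code&client_id=${CLIENT_ID}&redirect_uri=https://${AMPLIFY_APP_ID}.amplifyapp.com/auth/"

echo "Identifier:  ${IDENTIFIER}"
echo "Reply URL:   ${REPLY_URL}"
echo "Sign on URL: ${SIGNON_URL}"
```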
+1. The TrueChoice application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the TrueChoice application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also pre-populated, but you can review them according to your requirements.
+
+ | Name | Source Attribute |
+ | ---- | ---------------- |
+ | country | user.country |
+ | name | user.displayname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
 ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TrueChoice.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **TrueChoice**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure TrueChoice SSO
+
+To configure single sign-on on the **TrueChoice** side, you need to send the **App Federation Metadata Url** to the [TrueChoice support team](mailto:helpdesk@truechoice.io). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create TrueChoice test user
+
+In this section, a user called Britta Simon is created in TrueChoice. TrueChoice supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in TrueChoice, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the TrueChoice Sign-on URL, where you can initiate the login flow.
+
+* Go to the TrueChoice Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the TrueChoice tile in My Apps, you are redirected to the TrueChoice Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure TrueChoice, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/planned-maintenance.md
This article assumes that you have an existing AKS cluster. If you need an AKS c
When using Planned Maintenance, the following restrictions apply: -- AKS reserves the right to break these windows for fixes and patches that are urgent or critical.-- Performing maintenance operations are considered *best-effort only* and are not guaranteed to occur within a specified window.
+- AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical.
+- Currently, performing maintenance operations is considered *best-effort only* and is not guaranteed to occur within a specified window.
- Updates cannot be blocked for more than seven days. ### Install aks-preview CLI extension
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/start-stop-cluster.md
-# Stop and Start an Azure Kubernetes Service (AKS) cluster (preview)
+# Stop and Start an Azure Kubernetes Service (AKS) cluster
Your AKS workloads may not need to run continuously, for example a development cluster that is used only during business hours. This leads to times where your Azure Kubernetes Service (AKS) cluster might be idle, running no more than the system components. You can reduce the cluster footprint by [scaling all the `User` node pools to 0](scale-cluster.md#scale-user-node-pools-to-0), but your [`System` pool](use-system-pools.md) is still required to run the system components while the cluster is running. To optimize your costs further during these periods, you can completely turn off (stop) your cluster. This action will stop your control plane and agent nodes altogether, allowing you to save on all the compute costs, while maintaining all your objects and cluster state stored for when you start it again. You can then pick up right where you left off after a weekend or to have your cluster running only while you run your batch jobs. - ## Before you begin This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal]. - ### Limitations When using the cluster start/stop feature, the following restrictions apply: - This feature is only supported for Virtual Machine Scale Sets backed clusters. - The cluster state of a stopped AKS cluster is preserved for up to 12 months. If your cluster is stopped for more than 12 months, the cluster state cannot be recovered. For more information, see the [AKS Support Policies](support-policies.md).-- During preview, you need to stop the cluster autoscaler (CA) before attempting to stop the cluster. - You can only start or delete a stopped AKS cluster. To perform any operation like scale or upgrade, start your cluster first.
-### Install the `aks-preview` Azure CLI
-
-You also need the *aks-preview* Azure CLI extension version 0.4.64 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `StartStopPreview` preview feature
-
-To use the start/stop cluster feature, you must enable the `StartStopPreview` feature flag on your subscription.
-
-Register the `StartStopPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "StartStopPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/StartStopPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ## Stop an AKS Cluster You can use the `az aks stop` command to stop a running AKS cluster's nodes and control plane. The following example stops a cluster named *myAKSCluster*:
If the `provisioningState` shows `Stopping` that means your cluster hasn't fully
> [!IMPORTANT] > If you are using [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) the stop operation can take longer as the drain process will take more time to complete. - ## Start an AKS Cluster You can use the `az aks start` command to start a stopped AKS cluster's nodes and control plane. The cluster is restarted with the previous control plane state and number of agent nodes.
You can verify when your cluster has started by using the [az aks show][az-aks-s
If the `provisioningState` shows `Starting` that means your cluster hasn't fully started yet. - ## Next steps - To learn how to scale `User` pools to 0, see [Scale `User` pools to 0](scale-cluster.md#scale-user-node-pools-to-0).
aks Windows Container Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-container-cli.md
The following example output shows the resource group created successfully:
To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use a network policy that uses [Azure CNI][azure-cni-about] (advanced) network plugin. For more detailed information to help plan out the required subnet ranges and network considerations, see [configure Azure CNI networking][use-advanced-networking]. Use the [az aks create][az-aks-create] command to create an AKS cluster named *myAKSCluster*. This command will create the necessary network resources if they don't exist.
-* The cluster is configured with two nodes
-* The *windows-admin-password* and *windows-admin-username* parameters set the admin credentials for any Windows Server containers created on the cluster and must meet [Windows Server password requirements][windows-server-password].
-* The node pool uses `VirtualMachineScaleSets`
+* The cluster is configured with two nodes.
+* The `--windows-admin-password` and `--windows-admin-username` parameters set the admin credentials for any Windows Server containers created on the cluster and must meet [Windows Server password requirements][windows-server-password]. If you don't specify the *windows-admin-password* parameter, you will be prompted to provide a value.
+* The node pool uses `VirtualMachineScaleSets`.
> [!NOTE] > To ensure your cluster to operate reliably, you should run at least 2 (two) nodes in the default node pool.
-Provide your own secure *PASSWORD_WIN* (remember that the commands in this article are entered into a BASH shell):
+Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command (remember that the commands in this article are entered into a BASH shell).
```azurecli-interactive
-PASSWORD_WIN="P@ssw0rd1234"
+echo "Please enter the username to use as administrator credentials for Windows Server containers on your cluster: " && read WINDOWS_USERNAME
+```
+
+Create your cluster, ensuring you specify the `--windows-admin-username` parameter. The following example command creates a cluster using the value from *WINDOWS_USERNAME* you set in the previous command. Alternatively, you can provide a different username directly in the parameter instead of using *WINDOWS_USERNAME*. The following command will also prompt you to create a password for the administrator credentials for your Windows Server containers on your cluster. Alternatively, you can use the `--windows-admin-password` parameter and specify your own value there.
+```azurecli-interactive
az aks create \ --resource-group myResourceGroup \ --name myAKSCluster \ --node-count 2 \ --enable-addons monitoring \ --generate-ssh-keys \
- --windows-admin-password $PASSWORD_WIN \
- --windows-admin-username azureuser \
+ --windows-admin-username $WINDOWS_USERNAME \
--vm-set-type VirtualMachineScaleSets \ --network-plugin azure ``` > [!NOTE]
-> If you get a password validation error, verify the *windows-admin-password* parameter meets the [Windows Server password requirements][windows-server-password]. If your password meets the requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
+> If you get a password validation error, verify the password you set meets the [Windows Server password requirements][windows-server-password]. If your password meets the requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
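Before running `az aks create`, you can sanity-check a candidate password locally. The sketch below applies the commonly documented Windows Server complexity rule (three of four character classes) and an assumed 8–123 character length bound; treat both thresholds as assumptions and defer to the linked requirements page:

```bash
# Rough local sanity check for a candidate Windows admin password.
# The 8-123 length bound and "three of four character classes" rule are
# assumptions based on commonly documented Windows Server defaults.
check_windows_password() {
  pw="$1"
  classes=0
  len=${#pw}
  if [ "$len" -lt 8 ] || [ "$len" -gt 123 ]; then
    echo "invalid length"
    return 1
  fi
  # Count which character classes are present.
  case "$pw" in *[a-z]*) classes=$((classes+1)) ;; esac
  case "$pw" in *[A-Z]*) classes=$((classes+1)) ;; esac
  case "$pw" in *[0-9]*) classes=$((classes+1)) ;; esac
  case "$pw" in *[!a-zA-Z0-9]*) classes=$((classes+1)) ;; esac
  if [ "$classes" -ge 3 ]; then
    echo "ok"
  else
    echo "needs more character classes"
    return 1
  fi
}

check_windows_password 'P@ssw0rd1234'
```

This only catches obvious mistakes; the authoritative validation happens server-side when the cluster is created.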
After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally the cluster can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-hrw-run-runbooks.md
Title: Run Azure Automation runbooks on a Hybrid Runbook Worker
description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 01/29/2021 Last updated : 03/10/2021
Hybrid Runbook Workers on Azure virtual machines can use managed identities to a
Follow the next steps to use a managed identity for Azure resources on a Hybrid Runbook Worker: 1. Create an Azure VM.
-2. Configure managed identities for Azure resources on the VM. See [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm).
-3. Give the VM access to a resource group in Resource Manager. Refer to [Use a Windows VM system-assigned managed identity to access Resource Manager](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md#grant-your-vm-access-to-a-resource-group-in-resource-manager).
-4. Install the Hybrid Runbook Worker on the VM. See [Deploy a Windows Hybrid Runbook Worker](automation-windows-hrw-install.md) or [Deploy a Linux Hybrid Runbook Worker](automation-linux-hrw-install.md).
-5. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
+1. Configure managed identities for Azure resources on the VM. See [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm).
+1. Give the VM access to a resource group in Resource Manager. Refer to [Use a Windows VM system-assigned managed identity to access Resource Manager](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md#grant-your-vm-access-to-a-resource-group-in-resource-manager).
+1. Install the Hybrid Runbook Worker on the VM. See [Deploy a Windows Hybrid Runbook Worker](automation-windows-hrw-install.md) or [Deploy a Linux Hybrid Runbook Worker](automation-linux-hrw-install.md).
+1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.
```powershell # Connect to Azure using the managed identities for Azure resources identity configured on the Azure VM that is hosting the hybrid runbook worker
Follow the next steps to use a managed identity for Azure resources on a Hybrid
Instead of having your runbook provide its own authentication to local resources, you can specify a Run As account for a Hybrid Runbook Worker group. To specify a Run As account, you must define a [credential asset](./shared-resources/credentials.md) that has access to local resources. These resources include certificate stores and all runbooks run under these credentials on a Hybrid Runbook Worker in the group.
-The user name for the credential must be in one of the following formats:
+- The user name for the credential must be in one of the following formats:
-* domain\username
-* username@domain
-* username (for accounts local to the on-premises computer)
+ * domain\username
+ * username@domain
+ * username (for accounts local to the on-premises computer)
+
+- To use the PowerShell runbook **Export-RunAsCertificateToHybridWorker**, you need to install the Az modules for Azure Automation on the local machine.
+
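Illustratively, the three accepted user name formats above can be distinguished with a small shell helper. This is purely a sketch for checking which form a credential is in; the Hybrid Runbook Worker performs the real validation:

```bash
# Classify a credential user name into one of the accepted formats:
# domain\username, username@domain, or a local username.
username_format() {
  case "$1" in
    *\\*) echo 'domain\username' ;;
    *@*)  echo 'username@domain' ;;
    *)    echo 'local username' ;;
  esac
}

username_format 'CONTOSO\jsmith'
```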
+#### Use a credential asset to specify a Run As account
Use the following procedure to specify a Run As account for a Hybrid Runbook Worker group: 1. Create a [credential asset](./shared-resources/credentials.md) with access to local resources.
-2. Open the Automation account in the Azure portal.
-3. Select **Hybrid Worker Groups**, and then select the specific group.
-4. Select **All settings**, followed by **Hybrid worker group settings**.
-5. Change the value of **Run As** from **Default** to **Custom**.
-6. Select the credential and click **Save**.
+1. Open the Automation account in the Azure portal.
+1. Select **Hybrid Worker Groups**, and then select the specific group.
+1. Select **All settings**, followed by **Hybrid worker group settings**.
+1. Change the value of **Run As** from **Default** to **Custom**.
+1. Select the credential and click **Save**.
## <a name="runas-script"></a>Install Run As account certificate
Get-AzAutomationAccount | Select-Object AutomationAccountName
To finish preparing the Run As account: 1. Save the **Export-RunAsCertificateToHybridWorker** runbook to your computer with a **.ps1** extension.
-2. Import it into your Automation account.
-3. Edit the runbook, changing the value of the `Password` variable to your own password.
-4. Publish the runbook.
-5. Run the runbook, targeting the Hybrid Runbook Worker group that runs and authenticates runbooks using the Run As account.
-6. Examine the job stream to see that it reports the attempt to import the certificate into the local machine store, followed by multiple lines. This behavior depends on how many Automation accounts you define in your subscription and the degree of success of the authentication.
+1. Import it into your Automation account.
+1. Edit the runbook, changing the value of the `Password` variable to your own password.
+1. Publish the runbook.
+1. Run the runbook, targeting the Hybrid Runbook Worker group that runs and authenticates runbooks using the Run As account.
+1. Examine the job stream to see that it reports the attempt to import the certificate into the local machine store, followed by multiple lines. This behavior depends on how many Automation accounts you define in your subscription and the degree of success of the authentication.
## Work with signed runbooks on a Windows Hybrid Runbook Worker
To create the GPG keyring and keypair, use the Hybrid Runbook Worker [nxautomati
sudo su - nxautomation ```
-2. Once you are using **nxautomation**, generate the GPG keypair. GPG guides you through the steps. You must provide name, email address, expiration time, and passphrase. Then you wait until there is enough entropy on the machine for the key to be generated.
+1. Once you are using **nxautomation**, generate the GPG keypair. GPG guides you through the steps. You must provide name, email address, expiration time, and passphrase. Then you wait until there is enough entropy on the machine for the key to be generated.
```bash sudo gpg --generate-key ```
-3. Because the GPG directory was generated with sudo, you must change its owner to **nxautomation** using the following command.
+1. Because the GPG directory was generated with sudo, you must change its owner to **nxautomation** using the following command.
```bash sudo chown -R nxautomation ~/.gnupg
automation Schedules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/schedules.md
$StartTime = (Get-Date "18:00:00").AddDays(1)
New-AzAutomationSchedule -AutomationAccountName "TestAzureAuto" -Name "1st, 15th and Last" -StartTime $StartTime -DaysOfMonth @("One", "Fifteenth", "Last") -ResourceGroupName "TestAzureAuto" -MonthInterval 1 ```
+## Create a schedule with a Resource Manager template
+
+In this example, we use an Azure Resource Manager (ARM) template that creates a new job schedule. For general information about using this template to manage Automation job schedules, see [Microsoft.Automation automationAccounts/jobSchedules template reference](/templates/microsoft.automation/automationaccounts/jobschedules#quickstart-templates).
+
+Copy this template file into a text editor:
+
+```json
+{
+  "name": "5d5f3a05-111d-4892-8dcc-9064fa591b96",
+  "type": "Microsoft.Automation/automationAccounts/jobSchedules",
+  "apiVersion": "2015-10-31",
+  "properties": {
+    "schedule": {
+      "name": "scheduleName"
+    },
+    "runbook": {
+      "name": "runbookName"
+    },
+    "runOn": "hybridWorkerGroup",
+    "parameters": {}
+  }
+}
+
+Edit the following parameter values and save the template as a JSON file:
+
+* Job schedule object name: A GUID (Globally Unique Identifier) is used as the name of the job schedule object.
+
+ >[!IMPORTANT]
+ > For each job schedule deployed with an ARM template, the GUID must be unique. Even if you're rescheduling an existing schedule, you'll need to change the GUID. This applies even if you've previously deleted an existing job schedule that was created with the same template. Reusing the same GUID results in a failed deployment.<br/><br/>
+ > There are services online that can generate a new GUID for you, such as this [Free Online GUID Generator](https://guidgenerator.com/).
+
+* Schedule name: Represents the name of the Automation job schedule that will be linked to the specified runbook.
+* Runbook name: Represents the name of the Automation runbook the job schedule is to be associated with.
+
+Once the file has been saved, you can create the runbook job schedule with the following PowerShell command. The command uses the `TemplateFile` parameter to specify the path and filename of the template.
+
+```powershell
+New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateFile "<path>\RunbookJobSchedule.json"
+```
+ ## Link a schedule to a runbook A runbook can be linked to multiple schedules, and a schedule can have multiple runbooks linked to it. If a runbook has parameters, you can provide values for them. You must provide values for any mandatory parameters, and you also can provide values for any optional parameters. These values are used each time the runbook is started by this schedule. You can attach the same runbook to another schedule and specify different parameter values.
azure-functions Dotnet Isolated Process Developer Howtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-developer-howtos.md
zone_pivot_groups: development-environment-functions
-# Develop and publish .NET 5 function using Azure Functions
+# Develop and publish .NET 5 functions using Azure Functions
This article shows you how to work with C# functions using .NET 5.0, which run out-of-process from the Azure Functions runtime. You'll learn how to create, debug locally, and publish these .NET isolated process functions to Azure. In Azure, these functions run in an isolated process that supports .NET 5.0. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
This section describes the current state of the functional and behavioral differ
| Feature/behavior | In-process (.NET Core 3.1) | Out-of-process (.NET 5.0) | | - | - | - | | .NET versions | LTS (.NET Core 3.1) | Current (.NET 5.0) |
-| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
+| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) |
| Binding extension packages | [`Microsoft.Azure.WebJobs.Extensions.*`](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | Under [`Microsoft.Azure.Functions.Worker.Extensions.*`](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | | Logging | [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true) passed to the function | [`ILogger`](/dotnet/api/microsoft.extensions.logging.ilogger?view=dotnet-plat-ext-5.0&preserve-view=true) obtained from `FunctionContext` | | Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | Not supported |
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-instance-management.md
The method returns an object with the following properties:
* **Terminated**: The instance was stopped abruptly. * **History**: The execution history of the orchestration. This field is only populated if `showHistory` is set to `true`.
+> [!NOTE]
+> An orchestrator is not marked as `Completed` until all of its scheduled tasks have finished _and_ the orchestrator has returned. In other words, it is not sufficient for an orchestrator to reach its `return` statement for it to be marked as `Completed`. This is particularly relevant for cases where `WhenAny` is used; those orchestrators often `return` before all the scheduled tasks have executed.
+ This method returns `null` (.NET), `undefined` (JavaScript), or `None` (Python) if the instance doesn't exist. # [C#](#tab/csharp)
azure-government Documentation Government Connect Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-connect-vs.md
Title: Connect to Azure Government with Visual Studio | Microsoft Docs
+ Title: Connect to Azure Government with Visual Studio
description: This quickstart shows how to connect to Azure Government with Visual Studio cloud: gov
ms.devlang: na
na Previously updated : 10/31/2019 Last updated : 03/09/2021 # Quickstart: Connect to Azure Government with Visual Studio
-Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications you can connect to the Azure Government using different tools, as described in the following video.
+Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications, you can connect to Azure Government using different tools, as described in the following video.
-This quickstart shows how to connect to your Azure Government accounts and subscriptions with Visual Studio.
+> [!VIDEO https://www.youtube.com/embed/Q3kx4cmRkCA]
-If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
+This quickstart shows how to connect to your Azure Government accounts and subscriptions with Visual Studio. If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
## Prerequisites
-* Review [Guidance for developers](documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
-* Review [Compare Azure Government and global Azure](compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure.
-* Install <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a>.
+- Review [Guidance for developers](./documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
+- Review [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure.
+- Install <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a>.
## Sign in to Azure Government
-Open up Visual Studio and click **Tools** > **Options**.
-![VS-1](./media/connect-to-vs-1.png)
+1. Open up Visual Studio and click **Tools** > **Options**.
-Then navigate to **Environment** > **Accounts** and under "Registered Azure Clouds" you can see which cloud endpoints you already have registered. In order to add Azure Government, you must click the "Add" button and will see the dropdown below:
+ ![VS-1](./media/connect-to-vs-1.png)
-![VS-2](./media/connect-to-vs-2.png)
-![VS-3](./media/connect-to-vs-3.png)
+2. Navigate to **Environment** > **Accounts**. Under *Registered Azure Clouds*, you can see which cloud endpoints you already have registered. To add Azure Government, click the *Add* button and you will see the dropdown below:
-From this dropdown, you can choose "Azure U.S. Government" and click add. Once you have done this you should be able to see "Azure U.S. Government" under "Registered Azure Clouds".
+ ![VS-2](./media/connect-to-vs-2.png)
+ ![VS-3](./media/connect-to-vs-3.png)
-![VS-4](./media/connect-to-vs-4.png)
-![VS-5](./media/connect-to-vs-5.png)
+3. From this dropdown, choose *Azure U.S. Government* and click *Add*. Once you have done this, you should be able to see *Azure U.S. Government* under *Registered Azure Clouds*.
-Then you can click on the blue "Manage Accounts" link in the top-left corner and choose "Azure US Government" to access an Azure Government account.
+ ![VS-4](./media/connect-to-vs-4.png)
+ ![VS-5](./media/connect-to-vs-5.png)
-![VS-6](./media/connect-to-vs-6.png)
-![VS-7](./media/connect-to-vs-7.png)
+4. Then you can click on the blue *Manage Accounts* link in the top-left corner and choose *Azure U.S. Government* to access an Azure Government account.
-You are prompted to sign in, and once you have entered your credentials you will be able to see your account and subscriptions populate on the left-hand side. Now you are free to manage and interact with your Azure Government resources in Visual Studio!
+ ![VS-6](./media/connect-to-vs-6.png)
+ ![VS-7](./media/connect-to-vs-7.png)
+
+5. You are prompted to sign in. Once you have entered your credentials, you will be able to see your account and subscriptions populate on the left-hand side. Now you are free to manage and interact with your Azure Government resources in Visual Studio!
## Next steps
-This quickstart showed you how to use Visual Studio to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](compare-azure-government-global-azure.md). To learn more about Azure services continue to the Azure documentation.
+This quickstart showed you how to use Visual Studio to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md). To learn more about Azure services, continue to the Azure documentation.
> [!div class="nextstepaction"]
-> [Azure documentation](../index.yml).
+> [Azure documentation](../index.yml)
azure-government Documentation Government Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-extension.md
ms.devlang: na
na Previously updated : 12/11/2018 Last updated : 03/11/2021
# Azure Government virtual machine extensions
-This document contains a list of available [virtual machine extensions](../virtual-machines/extensions/features-windows.md) in Azure Government. If you'd like to see other extensions in Azure Government, please request them via the [Azure Government Feedback Forum](https://feedback.azure.com/forums/558487-azure-government).
+This document contains a list of available [virtual machine extensions](../virtual-machines/extensions/features-windows.md) in Azure Government. To see other extensions in Azure Government, request them via the [Azure Government Feedback Forum](https://feedback.azure.com/forums/558487-azure-government).
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
## Virtual machine extensions
-The list of virtual machine extensions available in Azure Government can be obtained by [connecting to Azure Government via PowerShell](documentation-government-get-started-connect-with-ps.md) and running the following commands:
+You can obtain the list of virtual machine extensions available in Azure Government by [connecting to Azure Government via PowerShell](./documentation-government-get-started-connect-with-ps.md) and running the following commands:
```powershell
Connect-AzAccount -Environment AzureUSGovernment
Select-Object -ExpandProperty Entry | `
Out-File vm-extensions.md -->
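As a fuller sketch of the commands above (assuming the Az PowerShell module is installed and you are signed in to an Azure Government subscription), an extension list for one region can be produced with a pipeline like the following; this is illustrative, not the exact script used to generate the table:

```powershell
# Sign in to the Azure Government cloud
Connect-AzAccount -Environment AzureUSGovernment

# Enumerate extension publishers, types, and versions in a region
# (here US Gov Virginia; substitute any Azure Government region)
$location = "usgovvirginia"
Get-AzVMImagePublisher -Location $location |
    Get-AzVMExtensionImageType |
    Get-AzVMExtensionImage |
    Select-Object PublisherName, Type, Version
```

The same pipeline, redirected with `Out-File`, yields a snapshot file like the one shown in the table below.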
-The table below contains a snapshot of the list of extensions available in Azure Government as of June 06, 2019.
-
-|Extension|Versions|
-| | |
-| ADETest | 1.4.0.2 |
-| ApplicationHealthLinux | 1.0.0 |
-| ApplicationHealthWindows | 1.0.5 |
-| AquariusLinux | 1.5.0.0 |
-| AzureBackupWindowsWorkload | 1.1.0.21 |
-| AzureCATExtensionHandler | 2.2.0.68 |
-| AzureDiskEncryption | 1.1.0.1; 2.2.0.3 |
-| AzureDiskEncryptionForLinux | 0.1.0.999195; 0.1.0.999196; 0.1.0.999283; 0.1.0.999297; 0.1.0.999322; 1.1.0.17 |
-| AzureDiskEncryptionForLinuxTest | 0.1.0.999321 |
-| AzureEnhancedMonitorForLinux | 2.0.0.2; 3.0.1.0 |
-| BGInfo | 2.1 |
-| ChefClient | 1210.12.110.1000; 1210.12.110.1002; 1210.13.1.1 |
-| Compute.AKS.Linux.Billing | 1.0.0 |
-| Compute.AKS.Windows.Billing | 1.0.0 |
-| Compute.AKS-Engine.Linux.Billing | 1.0.0 |
-| Compute.AKS-Engine.Windows.Billing | 1.0.0 |
-| ConfigurationForLinux | 1.12.1 |
-| ConfigurationforWindows | 1.14.1.0; 1.8.0.0 |
-| CustomScript | 2.0.2; 2.0.6 |
-| CustomScriptExtension | 1.2; 1.3; 1.4; 1.7; 1.8; 1.9.1; 1.9.2; 1.9.3 |
-| CustomScriptForLinux | 1.0; 1.1; 1.2.2.0; 1.3.0.2; 1.4.1.0; 1.5.2.0 |
-| DSC | 2.19.0.0; 2.22.0.0; 2.23.0.0; 2.24.0.0; 2.26.0.0; 2.26.1.0; 2.71.0.0; 2.72.0.0; 2.73.0.0; 2.76.0.0; 2.77.0.0 |
-| DSCForLinux | 1.0.0.0; 2.0.0.0; 2.70.0.4 |
-| DSMSForWindows | 3.1.977.0 |
-| IaaSAntimalware | 1.3.0.0; 1.5.4.4 |
-| IaaSAutoPatchingForWindows | 1.0.1.14 |
-| IaaSDiagnostics | 1.4.3.0; 1.7.4.0; 1.9.0.0 |
-| JsonADDomainExtension | 1.3; 1.3.2 |
-| KeyVaultForWindows | 0.2.0.898 |
-| Linux | 1.0.0.9111; 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxChefClient | 1210.12.109.1005; 1210.12.110.1000; 1210.12.110.1002; 1210.13.1.1 |
-| LinuxDEBIAN7 | 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxDEBIAN8 | 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxDiagnostic | 3.0.117; 3.0.119; 2.0.9005; 2.1.9005; 2.2.9005; 2.3.9005; 2.3.9007; 2.3.9011; 2.3.9013; 2.3.9015; 2.3.9017; 2.3.9021 |
-| LinuxOL6 | 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxOL7 | 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxRHEL6 | 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxRHEL7 | 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxSLES11SP3 | 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxSLES11SP4 | 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxSLES12 | 1.0.0.9111; 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxUBUNTU1404 | 1.0.0.9111; 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| LinuxUBUNTU1604 | 1.0.0.9109; 1.0.0.9111; 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| MicrosoftMonitoringAgent | 1.0.11030.0; 1.0.11030.1; 1.0.11030.2; 1.0.11049.1; 1.0.11081.0 |
-| NetworkWatcherAgentLinux | 1.4.270.0; 1.4.306.5; 1.4.411.1; 1.4.493.1; 1.4.526.2; 1.4.585.2; 1.4.905.3 |
-| NetworkWatcherAgentWindows | 1.4.270.0; 1.4.306.5; 1.4.411.1; 1.4.493.1; 1.4.526.2; 1.4.585.2; 1.4.905.3 |
-| OmsAgentForLinux | 1.2.75.0; 1.4.45.2; 1.8.14 |
-| OSPatchingForLinux | 1.0.1.1; 2.0.0.5; 2.1.0.0; 2.2.0.0; 2.3.0.1 |
-| RDMAUpdateForLinux | 0.1.0.9 |
-| RunCommandLinux | 1.0.0 |
-| RunCommandWindows | 1.1.0 |
-| SqlIaaSAgent | 1.2.11.0; 1.2.15.0; 1.2.16.0; 1.2.17.0; 1.2.18.0; 1.2.30.0; 2.0.10.0; 2.0.18.0; 2.0.3.0; 2.0.5.0; 2.0.6.0; 2.0.9.0 |
-| VMAccessAgent | 2.0; 2.0.2; 2.3; 2.4.2; 2.4.4 |
-| VMAccessForLinux | 1.0; 1.1; 1.2; 1.3.0.1; 1.4.0.0; 1.4.5.0 |
-| VMBackupForLinuxExtension | 0.1.0.995; 0.1.0.993 |
-| VMJITAccessExtension | 1.0.0.0; 1.0.1.0 |
-| VMSnapshot | 1.0.22.0; 1.0.23.0; 1.0.26.0; 1.0.27.0; 1.0.40.0; 1.0.41.0; 1.0.42.0; 1.0.43.0; 1.0.54.0; 1.0.55.0 |
-| VMSnapshotLinux | 1.0.9111.0; 1.0.9112.0; 1.0.9117.0; 1.0.9118.0; 1.0.9128.0; 1.0.9131.0; 1.0.9133.0; 1.0.9143.0; 1.0.9147.0 |
-| VSRemoteDebugger | 1.1.3.0 |
-| Windows | 1.0.0.9110; 1.0.0.9113; 1.0.0.9114; 1.0.0.9116 |
-| WorkloadBackup | 1.1.0.8 |
-
## Next steps
* [Deploy a Windows virtual machine extension](../virtual-machines/extensions/features-windows.md#run-vm-extensions)
* [Deploy a Linux virtual machine extension](../virtual-machines/extensions/features-linux.md#run-vm-extensions)
azure-government Documentation Government Get Started Connect With Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-get-started-connect-with-cli.md
Title: Connect to Azure Government with Azure CLI | Microsoft Docs
-description: This quickstart shows you how to connect to Azure Government and reate a web app in Azure Government with Azure CLI
+ Title: Connect to Azure Government with Azure CLI
+description: This quickstart shows you how to connect to Azure Government and create a web app in Azure Government with Azure CLI
cloud: gov documentationcenter: ''
ms.devlang: na
na Previously updated : 08/09/2018 Last updated : 03/09/2021 #Customer intent: As a developer working for a federal government agency "x", I want to connect to Azure Government using CLI so I can start developing against Azure Government's secure isolated datacenters.
# Quickstart: Connect to Azure Government with Azure CLI
-Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications you can connect to the Azure Government using different tools, as described in the following video.
-
-This quickstart shows how to use the Azure CLI to access and start managing resources in Azure Government.
+Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications, you can connect to Azure Government using different tools, as described in the following video.
> [!VIDEO https://www.youtube.com/embed/Q3kx4cmRkCA]
-If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
+This quickstart shows how to use the Azure CLI to access and start managing resources in Azure Government. If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
## Prerequisites
-* Review [Guidance for developers](documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
-* Review [Compare Azure Government and global Azure](compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure.
+- Review [Guidance for developers](./documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
+- Review [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure.
## Install Azure CLI
az account list-locations
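Before running commands such as `az account list-locations`, the CLI must be pointed at the Azure Government cloud. A minimal sketch, assuming the Azure CLI is installed and you have Azure Government credentials (`AzureUSGovernment` is the registered CLI cloud name for this environment):

```azurecli
# Point the CLI at the Azure Government cloud, then sign in
az cloud set --name AzureUSGovernment
az login

# Verify the active cloud and list the regions you can deploy to
az cloud show --query name --output tsv
az account list-locations --query "[].name" --output table
```

To switch back to global Azure later, run `az cloud set --name AzureCloud`.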
## Next steps
-This quickstart showed you how to use CLI to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](compare-azure-government-global-azure.md). To learn more about Azure services continue to the Azure documentation.
+This quickstart showed you how to use CLI to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md). To learn more about Azure services, continue to the Azure documentation.
> [!div class="nextstepaction"]
-> [Azure documentation](../index.yml).
+> [Azure documentation](../index.yml)
azure-government Documentation Government Get Started Connect With Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-get-started-connect-with-portal.md
Title: Create a web app with the Azure Government portal | Microsoft Docs
+ Title: Connect to Azure Government using portal
description: This quickstart shows how to connect to Azure Government and create a web app in Azure Government using portal cloud: gov
ms.devlang: na
na Previously updated : 08/09/2018 Last updated : 03/09/2021 #Customer intent: As a developer working for a federal government agency "x", I want to connect to Azure Government using portal so I can start creating apps and developing against Azure Government's secure isolated datacenters.
# Quickstart: Connect to Azure Government using portal
-Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications you can connect to the Azure Government using different tools, as described in the following video.
-
-This quickstart shows how to use the Azure Government portal to access and start managing resources in Azure Government. The Azure Government portal is the primary way most people will connect to their Azure Government environment.
+Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications, you can connect to Azure Government using different tools, as described in the following video.
> [!VIDEO https://www.youtube.com/embed/Q3kx4cmRkCA]
-If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
+This quickstart shows how to use the Azure Government portal to access and start managing resources in Azure Government. The Azure Government portal is the primary way most people will connect to their Azure Government environment. If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
## Prerequisites
-* Review [Guidance for developers](documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
-* Review [Compare Azure Government and global Azure](compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure.
+- Review [Guidance for developers](./documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
+- Review [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure.
## Sign in to Azure Government
Sign in using your Azure Government credentials. Once you sign in, you should se
## Check out Service health
-You can take a look at Azure Government regions and their health status by clicking on **Service Health**.
-
-Currently, you can choose from 6 available government-only datacenter regions.
+You can take a look at Azure Government regions and their health status by clicking on **Service Health**. Choose one of the available US government-only datacenter regions.
![Screenshot shows the Service Health page for Azure Government with the Region drop-down menu open.](./media/connect-with-portal/connect-with-portal.png)
## Next steps
-This quickstart showed you how to use portal to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](compare-azure-government-global-azure.md). To learn more about Azure services continue to the Azure documentation.
+This quickstart showed you how to use the Azure Government portal to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md). To learn more about Azure services, continue to the Azure documentation.
> [!div class="nextstepaction"]
-> [Azure documentation](../index.yml).
+> [Azure documentation](../index.yml)
azure-government Documentation Government Get Started Connect With Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-get-started-connect-with-ps.md
Title: Azure Government Connect with PowerShell | Microsoft Docs
-description: Information on connecting your subscription in Azure Government with PowerShell
+ Title: Connect to Azure Government with PowerShell
+description: Information on connecting to your subscription in Azure Government with PowerShell
cloud: gov documentationcenter: ''
ms.devlang: na
na Previously updated : 08/09/2018 Last updated : 03/09/2021 #Customer intent: As a developer working for a federal government agency "x", I want to connect to Azure Government using PowerShell so I can start developing against Azure Government's secure isolated datacenters.
# Quickstart: Connect to Azure Government with PowerShell
-Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications you can connect to the Azure Government using different tools, as described in the following video.
-
-This quickstart shows how to use PowerShell to access and start managing resources in Azure Government.
+Microsoft Azure Government delivers a dedicated cloud with world-class security and compliance, enabling US government agencies and their partners to transform their workloads to the cloud. To manage your Azure Government cloud workloads and applications, you can connect to Azure Government using different tools, as described in the following video.
> [!VIDEO https://www.youtube.com/embed/Q3kx4cmRkCA]
-If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
+This quickstart shows how to use PowerShell to access and start managing resources in Azure Government. If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
## Prerequisites
-* Review [Guidance for developers](documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
-* Review [Compare Azure Government and global Azure](compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure.
+- Review [Guidance for developers](./documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
+- Review [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure.
## Install PowerShell
Install PowerShell on your local machine. For more information, check out the [Introduction to Azure PowerShell](/powershell/azure/).
## Specifying Azure Government as the *environment* to connect to
When you start PowerShell, you have to tell Azure PowerShell to connect to Azure Government by specifying an environment parameter. The parameter ensures that PowerShell connects to the correct endpoints. The collection of endpoints is determined when you log in to your account. Different APIs require different versions of the environment switch:
Get-AzureLocation # For classic deployment model
| Common Name | Display Name | Location Name |
| | | |
| US Gov Virginia |`USGov Virginia` | `usgovvirginia` |
-| US Gov Iowa |`USGov Iowa` | `usgoviowa` |
| US Gov Texas |`USGov Texas` | `usgovtexas` |
| US Gov Arizona |`USGov Arizona` | `usgovarizona` |
| US DoD East |`USDoD East` | `usdodeast` |
Get-AzureLocation # For classic deployment model
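The environment switch can be sketched end-to-end as follows; this is a minimal example for the Az module, assuming it is installed and you have Azure Government credentials:

```powershell
# Connect to Azure Government by passing the environment switch
Connect-AzAccount -Environment AzureUSGovernment

# Inspect the endpoint collection that the environment resolves to
Get-AzEnvironment -Name AzureUSGovernment

# List the regions available to your subscription
Get-AzLocation | Select-Object Location, DisplayName
```

Omitting `-Environment` connects to global Azure, whose endpoints will reject Azure Government credentials.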
## Next steps
-This quickstart showed you how to use PowerShell to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](compare-azure-government-global-azure.md). To learn more about Azure services continue to the Azure documentation.
+This quickstart showed you how to use PowerShell to connect to Azure Government. Once you are connected to Azure Government, you may want to explore Azure services. Make sure you check out the variations, described in [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md). To learn more about Azure services, continue to the Azure documentation.
> [!div class="nextstepaction"]
-> [Azure documentation](../index.yml).
+> [Azure documentation](../index.yml)
azure-government Documentation Government Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-image-gallery.md
Title: Azure Government image gallery | Microsoft Docs
-description: This article provides an overview of the Azure Government image gallery and the images included
+ Title: Azure Government Marketplace images
+description: This article provides an overview of the Azure Government image gallery
cloud: gov documentationcenter: ''
ms.devlang: na
na Previously updated : 12/11/2018 Last updated : 03/09/2021
# Azure Government Marketplace images
-The Azure Government Marketplace provides a similar experience as the global Azure portal. You can choose to deploy prebuilt images from Microsoft and our partners, or upload your own VHDs. This gives you the flexibility to deploy your own standardized images if needed.
-The following table shows a list of available images within the Azure Government Marketplace. If you'd like to see other images in Azure Government, please request them via the [Azure Government Feedback Forum](https://feedback.azure.com/forums/558487-azure-government).
-
-Some of the prebuilt images include pay-as-you-go licensing for specific software. Work with your Microsoft account team or reseller for Azure Government-specific pricing. For more information, see [Virtual machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/).
+Microsoft Azure Government Marketplace provides a similar experience as Azure Marketplace. You can choose to deploy prebuilt images from Microsoft and our partners, or upload your own VHDs. This approach gives you the flexibility to deploy your own standardized images if needed.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
## Images
-The list of virtual machine images available in Azure Government can be obtained by [connecting to Azure Government via PowerShell](documentation-government-get-started-connect-with-ps.md) and running the following commands:
+To obtain a list of virtual machine images available in Azure Government, [connect to Azure Government via PowerShell](documentation-government-get-started-connect-with-ps.md) and run the following commands:
```powershell
Connect-AzAccount -Environment AzureUSGovernment
Select-Object -ExpandProperty Entry | `
Out-File vm-images.md -->
-The table below contains a snapshot of the list of virtual machine images available in Azure Government via Resource Manager as of June 06, 2019.
+If you'd like to see other images in Azure Government, request them via the [Azure Government feedback forum](https://feedback.azure.com/forums/558487-azure-government).
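The image list itself can be reproduced with a pipeline like the following sketch (assuming the Az PowerShell module and a signed-in Azure Government account; output columns may differ slightly by module version):

```powershell
# Sign in to the Azure Government cloud
Connect-AzAccount -Environment AzureUSGovernment

# Walk publishers -> offers -> SKUs for one Azure Government region
$location = "usgovvirginia"
Get-AzVMImagePublisher -Location $location |
    Get-AzVMImageOffer |
    Get-AzVMImageSku |
    Select-Object PublisherName, Offer, Skus
```

Enumerating every publisher can take several minutes; filter `Get-AzVMImagePublisher` output (for example with `Where-Object`) to narrow the search.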
-|Publisher|Offer|SKU|
-| | | |
-| 128technology | 128t_networking_platform | 128t_networking_platform |
-| a10networks | a10-vthunder-adc | vthunder_410_byol |
-| a10networks | a10-vthunder-adc | vthunder_byol |
-| a10networks | vthunder-414-gr1 | vthunder-414gr1-byol |
-| accessdata-group | adlab64-sw-cloud | 22000111 |
-| accessdata-group | adlab64-sw-cloud | 22000112 |
-| ACEPublishing | f5-big-ip | f5-bigip-virtual-edition-best-byol |
-| actifio | actifio-sky | actifio-global-manager |
-| actifio | actifio-sky | actifio-sky |
-| akumina | akumina-interchange | akam101 |
-| alertlogic | alert-logic-tm | 20215000100-tmpbyol |
-| alertlogic | alert-logic-wsm | 20216000100-wsmpbyl |
-| altamira-corporation | lumify | lumify |
-| arista-networks | veos-router | eos-4_20_1fx-virtual-router |
-| arista-networks | veos-router | eos-4_21_0f |
-| arista-networks | veos-router | eos-4_21_3f |
-| arista-networks | veos-router | eos-4_22_0f |
-| asigra | asigra-on-azure | asigra-evaluation-vm |
-| aviatrix-systems | aviatrix-cloud-services | av-csg-byol |
-| aviatrix-systems | aviatrix-companion-gateway-v2 | aviatrix-companion-gateway-v2 |
-| barracudanetworks | barracuda-app-sec-control-center | byol |
-| barracudanetworks | barracuda-email-security-gateway | byol |
-| barracudanetworks | barracuda-email-security-gateway | hourly |
-| barracudanetworks | barracuda-message-archiver | byol |
-| barracudanetworks | barracuda-ng-cc | byol |
-| barracudanetworks | barracuda-ng-firewall | byol |
-| barracudanetworks | barracuda-ng-firewall | hourly |
-| barracudanetworks | barracuda-ng-firewall | private-barracuda-ng-firewall-hourly-teletracking1 |
-| barracudanetworks | barracuda-spam-firewall | byol |
-| barracudanetworks | waf | byol |
-| barracudanetworks | waf | hourly |
-| barracudanetworks | waf | private-waf-hourly-blackboard |
-| barracudanetworks | waf | private-waf-hourly-fusion |
-| barracudanetworks | waf | private-waf-hourly-teletracking1 |
-| barracudanetworks | waf | private-waf-hourly-us-agency-for-int-dev |
-| batch | rendering-centos73 | rendering |
-| batch | rendering-windows2016 | rendering |
-| beyondtrust | beyondinsight | uvm-azm |
-| bitnami | abantecart | 1-2 |
-| bitnami | activemq | 5-13 |
-| bitnami | activemq | default |
-| bitnami | akeneo | 1-4 |
-| bitnami | alfrescocommunity | 201602 |
-| bitnami | apachesolr | 5-5 |
-| bitnami | artifactory | 4-5 |
-| bitnami | canvaslms | 2016-02 |
-| bitnami | cassandra | 3-7 |
-| bitnami | cassandra | cassandra |
-| bitnami | cassandra | default |
-| bitnami | chyrp | 2-5 |
-| bitnami | civicrm | 4-7 |
-| bitnami | cmsmadesimple | 2-1 |
-| bitnami | concrete5 | 5-7 |
-| bitnami | consul | default |
-| bitnami | coppermine | 1-5 |
-| bitnami | couchdb | 1-6 |
-| bitnami | couchdb | couchdb |
-| bitnami | diaspora | 0-5 |
-| bitnami | discourse | 1-4 |
-| bitnami | djangostack | 1-8 |
-| bitnami | dokuwiki | 20150810a |
-| bitnami | dolibarr | 3-8 |
-| bitnami | dreamfactory | 2-1 |
-| bitnami | drupal | 8-0 |
-| bitnami | eclipseche | 4-4 |
-| bitnami | elastic-search | 2-2 |
-| bitnami | elk | 4-6 |
-| bitnami | erpnext | 6-21 |
-| bitnami | espocrm | 3-9 |
-| bitnami | etcd | default |
-| bitnami | exoplatform | 4 |
-| bitnami | exoplatformenterprise | 4-2 |
-| bitnami | ezpublish | 2014-11 |
-| bitnami | fatfreecrm | 0-13 |
-| bitnami | ghost | 0-7 |
-| bitnami | ghost | default |
-| bitnami | gitlab | 8-5 |
-| bitnami | grafana | default |
-| bitnami | hadoop | 2-7 |
-| bitnami | hadoop | default |
-| bitnami | hhvmstack | 3-9 |
-| bitnami | hordegroupwarewebmail | 5-2 |
-| bitnami | jasperreports | 6-2 |
-| bitnami | jbossas | 7-2 |
-| bitnami | jenkins | 1-650 |
-| bitnami | joomla | 3-5 |
-| bitnami | jrubystack | 9-0 |
-| bitnami | kafka | 0-1 |
-| bitnami | kafka | default |
-| bitnami | kafka | kafka |
-| bitnami | kong | default |
-| bitnami | kubernetessandbox | default |
-| bitnami | lampstack | 5-6 |
-| bitnami | lappstack | 5-6 |
-| bitnami | letschat | 0-4 |
-| bitnami | liferay | 6-2 |
-| bitnami | limesurvey | 20160228 |
-| bitnami | livehelperchat | 2-44v |
-| bitnami | magento | 2-0 |
-| bitnami | mahara | 15-10 |
-| bitnami | mantis | 1-2 |
-| bitnami | mariadb | default |
-| bitnami | mariadb | mariadb |
-| bitnami | mattermost | 3-6 |
-| bitnami | mautic | 1-2 |
-| bitnami | mean | 3-2 |
-| bitnami | mediawiki | 1-26 |
-| bitnami | memcached | 1-4 |
-| bitnami | memcached | default |
-| bitnami | memcached | memcached |
-| bitnami | modx | 2-4 |
-| bitnami | mongodb | 3-2 |
-| bitnami | mongodb | default |
-| bitnami | moodle | 3-0 |
-| bitnami | moodle | moodle-free-byol |
-| bitnami | mybb | 1-8 |
-| bitnami | mysql | 5-6 |
-| bitnami | mysql | default |
-| bitnami | nats | default |
-| bitnami | neo4j | default |
-| bitnami | neos | 2-0 |
-| bitnami | nginxstack | 1-9 |
-| bitnami | noalyss | 6-9 |
-| bitnami | nodejs | 4-3 |
-| bitnami | ocportal | 9 |
-| bitnami | odoo | 9-0 |
-| bitnami | openatrium | 2-54 |
-| bitnami | opencart | 2-1 |
-| bitnami | openedx | cypress |
-| bitnami | openfire | 4 |
-| bitnami | openproject | 5-0 |
-| bitnami | orangehrm | 3-3 |
-| bitnami | orocrm | 1 |
-| bitnami | osclass | 3-6 |
-| bitnami | osqa | 1-0rc |
-| bitnami | owncloud | 8-2 |
-| bitnami | oxid-eshop | 4-9 |
-| bitnami | parseserver | 2-1 |
-| bitnami | parseserver | default |
-| bitnami | phabricator | 20160208 |
-| bitnami | phpbb | 3-1 |
-| bitnami | phplist | 3-2 |
-| bitnami | pimcore | 3-1 |
-| bitnami | piwik | 2-16 |
-| bitnami | plone | 5-0 |
-| bitnami | pootle | 2-7 |
-| bitnami | postgresql | 9-5 |
-| bitnami | postgresql | default |
-| bitnami | postgresql | postgresql |
-| bitnami | prestashop | 1-6-1 |
-| bitnami | processmakerenterprise | 3-1 |
-| bitnami | processmakeropensourceedition | 3-0 |
-| bitnami | processwire | 2-7 |
-| bitnami | publify | 8-2 |
-| bitnami | rabbitmq | 3-6 |
-| bitnami | rabbitmq | default |
-| bitnami | rabbitmq | rabbitmq |
-| bitnami | railo | 4-2 |
-| bitnami | redash | 0-10 |
-| bitnami | redis | 3-2 |
-| bitnami | redis | default |
-| bitnami | redis | redis |
-| bitnami | redmine | 3 |
-| bitnami | redmineplusagile | public |
-| bitnami | refinerycms | 2-1 |
-| bitnami | reportserver | 2-2 |
-| bitnami | reportserverenterprise | 3-0 |
-| bitnami | resourcespace | 7-5 |
-| bitnami | reviewboard | 2-5 |
-| bitnami | reviewboardpowerpack | public |
-| bitnami | roundcube | 1-1 |
-| bitnami | rubystack | 2-0 |
-| bitnami | seopanel | 3-8 |
-| bitnami | shopware | default |
-| bitnami | silverstripe | 3-2 |
-| bitnami | simpleinvoices | 2013-1 |
-| bitnami | simplemachinesforum | 2-0 |
-| bitnami | sonarqube | 6-4 |
-| bitnami | spree | 3-0 |
-| bitnami | squash | 20151209 |
-| bitnami | subversion | 1-8 |
-| bitnami | suitecrm | 7-4 |
-| bitnami | tensorflowserving | default |
-| bitnami | testlink | 1-9 |
-| bitnami | tikiwikicmsgroupware | 14-2 |
-| bitnami | tinytinyrss | 20160220 |
-| bitnami | tom-cat | 7-0 |
-| bitnami | trac | 1-0 |
-| bitnami | typo3 | 7-6 |
-| bitnami | weblate | 2-4 |
-| bitnami | webmailpro | public |
-| bitnami | wildfly | 10-0 |
-| bitnami | wordpress | 4-4 |
-| bitnami | wordpress-multisite | 4 |
-| bitnami | wordpresspro | default |
-| bitnami | xoops | 2-5 |
-| bitnami | youtrack | 7-0 |
-| bitnami | zookeeper | default |
-| bitnami | zurmo | 3-1 |
-| bocada | bocada106 | bocada110 |
-| Canonical | UbuntuServer | 12.04.5-LTS |
-| Canonical | UbuntuServer | 14.04.4-LTS |
-| Canonical | UbuntuServer | 14.04.5-LTS |
-| Canonical | UbuntuServer | 16.04-LTS |
-| Canonical | UbuntuServer | 16.04.0-LTS |
-| Canonical | UbuntuServer | 16.10 |
-| Canonical | UbuntuServer | 17.10 |
-| Canonical | UbuntuServer | 18.04-LTS |
-| canonical-test | canonical-server-test-broadcast-001 | 16_04-daily-lts |
-| canonical-test | canonical-server-test-broadcast-001 | 16_04-lts |
-| canonical-test | canonical-server-test-broadcast-001 | 16_04-lts-gen2-tech-preview |
-| canonical-test | canonical-server-test-broadcast-001 | 16_04_0-lts |
-| canonical-test | canonical-server-test-broadcast-001 | 18_04-daily-lts |
-| canonical-test | canonical-server-test-broadcast-001 | 18_04-lts |
-| canonical-test | canonical-server-test-broadcast-001 | 18_04-lts-gen2-tech-preview |
-| canonical-test | canonical-server-test-broadcast-001 | 18_10-daily |
-| canonical-test | canonical-server-test-broadcast-001 | 19_04-daily |
-| canonical-test | canonical-server-test-broadcast-001 | 19_10-daily |
-| center-for-internet-security-inc | cis-centos-6-v2-0-2-l1 | cis-centos-6-l1 |
-| center-for-internet-security-inc | cis-centos-7-l1 | cis-centos75-l1 |
-| center-for-internet-security-inc | cis-centos-7-v2-1-1-l1 | cis-centos7-l1 |
-| center-for-internet-security-inc | cis-oracle-linux-6-v1-0-0-l1 | cis-oralce-6-l1 |
-| center-for-internet-security-inc | cis-oracle-linux-7-v2-0-0-l1 | cis-oracle7-l1 |
-| center-for-internet-security-inc | cis-rhel-6-v2-0-2-l1 | cis-rhel6-l1 |
-| center-for-internet-security-inc | cis-rhel-7-v2-2-0-l1 | cis-rhel7-l1 |
-| center-for-internet-security-inc | cis-suse-linux-11-v2-0-0-l1 | cis-suse11-l1 |
-| center-for-internet-security-inc | cis-suse-linux-12-v2-0-0-l1 | cis-suse12-l1 |
-| center-for-internet-security-inc | cis-ubuntu-linux-1404-v2-0-0-l1 | cis-ubuntu1404-l1 |
-| center-for-internet-security-inc | cis-ubuntu-linux-1604-v1-0-0-l1 | cis-ubuntu1604-l1 |
-| center-for-internet-security-inc | cis-ubuntu-linux-1804-l1 | cis-ubuntu1804-l1 |
-| center-for-internet-security-inc | cis-windows-server-2008-r2-v3-0-1-l1 | cis-ws2008-r2-l1 |
-| center-for-internet-security-inc | cis-windows-server-2008-r2-v3-0-1-l2- | cis-ws2008-r2-l2 |
-| center-for-internet-security-inc | cis-windows-server-2012-r2-v2-2-1-l1 | cis-ws2012-r2-l1 |
-| center-for-internet-security-inc | cis-windows-server-2012-r2-v2-2-1-l2 | cis-ws2012-r2-l2 |
-| center-for-internet-security-inc | cis-windows-server-2012-v2-0-1-l1 | cis-ws2012-l1 |
-| center-for-internet-security-inc | cis-windows-server-2012-v2-0-1-l2 | cis-ws2012-l2 |
-| center-for-internet-security-inc | cis-windows-server-2016-v1-0-0-l1 | cis-ws2016-l1 |
-| center-for-internet-security-inc | cis-windows-server-2016-v1-0-0-l2 | cis-ws2016-l2 |
-| checkpoint | check-point-cg-r8020-blink | mgmt-byol |
-| checkpoint | check-point-cg-r8020-blink | sg-byol |
-| checkpoint | check-point-cg-r8020-blink | sg-ngtp |
-| checkpoint | check-point-cg-r8020-blink | sg-ngtx |
-| checkpoint | check-point-cg-r8020-blink-v2 | mgmt-25 |
-| checkpoint | check-point-cg-r8020-blink-v2 | mgmt-byol |
-| checkpoint | check-point-cg-r8020-blink-v2 | sg-byol |
-| checkpoint | check-point-cg-r8020-blink-v2 | sg-ngtp |
-| checkpoint | check-point-cg-r8020-blink-v2 | sg-ngtx |
-| checkpoint | check-point-r77-10 | SG-BYOL |
-| checkpoint | check-point-r77-10 | sg-ngtp |
-| checkpoint | check-point-vsec-r80 | mgmt-25 |
-| checkpoint | check-point-vsec-r80 | sg-byol |
-| checkpoint | check-point-vsec-r80 | sg-ngtp-v2 |
-| checkpoint | check-point-vsec-r80 | sg-ngtx |
-| checkpoint | check-point-vsec-r80-blink | sg-byol |
-| checkpoint | check-point-vsec-r80-blink | sg-ngtp-v2 |
-| checkpoint | check-point-vsec-r80-blink | sg-ngtx |
-| checkpoint | check-point-vsec-r80-blink-v2 | sg-byol |
-| checkpoint | check-point-vsec-r80-blink-v2 | sg-ngtp-v2 |
-| checkpoint | check-point-vsec-r80-blink-v2 | sg-ngtx |
-| checkpoint | sg2 | sg-byol2 |
-| chef-software | chef-automate-vm-image | byol |
-| cisco | cisco-asav | asav-azure-byol |
-| cisco | cisco-csr-1000v | 16_10-byol |
-| cisco | cisco-csr-1000v | 16_10-payg-ax |
-| cisco | cisco-csr-1000v | 16_10-payg-sec |
-| cisco | cisco-csr-1000v | 16_4 |
-| cisco | cisco-csr-1000v | 16_5 |
-| cisco | cisco-csr-1000v | 16_6 |
-| cisco | cisco-csr-1000v | 16_7 |
-| cisco | cisco-csr-1000v | 16_9-byol |
-| cisco | cisco-csr-1000v | 3_16 |
-| cisco | cisco-csr-1000v | csr-azure-byol |
-| cisco | cisco-ftdv | ftdv-azure-byol |
-| cisco | cisco-ftdv | ftdv-azure-payg |
-| cisco | cisco-meraki-vmx100 | vmx100 |
-| cisco | cisco_cloud_vedge_17_2_4 | cisco_vedge_azurecloud_18_2_0 |
-| cisco | cisco_cloud_vedge_17_2_4 | cisco_vedge_azurecloud_18_3_0 |
-| cisco | cisco_cloud_vedge_17_2_4 | cisco_vedge_azurecloud_18_4_0 |
-| citrix | netscaler-sd-wan | netscalersd-wanstandardedition |
-| citrix | netscalervpx-120 | netscalerbyol |
-| citrix | netscalervpx-121 | netscalerbyol |
-| citrix | netscalervpx-130 | netscalerbyol |
-| citrix | netscalervpx110-6531 | netscalerbyol |
-| citrix | netscalervpx111 | netscalerbyol |
-| citrix | xenapp-server | coldfireserver |
-| citrix | xenapp-vda-rdsh | coldfirerdsh |
-| citrix | xenapp-vda-rdsh | server2016rdsh |
-| citrix | xenapp-vda-vdi | coldfirevdi |
-| citrix | xenapp-vda-vdi | server2016vdi |
-| clouber | cws | cuber |
-| cloud-checkr | cloudcheckr-gov | cloudcheckr-gov |
-| cloud-infrastructure-services | ad-ca-2016 | ad-ca-2016 |
-| cloud-infrastructure-services | ad-dc-2016 | ad-dc-2016 |
-| cloud-infrastructure-services | ad-dc-2019 | ad-dc-2019 |
-| cloud-infrastructure-services | adfs-server-2016 | adfs-server-windows-2016 |
-| cloud-infrastructure-services | azure-ad-connect-2016 | azure-ad-connect-server-2016 |
-| cloud-infrastructure-services | dns-server-2016 | dns-server-2016 |
-| cloud-infrastructure-services | filezilla-ftp-server | secure-ftp-server-2016 |
-| cloud-infrastructure-services | hmailserver | hmailserver-email-server-2016 |
-| cloud-infrastructure-services | jenkins-windows-2016 | jenkins-windows-2016 |
-| cloud-infrastructure-services | ms-wap-2016 | ms-wap-2016 |
-| cloud-infrastructure-services | mysql-2016 | mysql-v8-w2016 |
-| cloud-infrastructure-services | nfs-server-2016 | nfs-server-2016 |
-| cloud-infrastructure-services | radius-2016 | radius-2016 |
-| cloud-infrastructure-services | rds-farm-2019 | rds-farm-2019 |
-| cloud-infrastructure-services | sftp-2016 | sftp-2016 |
-| cloud-infrastructure-services | softether-vpn-server | softether-vpn-server |
-| cloud-infrastructure-services | squid-proxy | squid-proxy |
-| cloud-infrastructure-services | wordpress-windows-2016 | wordpress-windows-2016 |
-| cloudbolt-software | cloudbolt | cb-lab25-1 |
-| cloudera | cloudera-centos-6 | cloudera-centos-6 |
-| cloudera | cloudera-centos-os | 6_7 |
-| cloudera | cloudera-centos-os | 6_8 |
-| cloudera | cloudera-centos-os | 7_2 |
-| cloudera | cloudera-centos-os | 7_4 |
-| cloudera | cloudera-centos-os | 7_5 |
-| cloudlink | cloudlink-securevm | cloudlink-securevm-67-byol |
-| cloudlink | cloudlink-securevm | cloudlink-securevm-68-byol |
-| codelathe | codelathe-filecloud-ubuntu | filecloud_ubuntu_byol |
-| codelathe | codelathe-filecloud-win2012r2 | filecloud_byol |
-| codelathe | filecloud-efss-windows2016 | filecloud_windows2016 |
-| cognosys | 1-click-secured-joomla-on-ubuntu-1404-lts | secured-joomla-on-ubuntu |
-| cognosys | 1-click-secured-joomla-on-ubuntu-1604-lts | 1-click-secured-joomla-on-ubuntu-1604-lts |
-| cognosys | 1-click-secured-joomla-on-ubuntu-1804-lts | 1-click-secured-joomla-on-ubuntu-18-04-lts |
-| cognosys | acquiadrupal7-iis8-0-mysql-win2012r2 | acquiadrupal7-iis8-0-mysql-win2012r2 |
-| cognosys | aspxcommerce_iis8_0_sqlserver_2008_win2012_r2 | aspxcommerce_iis8_0_sqlserver_2008_win2012_r2 |
-| cognosys | blogengine-iis8-0-mysql-win2012-r2 | blogengine-iis8-0-mysql-win2012-r2 |
-| cognosys | bugnet_iis8_0_mysql_win2012_r2 | new-bugnet_win2012_r2_lic |
-| cognosys | cakephp_on_iis8_0_win2012_r2 | cakephp_on_iis8_0_win2012_r2 |
-| cognosys | centos-7-4 | hardened-centos-7-4 |
-| cognosys | centos-7-5 | hardened-centos-7-5 |
-| cognosys | compositec1-iis8-win2012-r2 | compositec1-iis8-win2012-r2 |
-| cognosys | dasblog-iis-8-win2012-r2 | dasblog-win2012-r2 |
-| cognosys | dashcommerce-iis-8-sqlserver2008-win2012-r2 | dashcommerce-iis-8-sqlserver2008-win2012-r2 |
-| cognosys | deploy-a-secured-silverstripe-on-ubuntu-14-04-lts | secured-silverstripe-on-ubuntu-14-04-lts |
-| cognosys | dotnetage-iis8-sql-server2008-win2012-r2 | dotnetage-iis8-sql-server2008-win2012-r2 |
-| cognosys | drop-things-iis8-sql-server2008-win2012-r2 | drop-things-iis8-sql-server2008-win2012-r2 |
-| cognosys | ec-cube-global-iis-8-sqlserverwin2012-r2 | ec-cube-global-iis-8-sqlserverwin2012-r2 |
-| cognosys | galleryserver-iis8-win2012-r2 | galleryserver-iis8-win2012-r2 |
-| cognosys | hardened-iis-on-windows-server-2012-r2 | hardened-iis-on-windows-ser-2012-r2 |
-| cognosys | hardened-iis-on-windows-server-2016 | hardened-iis-on-windows-ser-2016 |
-| cognosys | hostanysite-iis8-sqlserver2008-win2012-r2 | hostanysite-iis8-sqlserver2008-win2012-r2 |
-| cognosys | invoice-ninja-2-5-1-1-on-ubuntu-1404 | new-secured-invoice-ninja-on-ubuntu-basic-lic |
-| cognosys | joomla-iis8-mysql-win2012-r2 | joomla-iis8-mysql-win2012-r2 |
-| cognosys | jruby-on-ubuntu-14-04-lts | jruby-on-ubuntu-14-04-lts |
-| cognosys | kooboo-cms-iis8-win2012-r2 | kooboo-cms-iis8-win2012-r2 |
-| cognosys | lemoon-iis8-sqlserver2008-win2012-r2 | lemoon-iis8-sqlserver2008-win2012-r2 |
-| cognosys | magento-iis8-mysql-win2012-r2 | magento-iis8-mysql-win2012-r2 |
-| cognosys | mayando-iis8-sqlserver2008-win2012-r2 | mayando-iis8-sqlserver2008-win2012-r2 |
-| cognosys | modxccmx-iis8-mysql-win2012-r2 | modxccmx-iis8-mysql-win2012-r2 |
-| cognosys | moodle-iis8-mysql-win2012-r2 | moodle-win2012-r2 |
-| cognosys | mvcforum-iis8-sqlserver2008-win2012-r2 | mvcforum-iis8-sqlserver2008-win2012-r2 |
-| cognosys | mycv-iis8-win2012-r2 | mycv-iis8-win2012-r2 |
-| cognosys | n2cms-iis8-sqlserver2008-win2012-r2 | n2cms-iis8-sqlserver2008-win2012-r2 |
-| cognosys | nopcommerce-iis8-sqlserver2008-win2012-r2 | new-nopcommerce-win2012-r2-basic-lic |
-| cognosys | ocportal-iis8-mysql-win2012-r2 | ocportal-iis8-mysql-win2012-r2 |
-| cognosys | opencart-iis8-mysql-win2012-r2 | new-opencart-win2012-r2-basic-lic |
-| cognosys | orchard-iis8-win2012-r2 | orchard-iis8-win2012-r2 |
-| cognosys | orchardcollaboration-iis8-sqlser2008-win2012-r2 | orchardcollaboration-iis8-sqlser2008-win2012-r2 |
-| cognosys | owa-iis8-mysql-win2012-r2 | owa-iis8-mysql-win2012-r2 |
-| cognosys | phpbb-iis8-mysql-win2012-r2 | phpbb-iis8-mysql-win2012-r2 |
-| cognosys | piwik-iis8-mysql-win2012-r2 | piwik-iis8-mysql-win2012-r2 |
-| cognosys | razorc-iis8-win2012-r2 | razorc-iis8-win2012-r2 |
-| cognosys | schlixcms-iis8-mysql-win2012-r2 | schlixcms-iis8-mysql-win2012-r2 |
-| cognosys | sec1011-dokuwiki-on-ubuntu-1404 | new-dokuwiki-on-hardened-ubuntu-1404-pro-lic |
-| cognosys | sec1019-secured-tomcat-on-ubuntu-1404 | secured-tomcat-on-ubuntu |
-| cognosys | sec1027-secured-wordpress-on-ubuntu-1404 | sec1027-secured-wordpress-on-ubuntu-1404 |
-| cognosys | sec1030-secured-subversion-on-ubuntu-1404 | secured-subversion-on-ubuntu-basic |
-| cognosys | sec1031-secured-passenger-nginx-on-ubuntu-1404 | sec1031-secured-passenger-nginx-on-ubuntu-1404 |
-| cognosys | sec1035-secured-jenkins-on-ubuntu-1404 | new-secured-jenkins-on-ubuntu |
-| cognosys | secure-cloud-lamp-ubuntu-1404 | secured-lamp-on-ubuntu-14-04-lts |
-| cognosys | secured-abantecart-on-ubuntu-14-04-lts | new-secured-abantecart-on-ubuntu-14-04-lts |
-| cognosys | secured-artifactory-on-ubuntu-14-04-lts | secured-artifactory-on-ubuntu-14-04-lts-basic-lic |
-| cognosys | secured-cakephp-on-ubuntu-14-04-lts | new-secure-cakephp-ubuntu |
-| cognosys | secured-concrete5-on-ubuntu-14-04-lts | secured-concrete5-on-ubuntu-14-04 |
-| cognosys | secured-coppermine-on-ubuntu-14-04-lts | secured-coppermine-on-ubuntu-14-04-lts |
-| cognosys | secured-crushftp-on-ubuntu-14-04-lts | new-secured-crushftp-on-ubuntu-14-04-lts-basic-lic |
-| cognosys | secured-digital-asset-management-on-wind-2012-r2 | secured-digital-asset-management-win12-basic |
-| cognosys | secured-dolivbarr-on-ubuntu-14-04-lts | secured-dolivbarr-on-ubuntu-14-04-lts |
-| cognosys | secured-enterprise-nginx-varnish-haproxy-php | sec-enterprise-nginx-varnish-haproxy-php-lic |
-| cognosys | secured-exoplatform-on-ubuntu-14-04-lts | secured-exoplatform-on-ubuntu-14-04-lts |
-| cognosys | secured-ghost-on-ubuntu-14-04-lts | new-secured-ghost-on-ubuntu-14-04-lts |
-| cognosys | secured-gradle-on-ubuntu-14-04-lts | secured-gradle-on-ubuntu-14-04-lts |
-| cognosys | secured-jbossas-on-ubuntu-14-04-lts | secured-jbossas-on-ubuntu-14-04-lts |
-| cognosys | secured-mariadb-on-ubuntu-16-04 | secured-maria-db-on-ubuntu-16-04 |
-| cognosys | secured-ngnix-on-centos-7-3 | secured-ngnix-on-centos-7-3 |
-| cognosys | secured-ngnix-on-ubuntu-14-04-lts | new-secured-ngnix-on-ubuntu-14-04-lts-basic-lic |
-| cognosys | secured-ngnix-on-ubuntu-16-04-lts | hardened-ngnix-on-ubuntu-16-04-lts |
-| cognosys | secured-passenger-nginx-on-centos | secured-passenger-nginx-on-centos |
-| cognosys | secured-redmine-on-centos | secured-redmine-on-centos |
-| cognosys | secured-reportserverent-on-ubuntu-14-04-lts | secured-reportserverent-on-ubuntu-14-04-lts |
-| cognosys | secured-seopanel-on-ubuntu-14-04-lts | new-secured-seopanel-on-ubuntu-14-04-lts |
-| cognosys | secured-simple-machines-on-ubuntu-14-04-lts | secured-simple-machines-on-ubuntu-14-04-lts |
-| cognosys | secured-suitecrm-on-ubuntu-14-04-lts | secured-suitecrm-on-ubuntu-14-04-lts |
-| cognosys | secured-testlink-on-ubuntu-14-04-lts | secured-testlink-on-ubuntu-14-04-lts |
-| cognosys | secured-thinkup-on-ubuntu-14-04-lts | secured-thinkup-on-ubuntu-14-04-lts |
-| cognosys | secured-tikiwikicms-on-ubuntu-14-04-lts | secured-tikiwikicms-on-ubuntu-14-04-lts |
-| cognosys | secured-tinytinyrss-on-ubuntu-14-04-lts | secured-tinytinyrss-on-ubuntu-14-04-lts |
-| cognosys | secured-trac-on-ubuntu-14-04-lts | secured-trac-on-ubuntu-14-04-lts |
-| cognosys | secured-typo3-on-ubuntu-14-04-lts | new-secured-typo3-on-ubuntu-14-04-lts-basic-lic |
-| cognosys | secured-varnish-on-ubuntu-1404 | hardened-varnish-ubuntu-1404-lts |
-| cognosys | secured-wildfly-on-ubuntu-14-04-lts | secured-wildfly-on-ubuntu-14-04-lts |
-| cognosys | secured-x-cart-on-ubuntu-14-04-lts | secured-x-cart-on-ubuntu-14-04-lts |
-| cognosys | secured-xoops-on-ubuntu-14-04-lts | secured-xoops-on-ubuntu-14-04-lts |
-| cognosys | secured-zurmo-on-ubuntu-14-04-lts | secured-zurmo-on-ubuntu-14-04-lts |
-| cognosys | shoppingcart-iis8-sqlserver2008-win2012-r2 | shoppingcart-iis8-sqlserver2008-win2012-r2 |
-| cognosys | silverstripecms-iis8-win2012-r2 | new-silverstripecms-win-2012-r2-basic-lic |
-| cognosys | sql-server-2007-web-win2016-debug-utilities | sql-server-2017-web-win2016-debug-utilities |
-| cognosys | sql-server-2016-sp1-ent-win2016-debug-utilities | sql-server-2016-sp1-ent-win2016-debug-utilities |
-| cognosys | sql-server-2016-sp1-std-win2016-debug-utilities | sql-server-2016-sp1-std-win2016-debug-utilities |
-| cognosys | sql-server-2016-sp1-web-win2016-debug-utilities | sql-server-2016-sp1-web-win2016-debug-utilities |
-| cognosys | sql-server-2016-sp2-ent-win2016-debug-utilities | sql-server-2016-sp2-ent-win2016-debug-utilities |
-| cognosys | sql-server-2016-sp2-std-win2016-debug-utilities | sql-server-2016-sp2-std-win2016-debug-utilities |
-| cognosys | sql-server-2016-sp2-web-winser2016-debug-utilities | sql-server-2016-sp2-web-win2016-debug-utilities |
-| cognosys | sql-server-2017-std-win2016-utils | sql-server-2017-std-win2016-utils |
-| cognosys | sql-server-2017-win2016-debug-utilities | sql-server-2017-ent-win2016-debug-utilities |
-| cognosys | surveyproject-iis8-win2012-r2 | surveyproject-iis8-win2012-r2 |
-| cognosys | suse15 | suse-15-hardened |
-| cognosys | tikiwikicms-iis8-win2012-r2 | tikiwikicms-iis8-win2012-r2 |
-| cognosys | ubuntu-18-04 | ubuntu-18-04-hardened |
-| cognosys | ubuntu-18-04-lts | hardened-ubuntu-18-04-lts |
-| cognosys | umbracocms-iis8-win2012-r2 | umbracocms-iis8-win2012-r2 |
-| cognosys | white-hat-hacking-security-ninja-windows2012 | security-ninja-for-white-hat-hacking-for-pre-lic |
-| cohesive | vns3_4x_network_security | cohesive-vns3-4x-byol |
-| cohesive | vns3_4x_network_security | cohesive-vns3-4_4_x-byol |
-| cohesive | vns3_4x_network_security | cohesive-vns3-4_4_x-free |
-| cohesive | vns3_4x_network_security | cohesive-vns3-4_4_x-lite |
-| commvault | commvault | commvaulttrial |
-| composable | composable | composable-govt |
-| connecting-software | cb-replicator | vhd-replicator-1000users |
-| connecting-software | cb-replicator | vhd-replicator-100users |
-| connecting-software | cb-replicator | vhd-replicator-100users-trial |
-| connecting-software | cb-replicator | vhd-replicator-1500users |
-| connecting-software | cb-replicator | vhd-replicator-2000users |
-| connecting-software | cb-replicator | vhd-replicator-200users |
-| connecting-software | cb-replicator | vhd-replicator-300users |
-| connecting-software | cb-replicator | vhd-replicator-400users |
-| connecting-software | cb-replicator | vhd-replicator-500users |
-| connecting-software | cb-replicator | vhd-replicator-750users |
-| connecting-software | cb-replicator-byol | cbrep-gov-byol |
-| CoreOS | CoreOS | Stable |
-| couchbase | couchbase-server-enterprise | byol |
-| couchbase | couchbase-sync-gateway-enterprise | byol |
-| credativ | Debian | 7 |
-| credativ | Debian | 8 |
-| credativ | Debian | 8-backports |
-| credativ | Debian | 9 |
-| credativ | Debian | 9-backports |
-| credativ | Debian | 9-beta |
-| credativ | debian-test | sid |
-| cyxtera | appgatesdp-vm | v4_2_vm |
-| cyxtera | appgatesdp-vm | v4_3_vm |
-| datastax | datastax-ddac | datastaxddac |
-| datastax | datastax-ddac-dev | datastaxddacdev |
-| datastax | datastax-enterprise | datastaxenterprise |
-| dellemc | dell-emc-avamar-virtual-edition | avamar-virtual-edition-1810 |
-| dellemc | dell-emc-avamar-virtual-edition | avamar-virtual-edition-1820 |
-| dellemc | dell-emc-avamar-virtual-edition | avamar-virtual-edition-751 |
-| dellemc | dell-emc-datadomain-virtual-edition | ddve-31-ver-060100 |
-| dellemc | dell-emc-datadomain-virtual-edition | ddve-31-ver-060101 |
-| dellemc | dell-emc-datadomain-virtual-edition-v4 | ddve-40-ver-060200 |
-| dellemc | dell-emc-networker-virtual-edition | networker-virtual-edition-181 |
-| dell_software | uccs | uccs |
-| delphix | delphix_dynamic_data_platform | delphix_dynamic_data_platform_5-3 |
-| derdack | enterprisealert | enterprisealert-2017-datacenter-10cal |
-| derdack | enterprisealert | enterprisealert-2017-datacenter-15cal |
-| derdack | enterprisealert | enterprisealert-2017-datacenter-20cal |
-| derdack | enterprisealert | enterprisealert-2017-datacenter-25cal |
-| derdack | enterprisealert | enterprisealert-2017-datacenter-50cal |
-| derdack | enterprisealert | enterprisealert-2017-datacenter-byol |
-| docker | docker-ee | docker-ee |
-| docker | docker4azure-cs | docker4azure-cs-1_12 |
-| docker | docker4azure-cs | docker4azure-cs-1_1x |
-| dynatrace | ruxit-managed-vm | byol-managed |
-| enterprise-ethereum-alliance | quorum-demo | quorum-demo |
-| esri | arcgis-10-4-for-server | cloud |
-| esri | arcgis-desktop | desktop-byol-106 |
-| esri | arcgis-desktop | desktop-byol-1061 |
-| esri | arcgis-desktop | desktop-byol-107 |
-| esri | arcgis-enterprise | byol |
-| esri | arcgis-enterprise | byol-1051 |
-| esri | arcgis-enterprise-106 | byol-106 |
-| esri | arcgis-enterprise-106 | byol-1061 |
-| esri | arcgis-enterprise-107 | byol-107 |
-| esri | arcgis-for-server | cloud |
-| eventtracker | eventtracker-siem | etlm |
-| eventtracker | eventtracker-siem | etsc |
-| f5-networks | f5-big-ip-adc | f5-bigip-virtual-edition-better-byol |
-| f5-networks | f5-big-ip-adc | f5-bigip-virtual-edition-good-byol |
-| f5-networks | f5-big-ip-advanced-waf | f5-bigip-virtual-edition-1g-waf-hourly |
-| f5-networks | f5-big-ip-advanced-waf | f5-bigip-virtual-edition-200m-waf-hourly |
-| f5-networks | f5-big-ip-advanced-waf | f5-bigip-virtual-edition-25m-waf-hourly |
-| f5-networks | f5-big-ip-best | f5-bigip-virtual-edition-1g-best-hourly |
-| f5-networks | f5-big-ip-best | f5-bigip-virtual-edition-200m-best-hourly |
-| f5-networks | f5-big-ip-best | f5-bigip-virtual-edition-25m-best-hourly |
-| f5-networks | f5-big-ip-best | f5-bigip-virtual-edition-best-byol |
-| f5-networks | f5-big-ip-better | f5-bigip-virtual-edition-1g-better-hourly |
-| f5-networks | f5-big-ip-better | f5-bigip-virtual-edition-200m-better-hourly |
-| f5-networks | f5-big-ip-better | f5-bigip-virtual-edition-25m-better-hourly |
-| f5-networks | f5-big-ip-better | f5-bigip-virtual-edition-better-byol |
-| f5-networks | f5-big-ip-byol | f5-big-all-1slot-byol |
-| f5-networks | f5-big-ip-byol | f5-big-all-2slot-byol |
-| f5-networks | f5-big-ip-byol | f5-big-ltm-1slot-byol |
-| f5-networks | f5-big-ip-byol | f5-big-ltm-2slot-byol |
-| f5-networks | f5-big-ip-cloud-edition | f5-big-ip-per-app-ve-awf-200m-hourly |
-| f5-networks | f5-big-ip-cloud-edition | f5-big-ip-per-app-ve-awf-25m-hourly |
-| f5-networks | f5-big-ip-cloud-edition | f5-big-ip-per-app-ve-awf-byol |
-| f5-networks | f5-big-ip-cloud-edition | f5-big-ip-per-app-ve-ltm-200m-hourly |
-| f5-networks | f5-big-ip-cloud-edition | f5-big-ip-per-app-ve-ltm-25m-hourly |
-| f5-networks | f5-big-ip-cloud-edition | f5-big-ip-per-app-ve-ltm-byol |
-| f5-networks | f5-big-ip-cloud-edition | f5-bigiq-virtual-edition-byol |
-| f5-networks | f5-big-ip-good | f5-bigip-virtual-edition-1g-good-hourly |
-| f5-networks | f5-big-ip-good | f5-bigip-virtual-edition-200m-good-hourly |
-| f5-networks | f5-big-ip-good | f5-bigip-virtual-edition-25m-good-hourly |
-| f5-networks | f5-big-ip-good | f5-bigip-virtual-edition-good-byol |
-| f5-networks | f5-big-ip-per-app-ve | f5-big-ip-per-app-ve-awf-200m-hourly |
-| f5-networks | f5-big-ip-per-app-ve | f5-big-ip-per-app-ve-awf-25m-hourly |
-| f5-networks | f5-big-ip-per-app-ve | f5-big-ip-per-app-ve-ltm-200m-hourly |
-| f5-networks | f5-big-ip-per-app-ve | f5-big-ip-per-app-ve-ltm-25m-hourly |
-| f5-networks | f5-big-iq | f5-bigiq-virtual-edition-byol |
-| flashgrid-inc | flashgrid-racnode | fg-1709-ol |
-| flashgrid-inc | flashgrid-racnode | fg-1709-ol-mc |
-| flashgrid-inc | flashgrid-racnode | fg-1709-rh |
-| flashgrid-inc | flashgrid-racnode | fg-1709-rh-mc |
-| flashgrid-inc | flashgrid-racnode | fg-ol7-priv-byol |
-| flashgrid-inc | flashgrid-racnode | fg-rh7-priv-byol |
-| flashgrid-inc | flashgrid-skycluster | skycluster-ol |
-| flashgrid-inc | flashgrid-skycluster | skycluster-ol-1month-free-private |
-| flashgrid-inc | flashgrid-skycluster | skycluster-ol-1month-trial-priv |
-| flashgrid-inc | flashgrid-skycluster | skycluster-ol-priv-8x5 |
-| flashgrid-inc | flashgrid-skycluster | skycluster-ol-priv-byol |
-| flashgrid-inc | flashgrid-skycluster | skycluster-rh |
-| flashgrid-inc | flashgrid-skycluster | skycluster-rh-1month-free-private |
-| flashgrid-inc | flashgrid-skycluster | skycluster-rh-1month-trial-priv |
-| flashgrid-inc | flashgrid-skycluster | skycluster-rh-priv-8x5 |
-| flashgrid-inc | flashgrid-skycluster | skycluster-rh-priv-byol |
-| forcepoint-llc | forcepoint-email-security-853 | forcepoint_email_security_v853_appliance |
-| forcepoint-llc | forcepoint-email-security-853 | forcepoint_email_security_v853_fsm |
-| forcepoint-llc | forcepoint-ngfw | ngfw_byol |
-| forcepoint-llc | forcepoint-ngfw | ngfw_payg |
-| fortinet | fortinet-fortianalyzer | fortinet-fortianalyzer |
-| fortinet | fortinet-fortimanager | fortinet-fortimanager |
-| fortinet | fortinet_fortigate-vm_v5 | fortinet_fg-vm |
-| fortinet | fortinet_fortigate-vm_v5 | fortinet_fg-vm_payg |
-| fortinet | fortinet_fortimail | fortinet_fortimail |
-| fortinet | fortinet_fortisandbox_vm | fortinet_fsa-vm |
-| fortinet | fortinet_fortiweb-vm_v5 | fortinet_fw-vm |
-| gigamon-inc | gigamon-fm-5_5_00 | gfm-azure |
-| gigamon-inc | gigamon-fm-5_5_00 | gvtap-cntlr |
-| gigamon-inc | gigamon-fm-5_5_00 | vseries-cntlr |
-| gigamon-inc | gigamon-fm-5_5_00 | vseries-node |
-| gigamon-inc | gigamon-fm-5_5_00_hourly | gfm-azure-hourly |
-| gigamon-inc | gigamon-fm-5_6_00 | gfm-azure |
-| gigamon-inc | gigamon-fm-5_6_00 | gvtap-cntlr |
-| gigamon-inc | gigamon-fm-5_6_00 | vseries-cntlr |
-| gigamon-inc | gigamon-fm-5_6_00 | vseries-node |
-| gigamon-inc | gigamon-fm-5_6_00_hourly | gfm-azure-hourly |
-| GitHub | GitHub-Enterprise | GitHub-Enterprise |
-| hanu | hanu-insightv2 | hanu-insight-v2-enterprise-byol |
-| hanu | hanu-insightv2 | hanu-insight-v2-standard-byol |
-| ibm | ibm-security-guardium-multi-cloud | ibm-security-guardium-aggregator |
-| ibm | ibm-security-guardium-multi-cloud | ibm-security-guardium-collector |
-| infoblox | infoblox-vnios-te-v1420 | vnios-cp-v1400 |
-| infoblox | infoblox-vnios-te-v1420 | vnios-cp-v1405 |
-| infoblox | infoblox-vnios-te-v1420 | vnios-cp-v2200 |
-| infoblox | infoblox-vnios-te-v1420 | vnios-cp-v2205 |
-| infoblox | infoblox-vnios-te-v1420 | vnios-cp-v800 |
-| infoblox | infoblox-vnios-te-v1420 | vnios-cp-v805 |
-| infoblox | infoblox-vnios-te-v1420 | vnios-te-v1420 |
-| infoblox | infoblox-vnios-te-v1420 | vnios-te-v2220 |
-| infoblox | infoblox-vnios-te-v1420 | vnios-te-v820 |
-| infoblox | infoblox-vnios-te-v1420 | vsot |
-| informatica | big-data-management-10-2-1 | informatica-big-data-management-v-10-2-1 |
-| informatica | db-10-2-2 | db-v-10-2-2 |
-| informatica | edc-10-2-2 | eic-v-10-2-2 |
-| informatica | ihs-10-2-2 | ihs-v-10-2-2 |
-| informatica | platform_10_2_hf1_domain_rhel-7-3_byol | byol_rhel_platform_10_2_hf1 |
-| informatica | platform_10_2_hf1_domain_windows_byol | byol_win_platform_10_2_hf1 |
-| informatica | platform_10_2_hf2_domain_byol | byol_rhel_platform_10_2_hf2 |
-| isvtestukbigcat | test_test_120520180725001 | testtestukbigcatvm001 |
-| isvtestuklegacy | testtest12192018070501 | isvtestuklegacysku2 |
-| isvtestuklegacy | testtestvmoffer121720180842 | testtestvmoffer01 |
-| isvtestuklegacy | test_test_111920181025-003 | isvtestuklegacyvm003 |
-| isvtestuklegacy | test_test_111920181130-004 | isvtestuklegacyvm004 |
-| isvtestuklegacy | test_test_111920181210-005 | isvtestuklegacyvm005 |
-| jamcracker | 4632d5b4-feb0-4332-8452-f2e66133672f | jamcracker_cloud_control_appliance_version5 |
-| jamcracker | jamcracker-cloudanalytics | jamcracker-cloudanalytics-version6_1 |
-| jamcracker | jamcracker-cloudanalytics-version4 | jamcracker-cloud-analytics-version4 |
-| jamcracker | jamcracker-cloudanalytics-version5 | jamcracker-cloudanalytics-version5 |
-| jamcracker | jamcracker-csb-service-provider | jc-csbsp-version5 |
-| jamcracker | jamcracker-csb-serviceprovider | jc-csbsp-version5 |
-| jamcracker | jamcracker-csb-serviceprovider | jc-csbsp-version6-1 |
-| jamcracker | jamcracker-csb-standard | jamcracker-csb-standard-version-6-1 |
-| jamcracker | jamcracker-csb-standard | jamcracker-csb-standard-version5 |
-| jamcracker | jamcracker-csb-standard-v3 | jamcracker-csb-standard-v3 |
-| jamcracker | jamcracker-csb-standard-version4 | jamcracker-csb-standard-version4 |
-| jamcracker | jamcracker-csbservice-provider | jamcracker-csb-serviceprovider-7-2 |
-| jamcracker | jamcracker-csbservice-provider-7 | jamcracker-csb-serviceprovvider-7-3 |
-| jamcracker | jamcracker-hybrid-cloud-management-version4 | jamcracker-hybrid-cloud-management-version4 |
-| jamcracker | jamcracker_cloud_control_appliance_version4 | jamcracker-cloud-control-appliance-version4 |
-| jamcracker | jc-csb-service-provider | jamcracker-csb-service-provider-version-7 |
-| jamcracker | jsdnapp_csb_serviceprovider-version4 | jc-csbsp-version4 |
-| jamcracker | jsdnapp_hybrid | jamcracker-hybrid-cloud-management-version6-1 |
-| jamcracker | jsdnapp_hybrid_v3 | jamcracker-hybrid-cloud-management-version5 |
-| juniper-networks | vmx-services-gateway-byol | vmx-services-gateway-byol |
-| juniper-networks | vsrx-next-generation-firewall | vsrx-byol-azure-image |
-| juniper-networks | vsrx-next-generation-firewall-solution-template | vsrx-byol-azure-image-solution-template |
-| kali-linux | kali-linux | kali |
-| kemptech | kemp360central-byol | kemp360central-byol |
-| kemptech | kemp360central-byol | kemp360central-spla |
-| kemptech | vlm-azure | basic-byol |
-| kemptech | vlm-azure | freeloadmaster |
-| kemptech | vlm-azure | vlm-byol-lts |
-| kemptech | vlm-azure | vlm-spla |
-| kemptech | vlm-azure | vlm-spla-lts |
-| kinetica | kineticadbbyol | centos75-620 |
-| liebsoft | enterprise_random_password_manager | redim5521 |
-| mapd | omnisci-platform | omnisci-platform |
-| mapr-technologies | mapr52-base-dev | 5202 |
-| marand | better_care_platform | better-free |
-| marklogic | marklogic-9-byol | ml9031_centos_byol |
-| marklogic | marklogic-developer-9 | ml9031_centos |
-| mfe_azure | mcafee_vnsp_controller_for_azure | mcafee-vnsp-controller |
-| mfe_azure | mcafee_vnsp_for_azure | mcafee-vnsp-azure-ips-sensor-byol |
-| mfe_azure | mcafee_vnsp_nsm_for_azure | mcafee-vnsp-azure-nsm |
-| mico | mobile-impact-platform | mipvm |
-| microsoft-ads | linux-data-science-vm-ubuntu | linuxdsvmubuntubyol |
-| microsoft-ads | windows-data-science-vm | windows2016byol |
-| microsoft-avere | vfxt | avere-vfxt-controller |
-| microsoft-avere | vfxt | avere-vfxt-node |
-| microsoft-azure-batch | centos-container | 7-5 |
-| microsoft-azure-batch | centos-container | 7-6 |
-| microsoft-azure-batch | centos-container-rdma | 7-4 |
-| microsoft-azure-batch | ubuntu-server-container | 16-04-lts |
-| microsoft-azure-batch | ubuntu-server-container-rdma | 16-04-lts |
-| microsoft-dsvm | aml-workstation | ubuntu |
-| microsoft-dsvm | azureml | runtime |
-| microsoft-dsvm | dsvm-windows | server-2016 |
-| microsoft-dsvm | linux-data-science-vm-ubuntu | linuxdsvmubuntu |
-| microsoft-hyperv | 1903_preview | datacenter-core |
-| microsoft-hyperv | rs5_preview | 2019-datacenter |
-| MicrosoftAzureSiteRecovery | Process-Server | Windows-2012-R2-Datacenter |
-| MicrosoftHybridCloudStorage | StorSimple | StorSimple-Garda-8000-Series |
-| MicrosoftHybridCloudStorage | StorSimple | StorSimple-Garda-8000-Series-BBUpdate |
-| MicrosoftHybridCloudStorage | StorSimpleVA | StorSimpleUpdate3RC |
-| MicrosoftOSTC | FreeBSD | 10.3 |
-| MicrosoftOSTC | FreeBSD | 11 |
-| MicrosoftOSTC | FreeBSD | 11.0 |
-| MicrosoftRServer | MLServer-CentOS | Enterprise |
-| MicrosoftRServer | MLServer-RedHat | Enterprise |
-| MicrosoftRServer | MLServer-Ubuntu | Enterprise |
-| MicrosoftRServer | MLServer-WS2016 | Enterprise |
-| MicrosoftRServer | RServer-CentOS | Enterprise |
-| MicrosoftRServer | RServer-RedHat | Enterprise |
-| MicrosoftRServer | RServer-Ubuntu | Enterprise |
-| MicrosoftRServer | RServer-WS2016 | Enterprise |
-| MicrosoftSharePoint | MicrosoftSharePointServer | 2016 |
-| MicrosoftSharePoint | MicrosoftSharePointServer | 2019 |
-| MicrosoftSQLServer | SQL2008R2SP3-WS2008R2SP1 | Enterprise |
-| MicrosoftSQLServer | SQL2008R2SP3-WS2008R2SP1 | Express |
-| MicrosoftSQLServer | SQL2008R2SP3-WS2008R2SP1 | Standard |
-| MicrosoftSQLServer | SQL2008R2SP3-WS2008R2SP1 | Web |
-| MicrosoftSQLServer | sql2008r2sp3-ws2008r2sp1-byol | enterprise |
-| MicrosoftSQLServer | sql2008r2sp3-ws2008r2sp1-byol | standard |
-| MicrosoftSQLServer | SQL2012SP3-WS2012R2 | Enterprise |
-| MicrosoftSQLServer | SQL2012SP3-WS2012R2 | Express |
-| MicrosoftSQLServer | SQL2012SP3-WS2012R2 | Standard |
-| MicrosoftSQLServer | SQL2012SP3-WS2012R2 | Web |
-| MicrosoftSQLServer | SQL2012SP3-WS2012R2-BYOL | Enterprise |
-| MicrosoftSQLServer | SQL2012SP3-WS2012R2-BYOL | Standard |
-| MicrosoftSQLServer | SQL2012SP4-WS2012R2 | Enterprise |
-| MicrosoftSQLServer | SQL2012SP4-WS2012R2 | Express |
-| MicrosoftSQLServer | SQL2012SP4-WS2012R2 | Standard |
-| MicrosoftSQLServer | SQL2012SP4-WS2012R2 | Web |
-| MicrosoftSQLServer | SQL2012SP4-WS2012R2-BYOL | Enterprise |
-| MicrosoftSQLServer | SQL2012SP4-WS2012R2-BYOL | Standard |
-| MicrosoftSQLServer | SQL2014SP1-WS2012R2 | Enterprise |
-| MicrosoftSQLServer | SQL2014SP1-WS2012R2 | Express |
-| MicrosoftSQLServer | SQL2014SP1-WS2012R2 | Standard |
-| MicrosoftSQLServer | SQL2014SP1-WS2012R2 | Web |
-| MicrosoftSQLServer | SQL2014SP1-WS2012R2-BYOL | Enterprise |
-| MicrosoftSQLServer | SQL2014SP1-WS2012R2-BYOL | Standard |
-| MicrosoftSQLServer | SQL2014SP2-WS2012R2 | Enterprise |
-| MicrosoftSQLServer | SQL2014SP2-WS2012R2 | Express |
-| MicrosoftSQLServer | SQL2014SP2-WS2012R2 | Standard |
-| MicrosoftSQLServer | SQL2014SP2-WS2012R2 | Web |
-| MicrosoftSQLServer | SQL2014SP2-WS2012R2-BYOL | Enterprise |
-| MicrosoftSQLServer | SQL2014SP2-WS2012R2-BYOL | Standard |
-| MicrosoftSQLServer | sql2014sp3-ws2012r2 | enterprise |
-| MicrosoftSQLServer | sql2014sp3-ws2012r2 | enterprise-byol |
-| MicrosoftSQLServer | sql2014sp3-ws2012r2 | express |
-| MicrosoftSQLServer | sql2014sp3-ws2012r2 | sqldev |
-| MicrosoftSQLServer | sql2014sp3-ws2012r2 | standard |
-| MicrosoftSQLServer | sql2014sp3-ws2012r2 | standard-byol |
-| MicrosoftSQLServer | sql2014sp3-ws2012r2 | web |
-| MicrosoftSQLServer | sql2014sp3-ws2012r2-byol | enterprise |
-| MicrosoftSQLServer | sql2014sp3-ws2012r2-byol | standard |
-| MicrosoftSQLServer | SQL2016-WS2012R2 | Enterprise |
-| MicrosoftSQLServer | SQL2016-WS2012R2 | Express |
-| MicrosoftSQLServer | SQL2016-WS2012R2 | SQLDEV |
-| MicrosoftSQLServer | SQL2016-WS2012R2 | Standard |
-| MicrosoftSQLServer | SQL2016-WS2012R2 | Web |
-| MicrosoftSQLServer | SQL2016-WS2012R2-BYOL | Enterprise |
-| MicrosoftSQLServer | SQL2016-WS2012R2-BYOL | Standard |
-| MicrosoftSQLServer | SQL2016-WS2016 | Enterprise |
-| MicrosoftSQLServer | SQL2016-WS2016 | SQLDEV |
-| MicrosoftSQLServer | SQL2016-WS2016 | Standard |
-| MicrosoftSQLServer | SQL2016-WS2016 | Web |
-| MicrosoftSQLServer | SQL2016-WS2016-BYOL | Enterprise |
-| MicrosoftSQLServer | SQL2016-WS2016-BYOL | Standard |
-| MicrosoftSQLServer | SQL2016SP1-WS2016 | Enterprise |
-| MicrosoftSQLServer | SQL2016SP1-WS2016 | Express |
-| MicrosoftSQLServer | SQL2016SP1-WS2016 | SQLDEV |
-| MicrosoftSQLServer | SQL2016SP1-WS2016 | Standard |
-| MicrosoftSQLServer | SQL2016SP1-WS2016 | Web |
-| MicrosoftSQLServer | SQL2016SP1-WS2016-BYOL | Enterprise |
-| MicrosoftSQLServer | SQL2016SP1-WS2016-BYOL | Standard |
-| MicrosoftSQLServer | SQL2016SP2-WS2016 | Enterprise |
-| MicrosoftSQLServer | SQL2016SP2-WS2016 | Express |
-| MicrosoftSQLServer | SQL2016SP2-WS2016 | SQLDEV |
-| MicrosoftSQLServer | SQL2016SP2-WS2016 | Standard |
-| MicrosoftSQLServer | SQL2016SP2-WS2016 | Web |
-| MicrosoftSQLServer | SQL2016SP2-WS2016-BYOL | Enterprise |
-| MicrosoftSQLServer | SQL2016SP2-WS2016-BYOL | Standard |
-| MicrosoftSQLServer | SQL2017-RHEL73 | Evaluation |
-| MicrosoftSQLServer | sql2017-ubuntu1604 | Enterprise |
-| MicrosoftSQLServer | sql2017-ubuntu1604 | Express |
-| MicrosoftSQLServer | sql2017-ubuntu1604 | SQLDEV |
-| MicrosoftSQLServer | sql2017-ubuntu1604 | Standard |
-| MicrosoftSQLServer | sql2017-ubuntu1604 | Web |
-| MicrosoftSQLServer | SQL2017-WS2016 | Enterprise |
-| MicrosoftSQLServer | SQL2017-WS2016 | Express |
-| MicrosoftSQLServer | SQL2017-WS2016 | SQLDEV |
-| MicrosoftSQLServer | SQL2017-WS2016 | Standard |
-| MicrosoftSQLServer | SQL2017-WS2016 | Web |
-| MicrosoftSQLServer | SQL2017-WS2016-BYOL | Enterprise |
-| MicrosoftSQLServer | SQL2017-WS2016-BYOL | Standard |
-| MicrosoftSQLServer | sql2019-rhel7 | SQLDEV |
-| MicrosoftSQLServer | sql2019-ubuntu1604 | SQLDEV |
-| MicrosoftSQLServer | SQL2019-WS2016 | SQLDEV |
-| MicrosoftVisualStudio | VisualStudio | VS-2015-Comm-VSU3-AzureSDK-29-WS2012R2 |
-| MicrosoftVisualStudio | VisualStudio | VS-2015-Comm-VSU3-AzureSDK-291-WS2012R2 |
-| MicrosoftVisualStudio | VisualStudio | VS-2015-Ent-VSU3-AzureSDK-29-WS2012R2 |
-| MicrosoftVisualStudio | VisualStudio | VS-2017-Comm-Latest-Preview-WS2016 |
-| MicrosoftVisualStudio | VisualStudio | VS-2017-Comm-Latest-WS2016 |
-| MicrosoftVisualStudio | VisualStudio | VS-2017-Comm-WS2016 |
-| MicrosoftVisualStudio | VisualStudio | VS-2017-Ent-Latest-Preview-WS2016 |
-| MicrosoftVisualStudio | VisualStudio | VS-2017-Ent-Latest-WS2016 |
-| MicrosoftVisualStudio | VisualStudio | VS-2017-Ent-WS2016 |
-| MicrosoftVisualStudio | VisualStudio | vs-2019-preview-ws2016 |
-| MicrosoftVisualStudio | visualstudio2019 | vs-2019-comm-preview-ws2016 |
-| MicrosoftVisualStudio | visualstudio2019 | vs-2019-comm-ws2016 |
-| MicrosoftVisualStudio | visualstudio2019 | vs-2019-comm-ws2019 |
-| MicrosoftVisualStudio | visualstudio2019 | vs-2019-ent-preview-ws2016 |
-| MicrosoftVisualStudio | visualstudio2019 | vs-2019-ent-ws2016 |
-| MicrosoftVisualStudio | visualstudio2019 | vs-2019-ent-ws2019 |
-| MicrosoftVisualStudio | visualstudio2019latest | vs-2019-comm-latest-ws2019 |
-| MicrosoftVisualStudio | visualstudio2019latest | vs-2019-ent-latest-ws2019 |
-| MicrosoftWindowsDesktop | 21e23361-881e-4f0e-a27c-c53241b20896 | rs3-pro |
-| MicrosoftWindowsDesktop | 21e23361-881e-4f0e-a27c-c53241b20896 | rs3-pron |
-| MicrosoftWindowsDesktop | 676738ac-a807-468f-8a7b-961bfa3a3404 | rs5-evd |
-| MicrosoftWindowsDesktop | 676738ac-a807-468f-8a7b-961bfa3a3404 | rs5-pro |
-| MicrosoftWindowsDesktop | 676738ac-a807-468f-8a7b-961bfa3a3404 | rs5-pro-zh-cn |
-| MicrosoftWindowsDesktop | 676738ac-a807-468f-8a7b-961bfa3a3404 | rs5-pron |
-| MicrosoftWindowsDesktop | 6f8d5b91-ec98-4230-85d8-cc4e0a5c11ef | rs4-pro |
-| MicrosoftWindowsDesktop | 6f8d5b91-ec98-4230-85d8-cc4e0a5c11ef | rs4-pro-zh-cn |
-| MicrosoftWindowsDesktop | 6f8d5b91-ec98-4230-85d8-cc4e0a5c11ef | rs4-pron |
-| MicrosoftWindowsDesktop | office-365 | 1903-evd-o365pp |
-| MicrosoftWindowsDesktop | office-365 | rs5-evd-o365pp |
-| MicrosoftWindowsDesktop | Test-offer-legacy-id | RS3-Pro |
-| MicrosoftWindowsDesktop | Windows-10 | 19h1-evd |
-| MicrosoftWindowsDesktop | Windows-10 | 19h1-pro |
-| MicrosoftWindowsDesktop | Windows-10 | 19h1-pro-zh-cn |
-| MicrosoftWindowsDesktop | Windows-10 | 19h1-pron |
-| MicrosoftWindowsDesktop | Windows-10 | rs1-enterprise |
-| MicrosoftWindowsDesktop | Windows-10 | rs1-enterprisen |
-| MicrosoftWindowsDesktop | Windows-10 | RS2-Pro |
-| MicrosoftWindowsDesktop | Windows-10 | RS2-ProN |
-| MicrosoftWindowsDesktop | Windows-10 | RS3-Pro |
-| MicrosoftWindowsDesktop | Windows-10 | rs3-pro-test |
-| MicrosoftWindowsDesktop | Windows-10 | RS3-ProN |
-| MicrosoftWindowsDesktop | Windows-10 | rs4-pro |
-| MicrosoftWindowsDesktop | Windows-10 | rs4-pro-zh-cn |
-| MicrosoftWindowsDesktop | Windows-10 | rs4-pron |
-| MicrosoftWindowsDesktop | Windows-10 | rs5-enterprise |
-| MicrosoftWindowsDesktop | Windows-10 | rs5-enterprisen |
-| MicrosoftWindowsDesktop | Windows-10 | rs5-evd |
-| MicrosoftWindowsDesktop | Windows-10 | rs5-pro |
-| MicrosoftWindowsDesktop | Windows-10 | rs5-pro-zh-cn |
-| MicrosoftWindowsDesktop | Windows-10 | rs5-pron |
-| MicrosoftWindowsDesktop | windows-10-1607-vhd-client-prod-stage | rs1-enterprise |
-| MicrosoftWindowsDesktop | windows-10-1607-vhd-client-prod-stage | rs1-enterprisen |
-| MicrosoftWindowsDesktop | windows-10-1803-vhd-client-prod-stage | rs4-pro |
-| MicrosoftWindowsDesktop | windows-10-1803-vhd-client-prod-stage | rs4-pro-zh-cn |
-| MicrosoftWindowsDesktop | windows-10-1803-vhd-client-prod-stage | rs4-pron |
-| MicrosoftWindowsDesktop | windows-10-1809-vhd-client-prod-stage | rs5-enterprise |
-| MicrosoftWindowsDesktop | windows-10-1809-vhd-client-prod-stage | rs5-enterprisen |
-| MicrosoftWindowsDesktop | windows-10-1809-vhd-client-prod-stage | rs5-evd |
-| MicrosoftWindowsDesktop | windows-10-1809-vhd-client-prod-stage | rs5-pro |
-| MicrosoftWindowsDesktop | windows-10-1809-vhd-client-prod-stage | rs5-pro-zh-cn |
-| MicrosoftWindowsDesktop | windows-10-1809-vhd-client-prod-stage | rs5-pron |
-| MicrosoftWindowsDesktop | windows-10-1903-vhd-client-prod-stage | 19h1-evd |
-| MicrosoftWindowsDesktop | windows-10-1903-vhd-client-prod-stage | 19h1-pro |
-| MicrosoftWindowsDesktop | windows-10-1903-vhd-client-prod-stage | 19h1-pro-zh-cn |
-| MicrosoftWindowsDesktop | windows-10-1903-vhd-client-prod-stage | 19h1-pron |
-| MicrosoftWindowsDesktop | windows-10-ppe | RS3-Pro |
-| MicrosoftWindowsDesktop | windows-10-ppe | RS3-ProN |
-| MicrosoftWindowsDesktop | windows-10-ppe | rs4-pro |
-| MicrosoftWindowsDesktop | windows-10-ppe | rs4-pron |
-| MicrosoftWindowsDesktop | windows-10-ppe | rs5-pro |
-| MicrosoftWindowsDesktop | windows-7 | win7-pro |
-| MicrosoftWindowsServer | server2016gen2testing | 2016-datacenter |
-| MicrosoftWindowsServer | server2016gen2testing | 2016-datacenter-gen2 |
-| MicrosoftWindowsServer | WindowsServer | 2008-R2-SP1 |
-| MicrosoftWindowsServer | WindowsServer | 2008-R2-SP1-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2008-R2-SP1-zhcn |
-| MicrosoftWindowsServer | WindowsServer | 2012-Datacenter |
-| MicrosoftWindowsServer | WindowsServer | 2012-Datacenter-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2012-Datacenter-zhcn |
-| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter |
-| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter-zhcn |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-Server-Core |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-Server-Core-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-with-Containers |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-with-RDSH |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-zhcn |
-| MicrosoftWindowsServer | WindowsServer | 2016-Nano-Server |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core-with-Containers |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core-with-Containers-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-zhcn |
-| MicrosoftWindowsServer | WindowsServer | Datacenter-Core-1803-with-Containers-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | Datacenter-Core-1809-with-Containers-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | Datacenter-Core-1903-with-Containers-smalldisk |
-| MicrosoftWindowsServer | windowsserver-gen2preview | 2012-datacenter-gen2 |
-| MicrosoftWindowsServer | windowsserver-gen2preview | 2012-r2-datacenter-gen2 |
-| MicrosoftWindowsServer | windowsserver-gen2preview | 2016-datacenter-gen2 |
-| MicrosoftWindowsServer | windowsserver-gen2preview | 2019-datacenter-gen2 |
-| MicrosoftWindowsServer | WindowsServer-HUB | 2008-R2-SP1-HUB |
-| MicrosoftWindowsServer | WindowsServer-HUB | 2012-Datacenter-HUB |
-| MicrosoftWindowsServer | WindowsServer-HUB | 2012-R2-Datacenter-HUB |
-| MicrosoftWindowsServer | WindowsServer-HUB | 2016-Datacenter-HUB |
-| MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1709-smalldisk |
-| MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1709-with-Containers-smalldisk |
-| MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1803-with-Containers-smalldisk |
-| MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1809-with-Containers-smalldisk |
-| MicrosoftWindowsServerRemoteDesktop | WindowsServer | RDSH-Office365P |
-| MicrosoftWindowsServerRemoteDesktop | WindowsServer | Remote-Desktop-Session-Host |
-| midvision | ibm-app-connect-enterprise-edition-11 | midvision-ibm-ace-11001-wmq-9100_20181101 |
-| midvision | ibm-app-connect-enterprise-edition-11 | midvision-ibm-ace-11001_20181101 |
-| midvision | ibm-business-process-manager-express-85 | midvision-ibm-bpm-exp-857_nonprod_20181204 |
-| midvision | ibm-business-process-manager-express-85 | midvision-ibm-bpm-exp-857_prod_20181204 |
-| midvision | ibm-business-process-manager-express-86 | midvision-ibm-bpm-exp-860_nonprod_20181204 |
-| midvision | ibm-business-process-manager-express-86 | midvision-ibm-bpm-exp-860_prod_20181204 |
-| midvision | ibm-business-process-manager-standard-85 | midvision-ibm-bpm-std-857_prod-wmq-8006_20181205 |
-| midvision | ibm-business-process-manager-standard-85 | midvision-ibm-bpm-std-857_prod_20181205 |
-| midvision | ibm-business-process-manager-standard-86 | midvision-ibm-bpm-std-860_prod-wmq-9003_20190329 |
-| midvision | ibm-business-process-manager-standard-86 | midvision-ibm-bpm-std-860_prod_20181205 |
-| midvision | ibm-datapower-virtual-edition-75 | midvision-ibm-dpg_ve-7_5_0_0_nprod_20180806 |
-| midvision | ibm-datapower-virtual-edition-75 | midvision-ibm-dpg_ve-7_5_0_0_prod_20180806 |
-| midvision | ibm-datapower-virtual-edition-75 | midvision-ibm-dpg_ve-7_5_1_0_nprod_20180806 |
-| midvision | ibm-datapower-virtual-edition-75 | midvision-ibm-dpg_ve-7_5_1_0_prod_20180806 |
-| midvision | ibm-datapower-virtual-edition-75 | midvision-ibm-dpg_ve-7_5_2_0_nprod_20180806 |
-| midvision | ibm-datapower-virtual-edition-75 | midvision-ibm-dpg_ve-7_5_2_0_prod_20180806 |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_0_nonprod |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_0_prod |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_1_nonprod |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_1_prod |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_2_nonprod |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_2_prod |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_3_nonprod |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_3_prod |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_4_nonprod |
-| midvision | ibm-datapower-virtual-edition-76 | midvision-ibm-datapower_gateway_ve-7_6_0_4_prod |
-| midvision | ibm-datapower-virtual-edition-77 | midvision-ibm-dpg-7711_nonprod_20181001 |
-| midvision | ibm-datapower-virtual-edition-77 | midvision-ibm-dpg-7711_prod_20181001 |
-| midvision | ibm-db2-advanced-enterprise-server-edition-11 | ibm-db2-advanced-enterprise-11100_20190218 |
-| midvision | ibm-db2-advanced-enterprise-server-edition-11 | ibm-db2-advanced-enterprise-11133_20190218 |
-| midvision | ibm-db2-advanced-workgroup-server-edition-11 | ibm-db2-advanced-workgroup-11100_20190218 |
-| midvision | ibm-db2-advanced-workgroup-server-edition-11 | ibm-db2-advanced-workgroup-11133_20190218 |
-| midvision | ibm-http-server | midvision-ibm-http-server-9_0_0_4 |
-| midvision | ibm-http-server | midvision-ihs-9005_20181001 |
-| midvision | ibm-http-server | midvision-ihs-9006_20181001 |
-| midvision | ibm-http-server | midvision-ihs-9007_20181001 |
-| midvision | ibm-http-server | midvision-ihs-9008_20190206 |
-| midvision | ibm-integration-bus-standard-edition-10 | midvision-ibm-ibse-10007-wmq-8006_20181101 |
-| midvision | ibm-integration-bus-standard-edition-10 | midvision-ibm-ibse-10007-wmq-9003_20181101 |
-| midvision | ibm-integration-bus-standard-edition-10 | midvision-ibm-ibse-10007_20181101 |
-| midvision | ibm-integration-bus-standard-edition-90 | midvision-ibm-ibse-9002-wmq-7503_20181101 |
-| midvision | ibm-integration-bus-standard-edition-90 | midvision-ibm-ibse-9007-wmq-8006_20181101 |
-| midvision | ibm-websphere-portal-server-85 | midvision-ibm-websphere-portal-server-8_5_cf13 |
-| midvision | ibm-websphere-portal-server-90 | midvision-ibm-websphere-portal-server-9_0_cf13 |
-| midvision | websphere-application-server-be | midvision-ibm_was_be-70037_20190507 |
-| midvision | websphere-application-server-be | midvision-ibm_was_be-70039_20190507 |
-| midvision | websphere-application-server-be | midvision-ibm_was_be-70043_20190507 |
-| midvision | websphere-application-server-be-80 | midvision-ibm_was_be-80007_20190507 |
-| midvision | websphere-application-server-be-80 | midvision-ibm_was_be-80010_20190507 |
-| midvision | websphere-application-server-be-80 | midvision-ibm_was_be-80011_20190507 |
-| midvision | websphere-application-server-be-80 | midvision-ibm_was_be-80012_20190507 |
-| midvision | websphere-application-server-be-80 | midvision-ibm_was_be-80013_20190507 |
-| midvision | websphere-application-server-be-80 | midvision-ibm_was_be-80014_20190507 |
-| midvision | websphere-application-server-be-80 | midvision-ibm_was_be-80015_20190507 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85002-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85501-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85506-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85507-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85508-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85509-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85510-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85510-jdk7_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85511-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85511-jdk7_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85511-jdk8_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85512-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85512-jdk7_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85512-jdk8_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85513-jdk6_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85513-jdk7_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85513-jdk8_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85514-jdk7_20190503 |
-| midvision | websphere-application-server-be-85 | midvision-ibm_was_be-85514-jdk8_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90001_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90002_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90003_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90004_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90005_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90006_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90007_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90008_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90009_20190503 |
-| midvision | websphere-application-server-be-90 | midvision-ibm_was_be-90010_20190503 |
-| midvision | websphere-application-server-be-and-mq | midvision-was-70043-wmq-75008_20190509 |
-| midvision | websphere-application-server-be-and-mq | midvision-was-80013-wmq-80007_20190509 |
-| midvision | websphere-application-server-be-and-mq | midvision-was-85511-wmq-80007_20190509 |
-| midvision | websphere-application-server-be-and-mq | midvision-was-85513-wmq-80010_20190509 |
-| midvision | websphere-application-server-be-and-mq | midvision-was-90005-wmq-90300_20190509 |
-| midvision | websphere-application-server-lp | midvision-ibm-was_liberty_profile-8_5_5_6 |
-| midvision | websphere-application-server-lp | midvision-ibm-was_liberty_profile-8_5_5_7 |
-| midvision | websphere-application-server-lp | midvision-ibm-was_liberty_profile-8_5_5_8 |
-| midvision | websphere-application-server-lp | midvision-ibm-was_liberty_profile-8_5_5_9 |
-| midvision | websphere-application-server-lp-16 | midvision-ibm-was_liberty_profile-16_0_3 |
-| midvision | websphere-application-server-lp-16 | midvision-ibm-was_liberty_profile-16_0_4 |
-| midvision | websphere-application-server-lp-17 | midvision-ibm-was_liberty_profile-17_0_1 |
-| midvision | websphere-application-server-lp-17 | midvision-ibm-was_liberty_profile-17_0_2 |
-| midvision | websphere-application-server-lp-17 | midvision-ibm-was_liberty_profile-17_0_3 |
-| midvision | websphere-application-server-lp-17 | midvision-ibm-was_liberty_profile-17_0_4 |
-| midvision | websphere-application-server-lp-18 | midvision-ibm-was_lp-18001_20181001 |
-| midvision | websphere-application-server-lp-18 | midvision-ibm-was_lp-18002_20181001 |
-| midvision | websphere-application-server-nde | midvision-ibm-was_nd_edition-7_0_0_37 |
-| midvision | websphere-application-server-nde | midvision-ibm-was_nd_edition-7_0_0_39 |
-| midvision | websphere-application-server-nde | midvision-ibm-was_nd_edition-7_0_0_43 |
-| midvision | websphere-application-server-nde-80 | midvision-ibm-was_nde-80015_20181001 |
-| midvision | websphere-application-server-nde-80 | midvision-ibm-was_nd_edition-8_0_0_10 |
-| midvision | websphere-application-server-nde-80 | midvision-ibm-was_nd_edition-8_0_0_11 |
-| midvision | websphere-application-server-nde-80 | midvision-ibm-was_nd_edition-8_0_0_12 |
-| midvision | websphere-application-server-nde-80 | midvision-ibm-was_nd_edition-8_0_0_13 |
-| midvision | websphere-application-server-nde-80 | midvision-ibm-was_nd_edition-8_0_0_14 |
-| midvision | websphere-application-server-nde-80 | midvision-ibm-was_nd_edition-8_0_0_7 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nde-85510jdk7_20181001 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nde-85511jdk7_20181001 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nde-85512jdk7_20181001 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nde-85513jdk7_20181001 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nde-85514jdk7_20181001 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nde-85514_20181001 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nde-8558jdk7_20181001 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nde-8559jdk7_20181001 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_0_2 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_5_1 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_5_10 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_5_11 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_5_12 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_5_13 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_5_6 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_5_7 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_5_8 |
-| midvision | websphere-application-server-nde-85 | midvision-ibm-was_nd_edition-8_5_5_9 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nde-90010_20190227 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nde-9007_20181001 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nde-9008_20181001 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nde-9009_20190227 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nd_edition-9_0_0_1 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nd_edition-9_0_0_2 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nd_edition-9_0_0_3 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nd_edition-9_0_0_4 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nd_edition-9_0_0_5 |
-| midvision | websphere-application-server-nde-90 | midvision-ibm-was_nd_edition-9_0_0_6 |
-| midvision | websphere-mq | midvision-ibm_wmq-80010_20181001 |
-| midvision | websphere-mq | midvision-ibm_wmq-8009_20181001 |
-| midvision | websphere-mq | midvision-ibm_wmq-8_0_0_5_x100 |
-| midvision | websphere-mq | midvision-ibm_wmq-8_0_0_6_x100 |
-| midvision | websphere-mq-75 | midvision-ibm_wmq-7_5_0_8 |
-| midvision | websphere-mq-90 | midvision-ibm_wmq-9_0_3_0 |
-| midvision | websphere-mq-90 | midvision-ibm_wmq-9_0_4_0 |
-| midvision | websphere-mq-91 | midvision-ibm_wmq-9100_20181001 |
-| midvision | websphere-mq-91 | midvision-ibm_wmq-9101_20190114 |
-| modern-systems | modpaas | ms_modpaas_01 |
-| nasuni | nasuni-nmc | nasuni_nmc_7_10_6_prod |
-| nasuni | nasuni_edge_appliance | nasuni_edge_appliance_7_10_6_prod |
-| netapp | netapp-oncommand-cloud-manager | occm-byol |
-| netapp | netapp-ontap-cloud | ontap_cloud_byol |
-| netapp | netapp-ontap-cloud | ontap_cloud_byol_ha |
-| netapp | netapp-ontap-cloud | private_40_explore_standard_premium |
-| netapp | netapp-ontap-cloud | private_40_standard_premium |
-| netapp | netapp-ontap-cloud | private_partner_0_sn |
-| noobaa | noobaa-hybrid-s3-archive-05 | pay-per-usage |
-| nuxeo | nuxeo-6-lts | nuxeo-6-lts |
-| onyx-point-inc | op-bnf-1_9 | op-bnf-1_9 |
-| onyx-point-inc | op-bnf-v1 | bnfcv1 |
-| onyx-point-inc | op-bnf1_6-v1 | bnf1_6cv1 |
-| onyx-point-inc | op-dfi-v1 | dfiv1 |
-| onyx-point-inc | op-scc-v1 | op-scc-v1 |
-| onyx-point-inc | op_simp_6_1 | op_simp_6_1 |
-| onyx-point-inc | op_simp_6_3 | op_simp_6_3 |
-| OpenLogic | CentOS | 6.10 |
-| OpenLogic | CentOS | 6.7 |
-| OpenLogic | CentOS | 6.8 |
-| OpenLogic | CentOS | 6.9 |
-| OpenLogic | CentOS | 7.2 |
-| OpenLogic | CentOS | 7.2n |
-| OpenLogic | CentOS | 7.3 |
-| OpenLogic | CentOS | 7.4 |
-| OpenLogic | CentOS | 7.5 |
-| OpenLogic | CentOS | 7.6 |
-| OpenLogic | CentOS-CI | 7-CI |
-| OpenLogic | CentOS-HPC | 6.8 |
-| OpenLogic | CentOS-HPC | 7.1 |
-| OpenLogic | CentOS-HPC | 7.4 |
-| OpenLogic | CentOS-LVM | 7-LVM |
-| opentext | opentext-content-server-16 | ot-cs16 |
-| openvpn | openvpnas | access_server_byol |
-| Oracle | Oracle-Database-Ee | 12.1.0.2 |
-| Oracle | Oracle-Database-Se | 12.1.0.2 |
-| Oracle | Oracle-Linux | 6.7 |
-| Oracle | Oracle-Linux | 6.8 |
-| Oracle | Oracle-Linux | 7.2 |
-| orfast-technologies | orfast-mam-1 | orasft_mam_01 |
-| outsystems | os11-vm-baseimage | lifetime |
-| outsystems | os11-vm-baseimage | os11-ws2016 |
-| outsystems | os11-vm-baseimage | platformserver |
-| paloaltonetworks | panorama | byol |
-| paloaltonetworks | vmseries-forms | bundle2-for-ms |
-| paloaltonetworks | vmseries1 | bundle1 |
-| paloaltonetworks | vmseries1 | bundle2 |
-| paloaltonetworks | vmseries1 | byol |
-| panzura-file-system | azura-freedom-filer-v7110 | fd-vm-azure-byol |
-| panzura-file-system | panzura-cloud-filer | fd-vm-azure-byol |
-| panzura-file-system | panzura-freedom-filer-7140-13222 | fd-vm-azure-byol |
-| panzura-file-system | panzura-freedom-filer-716-13549 | fd-vm-azure-byol |
-| panzura-file-system | panzura-freedom-filer-7180-14200 | fd-vm-azure-byol |
-| panzura-file-system | panzura-freedom-filer-7220 | fd-vm-azure-byol |
-| panzura-file-system | panzura-freedom-filer-v7020 | fd-vm-azure-byol |
-| parallels | allinone | rasallinone |
-| pivotal | bosh-windows-server | 2012r2gov |
-| pivotal | pivotal-ops-manager | pivotal-ops-manager |
-| primekey | ejbca_enterprise_cloud_edition_private_vhd | ejbca_azure_24x7 |
-| primekey | ejbca_enterprise_cloud_edition_private_vhd | ejbca_azure_8x5 |
-| pulse-secure | pulse-connect-secure-vm | pcs-byol |
-| qlik | qlik-sense | qliksense |
-| qualysguard | qualys-virtual-firewall-appliance | qwaf |
-| qualysguard | qualys-virtual-scanner-v24 | qvsa-24 |
-| quest | qorestor_5_0_1 | tier_1 |
-| quest | qorestor_5_0_1 | tier_2 |
-| quest | qorestor_5_0_1 | tier_3 |
-| quest | rapid-recovery-core-vm | quest_rapid_recovery_core_vm |
-| radiant-logic | radiantone-vms | node-centos |
-| radiant-logic | radiantone-vms | node-centos-7-5 |
-| radiant-logic | radiantone-vms | node-redhat |
-| radiant-logic | radiantone-vms | node-redhat-7-5 |
-| radiant-logic | radiantone-vms | node-ubuntu |
-| radiant-logic | radiantone-vms | node-ubuntu-18-04-lts |
-| radiant-logic | radiantone-vms | node-ws-2016 |
-| radware | alteon-va-cluster-monitor | alteon-va-cluster-monitoring |
-| radware | radware-alteon-va | radware-alteon-ng-va-adc |
-| rapid7 | nexpose-scan-engine | nexpose-scan-engine |
-| rapid7 | rapid7-vm-console | rapid7-vm-console |
-| RedHat | RHEL | 6.10 |
-| RedHat | RHEL | 6.8 |
-| RedHat | RHEL | 6.9 |
-| RedHat | RHEL | 6.9-LVM |
-| RedHat | RHEL | 7-LVM |
-| RedHat | RHEL | 7-RAW |
-| RedHat | RHEL | 7-RAW-CI |
-| RedHat | RHEL | 7.2 |
-| RedHat | RHEL | 7.3 |
-| RedHat | RHEL | 7.3-LVM |
-| RedHat | RHEL | 7.4 |
-| RedHat | RHEL | 7.4-LVM |
-| RedHat | RHEL | 7.4-RAW |
-| RedHat | RHEL | 7.4.Beta |
-| RedHat | RHEL | 7.4.Beta-LVM |
-| RedHat | RHEL | 7.5 |
-| RedHat | RHEL | 7.6 |
-| RedHat | RHEL | 8 |
-| RedHat | rhel-byos | rhel-lvm74 |
-| RedHat | rhel-byos | rhel-lvm75 |
-| RedHat | rhel-byos | rhel-lvm76 |
-| RedHat | rhel-byos | rhel-raw69 |
-| RedHat | rhel-byos | rhel-raw75 |
-| RedHat | rhel-byos | rhel-raw76 |
-| RedHat | rhel-ocp-marketplace | rhel74 |
-| RedHat | rhel-ocp-marketplace | rhel75 |
-| RedHat | rhel-ocp-marketplace | rhel76 |
-| RedHat | RHEL-SAP-APPS | 6.8 |
-| RedHat | RHEL-SAP-APPS | 7.3 |
-| RedHat | RHEL-SAP-HANA | 6.7 |
-| RedHat | RHEL-SAP-HANA | 7.2 |
-| riverbed | riverbed-scc-9-9-0 | riverbed-scc-9-9-0 |
-| riverbed | riverbed-sccm-5-5-1 | riverbed-sccm-5-5-1 |
-| riverbed | riverbed-steelhead-9-1-3 | steelhead-9-1-3 |
-| riverbed | riverbed-steelhead-9-2 | riverbed-steelhead-9-2 |
-| riverbed | riverbed-steelhead-9-5-0 | riverbed-steelhead-9-5-0 |
-| riverbed | riverbed-steelhead-9-6-0 | riverbed-steelhead-9-6-0 |
-| riverbed | riverbed_steelconnect_gw | riverbed-steelconnect-gw |
-| riverbed | riverbed_steelconnect_sh | riverbed-steelconnect-sh |
-| rsa-security-llc | securid8_4 | rsa-sid-azure-84 |
-| scalegrid | centos | free |
-| scality | scalityconnecthourly | connecthourly1 |
-| sentryone | sentryoneeval | sentryoneeval |
-| silver-peak-systems | silver_peak_edgeconnect | silver_peak_edgeconnect_8_1 |
-| silver-peak-systems | silver_peak_edgeconnect | silver_peak_edgeconnect_8_1_9_4 |
-| silver-peak-systems | silver_peak_vx | silver-peak-vx-8-1 |
-| simpligov | simpligov_002 | simpligov-vm |
-| snapt-adc | snaptadc | snaptadc |
-| softnas | mp_nas_byol | mp_enterprise_byol |
-| sophos | sophos-xg | byol |
-| splunk | splunk-enterprise-base-image | splunk-on-ubuntu-14-04-lts |
-| starwind | starwindvirtualsan | starwindbyol |
-| starwind | starwindvtl | starwindvtl |
-| stonefly | stonefly-cloud-drive | byol_stonefly |
-| SUSE | openSUSE-Leap | 15-0 |
-| SUSE | openSUSE-Leap | 42.3 |
-| SUSE | SLES | 11-SP4 |
-| SUSE | SLES | 12-SP3 |
-| SUSE | SLES | 12-SP4 |
-| SUSE | SLES | 15 |
-| SUSE | SLES-BYOS | 11-SP4 |
-| SUSE | SLES-BYOS | 12-sp1 |
-| SUSE | SLES-BYOS | 12-sp2 |
-| SUSE | SLES-BYOS | 12-SP3 |
-| SUSE | SLES-BYOS | 12-SP4 |
-| SUSE | SLES-BYOS | 15 |
-| SUSE | SLES-HPC | 12-SP3 |
-| SUSE | SLES-SAP-BYOS | 12-sp1 |
-| SUSE | SLES-SAP-BYOS | 12-SP2 |
-| SUSE | SLES-SAP-BYOS | 12-SP3 |
-| SUSE | SLES-SAP-BYOS | 12-SP4 |
-| SUSE | SLES-SAP-BYOS | 15 |
-| SUSE | SLES-SAPCAL | 11-SP4 |
-| SUSE | SUSE-CaaSP-Admin-BYOS | 2.1 |
-| SUSE | SUSE-CaaSP-Cluster-BYOS | 2-1 |
-| SUSE | SUSE-Manager-Proxy-BYOS | 3-2 |
-| SUSE | SUSE-Manager-Proxy-BYOS | 3.1 |
-| SUSE | SUSE-Manager-Server-BYOS | 3-2 |
-| SUSE | SUSE-Manager-Server-BYOS | 3.1 |
-| suse-byos | sles-byos | 12-sp1 |
-| tableau | tableau-server | bring-your-own-license |
-| talon | talon-fast | talon-azure-byol |
-| tenable | tenable-nessus-6-byol | tenable-nessus-byol |
-| tenable | tenablecorenessus | tenablecorenessusbyol |
-| tenable | tenablecorewas | tenablecorewasbyol |
-| teradata | teradata-data-mover | teradata-data-mover-agent-byol |
-| teradata | teradata-data-mover | teradata-data-mover-agent-hourly |
-| teradata | teradata-data-mover | teradata-data-mover-byol |
-| teradata | teradata-data-mover | teradata-data-mover-hourly |
-| teradata | teradata-data-mover-intellisphere | teradata-data-mover-agent-intellisphere |
-| teradata | teradata-data-mover-intellisphere | teradata-data-mover-intellisphere |
-| teradata | teradata-data-stream-controller | teradata-data-stream-controller |
-| teradata | teradata-data-stream-controller | teradata-data-stream-controller-byol |
-| teradata | teradata-database-1510-byol | teradata-database-advanced-1510-byol |
-| teradata | teradata-database-1510-byol | teradata-database-base-1510-byol |
-| teradata | teradata-database-1510-byol | teradata-database-enterprise-1510-byol |
-| teradata | teradata-database-1510-intellisphere | teradata-database-advanced-1510 |
-| teradata | teradata-database-1510-intellisphere | teradata-database-base-1510 |
-| teradata | teradata-database-1510-intellisphere | teradata-database-enterprise-1510 |
-| teradata | teradata-database-1510-v2 | teradata-database-advanced-1510 |
-| teradata | teradata-database-1510-v2 | teradata-database-base-1510 |
-| teradata | teradata-database-1510-v2 | teradata-database-developer-1510 |
-| teradata | teradata-database-1510-v2 | teradata-database-enterprise-1510 |
-| teradata | teradata-database-1620 | teradata-database-advanced-1620 |
-| teradata | teradata-database-1620 | teradata-database-advanced-1620-248 |
-| teradata | teradata-database-1620 | teradata-database-advanced-1620-256 |
-| teradata | teradata-database-1620 | teradata-database-advanced-1620-336 |
-| teradata | teradata-database-1620 | teradata-database-advanced-1620-434 |
-| teradata | teradata-database-1620 | teradata-database-advanced-1620-496 |
-| teradata | teradata-database-1620 | teradata-database-advanced-1620-594 |
-| teradata | teradata-database-1620 | teradata-database-advanced-1620-914 |
-| teradata | teradata-database-1620 | teradata-database-advanced-1620-ls |
-| teradata | teradata-database-1620 | teradata-database-base-1620 |
-| teradata | teradata-database-1620 | teradata-database-base-1620-248 |
-| teradata | teradata-database-1620 | teradata-database-base-1620-256 |
-| teradata | teradata-database-1620 | teradata-database-base-1620-336 |
-| teradata | teradata-database-1620 | teradata-database-base-1620-434 |
-| teradata | teradata-database-1620 | teradata-database-base-1620-496 |
-| teradata | teradata-database-1620 | teradata-database-base-1620-594 |
-| teradata | teradata-database-1620 | teradata-database-base-1620-914 |
-| teradata | teradata-database-1620 | teradata-database-base-1620-ls |
-| teradata | teradata-database-1620 | teradata-database-developer-1620 |
-| teradata | teradata-database-1620 | teradata-database-enterprise-1620 |
-| teradata | teradata-database-1620 | teradata-database-enterprise-1620-248 |
-| teradata | teradata-database-1620 | teradata-database-enterprise-1620-256 |
-| teradata | teradata-database-1620 | teradata-database-enterprise-1620-336 |
-| teradata | teradata-database-1620 | teradata-database-enterprise-1620-434 |
-| teradata | teradata-database-1620 | teradata-database-enterprise-1620-496 |
-| teradata | teradata-database-1620 | teradata-database-enterprise-1620-594 |
-| teradata | teradata-database-1620 | teradata-database-enterprise-1620-914 |
-| teradata | teradata-database-1620 | teradata-database-enterprise-1620-ls |
-| teradata | teradata-database-1620-byol | teradata-database-advanced-1620-byol |
-| teradata | teradata-database-1620-byol | teradata-database-base-1620-byol |
-| teradata | teradata-database-1620-byol | teradata-database-enterprise-1620-byol |
-| teradata | teradata-database-1620-intellisphere | teradata-database-advanced-1620 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-advanced-1620-248 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-advanced-1620-256 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-advanced-1620-336 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-advanced-1620-434 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-advanced-1620-496 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-advanced-1620-594 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-advanced-1620-914 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-advanced-1620-ls |
-| teradata | teradata-database-1620-intellisphere | teradata-database-base-1620 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-base-1620-248 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-base-1620-256 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-base-1620-336 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-base-1620-434 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-base-1620-496 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-base-1620-594 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-base-1620-914 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-base-1620-ls |
-| teradata | teradata-database-1620-intellisphere | teradata-database-enterprise-1620 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-enterprise-1620-248 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-enterprise-1620-256 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-enterprise-1620-336 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-enterprise-1620-434 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-enterprise-1620-496 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-enterprise-1620-594 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-enterprise-1620-914 |
-| teradata | teradata-database-1620-intellisphere | teradata-database-enterprise-1620-ls |
-| teradata | teradata-ecosystem-manager | teradata-ecosystem-manager |
-| teradata | teradata-ecosystem-manager | teradata-ecosystem-manager-byol |
-| teradata | teradata-querygrid-manager | teradata-querygrid-manager |
-| teradata | teradata-querygrid-manager-intellisphere | teradata-querygrid-manager-intellisphere |
-| teradata | teradata-rest-services | teradata-rest-services |
-| teradata | teradata-rest-services | teradata-rest-services-byol |
-| teradata | teradata-server-management | teradata-server-management |
-| teradata | teradata-server-management | teradata-server-management-byol |
-| teradata | teradata-server-management-api-testing-7_5 | teradata-server-management |
-| teradata | teradata-server-management-api-testing-7_5 | teradata-server-management-byol |
-| teradata | teradata-unity | teradata-unity |
-| teradata | teradata-unity-intellisphere | teradata-unity-intellisphere |
-| teradata | teradata-viewpoint | teradata-viewpoint-multiple-systems-byol |
-| teradata | teradata-viewpoint | teradata-viewpoint-multiple-systems-hourly |
-| teradata | teradata-viewpoint | teradata-viewpoint-single-system-byol |
-| teradata | teradata-viewpoint | teradata-viewpoint-single-system-data-lab-byol |
-| teradata | teradata-viewpoint | teradata-viewpoint-single-system-data-lab-hourly |
-| teradata | teradata-viewpoint | teradata-viewpoint-single-system-hourly |
-| teradata | teradata-viewpoint-intellisphere | teradata-viewpoint-intellisphere |
-| thales-vormetric | ciphertrust-ckm | ciphertrust-ckm |
-| thales-vormetric | vormetric-dsm | dsm-6-0-2-5162 |
-| thales-vormetric | vormetric-dsm-6-1-0 | dsm-6-1-0-9118 |
-| thales-vormetric | vormetric-dsm-6-1-0 | dsm-6-1-0-9229 |
-| thales-vormetric | vormetric-dsm-6-2-0 | dsm-6-2-0-12050 |
-| thales-vormetric | vts-2_2_0_2604 | vts-2_2_0_2604 |
-| thales-vormetric | vts-2_3_0_400 | vts-2_3_0_400 |
-| tigergraph | tigergraph-hourly-distributed | tigergraph-enterprise-232-distributed |
-| tigergraph | tigergraph-hourly-singleserver | tigergraph_enterprise_232_singleserver |
-| uniprint-net | uniprint-infinity | up_demo01 |
-| unnisoft | inas_online | inas9 |
-| veeam | veeam-backup-replication | veeam-backup-replication-95 |
-| veeam | veeam-cloud-connect-enterprise | veeamcloudcconnectenterprise |
-| veeam | veeamcloudconnect | veeambackup |
-| velocitydb-inc | velocitydb | velocitydb |
-| velocloud | velocloud-virtual-edge-3x | velocloud-virtual-edge-3x |
-| veritas | netbackup-8-0 | netbackup_8-standard |
-| veritas | netbackup-8-0 | netbackup_8_1-standard |
-| vidizmo | c962d038-826e-4c7f-90d9-a2d7ebb50d0c | vidizmo-appdb-single |
-| vidizmo | vidizmo-highavailability-servers | vidizmo-application |
-| vidizmo | vidizmo-separate-servers | vidizmo-application |
-| vidizmo | vidizmo-separate-servers | vidizmo-database |
-| websense-apmailpe | forcepoint-email-security-85beta | forcepoint_email_security_v85_beta |
-| winmagic_securedoc_cloudvm | seccuredoc_cloudvm_5 | winmagic_securedoc_cloudvm_byol |
-| wowza | wowzastreamingengine | linux-byol |
-| wowza | wowzastreamingengine | windows-byol |
-| zerto | zerto-cloud-appliance-50 | zerto60u3p1 |
-| zerto | zerto-cloud-appliance-50 | zerto65ga |
-| zerto | zerto-vms | zerto7 |
+Some of the prebuilt images include pay-as-you-go licensing for specific software. Work with your Microsoft account team or reseller for Azure Government-specific pricing. For more information, see [Virtual machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/).
## Next steps
-* [Create a Windows virtual machine with the Azure portal](../virtual-machines/windows/quick-create-portal.md?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json)
-* [Create a Windows virtual machine with PowerShell](../virtual-machines/windows/quick-create-powershell.md)
-* [Create a Windows virtual machine with the Azure CLI](../virtual-machines/windows/quick-create-cli.md)
-* [Create a Linux virtual machine with the Azure portal](../virtual-machines/linux/quick-create-portal.md?toc=%2Fazure%2Fvirtual-machines%2Flinux%2Ftoc.json)
+
+- [Create a Windows virtual machine with the Azure portal](../virtual-machines/windows/quick-create-portal.md)
+- [Create a Windows virtual machine with PowerShell](../virtual-machines/windows/quick-create-powershell.md)
+- [Create a Windows virtual machine with the Azure CLI](../virtual-machines/windows/quick-create-cli.md)
+- [Create a Linux virtual machine with the Azure portal](../virtual-machines/linux/quick-create-portal.md)
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
For Databases services availability in Azure Government, see [Products available
Azure API for FHIR supports Impact Level 5 workloads in Azure Government with this configuration: -- Configure encryption at rest of content in Azure API for FHIR [using customer-managed keys in Azure Key Vault](../healthcare-apis/customer-managed-key.md)
+- Configure encryption at rest of content in Azure API for FHIR [using customer-managed keys in Azure Key Vault](../healthcare-apis/fhir/customer-managed-key.md)
### [Azure Cache for Redis](https://azure.microsoft.com/services/cache/)
azure-government Documentation Government Overview Itar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-itar.md
The US Department of Commerce is responsible for enforcing the [Export Administr
The EAR is applicable to dual-use items that have both commercial and military applications and to items with purely commercial application. The BIS has provided guidance that cloud service providers (CSP) are not exporters of customers' data due to the customers' use of cloud services. Moreover, in the [final rule](https://www.federalregister.gov/documents/2016/06/03/2016-12734/revisions-to-definitions-in-the-export-administration-regulations) published on 3 June 2016, BIS clarified that EAR licensing requirements would not apply if the transmission and storage of unclassified technical data and software were encrypted end-to-end using Federal Information Processing Standard (FIPS) 140-2 validated cryptographic modules and not intentionally stored in a military-embargoed country (that is, Country Group D:5 as described in [Supplement No. 1 to Part 740](https://ecfr.io/Title-15/pt15.2.740#ap15.2.740_121.1) of the EAR) or in the Russian Federation. The US Department of Commerce has made it clear that, when data or software is uploaded to the cloud, the customer, not the cloud provider, is the "exporter" who has the responsibility to ensure that transfers, storage, and access to that data or software comply with the EAR.
-Both Azure and Azure Government can help customers subject to the EAR meet their compliance requirements. Except for the Hong Kong region, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provide customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control, known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable; there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.
+Both Azure and Azure Government can help customers subject to the EAR meet their compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provide customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control, known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable; there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.
Customers are responsible for choosing Azure or Azure Government regions for deploying their applications and data. Moreover, customers are responsible for designing their applications to apply end-to-end data encryption that meets EAR requirements. Microsoft does not inspect or approve customer applications deployed on Azure or Azure Government.
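As an illustrative sketch of the customer-managed key approach described above, an RSA key can be generated directly inside the HSM boundary of a Premium-tier key vault with the Azure CLI. The vault and key names below are placeholders, not values from this article:

```azurecli
az keyvault key create --vault-name contoso-premium-kv --name contoso-cmk --kty RSA-HSM --size 2048
```

Because the key type is `RSA-HSM`, the key material is generated and held inside the FIPS 140-2 validated HSM and is not exportable in clear text.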
The US Department of State has export control authority over defense articles, s
DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the Commerce Department adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that do not constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140-2 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://ecfr.io/Title-22/pt22.1.126#se22.1.126_11) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet is not deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption is not provided to any third party.
-There is no ITAR compliance certification; however, both Azure and Azure Government can help customers subject to ITAR meet their compliance obligations. Except for the Hong Kong region, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provide customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control, known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable; there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.
+There is no ITAR compliance certification; however, both Azure and Azure Government can help customers subject to ITAR meet their compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters are not located in proscribed countries or in the Russian Federation. Azure and Azure Government rely on FIPS 140-2 validated cryptographic modules in the underlying operating system, and provide customers with a [wide range of options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md), which can store encryption keys in FIPS 140-2 validated hardware security modules (HSMs) under customer control, known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs are not exportable; there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. Moreover, Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract customer keys.
Customers are responsible for choosing Azure or Azure Government regions for deploying their applications and data. Moreover, customers are responsible for designing their applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft does not inspect or approve customer applications deployed on Azure or Azure Government.
azure-government Documentation Government Stig Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-stig-linux-vm.md
+
+ Title: Deploy STIG-compliant Linux Virtual Machines (Preview)
+description: This quickstart shows you how to deploy a STIG-compliant Linux VM (Preview) from Azure Marketplace
+ Last updated: 03/09/2021
+# Deploy STIG-compliant Linux Virtual Machines (Preview)
+
+Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal. For questions about this offering, contact [Azure STIG Support](mailto:azurestigsupport@microsoft.com).
+
+This quickstart shows how to use the Azure portal to deploy a STIG-compliant Linux virtual machine (Preview).
+
+## Prerequisites
+
+- Azure Government subscription
+- Storage account
+  - If used, it must be in the same resource group and region as the VM
+ - Required if you plan to store Log Analytics diagnostics
+- Log Analytics workspace (required if you plan to store diagnostic logs)
+
+## Sign in to Azure
+
+Sign in at the [Azure Government portal](https://portal.azure.us/).
+
+## Create a STIG-compliant virtual machine
+
+1. Select *Create a resource*.
+1. Type **Azure STIG Templates for Linux** in the search bar and press Enter.
+1. Select **Azure STIG Templates for Linux** from the search results and then **Create**.
+1. In the **Basics** tab, under **Project details**:
+
+ a. Select an existing *Subscription*.
+
+ b. Create a new *Resource group* or enter an existing resource group.
+
+ c. Select your *Region*.
+
+ > [!IMPORTANT]
+ > Make sure to choose an empty resource group or create a new one.
+
+ :::image type="content" source="./media/stig-project-details.png" alt-text="Project details section showing where you select the Azure subscription and the resource group for the virtual machine" border="false":::
+
+1. Under **Instance details**, enter all required information:
+
+ a. Enter the *VM name*.
+
+ b. Select the *Linux OS version*.
+
+ c. Select the instance *Size*.
+
+ d. Enter the administrator account *Username*.
+
+ e. Select the Authentication type by choosing either *Password* or *Public key*.
+
+ f. Enter a *Password* or *Public key*.
+
+   g. Confirm the *Password* (a *Public key* only needs to be entered once).
+
+ > [!NOTE]
+ > For instructions on creating an SSH RSA public-private key pair for SSH client connections, see **[Create and manage SSH keys for authentication to a Linux VM in Azure](../virtual-machines/linux/create-ssh-keys-detailed.md).**
+
+ :::image type="content" source="./media/stig-linux-instance-details.png" alt-text="Instance details section where you provide a name for the virtual machine and select its region, image, and size" border="false":::
+
+1. Under **Disk**:
+
+ a. Select the *OS disk type*.
+
+ b. Select the *Encryption type*.
+
+ :::image type="content" source="./media/stig-disk-options.png" alt-text="Disk options section showing where you select the disk and encryption type for the virtual machine" border="false":::
+
+1. Under **Networking**:
+
+   a. Select the *Virtual Network*. Either use an existing virtual network or select *Create new* (note that inbound RDP is disallowed).
+
+   b. Select the *Subnet*.
+
+   c. Select an *Application security group* (optional).
+
+ :::image type="content" source="./media/stig-network-interface.png" alt-text="Network interface section showing where you select the network and subnet for the virtual machine" border="false":::
+
+1. Under **Management**:
+
+   a. For *Diagnostic settings*, select a *Storage account* (optional; required to store diagnostic logs).
+
+   b. Enter a *Log Analytics workspace* (optional; required to store Log Analytics data).
+
+   c. Enter *Custom data* (optional; applicable only to RHEL 7.7/7.8, CentOS 7.7/7.8/7.9, and Ubuntu 18.04).
+
+ :::image type="content" source="./media/stig-linux-diagnostic-settings.png" alt-text="Management section showing where you select the diagnostic settings for the virtual machine" border="false":::
+
+1. Select **Review + create** to review a summary of all selections.
+
+1. Once the validation check is successful, select ***Create***.
+
+1. Once the creation process starts, the ***Deployment*** page is displayed:
+
+   a. The ***Overview*** tab displays deployment progress, including any errors that occur. Once deployment is complete, this tab summarizes the deployment and lets you download the deployment details.
+
+   b. The ***Inputs*** tab lists the inputs to the deployment.
+
+   c. The ***Outputs*** tab provides information on any deployment outputs.
+
+   d. The ***Template*** tab provides downloadable access to the JSON template used in the deployment.
+
+1. The deployed virtual machine can be found in the resource group used for the deployment. Since inbound RDP is disallowed, Azure Bastion must be used to connect to the VM.
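The SSH key pair used for *Public key* authentication in the instance details step can be created locally before you begin. A minimal sketch (the key path and comment are example values, not prescribed by this quickstart; see the linked SSH key article for full guidance):

```bash
# Generate a 4096-bit RSA key pair for SSH authentication.
# The key path and comment are illustrative placeholders.
KEY_PATH="$HOME/.ssh/stig-vm-key"
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -f "$KEY_PATH" -N '' -C "azureuser@stig-vm"

# The contents of the .pub file go into the portal's Public key field.
cat "$KEY_PATH.pub"
```

Keep the private key file secure; only the `.pub` contents are pasted into the portal.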
+
+## Clean up resources
+
+When no longer needed, you can delete the resource group, virtual machine, and all related resources.
+
+Select the resource group for the virtual machine, then select **Delete**. Confirm the name of the resource group to finish deleting the resources.
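If you prefer the CLI, the same cleanup can be sketched with a single command (the resource group name below is a placeholder for the group you used in the deployment):

```azurecli
az group delete --name stig-vm-rg --yes --no-wait
```

This deletes the resource group and every resource it contains, so confirm the name carefully before running it.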
+
+## Next steps
+
+This quickstart showed you how to deploy a STIG-compliant Linux virtual machine (Preview) on Azure Government. For more information about creating virtual machines in Azure Government, see [Tutorial: Create Virtual Machines](./documentation-government-quickstarts-vm.md). To learn more about Azure services, continue to the Azure documentation.
+
+> [!div class="nextstepaction"]
+> [Azure documentation](../index.yml)
azure-government Documentation Government Stig Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-stig-windows-vm.md
+
+ Title: Deploy STIG-compliant Windows Virtual Machines (Preview)
+description: This quickstart shows you how to deploy a STIG-compliant Windows VM (Preview) from Azure Marketplace
+ Last updated: 03/09/2021
+# Deploy STIG-compliant Windows Virtual Machines (Preview)
+
+Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal. For questions about this offering, contact [Azure STIG Support](mailto:azurestigsupport@microsoft.com).
+
+This quickstart shows how to use the Azure portal to deploy a STIG-compliant Windows virtual machine (Preview).
+
+## Prerequisites
+
+- Azure Government subscription
+- Storage account
+  - If used, it must be in the same resource group and region as the VM
+ - Required if you plan to store Log Analytics diagnostics
+- Log Analytics workspace (required if you plan to store diagnostic logs)
+
+## Sign in to Azure
+
+Sign in at the [Azure Government portal](https://portal.azure.us/).
+
+## Create a STIG-compliant virtual machine
+
+1. Select *Create a resource*.
+1. Type **Azure STIG Templates for Windows** in the search bar and press Enter.
+1. Select **Azure STIG Templates for Windows** from the search results and then **Create**.
+1. In the **Basics** tab, under **Project details**:
+
+ a. Select an existing *Subscription*.
+
+ b. Create a new *Resource group* or enter an existing resource group.
+
+ c. Select your *Region*.
+
+ > [!IMPORTANT]
+ > Make sure to choose an empty resource group or create a new one.
+
+ :::image type="content" source="./media/stig-project-details.png" alt-text="Project details section showing where you select the Azure subscription and the resource group for the virtual machine" border="false":::
+
+1. Under **Instance details**, enter all required information:
+
+ a. Enter the *VM name*.
+
+ b. Select the *Windows OS version*.
+
+ c. Select the instance *Size*.
+
+ d. Enter the administrator account *Username*.
+
+ e. Enter the administrator account *Password*.
+
+ f. Confirm *Password*.
+
+ :::image type="content" source="./media/stig-windows-instance-details.png" alt-text="Instance details section where you provide a name for the virtual machine and select its region, image, and size" border="false":::
+
+1. Under **Disk**:
+
+ a. Select the *OS disk type*.
+
+ b. Select the *Encryption type*.
+
+ :::image type="content" source="./media/stig-disk-options.png" alt-text="Disk options section showing where you select the disk and encryption type for the virtual machine" border="false":::
+
+1. Under **Networking**:
+
+   a. Select the *Virtual Network*. Either use an existing virtual network or select *Create new* (note that inbound RDP is disallowed).
+
+   b. Select the *Subnet*.
+
+   c. Select an *Application security group* (optional).
+
+ :::image type="content" source="./media/stig-network-interface.png" alt-text="Network interface section showing where you select the network and subnet for the virtual machine" border="false":::
+
+1. Under **Management**:
+
+   a. For *Diagnostic settings*, select a *Storage account* (optional; required to store diagnostic logs).
+
+   b. Enter a *Log Analytics workspace* (optional; required to store Log Analytics data).
+
+ :::image type="content" source="./media/stig-windows-diagnostic-settings.png" alt-text="Management section showing where you select the diagnostic settings for the virtual machine" border="false":::
+
+1. Select **Review + create** to review a summary of all selections.
+
+1. Once the validation check is successful, select ***Create***.
+
+1. Once the creation process starts, the ***Deployment*** page is displayed:
+
+   a. The ***Overview*** tab displays deployment progress, including any errors that occur. Once deployment is complete, this tab summarizes the deployment and lets you download the deployment details.
+
+   b. The ***Inputs*** tab lists the inputs to the deployment.
+
+   c. The ***Outputs*** tab provides information on any deployment outputs.
+
+   d. The ***Template*** tab provides downloadable access to the JSON template used in the deployment.
+
+1. The deployed virtual machine can be found in the resource group used for the deployment. Since inbound RDP is disallowed, Azure Bastion must be used to connect to the VM.
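Assuming an Azure Bastion host is already deployed in the virtual network, connecting to the VM from a native RDP client can be sketched with the Azure CLI `bastion` extension (all resource names below are placeholders, and the extension must be installed first):

```azurecli
az network bastion rdp --name contoso-bastion --resource-group stig-vm-rg --target-resource-id $(az vm show --name stig-windows-vm --resource-group stig-vm-rg --query id --output tsv)
```

Alternatively, connect through the Bastion blade in the portal, which requires no local client setup.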
+
+## Clean up resources
+
+When no longer needed, you can delete the resource group, virtual machine, and all related resources.
+
+Select the resource group for the virtual machine, then select **Delete**. Confirm the name of the resource group to finish deleting the resources.
+
+## Next steps
+
+This quickstart showed you how to deploy a STIG-compliant Windows virtual machine (Preview) on Azure Government. For more information about creating virtual machines in Azure Government, see [Tutorial: Create Virtual Machines](./documentation-government-quickstarts-vm.md). To learn more about Azure services, continue to the Azure documentation.
+
+> [!div class="nextstepaction"]
+> [Azure documentation](../index.yml)
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-linux.md
Starting with versions released after August 2018, we are making the following c
* Versions that have passed their manufacturer's end-of-support date are not supported. * Only VM images are supported; containers, even those derived from official distro publishers' images, are not supported. * New versions of AMI are not supported.
-* Only versions that run SSL 1.x by default are supported.
+* Only versions that run OpenSSL 1.x by default are supported.
>[!NOTE] >If you are using a distro or version that is not currently supported and doesn't align to our support model, we recommend that you fork this repo, acknowledging that Microsoft support will not provide assistance with forked agent versions.
The default cache size is 10 MB but can be modified in the [omsagent.conf file](
- Review [Managing and maintaining the Log Analytics agent for Windows and Linux](agent-manage.md) to learn about how to reconfigure, upgrade, or remove the agent from the virtual machine. -- Review [Troubleshooting the Linux agent](agent-linux-troubleshoot.md) if you encounter issues while installing or managing the agent.
+- Review [Troubleshooting the Linux agent](agent-linux-troubleshoot.md) if you encounter issues while installing or managing the agent.
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/log-analytics-agent.md
Last updated 01/12/2021
The Azure Log Analytics agent collects telemetry from Windows and Linux virtual machines in any cloud, on-premises machines, and those monitored by [System Center Operations Manager](/system-center/scom/) and sends it collected data to your Log Analytics workspace in Azure Monitor. The Log Analytics agent also supports insights and other services in Azure Monitor such as [VM insights](../vm/vminsights-enable-overview.md), [Azure Security Center](../../security-center/index.yml), and [Azure Automation](../../automation/automation-intro.md). This article provides a detailed overview of the agent, system and network requirements, and deployment methods. > [!NOTE]
-> You may also see the Log Analytics agent referred to as the Microsoft Monitoring Agent (MMA) or OMS Linux agent.
+> You may also see the Log Analytics agent referred to as the Microsoft Monitoring Agent (MMA).
## Comparison to Azure diagnostics extension The [Azure diagnostics extension](./diagnostics-extension-overview.md) in Azure Monitor can also be used to collect monitoring data from the guest operating system of Azure virtual machines. You may choose to use either or both depending on your requirements. See [Overview of the Azure Monitor agents](../agents/agents-overview.md) for a detailed comparison of the Azure Monitor agents.
For example:
* Review [data sources](../agents/agent-data-sources.md) to understand the data sources available to collect data from your Windows or Linux system. * Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
-* Learn about [monitoring solutions](../insights/solutions.md) that add functionality to Azure Monitor and also collect data into the Log Analytics workspace.
+* Learn about [monitoring solutions](../insights/solutions.md) that add functionality to Azure Monitor and also collect data into the Log Analytics workspace.
azure-monitor Alerts Action Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-action-rules.md
First choose the scope (Azure subscription, resource group, or target resource).
### Filter criteria
-You can additionally define filters to narrow them down to a specific subset of the alerts.
+You can optionally define filters so the rule will apply to a specific subset of the alerts, or to specific events on each alert (for example, only "Fired" or only "Resolved").
The available filters are:
-* **Severity**: The option to select one or more alert severities. **Severity = Sev1** means that the action rule is applicable for all alerts set to Sev1.
-* **Monitor Service**: A filter based on the originating monitoring service. This filter is also multiple-select. For example, **Monitor Service = "Application Insights"** means that the action rule is applicable for all Application Insights-based alerts.
-* **Resource Type**: A filter based on a specific resource type. This filter is also multiple-select. For example, **Resource Type = "Virtual Machines"** means that the action rule is applicable for all virtual machines.
-* **Alert Rule ID**: An option to filter for specific alert rules by using the Resource Manager ID of the alert rule.
-* **Monitor Condition**: A filter for alert instances with either **Fired** or **Resolved** as the monitor condition.
-* **Description**: A regex (regular expression) match that defines a string match against the description, defined as part of the alert rule. For example, **Description contains 'prod'** will match all alerts that contain the string "prod" in their descriptions.
-* **Alert Context (payload)**: A regex match that defines a string match against the alert context fields of an alert's payload. For example, **Alert context (payload) contains 'Computer-01'** will match all alerts whose payloads contain the string "Computer-01."
-
-These filters are applied in conjunction with one another. For example, if you set **Resource type' = Virtual Machines** and **Severity' = Sev0**, then you've filtered for all **Sev0** alerts on only your VMs.
+* **Severity**: this rule will apply only to alerts with the selected severities.
+For example, **Severity = Sev1** means that the rule will apply only to alerts with Sev1 severity.
+* **Monitor Service**: this rule will apply only to alerts coming from the selected monitoring services.
+For example, **Monitor Service = "Azure Backup"** means that the rule will apply only to backup alerts (coming from Azure Backup).
+* **Resource Type**: this rule will apply only to alerts on the selected resource types.
+For example, **Resource Type = "Virtual Machines"** means that the rule will apply only to alerts on virtual machines.
+* **Alert Rule ID**: this rule will apply only to alerts coming from a specific alert rule. The value should be the Resource Manager ID of the alert rule.
+For example, **Alert Rule ID = "/subscriptions/SubId1/resourceGroups/ResourceGroup1/providers/microsoft.insights/metricalerts/MyAPI-highLatency"** means this rule will apply only to alerts coming from "MyAPI-highLatency" metric alert rule.
+* **Monitor Condition**: this rule will apply only to alert events with the specified monitor condition - either **Fired** or **Resolved**.
+* **Description**: this rule will apply only to alerts that contain a specific string in the alert description field, which holds the description defined as part of the alert rule.
+For example, **Description contains 'prod'** means that the rule will only match alerts that contain the string "prod" in their description.
+* **Alert Context (payload)**: this rule will apply only to alerts that contain one or more specific values in the alert context fields of the alert's payload.
+For example, **Alert context (payload) contains 'Computer-01'** means that the rule will only apply to alerts whose payloads contain the string "Computer-01".
+
+If you set multiple filters in a rule, all of them apply. For example, if you set **Resource type = Virtual Machines** and **Severity = Sev0**, then the rule will apply only to Sev0 alerts on virtual machines.
![Action rule filters](media/alerts-action-rules/action-rules-new-rule-creation-flow-filters.png)
For every alert on VM1, action group AG1 would be triggered once. Whenever alert
## Next steps -- [Learn more about alerts in Azure](./alerts-overview.md)
+- [Learn more about alerts in Azure](./alerts-overview.md)
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Previously updated : 02/10/2021 Last updated : 03/11/2021 # Supported resources for metric alerts in Azure Monitor
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Automation/automationAccounts | Yes| No | [Automation Accounts](../essentials/metrics-supported.md#microsoftautomationautomationaccounts) | |Microsoft.AVS/privateClouds | No | No | [Azure VMware Solution](../essentials/metrics-supported.md#microsoftavsprivateclouds) | |Microsoft.Batch/batchAccounts | Yes | No | [Batch Accounts](../essentials/metrics-supported.md#microsoftbatchbatchaccounts) |
+|Microsoft.BotService/botServices | Yes | No | [Bot Services](../essentials/metrics-supported.md#microsoftbotservicebotservices) |
|Microsoft.Cache/Redis | Yes | Yes | [Azure Cache for Redis](../essentials/metrics-supported.md#microsoftcacheredis) | |Microsoft.ClassicCompute/domainNames/slots/roles | No | No | [Classic Cloud Services](../essentials/metrics-supported.md#microsoftclassiccomputedomainnamesslotsroles) | |Microsoft.ClassicCompute/virtualMachines | No | No | [Classic Virtual Machines](../essentials/metrics-supported.md#microsoftclassiccomputevirtualmachines) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.ClassicStorage/storageAccounts/queueServices | Yes | No | [Storage Accounts (classic) - Queues](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | |Microsoft.ClassicStorage/storageAccounts/tableServices | Yes | No | [Storage Accounts (classic) - Tables](../essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) | |Microsoft.CognitiveServices/accounts | Yes | No | [Cognitive Services](../essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) |
+|Microsoft.Compute/cloudServices | Yes | No | [Cloud Services](../essentials/metrics-supported.md#microsoftcomputecloudservices) |
+|Microsoft.Compute/cloudServices/roles | Yes | No | [Cloud Service Roles](../essentials/metrics-supported.md#microsoftcomputecloudservicesroles) |
|Microsoft.Compute/virtualMachines | Yes | Yes<sup>1</sup> | [Virtual Machines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines) | |Microsoft.Compute/virtualMachineScaleSets | Yes | No |[Virtual machine scale sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) | |Microsoft.ContainerInstance/containerGroups | Yes| No | [Container groups](../essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.DataShare/accounts | Yes | No | [Data Shares](../essentials/metrics-supported.md#microsoftdatashareaccounts) | |Microsoft.DBforMariaDB/servers | No | No | [DB for MariaDB](../essentials/metrics-supported.md#microsoftdbformariadbservers) | |Microsoft.DBforMySQL/servers | No | No |[DB for MySQL](../essentials/metrics-supported.md#microsoftdbformysqlservers)|
+|Microsoft.DBforPostgreSQL/flexibleServers | Yes | No | [DB for PostgreSQL (flexible servers)](../essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers)|
+|Microsoft.DBforPostgreSQL/serverGroupsv2 | Yes | No | DB for PostgreSQL (hyperscale) |
|Microsoft.DBforPostgreSQL/servers | No | No | [DB for PostgreSQL](../essentials/metrics-supported.md#microsoftdbforpostgresqlservers)| |Microsoft.DBforPostgreSQL/serversv2 | No | No | [DB for PostgreSQL V2](../essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2)|
-|Microsoft.DBforPostgreSQL/flexibleServers | Yes | No | [DB for PostgreSQL (flexible servers)](../essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers)|
|Microsoft.Devices/IotHubs | Yes | No |[IoT Hub](../essentials/metrics-supported.md#microsoftdevicesiothubs) | |Microsoft.Devices/provisioningServices| Yes | No | [Device Provisioning Services](../essentials/metrics-supported.md#microsoftdevicesprovisioningservices) | |Microsoft.DigitalTwins/digitalTwinsInstances | Yes | No | [Digital Twins](../essentials/metrics-supported.md#microsoftdigitaltwinsdigitaltwinsinstances) |
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Microsoft.Logic/integrationServiceEnvironments | Yes | No |[Integration Service Environments](../essentials/metrics-supported.md#microsoftlogicintegrationserviceenvironments) | |Microsoft.Logic/workflows | No | No |[Logic Apps](../essentials/metrics-supported.md#microsoftlogicworkflows) | |Microsoft.MachineLearningServices/workspaces | Yes | No | [Machine Learning](../essentials/metrics-supported.md#microsoftmachinelearningservicesworkspaces) |
+|Microsoft.MachineLearningServices/workspaces/onlineEndpoints | Yes | No | Machine Learning - Endpoints |
+|Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments | Yes | No | Machine Learning - Endpoint Deployments |
|Microsoft.Maps/accounts | Yes | No | [Maps Accounts](../essentials/metrics-supported.md#microsoftmapsaccounts) | |Microsoft.Medi#microsoftmediamediaservices) | |Microsoft.Medi#microsoftmediamediaservicesstreamingendpoints) |
The POST operation contains the following JSON payload and schema for all near n
* Learn more about the new [Alerts experience](./alerts-overview.md). * Learn about [log alerts in Azure](./alerts-unified-log.md).
-* Learn about [alerts in Azure](./alerts-overview.md).
+* Learn about [alerts in Azure](./alerts-overview.md).
azure-monitor Alerts Metric Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric-overview.md
Title: Understand how metric alerts work in Azure Monitor. description: Get an overview of what you can do with metric alerts and how they work in Azure Monitor. Previously updated : 01/19/2021 Last updated : 03/11/2021
This feature is currently supported for platform metrics (not custom metrics) fo
| Service | Public Azure | Government | China | |:--|:--|:--|:--|
-| Virtual machines<sup>1</sup> | **Yes** | **Yes** | No |
+| Virtual machines<sup>1</sup> | **Yes** | **Yes** | **Yes** |
| SQL server databases | **Yes** | **Yes** | **Yes** | | SQL server elastic pools | **Yes** | **Yes** | **Yes** | | NetApp files capacity pools | **Yes** | **Yes** | **Yes** |
You can find the full list of supported resource types in this [article](./alert
- [Learn how to deploy metric alerts using Azure Resource Manager templates](./alerts-metric-create-templates.md) - [Learn more about action groups](./action-groups.md) - [Learn more about Dynamic Thresholds condition type](../alerts/alerts-dynamic-thresholds.md)-- [Learn more about troubleshooting problems in metric alerts](alerts-troubleshoot-metric.md)
+- [Learn more about troubleshooting problems in metric alerts](alerts-troubleshoot-metric.md)
azure-monitor Itsmc Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connections.md
The following ITSM products/services are supported. Select the product to view d
> [!NOTE] > We propose our Cherwell and Provance customers to use [Webhook action](./action-groups.md#webhook) to Cherwell and Provance endpoint as another solution to the integration.
+## IP ranges for ITSM partner connections
+To allow ITSM connections from partner ITSM tools, we recommend allow-listing the whole public IP range of the Azure region where your Log Analytics workspace resides. [Details are available here](https://www.microsoft.com/en-us/download/details.aspx?id=56519).
+For the EUS/WEU/EUS2/WUS2/US South Central regions, you can allow-list the ActionGroup network tag only.
+ ## Next steps
-* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
+* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
import com.microsoft.applicationinsights.web.internal.ThreadContext;
RequestTelemetry requestTelemetry = ThreadContext.getRequestTelemetryContext().getHttpRequestTelemetry(); requestTelemetry.setName("myname"); ```+
+### Get the request telemetry id and the operation id using the 2.x SDK
+
+> [!NOTE]
+> This feature is only available in 3.0.3-BETA and later
+
+Add `applicationinsights-web-2.6.2.jar` to your application (all 2.x versions are supported by Application Insights Java 3.0, but it's worth using the latest if you have a choice):
+
+```xml
+<dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>applicationinsights-web</artifactId>
+ <version>2.6.2</version>
+</dependency>
+```
+
+and get the request telemetry id and the operation id in your code:
+
+```java
+import com.microsoft.applicationinsights.web.internal.ThreadContext;
+
+RequestTelemetry requestTelemetry = ThreadContext.getRequestTelemetryContext().getHttpRequestTelemetry();
+String requestId = requestTelemetry.getId();
+String operationId = requestTelemetry.getContext().getOperation().getId();
+```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
Connection string is required. You can find your connection string in your Appli
} ```
-You can also set the connection string using the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`.
+You can also set the connection string using the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`
+(which will then take precedence if the connection string is also specified in the json configuration).
Not setting the connection string will disable the Java agent.
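Several settings in this article note that the environment variable "takes precedence" over the json file. As an illustration only (this is not the agent's actual code, just a sketch of the documented rule):

```java
public class SettingResolution {
    // Illustrative precedence rule: a non-empty environment variable value
    // overrides the value from the json configuration file.
    static String resolve(String envValue, String jsonValue) {
        return (envValue != null && !envValue.isEmpty()) ? envValue : jsonValue;
    }

    public static void main(String[] args) {
        // Env var set: it wins over the json value.
        System.out.println(resolve("InstrumentationKey=env", "InstrumentationKey=json"));
        // Env var absent: fall back to the json value.
        System.out.println(resolve(null, "InstrumentationKey=json"));
    }
}
```

The same resolution order applies to the cloud role name, cloud role instance, sampling percentage, logging level, and self-diagnostics level settings described below.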
If you want to set the cloud role name:
If cloud role name is not set, the Application Insights resource's name will be used to label the component on the application map.
-You can also set the cloud role name using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME`.
+You can also set the cloud role name using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME`
+(which will then take precedence if the cloud role name is also specified in the json configuration).
## Cloud role instance
If you want to set the cloud role instance to something different rather than th
} ```
-You can also set the cloud role instance using the environment variable `APPLICATIONINSIGHTS_ROLE_INSTANCE`.
+You can also set the cloud role instance using the environment variable `APPLICATIONINSIGHTS_ROLE_INSTANCE`
+(which will then take precedence if the cloud role instance is also specified in the json configuration).
## Sampling
Here is an example how to set the sampling to capture approximately **1/3 of all
} ```
-You can also set the sampling percentage using the environment variable `APPLICATIONINSIGHTS_SAMPLING_PERCENTAGE`.
+You can also set the sampling percentage using the environment variable `APPLICATIONINSIGHTS_SAMPLING_PERCENTAGE`
+(which will then take precedence if the sampling percentage is also specified in the json configuration).
> [!NOTE] > For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently sampling doesn't support other values.
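To make the 100/N rule concrete, here is a small hypothetical helper (not part of the SDK) that snaps a requested percentage to the nearest valid value:

```java
public class SamplingPercentage {
    // Sampling keeps roughly 1 out of every N items, so valid percentages are
    // 100/N for integer N: 100, 50, 33.33..., 25, 20, 10, ...
    // This hypothetical helper snaps a requested percentage to that grid.
    static double nearestValid(double requested) {
        long n = Math.max(1, Math.round(100.0 / requested));
        return 100.0 / n;
    }

    public static void main(String[] args) {
        System.out.println(nearestValid(50));  // 50.0 (N = 2)
        System.out.println(nearestValid(33));  // ~33.33 (N = 3)
    }
}
```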
of the JMX MBean that you want to collect.
Numeric and boolean JMX metric values are supported. Boolean JMX metrics are mapped to `0` for false, and `1` for true.
-[//]: # "NOTE: Not documenting APPLICATIONINSIGHTS_JMX_METRICS here"
-[//]: # "json embedded in env var is messy, and should be documented only for codeless attach scenario"
- ## Custom dimensions If you want to add custom dimensions to all of your telemetry:
The default Application Insights threshold is `INFO`. If you want to change this
} ```
-You can also set the threshold using the environment variable `APPLICATIONINSIGHTS_INSTRUMENTATION_LOGGING_LEVEL`.
+You can also set the level using the environment variable `APPLICATIONINSIGHTS_INSTRUMENTATION_LOGGING_LEVEL`
+(which will then take precedence if the level is also specified in the json configuration).
These are the valid `level` values that you can specify in the `applicationinsights.json` file, and how they correspond to logging levels in different logging frameworks:
By default, Application Insights Java 3.0 sends a heartbeat metric once every 15
``` > [!NOTE]
-> You cannot decrease the frequency of the heartbeat, as the heartbeat data is also used to track Application Insights usage.
+> You cannot increase the interval to longer than 15 minutes,
+> because the heartbeat data is also used to track Application Insights usage.
## HTTP Proxy
If your application is behind a firewall and cannot connect directly to Applicat
Application Insights Java 3.0 also respects the global `-Dhttps.proxyHost` and `-Dhttps.proxyPort` if those are set.
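`https.proxyHost` and `https.proxyPort` are standard JVM networking system properties. The sketch below just shows the property names programmatically; in practice, because the 3.0 agent initializes before `main`, you would normally pass them on the command line, and the host and port values here are placeholders:

```java
public class ProxySetup {
    public static void main(String[] args) {
        // Equivalent to passing -Dhttps.proxyHost / -Dhttps.proxyPort on the
        // command line; "myproxy.example.com" and "8080" are placeholders.
        System.setProperty("https.proxyHost", "myproxy.example.com");
        System.setProperty("https.proxyPort", "8080");

        System.out.println(System.getProperty("https.proxyHost"));
    }
}
```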
+## Metric interval
+
+This feature is in preview.
+
+By default, metrics are captured every 60 seconds.
+
+Starting from version 3.0.3-BETA, you can change this interval:
+
+```json
+{
+ "preview": {
+ "metricIntervalSeconds": 300
+ }
+}
+```
+
+The setting applies to all of these metrics:
+
+* Default performance counters, e.g. CPU and Memory
+* Default custom metrics, e.g. Garbage collection timing
+* Configured JMX metrics ([see above](#jmx-metrics))
+* Micrometer metrics ([see above](#auto-collected-micrometer-metrics-including-spring-boot-actuator-metrics))
++ [//]: # "NOTE OpenTelemetry support is in private preview until OpenTelemetry API reaches 1.0" [//]: # "## Support for OpenTelemetry API pre-1.0 releases"
and the console, corresponding to this configuration:
`maxHistory` is the number of rolled over log files that are retained (in addition to the current log file).
-Starting from version 3.0.2, you can also set the self-diagnostics `level` using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`.
+Starting from version 3.0.2, you can also set the self-diagnostics `level` using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`
+(which will then take precedence if the self-diagnostics `level` is also specified in the json configuration).
## An example
azure-monitor Monitor Web App Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/monitor-web-app-availability.md
Title: Monitor availability and responsiveness of any web site | Microsoft Docs description: Set up web tests in Application Insights. Get alerts if a website becomes unavailable or responds slowly. Previously updated : 02/14/2021 Last updated : 03/10/2021
azure-monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/functions.md
Create a function with Log Analytics in the Azure portal by clicking **Save** an
| Function Alias | Short name to use the function in other queries. May not contain spaces and must be unique. | | Category | A category to organize saved queries and functions in **Query explorer**. | -
+You can also create functions using the [REST API](/rest/api/loganalytics/savedsearches/createorupdate) or [PowerShell](/powershell/module/az.operationalinsights/new-azoperationalinsightssavedsearch).
## Use a function
See other lessons for writing Azure Monitor log queries:
- [Advanced aggregations](/azure/data-explorer/write-queries#advanced-aggregations) - [JSON and data structures](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#json-and-data-structures) - [Joins](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#joins)-- [Charts](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#charts)
+- [Charts](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#charts)
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-dev-kit.md
For additional information on the Azure IoT Edge commands, see the [Azure IoT Ed
|Wi-Fi |```sudo journalctl -u systemd-networkd``` |check Mariner Network stack logs | |Wi-Fi |```sudo cat /etc/hostapd/hostapd-wlan1.conf``` |check wifi access point configuration details | |OOBE |```sudo journalctl -u oobe -b``` |check OOBE logs |
-|Telemetry |```audo azure-device-health-id``` |find unique telemetry HW_ID |
+|Telemetry |```sudo azure-device-health-id``` |find unique telemetry HW_ID |
|Azure IoT Edge |```sudo iotedge check``` |run configuration and connectivity checks for common issues | |Azure IoT Edge |```sudo iotedge logs [container name]``` |check container logs, such as speech and vision modules | |Azure IoT Edge |```sudo iotedge support-bundle --since 1h``` |collect module logs, Azure IoT Edge security manager logs, container engine logs, ```iotedge check``` JSON output, and other useful debug information from the past hour |
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.HanaOnAzure | [SAP HANA on Azure Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) | | Microsoft.HardwareSecurityModules | [Azure Dedicated HSM](../../dedicated-hsm/index.yml) | | Microsoft.HDInsight | [HDInsight](../../hdinsight/index.yml) |
-| Microsoft.HealthcareApis | [Azure API for FHIR](../../healthcare-apis/index.yml) |
+| Microsoft.HealthcareApis | [Azure API for FHIR](../../healthcare-apis/fhir/index.yml) |
| Microsoft.HybridCompute | [Azure Arc](../../azure-arc/index.yml) | | Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) | | Microsoft.HybridNetwork | [Private Edge Zones](../../networking/edge-zones-overview.md) |
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-configure.md
On all client machines, from which your applications or users connect to SQL Dat
- .NET Framework 4.6 or later from [https://msdn.microsoft.com/library/5a4x27ek.aspx](/dotnet/framework/install/guide-for-developers). - Azure Active Directory Authentication Library for SQL Server (*ADAL.DLL*). Below are the download links to install the latest SSMS, ODBC, and OLE DB driver that contains the *ADAL.DLL* library. - [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms)
- - [ODBC Driver 17 for SQL Server](https://www.microsoft.com/download/details.aspx?id=56567)
- - [OLE DB Driver 18 for SQL Server](https://www.microsoft.com/download/details.aspx?id=56730)
+ - [ODBC Driver 17 for SQL Server](/sql/connect/odbc/download-odbc-driver-for-sql-server?view=sql-server-ver15)
+ - [OLE DB Driver 18 for SQL Server](/sql/connect/oledb/download-oledb-driver-for-sql-server?view=sql-server-ver15)
You can meet these requirements by:
azure-sql Database Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-copy.md
ms.devlang:
- Previously updated : 10/30/2020+ Last updated : 03/10/2021 # Copy a transactionally consistent copy of a database in Azure SQL Database
Log in to the master database with the server administrator login or the login t
This command copies Database1 to a new database named Database2 on the same server. Depending on the size of your database, the copying operation might take some time to complete. ```sql
- -- execute on the master database to start copying
+ -- Execute on the master database to start copying
CREATE DATABASE Database2 AS COPY OF Database1; ```
This command copies Database1 to a new database named Database2 in an elastic po
Database1 can be a single or pooled database. Copying between different tier pools is supported, but some cross-tier copies will not succeed. For example, you can copy a single or elastic standard db into a general purpose pool, but you can't copy a standard elastic db into a premium pool. ```sql
- -- execute on the master database to start copying
+ -- Execute on the master database to start copying
CREATE DATABASE "Database2" AS COPY OF "Database1"
- (SERVICE_OBJECTIVE = ELASTIC_POOL( name = "pool1" ) ) ;
+ (SERVICE_OBJECTIVE = ELASTIC_POOL( name = "pool1" ) );
``` ### Copy to a different server
CREATE DATABASE Database2 AS COPY OF server1.Database1;
You can use the steps in the [Copy a SQL Database to a different server](#copy-to-a-different-server) section to copy your database to a server in a different subscription using T-SQL. Make sure you use a login that has the same name and password as the database owner of the source database. Additionally, the login must be a member of the `dbmanager` role or a server administrator, on both source and target servers. ```sql
-Step# 1
-Create login and user in the master database of the source server.
+--Step# 1
+--Create login and user in the master database of the source server.
CREATE LOGIN loginname WITH PASSWORD = 'xxxxxxxxx' GO
-CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo]
+CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo];
+GO
+ALTER ROLE dbmanager ADD MEMBER loginname;
GO
-Step# 2
-Create the user in the source database and grant dbowner permission to the database.
+--Step# 2
+--Create the user in the source database and grant dbowner permission to the database.
-CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo]
+CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo];
GO
-exec sp_addrolemember 'db_owner','loginname'
+ALTER ROLE db_owner ADD MEMBER loginname;
GO
-Step# 3
-Capture the SID of the user "loginname" from master database
+--Step# 3
+--Capture the SID of the user "loginname" from master database
-SELECT [sid] FROM sysusers WHERE [name] = 'loginname'
+SELECT [sid] FROM sysusers WHERE [name] = 'loginname';
-Step# 4
-Connect to Destination server.
-Create login and user in the master database, same as of the source server.
+--Step# 4
+--Connect to Destination server.
+--Create login and user in the master database, same as of the source server.
-CREATE LOGIN loginname WITH PASSWORD = 'xxxxxxxxx', SID = [SID of loginname login on source server]
+CREATE LOGIN loginname WITH PASSWORD = 'xxxxxxxxx', SID = [SID of loginname login on source server];
GO
-CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo]
+CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo];
GO
-exec sp_addrolemember 'dbmanager','loginname'
+ALTER ROLE dbmanager ADD MEMBER loginname;
GO
-Step# 5
-Execute the copy of database script from the destination server using the credentials created
+--Step# 5
+--Execute the copy of database script from the destination server using the credentials created
CREATE DATABASE new_database_name
-AS COPY OF source_server_name.source_database_name
+AS COPY OF source_server_name.source_database_name;
``` > [!NOTE]
Monitor the copying process by querying the [sys.databases](/sql/relational-data
> If you decide to cancel the copying while it is in progress, execute the [DROP DATABASE](/sql/t-sql/statements/drop-database-transact-sql) statement on the new database. > [!IMPORTANT]
-> If you need to create a copy with a substantially smaller service objective than the source, the target database may not have sufficient resources to complete the seeding process and it can cause the copy operaion to fail. In this scenario use a geo-restore request to create a copy in a different server and/or a different region. See [Recover an Azure SQL Database using database backups](recovery-using-backups.md#geo-restore) for more information.
+> If you need to create a copy with a substantially smaller service objective than the source, the target database may not have sufficient resources to complete the seeding process and it can cause the copy operation to fail. In this scenario use a geo-restore request to create a copy in a different server and/or a different region. See [Recover an Azure SQL Database using database backups](recovery-using-backups.md#geo-restore) for more information.
## Azure RBAC roles and permissions to manage database copy
azure-sql Elastic Scale Shard Map Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-scale-shard-map-management.md
To easily scale out databases on Azure SQL Database, use a shard map manager. Th
![Shard map management](./media/elastic-scale-shard-map-management/glossary.png)
-Understanding how these maps are constructed is essential to shard map management. This is done using the ShardMapManager class ([Java](/jav) to manage shard maps.
+Understanding how these maps are constructed is essential to shard map management. This is done using the ShardMapManager class ([Java](/jav) to manage shard maps.
## Shard maps and shard mappings
For each shard, you must select the type of shard map to create. The choice depe
1. List mapping 2. Range mapping
-For a single-tenant model, create a **list-mapping** shard map. The single-tenant model assigns one database per tenant. This is an effective model for SaaS developers as it simplifies management.
+For a single-tenant model, create a **list-mapping** shard map. The single-tenant model assigns one database per tenant. This is an effective model for SaaS developers as it simplifies shard map management.
![List mapping][1]
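As a rough conceptual sketch (plain Python, not the Elastic Database client library API), the two mapping types resolve a sharding key like this:

```python
# Conceptual sketch of shard map lookups; real applications use the
# ShardMapManager class from the Elastic Database client library instead.

def list_mapping_lookup(mapping, key):
    """List mapping: each key maps explicitly to one shard (for example, one database per tenant)."""
    return mapping[key]

def range_mapping_lookup(ranges, key):
    """Range mapping: each [low, high) range of keys maps to one shard."""
    for low, high, shard in ranges:
        if low <= key < high:
            return shard
    raise KeyError(f"no shard mapping covers key {key}")

# Single-tenant model: tenant ID -> dedicated database
tenant_to_db = {1: "tenant1_db", 2: "tenant2_db"}
print(list_mapping_lookup(tenant_to_db, 2))        # tenant2_db

# Multi-tenant model: contiguous key ranges -> shared shards
key_ranges = [(1, 50, "shard_a"), (50, 100, "shard_b")]
print(range_mapping_lookup(key_ranges, 75))        # shard_b
```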
For scenarios that require data movement, however, the split-merge tool is neede
<!--Image references--> [1]: ./media/elastic-scale-shard-map-management/listmapping.png [2]: ./media/elastic-scale-shard-map-management/rangemapping.png
-[3]: ./media/elastic-scale-shard-map-management/multipleonsingledb.png
+[3]: ./media/elastic-scale-shard-map-management/multipleonsingledb.png
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/maintenance-window.md
Previously updated : 03/05/2021 Last updated : 03/11/2021 # Maintenance window (Preview)
Choosing a maintenance window other than the default is currently available in t
- Central US - East US - East US2
+- East Asia
- Japan East - NorthCentral US - North Europe
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
Title: Concepts - Network interconnectivity description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. Previously updated : 02/02/2021 Last updated : 03/11/2021 # Azure VMware Solution networking and interconnectivity concepts
The diagram below shows the on-premises to private cloud interconnectivity, whic
For full interconnectivity to your private cloud, enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. The authorization key and peering ID are used to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your new private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments to your private cloud. For more information on the procedures to request and use the authorization key and peering ID, see the [tutorial for creating an ExpressRoute Global Reach peering to a private cloud](tutorial-expressroute-global-reach-private-cloud.md).
+## Limitations
+ ## Next steps Now that you've covered Azure VMware Solution network and interconnectivity concepts, you may want to learn about:
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-storage.md
Title: Concepts - Storage description: Learn about the key storage capabilities in Azure VMware Solution private clouds. Previously updated : 02/02/2021 Last updated : 03/11/2021 # Azure VMware Solution storage concepts
vSAN datastores use data-at-rest encryption by default. The encryption solution
## Scaling
-Native cluster storage capacity is scaled by adding hosts to a cluster. For clusters that use HE hosts, the raw cluster-wide capacity is increased by 15.4 TB with each added host. Clusters that are built with GP hosts have their raw capacity increased by 7.7 TB with each added host. In both types of clusters, hosts take about 10 minutes to be added to a cluster. For instructions on scaling clusters, see the [scale private cloud tutorial][tutorial-scale-private-cloud].
+Native cluster storage capacity is scaled by adding hosts to a cluster. For clusters that use AVS36 hosts, the raw cluster-wide capacity is increased by 15.4 TB with each added host. Hosts take about 10 minutes to be added to a cluster. For instructions on scaling clusters, see the [scale private cloud tutorial][tutorial-scale-private-cloud].
## Azure storage integration
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-restore-vms.md
Currently, secondary region [RPO](azure-backup-glossary.md#rpo-recovery-point-ob
[Azure zone pinned VMs](../virtual-machines/windows/create-portal-availability-zone.md) can be restored in any [availability zones](../availability-zones/az-overview.md) of the same region.
-In the restore process, you'll see the option **Availability Zone.** You'll see your default zone first. To choose a different zone, choose the number of the zone of your choice. If the pinned zone is unavailable, you won't be able to restore the data to another zone because the backed-up data isn't zonally replicated.
+In the restore process, you'll see the option **Availability Zone**. You'll see your default zone first. To choose a different zone, choose the number of the zone of your choice. If the pinned zone is unavailable, you won't be able to restore the data to another zone because the backed-up data isn't zonally replicated. Restore to availability zones is possible only from recovery points in the vault tier.
![Choose availability zone](./media/backup-azure-arm-restore-vms/cross-zonal-restore.png)
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
Azure Backup has added the Cross Region Restore feature to strengthen data avail
| Backup Management type | Supported | Supported Regions | | - | | -- |
-| Azure VM | Supported for Azure VMs with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions except for France Central, Australia Central, South Africa North, UAE North, Switzerland North, Germany West Central, Norway East. <br>For information about use in those regions, contact [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com) |
-| SQL /SAP HANA | In preview | Available in all Azure public regions except for France Central, Australia Central, South Africa North, UAE North, Switzerland North, Germany West Central, Norway East. <br>For information about use in those regions, contact [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com) |
+| Azure VM | Supported for Azure VMs with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions except for France Central, Australia Central, South Africa North, UAE North, Switzerland North, Germany West Central, Norway East. <br>For information about use in those regions, contact [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com) |
+| SQL /SAP HANA | In preview | Available in all Azure public regions and sovereign regions except for France Central, Australia Central, South Africa North, UAE North, Switzerland North, Germany West Central, Norway East. <br>For information about use in those regions, contact [AskAzureBackupTeam@microsoft.com](mailto:AskAzureBackupTeam@microsoft.com) |
| MARS Agent/On premises | No | N/A | | AFS (Azure file shares) | No | N/A |
bastion Howto Metrics Monitor Alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/howto-metrics-monitor-alert.md
+
+ Title: 'Configure monitoring and metrics using Azure Monitor'
+
+description: Learn about Azure Bastion monitoring and metrics using Azure Monitor, the solution for metrics, alerting, and diagnostic logs across Azure.
+++++ Last updated : 03/09/2021+++
+# How to configure monitoring and metrics for Azure Bastion using Azure Monitor
+
+This article helps you work with monitoring and metrics for Azure Bastion using Azure Monitor.
+
+>[!NOTE]
+>Using **Classic Metrics** is not recommended.
+>
+
+## About metrics
+
+Azure Bastion offers several metrics. The following table shows the category and dimensions for each available metric.
+
+|**Metric**|**Category**|**Dimension(s)**|
+| | | |
+|Bastion communication status**|[Availability](#availability)|N/A|
+|Total memory|[Availability](#availability)|Instance|
+|Used CPU|[Traffic](#traffic)|Instance|
+|Used memory|[Traffic](#traffic)|Instance|
+|Session count|[Performance](#performance)|Instance|
+
+** Bastion communication status is only applicable for bastion hosts deployed after November 2020.
+
+### <a name="availability"></a>Availability metrics
+
+#### <a name="communication-status"></a>Bastion communication status
+
+You can view the communication status of Azure Bastion, aggregated across all instances comprising the bastion host.
+
+* A value of **1** indicates that the bastion is available.
+* A value of **0** indicates that the bastion service is unavailable.
++
+#### <a name="total-memory"></a>Total memory
+
+You can view the total memory of Azure Bastion, split across each bastion instance.
++
+### <a name="traffic"></a>Traffic metrics
+
+#### <a name="used-cpu"></a>Used CPU
+
+You can view the CPU utilization of Azure Bastion, split across each bastion instance. Monitoring this metric will help gauge the availability and capacity of the instances that comprise Azure Bastion.
++
+#### <a name="used-memory"></a>Used memory
+
+You can view the memory utilization of Azure Bastion, split across each bastion instance. Monitoring this metric will help gauge the availability and capacity of the instances that comprise Azure Bastion.
++
+### <a name="performance"></a>Performance metrics
+
+#### Session count
+
+You can view the count of active sessions per bastion instance, aggregated across each session type (RDP and SSH). Each Azure Bastion can support a range of active RDP and SSH sessions. Monitoring this metric will help you to understand if you need to adjust the number of instances running the bastion service. For more information about the session count Azure Bastion can support, refer to the [Azure Bastion FAQ](bastion-faq.md). For more information about which Bastion SKUs support instance scaling, refer to [About Bastion SKUs](bastion-connect-vm-scale-set.md).
++
+## <a name="metrics"></a>How to view metrics
+
+1. To view metrics, navigate to your bastion host.
+1. From the **Monitoring** list, select **Metrics**.
+1. Select the parameters. If no metrics are set, click **Add metric**, and then select the parameters.
+
+ * **Scope:** By default, the scope is set to the bastion host.
+ * **Metric Namespace:** Standard Metrics.
+ * **Metric:** Select the metric that you want to view.
+
+1. Once a metric is selected, the default aggregation will be applied. Optionally, you can apply splitting, which will show the metric with different dimensions.
+
+## Next steps
+
+Read the [Bastion FAQ](bastion-faq.md).
+
batch Batch Pool Cloud Service To Virtual Machine Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-cloud-service-to-virtual-machine-configuration.md
Title: Migrate Batch pool configuration from Cloud Services to Virtual Machines description: Learn how to update your pool configuration to the latest and recommended configuration Previously updated : 2/16/2021 Last updated : 03/11/2021
-# Migrate Batch pool configuration from Cloud Services to Virtual Machines
+# Migrate Batch pool configuration from Cloud Services to Virtual Machine
-Batch pools can be created using either [cloudServiceConfiguration](/rest/api/batchservice/pool/add#cloudserviceconfiguration) or [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration). 'virtualMachineConfiguration' is the recommended configuration as it supports all Batch capabilities. 'cloudServiceConfiguration' pools do not support all features and no new features are planned.
+Currently, Batch pools can be created using either [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) or [cloudServiceConfiguration](/rest/api/batchservice/pool/add#cloudserviceconfiguration). We recommend using Virtual Machine Configuration only, as this configuration supports all Batch capabilities.
-If you use 'cloudServiceConfiguration' pools, it is highly recommended that you move to use 'virtualMachineConfiguration' pools. This will enable you to benefit from all Batch capabilities, such as an expanded [selection of VM series](batch-pool-vm-sizes.md), Linux VMs, [containers](batch-docker-container-workloads.md), [Azure Resource Manager virtual networks](batch-virtual-network.md), and [node disk encryption](disk-encryption.md).
+Cloud Services Configuration pools don't support some of the current Batch features, and won't support any newly added features. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
-This article describes how to migrate to 'virtualMachineConfiguration'.
+If your Batch solutions currently use 'cloudServiceConfiguration' pools, we recommend changing to 'virtualMachineConfiguration' as soon as possible. This will enable you to benefit from all Batch capabilities, such as an expanded [selection of VM series](batch-pool-vm-sizes.md), Linux VMs, [containers](batch-docker-container-workloads.md), [Azure Resource Manager virtual networks](batch-virtual-network.md), and [node disk encryption](disk-encryption.md).
-## New pools are required
+## Create a pool using Virtual Machine Configuration
-Existing active pools cannot be updated from 'cloudServiceConfiguration' to 'virtualMachineConfiguration', new pools must be created. Creating pools using 'virtualMachineConfiguration' is supported by all Batch APIs, command-line tools, Azure portal, and the Batch Explorer UI.
+You can't switch an existing active pool that uses 'cloudServiceConfiguration' to use 'virtualMachineConfiguration'. Instead, you'll need to create new pools. Once you've created your new 'virtualMachineConfiguration' pools and replicated all of your jobs and tasks, you can delete the old 'cloudServiceConfiguration' pools that you're no longer using.
-**The [.NET](tutorial-parallel-dotnet.md) and [Python](tutorial-parallel-python.md) tutorials provide examples of pool creation using 'virtualMachineConfiguration'.**
+All Batch APIs, command-line tools, the Azure portal, and the Batch Explorer UI let you create pools using 'virtualMachineConfiguration'.
+
+For a walkthrough of the process of creating pools that use 'virtualMachineConfiguration', see the [.NET tutorial](tutorial-parallel-dotnet.md) or the [Python tutorial](tutorial-parallel-python.md).
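As an illustrative sketch of the request shape (field names follow the pool add REST reference linked above; the image reference and node agent SKU values are placeholders you'd replace with a supported image for your workload), a pool-add body using 'virtualMachineConfiguration' looks roughly like:

```json
{
  "id": "migrated-pool",
  "vmSize": "standard_d2s_v3",
  "virtualMachineConfiguration": {
    "imageReference": {
      "publisher": "canonical",
      "offer": "ubuntuserver",
      "sku": "18.04-lts",
      "version": "latest"
    },
    "nodeAgentSKUId": "batch.node.ubuntu 18.04"
  },
  "targetDedicatedNodes": 2
}
```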
## Pool configuration differences
-The following should be considered when updating pool configuration:
+Some of the key differences between the two configurations include:
-- 'cloudServiceConfiguration' pool nodes are always Windows OS, 'virtualMachineConfiguration' pools can either be Linux or Windows OS.
+- 'cloudServiceConfiguration' pool nodes only use Windows OS. 'virtualMachineConfiguration' pools can use either Linux or Windows OS.
- Compared to 'cloudServiceConfiguration' pools, 'virtualMachineConfiguration' pools have a richer set of capabilities, such as container support, data disks, and disk encryption.
+- Pool and node startup and delete times may differ slightly between 'cloudServiceConfiguration' pools and 'virtualMachineConfiguration' pools.
- 'virtualMachineConfiguration' pool nodes utilize managed OS disks. The [managed disk type](../virtual-machines/disks-types.md) that is used for each node depends on the VM size chosen for the pool. If a 's' VM size is specified for the pool, for example 'Standard_D2s_v3', then a premium SSD is used. If a 'non-s' VM size is specified, for example 'Standard_D2_v3', then a standard HDD is used. > [!IMPORTANT]
- > As with Virtual Machines and Virtual Machine Scale Sets, the OS managed disk used for each node incurs a cost, which is additional to the cost of the VMs. There is no OS disk cost for 'cloudServiceConfiguration' nodes as the OS disk is created on the nodes local SSD.
--- Pool and node startup and delete times may differ slightly between 'cloudServiceConfiguration' pools and 'virtualMachineConfiguration' pools.
+ > As with Virtual Machines and Virtual Machine Scale Sets, the OS managed disk used for each node incurs a cost, which is additional to the cost of the VMs. There is no OS disk cost for 'cloudServiceConfiguration' nodes, as the OS disk is created on the node's local SSD.
## Azure Data Factory custom activity pools Azure Batch pools can be used to run Data Factory custom activities. Any 'cloudServiceConfiguration' pools used to run custom activities will need to be deleted and new 'virtualMachineConfiguration' pools created. -- Pipelines should be paused before delete/recreate to ensure no executions will be interrupted.
+When creating your new pools to run Data Factory custom activities, follow these practices:
+
+- Pause all pipelines before creating the new pools and deleting the old ones to ensure no executions will be interrupted.
- The same pool id can be used to avoid linked service configuration changes. - Resume pipelines when new pools have been created.
-For more information about using Azure Batch to run Data Factory custom activities:
--- [Azure Batch linked service](../data-factory/compute-linked-services.md#azure-batch-linked-service)-- [Custom activities in a Data Factory pipeline](../data-factory/transform-data-using-dotnet-custom-activity.md)
+For more information about using Azure Batch to run Data Factory custom activities, see [Azure Batch linked service](../data-factory/compute-linked-services.md#azure-batch-linked-service) and [Custom activities in a Data Factory pipeline](../data-factory/transform-data-using-dotnet-custom-activity.md).
## Next steps - Learn more about [pool configurations](nodes-and-pools.md#configurations). - Learn more about [pool best practices](best-practices.md#pools).-- REST API reference for [pool addition](/rest/api/batchservice/pool/add) and [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration).
+- See the REST API reference for [pool addition](/rest/api/batchservice/pool/add) and [virtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration).
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 02/03/2020 Last updated : 03/11/2020
This article discusses a collection of best practices and useful tips for using
- **Pool allocation mode** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable an important, but small subset of scenarios. You can read more about user subscription mode at [Additional configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode). -- **'cloudServiceConfiguration' or 'virtualMachineConfiguration'.**
- 'virtualMachineConfiguration' should be used. All Batch features are supported by 'virtualMachineConfiguration' pools. Not all features are supported for 'cloudServiceConfiguration' pools and no new capabilities are being planned.
+- **'cloudServiceConfiguration' or 'virtualMachineConfiguration'.**
+ While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
- **Consider job and task run time when determining job to pool mapping.** If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job is not long, do not allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job.
batch Nodes And Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/nodes-and-pools.md
Title: Nodes and pools in Azure Batch description: Learn about compute nodes and pools and how they are used in an Azure Batch workflow from a development standpoint. Previously updated : 11/20/2020 Last updated : 03/11/2021 # Nodes and pools in Azure Batch
When you create a Batch pool, you specify the Azure virtual machine configuratio
There are two types of pool configurations available in Batch. > [!IMPORTANT]
-> Pools should be configured using 'Virtual Machine Configuration' and not 'Cloud Services Configuration'. All Batch features are supported by 'Virtual Machine Configuration' pools and new features are being added. 'Cloud Services Configuration' pools do not support all features and no new capabilities are planned.
+> While you can currently create pools using either configuration, new pools should be configured using Virtual Machine Configuration and not Cloud Services Configuration. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
### Virtual Machine Configuration
The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nod
### Cloud Services Configuration > [!WARNING]
-> Cloud Service Configuration Pools are deprecated. Please use Virtual Machine Configuration Pools instead.
+> Cloud Services Configuration pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). Please use Virtual Machine Configuration pools instead. For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
The **Cloud Services Configuration** specifies that the pool is composed of Azure Cloud Services nodes. Cloud Services provides only Windows compute nodes. Available operating systems for Cloud Services Configuration pools are listed in the [Azure Guest OS releases and SDK compatibility matrix](../cloud-services/cloud-services-guestos-update-matrix.md), and available compute node sizes are listed in [Sizes for Cloud Services](../cloud-services/cloud-services-sizes-specs.md). When you create a pool that contains Cloud Services nodes, you specify the node size and its *OS Family* (which determines which versions of .NET are installed with the OS). Cloud Services is deployed to Azure more quickly than virtual machines running Windows. If you want pools of Windows compute nodes, you may find that Cloud Services provide a performance benefit in terms of deployment time.
-As with worker roles within Cloud Services, you can specify an *OS Version* (for more information on worker roles, see the [Cloud Services overview](../cloud-services/cloud-services-choose-me.md)). We recommend that you specify `Latest (*)` for the *OS Version* so that the nodes are automatically upgraded, and there is no work required to cater to newly released versions. The primary use case for selecting a specific OS version is to ensure application compatibility, which allows backward compatibility testing to be performed before allowing the version to be updated. After validation, the *OS Version* for the pool can be updated and the new OS image can be installed. Any running tasks will be interrupted and requeued.
+As with worker roles within Cloud Services, you can specify an *OS Version*. We recommend that you specify `Latest (*)` for the *OS Version* so that the nodes are automatically upgraded, and there is no work required to cater to newly released versions. The primary use case for selecting a specific OS version is to ensure application compatibility, which allows backward compatibility testing to be performed before allowing the version to be updated. After validation, the *OS Version* for the pool can be updated and the new OS image can be installed. Any running tasks will be interrupted and requeued.
### Node Agent SKUs
cloud-services Cloud Services Sizes Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-sizes-specs.md
We have created the concept of the Azure Compute Unit (ACU) to provide a way of
| [A8-A11](#a-series) |225* | | [A v2](#av2-series) |100 | | [D](#d-series) |160 |
-| [D v2](#dv2-series) |160 - 190* |
+| [D v2](#dv2-series) |210 - 250* |
| [D v3](#dv3-series) |160 - 190* | | [E v3](#ev3-series) |160 - 190* | | [G](#g-series) |180 - 240* |
Get-AzureRoleSize | where SupportedByWebWorkerRoles -eq $true | select InstanceS
## Next steps * Learn about [azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
-* Learn more [about high performance compute VM sizes](../virtual-machines/sizes-hpc.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) for HPC workloads.
+* Learn more [about high performance compute VM sizes](../virtual-machines/sizes-hpc.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) for HPC workloads.
cognitive-services Query Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/query-knowledge-base.md
A user query is the question that the end user asks of the knowledge base, such
"userId": "sd53lsY=" } ```
+You control the response by setting properties such as [scoreThreshold](./confidence-score.md#choose-a-score-threshold), [top](../how-to/improve-knowledge-base.md#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers), and [strictFilters](../how-to/query-knowledge-base-with-metadata.md).
-You control the response by setting properties such as [scoreThreshold](./confidence-score.md#choose-a-score-threshold), [top](../how-to/improve-knowledge-base.md#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers), and [strictFilters](../how-to/metadata-generateanswer-usage.md#filter-results-with-strictfilters-for-metadata-tags).
-
-Use [conversation context](../how-to/metadata-generateanswer-usage.md#use-question-and-answer-results-to-keep-conversation-context) with [multi-turn functionality](../how-to/multiturn-conversation.md) to keep the conversation going to refine the questions and answers, to find the correct and final answer.
+Use [conversation context](../how-to/query-knowledge-base-with-metadata.md) with [multi-turn functionality](../how-to/multiturn-conversation.md) to keep the conversation going to refine the questions and answers, to find the correct and final answer.
### The response from a call to generate an answer
cognitive-services Improve Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/improve-knowledge-base.md
When you reimport this app, the active learning continues to collect information
A bot or other client application should use the following architectural flow to use active learning:
-* Bot [gets the answer from the knowledge base](#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers) with the GenerateAnswer API, using the `top` property to get a number of answers.
+1. Bot [gets the answer from the knowledge base](#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers) with the GenerateAnswer API, using the `top` property to get a number of answers.
- #### Use the top property in the GenerateAnswer request to get several matching answers
-
- When submitting a question to QnA Maker for an answer, the `top` property of the JSON body sets the number of answers to return.
-
- ```json
- {
- "question": "wi-fi",
- "isTest": false,
- "top": 3
- }
- ```
-
-* Bot determines explicit feedback:
+2. Bot determines explicit feedback:
* Using your own [custom business logic](#use-the-score-property-along-with-business-logic-to-get-list-of-answers-to-show-user), filter out low scores. * In the bot or client-application, display list of possible answers to the user and get user's selected answer.
-* Bot [sends selected answer back to QnA Maker](#bot-framework-sample-code) with the [Train API](#train-api).
+3. Bot [sends selected answer back to QnA Maker](#bot-framework-sample-code) with the [Train API](#train-api).
+
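As a sketch of step 3 above, the Train API accepts feedback records pairing the user's question with the QnA ID of the answer the user selected (the values below are illustrative, reusing the `userId` shown earlier in this article):

```json
{
  "feedbackRecords": [
    {
      "userId": "sd53lsY=",
      "userQuestion": "wi-fi",
      "qnaId": 103
    }
  ]
}
```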
+### Use the top property in the GenerateAnswer request to get several matching answers
+When submitting a question to QnA Maker for an answer, the `top` property of the JSON body sets the number of answers to return.
+
+```json
+{
+ "question": "wi-fi",
+ "isTest": false,
+ "top": 3
+}
+```
### Use the score property along with business logic to get list of answers to show user
cognitive-services Metadata Generateanswer Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md
Last updated 11/09/2020
-# Get an answer with the GenerateAnswer API and metadata
+# Get an answer with the GenerateAnswer API
To get the predicted answer to a user's question, use the GenerateAnswer API. When you publish a knowledge base, you can see information about how to use this API on the **Publish** page. You can also configure the API to filter answers based on metadata tags, and test the knowledge base from the endpoint with the test query string parameter.
-QnA Maker lets you add metadata, in the form of key and value pairs, to your pairs of questions and answers. You can then use this information to filter results to user queries, and to store additional information that can be used in follow-up conversations. For more information, see [Knowledge base](../index.yml).
-
-<a name="qna-entity"></a>
-
-## Store questions and answers with a QnA entity
-
-It's important to understand how QnA Maker stores the question and answer data. The following illustration shows a QnA entity:
-
-![Illustration of a QnA entity](../media/qnamaker-how-to-metadata-usage/qna-entity.png)
-
-Each QnA entity has a unique and persistent ID. You can use the ID to make updates to a particular QnA entity.
- <a name="generateanswer-api"></a> ## Get answer predictions with the GenerateAnswer API
The [response](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswe
The previous JSON response included an answer with a score of 38.5%.
+## Match questions only, by text
+
+By default, QnA Maker searches through both questions and answers. To search through questions only when generating an answer, set `RankerType` to `QuestionOnly` in the POST body of the GenerateAnswer request.
+
+You can search the published knowledge base using `isTest=false`, or the test knowledge base using `isTest=true`.
+
+```json
+{
+ "question": "Hi",
+ "top": 30,
+ "isTest": true,
+ "RankerType":"QuestionOnly"
+}
+```
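The same body can be assembled programmatically. A small sketch, assuming a hypothetical helper function; `isTest` toggles between the test and published knowledge base.

```python
import json

def question_only_body(question, top=30, is_test=True):
    """Build a GenerateAnswer body that ranks on question text only."""
    return json.dumps({
        "question": question,
        "top": top,
        "isTest": is_test,
        "RankerType": "QuestionOnly",  # skip answer text during ranking
    })

body = question_only_body("Hi")
print(body)
```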
## Use QnA Maker with a bot in C#

The bot framework provides access to the QnA Maker's properties with the [getAnswer API](/dotnet/api/microsoft.bot.builder.ai.qna.qnamaker.getanswersasync#Microsoft_Bot_Builder_AI_QnA_QnAMaker_GetAnswersAsync_Microsoft_Bot_Builder_ITurnContext_Microsoft_Bot_Builder_AI_QnA_QnAMakerOptions_System_Collections_Generic_Dictionary_System_String_System_String__System_Collections_Generic_Dictionary_System_String_System_Double__):
var qnaResults = await this.qnaMaker.getAnswers(stepContext.context, qnaMakerOpt
The previous JSON requested only answers that are at 30% or above the threshold score.
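Applying such a threshold can also be done client-side. A minimal sketch, assuming a hypothetical list of answers already returned by GenerateAnswer, where `score` is the 0–100 confidence value:

```python
def filter_by_score(answers, threshold=30.0):
    """Keep only answers whose confidence score meets the threshold (0-100)."""
    return [a for a in answers if a["score"] >= threshold]

# Hypothetical answers, shaped like the `answers` array in a real response.
answers = [
    {"answer": "10.30 PM", "score": 38.5},
    {"answer": "Check the front desk.", "score": 12.0},
]
print(filter_by_score(answers, threshold=30.0))
```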
-<a name="metadata-example"></a>
-
-## Use metadata to filter answers by custom metadata tags
-
-Adding metadata allows you to filter the answers by these metadata tags. Add the metadata column from the **View Options** menu. Add metadata to your knowledge base by selecting the metadata **+** icon to add a metadata pair. This pair consists of one key and one value.
-
-![Screenshot of adding metadata](../media/qnamaker-how-to-metadata-usage/add-metadata.png)
-
-<a name="filter-results-with-strictfilters-for-metadata-tags"></a>
-
-## Filter results with strictFilters for metadata tags
-
-Consider the user question "When does this hotel close?", where the intent is implied for the restaurant "Paradise."
-
-Because results are required only for the restaurant "Paradise", you can set a filter in the GenerateAnswer call on the metadata "Restaurant Name". The following example shows this:
-
-```json
-{
- "question": "When does this hotel close?",
- "top": 1,
- "strictFilters": [ { "name": "restaurant", "value": "paradise"}]
-}
-```
-
-### Logical AND by default
-
-To combine several metadata filters in the query, add the additional metadata filters to the array of the `strictFilters` property. By default, the values are logically combined (AND). A logical combination requires all filters to match the QnA pair in order for the pair to be returned in the answer.
-
-This is equivalent to using the `strictFiltersCompoundOperationType` property with the value of `AND`.
-
-### Logical OR using strictFiltersCompoundOperationType property
-
-When combining several metadata filters, if you are only concerned with one or some of the filters matching, use the `strictFiltersCompoundOperationType` property with the value of `OR`.
-
-This allows your knowledge base to return answers when any filter matches but won't return answers that have no metadata.
-
-```json
-{
- "question": "When do facilities in this hotel close?",
- "top": 1,
- "strictFilters": [
- { "name": "type","value": "restaurant"},
- { "name": "type", "value": "bar"},
- { "name": "type", "value": "poolbar"}
- ],
- "strictFiltersCompoundOperationType": "OR"
-}
-```
-
-### Metadata examples in quickstarts
-
-Learn more about metadata in the QnA Maker portal quickstart for metadata:
-* [Authoring - add metadata to QnA pair](../quickstarts/add-question-metadata-portal.md#add-metadata-to-filter-the-answers)
-* [Query prediction - filter answers by metadata](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md)
-
-<a name="keep-context"></a>
-
-## Use question and answer results to keep conversation context
-
-The response to the GenerateAnswer contains the corresponding metadata information of the matched question and answer pair. You can use this information in your client application to store the context of the previous conversation for use in later conversations.
-
-```json
-{
- "answers": [
- {
- "questions": [
- "What is the closing time?"
- ],
- "answer": "10.30 PM",
- "score": 100,
- "id": 1,
- "source": "Editorial",
- "metadata": [
- {
- "name": "restaurant",
- "value": "paradise"
- },
- {
- "name": "location",
- "value": "secunderabad"
- }
- ]
- }
- ]
-}
-```
-
-## Match questions only, by text
-
-By default, QnA Maker searches through both questions and answers. To search through questions only when generating an answer, set `RankerType` to `QuestionOnly` in the POST body of the GenerateAnswer request.
-
-You can search the published knowledge base using `isTest=false`, or the test knowledge base using `isTest=true`.
-
-```json
-{
- "question": "Hi",
- "top": 30,
- "isTest": true,
- "RankerType":"QuestionOnly"
-}
-```
- ## Return Precise Answers ### Generate Answer API
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/network-isolation.md
You can add IPs to App service allowlist to restrict access or Configure App Ser
2. Select the IPs of "CognitiveServicesManagement".
3. Navigate to the networking section of your App Service resource, and click on the "Configure Access Restriction" option to add the IPs to an allowlist.
- ![inbound port exceptions](../media/inbound-ports.png)
- We also have an automated script to do the same for your App Service. You can find the [PowerShell script to configure an allowlist](https://github.com/pchoudhari/QnAMakerBackupRestore/blob/master/AddRestrictedIPAzureAppService.ps1) on GitHub. You need to input the subscription ID, resource group, and App Service name as script parameters. Running the script automatically adds the IPs to the App Service allowlist.

#### Configure App Service Environment to host QnA Maker App Service
The App Service Environment(ASE) can be used to host QnA Maker App service. Plea
3. Update the Network Security Group associated with the App Service Environment:
   1. Update the pre-created Inbound Security Rules as per your requirements.
   2. Add a new Inbound Security Rule with the source as 'Service Tag' and the source service tag as 'CognitiveServicesManagement'.
+
+ ![inbound port exceptions](../media/inbound-ports.png)
+ 4. Create a QnA Maker cognitive service instance (Microsoft.CognitiveServices/accounts) using Azure Resource Manager, where the QnA Maker endpoint is set to the App Service endpoint created above (https://mywebsite.myase.p.azurewebsites.net).
cognitive-services Query Knowledge Base With Metadata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/query-knowledge-base-with-metadata.md
+
+ Title: Contextually filter by using metadata
+
+description: QnA Maker filters QnA pairs by metadata.
+++++ Last updated : 11/09/2020+++
+# Filter Responses with Metadata
+
+QnA Maker lets you add metadata, in the form of key and value pairs, to your pairs of questions and answers. You can then use this information to filter results to user queries, and to store additional information that can be used in follow-up conversations.
+
+<a name="qna-entity"></a>
+
+## Store questions and answers with a QnA entity
+
+It's important to understand how QnA Maker stores the question and answer data. The following illustration shows a QnA entity:
+
+![Illustration of a QnA entity](../media/qnamaker-how-to-metadata-usage/qna-entity.png)
+
+Each QnA entity has a unique and persistent ID. You can use the ID to make updates to a particular QnA entity.
+
+## Use metadata to filter answers by custom metadata tags
+
+Adding metadata allows you to filter the answers by these metadata tags. Add the metadata column from the **View Options** menu. Add metadata to your knowledge base by selecting the metadata **+** icon to add a metadata pair. This pair consists of one key and one value.
+
+![Screenshot of adding metadata](../media/qnamaker-how-to-metadata-usage/add-metadata.png)
+
+<a name="filter-results-with-strictfilters-for-metadata-tags"></a>
+
+## Filter results with strictFilters for metadata tags
+
+Consider the user question "When does this hotel close?", where the intent is implied for the restaurant "Paradise."
+
+Because results are required only for the restaurant "Paradise", you can set a filter in the GenerateAnswer call on the metadata "Restaurant Name". The following example shows this:
+
+```json
+{
+ "question": "When does this hotel close?",
+ "top": 1,
+ "strictFilters": [ { "name": "restaurant", "value": "paradise"}]
+}
+```
+
+### Logical AND by default
+
+To combine several metadata filters in the query, add the additional metadata filters to the array of the `strictFilters` property. By default, the values are logically combined (AND). A logical combination requires all filters to match the QnA pair in order for the pair to be returned in the answer.
+
+This is equivalent to using the `strictFiltersCompoundOperationType` property with the value of `AND`.
+
+### Logical OR using strictFiltersCompoundOperationType property
+
+When combining several metadata filters, if you are only concerned with one or some of the filters matching, use the `strictFiltersCompoundOperationType` property with the value of `OR`.
+
+This allows your knowledge base to return answers when any filter matches but won't return answers that have no metadata.
+
+```json
+{
+ "question": "When do facilities in this hotel close?",
+ "top": 1,
+ "strictFilters": [
+ { "name": "type","value": "restaurant"},
+ { "name": "type", "value": "bar"},
+ { "name": "type", "value": "poolbar"}
+ ],
+ "strictFiltersCompoundOperationType": "OR"
+}
+```
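The AND/OR semantics above can be illustrated locally. This is only a sketch of how the filters combine, using hypothetical pair data; the real filtering happens server-side in QnA Maker.

```python
def matches_filters(pair_metadata, strict_filters, operation="AND"):
    """Return True if a QnA pair's metadata satisfies the strict filters."""
    if not pair_metadata:
        # Pairs with no metadata are never returned when filters are set.
        return False
    tags = {(m["name"].lower(), m["value"].lower()) for m in pair_metadata}
    hits = [(f["name"].lower(), f["value"].lower()) in tags
            for f in strict_filters]
    return all(hits) if operation == "AND" else any(hits)

# Hypothetical QnA pair metadata and filters.
pair = [{"name": "type", "value": "restaurant"}]
filters = [
    {"name": "type", "value": "restaurant"},
    {"name": "type", "value": "bar"},
]
print(matches_filters(pair, filters, "AND"))  # False: every filter must match
print(matches_filters(pair, filters, "OR"))   # True: one matching filter suffices
```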
+
+### Metadata examples in quickstarts
+
+Learn more about metadata in the QnA Maker portal quickstart for metadata:
+* [Authoring - add metadata to QnA pair](../quickstarts/add-question-metadata-portal.md#add-metadata-to-filter-the-answers)
+* [Query prediction - filter answers by metadata](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md)
+
+<a name="keep-context"></a>
+
+## Use question and answer results to keep conversation context
+
+The response to GenerateAnswer contains the metadata of the matched question and answer pair. You can use this information in your client application to store the context of the previous conversation for use in later conversations.
+
+```json
+{
+ "answers": [
+ {
+ "questions": [
+ "What is the closing time?"
+ ],
+ "answer": "10.30 PM",
+ "score": 100,
+ "id": 1,
+ "source": "Editorial",
+ "metadata": [
+ {
+ "name": "restaurant",
+ "value": "paradise"
+ },
+ {
+ "name": "location",
+ "value": "secunderabad"
+ }
+ ]
+ }
+ ]
+}
+```
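A minimal sketch of pulling that metadata out of the response shown above, so the key/value pairs can seed `strictFilters` on the next turn; the response text here is the sample JSON, embedded for illustration.

```python
import json

# The sample GenerateAnswer response from above, embedded as a string.
response_json = """
{ "answers": [ { "questions": ["What is the closing time?"],
                 "answer": "10.30 PM", "score": 100, "id": 1,
                 "source": "Editorial",
                 "metadata": [ {"name": "restaurant", "value": "paradise"},
                               {"name": "location", "value": "secunderabad"} ] } ] }
"""

def context_from_response(raw):
    """Return the top answer's metadata as a name->value dict."""
    answers = json.loads(raw).get("answers", [])
    if not answers:
        return {}
    return {m["name"]: m["value"] for m in answers[0].get("metadata", [])}

context = context_from_response(response_json)
print(context)  # {'restaurant': 'paradise', 'location': 'secunderabad'}
```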
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Analyze your knowledge base](../How-to/get-analytics-knowledge-base.md)
cognitive-services Create New Kb Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/create-new-kb-csharp.md
- Title: "Quickstart: Create knowledge base - REST, C# - QnA Maker"
-description: This C# REST-based quickstart walks you through creating a sample QnA Maker knowledge base, programmatically, that will appear in your Azure Dashboard of your Cognitive Services API account.
-- Previously updated : 12/16/2019---
-#Customer intent: As an API or REST developer new to the QnA Maker service, I want to programmatically create a knowledge base using C#.
--
-# Quickstart: Create a knowledge base in QnA Maker using C# with REST
-
-This quickstart walks you through programmatically creating and publishing a sample QnA Maker knowledge base. QnA Maker automatically extracts questions and answers from semi-structured content, like FAQs, from [data sources](../index.yml). The model for the knowledge base is defined in the JSON sent in the body of the API request.
-
-This quickstart calls QnA Maker APIs:
-* [Create KB](/rest/api/cognitiveservices/qnamaker/knowledgebase/create)
-* [Get Operation Details](/rest/api/cognitiveservices/qnamaker/operations/getdetails)
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker/knowledgebase) | [C# Sample](https://github.com/Azure-Samples/cognitive-services-qnamaker-csharp/blob/master/documentation-samples/quickstarts/create-knowledge-base/QnaQuickstartCreateKnowledgebase/Program.cs)
--
-## Prerequisites
-
-* The current version of [.NET Core](https://dotnet.microsoft.com/download/dotnet-core).
-* You must have a [QnA Maker resource](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and endpoint (which includes the resource name), select **Quickstart** for your resource in the Azure portal.
-
-### Create a new C# application
-
-Create a new .NET Core application in your preferred editor or IDE.
-
-In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `qna-maker-quickstart`. This command creates a simple "Hello World" C# project with a single source file: *Program.cs*.
-
-```dotnetcli
-dotnet new console -n qna-maker-quickstart
-```
-
-Change your directory to the newly created app folder. You can build the application with:
-
-```dotnetcli
-dotnet build
-```
-
-The build output should contain no warnings or errors.
-
-```console
-...
-Build succeeded.
- 0 Warning(s)
- 0 Error(s)
-...
-```
-
-## Add the required dependencies
-
-At the top of Program.cs, replace the single using statement with the following lines to add necessary dependencies to the project:
--
-## Add the required constants
-
-At the top of the Program class, add the required constants to access QnA Maker.
-
-Set the following values in environment variables:
-
-* `QNA_MAKER_SUBSCRIPTION_KEY` - The **key** is a 32 character string and is available in the Azure portal, on the QnA Maker resource, on the Quickstart page. This is not the same as the prediction endpoint key.
-* `QNA_MAKER_ENDPOINT` - The **endpoint** is the URL for authoring, in the format of `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`. This is not the same URL used to query the prediction endpoint.
--
-## Add the KB definition
-
-After the constants, add the following KB definition:
--
-## Add supporting functions and structures
-Add the following code block inside the Program class:
--
-## Add a POST request to create KB
-
-The following code makes an HTTPS request to the QnA Maker API to create a KB and receives the response:
---
-This API call returns a JSON response that includes the operation ID. Use the operation ID to determine if the KB is successfully created.
-
-```JSON
-{
- "operationState": "NotStarted",
- "createdTimestamp": "2018-09-26T05:19:01Z",
- "lastActionTimestamp": "2018-09-26T05:19:01Z",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "8dfb6a82-ae58-4bcb-95b7-d1239ae25681"
-}
-```
-
-## Add GET request to determine creation status
-
-Check the status of the operation.
---
-This API call returns a JSON response that includes the operation status:
-
-```JSON
-{
- "operationState": "NotStarted",
- "createdTimestamp": "2018-09-26T05:22:53Z",
- "lastActionTimestamp": "2018-09-26T05:22:53Z",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "177e12ff-5d04-4b73-b594-8575f9787963"
-}
-```
-
-Repeat the call until success or failure:
-
-```JSON
-{
- "operationState": "Succeeded",
- "createdTimestamp": "2018-09-26T05:22:53Z",
- "lastActionTimestamp": "2018-09-26T05:23:08Z",
- "resourceLocation": "/knowledgebases/XXX7892b-10cf-47e2-a3ae-e40683adb714",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "177e12ff-5d04-4b73-b594-8575f9787963"
-}
-```
-
-## Add CreateKB method
-
-The following method creates the KB and repeats checks on the status. The _create_ **Operation ID** is returned in the POST response header field **Location**, then used as part of the route in the GET request. Because the KB creation may take some time, you need to repeat calls to check the status until the status is either successful or fails. When the operation succeeds, the KB ID is returned in **resourceLocation**.
--
-## Add the CreateKB method to Main
-
-Change the Main method to call the CreateKB method:
--
-## Build and run the program
-
-Build and run the program. It will automatically send the request to the QnA Maker API to create the KB, then it will poll for the results every 30 seconds. Each response is printed to the console window.
-
-Once your knowledge base is created, you can view it in your QnA Maker Portal, [My knowledge bases](https://www.qnamaker.ai/Home/MyServices) page.
---
-## Next steps
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Create New Kb Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/create-new-kb-go.md
- Title: "Quickstart: Create knowledge base - REST, Go - QnA Maker"
-description: This Go REST-based quickstart walks you through creating a sample QnA Maker knowledge base, programmatically, that will appear in your Azure Dashboard of your Cognitive Services API account.
-- Previously updated : 12/16/2019-----
-# Quickstart: Create a knowledge base in QnA Maker using Go
-
-This quickstart walks you through programmatically creating a sample QnA Maker knowledge base. QnA Maker automatically extracts questions and answers from semi-structured content, like FAQs, from [data sources](../index.yml). The model for the knowledge base is defined in the JSON sent in the body of the API request.
-
-This quickstart calls QnA Maker APIs:
-* [Create KB](/rest/api/cognitiveservices/qnamaker/knowledgebase/create)
-* [Get Operation Details](/rest/api/cognitiveservices/qnamaker/operations/getdetails)
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker/knowledgebase) | [GO Sample](https://github.com/Azure-Samples/cognitive-services-qnamaker-go/blob/master/documentation-samples/quickstarts/create-knowledge-base/create-new-knowledge-base.go)
--
-## Prerequisites
-
-* [Go 1.10.1](https://golang.org/dl/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and endpoint (which includes the resource name), select **Quickstart** for your resource in the Azure portal.
-
-## Create a knowledge base Go file
-
-Create a file named `create-new-knowledge-base.go`.
-
-## Add the required dependencies
-
-At the top of `create-new-knowledge-base.go`, add the following lines to add necessary dependencies to the project:
--
-## Add the KB model definition
-After the constants, add the following KB model definition. The model is converted into a string after the definition.
--
-## Add supporting structures and functions
-
-Next, add the following supporting functions.
-
-1. Add the structure for an HTTP response:
-
- :::code language="go" source="~/cognitive-services-quickstart-code/go/QnAMaker/rest/create-kb.go" id="response":::
-
-1. Add the following method to handle a POST to the QnA Maker APIs. For this quickstart, the POST is used to send the KB definition to QnA Maker.
-
- :::code language="go" source="~/cognitive-services-quickstart-code/go/QnAMaker/rest/create-kb.go" id="post":::
-
-1. Add the following method to handle a GET to the QnA Maker APIs. For this quickstart, the GET is used to check the status of the creation operation.
-
- :::code language="go" source="~/cognitive-services-quickstart-code/go/QnAMaker/rest/create-kb.go" id="get":::
-
-## Add function to create KB
-
-Add the following functions to make an HTTP POST request to create the knowledge base. The _create_ **Operation ID** is returned in the POST response header field **Location**, then used as part of the route in the GET request. The `Ocp-Apim-Subscription-Key` is the QnA Maker service key, used for authentication.
--
-This API call returns a JSON response that includes the operation ID. Use the operation ID to determine if the KB is successfully created.
-
-```JSON
-{
- "operationState": "NotStarted",
- "createdTimestamp": "2018-09-26T05:19:01Z",
- "lastActionTimestamp": "2018-09-26T05:19:01Z",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "8dfb6a82-ae58-4bcb-95b7-d1239ae25681"
-}
-```
-
-## Add function to get status
-
-Add the following function to make an HTTP GET request to check the operation status. The `Ocp-Apim-Subscription-Key` is the QnA Maker service key, used for authentication.
--
-Repeat the call until success or failure:
-
-```JSON
-{
- "operationState": "Succeeded",
- "createdTimestamp": "2018-09-26T05:22:53Z",
- "lastActionTimestamp": "2018-09-26T05:23:08Z",
- "resourceLocation": "/knowledgebases/XXX7892b-10cf-47e2-a3ae-e40683adb714",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "177e12ff-5d04-4b73-b594-8575f9787963"
-}
-```
-## Add main function
-
-The following function is the main function and creates the KB and repeats checks on the status. Because the KB creation may take some time, you need to repeat calls to check the status until the status is either successful or fails.
---
-## Compile the program
-Enter the following command to compile the file. The command prompt does not return any information for a successful build.
-
-```bash
-go build create-new-knowledge-base.go
-```
-
-## Run the program
-
-Enter the following command at a command-line to run the program. It will send the request to the QnA Maker API to create the KB, then it will poll for the results every 30 seconds. Each response is printed to the console window.
-
-```bash
-go run create-new-knowledge-base.go
-```
-
-Once your knowledge base is created, you can view it in your QnA Maker Portal, [My knowledge bases](https://www.qnamaker.ai/Home/MyServices) page.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Create New Kb Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/create-new-kb-java.md
- Title: "Quickstart: Create knowledge base - REST, Java - QnA Maker"
-description: This Java REST-based quickstart walks you through creating a sample QnA Maker knowledge base, programmatically, that will appear in your Azure Dashboard of your Cognitive Services API account.
-- Previously updated : 12/16/2019-----
-# Quickstart: Create a knowledge base in QnA Maker using Java
-
-This quickstart walks you through programmatically creating a sample QnA Maker knowledge base. QnA Maker automatically extracts questions and answers from semi-structured content, like FAQs, from [data sources](../index.yml). The model for the knowledge base is defined in the JSON sent in the body of the API request.
-
-This quickstart calls QnA Maker APIs:
-* [Create KB](/rest/api/cognitiveservices/qnamaker/knowledgebase/create)
-* [Get Operation Details](/rest/api/cognitiveservices/qnamaker/operations/getdetails)
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker/knowledgebase) | [Java Sample](https://github.com/Azure-Samples/cognitive-services-qnamaker-java/blob/master/documentation-samples/quickstarts/create-knowledge-base/CreateKB.java)
--
-## Prerequisites
-
-* A [Java Development Kit (JDK)](https://www.oracle.com/java/technologies/downloads/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and endpoint (which includes the resource name), select **Quickstart** for your resource in the Azure portal.
-
-The [sample code](https://github.com/Azure-Samples/cognitive-services-qnamaker-java/blob/master/documentation-samples/quickstarts/create-knowledge-base/CreateKB.java) is available on the GitHub repo for QnA Maker with Java.
-
-## Create a knowledge base file
-
-Create a file named `CreateKB.java`
-
-## Add the required dependencies
-
-At the top of `CreateKB.java`, add the following lines to add necessary dependencies to the project:
--
-## Add the required constants
-After the preceding required dependencies, add the required constants to the `CreateKB` class to access QnA Maker.
-
-You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and resource name, select **Quickstart** in the Azure portal for your QnA Maker resource.
-
-Set the following values:
-
-* `<your-qna-maker-subscription-key>` - The **key** is a 32 character string and is available in the Azure portal, on the QnA Maker resource, on the Quickstart page. This key is not the same as the prediction endpoint key.
-* `<your-resource-name>` - Your **resource name** is used to construct the authoring endpoint URL for authoring, in the format of `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`. This resource name is not the same as the one used to query the prediction endpoint.
-
-You do not need to add the final curly bracket to end the class; it is in the final code snippet at the end of this quickstart.
---
-## Add the KB model definition classes
-After the constants, add the following classes and functions inside the `CreateKB` class to serialize the model definition object into JSON.
--
-## Add supporting functions
-
-Next, add the following supporting functions inside the `CreateKB` class.
-
-1. Add the following function to print out JSON in a readable format:
-
- :::code language="java" source="~/cognitive-services-quickstart-code/java/QnAMaker/rest/CreateKB.java" id="pretty":::
-
-2. Add the following class to manage the HTTP response:
-
- :::code language="java" source="~/cognitive-services-quickstart-code/java/QnAMaker/rest/CreateKB.java" id="response":::
-
-3. Add the following method to make a POST request to the QnA Maker APIs. The `Ocp-Apim-Subscription-Key` is the QnA Maker service key, used for authentication.
-
- :::code language="java" source="~/cognitive-services-quickstart-code/java/QnAMaker/rest/CreateKB.java" id="post":::
-
-4. Add the following method to make a GET request to the QnA Maker APIs.
-
- :::code language="java" source="~/cognitive-services-quickstart-code/java/QnAMaker/rest/CreateKB.java" id="get":::
-
-## Add a method to create the KB
-Add the following method to create the KB by calling into the Post method.
--
-This API call returns a JSON response that includes the operation ID. Use the operation ID to determine if the KB is successfully created.
-
-```JSON
-{
- "operationState": "NotStarted",
- "createdTimestamp": "2018-09-26T05:19:01Z",
- "lastActionTimestamp": "2018-09-26T05:19:01Z",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "8dfb6a82-ae58-4bcb-95b7-d1239ae25681"
-}
-```
-
-## Add a method to get status
-Add the following method to check the creation status.
--
-Repeat the call until success or failure:
-
-```JSON
-{
- "operationState": "Succeeded",
- "createdTimestamp": "2018-09-26T05:22:53Z",
- "lastActionTimestamp": "2018-09-26T05:23:08Z",
- "resourceLocation": "/knowledgebases/XXX7892b-10cf-47e2-a3ae-e40683adb714",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "177e12ff-5d04-4b73-b594-8575f9787963"
-}
-```
-
-## Add a main method
-The main method creates the KB, then polls for the status. The operation ID is returned in the POST response header field **Location**, then used as part of the route in the GET request. The `while` loop retries the status if it is not completed.
--
-## Compile and run the program
-
-1. Make sure the gson library is in the `./libs` directory. At the command line, compile the file `CreateKB.java`:
-
- ```bash
- javac -cp ".;libs/*" CreateKB.java
- ```
-
-2. Enter the following command at a command line to run the program. It will send the request to the QnA Maker API to create the KB, then it will poll for the results every 30 seconds. Each response is printed to the console window.
-
- ```bash
   java -cp ".;libs/*" CreateKB
- ```
-
-Once your knowledge base is created, you can view it in your QnA Maker Portal, [My knowledge bases](https://www.qnamaker.ai/Home/MyServices) page.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Create New Kb Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/create-new-kb-python.md
- Title: "Quickstart: Create knowledge base - REST, Python - QnA Maker"
-description: This Python REST-based quickstart walks you through creating a sample QnA Maker knowledge base, programmatically, that will appear in your Azure Dashboard of your Cognitive Services API account.
-- Previously updated : 12/16/2019-----
-# Quickstart: Create a knowledge base in QnA Maker using Python
-
-This quickstart walks you through programmatically creating and publishing a sample QnA Maker knowledge base. QnA Maker automatically extracts questions and answers from semi-structured content, like FAQs, from [data sources](../index.yml). The model for the knowledge base is defined in the JSON sent in the body of the API request.
-
-This quickstart calls QnA Maker APIs:
-* [Create KB](/rest/api/cognitiveservices/qnamaker/knowledgebase/create)
-* [Get Operation Details](/rest/api/cognitiveservices/qnamaker/operations/getdetails)
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker/knowledgebase) | [Python Sample](https://github.com/Azure-Samples/cognitive-services-qnamaker-python/blob/master/documentation-samples/quickstarts/create-knowledge-base/create-new-knowledge-base-3x.py)
--
-## Prerequisites
-
-* [Python 3.7](https://www.python.org/downloads/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and endpoint (which includes the resource name), select **Quickstart** for your resource in the Azure portal.
-
-## Create a knowledge base Python file
-
-Create a file named `create-new-knowledge-base-3x.py`.
-
-## Add the required dependencies
-
-At the top of `create-new-knowledge-base-3x.py`, add the following lines to add necessary dependencies to the project:
--
-## Add the required constants
-After the preceding required dependencies, add the required constants to access QnA Maker. Replace the value of the `<your-qna-maker-subscription-key>` and `<your-resource-name>` with your own QnA Maker key and resource name.
-
-At the top of the Program class, add the required constants to access QnA Maker.
-
-Set the following values:
-
-* `<your-qna-maker-subscription-key>` - The **key** is a 32 character string and is available in the Azure portal, on the QnA Maker resource, on the Quickstart page. This is not the same as the prediction endpoint key.
-* `<your-resource-name>` - Your **resource name** is used to construct the authoring endpoint URL for authoring, in the format of `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`. This is not the same URL used to query the prediction endpoint.
--
-## Add the KB model definition
-
-After the constants, add the following KB model definition. The model is converted into a string after the definition.
--
-## Add supporting function
-
-Add the following function to print out JSON in a readable format:
--
-## Add function to create KB
-
-Add the following function to make an HTTP POST request to create the knowledge base.
-This API call returns a JSON response that includes the operation ID in the header field **Location**. Use the operation ID to determine if the KB is successfully created. The `Ocp-Apim-Subscription-Key` is the QnA Maker service key, used for authentication.
--
-This API call returns a JSON response that includes the operation ID. Use the operation ID to determine if the KB is successfully created.
-
-```JSON
-{
- "operationState": "NotStarted",
- "createdTimestamp": "2018-09-26T05:19:01Z",
- "lastActionTimestamp": "2018-09-26T05:19:01Z",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "8dfb6a82-ae58-4bcb-95b7-d1239ae25681"
-}
-```
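Pulling the operation ID out of a response shaped like the example above can be sketched as:

```python
import json

# A response body shaped like the example above.
response_body = '''{
  "operationState": "NotStarted",
  "createdTimestamp": "2018-09-26T05:19:01Z",
  "lastActionTimestamp": "2018-09-26T05:19:01Z",
  "userId": "XXX9549466094e1cb4fd063b646e1ad6",
  "operationId": "8dfb6a82-ae58-4bcb-95b7-d1239ae25681"
}'''

operation = json.loads(response_body)
operation_id = operation['operationId']
# The same ID also arrives in the Location response header,
# so either source can drive the status polling.
```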
-
-## Add function to check creation status
-
-The following function checks the creation status by sending the operation ID at the end of the URL route. The call to `check_status` is inside the main _while_ loop.
--
-This API call returns a JSON response that includes the operation status:
-
-```JSON
-{
- "operationState": "NotStarted",
- "createdTimestamp": "2018-09-26T05:22:53Z",
- "lastActionTimestamp": "2018-09-26T05:22:53Z",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "177e12ff-5d04-4b73-b594-8575f9787963"
-}
-```
-
-Repeat the call until success or failure:
-
-```JSON
-{
- "operationState": "Succeeded",
- "createdTimestamp": "2018-09-26T05:22:53Z",
- "lastActionTimestamp": "2018-09-26T05:23:08Z",
- "resourceLocation": "/knowledgebases/XXX7892b-10cf-47e2-a3ae-e40683adb714",
- "userId": "XXX9549466094e1cb4fd063b646e1ad6",
- "operationId": "177e12ff-5d04-4b73-b594-8575f9787963"
-}
-```
-
-## Add main code block
-The following loop polls for the creation operation status periodically until the operation is complete.
--
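The polling logic can be sketched as follows; `check_status` here is a stand-in (an assumption) for the real HTTP GET against the operations route, simulating the progression from NotStarted to Succeeded:

```python
import time

# Stand-in for the real check_status HTTP call; it simulates the
# operation moving from NotStarted to Succeeded over three polls.
_simulated = iter([
    {'operationState': 'NotStarted'},
    {'operationState': 'Running'},
    {'operationState': 'Succeeded',
     'resourceLocation': '/knowledgebases/XXX7892b-10cf-47e2-a3ae-e40683adb714'},
])

def check_status(operation_id):
    return next(_simulated)

def wait_for_operation(operation_id, delay_seconds=0):
    # Poll until the operation reaches a terminal state.
    while True:
        status = check_status(operation_id)
        if status['operationState'] in ('Succeeded', 'Failed'):
            return status
        time.sleep(delay_seconds)  # the quickstart waits 30 seconds between polls

result = wait_for_operation('8dfb6a82-ae58-4bcb-95b7-d1239ae25681')
```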
-## Build and run the program
-
-Enter the following command at the command line to run the program. It sends the request to the QnA Maker API to create the KB, then polls for the results every 30 seconds. Each response is printed to the console window.
-
-```bash
-python create-new-knowledge-base-3x.py
-```
-
-Once your knowledge base is created, you can view it in the QnA Maker portal, on the [My knowledge bases](https://www.qnamaker.ai/Home/MyServices) page. Select your knowledge base name, for example QnA Maker FAQ, to view it.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Get Answer From Knowledge Base Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/get-answer-from-knowledge-base-csharp.md
- Title: "Quickstart: Get answer from knowledge base - REST, C# - QnA Maker"
-description: This C# REST-based quickstart walks you through getting an answer from a knowledge base, programmatically.
-- Previously updated : 02/08/2020---
-#Customer intent: As an API or REST developer new to the QnA Maker service, I want to programmatically get an answer from a knowledge base using C#.
--
-# Quickstart: Get answers to a question from a knowledge base with C#
-
-This quickstart walks you through programmatically getting an answer from a published QnA Maker knowledge base. The knowledge base contains questions and answers from [data sources](../index.yml) such as FAQs. The [question](../how-to/metadata-generateanswer-usage.md#generateanswer-request-configuration) is sent to the QnA Maker service. The [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response-properties) includes the top-predicted answer.
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker4.0/Runtime) | [Sample](https://github.com/Azure-Samples/cognitive-services-qnamaker-csharp/blob/master/documentation-samples/quickstarts/get-answer/QnAMakerAnswerQuestion/Program.cs)
-
-## Prerequisites
-
-* Latest [**Visual Studio Community edition**](https://www.visualstudio.com/downloads/).
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key, select **Keys** under **Resource Management** in your Azure dashboard for your QnA Maker resource.
-* **Publish** page settings. If you do not have a published knowledge base, create an empty knowledge base, then import a knowledge base on the **Settings** page, then publish. You can download and use [this basic knowledge base](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/knowledge-bases/basic-kb.tsv).
-
- The **Publish** page settings include the POST route value, the Host value, and the EndpointKey value.
-
- ![Publish settings](../media/qnamaker-quickstart-get-answer/publish-settings.png)
-
-## Create a knowledge base project
-
-1. Open Visual Studio 2019 Community edition.
-1. Create a new Console App (.NET Core) project and name the project QnaMakerQuickstart. Accept the defaults for the remaining settings.
-
-## Add the required dependencies
-
-At the top of the Program.cs file, replace the single using statement with the following lines to add necessary dependencies to the project:
--
-## Add the required constants
-
-At the top of the `Program` class, inside the `Main`, add the required constants to access QnA Maker. These values are on the **Publish** page after you publish the knowledge base.
--
-## Add a POST request to send question and get answer
-
-The following code makes an HTTPS request to the QnA Maker API to send the question to the knowledge base and receives the response:
--
-The `Authorization` header's value includes the string `EndpointKey`.
-
-Learn more about the [request](../how-to/metadata-generateanswer-usage.md#generateanswer-request) and [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response).
-
-## Build and run the program
-
-Build and run the program from Visual Studio. It automatically sends the request to the QnA Maker API, then prints the response to the console window.
---
-## Next steps
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Get Answer From Knowledge Base Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/get-answer-from-knowledge-base-go.md
- Title: "Quickstart: Get answer from knowledge base - REST, Go - QnA Maker"
-description: This Go REST-based quickstart walks you through getting an answer from a knowledge base, programmatically.
-- Previously updated : 02/08/2020---
-#Customer intent: As an API or REST developer new to the QnA Maker service, I want to programmatically get an answer from a knowledge base using Go.
--
-# Quickstart: Get answers to a question from a knowledge base with Go
-
-This quickstart walks you through programmatically getting an answer from a published QnA Maker knowledge base. The knowledge base contains questions and answers from [data sources](../index.yml) such as FAQs. The [question](../how-to/metadata-generateanswer-usage.md#generateanswer-request-configuration) is sent to the QnA Maker service. The [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response-properties) includes the top-predicted answer.
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker4.0/Runtime) | [Sample](https://github.com/Azure-Samples/cognitive-services-qnamaker-go/blob/master/documentation-samples/quickstarts/get-answer/get-answer.go)
-
-## Prerequisites
-
-* [Go 1.10.1](https://golang.org/dl/)
-* [Visual Studio Code](https://code.visualstudio.com/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key, select **Keys** under **Resource Management** in your Azure dashboard for your QnA Maker resource.
-* **Publish** page settings. If you do not have a published knowledge base, create an empty knowledge base, then import a knowledge base on the **Settings** page, then publish. You can download and use [this basic knowledge base](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/knowledge-bases/basic-kb.tsv).
-
- The **Publish** page settings include the POST route value, the Host value, and the EndpointKey value.
-
- ![Publish settings](../media/qnamaker-quickstart-get-answer/publish-settings.png)
-
-## Create a Go file
-
-Open VSCode and create a new file named `get-answer.go`, then add the following code:
-
-```Go
-package main
-
-func main() {
-
-}
-```
-
-## Add the required dependencies
-
-Above the `main` function, at the top of the `get-answer.go` file, add necessary dependencies to the project:
--
-## Add a POST request to send question and get answer
-
-The following code makes an HTTPS request to the QnA Maker API to send the question to the knowledge base and receives the response:
--
-The `Authorization` header's value includes the string `EndpointKey`.
-
-Learn more about the [request](../how-to/metadata-generateanswer-usage.md#generateanswer-request) and [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response).
-
-## Build and run the program
-
-Build and run the program from the command line. It automatically sends the request to the QnA Maker API, then prints the response to the console window.
-
-1. Build the file:
-
- ```bash
- go build get-answer.go
- ```
-
-1. Run the file:
-
- ```bash
- ./get-answer
- ```
----
-## Next steps
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Get Answer From Knowledge Base Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/get-answer-from-knowledge-base-java.md
- Title: "Quickstart: Get answer from knowledge base - REST, Java - QnA Maker"
-description: This Java REST-based quickstart walks you through getting an answer from a knowledge base, programmatically.
-- Previously updated : 02/08/2020---
-#Customer intent: As an API or REST developer new to the QnA Maker service, I want to programmatically get an answer from a knowledge base using Java.
--
-# Quickstart: Get answers to a question from a knowledge base with Java
-
-This quickstart walks you through programmatically getting an answer from a published QnA Maker knowledge base. The knowledge base contains questions and answers from [data sources](../index.yml) such as FAQs. The [question](../how-to/metadata-generateanswer-usage.md#generateanswer-request-configuration) is sent to the QnA Maker service. The [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response-properties) includes the top-predicted answer.
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker4.0/Runtime) | [Sample](https://github.com/Azure-Samples/cognitive-services-qnamaker-java/blob/master/documentation-samples/quickstarts/get-answer/GetAnswer.java)
-
-## Prerequisites
-
-* [JDK SE](/azure/developer/java/fundamentals/java-jdk-long-term-support) (Java Development Kit, Standard Edition)
-* This sample uses the Apache [HTTP client](https://hc.apache.org/httpcomponents-client-ga/) from HTTP Components. You need to add the following Apache HTTP client libraries to your project:
- * httpclient-4.5.3.jar
- * httpcore-4.4.6.jar
- * commons-logging-1.2.jar
-* [Visual Studio Code](https://code.visualstudio.com/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key, select **Keys** under **Resource Management** in your Azure dashboard for your QnA Maker resource.
-* **Publish** page settings. If you do not have a published knowledge base, create an empty knowledge base, then import a knowledge base on the **Settings** page, then publish. You can download and use [this basic knowledge base](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/knowledge-bases/basic-kb.tsv).
-
- The **Publish** page settings include the POST route value, the Host value, and the EndpointKey value.
-
- ![Publish settings](../media/qnamaker-quickstart-get-answer/publish-settings.png)
-
-## Create a Java file
-
-Open VSCode and create a new file named `GetAnswer.java`, then add the following class:
-
-```Java
-public class GetAnswer {
-
- public static void main(String[] args)
- {
-
- }
-}
-```
-
-## Add the required dependencies
-
-This quickstart uses Apache classes for HTTP requests. Above the `GetAnswer` class, at the top of the `GetAnswer.java` file, add necessary dependencies to the project:
--
-## Add the required constants
-
-At the top of the `GetAnswer` class, add the required constants to access QnA Maker. These values are on the **Publish** page after you publish the knowledge base.
--
-## Add a POST request to send question
-
-The following code makes an HTTPS request to the QnA Maker API to send the question to the knowledge base and receives the response:
--
-The `Authorization` header's value includes the string `EndpointKey`.
-
-Learn more about the [request](../how-to/metadata-generateanswer-usage.md#generateanswer-request) and [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response).
-
-## Build and run the program
-
-Build and run the program from the command line. It automatically sends the request to the QnA Maker API, then prints the response to the console window.
-
-1. Build the file:
-
- ```bash
- javac -cp "lib/*" GetAnswer.java
- ```
-
-1. Run the file:
-
- ```bash
- java -cp ".;lib/*" GetAnswer
- ```
----
-## Next steps
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Get Answer From Knowledge Base Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/get-answer-from-knowledge-base-nodejs.md
- Title: "Quickstart: Get answer from knowledge base - REST, Node.js - QnA Maker"
-description: This Node.js REST-based quickstart walks you through getting an answer from a knowledge base, programmatically.
-- Previously updated : 02/08/2020---
-#Customer intent: As an API or REST developer new to the QnA Maker service, I want to programmatically get an answer from a knowledge base using Node.js.
--
-# Quickstart: Get answers to a question from a knowledge base with Node.js
-
-This quickstart walks you through programmatically getting an answer from a published QnA Maker knowledge base. The knowledge base contains questions and answers from [data sources](../index.yml) such as FAQs. The [question](../how-to/metadata-generateanswer-usage.md#generateanswer-request-configuration) is sent to the QnA Maker service. The [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response-properties) includes the top-predicted answer.
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker4.0/Runtime) | [Sample](https://github.com/Azure-Samples/cognitive-services-qnamaker-nodejs/blob/master/documentation-samples/quickstarts/get-answer/get-answer.js)
-
-## Prerequisites
-
-* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key, select **Keys** under **Resource Management** in your Azure dashboard for your QnA Maker resource.
-* **Publish** page settings. If you do not have a published knowledge base, create an empty knowledge base, then import a knowledge base on the **Settings** page, then publish. You can download and use [this basic knowledge base](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/knowledge-bases/basic-kb.tsv).
-
- The **Publish** page settings include the POST route value, the Host value, and the EndpointKey value.
-
- ![Publish settings](../media/qnamaker-quickstart-get-answer/publish-settings.png)
-
-## Create a Node.js file
-
-Open VSCode and create a new file named `get-answer.js`.
-
-## Add the required dependencies
-
-At the top of the `get-answer.js` file, add necessary dependencies to the project:
--
-## Add the required constants
-
-Next, add the required constants to access QnA Maker. These values are on the **Publish** page after you publish the knowledge base.
--
-## Add a POST request to send question and get an answer
-
-The following code makes an HTTPS request to the QnA Maker API to send the question to the knowledge base and receives the response:
--
-## Run the program
-
-Run the program from the command line. It automatically sends the request to the QnA Maker API, then prints the response to the console window.
-
-Run the file:
-
-```bash
-node get-answer.js
-```
--
-Learn more about the [request](../how-to/metadata-generateanswer-usage.md#generateanswer-request) and [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response).
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Get Answer From Knowledge Base Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/get-answer-from-knowledge-base-python.md
- Title: "Quickstart: Get answer from knowledge base - REST, Python - QnA Maker"
-description: This Python REST-based quickstart walks you through getting an answer from a knowledge base, programmatically.
-- Previously updated : 02/08/2020---
-#Customer intent: As an API or REST developer new to the QnA Maker service, I want to programmatically get an answer from a knowledge base using Python.
--
-# Quickstart: Get answers to a question from a knowledge base with Python
-
-This quickstart walks you through programmatically getting an answer from a published QnA Maker knowledge base. The knowledge base contains questions and answers from [data sources](../index.yml) such as FAQs. The [question](../how-to/metadata-generateanswer-usage.md#generateanswer-request-configuration) is sent to the QnA Maker service. The [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response-properties) includes the top-predicted answer.
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker4.0/Runtime) | [Sample](https://github.com/Azure-Samples/cognitive-services-qnamaker-python/blob/master/documentation-samples/quickstarts/get-answer/get-answer-3x.py)
-
-## Prerequisites
-
-* [Python 3.6 or greater](https://www.python.org/downloads/)
-* [Visual Studio Code](https://code.visualstudio.com/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key, select **Keys** under **Resource Management** in your Azure dashboard for your QnA Maker resource.
-* **Publish** page settings. If you do not have a published knowledge base, create an empty knowledge base, then import a knowledge base on the **Settings** page, then publish. You can download and use [this basic knowledge base](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/knowledge-bases/basic-kb.tsv).
-
- The **Publish** page settings include the POST route value, the Host value, and the EndpointKey value.
-
- ![Publish settings](../media/qnamaker-quickstart-get-answer/publish-settings.png)
-
-## Create a Python file
-
-Open VSCode and create a new file named `get-answer-3x.py`.
-
-## Add the required dependencies
-
-At the top of the `get-answer-3x.py` file, add necessary dependencies to the project:
--
-The host and route are different from how they appear on the **Publish** page. This is because the Python library doesn't allow any routing in the host value. The routing that appears on the **Publish** page as part of the host has been moved to the route.
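A sketch of that split, using placeholder values for the published endpoint:

```python
from urllib.parse import urlparse

# The Publish page shows a full URL like this (values are placeholders):
published_endpoint = 'https://your-resource-name.azurewebsites.net/qnamaker/knowledgebases/<kb-id>/generateAnswer'

# http.client.HTTPSConnection takes only a host name, so split the
# routing out of the URL and keep it as the request path instead.
parsed = urlparse(published_endpoint)
host = parsed.netloc   # 'your-resource-name.azurewebsites.net'
route = parsed.path    # '/qnamaker/knowledgebases/<kb-id>/generateAnswer'
```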
-
-## Add the required constants
-
-Add the required constants to access QnA Maker. These values are on the **Publish** page after you publish the knowledge base.
--
-## Add a POST request to send question and get an answer
-
-The following code makes an HTTPS request to the QnA Maker API to send the question to the knowledge base and receives the response:
--
-The `Authorization` header's value includes the string `EndpointKey`.
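For example, the request headers might be built like this; the key value is a placeholder you copy from the **Publish** page:

```python
import json

endpoint_key = '<your-endpoint-key>'  # placeholder -- from the Publish page
question = json.dumps({'question': 'Is the QnA Maker Service free?'})

headers = {
    # The key must be prefixed with the literal string 'EndpointKey '.
    'Authorization': 'EndpointKey ' + endpoint_key,
    'Content-Type': 'application/json',
    'Content-Length': str(len(question)),
}
```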
-
-## Run the program
-
-Run the program from the command line. It automatically sends the request to the QnA Maker API, then prints the response to the console window.
-
-Run the file:
-
-```bash
-python get-answer-3x.py
-```
--
-Learn more about the [request](../how-to/metadata-generateanswer-usage.md#generateanswer-request) and [response](../how-to/metadata-generateanswer-usage.md#generateanswer-response).
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Get Answer From Knowledge Base Using Url Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/get-answer-from-knowledge-base-using-url-tool.md
Last updated 07/16/2020
Learn more about metadata: * [Authoring - add metadata to QnA pair](../How-To/edit-knowledge-base.md#add-metadata)
-* [Query prediction - filter answers by metadata](../How-To/metadata-generateanswer-usage.md#use-metadata-to-filter-answers-by-custom-metadata-tags)
+* [Query prediction - filter answers by metadata](../How-To/query-knowledge-base-with-metadata.md)
+
cognitive-services Publish Kb Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/publish-kb-csharp.md
- Title: "Quickstart: Publish knowledge base, REST, C# - QnA Maker"
-description: This C# REST-based quickstart publishes your knowledge base and creates an endpoint that can be called in your application or chat bot.
-- Previously updated : 02/08/2020---
-#Customer intent: As an API or REST developer new to the QnA Maker service, I want to programmatically publish a knowledge base using C#.
--
-# Quickstart: Publish a knowledge base in QnA Maker using C#
-
-This REST-based quickstart walks you through programmatically publishing your knowledge base (KB). Publishing pushes the latest version of the knowledge base to a dedicated Azure Cognitive Search index and creates an endpoint that can be called in your application or chat bot.
-
-This quickstart calls QnA Maker APIs:
-* [Publish](/rest/api/cognitiveservices/qnamaker/knowledgebase/publish) - this API doesn't require any information in the body of the request.
-
-## Prerequisites
-
-* Latest [**Visual Studio Community edition**](https://www.visualstudio.com/downloads/).
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and endpoint (which includes the resource name), select **Quickstart** for your resource in the Azure portal.
-* QnA Maker knowledge base (KB) ID found in the URL in the `kbid` query string parameter as shown below.
-
- ![QnA Maker knowledge base ID](../media/qnamaker-quickstart-kb/qna-maker-id.png)
-
- If you don't have a knowledge base yet, you can create a sample one to use for this quickstart: [Create a new knowledge base](create-new-kb-csharp.md).
-
-> [!NOTE]
-> The complete solution file(s) are available from the [**Azure-Samples/cognitive-services-qnamaker-csharp** GitHub repository](https://github.com/Azure-Samples/cognitive-services-qnamaker-csharp/tree/master/documentation-samples/quickstarts/publish-knowledge-base).
-
-## Create knowledge base project
-
-1. Open Visual Studio 2019 Community edition.
-1. Create a new **Console App (.NET Core)** project and name the project `QnaMakerQuickstart`. Accept the defaults for the remaining settings.
-
-## Add required dependencies
-
-At the top of Program.cs, replace the single using statement with the following lines to add necessary dependencies to the project:
--
-## Add required constants
-
-In the **Program** class, add the required constants to access QnA Maker.
--
-## Add the Main method to publish the knowledge base
-
-After the required constants, add the following code, which makes an HTTPS request to the QnA Maker API to publish a knowledge base and receives the response:
--
-The API call returns a 204 status for a successful publish without any content in the body of the response.
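The success check itself is language-agnostic; sketched here in Python (the helper name is hypothetical, not part of the sample): publish succeeds with HTTP 204 and an empty body, so the code can substitute a readable success message.

```python
def describe_publish_response(status_code, body):
    # Publish succeeds with HTTP 204 and an empty body, so substitute a
    # readable success message; any other response is returned unaltered.
    if status_code == 204 and not body:
        return '{"result": "Success"}'
    return body

print(describe_publish_response(204, ''))
```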
-
-## Build and run the program
-
-Build and run the program. It will automatically send the request to the QnA Maker API to publish the knowledge base, then the response is printed to the console window.
-
-Once your knowledge base is published, you can query it from the endpoint with a client application or chat bot.
--
-## Next steps
-
-After the knowledge base is published, you need the [endpoint URL to generate an answer](./get-answer-from-knowledge-base-csharp.md).
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Publish Kb Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/publish-kb-go.md
- Title: "Quickstart: Publish knowledge base, REST, Go - QnA Maker"
-description: This Go REST-based quickstart publishes your knowledge base and creates an endpoint that can be called in your application or chat bot.
-- Previously updated : 02/08/2020-----
-# Quickstart: Publish a knowledge base in QnA Maker using Go
-
-This REST-based quickstart walks you through programmatically publishing your knowledge base (KB). Publishing pushes the latest version of the knowledge base to a dedicated Azure Cognitive Search index and creates an endpoint that can be called in your application or chat bot.
-
-This quickstart calls QnA Maker APIs:
-* [Publish](/rest/api/cognitiveservices/qnamaker/knowledgebase/publish) - this API doesn't require any information in the body of the request.
-
-## Prerequisites
-
-* [Go 1.10.1](https://golang.org/dl/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and endpoint (which includes the resource name), select **Quickstart** for your resource in the Azure portal.
-
-* QnA Maker knowledge base (KB) ID found in the URL in the `kbid` query string parameter as shown below.
-
- ![QnA Maker knowledge base ID](../media/qnamaker-quickstart-kb/qna-maker-id.png)
-
- If you don't have a knowledge base yet, you can create a sample one to use for this quickstart: [Create a new knowledge base](create-new-kb-csharp.md).
-
-> [!NOTE]
-> The complete solution file(s) are available from the [**Azure-Samples/cognitive-services-qnamaker-go** GitHub repository](https://github.com/Azure-Samples/cognitive-services-qnamaker-go/tree/master/documentation-samples/quickstarts/publish-knowledge-base).
-
-## Create a Go file
-
-Open VSCode and create a new file named `publish-kb.go`.
-
-## Add the required dependencies
-
-At the top of `publish-kb.go`, add the following lines to add necessary dependencies to the project:
--
-## Create the main function
-
-After the required dependencies, add the following code:
-
-```Go
-package main
-
-func main() {
-
-}
-```
-
-## Add POST request to publish KB
-
-Add the following code, which makes an HTTPS request to the QnA Maker API to publish a knowledge base and receives the response:
--
-The API call returns a 204 status for a successful publish without any content in the body of the response. The code adds content for 204 responses.
-
-For any other response, that response is returned unaltered.
-
-## Build and run the program
-
-Enter the following command to compile the file. The command prompt does not return any information for a successful build.
-
-```bash
-go build publish-kb.go
-```
-
-Enter the following command at the command line to run the program. It sends the request to the QnA Maker API to publish the KB, then prints 204 for success or an error message.
-
-```bash
-./publish-kb
-```
--
-## Next steps
-
-After the knowledge base is published, you need the [endpoint URL to generate an answer](./get-answer-from-knowledge-base-go.md).
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Publish Kb Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/publish-kb-java.md
- Title: "Quickstart: Publish knowledge base, REST, Java - QnA Maker"
-description: This Java REST-based quickstart publishes your knowledge base and creates an endpoint that can be called in your application or chat bot.
-- Previously updated : 02/08/2020------
-# Quickstart: Publish a knowledge base in QnA Maker using Java
-
-This REST-based quickstart walks you through programmatically publishing your knowledge base (KB). Publishing pushes the latest version of the knowledge base to a dedicated Azure Cognitive Search index and creates an endpoint that can be called in your application or chat bot.
-
-This quickstart calls QnA Maker APIs:
-* [Publish](/rest/api/cognitiveservices/qnamaker/knowledgebase/publish) - this API doesn't require any information in the body of the request.
-
-## Prerequisites
-
-* [JDK SE](/azure/developer/java/fundamentals/java-jdk-long-term-support) (Java Development Kit, Standard Edition)
-* This sample uses the Apache [HTTP client](https://hc.apache.org/httpcomponents-client-ga/) from HTTP Components. You need to add the following Apache HTTP client libraries to your project:
- * httpclient-4.5.3.jar
- * httpcore-4.4.6.jar
- * commons-logging-1.2.jar
-* [Visual Studio Code](https://code.visualstudio.com/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and endpoint (which includes the resource name), select **Quickstart** for your resource in the Azure portal.
-* QnA Maker knowledge base (KB) ID found in the URL in the `kbid` query string parameter as shown below.
-
- ![QnA Maker knowledge base ID](../media/qnamaker-quickstart-kb/qna-maker-id.png)
-
- If you don't have a knowledge base yet, you can create a sample one to use for this quickstart: [Create a new knowledge base](create-new-kb-csharp.md).
-
-> [!NOTE]
-> The complete solution file(s) are available from the [**Azure-Samples/cognitive-services-qnamaker-java** GitHub repository](https://github.com/Azure-Samples/cognitive-services-qnamaker-java/tree/master/documentation-samples/quickstarts/publish-knowledge-base).
-
-## Create a Java file
-
-Open VSCode and create a new file named `PublishKB.java`.
-
-## Add the required dependencies
-
-At the top of `PublishKB.java`, above the class, add the following lines to add necessary dependencies to the project:
--
-## Create PublishKB class with main method
-
-After the dependencies, add the following class:
-
-```Java
-public class PublishKB {
-
- public static void main(String[] args)
- {
- }
-}
-```
-
-## Add required constants
-
-In the **main** method, add the required constants to access QnA Maker. Replace the values with your own.
--
-## Add POST request to publish knowledge base
-
-After the required constants, add the following code, which makes an HTTPS request to the QnA Maker API to publish a knowledge base and receives the response:
--
-The API call returns a 204 status for a successful publish without any content in the body of the response. The code adds content for 204 responses.
-
-For any other response, that response is returned unaltered.
-
-## Build and run the program
-
-Build and run the program from the command line. It automatically sends the request to the QnA Maker API, then prints the response to the console window.
-
-1. Build the file:
-
- ```bash
- javac -cp "lib/*" PublishKB.java
- ```
-
-1. Run the file:
-
- ```bash
- java -cp ".;lib/*" PublishKB
- ```
--
-## Next steps
-
-After the knowledge base is published, you need the [endpoint URL to generate an answer](./get-answer-from-knowledge-base-java.md).
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
cognitive-services Publish Kb Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/publish-kb-nodejs.md
- Title: "Quickstart: QnA Maker with REST APIs for Node.js"
-description: This quickstart shows how to get started with the QnA Maker REST APIs for Node.js. Follow these steps to install the package and try out the example code for basic tasks. QnA Maker enables you to power a question-and-answer service from your semi-structured content like FAQ documents, URLs, and product manuals.
-- Previously updated : 02/08/2020-----
-# Quickstart: QnA Maker REST APIs for Node.js
-
-Get started with the QnA Maker REST APIs for Node.js. Follow these steps to try out the example code for basic tasks. QnA Maker enables you to power a question-and-answer service from your semi-structured content like FAQ documents, URLs, and product manuals.
-
-Use the QnA Maker REST APIs for Node.js to:
-
-* Create a knowledge base
-* Replace a knowledge base
-* Publish a knowledge base
-* Delete a knowledge base
-* Download a knowledge base
-* Get status of an operation
-
-[Reference documentation](/rest/api/cognitiveservices/qnamaker/knowledgebase) | [Node.js Samples](https://github.com/Azure-Samples/cognitive-services-qnamaker-nodejs/tree/master/documentation-samples/quickstarts/rest-api)
--
-## Prerequisites
-
-* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-* The current version of [Node.js](https://nodejs.org).
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and endpoint (which includes the resource name), select **Quickstart** for your resource in the Azure portal.
-
-## Setting up
-
-### Create a QnA Maker Azure resource
-
-Azure Cognitive Services are represented by Azure resources that you subscribe to. Create a resource for QnA Maker using the [Azure portal](../../cognitive-services-apis-create-account.md) or [Azure CLI](../../cognitive-services-apis-create-account-cli.md) on your local machine.
-
-After getting a key from your resource, [create environment variables](../../cognitive-services-apis-create-account.md#configure-an-environment-variable-for-authentication) for the resource, named `QNAMAKER_RESOURCE_KEY` and `QNAMAKER_AUTHORING_ENDPOINT`. Use the key and endpoint values found in the Resource's **Quickstart** page in the Azure portal.
-
-### Create a new Node.js application
-
-In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
-
-```console
-mkdir myapp && cd myapp
-```
-
-Run the `npm init -y` command to create a node `package.json` file.
-
-```console
-npm init -y
-```
-
-Add the `requestretry` and `request` NPM packages:
-
-```console
-npm install requestretry request --save
-```
-
-## Code examples
-
-These code snippets show you how to do the following with the QnA Maker REST APIs for Node.js:
-
-* [Create a knowledge base](#create-a-knowledge-base)
-* [Replace a knowledge base](#replace-a-knowledge-base)
-* [Publish a knowledge base](#publish-a-knowledge-base)
-* [Delete a knowledge base](#delete-a-knowledge-base)
-* [Download a knowledge base](#download-the-knowledge-base)
-* [Get status of an operation](#get-status-of-an-operation)
-
-## Add the dependencies
-
-Create a file named `rest-apis.js` and add the following dependencies.
--
-## Add utility functions
-
-Add the following utility functions.
--
-## Add Azure resource information
-
-Create variables for your resource's Azure endpoint and key. If you created the environment variable after you launched the application, you will need to close and reopen the editor, IDE, or shell running it to access the variable.
-
-Set the following environment values:
-
-* `QNAMAKER_RESOURCE_KEY` - The **key** is a 32 character string and is available in the Azure portal, on the QnA Maker resource, on the **Quick start** page. This is not the same as the prediction endpoint key.
-* `QNAMAKER_AUTHORING_ENDPOINT` - Your authoring endpoint, in the format of `https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com`, includes your **resource name**. This is not the same URL used to query the prediction endpoint.
--
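The environment variables above can be read and validated with a small sketch like this (the helper name and returned shape are illustrative, not from the sample; the `Ocp-Apim-Subscription-Key` header is the one QnA Maker authoring calls expect):

```javascript
// Sketch: read the QnA Maker resource key and authoring endpoint from
// environment variables, failing fast when either is missing.
function getQnaMakerConfig(env) {
  const key = env.QNAMAKER_RESOURCE_KEY;
  const endpoint = env.QNAMAKER_AUTHORING_ENDPOINT;
  if (!key || !endpoint) {
    throw new Error(
      'Set QNAMAKER_RESOURCE_KEY and QNAMAKER_AUTHORING_ENDPOINT first.');
  }
  // The authoring key is sent as the Ocp-Apim-Subscription-Key header
  // on every authoring request.
  return { key, endpoint, headers: { 'Ocp-Apim-Subscription-Key': key } };
}
```

In the sample itself you would call this once at startup, for example `getQnaMakerConfig(process.env)`.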
-## Create a knowledge base
-
-A knowledge base stores question and answer pairs, created from a JSON object of:
-
-* **Editorial content**.
-* **Files** - local files that do not require any permissions.
-* **URLs** - publicly available URLs.
-
-Use the [REST API to create a knowledge base](/rest/api/cognitiveservices/qnamaker/knowledgebase/create).
--
-## Replace a knowledge base
-
-Use the [REST API to replace a knowledge base](/rest/api/cognitiveservices/qnamaker/knowledgebase/replace).
--
-## Publish a knowledge base
-
-Publish the knowledge base. This process makes the knowledge base available from an HTTP query prediction endpoint.
-
-Use the [REST API to publish a knowledge base](/rest/api/cognitiveservices/qnamaker/knowledgebase/publish).
--
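As a rough sketch of what the elided snippet does: publishing is a single POST with an empty body, and a 204 response means success. The helper name is ours, and the `/qnamaker/v4.0/knowledgebases/` path is an assumption based on the QnA Maker v4.0 REST reference:

```javascript
// Illustrative sketch (not the sample's exact code): build the publish
// request for a knowledge base. Publish takes no request body.
function buildPublishRequest(endpoint, resourceKey, kbId) {
  return {
    method: 'POST',
    // Assumed v4.0 authoring route from the QnA Maker REST reference.
    url: `${endpoint}/qnamaker/v4.0/knowledgebases/${kbId}`,
    headers: { 'Ocp-Apim-Subscription-Key': resourceKey },
    body: '' // A 204 response with no content indicates success.
  };
}
```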
-## Download the knowledge base
-
-Use the [REST API to download a knowledge base](/rest/api/cognitiveservices/qnamaker/knowledgebase/download).
--
-## Delete a knowledge base
-
-When you are done using the knowledge base, delete it.
-
-Use the [REST API to delete a knowledge base](/rest/api/cognitiveservices/qnamaker/knowledgebase/delete).
--
-## Get status of an operation
-
-Long-running processes, such as knowledge base creation, return an operation ID that must be checked with a separate REST API call. This function takes the body of the create response. The important key is `operationState`, which determines whether you need to continue polling.
-
-Use the [REST API to monitor operations on a knowledge base](/rest/api/cognitiveservices/qnamaker/operations/getdetails).
--
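The polling loop described above can be sketched like this; `getDetails` stands in for the REST call to the operations endpoint and is injected so the loop itself stays testable:

```javascript
// Poll the operation until operationState reports a terminal state.
// 'Running' and 'NotStarted' mean keep polling; anything else is final.
async function waitForOperation(getDetails, operationId, delayMs = 1000) {
  for (;;) {
    const details = await getDetails(operationId);
    if (details.operationState !== 'Running' &&
        details.operationState !== 'NotStarted') {
      return details;
    }
    // Wait before polling again so we don't hammer the service.
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
}
```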
-## Add main method
-
-Add the following `main` method.
--
-## Run the application
-
-Run the application with `node rest-apis.js` command from your application directory.
-
-```console
-node rest-apis.js
-```
-
-## Clean up resources
-
-If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
-
-* [Portal](../../cognitive-services-apis-create-account.md#clean-up-resources)
-* [Azure CLI](../../cognitive-services-apis-create-account-cli.md#clean-up-resources)
-
-## Next steps
-
-> [!div class="nextstepaction"]
->[Tutorial: Create and answer a KB](./create-publish-knowledge-base.md)
-
-* [What is the QnA Maker API?](../Overview/overview.md)
-* [Edit a knowledge base](../how-to/edit-knowledge-base.md)
-* [Get usage analytics](../how-to/get-analytics-knowledge-base.md)
-* The source code for this sample can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-qnamaker-nodejs/blob/master/documentation-samples/quickstarts/rest-api/rest-api.js).
cognitive-services Publish Kb Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/publish-kb-python.md
- Title: "Quickstart: Publish knowledge base, REST, Python - QnA Maker"
-description: This Python REST-based quickstart publishes your knowledge base and creates an endpoint that can be called in your application or chat bot.
-- Previously updated : 02/08/2020-----
-# Quickstart: Publish a knowledge base in QnA Maker using Python
-
-This REST-based quickstart walks you through programmatically publishing your knowledge base (KB). Publishing pushes the latest version of the knowledge base to a dedicated Azure Cognitive Search index and creates an endpoint that can be called in your application or chat bot.
-
-This quickstart calls QnA Maker REST APIs:
-* [Publish](/rest/api/cognitiveservices/qnamaker/knowledgebase/publish) - this API doesn't require any information in the body of the request.
-
-## Prerequisites
-
-* [Python 3.7](https://www.python.org/downloads/)
-* You must have a [QnA Maker service](../How-To/set-up-qnamaker-service-azure.md). To retrieve your key and endpoint (which includes the resource name), select **Quickstart** for your resource in the Azure portal.
-* QnA Maker knowledge base (KB) ID found in the URL in the `kbid` query string parameter as shown below.
-
- ![QnA Maker knowledge base ID](../media/qnamaker-quickstart-kb/qna-maker-id.png)
-
- If you don't have a knowledge base yet, you can create a sample one to use for this quickstart: [Create a new knowledge base](./create-publish-knowledge-base.md).
-
-> [!NOTE]
-> The complete solution file(s) are available from the [**Azure-Samples/cognitive-services-qnamaker-python** GitHub repository](https://github.com/Azure-Samples/cognitive-services-qnamaker-python/tree/master/documentation-samples/quickstarts/publish-knowledge-base).
-
-## Create a knowledge base Python file
-
-Create a file named `publish-kb-3x.py`.
-
-## Add the required dependencies
-
-At the top of `publish-kb-3x.py`, add the following lines to add necessary dependencies to the project:
--
-## Add required constants
-
-After the preceding required dependencies, add the required constants to access QnA Maker. Replace the values with your own.
--
-## Add POST request to publish knowledge base
-
-After the required constants, add the following code, which makes an HTTPS request to the QnA Maker API to publish a knowledge base and receives the response:
--
-The API call returns a 204 status for a successful publish without any content in the body of the response. The code adds content for 204 responses.
-
-For any other response, that response is returned unaltered.
-
-## Build and run the program
-
-Enter the following command at a command line to run the program. It sends the request to the QnA Maker API to publish the knowledge base, then prints 204 on success or the error response otherwise.
-
-```bash
-python publish-kb-3x.py
-```
--
-## Next steps
-
-After the knowledge base is published, you need the [endpoint URL to generate an answer](./get-answer-from-knowledge-base-python.md).
-
-> [!div class="nextstepaction"]
-> [QnA Maker (V4) REST API Reference](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase)
-
-[QnA Maker overview](../Overview/overview.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/whats-new.md
Learn what's new with QnA Maker.
### July 2020
-* [Metadata: `OR` logical combination of multiple metadata pairs](how-to/metadata-generateanswer-usage.md#logical-or-using-strictfilterscompoundoperationtype-property)
+* [Metadata: `OR` logical combination of multiple metadata pairs](how-to/query-knowledge-base-with-metadata.md#logical-or-using-strictfilterscompoundoperationtype-property)
* [Steps](how-to/network-isolation.md) to configure Cognitive Search endpoints to be private, but still accessible to QnA Maker. * Free Cognitive Search resources are removed after [90 days of inactivity](how-to/set-up-qnamaker-service-azure.md#inactivity-policy-for-free-search-resources).
cognitive-services How To Custom Commands Debug Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-commands-debug-runtime.md
If your run Custom Commands application from [client application (with Speech SD
| Error code | Details | | - | -- | | [401](#error-401) | AuthenticationFailure: WebSocket Upgrade failed with an authentication error |
-| [1002](#error-1002)] | The server returned status code '404' when status code '101' was expected. |
+| [1002](#error-1002) | The server returned status code '404' when status code '101' was expected. |
### Error 401 - The region specified in client application does not match with the region of the custom command application
You have an empty parameter in the JSON payload defined in **Send Activity to Cl
## Next steps > [!div class="nextstepaction"]
-> [See samples on GitHub](https://aka.ms/speech/cc-samples)
+> [See samples on GitHub](https://aka.ms/speech/cc-samples)
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
# QuickStart: Add 1:1 video calling to your app (JavaScript)
+## Download Code
+
+Find the finalized code for this quickstart on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-video-calling)
+ ## Prerequisites - Obtain an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Node.js](https://nodejs.org/en/) Active LTS and Maintenance LTS versions (8.11.1 and 10.14.1)
Open your browser and navigate to http://localhost:8080/. You should see the fol
You can make an 1:1 outgoing video call by providing a user ID in the text field and clicking the Start Call button. ## Sample Code
-You can download the sample app from [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/Add%201%20on%201%20video%20calling).
+You can download the sample app from [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/add-1-on-1-video-calling).
## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](https://docs.microsoft.com/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-azp#clean-up-resources).
communication-services Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/postman-tutorial.md
+
+ Title: Tutorial - Sign and make requests to ACS' SMS API with Postman
+
+description: Learn how to sign and makes requests for ACS with Postman to send an SMS Message.
+++ Last updated : 03/08/2021+++
+# Tutorial: Sign and make requests with Postman
+In this tutorial, we'll set up and use Postman to make a request against Azure Communication Services (ACS) over HTTP. By the end of this tutorial, you'll have successfully sent an SMS message using ACS and Postman, and you'll be able to use Postman to explore other APIs within ACS.
+
+In this tutorial we'll be:
+> [!div class="checklist"]
+> * Downloading Postman
+> * Setting up Postman to sign HTTP Requests
+> * Making a request against ACS' SMS API to send a message.
+
+## Prerequisites
+
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). The free account gives you $200 in Azure credits to try out any combination of services.
+- An active Communication Services resource and connection string. [Learn how to create a Communication Services resource](../quickstarts/create-communication-resource.md).
+- An ACS Telephone number that can send SMS messages, see our [Get a phone number](../quickstarts/telephony-sms/get-phone-number.md) to get one.
+
+## Downloading and installing Postman
+
+Postman is a desktop application capable of making API requests against any HTTP API. It is commonly used for testing and exploring APIs. We'll download the latest [desktop version from Postman's website](https://www.postman.com/downloads/). Postman has versions for Windows, Mac, and Linux, so download the version appropriate for your operating system. Once downloaded, open the application. You'll be presented with a start screen that asks you to sign in or to create a Postman account. Creating an account is optional and can be skipped by clicking the "Skip and go to app" link. Creating an account saves your API request settings to Postman, which lets you pick up your requests on other computers.
++
+Once you've created an account or skipped creating one, you should now see Postman's main window.
+
+## Creating and configuring a Postman collection
+
+Postman can organize requests in many ways. For the purposes of this tutorial, we'll create a Postman Collection. To do this, select the collections button on the left-hand side of the application:
++
+Once selected, click "Create new Collection" to start the collection creation process. A new tab will open in the center area of Postman. Name the collection whatever you'd like. Here the collection is named "ACS":
++
+Once your collection is created and named, you are ready to configure it.
+
+### Adding collection variables
+
+To handle authentication and to make requests easier, we'll specify two collection variables within the newly created ACS collection. These variables are available to all requests within your ACS collection. To create the variables, visit the collection's Variables tab.
++
+Once on the collection tab, create two variables:
+- key - This variable should be one of your keys from your Azure Communication Services' key page within the Azure portal. For example, `oW...A==`.
+- endpoint - This variable should be your Azure Communication Services' endpoint from the key page. **Ensure you remove the trailing slash**. For example, `https://contoso.communication.azure.com`.
+
+Enter these values into the "Initial Value" column of the variables screen. Once entered, press the "Persist All" button just above the table on the right. When configured correctly your Postman screen should look something like this:
++
+You can learn more about variables by reading [Postman's documentation on them](https://learning.postman.com/docs/sending-requests/variables).
+
+### Creating a pre-request script
+
+The next step is to create a pre-request script within Postman. A pre-request script runs before each request in Postman and can modify request parameters on your behalf. We'll use this to sign our HTTP requests so that they can be authorized by ACS. For more information about the signing requirements, you can [read our guide on authentication](https://docs.microsoft.com/rest/api/communication/authentication).
+
+We'll be creating this script within the Collection such that it runs on any request within the collection. To do this, within the collection tab click the "Pre-request Script" Sub-Tab.
++
+On this sub-tab, you can create a pre-request script by entering it into the text area below. It may be easier to write the script in a full code editor such as [Visual Studio Code](https://code.visualstudio.com/) and paste it in when complete. We'll go through each part of the script in this tutorial. Feel free to skip to the end if you'd like to just copy it into Postman and get started. Let's start writing the script.
+
+### Writing the pre-request script
+
+The first thing we'll be doing is creating a Coordinated Universal Time (UTC) string and adding this to the "Date" HTTP Header. We also store this string in a variable to use it later when signing:
+
+```JavaScript
+// Set the Date header to our Date as a UTC String.
+const dateStr = new Date().toUTCString();
+pm.request.headers.upsert({key:'Date', value: dateStr});
+```
+
+Next, we'll hash the request body using SHA-256 and place it in the `x-ms-content-sha256` header. Postman includes some [standard libraries](https://learning.postman.com/docs/writing-scripts/script-references/postman-sandbox-api-reference/#using-external-libraries) for hashing and signing globally, so we don't need to install them or require them:
+
+```JavaScript
+// Hash the request body using SHA256 and encode it as Base64
+const hashedBodyStr = CryptoJS.SHA256(pm.request.body.raw).toString(CryptoJS.enc.Base64)
+// And add that to the header x-ms-content-sha256
+pm.request.headers.upsert({
+ key:'x-ms-content-sha256',
+ value: hashedBodyStr
+});
+```
+
+Now, we'll use our previously specified endpoint variable to derive the value of the HTTP Host header. We need to do this because the Host header isn't set until after this script is processed:
+
+```JavaScript
+// Get our previously specified endpoint variable
+const endpoint = pm.variables.get('endpoint')
+// Remove the https:// prefix to create a suitable "Host" value
+const hostStr = endpoint.replace('https://','');
+```
+
+With this information in hand, we can now build the string we'll sign for the HTTP request. It's composed of several previously created values:
+
+```JavaScript
+// This gets the part of our URL that is after the endpoint, for example in https://contoso.communication.azure.com/sms, it will get '/sms'
+const url = pm.request.url.toString().replace('{{endpoint}}','');
+
+// Construct our string which we'll sign, using various previously created values.
+const stringToSign = pm.request.method + '\n' + url + '\n' + dateStr + ';' + hostStr + ';' + hashedBodyStr;
+```
+
+Lastly, we need to sign this string using our ACS key and then add that to our request in the `Authorization` header:
+
+```JavaScript
+// Decode our access key from previously created variables, into bytes from base64.
+const key = CryptoJS.enc.Base64.parse(pm.variables.get('key'));
+// Sign our previously calculated string with HMAC 256 and our key. Convert it to Base64.
+const signature = CryptoJS.HmacSHA256(stringToSign, key).toString(CryptoJS.enc.Base64);
+
+// Add our final signature in Base64 to the authorization header of the request.
+pm.request.headers.upsert({
+ key:'Authorization',
+ value: "HMAC-SHA256 SignedHeaders=date;host;x-ms-content-sha256&Signature=" + signature
+});
+```
+
+### The final pre-request script
+
+The final pre-request script should look something like this:
+
+```JavaScript
+// Set the Date header to our Date as a UTC String.
+const dateStr = new Date().toUTCString();
+pm.request.headers.upsert({key:'Date', value: dateStr});
+
+// Hash the request body using SHA256 and encode it as Base64
+const hashedBodyStr = CryptoJS.SHA256(pm.request.body.raw).toString(CryptoJS.enc.Base64)
+// And add that to the header x-ms-content-sha256
+pm.request.headers.upsert({
+ key:'x-ms-content-sha256',
+ value: hashedBodyStr
+});
+
+// Get our previously specified endpoint variable
+const endpoint = pm.variables.get('endpoint')
+// Remove the https:// prefix to create a suitable "Host" value
+const hostStr = endpoint.replace('https://','');
+
+// This gets the part of our URL that is after the endpoint, for example in https://contoso.communication.azure.com/sms, it will get '/sms'
+const url = pm.request.url.toString().replace('{{endpoint}}','');
+
+// Construct our string which we'll sign, using various previously created values.
+const stringToSign = pm.request.method + '\n' + url + '\n' + dateStr + ';' + hostStr + ';' + hashedBodyStr;
+
+// Decode our access key from previously created variables, into bytes from base64.
+const key = CryptoJS.enc.Base64.parse(pm.variables.get('key'));
+// Sign our previously calculated string with HMAC 256 and our key. Convert it to Base64.
+const signature = CryptoJS.HmacSHA256(stringToSign, key).toString(CryptoJS.enc.Base64);
+
+// Add our final signature in Base64 to the authorization header of the request.
+pm.request.headers.upsert({
+ key:'Authorization',
+ value: "HMAC-SHA256 SignedHeaders=date;host;x-ms-content-sha256&Signature=" + signature
+});
+```
+
+Enter or paste this final script, into the text area within the Pre-request Script Tab:
++
+Once entered, press CTRL + S or click the save button to save the script to the collection.
++
+## Creating a request in Postman
+
+Now that everything is set up, we're ready to create an ACS request within Postman. To get started, click the plus (+) icon next to the ACS collection:
++
+This will create a new tab for our request within Postman. With it created, we need to configure it. We'll be making a request against the SMS Send API, so be sure to refer to the [documentation for this API](https://docs.microsoft.com/rest/api/communication/sms/send) for assistance. Let's configure Postman's request.
+
+Start by setting the request type to `POST` and entering `{{endpoint}}/sms?api-version=2021-03-07` into the request URL field. This URL uses our previously created `endpoint` variable to automatically send the request to your ACS resource.
++
+Now select the Body tab of the request and then change the radio button beneath to "raw". On the right, there is a dropdown that says "Text", change it to JSON:
++
+This will configure the request to send and receive information in a JSON format.
+
+In the text area below, enter a request body in the following format:
+
+```JSON
+{
+ "from":"<Your ACS Telephone Number>",
+ "message":"<The message you'd like to send>",
+ "smsRecipients": [
+ {
+ "to":"<The number you'd like to send the message to>"
+ }
+ ]
+}
+```
+
+For the "from" value, you'll need to [get a telephone number](../quickstarts/telephony-sms/get-phone-number.md) in the ACS Portal as previously mentioned. Enter it without any spaces and prefixed by your country code. For example: `+15555551234`. Your "message" can be whatever you'd like to send but `Hello from ACS` is a good example. The "to" value should be a phone you have access to that can receive SMS messages. Using your own mobile is a good idea.
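A small helper (names ours, not part of the tutorial) can make those formatting rules concrete:

```javascript
// Illustrative helper: build the SMS request body shown above, enforcing
// that "from" is in E.164 form (plus sign, country code, no spaces).
function buildSmsBody(from, message, toNumbers) {
  if (!/^\+\d{7,15}$/.test(from)) {
    throw new Error('"from" must look like +15555551234 (no spaces).');
  }
  return {
    from,
    message,
    // Each recipient becomes a { to } object in smsRecipients.
    smsRecipients: toNumbers.map(to => ({ to }))
  };
}
```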
+
+Once entered, we need to save this request into the ACS collection that we created earlier. This ensures that it picks up the collection's variables and pre-request script. To do this, click the "save" button in the top right of the request area.
++
+A dialog window will appear asking what you'd like to call the request and where you'd like to save it. You can name it anything you'd like, but ensure you select your ACS collection in the lower half of the dialog:
++
+## Sending a request
+
+Now that everything is set up, you should be able to send the request and get an SMS message on your phone. To do this, ensure your created request is selected and then press the "Send" button on the right:
++
+If everything went well, you should now see the response from ACS, which should be a 202 status code:
++
+The mobile phone with the number you provided in the "to" value should also have received an SMS message. You now have a working Postman setup that can talk to ACS and send SMS messages.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Explore ACS APIs](https://docs.microsoft.com/rest/api/communication/)
+> [Read more about Authentication](https://docs.microsoft.com/rest/api/communication/authentication)
+> [Learn more about Postman](https://learning.postman.com/)
+
+You might also want to:
+
+- [Add chat to your app](../quickstarts/chat/get-started.md)
+- [Create user access tokens](../quickstarts/access-tokens.md)
+- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
+- [Learn about authentication](../concepts/authentication.md)
container-registry Container Registry Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-azure-policy.md
The following built-in policy definitions are specific to Azure Container Regist
[!INCLUDE [azure-policy-reference-rp-containerreg](../../includes/policy/reference/byrp/microsoft.containerregistry.md)]
-See also the built-in network policy definition: [Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78).
- ## Assign policies * Assign policies using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
cosmos-db Find Request Unit Charge Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-cassandra.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and it's considerations](request-units.md) article.
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Cassandra API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [SQL API](find-request-unit-charge.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [Table API](find-request-unit-charge-table.md) articles to find the RU/s charge.
cosmos-db Find Request Unit Charge Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-gremlin.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and it's considerations](request-units.md) article.
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Gremlin API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [Cassandra API](find-request-unit-charge-cassandra.md), [SQL API](find-request-unit-charge.md), and [Table API](find-request-unit-charge-table.md) articles to find the RU/s charge.
cosmos-db Find Request Unit Charge Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-mongodb.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and it's considerations](request-units.md) article.
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB API for MongoDB. If you are using a different API, see [SQL API](find-request-unit-charge.md), [Cassandra API](find-request-unit-charge-cassandra.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [Table API](find-request-unit-charge-table.md) articles to find the RU/s charge.
cosmos-db Find Request Unit Charge Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge-table.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and it's considerations](request-units.md) article.
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Table API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [Cassandra API](find-request-unit-charge-cassandra.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [SQL API](find-request-unit-charge.md) articles to find the RU/s charge.
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/find-request-unit-charge.md
Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](request-units.md) article.
This article presents the different ways you can find the [request unit](request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB SQL API. If you are using a different API, see [API for MongoDB](find-request-unit-charge-mongodb.md), [Cassandra API](find-request-unit-charge-cassandra.md), [Gremlin API](find-request-unit-charge-gremlin.md), and [Table API](find-request-unit-charge-table.md) articles to find the RU/s charge.
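Whichever API you use, each response reports the RU cost of the operation back to the client; for the SQL API, it's the documented `x-ms-request-charge` response header. The following is a minimal sketch of reading and totaling those charges — the helper names and the sample header values are illustrative, not from a live account:

```python
def request_charge(headers: dict) -> float:
    """Read the RU cost recorded in a single response's headers."""
    # "x-ms-request-charge" is the SQL API response header; the
    # values used below are made up for illustration.
    return float(headers.get("x-ms-request-charge", 0.0))

def total_charge(responses: list) -> float:
    """Sum the RU cost across a batch of response headers."""
    return sum(request_charge(h) for h in responses)

samples = [{"x-ms-request-charge": "2.38"},
           {"x-ms-request-charge": "10.45"}]
print(round(total_charge(samples), 2))
```

Tracking the aggregate charge like this is a common way to spot which operations dominate your provisioned throughput.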
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
Title: Quickstart - Explore Azure costs with cost analysis
description: This quickstart helps you use cost analysis to explore and analyze your Azure organizational costs. Previously updated : 01/04/2021 Last updated : 03/10/2021
The initial cost analysis view includes the following areas.
### Understand forecast
-Cost forecast shows a projection of your estimated costs for the selected time period. The model is based on a time series regression model. It requires at least 10 days of recent cost and usage data to accurately forecast costs. For a given time period, the forecast model requires equal parts of training data for the forecast period. For example, a projection of three months requires at least three months of recent cost and usage data.
+Based on your recent usage, cost forecasts show a projection of your estimated costs for the selected time period. If a budget is set up in Cost analysis, you can view when forecasted spend is likely to exceed the budget threshold. The forecast model can predict future costs for up to a year. Select filters to view the granular forecasted cost for your selected dimension.
-The model uses a maximum of six months of training data to project costs for a year. At a minimum, it needs seven days of training data to change its prediction. The prediction is based on dramatic changes, such as spikes and dips, in cost and usage patterns. Forecast doesn't generate individual projections for each item in **Group by** properties. It only provides a forecast for total accumulated costs. If you use multiple currencies, the model provides forecast for costs only in USD.
-
-Because of the model's reliance on data dips and spikes, large purchases like reserved instances will cause your forecast to become artificially inflated. The forecast time period and the size of purchases affect how long the forecast is affected. The forecast returns to normal when spending stabilizes.
+The forecast model is based on a time series regression model. It requires at least 10 days of recent cost and usage data to accurately forecast costs. For a given time period, the forecast model requires equal parts of training data for the forecast period. For example, a projection of three months requires at least three months of recent cost and usage data.
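The time series regression described above can be illustrated with a simple least-squares sketch. This is a toy stand-in, not the actual Cost Management model; it only keeps the documented rule of needing at least 10 days of recent cost data:

```python
def forecast_costs(daily_costs, days_ahead):
    """Fit a least-squares linear trend over recent daily costs and
    project it forward. A toy illustration of trend forecasting, not
    the service's actual time series regression model."""
    n = len(daily_costs)
    if n < 10:
        raise ValueError("need at least 10 days of recent cost and usage data")
    mean_x = (n - 1) / 2
    mean_y = sum(daily_costs) / n
    var_x = sum((x - mean_x) ** 2 for x in range(n))
    cov_xy = sum((x - mean_x) * (y - mean_y)
                 for x, y in enumerate(daily_costs))
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n + d) for d in range(days_ahead)]

# A steadily growing daily spend of 0, 1, ..., 9 projects to 10, 11, 12.
print(forecast_costs([float(i) for i in range(10)], 3))
```

As with the real model, more training data generally means a more stable projection, and a short history produces an unreliable one.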
## Customize cost views
You also have the option of using the [az costmanagement export](/cli/azure/ext/
Advance to the first tutorial to learn how to create and manage budgets.

> [!div class="nextstepaction"]
-> [Create and manage budgets](tutorial-acm-create-budgets.md)
+> [Create and manage budgets](tutorial-acm-create-budgets.md)
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Title: Tutorial - Create and manage Azure budgets
-description: This tutorial helps plan and account for the costs of Azure services that you consume.
+description: This tutorial helps you plan and account for the costs of Azure services that you consume.
Previously updated : 01/27/2021 Last updated : 03/09/2021
# Tutorial: Create and manage Azure budgets
-Budgets in Cost Management help you plan for and drive organizational accountability. With budgets, you can account for the Azure services you consume or subscribe to during a specific period. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. When the budget thresholds you've created are exceeded, only notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs.
+Budgets in Cost Management help you plan for and drive organizational accountability. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. You can configure alerts based on your actual cost or forecasted cost to ensure that your spend is within your organizational spend limit. When the budget thresholds you've created are exceeded, only notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs.
Cost and usage data is typically available within 8-24 hours and budgets are evaluated against these costs every 24 hours. Be sure to get familiar with [Cost and usage data updates](./understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention) specifics. When a budget threshold is met, email notifications are normally sent within an hour of the evaluation.
-Budgets reset automatically at the end of a period (monthly, quarterly, or annually) for the same budget amount when you select an expiration date in the future. Because they reset with the same budget amount, you need to create separate budgets when budgeted currency amounts differ for future periods. When a budget expires, it is automatically deleted.
+Budgets reset automatically at the end of a period (monthly, quarterly, or annually) for the same budget amount when you select an expiration date in the future. Because they reset with the same budget amount, you need to create separate budgets when budgeted currency amounts differ for future periods. When a budget expires, it's automatically deleted.
The examples in this tutorial walk you through creating and editing a budget for an Azure Enterprise Agreement (EA) subscription.
For more information about assigning permission to Cost Management data, see [As
You can create an Azure subscription budget for a monthly, quarterly, or annual period.
-To create or view a budget, open the desired scope in the Azure portal and select **Budgets** in the menu. For example, navigate to **Subscriptions**, select a subscription from the list, and then select **Budgets** in the menu. Use the **Scope** pill to switch to a different scope, like a management group, in Budgets. For more information about scopes, see [Understand and work with scopes](understand-work-scopes.md).
+To create or view a budget, open a scope in the Azure portal and select **Budgets** in the menu. For example, navigate to **Subscriptions**, select a subscription from the list, and then select **Budgets** in the menu. Use the **Scope** pill to switch to a different scope, like a management group, in Budgets. For more information about scopes, see [Understand and work with scopes](understand-work-scopes.md).
After you create budgets, they show a simple view of your current spending against them. Select **Add**.
-![Example showing a list of budgets already created](./media/tutorial-acm-create-budgets/budgets01.png)
In the **Create budget** window, make sure that the scope shown is correct. Choose any filters that you want to add. Filters allow you to create budgets on specific costs, such as resource groups in a subscription or a service like virtual machines. Any filter you can use in cost analysis can also be applied to a budget.
-After you've identified your scope and filters, type a budget name. Then, choose a monthly, quarterly, or annual budget reset period. This reset period determines the time window that's analyzed by the budget. The cost evaluated by the budget starts at zero at the beginning of each new period. When you create a quarterly budget, it works in the same way as a monthly budget. The difference is that the budget amount for the quarter is evenly divided among the three months of the quarter. An annual budget amount is evenly divided among all 12 months of the calendar year.
+After you identify your scope and filters, type a budget name. Then, choose a monthly, quarterly, or annual budget reset period. The reset period determines the time window that's analyzed by the budget. The cost evaluated by the budget starts at zero at the beginning of each new period. When you create a quarterly budget, it works in the same way as a monthly budget. The difference is that the budget amount for the quarter is evenly divided among the three months of the quarter. An annual budget amount is evenly divided among all 12 months of the calendar year.
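The even division described above can be sketched as follows (the function name is ours; the reset-period labels match the portal options):

```python
def monthly_allocation(amount: float, reset_period: str) -> float:
    """Evenly divide a budget amount across the months in its reset
    period, as described for quarterly and annual budgets."""
    months = {"Monthly": 1, "Quarterly": 3, "Annually": 12}[reset_period]
    return amount / months

# A 900 quarterly budget evaluates each month against 300.
print(monthly_allocation(900.0, "Quarterly"))
```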
If you have a Pay-As-You-Go, MSDN, or Visual Studio subscription, your invoice billing period might not align to the calendar month. For those subscription types and resource groups, you can create a budget that's aligned to your invoice period or to calendar months. To create a budget aligned to your invoice period, select a reset period of **Billing month**, **Billing quarter**, or **Billing year**. To create a budget aligned to the calendar month, select a reset period of **Monthly**, **Quarterly**, or **Annually**.
Next, identify the expiration date when the budget becomes invalid and stops eva
Based on the fields chosen in the budget so far, a graph is shown to help you select a threshold to use for your budget. The suggested budget is based on the highest forecasted cost that you might incur in future periods. You can change the budget amount.
-![Example showing budget creation with monthly cost data ](./media/tutorial-acm-create-budgets/monthly-budget01.png)
-After you configure the budget amount, select **Next** to configure budget alerts. Budgets require at least one cost threshold (% of budget) and a corresponding email address. You can optionally include up to five thresholds and five email addresses in a single budget. When a budget threshold is met, email notifications are normally sent within an hour of the evaluation.
+After you configure the budget amount, select **Next** to configure actual cost and forecasted budget alerts.
+
+## Configure actual costs budget alerts
+
+Budgets require at least one cost threshold (% of budget) and a corresponding email address. You can optionally include up to five thresholds and five email addresses in a single budget. When a budget threshold is met, email notifications are normally sent within an hour of the evaluation. Actual costs budget alerts are generated when the actual cost you've accrued reaches one of the configured budget thresholds.
+
+## Configure forecasted budget alerts
+
+Forecasted alerts provide advanced notification that your spending trends are likely to exceed your budget. The alerts use [forecasted cost predictions](quick-acm-cost-analysis.md#understand-forecast). Alerts are generated when the forecasted cost projection exceeds the set threshold. You can configure a forecasted threshold (% of budget). When a forecasted budget threshold is met, notifications are normally sent within an hour of the evaluation.
+
+To toggle between configuring an actual or forecasted cost alert, use the `Type` field when configuring the alert.
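The evaluation described in the last two sections can be sketched as a threshold check. This is an illustration under the documented constraints (one to five thresholds, each between 0.01% and 1000% of budget, notifications only), not the service's implementation:

```python
def breached_thresholds(budget: float, cost: float, thresholds: list) -> list:
    """Return the %-of-budget thresholds the given cost (actual or
    forecasted) has met. Only notifications result; nothing is stopped."""
    if not 1 <= len(thresholds) <= 5:
        raise ValueError("a budget takes one to five alert thresholds")
    if any(not 0.01 <= t <= 1000 for t in thresholds):
        raise ValueError("thresholds must be between 0.01% and 1000%")
    pct_used = cost / budget * 100
    return sorted(t for t in thresholds if pct_used >= t)

# 920 spent against a 1000 budget (92%) meets the 50, 75, and 90 marks.
print(breached_thresholds(1000.0, 920.0, [50, 75, 90, 100]))
```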
If you want to receive emails, add azure-noreply@microsoft.com to your approved senders list so that emails don't go to your junk email folder. For more information about notifications, see [Use cost alerts](./cost-mgt-alerts-monitor-usage-spending.md).
-In the example below, an email alert gets generated when 90% of the budget is reached. If you create a budget with the Budgets API, you can also assign roles to people to receive alerts. Assigning roles to people isn't supported in the Azure portal. For more about the Azure budgets API, see [Budgets API](/rest/api/consumption/budgets). If you want to have an email alert sent in a different language, see [Supported locales for budget alert emails](manage-automation.md#supported-locales-for-budget-alert-emails).
+In the following example, an email alert gets generated when 90% of the budget is reached. If you create a budget with the Budgets API, you can also assign roles to people to receive alerts. Assigning roles to people isn't supported in the Azure portal. For more about the Azure budgets API, see [Budgets API](/rest/api/consumption/budgets). If you want to have an email alert sent in a different language, see [Supported locales for budget alert emails](manage-automation.md#supported-locales-for-budget-alert-emails).
-Alert limits support a range of 0.01 to 1000% of the budget threshold that you've provided.
+Alert limits support a range of 0.01% to 1000% of the budget threshold that you've provided.
-![Example showing alert conditions](./media/tutorial-acm-create-budgets/monthly-budget-alert.png)
-After you create a budget, it is shown in cost analysis. Viewing your budget against your spending trend is one of the first steps when you start to [analyze your costs and spending](./quick-acm-cost-analysis.md).
+After you create a budget, it's shown in cost analysis. Viewing your budget against your spending trend is one of the first steps when you start to [analyze your costs and spending](./quick-acm-cost-analysis.md).
-![Example budget and spending shown in cost analysis](./media/tutorial-acm-create-budgets/cost-analysis.png)
In the preceding example, you created a budget for a subscription. You can also create a budget for a resource group. If you want to create a budget for a resource group, navigate to **Cost Management + Billing** &gt; **Subscriptions** &gt; select a subscription > **Resource groups** > select a resource group > **Budgets** > and then **Add** a budget.
You can group your Azure and AWS costs together by assigning a management group
## Costs in budget evaluations
-Budget cost evaluations now include reserved instance and purchase data. If the charges apply to you, then you might receive alerts as charges are incorporated into your evaluations. We recommend that you sign in to the [Azure portal](https://portal.azure.com) to verify that budget thresholds are properly configured to account for the new costs. Your Azure billed charges aren't changed. Budgets now evaluate against a more complete set of your costs. If the charges don't apply to you, then your budget behavior remains unchanged.
+Budget cost evaluations now include reserved instance and purchase data. If the charges apply to you, then you might receive alerts as charges are incorporated into your evaluations. Sign in to the [Azure portal](https://portal.azure.com) to verify that budget thresholds are properly configured to account for the new costs. Your Azure billed charges aren't changed. Budgets now evaluate against a more complete set of your costs. If the charges don't apply to you, then your budget behavior remains unchanged.
If you want to filter the new costs so that budgets are evaluated against first party Azure consumption charges only, add the following filters to your budget:
Budget cost evaluations are based on actual cost. They don't include amortizatio
## Trigger an action group
-When you create or edit a budget for a subscription or resource group scope, you can configure it to call an action group. The action group can perform various actions when your budget threshold is met. Action Groups are currently only supported for subscription and resource group scopes. For more information about Action Groups, see [Create and manage action groups in the Azure portal](../../azure-monitor/alerts/action-groups.md). For more information about using budget-based automation with action groups, see [Manage costs with Azure budgets](../manage/cost-management-budget-scenario.md).
+When you create or edit a budget for a subscription or resource group scope, you can configure it to call an action group. The action group can perform various actions when your budget threshold is met.
-To create or update action groups, select **Manage action groups** while you're creating or editing a budget.
+Action Groups are currently only supported for subscription and resource group scopes. For more information about creating action groups, see [Configure basic action group settings](../../azure-monitor/alerts/action-groups.md#configure-basic-action-group-settings).
-![Example of creating a budget to show Manage action groups](./media/tutorial-acm-create-budgets/manage-action-groups01.png)
+For more information about using budget-based automation with action groups, see [Manage costs with Azure budgets](../manage/cost-management-budget-scenario.md).
-Next, select **Add action group** and create the action group.
+To create or update action groups, select **Manage action group** while you're creating or editing a budget.
-![Image of the Add action group box](./media/tutorial-acm-create-budgets/manage-action-groups02.png)
-After the action group is created, close the box to return to your budget.
-
-Configure your budget to use your action group when an individual threshold is met. Up to five different thresholds are supported.
-
-![Example showing action group selection for an alert condition](./media/tutorial-acm-create-budgets/manage-action-groups03.png)
-
-The following example shows budget thresholds set to 50%, 75%, and 100%. Each is configured to trigger the specified actions within the designated action group.
-
-![Example showing alert conditions configured with various action groups and type of actions](./media/tutorial-acm-create-budgets/manage-action-groups04.png)
+Next, select **Add action group** and create the action group.
Budget integration with action groups only works for action groups that have the common alert schema disabled. For more information about disabling the schema, see [How do I enable the common alert schema?](../../azure-monitor/alerts/alerts-common-schema.md#how-do-i-enable-the-common-alert-schema)

## Create and edit budgets with PowerShell
-EA customers can create and edit budgets programmatically using the Azure PowerShell module. To download the latest version of Azure PowerShell, run the following command:
+If you're an EA customer, you can create and edit budgets programmatically using the Azure PowerShell module. To download the latest version of Azure PowerShell, run the following command:
```azurepowershell-interactive
Install-Module -Name Az
```
cost-management-billing Understand Mca Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/understand-mca-roles.md
Previously updated : 02/05/2021 Last updated : 03/10/2021
To manage your billing account for a Microsoft Customer Agreement, use the roles described in the following sections. These roles are in addition to the built-in roles Azure has to control access to resources. For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
-This article applies to a billing account for a Microsoft Customer Agreement. Check if you have access to a Microsoft Customer Agreement.
+This article applies to a billing account for a Microsoft Customer Agreement. [Check if you have access to a Microsoft Customer Agreement](#check-access-to-a-microsoft-customer-agreement).
+
+Watch the [Manage access to your MCA billing account](https://www.youtube.com/watch?v=9sqglBlKkho) video to learn how you can control access to your Microsoft Customer Agreement (MCA) billing account.
+
+>[!VIDEO https://www.youtube.com/embed/9sqglBlKkho]
## Billing role definitions
The following table describes the billing roles you use to manage your billing a
|Billing profile owner|Manage everything for billing profile|
|Billing profile contributor|Manage everything except permissions on the billing profile|
|Billing profile reader|Read-only view of everything on billing profile|
-|Invoice manager|View invoices for billing profile|
+|Invoice manager|View and pay invoices for billing profile|
|Invoice section owner|Manage everything on invoice section|
|Invoice section contributor|Manage everything except permissions on the invoice section|
|Invoice section reader|Read-only view of everything on the invoice section|
The following table describes the billing roles you use to manage your billing a
## Billing account roles and tasks
-A billing account lets you manage billing for your organization. You use billing account to organize costs, monitor charges, and invoices and control billing access for your organization. For more information, see [Understand billing account](../understand/mca-overview.md#your-billing-account).
+A billing account is created when you sign up to use Azure. You use your billing account to manage invoices, payments, and track costs. Roles on the billing account have the highest level of permissions and users in these roles get visibility into the cost and billing information for your entire account. Assign these roles only to users that need to view invoices, and track costs for your entire account like member of the finance and the accounting teams. For more information, see [Understand billing account](../understand/mca-overview.md#your-billing-account).
The following tables show what role you need to complete tasks in the context of the billing account.
The following tables show what role you need to complete tasks in the context of
|Task|Billing account owner|Billing account contributor|Billing account reader|
|---|---|---|---|
-|View existing permissions for billing account|✔|✔|✔|
+|View role assignments for billing account|✔|✔|✔|
|Give others permissions to view and manage the billing account|✔|✘|✘|
-|View billing account properties like company name, address, and more|✔|✔|✔|
+|View billing account properties like address, agreements and more|✔|✔|✔|
+|Update billing account properties like sold-to, display name, and more|✔|✔|✘|
### Manage billing profiles for billing account

|Task|Billing account owner|Billing account contributor|Billing account reader|
|---|---|---|---|
-|View all billing profiles in the account|✔|✔|✔|
+|View all billing profiles for the account|✔|✔|✔|
+|Create new billing profiles|✔|✔|✘|
### Manage invoices for billing account

|Task|Billing account owner|Billing account contributor|Billing account reader|
|---|---|---|---|
-|View all invoices in the account|✔|✔|✔|
-|Download invoices, Azure usage and charges files, price sheets and tax documents in the account|✔|✔|✔|
+|View all invoices for the account|✔|✔|✔|
+|Pay invoices with credit card|✔|✔|✘|
+|Download invoices, Azure usage files, price sheets, and tax documents|✔|✔|✔|
-### Manage invoice sections for billing account
+### Manage products for billing account
|Task|Billing account owner|Billing account contributor|Billing account reader|
|---|---|---|---|
-|View all invoice sections in the account|✔|✔|✔|
-
-### Manage transactions for billing account
-
-|Task|Billing account owner|Billing account contributor|Billing account reader|
-|||||
-|View all billing transactions for the account|✔|✔|✔|
-|View all products bought for the account|✔|✔|✔|
+|View all products purchased for the account|✔|✔|✔|
+|Manage billing for products like cancel, turn off auto renewal, and more|✔|✔|✘|
### Manage subscriptions for billing account

|Task|Billing account owner|Billing account contributor|Billing account reader|
|---|---|---|---|
-|View all Azure subscriptions in the billing account|✔|✔|✔|
+|View all Azure subscriptions created for the billing account|✔|✔|✔|
+|Create new Azure subscriptions|✔|✔|✘|
+|Cancel Azure subscriptions|✘|✘|✘|
## Billing profile roles and tasks
-A billing profile lets you manage your invoices and payment methods. A monthly invoice is generated for the Azure subscriptions and other products purchased using the billing profile. You use the payments methods to pay the invoice. For more information, see [Understand billing profiles](../understand/mca-overview.md#billing-profiles).
+Each billing account has at least one billing profile. Your first billing profile is set up when you sign up to use Azure. A monthly invoice is generated for the billing profile and contains all its associated charges from the prior month. You can set up more billing profiles based on your needs. Users with roles on a billing profile can view costs, set budgets, and manage and pay its invoices. Assign these roles to users who are responsible for managing budgets and paying invoices for the billing profile, like members of the business administration teams in your organization. For more information, see [Understand billing profiles](../understand/mca-overview.md#billing-profiles).
The following tables show what role you need to complete tasks in the context of the billing profile.

### Manage billing profile permissions, properties, and policies
-|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice Manager|Billing account owner|Billing account contributor|Billing account reader
+|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice manager|Billing account owner|Billing account contributor|Billing account reader
|---|---|---|---|---|---|---|---|
-|View existing permissions for the billing profile|✔|✔|✔|✔|✔|✔|✔|
-|Give others permissions to view and manage the billing profile|✔|✘|✘|✘|✘|✘|✘|
-|View billing profile properties like PO number, email invoice preference, and more|✔|✔|✔|✔|✔|✔|✔|
-|Update billing profile properties |✔|✔|✘|✘|✘|✘|✘|
-|View policies applied on the billing profile like enable Azure reservation purchases, enable Azure marketplace purchases, and more|✔|✔|✔|✔|✔|✔|✔|
-|Apply policies on the billing profile |✔|✔|✘|✘|✘|✘|✘|
-|Manage reservation orders |✔|✔|✘|✘|✘|✘|✘|
-|View reservation orders |✔|✔|✔|✘|✘|✘|✘|
+|View role assignments for the billing profile|✔|✔|✔|✔|✔|✔|✔|
+|Give others permissions to view and manage the billing profile|✔|✘|✘|✘|✔|✘|✘|
+|View billing profile properties like PO number, bill-to, and more|✔|✔|✔|✔|✔|✔|✔|
+|Update billing profile properties |✔|✔|✘|✘|✔|✔|✘|
+|View policies applied on the billing profile like Azure reservation purchases, Azure Marketplace purchases, and more|✔|✔|✔|✔|✔|✔|✔|
+|Apply policies on the billing profile |✔|✔|✘|✘|✔|✔|✘|
### Manage invoices for billing profile
-|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice Manager|Billing account owner|Billing account contributor|Billing account reader
+|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice manager|Billing account owner|Billing account contributor|Billing account reader
|||||||||
|View all the invoices for the billing profile|✔|✔|✔|✔|✔|✔|✔|
+|Pay invoices with credit card|✔|✔|✘|✔|✔|✘|✘|
|Download invoices, Azure usage and charges files, price sheets and tax documents for the billing profile|✔|✔|✔|✔|✔|✔|✔|

### Manage invoice sections for billing profile
-|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice Manager|Billing account owner|Billing account contributor|Billing account reader
+|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice manager|Billing account owner|Billing account contributor|Billing account reader
|||||||||
|View all the invoice sections for the billing profile|✔|✔|✔|✔|✔|✔|✔|
-|Create new invoice section for the billing profile|✔|✔|✘|✘|✘|✘|✘|
+|Create new invoice section for the billing profile|✔|✔|✘|✘|✔|✔|✘|
-### Manage transactions for billing profile
+### Manage products for billing profile
-|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice Manager|Billing account owner|Billing account contributor|Billing account reader
+|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice manager|Billing account owner|Billing account contributor|Billing account reader
|||||||||
-|View all billing transactions for the billing profile|✔|✔|✔|✔|✔|✔|✔|
+|View all products purchased for the billing profile|✔|✔|✔|✔|✔|✔|✔|
+|Manage billing for products like cancel, turn off auto renewal, and more|✔|✔|✘|✘|✔|✔|✘|
+|Change billing profile for the products|✔|✔|✘|✘|✔|✔|✘|
### Manage payment methods for billing profile
-|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice Manager|Billing account owner|Billing account contributor|Billing account reader
+|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice manager|Billing account owner|Billing account contributor|Billing account reader
|||||||||
|View payment methods for the billing profile|✔|✔|✔|✔|✔|✔|✔|
+|Manage payment methods such as replacing credit card, detaching credit card, and more|✔|✔|✘|✘|✔|✔|✘|
|Track Azure credits balance for the billing profile|✔|✔|✔|✔|✔|✔|✔|

### Manage subscriptions for billing profile
-|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice Manager|Billing account owner|Billing account contributor|Billing account reader
+|Task|Billing profile owner|Billing profile contributor|Billing profile reader|Invoice manager|Billing account owner|Billing account contributor|Billing account reader
|||||||||
|View all Azure subscriptions for the billing profile|✔|✔|✔|✔|✔|✔|✔|
+|Create new Azure subscriptions|✔|✔|✘|✘|✔|✔|✘|
+|Cancel Azure subscriptions|✘|✘|✘|✘|✘|✘|✘|
+|Change billing profile for the Azure subscriptions|✔|✔|✘|✘|✔|✔|✘|
## Invoice section roles and tasks
-An invoice section allows you to organize the costs on your invoice. You can create a section to organize your costs by department, development environment, or based on your organization's needs. Give others permission to create Azure subscriptions for the section. Any usage charges and purchases for the subscriptions then show on the section of the invoice. For more information, see [Understand invoice section](../understand/mca-overview.md#invoice-sections).
+Each billing profile contains one invoice section by default. You may create more invoice sections to group costs on the billing profile's invoice. Users with roles on an invoice section can control who creates Azure subscriptions and makes other purchases. Assign these roles to users who set up Azure environments for teams in your organization, like engineering leads and technical architects. For more information, see [Understand invoice section](../understand/mca-overview.md#invoice-sections).
The following tables show what role you need to complete tasks in the context of invoice sections.

### Manage invoice section permissions and properties
-|Tasks|Invoice section owner|Invoice section contributor|Invoice section reader|Azure subscription creator|Billing account owner|Billing account contributor|Billing account reader |
-|||||||||
-|View all permissions on invoice section|✔|✔|✔|✔|✔|✔|✔|
-|Give others permissions to view and manage the invoice section|✔|✘|✘|✘|✘|✘|✘|
-|View invoice section properties|✔|✔|✔|✔|✔|✔|✔|
-|Update invoice section properties|✔|✔|✘|✘|✘|✘|✘|
+|Tasks|Invoice section owner|Invoice section contributor|Invoice section reader|Azure subscription creator|Billing profile owner|Billing profile contributor|Billing profile reader |Invoice manager|Billing account owner|Billing account contributor|Billing account reader
+|||||||||||||
+|View role assignments for invoice section|✔|✔|✔|✘|✔|✔|✔|✔|✔|✔|✔|
+|Give others permissions to view and manage the invoice section|✔|✘|✘|✘|✔|✘|✘|✘|✔|✘|✘|
+|View invoice section properties|✔|✔|✔|✘|✔|✔|✔|✔|✔|✔|✔|
+|Update invoice section properties|✔|✔|✘|✘|✔|✔|✘|✘|✔|✔|✘|
### Manage products for invoice section
-|Tasks|Invoice section owner|Invoice section contributor|Invoice section reader|Azure subscription creator|Billing account owner|Billing account contributor|Billing account reader
-|||||||||
-|View all products bought in the invoice section|✔|✔|✔|✘|✔|✔|✔|
-|Manage billing for products for invoice section like cancel, turn off auto renewal, and more|✔|✔|✘|✘|✘|✘|✘|
-|Change invoice section for the products|✔|✔|✘|✘|✘|✘|✘|
+|Tasks|Invoice section owner|Invoice section contributor|Invoice section reader|Azure subscription creator|Billing profile owner|Billing profile contributor|Billing profile reader |Invoice manager|Billing account owner|Billing account contributor|Billing account reader
+|||||||||||||
+|View all products bought for the invoice section|✔|✔|✔|✘|✔|✔|✔|✔|✔|✔|✔|
+|Manage billing for products like cancel, turn off auto renewal, and more|✔|✔|✘|✘|✔|✔|✘|✘|✔|✔|✘|
+|Change invoice section for the products|✔|✔|✘|✘|✔|✔|✘|✘|✔|✔|✘|
### Manage subscriptions for invoice section
-|Tasks|Invoice section owner|Invoice section contributor|Invoice section reader|Azure subscription creator|Billing account owner|Billing account contributor|Billing account reader
-|||||||||
-|View all Azure subscriptions for invoice section|✔|✔|✔|✘|✔|✔|✔|
-|Change invoice section for the subscriptions|✔|✔|✘|✘|✘|✘|✘|
-|Request billing ownership of subscriptions from users in other billing accounts|✔|✔|✘|✘|✘|✘|✘|
+|Tasks|Invoice section owner|Invoice section contributor|Invoice section reader|Azure subscription creator|Billing profile owner|Billing profile contributor|Billing profile reader |Invoice manager|Billing account owner|Billing account contributor|Billing account reader
+|||||||||||||
+|View all Azure subscriptions for invoice section|✔|✔|✔|✘|✔|✔|✔|✔|✔|✔|✔|
+|Create Azure subscriptions|✔|✔|✘|✔|✔|✔|✘|✘|✔|✔|✘|
+|Cancel Azure subscriptions|✘|✘|✘|✘|✘|✘|✘|✘|✘|✘|✘|
+|Change invoice section for the Azure subscription|✔|✔|✘|✘|✔|✔|✘|✘|✔|✔|✘|
+|Request billing ownership of subscriptions from users in other billing accounts|✔|✔|✘|✘|✔|✔|✘|✘|✔|✔|✘|
## Subscription billing roles and tasks

The following table shows what role you need to complete tasks in the context of a subscription.
-|Tasks|Invoice section owner|Invoice section contributor|Invoice section reader|Azure subscription creator|
-||||||
-|Create Azure subscriptions|✔|✔|✘|✔|
-|Update cost center for the subscription|✔|✔|✘|✘|
-|Change invoice section for the subscription|✔|✔|✘|✘|
+|Tasks|Invoice section owner|Invoice section contributor|Invoice section reader|Azure subscription creator|Billing profile owner|Billing profile contributor|Billing profile reader |Invoice manager|Billing account owner|Billing account contributor|Billing account reader
+|||||||||||||
+|Create subscriptions|✔|✔|✘|✔|✔|✔|✘|✘|✔|✔|✘|
+|Update cost center for the subscription|✔|✔|✘|✘|✔|✔|✘|✘|✔|✔|✘|
+|Change invoice section for the subscription|✔|✔|✘|✘|✔|✔|✘|✘|✔|✔|✘|
+|Change billing profile for the subscription|✘|✘|✘|✘|✔|✔|✘|✘|✔|✔|✘|
+|Cancel Azure subscriptions|✘|✘|✘|✘|✘|✘|✘|✘|✘|✘|✘|
## Manage billing roles in the Azure portal
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/subscription-transfer.md
If you're an Enterprise Agreement (EA) customer, your enterprise administrators
Only the billing administrator of an account can transfer ownership of a subscription.
-## Determine account billing administrator
+## Determine if you are a billing administrator
<a name="whoisaa"></a>
-The billing administrator is the person who has permission to manage billing for an account. They're authorized to access billing on the [Azure portal](https://portal.azure.com) and do various billing tasks like create subscriptions, view and pay invoices, or update payment methods.
+To do the transfer, first locate the person who has permission to manage billing for the account. They're authorized to access billing on the [Azure portal](https://portal.azure.com) and do various billing tasks like create subscriptions, view and pay invoices, or update payment methods.
-To identify accounts for which you're a billing administrator, visit the [Cost Management + Billing page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/Overview). Then select **All billing scopes** from the left-hand pane. The subscriptions page shows all the subscriptions where you're a billing administrator.
+### Check if you have billing access
-If you're not sure who the account administrator is for a subscription, visit the [Subscriptions page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). Then select the subscription you want to check, and then look under **Settings**. Select **Properties** and the account administrator of the subscription is shown in the **Account Admin** box.
+1. To identify accounts for which you have billing access, visit the [Cost Management + Billing page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/Overview).
+2. Select **Billing accounts** from the left-hand menu.
+
+3. The **Billing scope** listing page shows all the subscriptions where you have access to the billing details.
+
+### Check by subscription
+
+1. If you're not sure who the account administrator is for a subscription, visit the [Subscriptions page in Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade).
+
+2. Select the subscription you want to check.
+
+3. Under the **Settings** heading, select **Properties**. The **Account Admin** box shows the account administrator of the subscription.
+
+ > [!NOTE]
+ > Not all subscription types show the Properties.
## Supported subscription types
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-blob-storage.md
These properties are supported for an Azure Blob storage linked service:
|: |: |: | | type | The **type** property must be set to **AzureBlobStorage**. | Yes | | serviceEndpoint | Specify the Azure Blob storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes |
-| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/> When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
+| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using the Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when the account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
| servicePrincipalId | Specify the application's client ID. | Yes | | servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes | | tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering over the upper-right corner of the Azure portal. | Yes |
These properties are supported for an Azure Blob storage linked service:
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | >[!NOTE]
->If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), service principal authentication is not supported in Data Flow.
+>
+>- If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), service principal authentication is not supported in Data Flow.
+>- If you access the blob storage through a private endpoint using Data Flow, note that when service principal authentication is used, Data Flow connects to the ADLS Gen2 endpoint instead of the Blob endpoint. Make sure you create the corresponding private endpoint in ADF to enable access.
>[!NOTE] >Service principal authentication is supported only by the "AzureBlobStorage" type linked service, not the previous "AzureStorage" type linked service.
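Putting the service principal properties above together, a linked service definition might look like the following sketch. The account name, application ID, key, tenant, and integration runtime name are placeholders you must supply yourself:

```json
{
    "name": "AzureBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "serviceEndpoint": "https://<accountName>.blob.core.windows.net/",
            "accountKind": "StorageV2",
            "servicePrincipalId": "<application (client) ID>",
            "servicePrincipalKey": {
                "type": "SecureString",
                "value": "<application key>"
            },
            "tenant": "<tenant name or tenant ID>"
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```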
These properties are supported for an Azure Blob storage linked service:
|: |: |: | | type | The **type** property must be set to **AzureBlobStorage**. | Yes | | serviceEndpoint | Specify the Azure Blob storage service endpoint with the pattern of `https://<accountName>.blob.core.windows.net/`. | Yes |
-| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/> When using Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when account kind as empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
+| accountKind | Specify the kind of your storage account. Allowed values are: **Storage** (general purpose v1), **StorageV2** (general purpose v2), **BlobStorage**, or **BlockBlobStorage**. <br/><br/>When using the Azure Blob linked service in data flow, managed identity or service principal authentication is not supported when the account kind is empty or "Storage". Specify the proper account kind, choose a different authentication, or upgrade your storage account to general purpose v2. | No |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No | > [!NOTE]
-> If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), managed identity authentication is not supported in Data Flow.
+>
+> - If your blob account enables [soft delete](../storage/blobs/soft-delete-blob-overview.md), managed identity authentication is not supported in Data Flow.
+> - If you access the blob storage through a private endpoint using Data Flow, note that when managed identity authentication is used, Data Flow connects to the ADLS Gen2 endpoint instead of the Blob endpoint. Make sure you create the corresponding private endpoint in ADF to enable access.
> [!NOTE] > Managed identities for Azure resource authentication are supported only by the "AzureBlobStorage" type linked service, not the previous "AzureStorage" type linked service.
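By contrast, a linked service using managed identity needs no credential properties, because the data factory's own identity is used to authenticate. A minimal sketch, with the account name as a placeholder:

```json
{
    "name": "AzureBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "serviceEndpoint": "https://<accountName>.blob.core.windows.net/",
            "accountKind": "StorageV2"
        }
    }
}
```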
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-mysql.md
Title: Copy data to and from Azure Database for MySQL
-description: Learn how to copy data to and from Azure Database for MySQL by using a copy activity in an Azure Data Factory pipeline.
+ Title: Copy and transform data in Azure Database for MySQL
+description: Learn how to copy and transform data in Azure Database for MySQL by using Azure Data Factory.
Previously updated : 08/25/2019 Last updated : 03/10/2021
-# Copy data to and from Azure Database for MySQL using Azure Data Factory
+# Copy and transform data in Azure Database for MySQL by using Azure Data Factory
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use the Copy Activity in Azure Data Factory to copy data from Azure Database for MySQL. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
+This article outlines how to use Copy Activity in Azure Data Factory to copy data from and to Azure Database for MySQL, and use Data Flow to transform data in Azure Database for MySQL. To learn about Azure Data Factory, read the [introductory article](introduction.md).
This connector is specialized for [Azure Database for MySQL service](../mysql/overview.md). To copy data from generic MySQL database located on-premises or in the cloud, use [MySQL connector](connector-mysql.md).
This connector is specialized for [Azure Database for MySQL service](../mysql/ov
This Azure Database for MySQL connector is supported for the following activities: - [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)
+- [Mapping data flow](concepts-data-flow-overview.md)
- [Lookup activity](control-flow-lookup-activity.md)
-You can copy data from Azure Database for MySQL to any supported sink data store. Or, you can copy data from any supported source data store to Azure Database for MySQL. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
-
-Azure Data Factory provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
- ## Getting started [!INCLUDE [data-factory-v2-connector-get-started](../../includes/data-factory-v2-connector-get-started.md)]
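Before using the copy activity properties below, you need a linked service to the database. An Azure Database for MySQL linked service is typically defined with a connection string, roughly as in this sketch; the server, database, and credential values are placeholders, and the exact connection string options may vary for your setup:

```json
{
    "name": "AzureDatabaseForMySQLLinkedService",
    "properties": {
        "type": "AzureMySql",
        "typeProperties": {
            "connectionString": "Server=<server>.mysql.database.azure.com;Port=3306;Database=<database>;UID=<username>;PWD=<password>"
        }
    }
}
```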
To copy data to Azure Database for MySQL, the following properties are supported
] ```
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read and write to tables from Azure Database for MySQL. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows. You can choose to use an Azure Database for MySQL dataset or an [inline dataset](data-flow-source.md#inline-datasets) as source and sink type.
+
+### Source transformation
+
+The below table lists the properties supported by Azure Database for MySQL source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - |*(for inline dataset only)*<br>tableName |
+| Query | If you select Query as input, specify a SQL query to fetch data from source, which overrides any table you specify in dataset. Using queries is a great way to reduce rows for testing or lookups.<br><br>**Order By** clause is not supported, but you can set a full SELECT FROM statement. You can also use user-defined table functions. **select * from udfGetData()** is a UDF in SQL that returns a table that you can use in data flow.<br>Query example: `select * from mytable where customerId > 1000 and customerId < 2000` or `select * from "MyTable"`.| No | String | query |
+| Batch size | Specify a batch size to chunk large data into batches. | No | Integer | batchSize |
+| Isolation Level | Choose one of the following isolation levels:<br>- Read Committed<br>- Read Uncommitted (default)<br>- Repeatable Read<br>- Serializable<br>- None (ignore isolation level) | No | <small>READ_COMMITTED<br/>READ_UNCOMMITTED<br/>REPEATABLE_READ<br/>SERIALIZABLE<br/>NONE</small> |isolationLevel |
+
+#### Azure Database for MySQL source script example
+
+When you use Azure Database for MySQL as source type, the associated data flow script is:
+
+```
+source(allowSchemaDrift: true,
+ validateSchema: false,
+ isolationLevel: 'READ_UNCOMMITTED',
+ query: 'select * from mytable',
+ format: 'query') ~> AzureMySQLSource
+```
+
+### Sink transformation
+
+The below table lists the properties supported by Azure Database for MySQL sink. You can edit these properties in the **Sink options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Update method | Specify what operations are allowed on your database destination. The default is to only allow inserts.<br>To update, upsert, or delete rows, an [Alter row transformation](data-flow-alter-row.md) is required to tag rows for those actions. | Yes | `true` or `false` | deletable <br/>insertable <br/>updateable <br/>upsertable |
+| Key columns | For updates, upserts and deletes, key column(s) must be set to determine which row to alter.<br>The column name that you pick as the key will be used as part of the subsequent update, upsert, delete. Therefore, you must pick a column that exists in the Sink mapping. | No | Array | keys |
+| Skip writing key columns | If you wish to not write the value to the key column, select "Skip writing key columns". | No | `true` or `false` | skipKeyWrites |
+| Table action |Determines whether to recreate or remove all rows from the destination table prior to writing.<br>- **None**: No action will be done to the table.<br>- **Recreate**: The table will get dropped and recreated. Required if creating a new table dynamically.<br>- **Truncate**: All rows from the target table will get removed. | No | `true` or `false` | recreate<br/>truncate |
+| Batch size | Specify how many rows are being written in each batch. Larger batch sizes improve compression and memory optimization, but risk out of memory exceptions when caching data. | No | Integer | batchSize |
+| Pre and Post SQL scripts | Specify multi-line SQL scripts that will execute before (pre-processing) and after (post-processing) data is written to your Sink database. | No | String | preSQLs<br>postSQLs |
+
+#### Azure Database for MySQL sink script example
+
+When you use Azure Database for MySQL as sink type, the associated data flow script is:
+
+```
+IncomingStream sink(allowSchemaDrift: true,
+ validateSchema: false,
+ deletable:false,
+ insertable:true,
+ updateable:true,
+ upsertable:true,
+ keys:['keyColumn'],
+ format: 'table',
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> AzureMySQLSink
+```
+ ## Lookup activity properties To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-overview.md
description: Learn the supported connectors in Data Factory.
Previously updated : 09/28/2020 Last updated : 03/10/2021
data-factory Continuous Integration Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-deployment.md
Previously updated : 02/18/2021 Last updated : 03/11/2021 # Continuous integration and delivery in Azure Data Factory
If your development factory has an associated git repository, you can override t
* You use automated CI/CD and you want to change some properties during Resource Manager deployment, but the properties aren't parameterized by default. * Your factory is so large that the default Resource Manager template is invalid because it has more than the maximum allowed parameters (256).
- To handle custom parameter 256 limit, there are 3 options:
+ To handle the custom parameter 256 limit, there are three options:
* Use the custom parameter file and remove properties that don't need parameterization, i.e., properties that can keep a default value and hence decrease the parameter count. * Refactor logic in the dataflow to reduce parameters, for example, pipeline parameters all have the same value, you can just use global parameters instead.
Here's an explanation of how the preceding template is constructed, broken down
* Although type-specific customization is available for datasets, you can provide configuration without explicitly having a \*-level configuration. In the preceding example, all dataset properties under `typeProperties` are parameterized.
+> [!NOTE]
+> **Azure alerts and metrics**, if configured for a pipeline, are not currently supported as parameters for ARM deployments. To reapply the alerts and metrics in the new environment, follow [Data Factory monitoring, alerts and metrics](https://docs.microsoft.com/azure/data-factory/monitor-using-azure-monitor#data-factory-metrics).
+>
+ ### Default parameterization template Below is the current default parameterization template. If you need to add only a few parameters, editing this template directly might be a good idea because you won't lose the existing parameterization structure.
If you're using Git integration with your data factory and have a CI/CD pipeline
- You can't currently host projects on Bitbucket.
+- You can't currently export and import alerts and metrics as parameters.
+ ## <a name="script"></a> Sample pre- and post-deployment script The following sample script can be used to stop triggers before deployment and restart them afterward. The script also includes code to delete resources that have been removed. Save the script in an Azure DevOps git repository and reference it via an Azure PowerShell task using version 4.*.
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-sink.md
Previously updated : 12/08/2020 Last updated : 03/10/2021 # Sink transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | ✓/- | | [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties) | ✓/- <br>✓/- <br>✓/- <br>✓/✓<br>✓/- | | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties) <br/>[Common Data Model](format-common-data-model.md#sink-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties) <br>[Delta](format-delta.md) <br>[JSON](format-json.md#mapping-data-flow-properties) <br/>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties) | ✓/- <br>-/✓ <br>✓/- <br>-/✓ <br>✓/-<br>✓/✓ <br>✓/- |
+| [Azure Database for MySQL](connector-azure-database-for-mysql.md) | | ✓/✓ |
| [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md) | | ✓/✓ | | [Azure SQL Database](connector-azure-sql-database.md#mapping-data-flow-properties) | | ✓/- | | [Azure SQL Managed Instance (preview)](connector-azure-sql-managed-instance.md#mapping-data-flow-properties) | | ✓/- |
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-source.md
Previously updated : 02/23/2021 Last updated : 03/10/2021 # Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Azure Cosmos DB (SQL API)](connector-azure-cosmos-db.md#mapping-data-flow-properties) | | ✓/- | | [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br>✓/✓<br>✓/✓<br/>✓/-<br>✓/✓<br/>✓/-<br>✓/✓ | | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#mapping-data-flow-properties) | [Avro](format-avro.md#mapping-data-flow-properties)<br>[Common Data Model](format-common-data-model.md#source-properties)<br>[Delimited text](format-delimited-text.md#mapping-data-flow-properties)<br>[Delta](format-delta.md)<br>[Excel](format-excel.md#mapping-data-flow-properties)<br>[JSON](format-json.md#mapping-data-flow-properties)<br>[ORC](format-orc.md#mapping-data-flow-properties)<br/>[Parquet](format-parquet.md#mapping-data-flow-properties)<br>[XML](format-xml.md#mapping-data-flow-properties) | ✓/✓<br/>-/✓<br>✓/✓<br>✓/✓<br>✓/✓<br>✓/-<br/>✓/✓<br/>✓/-<br>✓/✓ |
+| [Azure Database for MySQL](connector-azure-database-for-mysql.md) | | ✓/✓ |
| [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md) | | ✓/✓ | | [Azure SQL Database](connector-azure-sql-database.md#mapping-data-flow-properties) | | ✓/- | | [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#mapping-data-flow-properties) | | ✓/- |
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-expression-language-functions.md
In this document, we will primarily focus on learning fundamental concepts with
## Azure data factory UI and parameters
-If you are new to Azure data factory parameter usage in ADF user interface, please review [Data factory UI for linked services with parameters](https://docs.microsoft.comazure/data-factory/parameterize-linked-services#data-factory-ui) and [Data factory UI for metadata driven pipeline with parameters](https://docs.microsoft.com/azure/data-factory/how-to-use-trigger-parameterization#data-factory-ui) for visual explanation.
+If you are new to Azure data factory parameter usage in ADF user interface, please review [Data factory UI for linked services with parameters](https://docs.microsoft.com/azure/data-factory/parameterize-linked-services#data-factory-ui) and [Data factory UI for metadata driven pipeline with parameters](https://docs.microsoft.com/azure/data-factory/how-to-use-trigger-parameterization#data-factory-ui) for visual explanation.
## Parameter and expression concepts
data-factory Tutorial Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow.md
Previously updated : 11/09/2019 Last updated : 03/11/2021 # Transform data using mapping data flows If you're new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md). In this tutorial, you'll use the Azure Data Factory user interface (UX) to create a pipeline that copies and transforms data from an Azure Data Lake Storage (ADLS) Gen2 source to an ADLS Gen2 sink using mapping data flow. The configuration pattern in this tutorial can be expanded upon when transforming data using mapping data flow
+ >[!NOTE]
+ >This tutorial is meant for mapping data flows in general. Data flows are available both in Azure Data Factory and Synapse Pipelines. If you are new to data flows in Azure Synapse Pipelines, please follow [Data Flow using Azure Synapse Pipelines](https://docs.microsoft.com/azure/synapse-analytics/concepts-data-flow-overview)
+
In this tutorial, you do the following steps: > [!div class="checklist"]
databox-online Azure Stack Edge Gpu Deploy Add Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-add-storage-accounts.md
Previously updated : 02/22/2021 Last updated : 03/12/2021 Customer intent: As an IT admin, I need to understand how to add and connect to storage accounts on Azure Stack Edge Pro so I can use it to transfer data to Azure.
databox-online Azure Stack Edge Gpu Manage Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-storage-accounts.md
Previously updated : 02/18/2021 Last updated : 03/12/2021 # Use the Azure portal to manage Edge storage accounts on your Azure Stack Edge Pro
defender-for-iot How To Identify Required Appliances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-identify-required-appliances.md
This section provides an overview of physical sensor models that are available.
| Deployment type | Corporate | Enterprise | SMB rack mount | SMB ruggedized |
|--|--|--|--|--|
-| Image | :::image type="content" source="media/how-to-prepare-your-network/corporate-hpe-proliant-dl360-v2.png" alt-text="The corporate-level model."::: | :::image type="content" source="media/how-to-prepare-your-network/enterprise-and-smb-hpe-proliant-dl20-v2.png" alt-text="The enterprise-level model."::: | :::image type="content" source="media/how-to-prepare-your-network/enterprise-and-smb-hpe-proliant-dl20-v2.png" alt-text="The SMB-level model."::: |
+| Image | :::image type="content" source="media/how-to-prepare-your-network/corporate-hpe-proliant-dl360-v2.png" alt-text="The corporate-level model."::: | :::image type="content" source="media/how-to-prepare-your-network/enterprise-and-smb-hpe-proliant-dl20-v2.png" alt-text="The enterprise-level model."::: | :::image type="content" source="media/how-to-prepare-your-network/enterprise-and-smb-hpe-proliant-dl20-v2.png" alt-text="The SMB-level model."::: | :::image type="content" source="media/how-to-prepare-your-network/office-ruggedized.png" alt-text="The SMB-ruggedized level model."::: |
| Model | HPE ProLiant DL360 | HPE ProLiant DL20 | HPE ProLiant DL20 | HPE EL300 |
| Monitoring ports | Up to 15 RJ45 or 8 OPT | Up to 8 RJ45 or 6 OPT | 4 RJ45 | Up to 5 |
| Maximum bandwidth [1](#anchortext) | 3 Gb/sec | 1 Gb/sec | 200 Mb/sec | 100 Mb/sec |
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-ingest-iot-hub-data.md
Before continuing with this example, you'll need to set up the following resourc
### Example telemetry scenario This how-to outlines how to send messages from IoT Hub to Azure Digital Twins, using a function in Azure. There are many possible configurations and matching strategies you can use for sending messages, but the example for this article contains the following parts:
-* A thermometer device in IoT Hub, with a known device ID
+* A thermostat device in IoT Hub, with a known device ID
* A digital twin to represent the device, with a matching ID > [!NOTE]
Whenever a temperature telemetry event is sent by the thermostat device, a funct
## Add a model and twin
-You can add/upload a model using the CLI command below, and then create a twin using this model that will be updated with information from IoT Hub.
+In this section, you'll set up a [digital twin](concepts-twins-graph.md) in Azure Digital Twins that will represent the thermostat device and will be updated with information from IoT Hub.
+
+To create a thermostat-type twin, you'll first need to upload the thermostat [model](concepts-models.md) to your instance, which describes the properties of a thermostat and will be used later to create the twin.
The model looks like this: :::code language="json" source="~/digital-twins-docs-samples/models/Thermostat.json":::
-To **upload this model to your twins instance**, open the Azure CLI and run the following command:
+To **upload this model to your twins instance**, run the following Azure CLI command, which uploads the above model as inline JSON. You can run the command in [Azure Cloud Shell](/cloud-shell/overview.md) in your browser, or on your machine if you have the CLI [installed locally](/cli/azure/install-azure-cli.md).
```azurecli-interactive az dt model create --models '{ "@id": "dtmi:contosocom:DigitalTwins:Thermostat;1", "@type": "Interface", "@context": "dtmi:dtdl:context;2", "contents": [ { "@type": "Property", "name": "Temperature", "schema": "double" } ]}' -n {digital_twins_instance_name} ```
-You'll then need to **create one twin using this model**. Use the following command to create a twin and set 0.0 as an initial temperature value.
+You'll then need to **create one twin using this model**. Use the following command to create a thermostat twin named **thermostat67**, and set 0.0 as an initial temperature value.
```azurecli-interactive az dt twin create --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties '{"Temperature": 0.0,}' --dt-name {digital_twins_instance_name} ```
-Output of a successful twin create command should look like this:
+>[!NOTE]
+> If you are using Cloud Shell in the PowerShell environment, you may need to escape the quotation mark characters on the inline JSON fields for their values to be parsed correctly. Here are the commands to upload the model and create the twin with this modification:
+>
+> Upload model:
+> ```azurecli-interactive
+> az dt model create --models '{ \"@id\": \"dtmi:contosocom:DigitalTwins:Thermostat;1\", \"@type\": \"Interface\", \"@context\": \"dtmi:dtdl:context;2\", \"contents\": [ { \"@type\": \"Property\", \"name\": \"Temperature\", \"schema\": \"double\" } ]}' -n {digital_twins_instance_name}
+> ```
+>
+> Create twin:
+> ```azurecli-interactive
+> az dt twin create --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties '{\"Temperature\": 0.0,}' --dt-name {digital_twins_instance_name}
+> ```
+
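Rather than escaping the quotation marks by hand, you can generate the inline JSON from a script. The following is an optional, hypothetical helper (not part of the tutorial): a short Python sketch that serializes the thermostat model and produces the backslash-escaped form used in the PowerShell-environment commands above.

```python
import json

# The thermostat model from this article, as a Python dict.
model = {
    "@id": "dtmi:contosocom:DigitalTwins:Thermostat;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "contents": [
        {"@type": "Property", "name": "Temperature", "schema": "double"}
    ],
}

inline = json.dumps(model)            # plain inline JSON, usable as-is in bash
escaped = inline.replace('"', '\\"')  # backslash-escaped form for PowerShell
print(escaped)
```

Paste the printed value into the `--models` argument in place of the hand-escaped JSON.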
+When the twin is created successfully, the CLI output from the command should look something like this:
```json { "$dtId": "thermostat67",
Output of a successful twin create command should look like this:
## Create a function
-In this section, you'll create an Azure function to access Azure Digital Twins and update twins based on IoT telemetry events from IoT Hub. Follow the steps below to create and publish the function.
+In this section, you'll create an Azure function to access Azure Digital Twins and update twins based on IoT telemetry events that it receives. Follow the steps below to create and publish the function.
#### Step 1: Create a function app project
First, create a new function app project in Visual Studio. For instructions on h
#### Step 2: Fill in function code
+Add the following packages to your project:
+* [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core/)
+* [Azure.Identity](https://www.nuget.org/packages/Azure.Identity/)
+* [Microsoft.Azure.WebJobs.Extensions.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/)
+ Rename the *Function1.cs* sample function that Visual Studio has generated with the new project to *IoTHubtoTwins.cs*. Replace the code in the file with the following code: :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/IoTHubToTwins.cs":::
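For orientation, here is a minimal Python sketch (an illustration only, not the function itself) of the parsing the C# function performs: a device telemetry event arrives with the device ID in the system properties and a base64-encoded body, from which the temperature is read and turned into a JSON Patch for the twin. The event shape below is a hypothetical example.

```python
import base64
import json

# Hypothetical Event Grid device-telemetry event, shaped like what the function receives.
event = {
    "data": {
        "systemProperties": {"iothub-connection-device-id": "thermostat67"},
        "body": base64.b64encode(json.dumps({"Temperature": 21.5}).encode()).decode(),
    }
}

# Pull out the device ID and decode the base64 telemetry payload.
device_id = event["data"]["systemProperties"]["iothub-connection-device-id"]
telemetry = json.loads(base64.b64decode(event["data"]["body"]))

# Build a JSON Patch that would update the matching twin's Temperature property.
patch = [{"op": "replace", "path": "/Temperature", "value": telemetry["Temperature"]}]
print(device_id, patch)
```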
Next, **assign an access role** for the function and **configure the application
## Connect your function to IoT Hub
-Set up an event destination for hub data.
+In this section, you'll set up your function as an event destination for the IoT hub device data. This will ensure that the data from the thermostat device in IoT Hub will be sent to the Azure function for processing.
+ In the [Azure portal](https://portal.azure.com/), navigate to your IoT Hub instance that you created in the [*Prerequisites*](#prerequisites) section. Under **Events**, create a subscription for your function. :::image type="content" source="media/how-to-ingest-iot-hub-data/add-event-subscription.png" alt-text="Screenshot of the Azure portal that shows Adding an event subscription."::: In the **Create Event Subscription** page, fill the fields as follows:
- 1. Under **Name**, name the subscription what you would like.
- 2. Under **Event Schema**, choose _Event Grid Schema_.
- 3. Under **Event Types**, choose the _Device Telemetry_ checkbox and uncheck other event types.
- 4. Under **Endpoint Type**, Select _Azure Function_.
- 5. Under **Endpoint**, Choose _Select an endpoint_ link to create an endpoint.
+ 1. For **Name**, choose whatever name you want for the event subscription.
+ 2. For **Event Schema**, choose _Event Grid Schema_.
+ 3. For **System Topic Name**, choose whatever name you want.
+ 4. For **Filter to Event Types**, choose the _Device Telemetry_ checkbox and uncheck other event types.
+ 5. For **Endpoint Type**, select _Azure Function_.
+ 6. For **Endpoint**, use the _Select an endpoint_ link to choose which Azure function to use for the endpoint.
:::image type="content" source="media/how-to-ingest-iot-hub-data/create-event-subscription.png" alt-text="Screenshot of the Azure portal to create the event subscription details":::
-In the _Select Azure Function_ page that opens up, verify the below details.
+In the _Select Azure Function_ page that opens up, verify or fill in the below details.
1. **Subscription**: Your Azure subscription.
2. **Resource group**: Your resource group.
3. **Function app**: Your function app name.
4. **Slot**: _Production_.
- 5. **Function**: Select your function from the dropdown.
+ 5. **Function**: Select the function from earlier, *IoTHubtoTwins*, from the dropdown.
-Save your details by selecting _Confirm Selection_ button.
+Save your details with the _Confirm Selection_ button.
:::image type="content" source="media/how-to-ingest-iot-hub-data/select-azure-function.png" alt-text="Screenshot of the Azure portal to select the function.":::
-Select _Create_ button to create event subscription.
+Select the _Create_ button to create the event subscription.
## Send simulated IoT data
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
The result of calling `object result = await client.GetDigitalTwinAsync("my-moon
} ```
-The defined properties of the digital twin are returned as top-level properties on the digital twin. Metadata or system information that is not part of the DTDL definition is returned with a `$` prefix. Metadata properties include:
-* The ID of the digital twin in this Azure Digital Twins instance, as `$dtId`.
-* `$etag`, a standard HTTP field assigned by the web server.
-* Other properties in a `$metadata` section. These include:
- - The DTMI of the model of the digital twin.
- - Synchronization status for each writeable property. This is most useful for devices, where it's possible that the service and the device have diverging statuses (for example, when a device is offline). Currently, this property only applies to physical devices connected to IoT Hub. With the data in the metadata section, it is possible to understand the full status of a property, as well as the last modified timestamps. For more information about sync status, see [this IoT Hub tutorial](../iot-hub/tutorial-device-twins.md) on synchronizing device state.
- - Service-specific metadata, like from IoT Hub or Azure Digital Twins.
+The defined properties of the digital twin are returned as top-level properties on the digital twin. Metadata or system information that is not part of the DTDL definition is returned with a `$` prefix. Metadata properties include the following values:
+* `$dtId`: The ID of the digital twin in this Azure Digital Twins instance
+* `$etag`: A standard HTTP field assigned by the web server. This is updated to a new value every time the twin is updated, which can be useful to determine whether the twin's data has been updated on the server since a previous check. It can also be used in HTTP headers in these ways:
+ - with read operations to avoid fetching content that hasn't changed
+ - with write operations to support optimistic concurrency
+* `$metadata`: A set of other properties, including:
+ - The DTMI of the model of the digital twin.
+ - Synchronization status for each writeable property. This is most useful for devices, where it's possible that the service and the device have diverging statuses (for example, when a device is offline). Currently, this property only applies to physical devices connected to IoT Hub. With the data in the metadata section, it is possible to understand the full status of a property, as well as the last modified timestamps. For more information about sync status, see [this IoT Hub tutorial](../iot-hub/tutorial-device-twins.md) on synchronizing device state.
+ - Service-specific metadata, like from IoT Hub or Azure Digital Twins.
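The split between defined properties and `$`-prefixed system fields can be seen by filtering the returned JSON directly. Here is a small Python sketch, using a hypothetical response payload modeled on the example output above:

```python
import json

# Hypothetical twin response, modeled on the example output in this article.
twin_json = """
{
  "$dtId": "thermostat67",
  "$etag": "W/\\"4215f6e1\\"",
  "$metadata": {
    "$model": "dtmi:contosocom:DigitalTwins:Thermostat;1"
  },
  "Temperature": 0.0
}
"""

twin = json.loads(twin_json)

# $-prefixed keys are system/metadata fields; everything else is a defined DTDL property.
system_fields = {k: v for k, v in twin.items() if k.startswith("$")}
properties = {k: v for k, v in twin.items() if not k.startswith("$")}

print(properties)                            # {'Temperature': 0.0}
print(system_fields["$metadata"]["$model"])  # the twin's model DTMI
```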
You can read more about the serialization helper classes like `BasicDigitalTwin` in [*How-to: Use the Azure Digital Twins APIs and SDKs*](how-to-use-apis-sdks.md).
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-overview.md
Title: Azure Front Door | Microsoft Docs description: This article provides an overview of Azure Front Door. Previously updated : 09/27/2020 Last updated : 03/09/2021 # customer intent: As an IT admin, I want to learn about Front Door and what I can use it for. # What is Azure Front Door?
+> [!IMPORTANT]
+> This documentation is for Azure Front Door. Looking for information on Azure Front Door Standard/Premium (Preview)? View [here](/standard-premium/overview.md).
+ Azure Front Door is a global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. With Front Door, you can transform your global consumer and enterprise applications into robust, high-performing personalized modern applications with contents that reach a global audience through Azure. <p align="center">
Subscribe to the RSS feed and view the latest Azure Front Door feature updates o
## Next steps

- Learn how to [create a Front Door](quickstart-create-front-door.md).
-- Learn [how Front Door works](front-door-routing-architecture.md).
+- Learn [how Front Door works](front-door-routing-architecture.md).
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/cis-azure-1-1-0.md
Title: CIS Microsoft Azure Foundations Benchmark blueprint sample
-description: Overview of the CIS Microsoft Azure Foundations Benchmark blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 01/27/2021
+ Title: CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample
+description: Overview of the CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample. This blueprint sample helps customers assess specific controls.
Last updated : 03/11/2021
-# CIS Microsoft Azure Foundations Benchmark blueprint sample
+# CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample
-The CIS Microsoft Azure Foundations Benchmark blueprint sample provides governance guard-rails using
-[Azure Policy](../../policy/overview.md) that help you assess specific CIS Microsoft Azure
-Foundations Benchmark recommendations. This blueprint helps customers deploy a core set of policies
-for any Azure-deployed architecture that must implement CIS Microsoft Azure Foundations Benchmark
-recommendations.
+The CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample provides governance
+guard-rails using [Azure Policy](../../policy/overview.md) that help you assess specific CIS
+Microsoft Azure Foundations Benchmark recommendations. This blueprint helps customers deploy a core
+set of policies for any Azure-deployed architecture that must implement CIS Microsoft Azure
+Foundations Benchmark v1.1.0 recommendations.
## Recommendation mapping The [Azure Policy recommendation mapping](../../policy/samples/cis-azure-1-1-0.md) provides details on policy definitions included within this blueprint and how these policy definitions map to the
-**compliance domains** and **controls** in CIS Microsoft Azure Foundations Benchmark v1.1.0. When
-assigned to an architecture, resources are evaluated by Azure Policy for non-compliance with
-assigned policy definitions. For more information, see [Azure Policy](../../policy/overview.md).
+**recommendations** in CIS Microsoft Azure Foundations Benchmark v1.1.0. When assigned to an
+architecture, resources are evaluated by Azure Policy for non-compliance with assigned policy
+definitions. For more information, see [Azure Policy](../../policy/overview.md).
## Deploy
-To deploy the Azure Blueprints CIS Microsoft Azure Foundations Benchmark blueprint sample, the
-following steps must be taken:
+To deploy the Azure Blueprints CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample,
+the following steps must be taken:
> [!div class="checklist"] > - Create a new blueprint from the sample
sample as a starter.
Your copy of the blueprint sample has now been created in your environment. It's created in **Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with CIS Microsoft Azure Foundations Benchmark recommendations.
+away from alignment with CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations.
1. Select **All services** in the left pane. Search for and select **Blueprints**.
The following table provides a list of the blueprint artifact parameters:
|Artifact name|Artifact type|Parameter name|Description| |-|-|-|-|
-|Audit CIS Microsoft Azure Foundations Benchmark 1.1.0 recommendations and deploy specific supporting VM Extensions|Policy assignment|List of regions where Network Watcher should be enabled|A semicolon-separated list of regions. To see a complete list of regions use Get-AzLocation. Ex: eastus; eastus2|
-|Audit CIS Microsoft Azure Foundations Benchmark 1.1.0 recommendations and deploy specific supporting VM Extensions|Policy assignment|List of virtual machine extensions that are approved for use|A semicolon-separated list of extensions. To see a complete list of virtual machine extensions, use Get-AzVMExtensionImage. Ex: AzureDiskEncryption; IaaSAntimalware|
+|Audit CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations and deploy specific supporting VM Extensions|Policy assignment|List of regions where Network Watcher should be enabled|A semicolon-separated list of regions. To see a complete list of regions use Get-AzLocation. Ex: eastus; eastus2|
+|Audit CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations and deploy specific supporting VM Extensions|Policy assignment|List of virtual machine extensions that are approved for use|A semicolon-separated list of extensions. To see a complete list of virtual machine extensions, use Get-AzVMExtensionImage. Ex: AzureDiskEncryption; IaaSAntimalware|
## Next steps
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/cis-azure-1-3-0.md
+
+ Title: CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample
+description: Overview of the CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample. This blueprint sample helps customers assess specific controls.
Last updated : 03/11/2021
+# CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample
+
+The CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample provides governance
+guard-rails using [Azure Policy](../../policy/overview.md) that help you assess specific CIS
+Microsoft Azure Foundations Benchmark v1.3.0 recommendations. This blueprint helps customers deploy
+a core set of policies for any Azure-deployed architecture that must implement CIS Microsoft Azure
+Foundations Benchmark v1.3.0 recommendations.
+
+## Recommendation mapping
+
+The [Azure Policy recommendation mapping](../../policy/samples/cis-azure-1-3-0.md) provides details
+on policy definitions included within this blueprint and how these policy definitions map to the
+**recommendations** in CIS Microsoft Azure Foundations Benchmark v1.3.0. When assigned to an
+architecture, resources are evaluated by Azure Policy for non-compliance with assigned policy
+definitions. For more information, see [Azure Policy](../../policy/overview.md).
+
+## Deploy
+
+To deploy the Azure Blueprints CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample,
+the following steps must be taken:
+
+> [!div class="checklist"]
+> - Create a new blueprint from the sample
+> - Mark your copy of the sample as **Published**
+> - Assign your copy of the blueprint to an existing subscription
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
+before you begin.
+
+### Create blueprint from sample
+
+First, implement the blueprint sample by creating a new blueprint in your environment using the
+sample as a starter.
+
+1. Select **All services** in the left pane. Search for and select **Blueprints**.
+
+1. From the **Getting started** page on the left, select the **Create** button under _Create a
+ blueprint_.
+
+1. Find the **CIS Microsoft Azure Foundations Benchmark v1.3.0** blueprint sample under _Other
+ Samples_ and select **Use this sample**.
+
+1. Enter the _Basics_ of the blueprint sample:
+
+ - **Blueprint name**: Provide a name for your copy of the CIS Microsoft Azure Foundations
+ Benchmark blueprint sample.
+ - **Definition location**: Use the ellipsis and select the management group to save your copy of
+ the sample to.
+
+1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
+ page.
+
+1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
+ have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
+ blueprint sample.
+
+### Publish the sample copy
+
+Your copy of the blueprint sample has now been created in your environment. It's created in
+**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
+blueprint sample can be customized to your environment and needs, but that modification may move it
+away from alignment with CIS Microsoft Azure Foundations Benchmark v1.3.0 recommendations.
+
+1. Select **All services** in the left pane. Search for and select **Blueprints**.
+
+1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
+ blueprint sample and then select it.
+
+1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
+   **Version** for your copy of the blueprint sample. This property is useful if you make a
+ modification later. Provide **Change notes** such as "First version published from the CIS
+ Microsoft Azure Foundations Benchmark blueprint sample." Then select **Publish** at the bottom of
+ the page.
+
+### Assign the sample copy
+
+Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
+subscription within the management group it was saved to. This step is where parameters are provided
+to make each deployment of the copy of the blueprint sample unique.
+
+1. Select **All services** in the left pane. Search for and select **Blueprints**.
+
+1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
+ blueprint sample and then select it.
+
+1. Select **Assign blueprint** at the top of the blueprint definition page.
+
+1. Provide the parameter values for the blueprint assignment:
+
+ - Basics
+
+ - **Subscriptions**: Select one or more of the subscriptions that are in the management group
+ you saved your copy of the blueprint sample to. If you select more than one subscription, an
+ assignment will be created for each using the parameters entered.
+ - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
+ Change as needed or leave as is.
+ - **Location**: Select a region for the managed identity to be created in. Azure Blueprint uses
+ this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
+ [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
+ - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
+ sample.
+
+ - Lock Assignment
+
+ Select the blueprint lock setting for your environment. For more information, see
+ [blueprints resource locking](../concepts/resource-locking.md).
+
+ - Managed Identity
+
+ Leave the default _system assigned_ managed identity option.
+
+ - Artifact parameters
+
+ The parameters defined in this section apply to the artifact under which it's defined. These
+ parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since
+   they're defined during the assignment of the blueprint. For a full list of artifact parameters
+ and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
+
+1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
+ assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
+ on the status of deployment, open the blueprint assignment.
+
+> [!WARNING]
+> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
+> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
+> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
+> running resources deployed by this blueprint sample.
+
+### Artifact parameters table
+
+The following table provides a list of the blueprint artifact parameters:
+
+|Artifact name|Artifact type|Parameter name|Description|
+|-|-|-|-|
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|List of virtual machine extensions that are approved for use|A semicolon-separated list of virtual machine extensions; to see a complete list of extensions, use the Azure PowerShell command Get-AzVMExtensionImage|
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SQL managed instances should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Azure Data Lake Store should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Disk encryption should be applied on virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Key vault should have purge protection enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure API app has 'Client Certificates (Incoming client certificates)' set to 'On'|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SQL servers should use customer-managed keys to encrypt data at rest|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Key Vault should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Custom subscription owner roles should not exist|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Keys should have expiration dates set|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Transparent Data Encryption on SQL databases should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for App Service should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should restrict network access using virtual network rules|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SSH access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Unattached disks should be encrypted|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Storage should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Logic Apps should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in IoT Hub should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: FTPS only should be required in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Security operations (Microsoft.Security/securitySolutions/delete)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Security operations (Microsoft.Security/securitySolutions/write)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Secure transfer to storage accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Batch accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Auto provisioning of the Log Analytics agent should be enabled on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: FTPS should be required in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Subscriptions should have a contact email address for security issues|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage account public access should be disallowed|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Kubernetes should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Connection throttling should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure WEB app has 'Client Certificates (Incoming client certificates)' set to 'On'|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: External accounts with write permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: External accounts with read permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for SQL servers on machines should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Email notification for high severity alerts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage account should use customer-managed key for encryption|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Web app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Virtual Machine Scale Sets should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for Azure SQL Database servers should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Event Hub should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: System updates should be installed on your machines|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the API app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: SQL servers should be configured with 90 days auditing retention or higher.|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Web app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Latest TLS version should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: MFA should be enabled accounts with write permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your web app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Secrets should have expiration dates set|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the API app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: FTPS only should be required in your API App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Function app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Web Application should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Auditing on SQL server should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: MFA should be enabled on accounts with owner permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Advanced data security should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Advanced data security should be enabled on SQL Managed Instance|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Role-Based Access Control (RBAC) should be used on Kubernetes Services|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Monitor missing Endpoint Protection in Azure Security Center|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Search services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in App Services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/delete)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/write)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/write)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/write)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Only approved VM extensions should be installed|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for container registries should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your API app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/delete)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/write)|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your Function app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Data Lake Analytics should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should allow access from trusted Microsoft services|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Key Vault should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Function app|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: MFA should be enabled on accounts with read permissions on your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: RDP access from the Internet should be blocked|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Enforce SSL connection should be enabled for MySQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Ensure Function app has 'Client Certificates (Incoming client certificates)' set to 'On'|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Log checkpoints should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Log connections should be enabled for PostgreSQL database servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Disconnections should be logged for PostgreSQL database servers.|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Latest TLS version should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: External accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Service Bus should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Azure Stream Analytics should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Latest TLS version should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage account containing the container with activity logs must be encrypted with BYOK|For more information about effects, visit [https://aka.ms/policyeffects](https://aka.ms/policyeffects) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Include AKS clusters when auditing if virtual machine scale set diagnostic logs are enabled||
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Latest Java version for App Services|Latest supported Java version for App Services|
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Latest Python version for Linux for App Services|Latest supported Python version for App Services|
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|List of regions where Network Watcher should be enabled|To see a complete list of regions, run the PowerShell command Get-AzLocation|
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Latest PHP version for App Services|Latest supported PHP version for App Services|
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Required retention period (days) for resource logs|For more information about resource logs, visit [https://aka.ms/resourcelogs](https://aka.ms/resourcelogs) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Name of the resource group for Network Watcher|Name of the resource group where Network Watchers are located|
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Required auditing setting for SQL servers||
+
+## Next steps
+
+Additional articles about blueprints and how to use them:
+
+- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
+- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
+- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
+- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/index.md
Title: Index of blueprint samples description: Index of compliance and standard samples for deploying environments, policies, and Cloud Adoptions Framework foundations with Azure Blueprints. Previously updated : 02/08/2020 Last updated : 03/11/2021 # Azure Blueprints samples
quality and ready to deploy today to assist you in meeting your various complian
| [Azure Security Benchmark](./azure-security-benchmark.md) | Provides guardrails for compliance to [Azure Security Benchmark](../../../security/benchmarks/overview.md). | | [Azure Security Benchmark Foundation](./azure-security-benchmark-foundation/index.md) | Deploys and configures Azure Security Benchmark Foundation. | | [Canada Federal PBMM](./canada-federal-pbmm/index.md) | Provides guardrails for compliance to Canada Federal Protected B, Medium Integrity, Medium Availability (PBMM). |
-| [CIS Microsoft Azure Foundations Benchmark](./cis-azure-1-1-0.md)| Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark recommendations. |
+| [CIS Microsoft Azure Foundations Benchmark v1.3.0](./cis-azure-1-3-0.md)| Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.3.0 recommendations. |
+| [CIS Microsoft Azure Foundations Benchmark v1.1.0](./cis-azure-1-1-0.md)| Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations. |
| [DoD Impact Level 4](./dod-impact-level-4/index.md) | Provides a set of policies to help comply with DoD Impact Level 4. | | [DoD Impact Level 5](./dod-impact-level-5/index.md) | Provides a set of policies to help comply with DoD Impact Level 5. | | [FedRAMP Moderate](./fedramp-m/index.md) | Provides a set of policies to help comply with FedRAMP Moderate. |
healthcare-apis Access Fhir Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/access-fhir-postman-tutorial.md
+
+ Title: Postman FHIR server in Azure - Azure API for FHIR
+description: In this tutorial, we walk through the steps needed to use Postman to access a FHIR server. Postman is helpful for debugging applications that access APIs.
+ Last updated : 02/01/2021
+# Access Azure API for FHIR with Postman
+
+A client application accesses a FHIR API through a [REST API](https://www.hl7.org/fhir/http.html). You may also want to interact directly with the FHIR server as you build applications, for example, for debugging purposes. In this tutorial, we walk through the steps needed to use [Postman](https://www.getpostman.com/) to access a FHIR server. Postman is a tool often used for debugging when building applications that access APIs.
+
+## Prerequisites
+
+- A FHIR endpoint in Azure. You can set that up using the managed Azure API for FHIR or the Open Source FHIR server for Azure. Set up the managed Azure API for FHIR using [Azure portal](fhir-paas-portal-quickstart.md), [PowerShell](fhir-paas-powershell-quickstart.md), or [Azure CLI](fhir-paas-cli-quickstart.md).
+- A [client application](register-confidential-azure-ad-client-app.md) you will be using to access the FHIR service.
+- You have granted permissions, for example, "FHIR Data Contributor", to the client application to access the FHIR service. For more information, see [Configure Azure RBAC for FHIR](configure-azure-rbac.md).
+- Postman installed. You can get it from [https://www.getpostman.com](https://www.getpostman.com).
+
+## FHIR server and authentication details
+
+In order to use Postman, the following details are needed:
+
+- Your FHIR server URL, for example `https://MYACCOUNT.azurehealthcareapis.com`
+- The identity provider `Authority` for your FHIR server, for example, `https://login.microsoftonline.com/{TENANT-ID}`
+- The configured `audience`. This is usually the URL of the FHIR server, for example, `https://<FHIR-SERVER-NAME>.azurehealthcareapis.com` or just `https://azurehealthcareapis.com`.
+- The `client_id` (or application ID) of the [client application](register-confidential-azure-ad-client-app.md) you will be using to access the FHIR service.
+- The `client_secret` (or application secret) of the client application.
+
+Finally, you should check that `https://www.getpostman.com/oauth2/callback` is a registered reply URL for your client application.
+
+## Connect to FHIR server
+
+Using Postman, do a `GET` request to `https://fhir-server-url/metadata`:
+
+![Postman Metadata Capability Statement](media/tutorial-postman/postman-metadata.png)
+
+The metadata URL for Azure API for FHIR is `https://MYACCOUNT.azurehealthcareapis.com/metadata`. In this example, the FHIR server URL is `https://fhirdocsmsft.azurewebsites.net` and the capability statement of the server is available at `https://fhirdocsmsft.azurewebsites.net/metadata`. That endpoint should be accessible without authentication.
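+The capability statement returned by the `metadata` endpoint is a JSON `CapabilityStatement` resource. As a sketch, you can sanity-check the fields that confirm you are talking to a FHIR server — the response below is a trimmed, hypothetical example, not the full document an actual server returns:
+
+```python
+import json
+
+# A trimmed, hypothetical CapabilityStatement, similar in shape to what
+# GET https://MYACCOUNT.azurehealthcareapis.com/metadata returns.
+sample_capability_statement = json.loads("""
+{
+  "resourceType": "CapabilityStatement",
+  "status": "active",
+  "fhirVersion": "4.0.0",
+  "format": ["application/fhir+json"]
+}
+""")
+
+# The fields worth checking before going further.
+resource_type = sample_capability_statement["resourceType"]
+fhir_version = sample_capability_statement["fhirVersion"]
+print(resource_type, fhir_version)
+```
+
+If `resourceType` is not `CapabilityStatement`, you are probably pointed at the wrong URL.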
+
+If you attempt to access restricted resources, you should get an "Authentication failed" response:
+
+![Authentication Failed](media/tutorial-postman/postman-authentication-failed.png)
+
+## Obtaining an access token
+
+To obtain a valid access token, select "Authorization" and pick TYPE "OAuth 2.0":
+
+![Set OAuth 2.0](media/tutorial-postman/postman-select-oauth2.png)
+
+Hit "Get New Access Token" and a dialog appears:
+
+![Request New Access Token](media/tutorial-postman/postman-request-token.png)
+
+You will need to fill in some details:
+
+| Field | Example Value | Comment |
+|--|--|-|
+| Token Name | MYTOKEN | A name you choose |
+| Grant Type | Authorization Code | |
+| Callback URL | `https://www.getpostman.com/oauth2/callback` | |
+| Auth URL | `https://login.microsoftonline.com/{TENANT-ID}/oauth2/authorize?resource=<audience>` | `audience` is `https://MYACCOUNT.azurehealthcareapis.com` for Azure API for FHIR |
+| Access Token URL | `https://login.microsoftonline.com/{TENANT ID}/oauth2/token` | |
+| Client ID | `XXXXXXXX-XXX-XXXX-XXXX-XXXXXXXXXXXX` | Application ID |
+| Client Secret | `XXXXXXXX` | Secret client key |
+| Scope | `<Leave Blank>` | |
+| State | `1234` | |
+| Client Authentication | Send client credentials in body | |
+
+Hit "Request Token" and you will be guided through the Azure Active Directory authentication flow, and a token will be returned to Postman. If you run into problems, open the Postman Console (from the "View" > "Show Postman Console" menu item).
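+Behind the scenes, after the sign-in redirect, Postman exchanges the authorization code for a token by POSTing a form-encoded body to the Access Token URL. A minimal sketch of that request body, with every value a hypothetical placeholder:
+
+```python
+from urllib.parse import urlencode
+
+tenant_id = "TENANT-ID"  # hypothetical placeholder
+token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
+
+# Form fields sent when "Client Authentication" is
+# "Send client credentials in body".
+body = urlencode({
+    "grant_type": "authorization_code",
+    "code": "AUTH-CODE-FROM-CALLBACK",  # returned to the Callback URL
+    "redirect_uri": "https://www.getpostman.com/oauth2/callback",
+    "client_id": "XXXXXXXX-XXX-XXXX-XXXX-XXXXXXXXXXXX",
+    "client_secret": "XXXXXXXX",
+    "resource": "https://MYACCOUNT.azurehealthcareapis.com",
+})
+print(token_url)
+print(body)
+```
+
+Seeing this shape helps when reading the Postman Console output during troubleshooting.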
+
+Scroll down on the returned token screen and hit "Use Token":
+
+![Use Token](media/tutorial-postman/postman-use-token.png)
+
+The token should now be populated in the "Access Token" field, and you can select tokens from "Available Tokens". If you hit "Send" again to repeat the `Patient` resource search, you should get a Status `200 OK`:
+
+![200 OK](media/tutorial-postman/postman-200-OK.png)
+
+In this case, there are no patients in the database and the search is empty.
+
+If you inspect the access token with a tool like [https://jwt.ms](https://jwt.ms), you should see content like:
+
+```jsonc
+{
+ "aud": "https://MYACCOUNT.azurehealthcareapis.com",
+ "iss": "https://sts.windows.net/{TENANT-ID}/",
+ "iat": 1545343803,
+ "nbf": 1545343803,
+ "exp": 1545347703,
+ "acr": "1",
+ "aio": "AUQAu/8JXXXXXXXXXdQxcxn1eis459j70Kf9DwcUjlKY3I2G/9aOnSbw==",
+ "amr": [
+ "pwd"
+ ],
+ "appid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "oid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "appidacr": "1",
+
+ ...// Truncated
+}
+```
+
+In troubleshooting situations, validating that you have the correct audience (`aud` claim) is a good place to start. If your token is from the correct issuer (`iss` claim) and has the correct audience (`aud` claim), but you are still unable to access the FHIR API, it is likely that the user or service principal (`oid` claim) does not have access to the FHIR data plane. We recommend you [use Azure role-based access control (Azure RBAC)](configure-azure-rbac.md) to assign data plane roles to users. If you are using an external, secondary Azure Active directory tenant for your data plane, you will need to [configure local RBAC assignments](configure-local-rbac.md).
+
+It is also possible to [get a token for the Azure API for FHIR using the Azure CLI](get-healthcare-apis-access-token-cli.md). If you are using a token obtained with the Azure CLI, you should use Authorization type "Bearer Token" and paste the token in directly.
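+
+As a sketch of that CLI route (MYACCOUNT is a placeholder for your account name, and this assumes the Azure CLI is installed and you are signed in with `az login`):
+
+```shell
+# Request a token for the Azure API for FHIR audience.
+TOKEN=$(az account get-access-token \
+  --resource "https://MYACCOUNT.azurehealthcareapis.com" \
+  --query accessToken --output tsv)
+
+# Paste this value into Postman's "Bearer Token" field, or call the API directly:
+curl -s -H "Authorization: Bearer ${TOKEN}" \
+  "https://MYACCOUNT.azurehealthcareapis.com/Patient"
+```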
+
+## Inserting a patient
+
+Now that you have a valid access token, you can insert a new patient. Switch to the "POST" method and add the following JSON document in the body of the request:
+
+[!code-json[](../samples/sample-patient.json)]
+
+Hit "Send" and you should see that the patient is successfully created:
+
+![Screenshot that shows that the patient is successfully created.](media/tutorial-postman/postman-patient-created.png)
+
+If you repeat the patient search, you should now see the patient record:
+
+![Patient Created](media/tutorial-postman/postman-patient-found.png)
+
+## Next steps
+
+In this tutorial, you've accessed a FHIR API using Postman. Read about the supported API features in our supported features section.
+
+>[!div class="nextstepaction"]
+>[Supported features](fhir-features-supported.md)
healthcare-apis Azure Ad Hcapi Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/azure-ad-hcapi-token-validation.md
+
+ Title: Azure API for FHIR access token validation
+description: Walks through token validation and gives tips on how to troubleshoot access issues
++++++ Last updated : 02/19/2019++
+# Azure API for FHIR access token validation
+
+How Azure API for FHIR validates the access token will depend on implementation and configuration. In this article, we will walk through the validation steps, which can be helpful when troubleshooting access issues.
+
+## Validate token has no issues with identity provider
+
+The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider known as the `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When using Azure AD, the full URL would be:
+
+```
+GET https://login.microsoftonline.com/<TENANT-ID>/.well-known/openid-configuration
+```
+
+where `<TENANT-ID>` is the specific Azure AD tenant (either a tenant ID or a domain name).
+
+Azure AD will return a document like the one below to the FHIR server.
+
+```json
+{
+ "authorization_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/authorize",
+ "token_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/token",
+ "token_endpoint_auth_methods_supported": [
+ "client_secret_post",
+ "private_key_jwt",
+ "client_secret_basic"
+ ],
+ "jwks_uri": "https://login.microsoftonline.com/common/discovery/keys",
+ "response_modes_supported": [
+ "query",
+ "fragment",
+ "form_post"
+ ],
+ "subject_types_supported": [
+ "pairwise"
+ ],
+ "id_token_signing_alg_values_supported": [
+ "RS256"
+ ],
+ "http_logout_supported": true,
+ "frontchannel_logout_supported": true,
+ "end_session_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/logout",
+ "response_types_supported": [
+ "code",
+ "id_token",
+ "code id_token",
+ "token id_token",
+ "token"
+ ],
+ "scopes_supported": [
+ "openid"
+ ],
+ "issuer": "https://sts.windows.net/<TENANT-ID>/",
+ "claims_supported": [
+ "sub",
+ "iss",
+ "cloud_instance_name",
+ "cloud_instance_host_name",
+ "cloud_graph_host_name",
+ "msgraph_host",
+ "aud",
+ "exp",
+ "iat",
+ "auth_time",
+ "acr",
+ "amr",
+ "nonce",
+ "email",
+ "given_name",
+ "family_name",
+ "nickname"
+ ],
+ "microsoft_multi_refresh_token": true,
+ "check_session_iframe": "https://login.microsoftonline.com/<TENANT-ID>/oauth2/checksession",
+ "userinfo_endpoint": "https://login.microsoftonline.com/<TENANT-ID>/openid/userinfo",
+ "tenant_region_scope": "WW",
+ "cloud_instance_name": "microsoftonline.com",
+ "cloud_graph_host_name": "graph.windows.net",
+ "msgraph_host": "graph.microsoft.com",
+ "rbac_url": "https://pas.windows.net"
+}
+```
+The important properties for the FHIR server are `jwks_uri`, which tells the server where to fetch the public signing keys needed to validate the token signature, and `issuer`, which tells the server what will be in the issuer claim (`iss`) of tokens issued by this identity provider. The FHIR server can use these to validate that it is receiving an authentic token.
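+
+For example, assuming the configuration document was saved as `openid-configuration.json` (a filename chosen for this sketch), those two properties can be pulled out with standard tools:
+
+```shell
+# Extract the two properties the FHIR server relies on from the saved document.
+grep -o '"jwks_uri": *"[^"]*"' openid-configuration.json
+grep -o '"issuer": *"[^"]*"' openid-configuration.json
+```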
+
+## Validate claims of the token
+
+Once the server has verified the authenticity of the token, the FHIR server will then proceed to validate that the client has the claims required to access the server.
+
+When using the Azure API for FHIR, the server will validate:
+
+1. The token has the right `Audience` (`aud` claim).
+1. The user or principal that the token was issued for is allowed to access the FHIR server data plane. The `oid` claim of the token contains an identity object ID, which uniquely identifies the user or principal.
+
+We recommend that the FHIR service be [configured to use Azure RBAC](configure-azure-rbac.md) to manage data plane role assignments. But you can also [configure local RBAC](configure-local-rbac.md) if your FHIR service uses an external or secondary Azure Active Directory tenant.
+
+When using the OSS Microsoft FHIR server for Azure, the server will validate:
+
+1. The token has the right `Audience` (`aud` claim).
+1. The token has a role in the `roles` claim, which is allowed access to the FHIR server.
+
+Consult details on how to [define roles on the FHIR server](https://github.com/microsoft/fhir-server/blob/master/docs/Roles.md).
+
+A FHIR server may also validate that an access token has the scopes (in token claim `scp`) to access the part of the FHIR API that a client is trying to access. Currently, the Azure API for FHIR and the FHIR server for Azure do not validate token scopes.
+
+## Next steps
+Now that you know how to walk through token validation, you can complete the tutorial to create a JavaScript application and read FHIR data.
+
+>[!div class="nextstepaction"]
+>[Web application tutorial](tutorial-web-app-fhir-server.md)
healthcare-apis Azure Ad Hcapi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/azure-ad-hcapi.md
+
+ Title: Azure Active Directory identity configuration for Azure API for FHIR
+description: Learn the principles of identity, authentication, and authorization for Azure FHIR servers.
++++++ Last updated : 02/19/2019+++
+# Azure Active Directory identity configuration for Azure API for FHIR
+
+An important part of working with healthcare data is ensuring that the data is secure and cannot be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. The [Azure API for FHIR](https://azure.microsoft.com/services/azure-api-for-fhir/) is secured using [Azure Active Directory](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, we will walk through Azure API for FHIR as the FHIR server and Azure AD as our identity provider in this article.
+
+## Access control overview
+
+In order for a client application to access Azure API for FHIR, it must present an access token. The access token is a signed, [Base64](https://en.wikipedia.org/wiki/Base64) encoded collection of properties (claims) that convey information about the client's identity and roles and privileges granted to the client.
+
+There are a number of ways to obtain a token, but the Azure API for FHIR doesn't care how the token is obtained as long as it's an appropriately signed token with the correct claims.
+
+Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) as an example, accessing a FHIR server goes through the four steps below:
+
+![FHIR Authorization](media/azure-ad-hcapi/fhir-authorization.png)
+
+1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration (see below).
+1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When requesting a token, the client application may have to provide a client secret (the application's password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
+1. The client makes a request to the Azure API for FHIR, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token.
+1. The Azure API for FHIR validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
+
+It is important to note that the Azure API for FHIR isn't involved in validating user credentials and it doesn't issue the token. The authentication and token creation is done by Azure AD. The Azure API for FHIR simply validates that the token is signed correctly (it is authentic) and that it has appropriate claims.
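+
+A minimal command-line sketch of steps 3 and 4 (the account name and token value are placeholders):
+
+```shell
+# Step 3: send a Patient search with the access token as a bearer token.
+ACCESS_TOKEN="eyJ0e..."   # placeholder; use a real token from the /token endpoint
+curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" \
+  "https://MYACCOUNT.azurehealthcareapis.com/Patient"
+# Step 4: if the claims check out, the response is a FHIR bundle with results.
+```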
+
+## Structure of an access token
+
+Development of FHIR applications often involves debugging access issues. If a client is denied access to the Azure API for FHIR, it's useful to understand the structure of the access token and how it can be decoded to inspect the contents (the claims) of the token.
+
+FHIR servers typically expect a [JSON Web Token](https://en.wikipedia.org/wiki/JSON_Web_Token) (JWT, sometimes pronounced "jot"). It consists of three parts:
+
+1. A header, which could look like:
+ ```json
+ {
+ "alg": "HS256",
+ "typ": "JWT"
+ }
+ ```
+1. The payload (the claims), for example:
+ ```json
+ {
+ "oid": "123",
+ "iss": "https://issuerurl",
+ "iat": 1422779638,
+ "roles": [
+ "admin"
+ ]
+ }
+ ```
+1. A signature, which is calculated by concatenating the Base64 encoded contents of the header and the payload and calculating a cryptographic hash of them based on the algorithm (`alg`) specified in the header. A server will be able to obtain public keys from the identity provider and validate that this token was issued by a specific identity provider and it hasn't been tampered with.
+
+The full token consists of the Base64 encoded (actually Base64 url encoded) versions of those three segments. The three segments are concatenated and separated with a `.` (dot).
+
+An example token is seen below:
+
+```
+eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJvaWQiOiIxMjMiLCAiaXNzIjoiaHR0cHM6Ly9pc3N1ZXJ1cmwiLCJpYXQiOjE0MjI3Nzk2MzgsInJvbGVzIjpbImFkbWluIl19.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI
+```
+
+The token can be decoded and inspected with tools such as [https://jwt.ms](https://jwt.ms). The result of decoding the token is:
+
+```json
+{
+ "alg": "HS256",
+ "typ": "JWT"
+}.{
+ "oid": "123",
+ "iss": "https://issuerurl",
+ "iat": 1422779638,
+ "roles": [
+ "admin"
+ ]
+}.[Signature]
+```
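+
+The decoding can also be reproduced with standard command-line tools; this sketch splits the example token above on the dots and base64url-decodes the header and payload segments:
+
+```shell
+TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJvaWQiOiIxMjMiLCAiaXNzIjoiaHR0cHM6Ly9pc3N1ZXJ1cmwiLCJpYXQiOjE0MjI3Nzk2MzgsInJvbGVzIjpbImFkbWluIl19.gzSraSYS8EXBxLN_oWnFSRgCzcmJmMjLiuyu5CSpyHI"
+for i in 1 2; do
+  # Convert base64url to base64 and restore the padding stripped during encoding.
+  SEG=$(printf '%s' "$TOKEN" | cut -d '.' -f"$i" | tr '_-' '/+')
+  while [ $(( ${#SEG} % 4 )) -ne 0 ]; do SEG="${SEG}="; done
+  printf '%s' "$SEG" | base64 -d
+  echo
+done
+```
+
+This prints the header and payload JSON shown above; the third segment is the binary signature and is not meaningful to decode as text.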
+
+## Obtaining an access token
+
+As mentioned above, there are several ways to obtain a token from Azure AD. They are described in detail in the [Azure AD developer documentation](../../active-directory/develop/index.yml).
+
+Azure AD has two different versions of the OAuth 2.0 endpoints, which are referred to as `v1.0` and `v2.0`. Both of these versions are OAuth 2.0 endpoints and the `v1.0` and `v2.0` designations refer to differences in how Azure AD implements that standard.
+
+When using a FHIR server, you can use either the `v1.0` or the `v2.0` endpoints. The choice may depend on the authentication libraries you are using in your client application.
+
+The pertinent sections of the Azure AD documentation are:
+
+* `v1.0` endpoint:
+ * [Authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md).
+ * [Client credentials flow](../../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md).
+* `v2.0` endpoint:
+ * [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
+ * [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
+
+There are other variations (for example on behalf of flow) for obtaining a token. Check the Azure AD documentation for details. When using the Azure API for FHIR, there are also some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
+
+## Next steps
+
+In this document, you learned some of the basic concepts involved in securing access to the Azure API for FHIR using Azure AD. To learn how to deploy an instance of the Azure API for FHIR, continue to the deployment quickstart.
+
+>[!div class="nextstepaction"]
+>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
healthcare-apis Azure Api For Fhir Additional Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/azure-api-for-fhir-additional-settings.md
+
+ Title: Additional Settings for Azure API for FHIR
+description: Overview of the additional settings you can set for Azure API for FHIR
++++++ Last updated : 11/22/2019++
+# Additional settings for Azure API for FHIR
+
+In this how-to guide, we will review the additional settings you may want to set in your Azure API for FHIR. There are additional pages that drill into even more details.
+
+## Configure Database settings
+
+Azure API for FHIR uses a database to store its data. Performance of the underlying database depends on the number of Request Units (RUs) selected during service provisioning or in database settings after the service has been provisioned.
+
+Throughput must be provisioned to ensure that sufficient system resources are available for your database at all times. How many RUs you need for your application depends on the operations you perform. Operations can range from simple reads and writes to more complex queries.
+
+For more information on how to change the default settings, see [configure database settings](configure-database.md).
+
+## Access control
+
+The Azure API for FHIR will only allow authorized users to access the FHIR API. You can configure authorized users through two different mechanisms. The primary and recommended way to configure access control is using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml), which is accessible through the **Access control (IAM)** blade. Azure RBAC only works if you want to secure data plane access using the Azure Active Directory tenant associated with your subscription. If you wish to use a different tenant, the Azure API for FHIR offers a local FHIR data plane access control mechanism. The configuration options are not as rich when using the local RBAC mechanism. For details, choose one of the following options:
+
+* [Azure RBAC for FHIR data plane](configure-azure-rbac.md). This is the preferred option when you are using the Azure Active Directory tenant associated with your subscription.
+* [Local FHIR data plane access control](configure-local-rbac.md). Use this option only when you need to use an external Azure Active Directory tenant for data plane access control.
+
+## Enable diagnostic logging
+You may want to enable diagnostic logging as part of your setup to be able to monitor your service and have accurate reporting for compliance purposes. For details on how to set up diagnostic logging, see our [how-to-guide](enable-diagnostic-logging.md) on how to set up diagnostic logging, along with some sample queries.
+
+## Use custom headers to add data to audit logs
+In the Azure API for FHIR, you may want to include additional information in the logs that comes from the calling system. To include this information, you can use custom headers.
+
+You can use custom headers to capture several types of information. For example:
+
+* Identity or authorization information
+* Origin of the caller
+* Originating organization
+* Client system details (electronic health record, patient portal)
+
+To add this data to your audit logs, see the [Use Custom HTTP headers to add data to Audit Logs](use-custom-headers.md) how-to-guide.
+
+## Next steps
+
+In this how-to guide, you set up additional settings for the Azure API for FHIR.
+
+Next check out the series of tutorials to create a web application that reads FHIR data.
+
+>[!div class="nextstepaction"]
+>[Deploy JavaScript application](tutorial-web-app-fhir-server.md)
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-azure-rbac.md
+
+ Title: Configure Azure role-based access control (Azure RBAC) for Azure API for FHIR
+description: This article describes how to configure Azure RBAC for the Azure API for FHIR data plane
++++ Last updated : 03/15/2020+++
+# Configure Azure RBAC for FHIR
+
+In this article, you will learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred method for assigning data plane access when data plane users are managed in the Azure Active Directory tenant associated with your Azure subscription. If you are using an external Azure Active Directory tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
+
+## Confirm Azure RBAC mode
+
+To use Azure RBAC, your Azure API for FHIR must be configured to use your Azure subscription tenant for data plane and there should be no assigned identity object IDs. You can verify your settings by inspecting the **Authentication** blade of your Azure API for FHIR:
++
+The **Authority** should be set to the Azure Active Directory tenant associated with your subscription, and there should be no GUIDs in the box labeled **Allowed object IDs**. You will also notice that the box is disabled and a label indicates that Azure RBAC should be used to assign data plane roles.
+
+## Assign roles
+
+To grant users, service principals or groups access to the FHIR data plane, click **Access control (IAM)**, then click **Role assignments** and click **+ Add**:
++
+In the **Role** selection, search for one of the built-in roles for the FHIR data plane:
++
+You can choose between:
+
+* FHIR Data Reader: Can read (and search) FHIR data.
+* FHIR Data Writer: Can read, write, and soft delete FHIR data.
+* FHIR Data Exporter: Can read and export (`$export` operator) data.
+* FHIR Data Contributor: Can perform all data plane operations.
+
+If these roles are not sufficient for your need, you can also [create custom roles](../../role-based-access-control/tutorial-custom-role-powershell.md).
+
+In the **Select** box, search for a user, service principal, or group that you wish to assign the role to.
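+
+The same assignment can also be scripted. A hedged Azure CLI sketch, in which the subscription ID, resource group, service name, and user are all placeholders:
+
+```shell
+# Assign the FHIR Data Reader role at the scope of the FHIR service.
+SCOPE="/subscriptions/<SUB-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.HealthcareApis/services/<SERVICE-NAME>"
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "FHIR Data Reader" \
+  --scope "$SCOPE"
+```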
+
+## Caching behavior
+
+The Azure API for FHIR will cache access decisions for up to 5 minutes. If you grant a user access to the FHIR server by assigning them a data plane role, or you remove an assignment, you should expect it to take up to five minutes for changes in permissions to propagate.
+
+## Next steps
+
+In this article, you learned how to assign Azure roles for the FHIR data plane. To learn about additional settings for the Azure API for FHIR:
+
+>[!div class="nextstepaction"]
+>[Additional settings for Azure API for FHIR](azure-api-for-fhir-additional-settings.md)
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-cross-origin-resource-sharing.md
+
+ Title: Configure cross-origin resource sharing in Azure API for FHIR
+description: This article describes how to configure cross-origin resource sharing in Azure API for FHIR.
++ Last updated : 3/11/2019++++
+# Configure cross-origin resource sharing in Azure API for FHIR
+
+Azure API for Fast Healthcare Interoperability Resources (FHIR) supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
+
+CORS is often used in a single-page app that must call a RESTful API to a different domain.
+
+To configure a CORS setting in the Azure API for FHIR, specify the following settings:
+
+- **Origins (Access-Control-Allow-Origin)**. A list of domains allowed to make cross-origin requests to the Azure API for FHIR. Each domain (origin) must be entered in a separate line. You can enter an asterisk (*) to allow calls from any domain, but we don't recommend it because it's a security risk.
+
+- **Headers (Access-Control-Allow-Headers)**. A list of headers that the origin request will contain. To allow all headers, enter an asterisk (*).
+
+- **Methods (Access-Control-Allow-Methods)**. The allowed methods (PUT, GET, POST, and so on) in an API call. Choose **Select all** for all methods.
+
+- **Max age (Access-Control-Max-Age)**. The value in seconds to cache preflight request results for Access-Control-Allow-Headers and Access-Control-Allow-Methods.
+
+- **Allow credentials (Access-Control-Allow-Credentials)**. CORS requests normally don't include cookies to prevent [cross-site request forgery (CSRF)](https://en.wikipedia.org/wiki/Cross-site_request_forgery) attacks. If you select this setting, the request can be made to include credentials, such as cookies. You can't configure this setting if you already set Origins with an asterisk (*).
+
+![Cross-origin resource sharing (CORS) settings](media/cors/cors.png)
+
+>[!NOTE]
+>You can't specify different settings for different domain origins. All settings (**Headers**, **Methods**, **Max age**, and **Allow credentials**) apply to all origins specified in the Origins setting.
+
+## Next steps
+
+In this article, you learned how to configure cross-origin resource sharing in Azure API for FHIR. Next, deploy a fully managed Azure API for FHIR:
+
+>[!div class="nextstepaction"]
+>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-database.md
+
+ Title: Configure database settings in Azure API for FHIR
+description: This article describes how to configure Database settings in Azure API for FHIR
++++ Last updated : 11/15/2019++
+# Configure database settings
+
+Azure API for FHIR uses a database to store its data. Performance of the underlying database depends on the number of Request Units (RUs) selected during service provisioning or in database settings after the service has been provisioned.
+
+Azure API for FHIR borrows the concept of RUs from Cosmos DB (see [Request Units in Azure Cosmos DB](../../cosmos-db/request-units.md)) when setting the performance of the underlying database.
+
+Throughput must be provisioned to ensure that sufficient system resources are available for your database at all times. How many RUs you need for your application depends on the operations you perform. Operations can range from simple reads and writes to more complex queries.
+
+> [!NOTE]
+> Because different operations consume different numbers of RUs, we return the actual number of RUs consumed by every API call in the response header. This way, you can profile the number of RUs consumed by your application.
+
+## Update throughput
+
+To change this setting in the Azure portal, navigate to your Azure API for FHIR and open the Database blade. Next, change the Provisioned throughput to the desired value depending on your performance needs. You can change the value up to a maximum of 10,000 RU/s. If you need a higher value, contact Azure support.
+
+If the database throughput is greater than 10,000 RU/s or if the data stored in the database is more than 50 GB, your client application must be capable of handling continuation tokens. A new partition is created in the database for every throughput increase of 10,000 RU/s or if the amount of data stored is more than 50 GB. Multiple partitions create a multi-page response, in which pagination is implemented by using continuation tokens.
+
+> [!NOTE]
+> A higher value means higher Azure API for FHIR throughput and a higher cost of the service.
+
+![Config Cosmos DB](media/database/database-settings.png)
+
+## Next steps
+
+In this article, you learned how to update your RUs for Azure API for FHIR. To learn about configuring customer-managed keys as a database setting:
+
+>[!div class="nextstepaction"]
+>[Configure customer-managed keys](customer-managed-key.md)
+
+Or you can deploy a fully managed Azure API for FHIR:
+
+>[!div class="nextstepaction"]
+>[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-export-data.md
+
+ Title: Configure export settings in Azure API for FHIR
+description: This article describes how to configure export settings in Azure API for FHIR
++++ Last updated : 3/5/2020+++
+# Configure export setting and set up the storage account
+
+Azure API for FHIR supports the $export command, which allows you to export data out of an Azure API for FHIR account to a storage account.
+
+There are three steps involved in configuring export in Azure API for FHIR:
+
+1. Enable Managed Identity on the Azure API for FHIR service.
+2. Create an Azure storage account (if not done before) and assign the Azure API for FHIR permission to the storage account.
+3. Select the storage account in Azure API for FHIR as the export storage account.
+
+## Enabling Managed Identity on Azure API for FHIR
+
+The first step in configuring Azure API for FHIR for export is to enable system-wide managed identity on the service. For more information, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+
+To do so, navigate to the Azure API for FHIR service and select the Identity blade. Changing the status to **On** will enable managed identity in the Azure API for FHIR service.
+
+![Enable Managed Identity](media/export-data/fhir-mi-enabled.png)
+
+Now we can move to the next step: creating a storage account and assigning permission to our service.
+
+## Adding permission to storage account
+
+The next step in export is to assign the Azure API for FHIR service permission to write to the storage account.
+
+After we have created a storage account, navigate to the Access Control (IAM) blade in the storage account and select **Add role assignment**:
+
+![Export Role Assignment](media/export-data/fhir-export-role-assignment.png)
+
+Here we add the Storage Blob Data Contributor role to our service:
+
+![Add Role](media/export-data/fhir-export-role-add.png)
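+
+The same role assignment can be scripted. A hedged Azure CLI sketch, in which the managed identity object ID, subscription ID, resource group, and storage account name are all placeholders:
+
+```shell
+# Grant the FHIR service's managed identity write access to the storage account.
+SCOPE="/subscriptions/<SUB-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE-ACCOUNT>"
+az role assignment create \
+  --assignee-object-id "<FHIR-MANAGED-IDENTITY-OBJECT-ID>" \
+  --role "Storage Blob Data Contributor" \
+  --scope "$SCOPE"
+```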
+
+Now we are ready for the next step, where we select the storage account in Azure API for FHIR as the default storage account for $export.
+
+## Selecting the storage account for $export
+
+The final step is to select the Azure storage account that Azure API for FHIR will use to export the data to. To do this, navigate to the Integration blade in the Azure API for FHIR service in the Azure portal and select the storage account:
+
+![FHIR Export Storage](media/export-data/fhir-export-storage.png)
+
+After that, we are ready to export the data using the $export command.
+
+>[!div class="nextstepaction"]
+>[Additional Settings](azure-api-for-fhir-additional-settings.md)
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-local-rbac.md
+
+ Title: Configure local role-based access control (local RBAC) for Azure API for FHIR
+description: This article describes how to configure the Azure API for FHIR to use an external Azure AD tenant for data plane
++++ Last updated : 03/15/2020++
+# Configure local RBAC for FHIR
+
+This article explains how to configure the Azure API for FHIR to use an external, secondary Azure Active Directory tenant for managing data plane access. Use this mode only if it is not possible for you to use the Azure Active Directory tenant associated with your subscription.
+
+> [!NOTE]
+> If your FHIR service data plane is configured to use your primary Azure Active Directory tenant associated with your subscription, [use Azure RBAC to assign data plane roles](configure-azure-rbac.md).
+
+## Add service principal
+
+Local RBAC allows you to use an external Azure Active Directory tenant with your FHIR server. In order to allow the local RBAC system to check group memberships in this tenant, the Azure API for FHIR must have a service principal in the tenant. This service principal is created automatically in tenants tied to subscriptions that have deployed the Azure API for FHIR, but if your tenant has no subscription tied to it, a tenant administrator will need to create the service principal with one of the following commands:
+
+Using the `Az` PowerShell module:
+
+```azurepowershell-interactive
+New-AzADServicePrincipal -ApplicationId 3274406e-4e0a-4852-ba4f-d7226630abb7
+```
+
+or you can use the `AzureAd` PowerShell module:
+
+```azurepowershell-interactive
+New-AzureADServicePrincipal -AppId 3274406e-4e0a-4852-ba4f-d7226630abb7
+```
+
+or you can use Azure CLI:
+
+```azurecli-interactive
+az ad sp create --id 3274406e-4e0a-4852-ba4f-d7226630abb7
+```
+
+## Configure local RBAC
+
+You can configure the Azure API for FHIR to use an external or secondary Azure Active Directory tenant in the **Authentication** blade:
+
+![Local RBAC assignments](media/rbac/local-rbac-guids.png)
+
+In the authority box, enter a valid Azure Active Directory tenant. Once the tenant has been validated, the **Allowed object IDs** box should be activated and you can enter a list of identity object IDs. These IDs can be the identity object IDs of:
+
+* An Azure Active Directory user.
+* An Azure Active Directory service principal.
+* An Azure Active directory security group.
+
+For more details, read the article on how to [find identity object IDs](find-identity-object-ids.md).
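+
+As a hedged sketch, a user's object ID can often be looked up with the Azure CLI (the account name is a placeholder; older CLI versions expose the property as `objectId`, newer ones as `id`):
+
+```shell
+# Look up the object ID for a user in the tenant.
+USER_UPN="user@contoso.com"   # placeholder account
+az ad user show --id "$USER_UPN" --query objectId --output tsv
+```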
+
+After entering the required object IDs, click **Save** and wait for changes to be saved before trying to access the data plane using the assigned users, service principals, or groups.
+
+## Caching behavior
+
+The Azure API for FHIR caches access decisions for up to five minutes. If you grant a user access to the FHIR server by adding them to the list of allowed object IDs, or remove them from the list, expect it to take up to five minutes for the change in permissions to propagate.
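
The effect of this caching can be illustrated with a small time-bounded cache. This is only a sketch of the observable behavior described above, not the service's actual implementation:

```python
import time

CACHE_TTL_SECONDS = 300  # the service caches access decisions for up to 5 minutes

class DecisionCache:
    """Illustrative time-bounded decision cache; `now` is injectable for testing."""
    def __init__(self, ttl=CACHE_TTL_SECONDS, now=time.monotonic):
        self.ttl, self.now, self._entries = ttl, now, {}

    def put(self, object_id, allowed):
        # Record the decision together with the time it was made.
        self._entries[object_id] = (allowed, self.now())

    def get(self, object_id):
        entry = self._entries.get(object_id)
        if entry and self.now() - entry[1] < self.ttl:
            return entry[0]   # still within the TTL: the cached decision wins
        return None           # expired or absent: re-evaluate against the allowed list
```

Removing an object ID from the allowed list only takes effect once any cached decision for it has aged out, which is why changes can take up to five minutes to propagate.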
+
+## Next steps
+
+In this article, you learned how to assign FHIR data plane access using an external (secondary) Azure Active Directory tenant. Next, learn about additional settings for the Azure API for FHIR:
+
+>[!div class="nextstepaction"]
+>[Additional settings Azure API for FHIR](azure-api-for-fhir-additional-settings.md)
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-private-link.md
+
+ Title: Private link for Azure API for FHIR
+description: This article describes how to set up a private endpoint for Azure API for FHIR services
+Last updated: 03/03/2021
+# Configure private link
+
+Private link enables you to access Azure API for FHIR over a private endpoint, a network interface that connects you privately and securely using a private IP address from your virtual network. With private link, you can access our services securely from your VNet as a first-party service without having to go through a public DNS. This article walks you through how to create, test, and manage your private endpoint for Azure API for FHIR.
+
+>[!Note]
+>Neither Private Link nor Azure API for FHIR can be moved from one resource group or subscription to another once Private Link is enabled. To move, delete the Private Link first, then move Azure API for FHIR and create a new Private Link once the move is complete. Assess potential security ramifications before deleting Private Link.
+>
+>If exporting audit logs and/or metrics is enabled for Azure API for FHIR, update the export setting through Diagnostic Settings from the portal.
+
+## Prerequisites
+
+Before creating a private endpoint, there are some Azure resources that you will need to create first:
+
+- Resource Group – The Azure resource group that will contain the virtual network and private endpoint.
+- Azure API for FHIR – The FHIR resource you would like to put behind a private endpoint.
+- Virtual Network – The VNet to which your client services and private endpoint will be connected.
+
+For more information, check out the [Private Link Documentation](../../private-link/index.yml).
+
+## Disable public network access
+
+Creating a private endpoint for your FHIR resource does not automatically disable public traffic to it. To do that, you must update your FHIR resource to set the "Public access" property from "Enabled" to "Disabled". Be careful when disabling public network access: all requests to your FHIR service that do not come from a properly configured private endpoint will be denied. Only traffic from your private endpoints will be allowed.
+
+![Disable Public Network Access](media/private-link/private-link-disable.png)
+
+## Create private endpoint
+
+To create a private endpoint, a developer with RBAC permissions on the FHIR resource can use the Azure portal, [Azure PowerShell](../../private-link/create-private-endpoint-powershell.md), or [Azure CLI](../../private-link/create-private-endpoint-cli.md). This article walks you through the steps of using the Azure portal, which is recommended because it automates the creation and configuration of the Private DNS Zone. See the [Private Link quick start guides](../../private-link/create-private-endpoint-portal.md) for more details.
+
+There are two ways to create a private endpoint. The Auto Approval flow allows a user who has RBAC permissions on the FHIR resource to create a private endpoint without a need for approval. The Manual Approval flow allows a user without permissions on the FHIR resource to request that a private endpoint be approved by the owners of the FHIR resource.
+
+### Auto approval
+
+Make sure the region for the new private endpoint is the same as the region for your Virtual Network. The region for your FHIR resource can be different.
+
+![Azure portal Basics Tab](media/private-link/private-link-portal2.png)
+
+For Resource Type, search and select "Microsoft.HealthcareApis/services". For Resource, select the FHIR resource. For target sub-resource, select "fhir".
+
+![Azure portal Resource Tab](media/private-link/private-link-portal1.png)
+
+If you do not have an existing Private DNS Zone set up, select "(New)privatelink.azurehealthcareapis.com". If you already have your Private DNS Zone configured, you can select it from the list. It must be in the format of "privatelink.azurehealthcareapis.com".
+
+![Azure portal Configuration Tab](media/private-link/private-link-portal3.png)
+
+After the deployment is complete, you can go back to the "Private endpoint connections" tab, where you will see "Approved" as the connection state.
+
+### Manual Approval
+
+For manual approval, select the second option under Resource, "Connect to an Azure resource by resource ID or alias". For Target sub-resource, enter "fhir" as in Auto Approval.
+
+![Manual Approval](media/private-link/private-link-manual.png)
+
+After the deployment is complete, you can go back to the "Private endpoint connections" tab, where you can Approve, Reject, or Remove your connection.
+
+![Options](media/private-link/private-link-options.png)
+
+## Test private endpoint
+
+To make sure that your FHIR server is not receiving public traffic after disabling public network access, try hitting the /metadata endpoint for your server from your computer. You should receive a 403 Forbidden response. Note that it can take up to five minutes after updating the public network access flag before public traffic is blocked.
+
+To make sure your private endpoint can send traffic to your server:
+
+1. Create a VM that is connected to the virtual network and subnet your private endpoint is configured on. To ensure your traffic from the VM is only using the private network, you can disable outbound internet traffic via NSG rule.
+2. RDP into the VM.
+3. Try hitting your FHIR server's /metadata endpoint from the VM. You should receive the capability statement as a response.
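
The two probes above can be interpreted with a tiny helper. This is a sketch; the status codes reflect the behavior described in this section, and the function name is hypothetical:

```python
def interpret_metadata_probe(status_code, from_private_network):
    """Map a /metadata probe result to a verdict (illustrative helper)."""
    if from_private_network:
        # Inside the VNet, the capability statement should come back with 200 OK.
        return "ok" if status_code == 200 else "private endpoint not routing"
    # From the public internet, a locked-down server should return 403 Forbidden.
    return "public access blocked" if status_code == 403 else "public access still open"
```

Remember that the public probe can keep returning 200 for up to five minutes after you disable public network access.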
+
+## Manage private endpoint
+
+### View
+
+Private Endpoints and the associated NIC are visible in Azure portal from the resource group they were created in.
+
+![View in resources](media/private-link/private-link-view.png)
+
+### Delete
+
+Private endpoints can only be deleted from the Azure portal via the Overview blade (as below) or via the Delete option on the "Private endpoint connections" tab under Networking (preview). Selecting the delete button will delete the private endpoint and the associated NIC. If you delete all private endpoints to the FHIR resource and the public network access is disabled, no request will make it to your FHIR server.
+
+![Delete Private Endpoint](media/private-link/private-link-delete.png)
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/convert-data.md
+
+ Title: Data conversion for Azure API for FHIR
+description: Use the $convert-data endpoint and customize-converter templates to convert data in Azure API for FHIR.
+Last updated: 01/19/2021
+# How to convert data to FHIR (Preview)
+
+> [!IMPORTANT]
+> This capability is in public preview, is provided without a service level agreement,
+> and is not recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities. For more information, see
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+The $convert-data custom endpoint in the Azure API for FHIR is meant for data conversion from different formats to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports HL7v2 to FHIR conversion.
+
+## Use the $convert-data endpoint
+
+`https://<<FHIR service base URL>>/$convert-data`
+
+$convert-data takes a [Parameters](http://hl7.org/fhir/parameters.html) resource in the request body, as described below:
+
+**Parameter Resource:**
+
+| Parameter Name | Description | Accepted values |
+| -- | -- | -- |
+| inputData | Data to be converted. | A valid value of JSON String datatype|
+| inputDataType | Data type of input. | ```HL7v2``` |
+| templateCollectionReference | Reference to a template collection. It can be a reference either to the **Default templates**, or a custom template image that is registered with Azure API for FHIR. See below to learn about customizing the templates, hosting those on ACR, and registering to the Azure API for FHIR. | ```microsofthealth/fhirconverter:default```, \<RegistryServer\>/\<imageName\>@\<imageDigest\> |
+| rootTemplate | The root template to use while transforming the data. | ```ADT_A01```, ```OML_O21```, ```ORU_R01```, ```VXU_V04``` |
+
+> [!WARNING]
+> Default templates help you get started quickly. However, these may get updated when we upgrade the Azure API for FHIR. In order to have consistent data conversion behavior across different versions of Azure API for FHIR, you must host your own copy of templates on an Azure Container Registry, register those to the Azure API for FHIR, and use in your API calls as described later.
+
+**Sample request:**
+
+```json
+{
+ "resourceType": "Parameters",
+ "parameter": [
+ {
+ "name": "inputData",
+ "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII\nEVN|A01|20200508131015|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^PRSNL^^^ORGDR|\nPID|1|3735064194^^^SIMULATOR MRN^MRN|3735064194^^^SIMULATOR MRN^MRN~2021051528^^^NHSNBR^NHSNMBR||Kinmonth^Joanna^Chelsea^^Ms^^CURRENT||19870624000000|F|||89 Transaction House^Handmaiden Street^Wembley^^FV75 4GJ^GBR^HOME||020 3614 5541^HOME|||||||||C^White - Other^^^||||||||\nPD1|||FAMILY PRACTICE^^12345|\nPV1|1|I|OtherWard^MainRoom^Bed 183^Simulated Hospital^^BED^Main Building^4|28b|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^PRSNL^^^ORGDR|||CAR|||||||||16094728916771313876^^^^visitid||||||||||||||||||||||ARRIVED|||20200508131015||"
+ },
+ {
+ "name": "inputDataType",
+ "valueString": "Hl7v2"
+ },
+ {
+ "name": "templateCollectionReference",
+ "valueString": "microsofthealth/fhirconverter:default"
+ },
+ {
+ "name": "rootTemplate",
+ "valueString": "ADT_A01"
+ }
+ ]
+}
+```
+
+**Sample response:**
+
+```json
+{
+ "resourceType": "Bundle",
+ "type": "transaction",
+ "entry": [
+ {
+ "fullUrl": "urn:uuid:9d697ec3-48c3-3e17-db6a-29a1765e22c6",
+ "resource": {
+ "resourceType": "Patient",
+ "id": "9d697ec3-48c3-3e17-db6a-29a1765e22c6",
+ ...
+ ...
+ "request": {
+ "method": "PUT",
+ "url": "Location/50becdb5-ff56-56c6-40a1-6d554dca80f0"
+ }
+ }
+ ]
+}
+```
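
The request body shown above can also be assembled programmatically. A minimal Python sketch; the function name is hypothetical, and the parameter names come from the table earlier in this section:

```python
import json

def build_convert_data_request(input_data, input_data_type="Hl7v2",
                               template_ref="microsofthealth/fhirconverter:default",
                               root_template="ADT_A01"):
    """Build the FHIR Parameters resource for $convert-data (illustrative helper)."""
    return {
        "resourceType": "Parameters",
        "parameter": [
            {"name": "inputData", "valueString": input_data},
            {"name": "inputDataType", "valueString": input_data_type},
            {"name": "templateCollectionReference", "valueString": template_ref},
            {"name": "rootTemplate", "valueString": root_template},
        ],
    }

# Serialize for POSTing to https://<<FHIR service base URL>>/$convert-data
body = json.dumps(build_convert_data_request("MSH|^~\\&|..."))
```

The HL7v2 message is passed as a single JSON string in `inputData`, with segment separators escaped, exactly as in the sample request above.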
+
+## Customize templates
+
+You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) for Visual Studio Code to customize the templates as per your needs. The extension provides an interactive editing experience, and makes it easy to download Microsoft-published templates and sample data. See the documentation in the extension for details.
+
+## Host and use templates
+
+It is strongly recommended that you host your own copy of templates on ACR. There are four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
+
+1. Push the templates to your Azure Container Registry.
+1. Enable Managed Identity on your Azure API for FHIR instance.
+1. Provide access of the ACR to the Azure API for FHIR Managed Identity.
+1. Register the ACR servers in the Azure API for FHIR.
+
+### Push templates to Azure Container Registry
+
+After creating an ACR instance, you can use the _FHIR Converter: Push Templates_ command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push the customized templates to the ACR. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose.
+
+### Enable Managed Identity on Azure API for FHIR
+
+Browse to your instance of Azure API for FHIR service in the Azure portal and select the **Identity** blade.
+Change the status to **On** to enable managed identity in Azure API for FHIR.
+
+![Enable Managed Identity](media/convert-data/fhir-mi-enabled.png)
+
+### Provide access of the ACR to Azure API for FHIR
+
+Navigate to Access Control (IAM) blade in your ACR instance and select _Add Role Assignments_.
+
+![ACR Role Assignment](media/convert-data/fhir-acr-role-assignment.png)
+
+Grant AcrPull role to your Azure API for FHIR service instance.
+
+![Add Role](media/convert-data/fhir-acr-role-add.png)
+
+### Register the ACR servers in Azure API for FHIR
+
+You can register up to twenty ACR servers in the Azure API for FHIR.
+
+Install the healthcareapis extension for the Azure CLI if needed:
+
+```azurecli-interactive
+az extension add -n healthcareapis
+```
+
+Register the ACR servers with the Azure API for FHIR following the examples below:
+
+#### Register a single ACR server
+
+```azurecli-interactive
+az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io" --resource-group fhir-test --resource-name fhirtest2021
+```
+
+#### Register multiple ACR servers
+
+```azurecli-interactive
+az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io fhiracr2020.azurecr.io" --resource-group fhir-test --resource-name fhirtest2021
+```
+
+### Verify
+
+Make a call to the $convert-data API specifying your template reference in the templateCollectionReference parameter.
+
+`<RegistryServer>/<imageName>@<imageDigest>`
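
For example, a custom reference can be composed from its three parts. A sketch in Python; the registry, image name, and digest values are placeholders, not real resources:

```python
def template_collection_reference(registry_server, image_name, image_digest):
    """Compose <RegistryServer>/<imageName>@<imageDigest> (illustrative helper)."""
    return f"{registry_server}/{image_name}@{image_digest}"

# Placeholder values for illustration only.
ref = template_collection_reference(
    "fhiracr2021.azurecr.io", "customtemplates", "sha256:" + "0" * 64)
```

Pinning to an image digest (rather than a tag) keeps conversion behavior stable even if the tag is later moved to a new image.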
+
+## Known issues and workarounds
+
+- Some default template files contain a UTF-8 byte order mark (BOM). As a result, the generated ID values will contain a BOM character, which can create an issue with the FHIR server. The workaround is to pull the Microsoft templates using the VS Code extension, remove the BOM characters from _ID/_Procedure.liquid_, _ID/_Provenance.liquid_, and _ID/_Immunization.liquid_, and push the templates to your own ACR.
+
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/customer-managed-key.md
+
+ Title: Configure customer-managed keys for Azure API for FHIR
+description: Bring your own key feature supported in Azure API for FHIR through Cosmos DB
+Last updated: 09/28/2020
+# Configure customer-managed keys at rest
+
+When you create a new Azure API for FHIR account, your data is encrypted using Microsoft-managed keys by default. Now, you can add a second layer of encryption for the data using your own key that you choose and manage yourself.
+
+In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you will have the option to specify an Azure Key Vault key URI. This key will be passed on to Cosmos DB when the DB account is provisioned. When a FHIR request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data. To get started, you can refer to the following links:
+
+- [Register the Azure Cosmos DB resource provider for your Azure subscription](../../cosmos-db/how-to-setup-cmk.md#register-resource-provider)
+- [Configure your Azure Key Vault instance](../../cosmos-db/how-to-setup-cmk.md#configure-your-azure-key-vault-instance)
+- [Add an access policy to your Azure Key Vault instance](../../cosmos-db/how-to-setup-cmk.md#add-an-access-policy-to-your-azure-key-vault-instance)
+- [Generate a key in Azure Key Vault](../../cosmos-db/how-to-setup-cmk.md#generate-a-key-in-azure-key-vault)
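
Before passing a key URI to the service, you can sanity-check its shape (`https://<vault-name>.vault.azure.net/keys/<key-name>[/<version>]`). A minimal sketch; the helper name is hypothetical:

```python
from urllib.parse import urlparse

def is_key_vault_key_uri(uri):
    """Loose shape check for an Azure Key Vault key URI (illustrative helper)."""
    parts = urlparse(uri)
    path = [p for p in parts.path.split("/") if p]
    return (parts.scheme == "https"
            and parts.hostname is not None
            and parts.hostname.endswith(".vault.azure.net")
            and len(path) >= 2
            and path[0] == "keys")
```

This only checks the URI's shape; whether the key exists and is accessible is verified by Key Vault and Cosmos DB at provisioning time.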
+
+## Using Azure portal
+
+When creating your Azure API for FHIR account on Azure portal, you can see a "Data Encryption" configuration option under the "Database Settings" on the "Additional Settings" tab. By default, the service-managed key option will be chosen.
+
+You can choose your key from the KeyPicker:
++
+Or you can specify your Azure Key Vault key here by selecting "Customer-managed key" option. You can enter the key URI here:
++
+For existing FHIR accounts, you can view the key encryption choice (service- or customer-managed key) in "Database" blade as below. The configuration option can't be modified once chosen. However, you can modify and update your key.
++
+In addition, you can create a new version of the specified key, after which your data will get encrypted with the new version without any service interruption. You can also remove access to the key to remove access to the data. When the key is disabled, queries will result in an error. If the key is re-enabled, queries will succeed again.
+
+## Using Azure PowerShell
+
+With your Azure Key Vault key URI, you can configure CMK using PowerShell by running the PowerShell command below:
+
+```powershell
+New-AzHealthcareApisService
+ -Name "myService"
+ -Kind "fhir-R4"
+ -ResourceGroupName "myResourceGroup"
+ -Location "westus2"
+ -CosmosKeyVaultKeyUri "https://<my-vault>.vault.azure.net/keys/<my-key>"
+```
+
+## Using Azure CLI
+
+As with the PowerShell method, you can configure CMK by passing your Azure Key Vault key URI under the `key-vault-key-uri` parameter and running the CLI command below:
+
+```azurecli-interactive
+az healthcareapis service create
+ --resource-group "myResourceGroup"
+ --resource-name "myResourceName"
+ --kind "fhir-R4"
+ --location "westus2"
+ --cosmos-db-configuration key-vault-key-uri="https://<my-vault>.vault.azure.net/keys/<my-key>"
+```
+
+## Using Azure Resource Manager Template
+
+With your Azure Key Vault key URI, you can configure CMK by passing it under the **keyVaultKeyUri** property in the **properties** object.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "services_myService_name": {
+ "defaultValue": "myService",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.HealthcareApis/services",
+ "apiVersion": "2020-03-30",
+ "name": "[parameters('services_myService_name')]",
+ "location": "westus2",
+ "kind": "fhir-R4",
+ "properties": {
+ "accessPolicies": [],
+ "cosmosDbConfiguration": {
+ "offerThroughput": 400,
+ "keyVaultKeyUri": "https://<my-vault>.vault.azure.net/keys/<my-key>"
+ },
+ "authenticationConfiguration": {
+ "authority": "https://login.microsoftonline.com/72f988bf-86f1-41af-91ab-2d7cd011db47",
+ "audience": "[concat('https://', parameters('services_myService_name'), '.azurehealthcareapis.com')]",
+ "smartProxyEnabled": false
+ },
+ "corsConfiguration": {
+ "origins": [],
+ "headers": [],
+ "methods": [],
+ "maxAge": 0,
+ "allowCredentials": false
+ }
+ }
+ }
+ ]
+}
+```
+
+And you can deploy the template with the following PowerShell script:
+
+```powershell
+$resourceGroupName = "myResourceGroup"
+$accountName = "mycosmosaccount"
+$accountLocation = "West US 2"
+$keyVaultKeyUri = "https://<my-vault>.vault.azure.net/keys/<my-key>"
+
+New-AzResourceGroupDeployment `
+ -ResourceGroupName $resourceGroupName `
+ -TemplateFile "deploy.json" `
+ -accountName $accountName `
+ -location $accountLocation `
+ -keyVaultKeyUri $keyVaultKeyUri
+```
+
+## Next steps
+
+In this article, you learned how to configure customer-managed keys at rest using Azure portal, PowerShell, CLI, and Resource Manager Template. You can check out the Azure Cosmos DB FAQ section for additional questions you might have:
+
+>[!div class="nextstepaction"]
+>[Cosmos DB: how to setup CMK](../../cosmos-db/how-to-setup-cmk.md#frequently-asked-questions)
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/de-identified-export.md
+
+ Title: Exporting de-identified data (preview) for Azure API for FHIR
+description: This article describes how to set up and use de-identified export
+Last updated: 09/28/2020
+# Exporting de-identified data (preview)
+
+> [!Note]
+> Results when using the de-identified export will vary based on factors such as the data input and the functions selected by the customer. Microsoft is unable to evaluate the de-identified export outputs or determine the acceptability for a customer's use cases and compliance needs. The de-identified export is not guaranteed to meet any specific legal, regulatory, or compliance requirements.
+
+The $export command can also be used to export de-identified data from the FHIR server. It uses the anonymization engine from [FHIR tools for anonymization](https://github.com/microsoft/FHIR-Tools-for-Anonymization), and takes anonymization config details in query parameters. You can create your own anonymization config file or use the [sample config file](https://github.com/microsoft/FHIR-Tools-for-Anonymization#sample-configuration-file-for-hipaa-safe-harbor-method) for HIPAA Safe Harbor method as a starting point.
+
+ `https://<<FHIR service base URL>>/$export?_container=<<container_name>>&_anonymizationConfig=<<config file name>>&_anonymizationConfigEtag=<<ETag on storage>>`
+
+> [!Note]
+> Right now the Azure API for FHIR only supports de-identified export at the system level ($export).
+
+|Query parameter | Example |Optionality| Description|
+|--|--|--|--|
+| _\_anonymizationConfig_ |DemoConfig.json|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). This file should be kept inside a container named **anonymization** within the same Azure storage account that is configured as the export location. |
+| _\_anonymizationConfigEtag_|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag using Azure storage explorer from the blob property|
+
+> [!IMPORTANT]
+> Both raw export and de-identified export write to the same Azure storage account specified as part of the export configuration. It is recommended that you use different containers for different de-identification configurations and manage user access at the container level.
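
Putting the query parameters from the table together into the request URL can be sketched as follows; the server URL and container name are placeholders:

```python
from urllib.parse import urlencode

def build_deidentified_export_url(base_url, container, config_name, config_etag=None):
    """Compose the system-level $export URL with anonymization parameters (illustrative)."""
    params = {"_container": container, "_anonymizationConfig": config_name}
    if config_etag:
        # Optional: pin the export to a specific version of the config file.
        params["_anonymizationConfigEtag"] = config_etag
    return f"{base_url}/$export?{urlencode(params)}"

url = build_deidentified_export_url(
    "https://myfhir.azurehealthcareapis.com", "deid-exports", "DemoConfig.json")
```

Recall that the named config file must live in the **anonymization** container of the storage account configured as the export location.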
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/device-data-through-iot-hub.md
+
+ Title: 'Tutorial: Receive device data through Azure IoT Hub'
+description: In this tutorial, you'll learn how to enable device data routing from IoT Hub into Azure API for FHIR through Azure IoT Connector for FHIR.
+Last updated: 11/13/2020
+# Tutorial: Receive device data through Azure IoT Hub
+
+Azure IoT Connector for Fast Healthcare Interoperability Resources (FHIR&#174;)* provides you the capability to ingest data from Internet of Medical Things (IoMT) devices into Azure API for FHIR. The [Deploy Azure IoT Connector for FHIR (preview) using Azure portal](iot-fhir-portal-quickstart.md) quickstart showed an example of a device managed by Azure IoT Central [sending telemetry](iot-fhir-portal-quickstart.md#connect-your-devices-to-iot) to Azure IoT Connector for FHIR. Azure IoT Connector for FHIR can also work with devices provisioned and managed through Azure IoT Hub. This tutorial walks through the procedure to connect and route device data from Azure IoT Hub to Azure IoT Connector for FHIR.
+
+## Prerequisites
+
+- An active Azure subscription - [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- Azure API for FHIR resource with at least one Azure IoT Connector for FHIR - [Deploy Azure IoT Connector for FHIR (preview) using Azure portal](iot-fhir-portal-quickstart.md)
+- Azure IoT Hub resource connected with real or simulated device(s) - [Create an IoT hub using the Azure portal](../../iot-hub/quickstart-send-telemetry-dotnet.md)
+
+> [!TIP]
+> If you are using an Azure IoT Hub simulated device application, feel free to pick the application of your choice amongst different supported languages and systems.
+
+## Get connection string for Azure IoT Connector for FHIR (preview)
+
+Azure IoT Hub requires a connection string to securely connect with your Azure IoT Connector for FHIR. Create a new connection string for your Azure IoT Connector for FHIR as described in [Generate a connection string](iot-fhir-portal-quickstart.md#generate-a-connection-string). Preserve this connection string to be used in the next step.
+
+Azure IoT Connector for FHIR uses an Azure Event Hub instance under the hood to receive device messages. The connection string created above is basically the connection string to this underlying Event Hub.
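
Event Hub connection strings are semicolon-separated `Key=Value` pairs. A minimal parsing sketch in Python; the sample values are placeholders, not real credentials:

```python
def parse_connection_string(conn_str):
    """Split 'Key=Value;Key=Value' pairs; values may themselves contain '='."""
    return dict(part.split("=", 1) for part in conn_str.split(";") if part)

# Placeholder connection string in the standard Event Hub format.
sample = ("Endpoint=sb://example.servicebus.windows.net/;"
          "SharedAccessKeyName=devicedatasender;"
          "SharedAccessKey=fakekey123=;"
          "EntityPath=devicedata")
parts = parse_connection_string(sample)
```

Splitting on only the first `=` matters because shared access keys are base64-encoded and often end with `=` characters.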
+
+## Connect Azure IoT Hub with the Azure IoT Connector for FHIR (preview)
+
+Azure IoT Hub supports a feature called [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) that provides the capability to send device data to various Azure services such as Event Hub, Storage Account, and Service Bus. Azure IoT Connector for FHIR leverages this feature to connect and send device data from Azure IoT Hub to its Event Hub endpoint.
+
+> [!NOTE]
+> At this time, you can only use PowerShell or CLI commands to [create message routing](../../iot-hub/tutorial-routing.md) because Azure IoT Connector for FHIR's Event Hub is not hosted in the customer subscription and hence is not visible to you through the Azure portal. However, once the message route objects are added using PowerShell or CLI, they are visible in the Azure portal and can be managed from there.
+
+Setting up message routing consists of two steps:
+
+### Add an endpoint
+This step defines an endpoint to which the IoT Hub would route the data. Create this endpoint using either [Add-AzIotHubRoutingEndpoint](/powershell/module/az.iothub/Add-AzIotHubRoutingEndpoint) PowerShell command or [az iot hub routing-endpoint create](/cli/azure/iot/hub/routing-endpoint#az-iot-hub-routing-endpoint-create) CLI command, based on your preference.
+
+Here is the list of parameters to use with the command to create an endpoint:
+
+|PowerShell Parameter|CLI Parameter|Description|
+|--|--|--|
+|ResourceGroupName|resource-group|Resource group name of your IoT Hub resource.|
+|Name|hub-name|Name of your IoT Hub resource.|
+|EndpointName|endpoint-name|Use a name that you would like to assign to the endpoint being created.|
+|EndpointType|endpoint-type|Type of endpoint that IoT Hub needs to connect with. Use literal value of "EventHub" for PowerShell and "eventhub" for CLI.|
+|EndpointResourceGroup|endpoint-resource-group|Resource group name for your Azure IoT Connector for FHIR's Azure API for FHIR resource. You can get this value from the Overview page of Azure API for FHIR.|
+|EndpointSubscriptionId|endpoint-subscription-id|Subscription Id for your Azure IoT Connector for FHIR's Azure API for FHIR resource. You can get this value from the Overview page of Azure API for FHIR.|
+|ConnectionString|connection-string|Connection string to your Azure IoT Connector for FHIR. Use the value you obtained in the previous step.|
+
+### Add a message route
+This step defines a message route using the endpoint created above. Create a route using either [Add-AzIotHubRoute](/powershell/module/az.iothub/Add-AzIoTHubRoute) PowerShell command or [az iot hub route create](/cli/azure/iot/hub/route#az-iot-hub-route-create) CLI command, based on your preference.
+
+Here is the list of parameters to use with the command to add a message route:
+
+|PowerShell Parameter|CLI Parameter|Description|
+|--|--|--|
+|ResourceGroupName|g|Resource group name of your IoT Hub resource.|
+|Name|hub-name|Name of your IoT Hub resource.|
+|EndpointName|endpoint-name|Name of the endpoint you have created above.|
+|RouteName|route-name|A name you want to assign to the message route being created.|
+|Source|source-type|Type of data to send to the endpoint. Use literal value of "DeviceMessages" for PowerShell and "devicemessages" for CLI.|
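
To see how the table's parameters line up with the CLI flags, here is a sketch that renders the `az iot hub route create` invocation; all values are placeholders:

```python
def render_az_route_create(resource_group, hub_name, endpoint_name,
                           route_name, source_type="devicemessages"):
    """Render the `az iot hub route create` invocation from the table's parameters (sketch)."""
    return (f"az iot hub route create -g {resource_group} "
            f"--hub-name {hub_name} --endpoint-name {endpoint_name} "
            f"--route-name {route_name} --source-type {source_type}")

cmd = render_az_route_create("my-iot-rg", "my-iot-hub", "fhir-endpoint", "fhir-route")
```

The endpoint name must match the endpoint created in the previous step; the source type `devicemessages` routes device-to-cloud telemetry.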
+
+## Send device message to IoT Hub
+
+Use your device (real or simulated) to send the sample heart rate message shown below to Azure IoT Hub. This message will be routed to Azure IoT Connector for FHIR, where it will be transformed into a FHIR Observation resource and stored in the Azure API for FHIR.
+
+```json
+{
+ "HeartRate": 80,
+ "RespiratoryRate": 12,
+ "HeartRateVariability": 64,
+ "BodyTemperature": 99.08839032397609,
+ "BloodPressure": {
+ "Systolic": 23,
+ "Diastolic": 34
+ },
+ "Activity": "walking"
+}
+```
+> [!IMPORTANT]
+> Make sure to send the device message that conforms to the [mapping templates](iot-mapping-templates.md) configured with your Azure IoT Connector for FHIR.
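
A lightweight pre-flight check that a payload carries the fields your mapping templates expect can be sketched as follows; the required-field list is an assumption based on the sample message above:

```python
import json

# Assumption for illustration: the heart-rate mapping template maps this field.
REQUIRED_FIELDS = {"HeartRate"}

def missing_fields(message_json, required=REQUIRED_FIELDS):
    """Return required fields absent from a device message (illustrative helper)."""
    payload = json.loads(message_json)
    return sorted(required - payload.keys())
```

Messages missing mapped fields would not produce the expected Observation resources, so catching them on the device side simplifies troubleshooting.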
+
+## View device data in Azure API for FHIR
+
+You can view the FHIR Observation resource(s) created by Azure IoT Connector for FHIR on Azure API for FHIR using Postman. Set up your [Postman to access Azure API for FHIR](access-fhir-postman-tutorial.md) and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value submitted in the above sample message.
+
+> [!TIP]
+> Ensure that your user has appropriate access to Azure API for FHIR data plane. Use [Azure role-based access control (Azure RBAC)](configure-azure-rbac.md) to assign required data plane roles.
++
+## Next steps
+
+In this quickstart guide, you set up Azure IoT Hub to route device data to Azure IoT Connector for FHIR. Select from the next steps below to learn more about Azure IoT Connector for FHIR:
+
+Understand different stages of data flow within Azure IoT Connector for FHIR.
+
+>[!div class="nextstepaction"]
+>[Azure IoT Connector for FHIR data flow](iot-data-flow.md)
+
+Learn how to configure IoT Connector using device and FHIR mapping templates.
+
+>[!div class="nextstepaction"]
+>[Azure IoT Connector for FHIR mapping templates](iot-mapping-templates.md)
+
+*In the Azure portal, Azure IoT Connector for FHIR is referred to as IoT Connector (preview). FHIR is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/enable-diagnostic-logging.md
+
+ Title: Enable diagnostic logging in Azure API for FHIR
+description: This article explains how to enable diagnostic logging in Azure API for FHIR®
+Last updated: 03/03/2021
+# Enable Diagnostic Logging in Azure API for FHIR
+
+In this article, you will learn how to enable diagnostic logging in Azure API for FHIR and be able to review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements (such as HIPAA) is a must. The feature in Azure API for FHIR that enables diagnostic logs is the [**Diagnostic settings**](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal.
+
+## View and Download FHIR Metrics Data
+
+You can view the metrics under Monitoring | Metrics from the portal. The metrics include Number of Requests, Average Latency, Number of Errors, Data Size, RUs Used, Number of requests that exceeded capacity, and Availability (in %). The screenshot below shows RUs used for a sample environment with very little activity over the last 7 days. You can download the data in JSON format.
+
+ :::image type="content" source="media/diagnostic-logging/fhir-metrics-rus-screen.png" alt-text="Azure API for FHIR Metrics from the portal" lightbox="media/diagnostic-logging/fhir-metrics-rus-screen.png":::
+
+## Enable audit logs
+1. To enable diagnostic logging in Azure API for FHIR, select your Azure API for FHIR service in the Azure portal
+2. Navigate to **Diagnostic settings**
+
+ :::image type="content" source="media/diagnostic-logging/diagnostic-settings-screen.png" alt-text="Add Azure FHIR Diagnostic Settings." lightbox="media/diagnostic-logging/diagnostic-settings-screen.png":::
+
+3. Select **+ Add diagnostic setting**
+
+4. Enter a name for the setting
+
+5. Select the method you want to use to access your diagnostic logs:
+
+   1. **Archive to a storage account** for auditing or manual inspection. The storage account you want to use must already be created.
+   2. **Stream to an event hub** for ingestion by a third-party service or custom analytics solution. You'll need to create an event hub namespace and event hub policy before you can configure this step.
+   3. **Stream to the Log Analytics** workspace in Azure Monitor. You'll need to create your Log Analytics workspace before you can select this option.
+
+6. Select **AuditLogs** and/or **AllMetrics**. The metrics include service name, availability, data size, total latency, total requests, total errors, and timestamp. You can find more detail on [supported metrics](https://docs.microsoft.com/azure/azure-monitor/essentials/metrics-supported#microsofthealthcareapisservices).
+
+ :::image type="content" source="media/diagnostic-logging/fhir-diagnostic-setting.png" alt-text="Azure FHIR Diagnostic Settings. Select AuditLogs and/or AllMetrics." lightbox="media/diagnostic-logging/fhir-diagnostic-setting.png":::
+
+7. Select **Save**.
+
+> [!Note]
+> It might take up to 15 minutes for the first logs to show in Log Analytics. Also, if Azure API for FHIR is moved from one resource group or subscription to another, update the setting once the move is complete.
+
+For more information on how to work with diagnostic logs, please refer to the [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md).
+
+## Audit log details
+At this time, the Azure API for FHIR service returns the following fields in the audit log:
+
+|Field Name |Type |Notes |
+|-|-|-|
+|CallerIdentity|Dynamic|A generic property bag containing identity information|
+|CallerIdentityIssuer|String|Issuer|
+|CallerIdentityObjectId|String|Object_Id|
+|CallerIPAddress|String|The caller's IP address|
+|CorrelationId|String|Correlation ID|
+|FhirResourceType|String|The resource type for which the operation was executed|
+|LogCategory|String|The log category (we are currently returning 'AuditLogs' LogCategory)|
+|Location|String|The location of the server that processed the request (for example, South Central US)|
+|OperationDuration|Int|The time it took to complete this request, in seconds|
+|OperationName|String|Describes the type of operation (for example, update, search-type)|
+|RequestUri|String|The request URI|
+|ResultType|String|The available values currently are **Started**, **Succeeded**, or **Failed**|
+|StatusCode|Int|The HTTP status code (for example, 200)|
+|TimeGenerated|DateTime|Date and time of the event|
+|Properties|String|Describes the properties of the FhirResourceType|
+|SourceSystem|String|Source system (always Azure in this case)|
+|TenantId|String|Tenant ID|
+|Type|String|Type of log (always MicrosoftHealthcareApisAuditLog in this case)|
+|_ResourceId|String|Details about the resource|
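+
+As an illustration only, a single audit log entry combining the fields above might look like the following hypothetical record (all values are invented for the example):
+
+```json
+{
+  "TimeGenerated": "2021-03-03T18:05:42Z",
+  "OperationName": "search-type",
+  "FhirResourceType": "Patient",
+  "RequestUri": "https://contoso.azurehealthcareapis.com/Patient?name=smith",
+  "ResultType": "Succeeded",
+  "StatusCode": 200,
+  "OperationDuration": 1,
+  "CallerIPAddress": "203.0.113.10",
+  "LogCategory": "AuditLogs",
+  "SourceSystem": "Azure",
+  "Type": "MicrosoftHealthcareApisAuditLog"
+}
+```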
+
+## Sample queries
+
+Here are a few basic Log Analytics (Kusto) queries you can use to explore your log data.
+
+Run this query to see the **100 most recent** logs:
+
+```kusto
+MicrosoftHealthcareApisAuditLogs
+| limit 100
+```
+
+Run this query to group operations by **FHIR Resource Type**:
+
+```kusto
+MicrosoftHealthcareApisAuditLogs
+| summarize count() by FhirResourceType
+```
+
+Run this query to get all the **failed results**:
+
+```kusto
+MicrosoftHealthcareApisAuditLogs
+| where ResultType == "Failed"
+```
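+
+These queries can be combined. For example, the following sketch (using the column names from the audit log table above) counts failed requests over the last 24 hours, grouped by status code and operation:
+
+```kusto
+MicrosoftHealthcareApisAuditLogs
+| where TimeGenerated > ago(24h)
+| where ResultType == "Failed"
+| summarize FailedCount = count() by StatusCode, OperationName
+| order by FailedCount desc
+```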
+
+## Conclusion
+Having access to diagnostic logs is essential for monitoring a service and providing compliance reports. Azure API for FHIR enables these actions through diagnostic logs.
+
+FHIR is the registered trademark of HL7 and is used with the permission of HL7.
+
+## Next steps
+In this article, you learned how to enable audit logs for Azure API for FHIR. Next, learn about additional settings you can configure in the Azure API for FHIR.
+
+>[!div class="nextstepaction"]
+>[Additional Settings](azure-api-for-fhir-additional-settings.md)
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/export-data.md
+
+ Title: Executing the export by invoking $export command on Azure API for FHIR
+description: This article describes how to export FHIR data using $export
+ Last updated : 2/19/2021
+# How to export FHIR data
+
+The Bulk Export feature allows data to be exported from the FHIR Server per the [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/index.html).
+
+Before using $export, you will want to make sure that the Azure API for FHIR is configured to use it. For configuring export settings and creating an Azure storage account, refer to [the configure export data page](configure-export-data.md).
+
+## Using $export command
+
+After configuring the Azure API for FHIR for export, you can use the $export command to export the data out of the service. The data will be stored in the storage account you specified while configuring export. To learn how to invoke the $export command in the FHIR server, read the documentation on the [HL7 FHIR $export specification](https://hl7.org/Fhir/uv/bulkdata/export/index.html).
+
+The Azure API for FHIR supports $export at the following levels:
+* [System](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpoint---system-level-export): `GET https://<<FHIR service base URL>>/$export`
+* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpoint---all-patients): `GET https://<<FHIR service base URL>>/Patient/$export`
+* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpoint---group-of-patients) - Azure API for FHIR exports all related resources but does not export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export`
+
+When data is exported, a separate file is created for each resource type. To ensure that the exported files don't become too large, we create a new file once a single exported file grows larger than 64 MB. The result is that you may get multiple files for each resource type, which will be enumerated (for example, Patient-1.ndjson, Patient-2.ndjson).
+
+> [!Note]
+> `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if the resource is in a compartment of more than one resource, or is in multiple groups.
+
+In addition, you can check the export status through the URL returned by the location header while the job is queued, as well as cancel the export job.
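+
+As a sketch of what this looks like per the HL7 Bulk Data specification (the exact URL is whatever your kick-off request returned; it is shown here as a placeholder):
+
+```
+GET <status URL from the location header>
+    202 Accepted while the export job is still running
+    200 OK, with a JSON body listing the output files, once complete
+
+DELETE <status URL from the location header>
+    Cancels the export job
+```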
+
+### Exporting FHIR data to ADLS Gen2
+
+Currently we support $export for ADLS Gen2 enabled storage accounts, with the following limitations:
+
+- You cannot take advantage of [hierarchical namespaces](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-namespace) yet; there isn't a way to target export to a specific subdirectory within the container. We only provide the ability to target a specific container (where we create a new folder for each export).
+
+- Once an export is complete, we never export anything to that folder again, since subsequent exports to the same container will be inside a newly created folder.
+
+## Settings and parameters
+
+### Headers
+There are two required header parameters that must be set for $export jobs. The values are defined by the current [$export specification](https://hl7.org/Fhir/uv/bulkdata/export/index.html#headers).
+* **Accept** - application/fhir+json
+* **Prefer** - respond-async
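+
+Putting the endpoint and the required headers together, a system-level kick-off request looks like the following sketch (the Authorization header is a placeholder for whatever access token your client obtains):
+
+```
+GET https://<<FHIR service base URL>>/$export
+Accept: application/fhir+json
+Prefer: respond-async
+Authorization: Bearer <access token>
+```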
+
+### Query parameters
+The Azure API for FHIR supports the following query parameters. All of these parameters are optional:
+
+|Query parameter | Defined by the FHIR Spec? | Description|
+|-|-|-|
+| \_outputFormat | Yes | Currently supports three values to align to the FHIR Spec: application/fhir+ndjson, application/ndjson, or just ndjson. All export jobs will return `ndjson` and the passed value has no effect on code behavior. |
+| \_since | Yes | Allows you to only export resources that have been modified since the time provided |
+| \_type | Yes | Allows you to specify which types of resources will be included. For example, \_type=Patient would return only patient resources|
+| \_typeFilter | Yes | To request finer-grained filtering, you can use \_typeFilter along with the \_type parameter. The value of the \_typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results |
+| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported to a new folder within that container. If the container is not specified, the data will be exported to a new container named using the timestamp and job ID. |
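+
+For example, combining several of the optional parameters above (the values are illustrative), an export of Patient and Observation resources modified since the start of 2021 into a container named `exportdata` could be requested as:
+
+```
+GET https://<<FHIR service base URL>>/$export?_since=2021-01-01T00:00:00Z&_type=Patient,Observation&_container=exportdata
+```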
+
+## Secure Export to Azure Storage
+
+Azure API for FHIR supports a secure export operation. One option to run
+a secure export is to permit specific IP addresses associated with Azure API for FHIR to access the Azure storage account. Depending on whether the storage account is in the same or a different location from that of the
+Azure API for FHIR, the configurations are different.
+
+### When the Azure storage account is in a different region
+
+Select the networking blade of the Azure storage account from the
+portal.
+
+ :::image type="content" source="media/export-data/storage-networking.png" alt-text="Azure Storage Networking Settings." lightbox="media/export-data/storage-networking.png":::
+
+Select **Selected networks** and, under the **Firewall | Add IP ranges to allow access from the internet or your on-premises networks** section, specify the IP address in the **Address range** box. You can find the IP address in the table below for the Azure region where the Azure API for FHIR service is provisioned.
+
+|**Azure Region** |**Public IP Address** |
+|:-|:-|
+| Australia East | 20.53.44.80 |
+| Canada Central | 20.48.192.84 |
+| Central US | 52.182.208.31 |
+| East US | 20.62.128.148 |
+| East US 2 | 20.49.102.228 |
+| East US 2 EUAP | 20.39.26.254 |
+| Germany North | 51.116.51.33 |
+| Germany West Central | 51.116.146.216 |
+| Japan East | 20.191.160.26 |
+| Korea Central | 20.41.69.51 |
+| North Central US | 20.49.114.188 |
+| North Europe | 52.146.131.52 |
+| South Africa North | 102.133.220.197 |
+| South Central US | 13.73.254.220 |
+| Southeast Asia | 23.98.108.42 |
+| Switzerland North | 51.107.60.95 |
+| UK South | 51.104.30.170 |
+| UK West | 51.137.164.94 |
+| West Central US | 52.150.156.44 |
+| West Europe | 20.61.98.66 |
+| West US 2 | 40.64.135.77 |
+