Updates from: 02/10/2021 04:09:09
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services https://docs.microsoft.com/en-us/azure/active-directory-domain-services/faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/faqs.md
@@ -10,7 +10,7 @@
Previously updated : 09/30/2020 Last updated : 02/09/2021
@@ -148,7 +148,7 @@ Azure AD Domain Services is included in the free trial for Azure. You can sign u
No. Once you've enabled an Azure AD Domain Services managed domain, the service is available within your selected virtual network until you delete the managed domain. There's no way to pause the service. Billing continues on an hourly basis until you delete the managed domain.

### Can I fail over Azure AD Domain Services to another region for a DR event?
-No. Azure AD Domain Services doesn't currently provide a geo-redundant deployment model. It's limited to a single virtual network in an Azure region. If you want to utilize multiple Azure regions, you need to run your Active Directory Domain Controllers on Azure IaaS VMs. For architecture guidance, see [Extend your on-premises Active Directory domain to Azure](/azure/architecture/reference-architectures/identity/adds-extend-domain).
+Yes, to provide geographical resiliency for a managed domain, you can create an additional [replica set](tutorial-create-replica-set.md) in a peered virtual network in any Azure region that supports Azure AD DS. Replica sets share the same namespace and configuration as the managed domain.
### Can I get Azure AD Domain Services as part of Enterprise Mobility Suite (EMS)? Do I need Azure AD Premium to use Azure AD Domain Services?

No. Azure AD Domain Services is a pay-as-you-go Azure service and isn't part of EMS. Azure AD Domain Services can be used with all editions of Azure AD (Free and Premium). You're billed on an hourly basis, depending on usage.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/customize-application-attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/customize-application-attributes.md
@@ -16,7 +16,7 @@
Microsoft Azure AD provides support for user provisioning to third-party SaaS applications such as Salesforce, G Suite and others. If you enable user provisioning for a third-party SaaS application, the Azure portal controls its attribute values through attribute-mappings.
-Before you get started, make sure you are familiar with app management and **Single Sign-On (SSO)** concepts, check out the following links:
+Before you get started, make sure you are familiar with app management and **Single Sign-On (SSO)** concepts. Check out the following links:
- [Quickstart Series on App Management in Azure AD](../manage-apps/view-applications-portal.md)
- [What is Single Sign-On (SSO)?](../manage-apps/what-is-single-sign-on.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/how-provisioning-works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/how-provisioning-works.md
@@ -38,7 +38,7 @@ To request an automatic Azure AD provisioning connector for an app that doesn't
## Authorization
-Credentials are required for Azure AD to connect to the application's user management API. While you're configuring automatic user provisioning for an application, you'll need to enter valid credentials. For gallery applications, you can find credential types and requirements for the application by referring to the app tutorial. For non-gallery applications, you can refer to the [SCIM](./use-scim-to-provision-users-and-groups.md#authorization-for-provisioning-connectors-in-the-application-gallery) documentation to understand the credential types and requirements. In the Azure portal, you'll be able to test the credentials by having Azure AD attempt to connect to the app's provisioning app using the supplied credentials.
+Credentials are required for Azure AD to connect to the application's user management API. While you're configuring automatic user provisioning for an application, you'll need to enter valid credentials. For gallery applications, you can find credential types and requirements for the application by referring to the app tutorial. For non-gallery applications, you can refer to the [SCIM](./use-scim-to-provision-users-and-groups.md#authorization-to-provisioning-connectors-in-the-application-gallery) documentation to understand the credential types and requirements. In the Azure portal, you'll be able to test the credentials by having Azure AD attempt to connect to the app's provisioning app using the supplied credentials.
## Mapping attributes
@@ -213,4 +213,4 @@ When developing an application, always support both soft deletes and hard delete
[Build a SCIM endpoint and configure provisioning when creating your own app](../app-provisioning/use-scim-to-provision-users-and-groups.md)
-[Troubleshoot problems with configuring and provisioning users to an application](./application-provisioning-config-problem.md).
+[Troubleshoot problems with configuring and provisioning users to an application](./application-provisioning-config-problem.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
@@ -15,7 +15,7 @@
# Tutorial: Develop and plan provisioning for a SCIM endpoint
-As an application developer, you can use the System for Cross-Domain Identity Management (SCIM) user management API to enable automatic provisioning of users and groups between your application and Azure AD. This article describes how to build a SCIM endpoint and integrate with the Azure AD provisioning service. The SCIM specification provides a common user schema for provisioning. When used in conjunction with federation standards like SAML or OpenID Connect, SCIM gives administrators an end-to-end, standards-based solution for access management.
+As an application developer, you can use the System for Cross-Domain Identity Management (SCIM) user management API to enable automatic provisioning of users and groups between your application and Azure AD (AAD). This article describes how to build a SCIM endpoint and integrate with the AAD provisioning service. The SCIM specification provides a common user schema for provisioning. When used in conjunction with federation standards like SAML or OpenID Connect, SCIM gives administrators an end-to-end, standards-based solution for access management.
![Provisioning from Azure AD to an app with SCIM](media/use-scim-to-provision-users-and-groups/scim-provisioning-overview.png)
@@ -23,29 +23,55 @@ SCIM is a standardized definition of two endpoints: a `/Users` endpoint and a `/
The standard user object schema and rest APIs for management defined in SCIM 2.0 (RFC [7642](https://tools.ietf.org/html/rfc7642), [7643](https://tools.ietf.org/html/rfc7643), [7644](https://tools.ietf.org/html/rfc7644)) allow identity providers and apps to more easily integrate with each other. Application developers that build a SCIM endpoint can integrate with any SCIM-compliant client without having to do custom work.
-Automating provisioning to an application requires building and integrating a SCIM endpoint with the Azure AD SCIM client. Perform the following steps to start provisioning users and groups into your application.
+To automate provisioning to an application, you'll need to build a SCIM endpoint and integrate it with the Azure AD SCIM client. Use the following steps to start provisioning users and groups into your application.
- * **[Step 1: Design your user and group schema.](#step-1-design-your-user-and-group-schema)** Identify the objects and attributes your application needs, and determine how they map to the user and group schema supported by the Azure AD SCIM implementation.
+1. Design your user and group schema
- * **[Step 2: Understand the Azure AD SCIM implementation.](#step-2-understand-the-azure-ad-scim-implementation)** Understand how the Azure AD SCIM client is implemented, and model your SCIM protocol request handling and responses.
+ Identify the application's objects and attributes to determine how they map to the user and group schema supported by the AAD SCIM implementation.
- * **[Step 3: Build a SCIM endpoint.](#step-3-build-a-scim-endpoint)** An endpoint must be SCIM 2.0-compatible to integrate with the Azure AD provisioning service. As an option, you can use Microsoft Common Language Infrastructure (CLI) libraries and code samples to build your endpoint. These samples are for reference and testing only; we recommend against coding your production app to take a dependency on them.
+1. Understand the AAD SCIM implementation
- * **[Step 4: Integrate your SCIM endpoint with the Azure AD SCIM client.](#step-4-integrate-your-scim-endpoint-with-the-azure-ad-scim-client)** If your organization is using a third-party application that implements the profile of SCIM 2.0 that Azure AD supports, you can start automating both provisioning and deprovisioning of users and groups right away.
+ Understand how the AAD SCIM client is implemented to model your SCIM protocol request handling and responses.
- * **[Step 5: Publish your application to the Azure AD application gallery.](#step-5-publish-your-application-to-the-azure-ad-application-gallery)** Make it easy for customers to discover your application and easily configure provisioning.
+1. Build a SCIM endpoint
+
+ An endpoint must be SCIM 2.0-compatible to integrate with the AAD provisioning service. As an option, use Microsoft Common Language Infrastructure (CLI) libraries and code samples to build your endpoint. These samples are for reference and testing only; we recommend against using them as dependencies in your production app.
+
+1. Integrate your SCIM endpoint with the AAD SCIM client
+
+ If your organization uses a third-party application to implement a profile of SCIM 2.0 that AAD supports, you can quickly automate both provisioning and deprovisioning of users and groups.
+
+1. Publish your application to the AAD application gallery
+
+ Make it easy for customers to discover your application and easily configure provisioning.
![Steps for integrating a SCIM endpoint with Azure AD](media/use-scim-to-provision-users-and-groups/process.png)
-## Step 1: Design your user and group schema
+## Design your user and group schema
+
+Each application requires different attributes to create a user or group. Start your integration by identifying the required objects (users, groups) and attributes (name, manager, job title, etc.) that your application needs.
+
+The SCIM standard defines a schema for managing users and groups.
+
+The **core** user schema only requires three attributes (all other attributes are optional):
+
+- `id`, service provider defined identifier
+- `externalId`, client defined identifier
+- `meta`, *read-only* metadata maintained by the service provider
+
+In addition to the **core** user schema, the SCIM standard defines an **enterprise** user extension with a model for extending the user schema to meet your application's needs.

-Every application requires different attributes to create a user or group. Start your integration by identifying the objects (users, groups) and attributes (name, manager, job title, etc.) that your application requires. The SCIM standard defines a schema for managing users and groups. The core user schema only requires three attributes: **id** (service provider defined identifier), **externalId** (client defined identifier), and **meta** (read-only metadata maintained by the service provider). All other attributes are optional. In addition to the core user schema, the SCIM standard defines an enterprise user extension and a model for extending the user schema to meet your application's needs. If, for example, your application requires a user's manager, you can use the enterprise user schema to collect the user's manager and the core schema to collect the user's email. To design your schema, follow the steps below:
- 1. List the attributes your application requires. It can be helpful to break down your requirements into the attributes needed for authentication (e.g. loginName and email), attributes needed to manage the lifecycle of the user (e.g. status / active), and other attributes needed for your particular application to work (e.g. manager, tag).
- 2. Check whether those attributes are already defined in the core user schema or enterprise user schema. If any attributes that you need aren't covered in the core or enterprise user schemas, you will need to define an extension to the user schema that covers the attributes you need. In the example below, we've added an extension to the user to allow provisioning a "tag" on a user. It is best to start with just the core and enterprise user schemas and expand out to additional custom schemas later.
- 3. Map the SCIM attributes to the user attributes in Azure AD. If one of the attributes you have defined in your SCIM endpoint does not have a clear counterpart on the Azure AD user schema, there is a good chance the data isn't stored on the user object at all on most tenants. Consider whether this attribute can be optional for creating a user. If the attribute is critical for your application to work, guide the tenant administrator to extend their schema or use an extension attribute as shown below for the "tags" property.
+For example, if your application requires both a user's email and user's manager, use the **core** schema to collect the user's email and the **enterprise** user schema to collect the user's manager.
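The split between the core schema and the enterprise extension can be made concrete. The following is an illustrative Python sketch of such a user resource, not the article's own JSON sample; the `id`, `externalId`, and address values are made-up placeholders:

```python
import json

# Hypothetical SCIM user resource: the core schema supplies the email,
# the enterprise extension supplies the manager.
ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE],
    "id": "48af03ac28ad4fb88478",          # service-provider defined identifier
    "externalId": "58342554-38d6-4ec8",    # client (Azure AD) defined identifier
    "userName": "Test_User@tenant.onmicrosoft.com",
    "emails": [{"value": "Test_User@tenant.onmicrosoft.com",
                "type": "work", "primary": True}],
    ENTERPRISE: {"manager": "Manager_User@tenant.onmicrosoft.com"},
    "meta": {"resourceType": "User"},      # read-only, maintained by the provider
}

print(json.dumps(user, indent=2))
```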
-### Table 1: Outline the attributes that you need
-| Step 1: Determine attributes your app requires| Step 2: Map app requirements to SCIM standard| Step 3: Map SCIM attributes to the Azure AD attributes|
+To design your schema, follow these steps:
+
+1. List the attributes your application requires, then categorize them into attributes needed for authentication (e.g. loginName and email), attributes needed to manage the user lifecycle (e.g. status / active), and all other attributes needed for the application to work (e.g. manager, tag).
+
+1. Check if the attributes are already defined in the **core** user schema or **enterprise** user schema. If not, you must define an extension to the user schema that covers the missing attributes. See the example below for a user schema extension that allows provisioning a user `tag`.
+
+1. Map SCIM attributes to the user attributes in Azure AD. If one of the attributes you have defined in your SCIM endpoint does not have a clear counterpart on the Azure AD user schema, guide the tenant administrator to extend their schema or use an extension attribute as shown below for the `tags` property.
+
+|Required app attribute|Mapped SCIM attribute|Mapped Azure AD attribute|
|--|--|--|
|loginName|userName|userPrincipalName|
|firstName|name.givenName|givenName|
@@ -55,7 +81,7 @@ Every application requires different attributes to create a user or group. Start
|tag|urn:ietf:params:scim:schemas:extension:2.0:CustomExtension:tag|extensionAttribute1|
|status|active|isSoftDeleted (computed value not stored on user)|
-The schema defined above would be represented using the JSON payload below. Note that in addition to the attributes required for the application, the JSON representation includes the required `id`, `externalId`, and `meta` attributes.
+**Example list of required attributes**
```json
{
@@ -85,9 +111,13 @@ The schema defined above would be represented using the JSON payload below. Note
  }
}
```
+**Example schema defined by a JSON payload**
+
+> [!NOTE]
+> In addition to the attributes required for the application, the JSON representation also includes the required `id`, `externalId`, and `meta` attributes.
+
+To map the default user attributes in Azure AD to the SCIM RFC, it helps to categorize attributes between `/User` and `/Group`. See [how attributes are mapped between Azure AD and your SCIM endpoint](customize-application-attributes.md).
-### Table 2: Default user attribute mapping
-You can then use the table below to understand how the attributes your application requires could map to an attribute in Azure AD and the SCIM RFC. You can [customize](customize-application-attributes.md) how attributes are mapped between Azure AD and your SCIM endpoint. Note that you don't need to support both users and groups or all the attributes shown below. They are a reference for how attributes in Azure AD are often mapped to properties in the SCIM protocol.
| Azure Active Directory user | "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User" |
| --- | --- |
@@ -110,8 +140,7 @@ You can then use the table below to understand how the attributes your applicati
| telephone-Number |phoneNumbers[type eq "work"].value |
| user-PrincipalName |userName |
-### Table 3: Default group attribute mapping
+**Example list of user and group attributes**
| Azure Active Directory group | urn:ietf:params:scim:schemas:core:2.0:Group |
| --- | --- |
@@ -122,10 +151,14 @@ You can then use the table below to understand how the attributes your applicati
| objectId |externalId |
| proxyAddresses |emails[type eq "other"].Value |
-There are several endpoints defined in the SCIM RFC. You can get started with the /User endpoint and then expand from there. The /Schemas endpoint is helpful when using custom attributes or if your schema changes frequently. It enables a client to retrieve the most up-to-date schema automatically. The /Bulk endpoint is especially helpful when supporting groups. The table below describes the various endpoints defined in the SCIM standard.
-
-### Table 4: Determine the endpoints that you would like to develop
-|ENDPOINT|DESCRIPTION|
+**Example list of group attributes**
+
+> [!NOTE]
+> You are not required to support both users and groups, or all of the attributes shown here; this is only a reference for how attributes in Azure AD are often mapped to properties in the SCIM protocol.
+
+There are several endpoints defined in the SCIM RFC. You can start with the `/User` endpoint and then expand from there.
+
+|Endpoint|Description|
|--|--|
|/User|Perform CRUD operations on a user object.|
|/Group|Perform CRUD operations on a group object.|
@@ -134,49 +167,54 @@ There are several endpoints defined in the SCIM RFC. You can get started with th
|/Schemas|The set of attributes supported by each client and service provider can vary. One service provider might include `name`, `title`, and `emails`, while another service provider uses `name`, `title`, and `phoneNumbers`. The schemas endpoint allows for discovery of the attributes supported.|
|/Bulk|Bulk operations allow you to perform operations on a large collection of resource objects in a single operation (e.g. update memberships for a large group).|
+**Example list of endpoints**
+
+> [!NOTE]
+> Use the `/Schemas` endpoint to support custom attributes or if your schema changes frequently as it enables a client to retrieve the most up-to-date schema automatically. Use the `/Bulk` endpoint to support groups.
+
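The `ListResponse` envelope that query responses must use (including the zero-result case shown later in the article) can be sketched in a few lines. This is our own illustrative Python helper, not reference code:

```python
def list_response(resources, start_index=1, items_per_page=20):
    """Wrap query results in the SCIM 2.0 ListResponse envelope.

    Every response to a query/filter request should use this shape,
    even when the query matches zero resources.
    """
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
        "totalResults": len(resources),
        "Resources": resources,
        "startIndex": start_index,
        "itemsPerPage": items_per_page,
    }

# A zero-result query still returns a well-formed ListResponse.
empty = list_response([])
print(empty["totalResults"])  # 0
```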
+## Understand the AAD SCIM implementation
+
+If you're building an application that supports a SCIM 2.0 user management API, this section describes how the AAD SCIM client is implemented and shows how to model your SCIM protocol request handling and responses.
-## Step 2: Understand the Azure AD SCIM implementation
> [!IMPORTANT]
> The behavior of the Azure AD SCIM implementation was last updated on December 18, 2018. For information on what changed, see [SCIM 2.0 protocol compliance of the Azure AD User Provisioning service](application-provisioning-config-problem-scim-compatibility.md).
-If you're building an application that supports a SCIM 2.0 user management API, this section describes in detail how the Azure AD SCIM client is implemented. It also shows how to model your SCIM protocol request handling and responses. Once you've implemented your SCIM endpoint, you can test it by following the procedure described in the previous section.
-
-Within the [SCIM 2.0 protocol specification](http://www.simplecloud.info/#Specification), your application must meet these requirements:
+Within the [SCIM 2.0 protocol specification](http://www.simplecloud.info/#Specification), your application must support these requirements:
-* Supports creating users, and optionally also groups, as per section [3.3 of the SCIM protocol](https://tools.ietf.org/html/rfc7644#section-3.3).
-* Supports modifying users or groups with PATCH requests, as per [section 3.5.2 of the SCIM protocol](https://tools.ietf.org/html/rfc7644#section-3.5.2). Supporting ensures that groups and users are provisioned in a performant manner.
-* Supports retrieving a known resource for a user or group created earlier, as per [section 3.4.1 of the SCIM protocol](https://tools.ietf.org/html/rfc7644#section-3.4.1).
-* Supports querying users or groups, as per section [3.4.2 of the SCIM protocol](https://tools.ietf.org/html/rfc7644#section-3.4.2). By default, users are retrieved by their `id` and queried by their `username` and `externalId`, and groups are queried by `displayName`.
-* Supports querying user by ID and by manager, as per section 3.4.2 of the SCIM protocol.
-* Supports querying groups by ID and by member, as per section 3.4.2 of the SCIM protocol.
-* Supports the filter [excludedAttributes=members](#get-group) when querying the group resource, as per section 3.4.2.5 of the SCIM protocol.
-* Accepts a single bearer token for authentication and authorization of Azure AD to your application.
-* Supports soft-deleting a user `active=false` and restoring the user `active=true` (the user object should be returned in a request whether or not the user is active). The only time the user should not be returned is when it is hard deleted from the application.
+|Requirement|Reference notes (SCIM protocol)|
+|-|-|
+|Create users, and optionally also groups|[section 3.3](https://tools.ietf.org/html/rfc7644#section-3.3)|
+|Modify users or groups with PATCH requests|[section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Supporting PATCH ensures that groups and users are provisioned in a performant manner.|
+|Retrieve a known resource for a user or group created earlier|[section 3.4.1](https://tools.ietf.org/html/rfc7644#section-3.4.1)|
+|Query users or groups|[section 3.4.2](https://tools.ietf.org/html/rfc7644#section-3.4.2). By default, users are retrieved by their `id` and queried by their `username` and `externalId`, and groups are queried by `displayName`.|
+|Query user by ID and by manager|section 3.4.2|
+|Query groups by ID and by member|section 3.4.2|
+|The filter [excludedAttributes=members](#get-group) when querying the group resource|section 3.4.2.5|
+|Accept a single bearer token for authentication and authorization of AAD to your application.||
+|Soft-deleting a user `active=false` and restoring the user `active=true`|The user object should be returned in a request whether or not the user is active. The only time the user should not be returned is when it is hard deleted from the application.|
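The PATCH and soft-delete requirements above can be illustrated with a minimal sketch. This is our own simplified Python fragment, not reference code: real SCIM PATCH paths can carry value filters such as `emails[type eq "work"].value`, which this sketch ignores in favor of simple top-level paths.

```python
def apply_patch(user, operations):
    """Apply SCIM PATCH operations to an in-memory user dict.

    Azure AD emits `op` as Add/Replace/Remove, so the match must be
    case-insensitive. Soft delete arrives as a replace of `active`
    to false; the user object must still be returned by later queries
    until it is hard deleted.
    """
    for operation in operations:
        op = operation["op"].lower()   # normalize Add/Replace/Remove
        path = operation.get("path")
        if op in ("add", "replace"):
            user[path] = operation["value"]
        elif op == "remove":
            user.pop(path, None)
    return user

user = {"userName": "Test_User", "active": True}
apply_patch(user, [{"op": "Replace", "path": "active", "value": False}])
print(user["active"])  # False — soft-deleted, but the resource still exists
```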
-Follow these general guidelines when implementing a SCIM endpoint to ensure compatibility with Azure AD:
+Use the following general guidelines when implementing a SCIM endpoint to ensure compatibility with AAD:
-* `id` is a required property for all the resources. Every response that returns a resource should ensure each resource has this property, except for `ListResponse` with zero members.
+* `id` is a required property for all resources. Every response that returns a resource should ensure each resource has this property, except for `ListResponse` with zero members.
* Response to a query/filter request should always be a `ListResponse`.
-* Groups are optional, but only supported if the SCIM implementation supports PATCH requests.
-* It isn't necessary to include the entire resource in the PATCH response.
-* Microsoft Azure AD only uses the following operators:
- - `eq`
- - `and`
-* Don't require a case-sensitive match on structural elements in SCIM, in particular PATCH `op` operation values, as defined in https://tools.ietf.org/html/rfc7644#section-3.5.2. Azure AD emits the values of 'op' as `Add`, `Replace`, and `Remove`.
-* Microsoft Azure AD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of **Test Connection** flow in the [Azure portal](https://portal.azure.com).
-* The attribute that the resources can be queried on should be set as a matching attribute on the application in the [Azure portal](https://portal.azure.com). For more information, see [Customizing User Provisioning Attribute Mappings](customize-application-attributes.md)
+* Groups are optional, but only supported if the SCIM implementation supports **PATCH** requests.
+* It isn't necessary to include the entire resource in the **PATCH** response.
+* Microsoft AAD only uses the following operators: `eq`, `and`
+* Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). AAD emits the values of `op` as **Add**, **Replace**, and **Remove**.
+* Microsoft AAD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com).
+* The attribute that the resources can be queried on should be set as a matching attribute on the application in the [Azure portal](https://portal.azure.com), see [Customizing User Provisioning Attribute Mappings](customize-application-attributes.md).
* Support HTTPS on your SCIM endpoint

### User provisioning and deprovisioning
-The following illustration shows the messages that Azure Active Directory sends to a SCIM service to manage the lifecycle of a user in your application's identity store.
+The following illustration shows the messages that AAD sends to a SCIM service to manage the lifecycle of a user in your application's identity store.
![Shows the user provisioning and deprovisioning sequence](media/use-scim-to-provision-users-and-groups/scim-figure-4.png)<br/>
*User provisioning and deprovisioning sequence*

### Group provisioning and deprovisioning
-Group provisioning and deprovisioning are optional. When implemented and enabled, the following illustration shows the messages that Azure AD sends to a SCIM service to manage the lifecycle of a group in your application's identity store. Those messages differ from the messages about users in two ways:
+Group provisioning and deprovisioning are optional. When implemented and enabled, the following illustration shows the messages that AAD sends to a SCIM service to manage the lifecycle of a group in your application's identity store. Those messages differ from the messages about users in two ways:
* Requests to retrieve groups specify that the members attribute is to be excluded from any resource provided in response to the request.
* Requests to determine whether a reference attribute has a certain value are requests about the members attribute.
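The first of those two points can be sketched as follows; the helper name and sample values are our own, not from the reference code:

```python
def serialize_group(group, excluded_attributes=()):
    """Return a copy of a group resource without the excluded attributes.

    Azure AD retrieves groups with excludedAttributes=members, so the
    members list must be omitted from those responses.
    """
    return {k: v for k, v in group.items() if k not in excluded_attributes}

group = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
    "id": "40734ae655284ad3abcc",      # made-up placeholder id
    "displayName": "displayName",
    "members": [{"value": "48af03ac28ad4fb88478"}],
}
trimmed = serialize_group(group, excluded_attributes=("members",))
print("members" in trimmed)  # False
```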
@@ -185,10 +223,10 @@ Group provisioning and deprovisioning are optional. When implemented and enabled
*Group provisioning and deprovisioning sequence*

### SCIM protocol requests and responses
-This section provides example SCIM requests emitted by the Azure AD SCIM client and example expected responses. For best results, you should code your app to handle these requests in this format and emit the expected responses.
+This section provides example SCIM requests emitted by the AAD SCIM client and example expected responses. For best results, you should code your app to handle these requests in this format and emit the expected responses.
> [!IMPORTANT]
-> To understand how and when the Azure AD user provisioning service emits the operations described below, see the section [Provisioning cycles: Initial and incremental](how-provisioning-works.md#provisioning-cycles-initial-and-incremental) in [How provisioning works](how-provisioning-works.md).
+> To understand how and when the AAD user provisioning service emits the operations described below, see the section [Provisioning cycles: Initial and incremental](how-provisioning-works.md#provisioning-cycles-initial-and-incremental) in [How provisioning works](how-provisioning-works.md).
[User Operations](#user-operations)

- [Create User](#create-user) ([Request](#request) / [Response](#response))
@@ -200,7 +238,6 @@ This section provides example SCIM requests emitted by the Azure AD SCIM client
- [Disable User](#disable-user) ([Request](#request-14) / [Response](#response-14))
- [Delete User](#delete-user) ([Request](#request-6) / [Response](#response-6))

[Group Operations](#group-operations)

- [Create Group](#create-group) ([Request](#request-7) / [Response](#response-7))
- [Get Group](#get-group) ([Request](#request-8) / [Response](#response-8))
@@ -216,7 +253,7 @@ This section provides example SCIM requests emitted by the Azure AD SCIM client
#### Create User
-###### Request
+##### Request
*POST /Users*

```json
@@ -357,7 +394,6 @@ This section provides example SCIM requests emitted by the Azure AD SCIM client
    "startIndex": 1,
    "itemsPerPage": 20
}
```

#### Get User by query - Zero results
@@ -377,7 +413,6 @@ This section provides example SCIM requests emitted by the Azure AD SCIM client
    "startIndex": 1,
    "itemsPerPage": 20
}
```

#### Update User [Multi-valued properties]
@@ -716,7 +751,6 @@ The only acceptable TLS protocol versions are TLS 1.2 and TLS 1.3. No other vers
- RSA keys must be at least 2,048 bits.
- ECC keys must be at least 256 bits, generated using an approved elliptic curve

**Key Lengths**

All services must use X.509 certificates generated using cryptographic keys of sufficient length, meaning:
@@ -739,7 +773,7 @@ TLS 1.2 Cipher Suites minimum bar:
### IP Ranges

The Azure AD provisioning service currently operates under the IP Ranges for AzureActiveDirectory as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the AzureActiveDirectory tag to allow traffic from the Azure AD provisioning service into your application. Note that you will need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/servicetags/list).
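The point about computed addresses can be checked programmatically. As a sketch, Python's standard `ipaddress` module confirms that `40.126.25.32` falls inside the `40.126.0.0/18` range; the one-entry allow list here is only the example from the text, not the full AzureActiveDirectory service-tag list:

```python
import ipaddress

def in_allowed_ranges(address, ranges):
    """Return True if the address falls inside any of the CIDR ranges."""
    ip = ipaddress.ip_address(address)
    return any(ip in ipaddress.ip_network(cidr) for cidr in ranges)

allowed = ["40.126.0.0/18"]  # illustrative; the real list is much longer
print(in_allowed_ranges("40.126.25.32", allowed))  # True
```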
-## Step 3: Build a SCIM endpoint
+## Build a SCIM endpoint
Now that you have designed your schema and understood the Azure AD SCIM implementation, you can get started developing your SCIM endpoint. Rather than starting from scratch and building the implementation completely on your own, you can rely on a number of open source SCIM libraries published by the SCIM community.
@@ -843,79 +877,77 @@ In the sample code, requests are authenticated using the Microsoft.AspNetCore.Au
A bearer token is also required to use the provided [postman tests](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint) and to perform local debugging using localhost. The sample code uses ASP.NET Core environments to change the authentication options during the development stage and enable the use of a self-signed token.
-For more information on multiple environments in ASP.NET Core use the following link:
-[Use multiple environments in ASP.NET Core](
-https://docs.microsoft.com/aspnet/core/fundamentals/environments)
+For more information on multiple environments in ASP.NET Core, see [Use multiple environments in ASP.NET Core](https://docs.microsoft.com/aspnet/core/fundamentals/environments).
The following code enforces that requests to any of the service's endpoints are authenticated using a bearer token signed with a custom key:

```csharp
- public void ConfigureServices(IServiceCollection services)
+public void ConfigureServices(IServiceCollection services)
+{
+ if (_env.IsDevelopment())
+ {
+ services.AddAuthentication(options =>
{
- if (_env.IsDevelopment())
- {
- services.AddAuthentication(options =>
+ options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
+ options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
+ })
+ .AddJwtBearer(options =>
+ {
+ options.TokenValidationParameters =
+ new TokenValidationParameters
{
- options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
- options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
- options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
- })
- .AddJwtBearer(options =>
- {
- options.TokenValidationParameters =
- new TokenValidationParameters
- {
- ValidateIssuer = false,
- ValidateAudience = false,
- ValidateLifetime = false,
- ValidateIssuerSigningKey = false,
- ValidIssuer = "Microsoft.Security.Bearer",
- ValidAudience = "Microsoft.Security.Bearer",
- IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("A1B2C3D4E5F6A1B2C3D4E5F6"))
- };
- });
- }
- ...
+ ValidateIssuer = false,
+ ValidateAudience = false,
+ ValidateLifetime = false,
+ ValidateIssuerSigningKey = false,
+ ValidIssuer = "Microsoft.Security.Bearer",
+ ValidAudience = "Microsoft.Security.Bearer",
+ IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("A1B2C3D4E5F6A1B2C3D4E5F6"))
+ };
+ });
+ }
+...
```

Send a GET request to the Token controller to get a valid bearer token. The _GenerateJSONWebToken_ method creates a token matching the parameters configured for development:

```csharp
- private string GenerateJSONWebToken()
- {
- // Create token key
- SymmetricSecurityKey securityKey =
- new SymmetricSecurityKey(Encoding.UTF8.GetBytes("A1B2C3D4E5F6A1B2C3D4E5F6"));
- SigningCredentials credentials =
- new SigningCredentials(securityKey, SecurityAlgorithms.HmacSha256);
-
- // Set token expiration
- DateTime startTime = DateTime.UtcNow;
- DateTime expiryTime = startTime.AddMinutes(120);
-
- // Generate the token
- JwtSecurityToken token =
- new JwtSecurityToken(
- "Microsoft.Security.Bearer",
- "Microsoft.Security.Bearer",
- null,
- notBefore: startTime,
- expires: expiryTime,
- signingCredentials: credentials);
-
- string result = new JwtSecurityTokenHandler().WriteToken(token);
- return result;
- }
+private string GenerateJSONWebToken()
+{
+ // Create token key
+ SymmetricSecurityKey securityKey =
+ new SymmetricSecurityKey(Encoding.UTF8.GetBytes("A1B2C3D4E5F6A1B2C3D4E5F6"));
+ SigningCredentials credentials =
+ new SigningCredentials(securityKey, SecurityAlgorithms.HmacSha256);
+
+ // Set token expiration
+ DateTime startTime = DateTime.UtcNow;
+ DateTime expiryTime = startTime.AddMinutes(120);
+
+ // Generate the token
+ JwtSecurityToken token =
+ new JwtSecurityToken(
+ "Microsoft.Security.Bearer",
+ "Microsoft.Security.Bearer",
+ null,
+ notBefore: startTime,
+ expires: expiryTime,
+ signingCredentials: credentials);
+
+ string result = new JwtSecurityTokenHandler().WriteToken(token);
+ return result;
+}
```

### Handling provisioning and deprovisioning of users

***Example 1. Query the service for a matching user***
-Azure Active Directory queries the service for a user with an `externalId` attribute value matching the mailNickname attribute value of a user in Azure AD. The query is expressed as a Hypertext Transfer Protocol (HTTP) request such as this example, wherein jyoung is a sample of a mailNickname of a user in Azure Active Directory.
+Azure Active Directory (AAD) queries the service for a user with an `externalId` attribute value matching the mailNickname attribute value of a user in AAD. The query is expressed as a Hypertext Transfer Protocol (HTTP) request such as this example, where jyoung is a sample mailNickname value of a user in AAD.
>[!NOTE]
-> This is an example only. Not all users will have a mailNickname attribute, and the value a user has may not be unique in the directory. Also, the attribute used for matching (which in this case is `externalId`) is configurable in the [Azure AD attribute mappings](customize-application-attributes.md).
+> This is an example only. Not all users will have a mailNickname attribute, and the value a user has may not be unique in the directory. Also, the attribute used for matching (which in this case is `externalId`) is configurable in the [AAD attribute mappings](customize-application-attributes.md).
```
GET https://.../scim/Users?filter=externalId eq jyoung HTTP/1.1
@@ -925,15 +957,15 @@ GET https://.../scim/Users?filter=externalId eq jyoung HTTP/1.1
In the sample code, the request is translated into a call to the QueryAsync method of the service's provider. Here is the signature of that method:

```csharp
- // System.Threading.Tasks.Tasks is defined in mscorlib.dll.
- // Microsoft.SCIM.IRequest is defined in
- // Microsoft.SCIM.Service.
- // Microsoft.SCIM.Resource is defined in
- // Microsoft.SCIM.Schemas.
- // Microsoft.SCIM.IQueryParameters is defined in
- // Microsoft.SCIM.Protocol.
-
- Task<Resource[]> QueryAsync(IRequest<IQueryParameters> request);
+// System.Threading.Tasks.Tasks is defined in mscorlib.dll.
+// Microsoft.SCIM.IRequest is defined in
+// Microsoft.SCIM.Service.
+// Microsoft.SCIM.Resource is defined in
+// Microsoft.SCIM.Schemas.
+// Microsoft.SCIM.IQueryParameters is defined in
+// Microsoft.SCIM.Protocol.
+
+Task<Resource[]> QueryAsync(IRequest<IQueryParameters> request);
```

In the sample query, for a user with a given value for the `externalId` attribute, values of the arguments passed to the QueryAsync method are:
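The filter in the sample query (`externalId eq jyoung`) follows the SCIM filter grammar from RFC 7644. As a hedged illustration of what a provider has to do with it, here is a hypothetical Python sketch that splits the single `attribute eq value` case into the pieces a query handler receives; a real SCIM endpoint needs a full filter parser, and none of these names come from the Microsoft.SCIM sample:

```python
import re

def parse_eq_filter(expression):
    """Split a simple 'attribute eq value' SCIM filter into its parts."""
    match = re.fullmatch(r'(\S+)\s+eq\s+"?([^"]+)"?', expression)
    if match is None:
        raise ValueError(f"unsupported filter: {expression}")
    attribute, value = match.groups()
    return attribute, value

print(parse_eq_filter("externalId eq jyoung"))  # ('externalId', 'jyoung')
```

The same function accepts the quoted form (`externalId eq "jyoung"`) that SCIM clients commonly send.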
@@ -945,13 +977,13 @@ In the sample query, for a user with a given value for the `externalId` attribut
***Example 2. Provision a user***
-If the response to a query to the web service for a user with an `externalId` attribute value that matches the mailNickname attribute value of a user doesn't return any users, then Azure Active Directory requests that the service provision a user corresponding to the one in Azure Active Directory. Here is an example of such a request:
+If the response to a query to the web service for a user with an `externalId` attribute value that matches the mailNickname attribute value of a user doesn't return any users, then AAD requests that the service provision a user corresponding to the one in AAD. Here is an example of such a request:
```
- POST https://.../scim/Users HTTP/1.1
- Authorization: Bearer ...
- Content-type: application/scim+json
- {
+POST https://.../scim/Users HTTP/1.1
+Authorization: Bearer ...
+Content-type: application/scim+json
+{
"schemas": [ "urn:ietf:params:scim:schemas:core:2.0:User",
@@ -981,13 +1013,13 @@ If the response to a query to the web service for a user with an `externalId` at
In the sample code, the request is translated into a call to the CreateAsync method of the service's provider. Here is the signature of that method:

```csharp
- // System.Threading.Tasks.Tasks is defined in mscorlib.dll.
- // Microsoft.SCIM.IRequest is defined in
- // Microsoft.SCIM.Service.
- // Microsoft.SCIM.Resource is defined in
- // Microsoft.SCIM.Schemas.
+// System.Threading.Tasks.Tasks is defined in mscorlib.dll.
+// Microsoft.SCIM.IRequest is defined in
+// Microsoft.SCIM.Service.
+// Microsoft.SCIM.Resource is defined in
+// Microsoft.SCIM.Schemas.
- Task<Resource> CreateAsync(IRequest<Resource> request);
+Task<Resource> CreateAsync(IRequest<Resource> request);
```

In a request to provision a user, the value of the resource argument is an instance of the Microsoft.SCIM.Core2EnterpriseUser class, defined in the Microsoft.SCIM.Schemas library. If the request to provision the user succeeds, then the implementation of the method is expected to return an instance of the Microsoft.SCIM.Core2EnterpriseUser class, with the value of the Identifier property set to the unique identifier of the newly provisioned user.
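The contract described here, create the resource and return it with a service-assigned unique identifier, can be sketched outside C# as well. A hypothetical Python handler (the store and function names are illustrative, not from the Microsoft.SCIM sample):

```python
import uuid

STORE = {}  # hypothetical in-memory identity store, keyed by id

def create_user(resource):
    """Create a user and return it with a service-assigned unique 'id'."""
    created = dict(resource)
    # Analogous to setting the Identifier property on Core2EnterpriseUser.
    created["id"] = str(uuid.uuid4())
    STORE[created["id"]] = created
    return created

user = create_user({"userName": "jyoung", "externalId": "jyoung"})
print(user["id"])  # the service-assigned unique identifier
```

The key point is that the identifier comes from the service, not the client; Azure AD uses the returned value for all later retrieve, update, and delete requests.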
@@ -997,21 +1029,21 @@ In a request to provision a user, the value of the resource argument is an insta
To update a user known to exist in an identity store fronted by a SCIM service, Azure Active Directory requests the current state of that user from the service with a request such as:

```
- GET ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
- Authorization: Bearer ...
+GET ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
+Authorization: Bearer ...
```

In the sample code, the request is translated into a call to the RetrieveAsync method of the service's provider. Here is the signature of that method:

```csharp
- // System.Threading.Tasks.Tasks is defined in mscorlib.dll.
- // Microsoft.SCIM.IRequest is defined in
- // Microsoft.SCIM.Service.
- // Microsoft.SCIM.Resource and
- // Microsoft.SCIM.IResourceRetrievalParameters
- // are defined in Microsoft.SCIM.Schemas
-
- Task<Resource> RetrieveAsync(IRequest<IResourceRetrievalParameters> request);
+// System.Threading.Tasks.Tasks is defined in mscorlib.dll.
+// Microsoft.SCIM.IRequest is defined in
+// Microsoft.SCIM.Service.
+// Microsoft.SCIM.Resource and
+// Microsoft.SCIM.IResourceRetrievalParameters
+// are defined in Microsoft.SCIM.Schemas
+
+Task<Resource> RetrieveAsync(IRequest<IResourceRetrievalParameters> request);
```

In the example of a request to retrieve the current state of a user, the values of the properties of the object provided as the value of the parameters argument are as follows:
@@ -1041,10 +1073,10 @@ Here, the value of the index x can be 0 and the value of the index y can be 1, o
Here is an example of a request from Azure Active Directory to a SCIM service to update a user:

```
- PATCH ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
- Authorization: Bearer ...
- Content-type: application/scim+json
- {
+PATCH ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
+Authorization: Bearer ...
+Content-type: application/scim+json
+{
"schemas": [ "urn:ietf:params:scim:api:messages:2.0:PatchOp"],
@@ -1063,46 +1095,48 @@ Here is an example of a request from Azure Active Directory to an SCIM service t
In the sample code, the request is translated into a call to the UpdateAsync method of the service's provider. Here is the signature of that method:

```csharp
- // System.Threading.Tasks.Tasks and
- // System.Collections.Generic.IReadOnlyCollection<T> // are defined in mscorlib.dll.
- // Microsoft.SCIM.IRequest is defined in
- // Microsoft.SCIM.Service.
- // Microsoft.SCIM.IPatch,
- // is defined in Microsoft.SCIM.Protocol.
-
- Task UpdateAsync(IRequest<IPatch> request);
+// System.Threading.Tasks.Tasks and
+// System.Collections.Generic.IReadOnlyCollection<T>
+// are defined in mscorlib.dll.
+// Microsoft.SCIM.IRequest is defined in
+// Microsoft.SCIM.Service.
+// Microsoft.SCIM.IPatch,
+// is defined in Microsoft.SCIM.Protocol.
+
+Task UpdateAsync(IRequest<IPatch> request);
```

In the example of a request to update a user, the object provided as the value of the patch argument has these property values:
-
-* ResourceIdentifier.Identifier: "54D382A4-2050-4C03-94D1-E769F1D15682"
-* ResourceIdentifier.SchemaIdentifier: "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
-* (PatchRequest as PatchRequest2).Operations.Count: 1
-* (PatchRequest as PatchRequest2).Operations.ElementAt(0).OperationName: OperationName.Add
-* (PatchRequest as PatchRequest2).Operations.ElementAt(0).Path.AttributePath: "manager"
-* (PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.Count: 1
-* (PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.ElementAt(0).Reference: http://.../scim/Users/2819c223-7f76-453a-919d-413861904646
-* (PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.ElementAt(0).Value: 2819c223-7f76-453a-919d-413861904646
+
+|Argument|Value|
+|-|-|
+|ResourceIdentifier.Identifier|"54D382A4-2050-4C03-94D1-E769F1D15682"|
+|ResourceIdentifier.SchemaIdentifier|"urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"|
+|(PatchRequest as PatchRequest2).Operations.Count|1|
+|(PatchRequest as PatchRequest2).Operations.ElementAt(0).OperationName|OperationName.Add|
+|(PatchRequest as PatchRequest2).Operations.ElementAt(0).Path.AttributePath|"manager"|
+|(PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.Count|1|
+|(PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.ElementAt(0).Reference|http://.../scim/Users/2819c223-7f76-453a-919d-413861904646|
+|(PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.ElementAt(0).Value| 2819c223-7f76-453a-919d-413861904646|
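The table above describes an `Add` operation targeting the `manager` path. As a rough illustration of what a SCIM service does with such a PATCH body, here is a hedged Python sketch that applies single-valued `add`/`replace`/`remove` operations to a stored user; a real endpoint must implement the full RFC 7644 PATCH semantics, including multi-valued and filtered paths:

```python
def apply_patch(user, patch):
    """Apply simple single-valued SCIM PatchOp operations to a user dict."""
    for op in patch["Operations"]:
        verb = op["op"].lower()
        if verb in ("add", "replace"):
            user[op["path"]] = op["value"]
        elif verb == "remove":
            user.pop(op["path"], None)
    return user

user = {"id": "54D382A4-2050-4C03-94D1-E769F1D15682"}
patch = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [{"op": "Add", "path": "manager",
                    "value": "2819c223-7f76-453a-919d-413861904646"}],
}
print(apply_patch(user, patch)["manager"])
```

Note that SCIM operation names are case-insensitive in practice (Azure AD sends `Add`), which is why the sketch lowercases the verb before dispatching.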
***Example 6. Deprovision a user***
-To deprovision a user from an identity store fronted by an SCIM service, Azure AD sends a request such as:
+To deprovision a user from an identity store fronted by a SCIM service, AAD sends a request such as:
```
- DELETE ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
- Authorization: Bearer ...
+DELETE ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
+Authorization: Bearer ...
```

In the sample code, the request is translated into a call to the DeleteAsync method of the service's provider. Here is the signature of that method:

```csharp
- // System.Threading.Tasks.Tasks is defined in mscorlib.dll.
- // Microsoft.SCIM.IRequest is defined in
- // Microsoft.SCIM.Service.
- // Microsoft.SCIM.IResourceIdentifier,
- // is defined in Microsoft.SCIM.Protocol.
+// System.Threading.Tasks.Tasks is defined in mscorlib.dll.
+// Microsoft.SCIM.IRequest is defined in
+// Microsoft.SCIM.Service.
+// Microsoft.SCIM.IResourceIdentifier,
+// is defined in Microsoft.SCIM.Protocol.
- Task DeleteAsync(IRequest<IResourceIdentifier> request);
+Task DeleteAsync(IRequest<IResourceIdentifier> request);
```

The object provided as the value of the resourceIdentifier argument has these property values in the example of a request to deprovision a user:
@@ -1110,9 +1144,9 @@ The object provided as the value of the resourceIdentifier argument has these pr
* ResourceIdentifier.Identifier: "54D382A4-2050-4C03-94D1-E769F1D15682" * ResourceIdentifier.SchemaIdentifier: "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
-## Step 4: Integrate your SCIM endpoint with the Azure AD SCIM client
+## Integrate your SCIM endpoint with the AAD SCIM client
-Azure AD can be configured to automatically provision assigned users and groups to applications that implement a specific profile of the [SCIM 2.0 protocol](https://tools.ietf.org/html/rfc7644). The specifics of the profile are documented in [Step 2: Understand the Azure AD SCIM implementation](#step-2-understand-the-azure-ad-scim-implementation).
+Azure AD can be configured to automatically provision assigned users and groups to applications that implement a specific profile of the [SCIM 2.0 protocol](https://tools.ietf.org/html/rfc7644). The specifics of the profile are documented in [Understand the Azure AD SCIM implementation](#understand-the-aad-scim-implementation).
Check with your application provider, or your application provider's documentation for statements of compatibility with these requirements.
@@ -1125,10 +1159,10 @@ Applications that support the SCIM profile described in this article can be conn
**To connect an application that supports SCIM:**
-1. Sign in to the [Azure Active Directory portal](https://aad.portal.azure.com). Note that you can get access a free trial for Azure Active Directory with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/office/dev-program)
-2. Select **Enterprise applications** from the left pane. A list of all configured apps is shown, including apps that were added from the gallery.
-3. Select **+ New application** > **+ Create your own application**.
-4. Enter a name for your application, choose the option "*integrate any other application you don't find in the gallery*" and select **Add** to create an app object. The new app is added to the list of enterprise applications and opens to its app management screen.
+1. Sign in to the [AAD portal](https://aad.portal.azure.com). You can get access to a free trial of Azure Active Directory with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/office/dev-program).
+1. Select **Enterprise applications** from the left pane. A list of all configured apps is shown, including apps that were added from the gallery.
+1. Select **+ New application** > **+ Create your own application**.
+1. Enter a name for your application, choose the option "*integrate any other application you don't find in the gallery*" and select **Add** to create an app object. The new app is added to the list of enterprise applications and opens to its app management screen.
![Screenshot shows the Azure AD application gallery](media/use-scim-to-provision-users-and-groups/scim-figure-2b-1.png) *Azure AD application gallery*
@@ -1139,57 +1173,56 @@ Applications that support the SCIM profile described in this article can be conn
![Screenshot shows the Azure AD old app gallery experience](media/use-scim-to-provision-users-and-groups/scim-figure-2a.png) *Azure AD old app gallery experience*
-5. In the app management screen, select **Provisioning** in the left panel.
-6. In the **Provisioning Mode** menu, select **Automatic**.
+1. In the app management screen, select **Provisioning** in the left panel.
+1. In the **Provisioning Mode** menu, select **Automatic**.
![Example: An app's Provisioning page in the Azure portal](media/use-scim-to-provision-users-and-groups/scim-figure-2b.png)<br/> *Configuring provisioning in the Azure portal*
-7. In the **Tenant URL** field, enter the URL of the application's SCIM endpoint. Example: `https://api.contoso.com/scim/`
-8. If the SCIM endpoint requires an OAuth bearer token from an issuer other than Azure AD, then copy the required OAuth bearer token into the optional **Secret Token** field. If this field is left blank, Azure AD includes an OAuth bearer token issued from Azure AD with each request. Apps that use Azure AD as an identity provider can validate this Azure AD-issued token.
+1. In the **Tenant URL** field, enter the URL of the application's SCIM endpoint. Example: `https://api.contoso.com/scim/`
+1. If the SCIM endpoint requires an OAuth bearer token from an issuer other than Azure AD, then copy the required OAuth bearer token into the optional **Secret Token** field. If this field is left blank, Azure AD includes an OAuth bearer token issued from Azure AD with each request. Apps that use Azure AD as an identity provider can validate this Azure AD-issued token.
> [!NOTE]
> It's ***not*** recommended to leave this field blank and rely on a token generated by Azure AD. This option is primarily available for testing purposes.
-9. Select **Test Connection** to have Azure Active Directory attempt to connect to the SCIM endpoint. If the attempt fails, error information is displayed.
+1. Select **Test Connection** to have Azure Active Directory attempt to connect to the SCIM endpoint. If the attempt fails, error information is displayed.
> [!NOTE]
> **Test Connection** queries the SCIM endpoint for a user that doesn't exist, using a random GUID as the matching property selected in the Azure AD configuration. The expected correct response is HTTP 200 OK with an empty SCIM ListResponse message.
-10. If the attempts to connect to the application succeed, then select **Save** to save the admin credentials.
-11. In the **Mappings** section, there are two selectable sets of [attribute mappings](customize-application-attributes.md): one for user objects and one for group objects. Select each one to review the attributes that are synchronized from Azure Active Directory to your app. The attributes selected as **Matching** properties are used to match the users and groups in your app for update operations. Select **Save** to commit any changes.
+1. If the attempts to connect to the application succeed, then select **Save** to save the admin credentials.
+1. In the **Mappings** section, there are two selectable sets of [attribute mappings](customize-application-attributes.md): one for user objects and one for group objects. Select each one to review the attributes that are synchronized from Azure Active Directory to your app. The attributes selected as **Matching** properties are used to match the users and groups in your app for update operations. Select **Save** to commit any changes.
> [!NOTE]
> You can optionally disable syncing of group objects by disabling the "groups" mapping.
-12. Under **Settings**, the **Scope** field defines which users and groups are synchronized. Select **Sync only assigned users and groups** (recommended) to only sync users and groups assigned in the **Users and groups** tab.
-13. Once your configuration is complete, set the **Provisioning Status** to **On**.
-14. Select **Save** to start the Azure AD provisioning service.
-15. If syncing only assigned users and groups (recommended), be sure to select the **Users and groups** tab and assign the users or groups you want to sync.
+1. Under **Settings**, the **Scope** field defines which users and groups are synchronized. Select **Sync only assigned users and groups** (recommended) to only sync users and groups assigned in the **Users and groups** tab.
+1. Once your configuration is complete, set the **Provisioning Status** to **On**.
+1. Select **Save** to start the Azure AD provisioning service.
+1. If syncing only assigned users and groups (recommended), be sure to select the **Users and groups** tab and assign the users or groups you want to sync.
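The empty SCIM ListResponse expected by the **Test Connection** check described above can be expressed concretely. A minimal example per RFC 7644, built as a Python dictionary for illustration (the exact optional fields your endpoint includes may vary):

```python
import json

# Minimal empty ListResponse: HTTP 200 OK, Content-Type application/scim+json.
empty_list_response = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
    "totalResults": 0,
    "Resources": [],
    "startIndex": 1,
    "itemsPerPage": 0,
}

print(json.dumps(empty_list_response))
```

Returning HTTP 404 or an error body for the nonexistent-user probe causes the connection test to fail even when the endpoint is otherwise functional.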
Once the initial cycle has started, you can select **Provisioning logs** in the left panel to monitor progress, which shows all actions done by the provisioning service on your app. For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](check-status-user-account-provisioning.md).

> [!NOTE]
> The initial cycle takes longer to perform than later syncs, which occur approximately every 40 minutes as long as the service is running.
-## Step 5: Publish your application to the Azure AD application gallery
+## Publish your application to the AAD application gallery
If you're building an application that will be used by more than one tenant, you can make it available in the Azure AD application gallery. This will make it easy for organizations to discover the application and configure provisioning. Publishing your app in the Azure AD gallery and making provisioning available to others is easy. Check out the steps [here](../develop/v2-howto-app-gallery-listing.md). Microsoft will work with you to integrate your application into our gallery, test your endpoint, and release onboarding [documentation](../saas-apps/tutorial-list.md) for customers to use.

### Gallery onboarding checklist
-Follow the checklist below to ensure that your application is onboarded quickly and customers have a smooth deployment experience. The information will be gathered from you when onboarding to the gallery.
+Use the following checklist to onboard your application quickly and give customers a smooth deployment experience. The information is gathered from you when you onboard to the gallery.
> [!div class="checklist"]
-> * Support a [SCIM 2.0](#step-2-understand-the-azure-ad-scim-implementation) user and group endpoint (Only one is required but both are recommended)
+> * Support a [SCIM 2.0](#understand-the-aad-scim-implementation) user and group endpoint (Only one is required but both are recommended)
> * Support at least 25 requests per second per tenant to ensure that users and groups are provisioned and deprovisioned without delay (Required)
> * Establish engineering and support contacts to guide customers post gallery onboarding (Required)
> * Three non-expiring test credentials for your application (Required)
> * Support the OAuth authorization code grant or a long lived token as described below (Required)
-> * Support updating multiple group memberships with a single PATCH (Recommended)
-> * Document your SCIM endpoint publicly (Recommended)
-> * [Support schema discovery](https://tools.ietf.org/html/rfc7643#section-6) (Recommended)
+> * Support updating multiple group memberships with a single PATCH (Recommended)
+> * Document your SCIM endpoint publicly (Recommended)
+> * [Support schema discovery](https://tools.ietf.org/html/rfc7643#section-6) (Recommended)
-
-### Authorization for provisioning connectors in the application gallery
-The SCIM spec does not define a SCIM-specific scheme for authentication and authorization. It relies on the use of existing industry standards. The Azure AD provisioning client supports two authorization methods for applications in the gallery.
+### Authorization for provisioning connectors in the application gallery
+The SCIM spec doesn't define a SCIM-specific scheme for authentication and authorization and relies on the use of existing industry standards. The Azure AD provisioning client supports the following authorization methods for applications in the gallery.
|Authorization method|Pros|Cons|Support|
|--|--|--|--|
@@ -1199,53 +1232,70 @@ The SCIM spec does not define a SCIM-specific scheme for authentication and auth
|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be completely automated, and new tokens can be silently requested without user interaction. ||Not supported for gallery and non-gallery apps. Support is in our backlog.|

> [!NOTE]
-> It's not recommended to leave the token field blank in the Azure AD provisioning configuration custom app UI. The token generated is primarily available for testing purposes.
+> It's not recommended to leave the token field blank in the AAD provisioning configuration custom app UI. The token generated is primarily available for testing purposes.
+
+### OAuth code grant flow
+
+The provisioning service supports the [authorization code grant](https://tools.ietf.org/html/rfc6749#page-24). After you submit your request to publish your app in the gallery, our team works with you to collect the following information:
+
+- **Authorization URL**, a URL used by the client to obtain authorization from the resource owner via user-agent redirection. The user is redirected to this URL to authorize access.
+
+- **Token exchange URL**, a URL used by the client to exchange an authorization grant for an access token, typically with client authentication.
-**OAuth authorization code grant flow:** The provisioning service supports the [authorization code grant](https://tools.ietf.org/html/rfc6749#page-24). After submitting your request for publishing your app in the gallery, our team will work with you to collect the following information:
-* Authorization URL: A URL by the client to obtain authorization from the resource owner via user-agent redirection. The user is redirected to this URL to authorize access. Note that this URL is currently not configurable per tenant.
-* Token exchange URL: A URL by the client to exchange an authorization grant for an access token, typically with client authentication. Note that this URL is currently not configurable per tenant.
-* Client ID: The authorization server issues the registered client a client identifier, which is a unique string representing the registration information provided by the client. The client identifier is not a secret; it is exposed to the resource owner and **must not** be used alone for client authentication.
-* Client secret: The client secret is a secret generated by the authorization server. It should be a unique value known only to the authorization server.
+- **Client ID**, a unique string issued to the registered client by the authorization server, representing the registration information provided by the client. The client identifier is not a secret; it is exposed to the resource owner and **must not** be used alone for client authentication.
-Note that OAuth v1 is not supported due to exposure of the client secret. OAuth v2 is supported.
+- **Client secret**, a secret generated by the authorization server that should be a unique value known only to the authorization server.
-Best practices (recommended but not required):
+> [!NOTE]
+> The **Authorization URL** and **Token exchange URL** are currently not configurable per tenant.
+
+> [!NOTE]
+> OAuth v1 is not supported due to exposure of the client secret. OAuth v2 is supported.
+
+Best practices (recommended, but not required):
* Support multiple redirect URLs. Administrators can configure provisioning from both "portal.azure.com" and "aad.portal.azure.com". Supporting multiple redirect URLs will ensure that users can authorize access from either portal.
-* Support multiple secrets to ensure smooth secret renewal, without downtime.
+* Support multiple secrets for easy renewal, without downtime.
+
+#### How to set up the OAuth code grant flow
-Steps in the OAuth code grant flow:
-1. User signs into the Azure portal > Enterprise applications > Select application > Provisioning > click authorize.
-2. Azure portal redirects user to the Authorization URL (sign in page for the third party app).
-3. Admin provides credentials to the third party application.
-4. Third party app redirects user back to Azure portal and provides the grant code
-5. Azure AD provisioning services calls the token URL and provides the grant code. The third party application responds with the access token, refresh token, and expiry date
-6. When the provisioning cycle begins, the service checks if the current access token is valid and exchanges it for a new token if needed. The access token is provided in each request made to the app and the validity of the request is checked before each request.
+1. Sign in to the Azure portal, go to **Enterprise applications** > **Application** > **Provisioning**, and select **Authorize**.
+
+   1. The Azure portal redirects the user to the Authorization URL (the sign-in page for the third-party app).
+
+   1. The admin provides credentials to the third-party application.
+
+   1. The third-party app redirects the user back to the Azure portal and provides the grant code.
+
+   1. The Azure AD provisioning service calls the token URL and provides the grant code. The third-party application responds with the access token, refresh token, and expiry date.
+
+1. When the provisioning cycle begins, the service checks if the current access token is valid and exchanges it for a new token if needed. The access token is provided in each request made to the app and the validity of the request is checked before each request.
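Under RFC 6749, the token-URL call in the flow above is a form-encoded POST, and the validity check before each cycle amounts to comparing the stored expiry against the clock. A hedged Python sketch of both pieces (parameter values and the refresh skew are placeholders, not Azure AD specifics):

```python
from datetime import datetime, timedelta
from urllib.parse import urlencode

def token_request_body(code, client_id, client_secret, redirect_uri):
    """Build the RFC 6749 authorization-code exchange body (form-encoded)."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    })

def needs_refresh(expires_at, now, skew=timedelta(minutes=5)):
    """True when the access token should be exchanged for a new one."""
    return now >= expires_at - skew

body = token_request_body("grant-code", "client-id", "secret",
                          "https://portal.azure.com/")
print("grant_type=authorization_code" in body)  # True
```

The skew keeps a token from expiring mid-cycle: it is treated as stale slightly before its actual expiry so the refresh happens before any provisioning request is sent.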
> [!NOTE]
-> While it is not possible to setup OAuth on the non-gallery application today, you can manually generate an access token from your authorization server and input that in the secret token field of the non-gallery application. This allows you to verify compatibility of your SCIM server with the Azure AD SCIM client before onboarding to the app gallery, which does support the OAuth code grant.
+> While it's not possible to set up OAuth on non-gallery applications, you can manually generate an access token from your authorization server and enter it as the secret token for a non-gallery application. This allows you to verify compatibility of your SCIM server with the AAD SCIM client before onboarding to the app gallery, which does support the OAuth code grant.
-**Long-lived OAuth bearer tokens:** If your application does not support the OAuth authorization code grant flow, you can also generate a long lived OAuth bearer token than that an administrator can use to setup the provisioning integration. The token should be perpetual, or else the provisioning job will be [quarantined](application-provisioning-quarantine-status.md) when the token expires.
+**Long-lived OAuth bearer tokens:** If your application doesn't support the OAuth authorization code grant flow, instead generate a long-lived OAuth bearer token that an administrator can use to set up the provisioning integration. The token should be perpetual, or else the provisioning job will be [quarantined](application-provisioning-quarantine-status.md) when the token expires.
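Whichever grant produced the token, it reaches your SCIM endpoint the same way: as a bearer token on each request. A minimal sketch of the headers your server should expect (the media type follows the SCIM spec, RFC 7644; the token value is whatever was entered in the secret token field):

```python
def scim_request_headers(bearer_token):
    """Headers a SCIM client sends with each provisioning request."""
    return {
        "Authorization": f"Bearer {bearer_token}",
        # application/scim+json is the media type defined by RFC 7644.
        "Content-Type": "application/scim+json",
    }
```

Your endpoint should reject requests whose `Authorization` header doesn't carry the expected token, since that header is the only credential a long-lived bearer token setup provides.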
For additional authentication and authorization methods, let us know on [UserVoice](https://aka.ms/appprovisioningfeaturerequest).

### Gallery go-to-market launch checklist

To help drive awareness and demand for our joint integration, we recommend you update your existing documentation and amplify the integration in your marketing channels. The following is a set of checklist activities we recommend you complete to support the launch.
-* **Sales and customer support readiness.** Ensure your sales and support teams are aware and can speak to the integration capabilities. Brief your sales and support team, provide them with FAQs and include the integration into your sales materials.
-* **Blog post and/or press release.** Craft a blog post or press release that describes the joint integration, the benefits and how to get started. [Example: Imprivata and Azure Active Directory Press Release](https://www.imprivata.com/company/press/imprivata-introduces-iam-cloud-platform-healthcare-supported-microsoft)
-* **Social media.** Leverage your social media like Twitter, Facebook or LinkedIn to promote the integration to your customers. Be sure to include @AzureAD so we can retweet your post. [Example: Imprivata Twitter Post](https://twitter.com/azuread/status/1123964502909779968)
-* **Marketing website.** Create or update your marketing pages (e.g. integration page, partner page, pricing page, etc.) to include the availability of the joint integration. [Example: Pingboard integration Page](https://pingboard.com/org-chart-for), [Smartsheet integration page](https://www.smartsheet.com/marketplace/apps/microsoft-azure-ad), [Monday.com pricing page](https://monday.com/pricing/)
-* **Technical documentation.** Create a help center article or technical documentation on how customers can get started. [Example: Envoy + Microsoft Azure Active Directory integration.](https://envoy.help/en/articles/3453335-microsoft-azure-active-directory-integration/
+> [!div class="checklist"]
+> * Ensure your sales and customer support teams are aware of and ready to speak to the integration capabilities. Brief your teams, provide them with FAQs, and include the integration in your sales materials.
+> * Craft a blog post or press release that describes the joint integration, the benefits and how to get started. [Example: Imprivata and Azure Active Directory Press Release](https://www.imprivata.com/company/press/imprivata-introduces-iam-cloud-platform-healthcare-supported-microsoft)
+> * Leverage your social media like Twitter, Facebook or LinkedIn to promote the integration to your customers. Be sure to include @AzureAD so we can retweet your post. [Example: Imprivata Twitter Post](https://twitter.com/azuread/status/1123964502909779968)
+> * Create or update your marketing pages/website (e.g. integration page, partner page, pricing page, etc.) to include the availability of the joint integration. [Example: Pingboard integration Page](https://pingboard.com/org-chart-for), [Smartsheet integration page](https://www.smartsheet.com/marketplace/apps/microsoft-azure-ad), [Monday.com pricing page](https://monday.com/pricing/)
+> * Create a help center article or technical documentation on how customers can get started. [Example: Envoy + Microsoft Azure Active Directory integration.](https://envoy.help/en/articles/3453335-microsoft-azure-active-directory-integration/)
-* **Customer communication.** Alert customers of the new integration through your customer communication (monthly newsletters, email campaigns, product release notes).
-
-## Related articles
-
-* [Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md)
-* [Automate user provisioning and deprovisioning to SaaS apps](user-provisioning.md)
-* [Customize attribute mappings for user provisioning](customize-application-attributes.md)
-* [Writing expressions for attribute mappings](functions-for-customizing-application-data.md)
-* [Scoping filters for user provisioning](define-conditional-rules-for-provisioning-user-accounts.md)
-* [Account provisioning notifications](user-provisioning.md)
-* [List of tutorials on how to integrate SaaS apps](../saas-apps/tutorial-list.md)
-
+> * Alert customers of the new integration through your customer communication (monthly newsletters, email campaigns, product release notes).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md)
+> [Automate user provisioning and deprovisioning to SaaS apps](user-provisioning.md)
+> [Customize attribute mappings for user provisioning](customize-application-attributes.md)
+> [Writing expressions for attribute mappings](functions-for-customizing-application-data.md)
+> [Scoping filters for user provisioning](define-conditional-rules-for-provisioning-user-accounts.md)
+> [Account provisioning notifications](user-provisioning.md)
+> [List of tutorials on how to integrate SaaS apps](../saas-apps/tutorial-list.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-password-ban-bad-on-premises-deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
@@ -82,7 +82,8 @@ The following core requirements apply:
* All machines, including domain controllers, that have Azure AD Password Protection components installed must have the Universal C Runtime installed. * You can get the runtime by making sure you have all updates from Windows Update. Or you can get it in an OS-specific update package. For more information, see [Update for Universal C Runtime in Windows](https://support.microsoft.com/help/2999226/update-for-uniersal-c-runtime-in-windows). * You need an account that has Active Directory domain administrator privileges in the forest root domain to register the Windows Server Active Directory forest with Azure AD.
-* The Key Distribution Service must be enabled on all domain controllers in the domain that run Windows Server 2012. By default, this service is enabled via manual trigger start.
+* The Key Distribution Service must be enabled on all domain controllers in the domain that run Windows Server 2012 and later versions. By default, this service is enabled via manual trigger start.
+ * Network connectivity must exist between at least one domain controller in each domain and at least one server that hosts the proxy service for Azure AD Password Protection. This connectivity must allow the domain controller to access RPC endpoint mapper port 135 and the RPC server port on the proxy service. * By default, the RPC server port is a dynamic RPC port, but it can be configured to [use a static port](#static). * All machines where the Azure AD Password Protection Proxy service will be installed must have network access to the following endpoints:
@@ -418,4 +419,4 @@ The `Get-AzureADPasswordProtectionDCAgent` cmdlet may be used to query the softw
## Next steps
-Now that you've installed the services that you need for Azure AD Password Protection on your on-premises servers, [enable on-prem Azure AD Password Protection in the Azure portal](howto-password-ban-bad-on-premises-operations.md) to complete your deployment.
+Now that you've installed the services that you need for Azure AD Password Protection on your on-premises servers, [enable on-prem Azure AD Password Protection in the Azure portal](howto-password-ban-bad-on-premises-operations.md) to complete your deployment.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-howto-app-gallery-listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-howto-app-gallery-listing.md
@@ -181,7 +181,7 @@ You will need an Azure AD tenant in order to test your app. To set up your devel
Alternatively, an Azure AD tenant comes with every Microsoft 365 subscription. To set up a free Microsoft 365 development environment, see [Join the Microsoft 365 Developer Program](/office/developer-program/microsoft-365-developer-program).
-Once you have a tenant, test single-sign on and [provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md#step-4-integrate-your-scim-endpoint-with-the-azure-ad-scim-client).
+Once you have a tenant, test single-sign on and [provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md#integrate-your-scim-endpoint-with-the-aad-scim-client).
**For OIDC or OAuth applications**, [Register your application](quickstart-register-app.md) as a multi-tenant application. Select the **Accounts in any organizational directory and personal Microsoft accounts** option in **Supported account types**.
@@ -314,4 +314,4 @@ The Microsoft Partner Network provides instant access to exclusive resources, pr
## Next steps * [Build a SCIM endpoint and configure user provisioning](../app-provisioning/use-scim-to-provision-users-and-groups.md)
-* [Authentication scenarios for Azure AD](authentication-flows-app-scenarios.md)
+* [Authentication scenarios for Azure AD](authentication-flows-app-scenarios.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/external-identities/o365-external-user https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/o365-external-user.md
@@ -6,7 +6,7 @@
Previously updated : 11/11/2020 Last updated : 02/04/2021
@@ -30,8 +30,7 @@ OneDrive/SharePoint Online has a separate invitation manager. Support for extern
- Azure AD B2B collaboration invited users can be picked from OneDrive/SharePoint Online sharing dialog boxes. OneDrive/SharePoint Online invited users also show up in Azure AD after they redeem their invitations. -- The licensing requirements differ. To learn more about licensing, see [Azure AD B2B licensing](./external-identities-pricing.md) and ["What is an external user?" in the SharePoint Online external sharing overview](/sharepoint/external-sharing-overview#what-happens-when-users-share).-
+- The licensing requirements differ. To learn more about licensing, see [Azure AD External Identities licensing](./external-identities-pricing.md) and [the SharePoint Online external sharing overview](/sharepoint/external-sharing-overview).
To manage external sharing in OneDrive/SharePoint Online with Azure AD B2B collaboration, set the OneDrive/SharePoint Online external sharing setting to **Allow sharing only with the external users that already exist in your organization's directory**. Users can go to externally shared sites and pick from external collaborators that the admin has added. The admin can add the external collaborators through the B2B collaboration invitation APIs.
@@ -47,4 +46,4 @@ You can enable this feature by using the setting 'ShowPeoplePickerSuggestionsFor
* [Adding a B2B collaboration user to a role](add-guest-to-role.md) * [Delegate B2B collaboration invitations](delegate-invitations.md) * [Dynamic groups and B2B collaboration](use-dynamic-groups.md)
-* [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
+* [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/monitor-sign-in-health-for-resilience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/monitor-sign-in-health-for-resilience.md
@@ -0,0 +1,286 @@
+
+ Title: Monitor application sign-in health for resilience in Azure Active Directory
+description: Create queries and notifications to monitor the sign-in health of your applications.
+ Last updated : 01/10/2021
+# Monitoring application sign-in health for resilience
+
+To increase infrastructure resilience, set up monitoring of application sign-in health for your critical applications so that you receive an alert if an impacting incident occurs. To assist you in this effort, you can configure alerts based on the sign-in health workbook.
+
+This workbook enables administrators to monitor authentication requests for applications in your tenant. It provides these key capabilities:
+
+* Configure the workbook to monitor all or individual apps with near real-time data.
+
+* Configure alerts to notify you when authentication patterns change so that you can investigate and take action.
+
+* Compare trends over a period, for example week over week, which is the workbook's default setting.
+
+> [!NOTE]
+> To see all available workbooks, and the prerequisites for using them, please see [How to use Azure Monitor workbooks for reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md).
+
+During an impacting event, two things may happen:
+
+* The number of sign-ins for an application may drop precipitously because users can't sign in.
+
+* The number of sign-in failures can increase.
+
+This article walks through setting up the sign-in health workbook to monitor for disruptions to your users' sign-ins.
+
+## Prerequisites
+
+* An Azure AD tenant.
+
+* A user with global administrator or security administrator role for the Azure AD tenant.
+
+* A Log Analytics workspace in your Azure subscription to send logs to Azure Monitor logs.
+
+ * Learn how to [create a Log Analytics workspace](https://docs.microsoft.com/azure/azure-monitor/learn/quick-create-workspace)
+
+* Azure AD logs integrated with Azure Monitor logs
+
+ * Learn how to [Integrate Azure AD sign-in logs with Azure Monitor streams](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md).
+
+
+
+## Configure the App sign in health workbook
+
+To access workbooks, open the **Azure portal**, select **Azure Active Directory**, and then select **Workbooks**.
+
+You'll see workbooks under Usage, Conditional Access, and Troubleshoot. The App sign in health workbook appears in the Usage section.
+
+Once you use a workbook, it may appear in the Recently modified workbooks section.
+
+![Screenshot showing the workbooks gallery in the Azure portal.](./media/monitor-sign-in-health-for-resilience/sign-in-health-workbook.png)
++
+The App sign in health workbook enables you to visualize what is happening with your sign-ins.
+
+By default, the workbook presents two graphs. These graphs compare what is happening to your app(s) now versus the same period a week ago. The blue lines are current, and the orange lines are the previous week.
+
+![Screenshot showing sign in health graphs.](./media/monitor-sign-in-health-for-resilience/sign-in-health-graphs.png)
+
+**The first graph is Hourly usage (number of successful users)**. Comparing your current number of successful users to a typical usage period helps you to spot a drop in usage that may require investigation. A drop in successful usage rate can help detect performance and utilization issues that the failure rate can't. For example if users can't reach your application to attempt to sign in, there would be no failures, only a drop in usage. A sample query for this data can be found in the following section.
+
+**The second graph is Hourly failure rate**. A spike in failure rate may indicate an issue with your authentication mechanisms. Failure rate can only be measured if users can attempt to authenticate. If users can't gain access to make the attempt, failures won't show.
+
+You can configure an alert that notifies a specific group when the usage or failure rate exceeds a specified threshold. A sample query for this data can be found in the following section.
+
+ ## Configure the query and alerts
+
+You create alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals.
+
+Use the following instructions to create email alerts based on the queries reflected in the graphs. Sample scripts below will send an email notification when
+
+* the successful usage drops by 90% from the same hour two days ago, as in the hourly usage graph in the previous section.
+
+* the failure rate increases by 90% from the same hour two days ago, as in the hourly failure rate graph in the previous section.
+
+ To configure the underlying query and set alerts, complete the following steps. You'll use the Sample Query as the basis for your configuration. An explanation of the query structure appears at the end of this section.
+
+For more information on how to create, view, and manage log alerts using Azure Monitor see [Manage log alerts](https://docs.microsoft.com/azure/azure-monitor/platform/alerts-log).
+
+
+1. In the workbook, select **Edit**, then select the **query icon** just above the right-hand side of the graph.
+
+ [![Screenshot showing edit workbook.](./media/monitor-sign-in-health-for-resilience/edit-workbook.png)](./media/monitor-sign-in-health-for-resilience/edit-workbook.png)
+
+ The query log opens.
+
+ [![Screenshot showing the query log.](./media/monitor-sign-in-health-for-resilience/query-log.png)](./media/monitor-sign-in-health-for-resilience/query-log.png)
+
+
+2. Copy one of the following sample scripts for a new Kusto query.
+
+**Kusto query for drop in usage**
+
+```Kusto
+
+let thisWeek = SigninLogs
+
+| where TimeGenerated > ago(1h)
+
+| project TimeGenerated, AppDisplayName, UserPrincipalName
+
+//| where AppDisplayName contains "Office 365 Exchange Online"
+
+| summarize users = dcount(UserPrincipalName) by bin(TimeGenerated, 1hr)
+
+| sort by TimeGenerated desc
+
+| serialize rn = row_number();
+
+let lastWeek = SigninLogs
+
+| where TimeGenerated between((ago(1h) - totimespan(2d))..(now() - totimespan(2d)))
+
+| project TimeGenerated, AppDisplayName, UserPrincipalName
+
+//| where AppDisplayName contains "Office 365 Exchange Online"
+
+| summarize usersPriorWeek = dcount(UserPrincipalName) by bin(TimeGenerated, 1hr)
+
+| sort by TimeGenerated desc
+
+| serialize rn = row_number();
+
+thisWeek
+
+| join
+
+(
+
+ lastWeek
+
+)
+
+on rn
+
+| project TimeGenerated, users, usersPriorWeek, difference = abs(users - usersPriorWeek), max = max_of(users, usersPriorWeek)
+
+| where (difference * 2.0) / max > 0.9
+
+```
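The final `where` clause above is the alert condition itself. Restated in plain code (a Python paraphrase for illustration, not part of the workbook), it flags an hour when twice the absolute change exceeds 90% of the larger of the two user counts:

```python
def usage_dropped(users, users_prior_week, threshold=0.9):
    """Mirror of the query's final filter:
    (difference * 2.0) / max > 0.9"""
    difference = abs(users - users_prior_week)
    largest = max(users, users_prior_week)
    # Guard against division by zero when there was no traffic in either hour.
    return largest > 0 and (difference * 2.0) / largest > threshold
```

Note that this condition also fires on a steep *increase* in usage, since it compares the absolute difference; tighten it to `users < users_prior_week` as well if you only care about drops.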
+
+
+
+**Kusto query for increase in failure rate**
++
+```kusto
+
+let thisWeek = SigninLogs
+
+| where TimeGenerated > ago(1h)
+
+| project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure")
+
+| where AppDisplayName == "<APP NAME>"
+
+| summarize success = countif(status == "success"), failure = countif(status == "failure") by bin(TimeGenerated, 1h)
+
+| project TimeGenerated, failureRate = (failure * 1.0) / ((failure + success) * 1.0)
+
+| sort by TimeGenerated desc
+
+| serialize rn = row_number();
+
+let lastWeek = SigninLogs
+
+| where TimeGenerated between((ago(1h) - totimespan(2d))..(now() - totimespan(2d)))
+
+| project TimeGenerated, UserPrincipalName, AppDisplayName, status = case(Status.errorCode == "0", "success", "failure")
+
+| where AppDisplayName == "<APP NAME>"
+
+| summarize success = countif(status == "success"), failure = countif(status == "failure") by bin(TimeGenerated, 1h)
+
+| project TimeGenerated, failureRatePriorWeek = (failure * 1.0) / ((failure + success) * 1.0)
+
+| sort by TimeGenerated desc
+
+| serialize rn = row_number();
+
+thisWeek
+
+| join (lastWeek) on rn
+
+| project TimeGenerated, failureRate, failureRatePriorWeek
+
+| where abs(failureRate - failureRatePriorWeek) > <THRESHOLD VALUE>
+
+```
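The projections in this query reduce to two small calculations, sketched below in Python for illustration (the threshold is the number you substitute for the placeholder in the query):

```python
def failure_rate(success, failure):
    """failureRate as projected in the query: failures over all attempts."""
    total = success + failure
    return failure / total if total else 0.0

def failure_rate_spiked(rate, rate_prior_week, threshold):
    """Mirror of the query's final filter: absolute change above threshold."""
    return abs(rate - rate_prior_week) > threshold
```

Because the rate is a proportion between 0 and 1, the threshold is too: a value of 0.2, for example, alerts when the failure rate moves 20 percentage points from the comparison hour.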
+
+3. Paste the query in the window and select **Run**. Ensure you see the Completed message shown in the image below, and results below that message.
+
+ [![Screenshot showing the run query results.](./media/monitor-sign-in-health-for-resilience/run-query.png)](./media/monitor-sign-in-health-for-resilience/run-query.png)
+
+4. Highlight the query, and select + **New alert rule**.
+
+ [![Screenshot showing the new alert rule screen.](./media/monitor-sign-in-health-for-resilience/new-alert-rule.png)](./media/monitor-sign-in-health-for-resilience/new-alert-rule.png)
++
+5. Configure alert conditions.
In the **Condition** section, select the link **Whenever the average custom log search is greater than logic defined count**. In the **Configure signal logic** pane, scroll to **Alert logic**.
+
+ [![Screenshot showing configure alerts screen.](./media/monitor-sign-in-health-for-resilience/configure-alerts.png)](./media/monitor-sign-in-health-for-resilience/configure-alerts.png)
+
+ * **Threshold value**: 0. This value will alert on any results.
+
+ * **Evaluation period (in minutes)**: 60. This value evaluates an hour of data at a time.
+
+ * **Frequency (in minutes)**: 60. This value sets the evaluation period to once per hour for the previous hour.
+
+ * Select **Done**.
+
+6. In the **Actions** section, configure these settings:
+
+ [![Screenshot showing the Create alert rule page.](./media/monitor-sign-in-health-for-resilience/create-alert-rule.png)](./media/monitor-sign-in-health-for-resilience/create-alert-rule.png)
+
+ * Under **Actions**, choose **Select action group**, and add the group you want to be notified of alerts.
+
+ * Under **Customize actions** select **Email alerts**.
+
+ * Add a **subject line**.
+
+7. Under **Alert rule details**, configure these settings:
+
+ * Add a descriptive name and a description.
+
+ * Select the **resource group** to which to add the alert.
+
+ * Select the default **severity** of the alert.
+
+ * Select **Enable alert rule upon creation** if you want it live immediately, else select **Suppress alerts**.
+
+8. Select **Create alert rule**.
+
+9. Select **Save**, enter a name for the query, and save it as a query with a category of **Alert**. Then select **Save** again.
+
+ [![Screenshot showing the save query button.](./media/monitor-sign-in-health-for-resilience/save-query.png)](./media/monitor-sign-in-health-for-resilience/save-query.png)
+++
+### Refine your queries and alerts
+Modify your queries and alerts for maximum effectiveness.
+
+* Be sure to test your alerts.
+
+* Modify alert sensitivity and frequency so that you get important notifications. Admins can become desensitized to alerts if they get too many and miss something important.
+
+* Ensure the address that alerts are sent from is added to the allowed senders list in your administrators' email clients. Otherwise, you may miss notifications because of a spam filter.
+
+* Alert queries in Azure Monitor can only include results from the past 48 hours. [This is a current limitation by design](https://github.com/MicrosoftDocs/azure-docs/issues/22637).
+
+## Create processes to manage alerts
+
+Once you have set up the query and alerts, create business processes to manage the alerts.
+
+* Who will monitor the workbook and when?
+* When an alert is generated, who will investigate?
+
+* What are the communication needs? Who will create the communications and who will receive them?
+
+* If an outage occurs, what business processes need to be triggered?
+
+## Next steps
+
+[Learn more about workbooks](https://docs.microsoft.com/azure/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks)
+
+
+
+
+
+
active-directory https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
@@ -44,7 +44,7 @@ This page is updated monthly, so revisit it regularly. If you're looking for ite
In the past, the secret token field could be kept empty when setting up provisioning on the custom / BYOA application. This function was intended to solely be used for testing. We'll update the UI to make the field required.
-Customers can work around this requirement for testing purposes by using a feature flag in the browser URL. [Learn more](../app-provisioning/use-scim-to-provision-users-and-groups.md#authorization-for-provisioning-connectors-in-the-application-gallery).
+Customers can work around this requirement for testing purposes by using a feature flag in the browser URL. [Learn more](../app-provisioning/use-scim-to-provision-users-and-groups.md#authorization-to-provisioning-connectors-in-the-application-gallery).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/governance/complete-access-review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/complete-access-review.md
@@ -12,7 +12,7 @@ na
ms.devlang: na Previously updated : 12/07/2020 Last updated : 02/08/2021
@@ -43,35 +43,47 @@ You can track the progress as the reviewers complete their reviews.
To view future instances of an access reviews, navigate to the access review, and select Scheduled reviews.
- On the **Overview** page, you can see the progress. No access rights are changed in the directory until the review is completed.
+ On the **Overview** page, you can see the progress of the current instance. No access rights are changed in the directory until the review is completed.
- ![Access reviews progress](./media/complete-access-review/overview-progress.png)
-
- If you are viewing an access review that reviews guest access across Microsoft 365 groups (Preview), the Overview blade lists each group in the review.
+ ![Review of All company group](./media/complete-access-review/all-company-group.png)
- ![review guest access across Microsoft 365 groups](./media/complete-access-review/review-guest-access-across-365-groups.png)
+ All blades under **Current** are only viewable while each review instance is active.
- Click on a group to see the progress of the review on that group.
+ The Results page provides more information on each user under review in the instance, including the ability to Stop, Reset, and Download results.
+
+ ![Review guest access across Microsoft 365 groups](./media/complete-access-review/all-company-group-results.png)
++
+ If you are viewing an access review that reviews guest access across Microsoft 365 groups (Preview), the Overview blade lists each group in the review.
+
+ ![review guest access across Microsoft 365 groups](./media/complete-access-review/review-guest-access-across-365-groups.png)
+
+ Click on a group to see the progress of the review on that group, as well as to Stop, Reset, Apply, or Delete it.
![review guest access across Microsoft 365 groups in detail](./media/complete-access-review/progress-group-review.png) 1. If you want to stop an access review before it has reached the scheduled end date, click the **Stop** button.
- When stop a review, reviewers will no longer be able to give responses. You can't restart a review after it's stopped.
+ When you stop a review, reviewers will no longer be able to give responses. You can't restart a review after it's stopped.
1. If you're no longer interested in the access review, you can delete it by clicking the **Delete** button. ## Apply the changes
-If **Auto apply results to resource** was enabled and based on your selections in **Upon completion settings**, auto-apply will be executed after the review's end date or when you manually stop the review.
+If **Auto apply results to resource** was enabled based on your selections in **Upon completion settings**, auto-apply will be executed after the review's end date or when you manually stop the review.
-If **Auto apply results to resource** wasn't enabled for the review, click **Apply** to manually apply the changes. If a user's access was denied in the review, when you click **Apply**, Azure AD removes their membership or application assignment.
+If **Auto apply results to resource** wasn't enabled for the review, navigate to **Review History** under **Series** after the review duration ends or the review is stopped early, and click on the instance of the review you'd like to apply.
![Apply access review changes](./media/complete-access-review/apply-changes.png)
+Click **Apply** to manually apply the changes. If a user's access was denied in the review, when you click **Apply**, Azure AD removes their membership or application assignment.
+
+![Apply access review changes button](./media/complete-access-review/apply-changes-button.png)
+
+
The status of the review will change from **Completed** through intermediate states such as **Applying** and finally to the **Result applied** state. You should expect to see denied users, if any, being removed from the group membership or application assignment in a few minutes.
-A configured auto applying review, or selecting **Apply** doesn't have an effect on a group that originates in an on-premises directory or a dynamic group. If you want to change a group that originates on-premises, download the results and apply those changes to the representation of the group in that directory.
+Manually or automatically applying results doesn't have an effect on a group that originates in an on-premises directory or a dynamic group. If you want to change a group that originates on-premises, download the results and apply those changes to the representation of the group in that directory.
## Retrieve the results
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-staged-rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
@@ -79,10 +79,6 @@ The following scenarios are not supported for staged rollout:
- Dynamic groups are *not supported* for staged rollout. - Contact objects inside the group will block the group from being added. -- You still need to make the final cutover from federated to cloud authentication by using Azure AD Connect or PowerShell. Staged rollout doesn't switch domains from federated to managed. For more information about domain cutover, see [Migrate from federation to password hash synchronization](plan-migrate-adfs-password-hash-sync.md) and [Migrate from federation to pass-through authentication](plan-migrate-adfs-pass-through-authentication.md)--- - When you first add a security group for staged rollout, you're limited to 200 users to avoid a UX time-out. After you've added the group, you can add more users directly to it, as required. - While users are in Staged Rollout, when EnforceCloudPasswordPolicyForPasswordSyncedUsers is enabled, password expiration policy is set to 90 days with no option to customize it.
@@ -91,7 +87,9 @@ The following scenarios are not supported for staged rollout:
- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for all versions, when user's on-premises UPN is not routable. This scenario will fall back to the WS-Trust endpoint while in staged rollout mode, but will stop working when staged migration is complete and user sign-on is no longer relying on federation server. -
+ >[!NOTE]
+ >You still need to make the final cutover from federated to cloud authentication by using Azure AD Connect or PowerShell. Staged rollout doesn't switch domains from federated to managed. For more information about domain cutover, see [Migrate from federation to password hash synchronization](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso) and [Migrate from federation to pass-through authentication](plan-migrate-adfs-pass-through-authentication.md)
+
## Get started with staged rollout To test the *password hash sync* sign-in by using staged rollout, follow the pre-work instructions in the next section.
@@ -254,3 +252,5 @@ A: Yes. To learn how to use PowerShell to perform staged rollout, see [Azure AD
## Next steps - [Azure AD 2.0 preview](/powershell/module/azuread/?view=azureadps-2.0-preview#staged_rollout )
+- [Change the sign-in method to password hash synchronization](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso)
+- [Change the sign-in method to pass-through authentication](plan-migrate-adfs-pass-through-authentication.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/application-proxy-add-on-premises-application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-add-on-premises-application.md
@@ -8,18 +8,22 @@
Previously updated: 02/04/2021. Last updated: 02/09/2021.

# Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory
-Azure Active Directory (Azure AD) has an Application Proxy service that enables users to access on-premises applications by signing in with their Azure AD account. This tutorial prepares your environment for use with Application Proxy. Once your environment is ready, you'll use the Azure portal to add an on-premises application to your Azure AD tenant. To **view your apps and get up to speed quickly** with App Management in Azure, be sure to check out the [Quickstart Series](view-applications-portal.md).
+Azure Active Directory (Azure AD) has an Application Proxy service that enables users to access on-premises applications by signing in with their Azure AD account. To learn more about Application Proxy, see [What is App Proxy?](what-is-application-proxy.md). This tutorial prepares your environment for use with Application Proxy. Once your environment is ready, you'll use the Azure portal to add an on-premises application to your Azure AD tenant.
:::image type="content" source="./media/application-proxy-add-on-premises-application/app-proxy-diagram.png" alt-text="Application Proxy Overview Diagram" lightbox="./media/application-proxy-add-on-premises-application/app-proxy-diagram.png":::
+Before you get started, make sure you are familiar with app management and **Single Sign-On (SSO)** concepts. Check out the following links:
+- [Quickstart Series on App Management in Azure AD](view-applications-portal.md)
+- [What is Single Sign-On (SSO)?](what-is-single-sign-on.md)
+
Connectors are a key part of Application Proxy. To learn more about connectors, see [Understand Azure AD Application Proxy connectors](application-proxy-connectors.md).

This tutorial:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory.md
@@ -0,0 +1,634 @@
+
+ Title: 'Migrate application authentication to Azure Active Directory'
+description: This whitepaper details the planning for and benefits of migrating your application authentication to Azure AD.
+Last updated: 02/05/2021
+# Migrate application authentication to Azure Active Directory
+
+## About this paper
+
+This whitepaper details the planning for and benefits of migrating your application authentication to Azure AD. It is designed for Azure administrators and identity professionals.
+
+The process is broken into four phases, each with detailed planning and exit criteria, to help you plan your migration strategy and understand how Azure AD authentication supports your organizational goals.
+
+## Introduction
+
+Today, your organization requires a slew of applications (apps) for users to get work done. You likely continue to add, develop, or retire apps every day. Users access these applications from a vast range of corporate and personal devices and locations. They open apps in many ways, including:
+
+- through a company homepage or portal
+
+- by bookmarking on their browsers
+
+- via a vendor's URL for software as a service (SaaS) apps
+
+- links pushed directly to users' desktops or mobile devices via a mobile device/application management (MDM/MAM) solution
+
+Your applications are likely using the following types of authentication:
+
+- On-premises federation solutions (such as Active Directory Federation Services (ADFS) and Ping)
+
+- Active Directory (such as Kerberos Auth and Windows Integrated Auth)
+
+- Other cloud-based identity and access management (IAM) solutions (such as Okta or Oracle)
+
+- On-premises web infrastructure (such as IIS and Apache)
+
+- Cloud-hosted infrastructure (such as Azure and AWS)
+
+**To ensure that the users can easily and securely access applications, your goal is to have a single set of access controls and policies across your on-premises and cloud environments.**
+
+[Azure Active Directory (Azure AD)](/azure/active-directory/fundamentals/active-directory-whatis) offers a universal identity platform that provides your people, partners, and customers a single identity to access the applications they want and collaborate from any platform and device.
+
+![A diagram of Azure Active Directory connectivity](media/migrating-application-authentication-to-azure-active-directory-1.jpg)
+
+Azure AD has a [full suite of identity management capabilities](/azure/active-directory/fundamentals/active-directory-whatis#which-features-work-in-azure-ad). Standardizing your app authentication and authorization to Azure AD enables you to get the benefits these capabilities provide.
+
+See additional migration resources at [https://aka.ms/migrateapps](https://aka.ms/migrateapps)
+
+## Benefits of migrating app authentication to Azure AD
+
+Moving app authentication to Azure AD will help you manage risk and cost, increase productivity, and address compliance and governance requirements.
+
+### Manage risk
+
+Safeguarding your apps requires that you have a full view of all the risk factors. Migrating your apps to Azure AD consolidates your security solutions. With it you can:
+
+- Improve secure user access to applications and associated corporate data using [Conditional Access policies](/azure/active-directory/active-directory-conditional-access-azure-portal), [Multi-Factor Authentication](/azure/active-directory/authentication/concept-mfa-howitworks), and real-time risk-based [Identity Protection](/azure/active-directory/active-directory-identityprotection) technologies.
+
+- Protect privileged users' access to your environment with [Just-In-Time](/azure/managed-applications/request-just-in-time-access) admin access.
+
+- Use the [multi-tenant, geo-distributed, high availability design of Azure AD](https://cloudblogs.microsoft.com/enterprisemobility/2014/09/02/azure-ad-under-the-hood-of-our-geo-redundant-highly-available-distributed-cloud-directory/) for your most critical business needs.
+
+- Protect your legacy applications with one of our [secure hybrid access partner integrations](https://aka.ms/secure-hybrid-access) that you may have already deployed.
+
+### Manage cost
+
+Your organization may have multiple Identity Access Management (IAM) solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce dependencies on IAM licenses (on-premises or in the cloud) and infrastructure costs. In cases where you may have already paid for Azure AD via M365 licenses, there is no reason to pay the added cost of another IAM solution.
+
+**With Azure AD, you can reduce infrastructure costs by:**
+
+- Providing secure remote access to on-premises apps using [Azure AD Application Proxy](/azure/active-directory/manage-apps/application-proxy).
+
+- Decoupling apps from the on-prem credential approach in your tenant by [setting up Azure AD as the trusted universal identity provider](/azure/active-directory/hybrid/plan-connect-user-signin#choosing-the-user-sign-in-method-for-your-organization).
+
+### Increase productivity
+
+Economics and security benefits drive organizations to adopt Azure AD, but full adoption and compliance are more likely if users benefit too. With Azure AD, you can:
+
+- Improve end-user [Single Sign-On (SSO)](/azure/active-directory/manage-apps/what-is-single-sign-on) experience through seamless and secure access to any application, from any device and any location.
+
+- Leverage self-service IAM capabilities, such as [Self-Service Password Reset](/azure/active-directory/authentication/concept-sspr-howitworks) and [Self-Service Group Management](/azure/active-directory/users-groups-roles/groups-self-service-management).
+
+- Reduce administrative overhead by managing only a single identity for each user across cloud and on-premises environments:
+
+ - [Automate provisioning](/azure/active-directory/active-directory-saas-app-provisioning) of user accounts (in the [Azure AD Gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps)) based on Azure AD identities
+ - Access all your apps from the My Apps panel in the [Azure portal](https://portal.azure.com/)
+
+- Enable developers to secure access to their apps and improve the end-user experience by using the [Microsoft Identity Platform](/azure/active-directory/develop/about-microsoft-identity-platform) with the Microsoft Authentication Library (MSAL).
+
+- Empower your partners with access to cloud resources using [Azure AD B2B collaboration](/azure/active-directory/active-directory-b2b-what-is-azure-ad-b2b). This removes the overhead of configuring point-to-point federation with your partners.
+
+### Address compliance and governance
+
+Ensure compliance with regulatory requirements by enforcing corporate access policies and monitoring user access to applications and associated data using integrated audit tools and APIs. With Azure AD, you can monitor application sign-ins through reports that leverage [Security Incident and Event Monitoring (SIEM) tools](/azure/active-directory/reports-monitoring/plan-monitoring-and-reporting). You can access the reports from the portal or APIs, and programmatically audit who has access to your applications and remove access for inactive users via access reviews.
+
+## Plan your migration phases and project strategy
+
+When technology projects fail, it is often due to mismatched expectations, the right stakeholders not being involved, or a lack of communication. Ensure your success by planning the project itself.
+
+### The phases of migration
+
+Before we get into the tools, you should understand how to think through the migration process. Based on several direct-to-customer workshops, we recommend the following four phases:
+
+![A diagram of the phases of migration](media/migrating-application-authentication-to-azure-active-directory-2.jpg)
+
+### Assemble the project team
+
+Application migration is a team effort, and you need to ensure that you have all the vital positions filled. Support from senior business leaders is important. Ensure that you involve the right set of executive sponsors, business decision-makers, and subject matter experts (SMEs).
+
+During the migration project, one person may fulfill multiple roles, or multiple people may fulfill each role, depending on your organization's size and structure. You may also have a dependency on other teams that play a key role in your security landscape.
+
+The following table includes the key roles and their contributions:
+
+| Role | Contributions |
+| - | - |
+| **Project Manager** | Project coach accountable for guiding the project, including:<br /> - gain executive support<br /> - bring in stakeholders<br /> - manage schedules, documentation, and communications |
+| **Identity Architect / Azure AD App Administrator** | They are responsible for the following:<br /> - design the solution in cooperation with stakeholders<br /> - document the solution design and operational procedures for handoff to the operations team<br /> - manage the pre-production and production environments |
+| **On-premises AD operations team** | The organization that manages the different on-premises identity sources such as AD forests, LDAP directories, and HR systems.<br /> - perform any remediation tasks needed before synchronizing<br /> - provide the service accounts required for synchronization<br /> - provide access to configure federation to Azure AD |
+| **IT Support Manager** | A representative from the IT support organization who can provide input on the supportability of this change from a helpdesk perspective. |
+| **Security Owner** | A representative from the security team who can ensure that the plan meets the security requirements of your organization. |
+| **Application technical owners** | Technical owners of the apps and services that will integrate with Azure AD. They provide the applications' identity attributes that should be included in the synchronization process. They usually have a relationship with CSV representatives. |
+| **Application business owners** | Representative colleagues who can provide input on the user experience and usefulness of this change from a user's perspective, and who own the overall business aspect of the application, which may include managing access. |
+| **Pilot group of users** | Users who will test the pilot experience as a part of their daily work and provide feedback to guide the rest of the deployment. |
+
+### Plan communications
+
+Effective business engagement and communication is the key to success. It is important to give stakeholders and end-users an avenue to get information and stay informed of schedule updates. Educate everyone about the value of the migration, what the expected timelines are, and how to plan for any temporary business disruption. Use multiple avenues such as briefing sessions, emails, one-to-one meetings, banners, and town halls.
+
+Based on the communication strategy that you have chosen for the app, you may want to remind users of the pending downtime. You should also verify that there are no recent changes or business impacts that would require you to postpone the deployment.
+
+In the following tables, you will find the minimum suggested communications to keep your stakeholders informed:
+
+**Plan phases and project strategy**:
+
+| Communication | Audience |
+| | - |
+| Awareness and business / technical value of project | All except end-users |
+| Solicitation for pilot apps | - App business owners<br />- App technical owners<br />- Architects and Identity team |
+
+**Phase 1 – Discover and scope**:
+
+| Communication | Audience |
+| | - |
+| - Solicitation for application information<br />- Outcome of scoping exercise | - App technical owners<br />- App business owners |
+
+**Phase 2 – Classify apps and plan pilot**:
+
+| Communication | Audience |
+| | - |
+| - Outcome of classifications and what that means for migration schedule<br />- Preliminary migration schedule | - App technical owners<br /> - App business owners |
+
+**Phase 3 – Plan migration and testing**:
+
+| Communication | Audience |
+| | - |
+| - Outcome of application migration testing | - App technical owners<br />- App business owners |
+| - Notification that migration is coming and explanation of resultant end-user experiences.<br />- Downtime coming and complete communications, including what they should now do, feedback, and how to get help | - End users (and all others) |
+
+**Phase 4 – Manage and gain insights**:
+
+| Communication | Audience |
+| | - |
+| Available analytics and how to access | - App technical owners<br />- App business owners |
+
+### Migration states communication dashboard
+
+Communicating the overall state of the migration project is crucial, as it shows progress, and helps app owners whose apps are coming up for migration to prepare for the move. You can put together a simple dashboard using Power BI or other reporting tools to provide visibility into the status of applications during the migration.
+
+The migration states you might consider using are as follows:
+
+| Migration states | Action plan |
+| - | |
+| **Initial Request** | Find the app and contact the owner for more information |
+| **Assessment Complete** | App owner evaluates the app requirements and returns the app questionnaire |
+| **Configuration in Progress** | Develop the changes necessary to manage authentication against Azure AD |
+| **Test Configuration Successful** | Evaluate the changes and authenticate the app against the test Azure AD tenant in the test environment |
+| **Production Configuration Successful** | Change the configurations to work against the production Azure AD tenant and assess the app authentication in the test environment |
+| **Complete / Sign Off** | Deploy the changes for the app to the production environment and execute them against the production Azure AD tenant |
+
+This will ensure app owners know what the app migration and testing schedule is when their apps are up for migration, and what the results are from other apps that have already been migrated. You might also consider providing links to your bug-tracker database so owners can file and view issues for apps that are being migrated.
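To make the dashboard concrete, the table above can be modeled as an ordered set of states. This Python sketch is purely illustrative: the state names come from the table, while the helper functions and their names are hypothetical, not part of any Microsoft tooling.

```python
from enum import Enum

class MigrationState(Enum):
    """The six migration states from the table above."""
    INITIAL_REQUEST = "Initial Request"
    ASSESSMENT_COMPLETE = "Assessment Complete"
    CONFIG_IN_PROGRESS = "Configuration in Progress"
    TEST_CONFIG_OK = "Test Configuration Successful"
    PROD_CONFIG_OK = "Production Configuration Successful"
    COMPLETE = "Complete / Sign Off"

# States advance linearly, so per-app progress is just position in the order.
ORDER = list(MigrationState)

def progress(state: MigrationState) -> float:
    """Fraction of the migration pipeline completed for one app (0.0-1.0)."""
    return ORDER.index(state) / (len(ORDER) - 1)

def dashboard_counts(apps: dict) -> dict:
    """Aggregate an app -> state mapping into state -> count for a dashboard view."""
    counts = {state: 0 for state in MigrationState}
    for state in apps.values():
        counts[state] += 1
    return counts
```

Feeding the per-app states into a Power BI table (or any reporting tool) then gives app owners the at-a-glance status this section describes.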
+
+### Best practices
+
+The following are our customer and partner success stories, and suggested best practices:
+
+- [Five tips to improve the migration process to Azure Active Directory](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Five-tips-to-improve-the-migration-process-to-Azure-Active/ba-p/445364) by Patriot Consulting, a member of our partner network that focuses on helping customers deploy Microsoft cloud solutions securely.
+
+- [Develop a risk management strategy for your Azure AD application migration](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Develop-a-risk-management-strategy-for-your-Azure-AD-application/ba-p/566488) by Edgile, a partner that focuses on IAM and risk management solutions.
+
+## Phase 1: Discover and scope apps
+
+**Application discovery and analysis is a fundamental exercise to give you a good start.** You may not know everything, so be prepared to accommodate unknown apps.
+
+### Find your apps
+
+The first decision point in an application migration is which apps to migrate, which if any should remain, and which apps to deprecate. There is always an opportunity to deprecate the apps that you will not use in your organization. There are several ways to find apps in your organization. **While discovering apps, ensure you are including in-development and planned apps. Use Azure AD for authentication in all future apps.**
+
+### Using Active Directory Federation Services (AD FS)
+
+To gather a correct app inventory:
+
+- **Use Azure AD Connect Health.** If you have an Azure AD Premium license, we recommend deploying [Azure AD Connect Health](/azure/active-directory/hybrid/how-to-connect-health-adfs) to analyze the app usage in your on-premises environment. You can use the [ADFS application report](/azure/active-directory/manage-apps/migrate-adfs-application-activity) (preview) to discover ADFS applications that can be migrated and evaluate the readiness of each application to be migrated. After completing your migration, deploy [Cloud Discovery](/cloud-app-security/set-up-cloud-discovery), which allows you to continuously monitor Shadow IT in your organization once you're in the cloud.
+
+- **AD FS log parsing.** If you don't have Azure AD Premium licenses, we recommend using the ADFS to Azure AD app migration tools based on [PowerShell](https://github.com/AzureAD/Deployment-Plans/tree/master/ADFS%20to%20AzureAD%20App%20Migration). Refer to the solution guide [Migrating apps from Active Directory Federation Services (AD FS) to Azure AD](https://aka.ms/migrateapps/adfssolutionguide).
+
+### Using other identity providers (IdPs)
+
+For other identity providers (such as Okta or Ping), you can use their tools to export the application inventory. You may also consider looking at service principals registered in Azure AD that correspond to the web apps in your organization.
+
+### Using cloud discovery tools
+
+In the cloud environment, you need rich visibility, control over data travel, and sophisticated analytics to find and combat cyber threats across all your cloud services. You can gather your cloud app inventory using the following tools:
+
+- **Cloud Access Security Broker (CASB)** – A [CASB](/cloud-app-security/) typically works alongside your firewall to provide visibility into your employees' cloud application usage and helps you protect your corporate data from cybersecurity threats. The CASB report can help you determine the most used apps in your organization, and the early targets to migrate to Azure AD.
+
+- **Cloud Discovery** - By configuring [Cloud Discovery](/cloud-app-security/set-up-cloud-discovery), you gain visibility into the cloud app usage, and can discover unsanctioned or Shadow IT apps.
+
+- **APIs** - For apps connected to cloud infrastructure, you can use the APIs and tools on those systems to begin to take an inventory of hosted apps. In the Azure environment:
+
+ - Use the [Get-AzureWebsite](/powershell/module/servicemanagement/azure/get-azurewebsite?view=azuresmps-4.0.0&redirectedfrom=MSDN&preserve-view=true) cmdlet to get information about Azure websites.
+
+ - Use the [Get-AzureRMWebApp](/powershell/module/azurerm.websites/get-azurermwebapp?view=azurermps-6.13.0&viewFallbackFrom=azurermps-6.2.0&preserve-view=true) cmdlet to get information about your Azure Web Apps.
+
+ - You can find all the apps running on Microsoft IIS from the Windows command line using [AppCmd.exe](/iis/get-started/getting-started-with-iis/getting-started-with-appcmdexe#working-with-sites-applications-virtual-directories-and-application-pools).
+
+ - Use [Applications](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#application-entity) and [Service Principals](/previous-versions/azure/ad/graph/api/entity-and-complex-type-reference#serviceprincipal-entity) to get information about apps and app instances in a directory in Azure AD.
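As a sketch of how such an API-driven inventory might be processed, the snippet below parses a response page shaped like a directory Applications query. The JSON is a fabricated two-app sample (the GUIDs and display names are invented), and the helper function is hypothetical:

```python
import json

# Fabricated sample shaped like an Azure AD Graph /applications response page.
sample_response = json.dumps({
    "value": [
        {"appId": "11111111-1111-1111-1111-111111111111", "displayName": "Expense App"},
        {"appId": "22222222-2222-2222-2222-222222222222", "displayName": "HR Portal"},
    ]
})

def inventory_from_response(body: str) -> list:
    """Pull (displayName, appId) pairs out of one response page,
    ready to paste into the application inventory worksheet."""
    return [(app["displayName"], app["appId"])
            for app in json.loads(body)["value"]]
```

Real responses are paginated and carry many more properties per entity; a production script would follow the paging links and keep whatever fields your worksheet needs.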
+
+### Using manual processes
+
+Once you have taken the automated approaches described above, you will have a good handle on your applications. However, you might consider doing the following to ensure you have good coverage across all user access areas:
+
+- Contact the various business owners in your organization to find the applications in use in your organization.
+
+- Run an HTTP inspection tool on your proxy server, or analyze proxy logs, to see where traffic is commonly routed.
+
+- Review weblogs from popular company portal sites to see what links users access the most.
+
+- Reach out to executives or other key business members to ensure that you have covered the business-critical apps.
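For the proxy-log analysis step above, a minimal sketch might tally the hosts users visit most often. The log format here is a simplified, hypothetical `<timestamp> <user> <url>` layout, since real proxy log formats vary by vendor:

```python
from collections import Counter
from urllib.parse import urlsplit

def top_destinations(log_lines, n=3):
    """Count requests per destination host and return the n most visited."""
    hosts = Counter()
    for line in log_lines:
        url = line.split()[-1]                 # last field is the requested URL
        hosts[urlsplit(url).hostname] += 1
    return hosts.most_common(n)

sample_logs = [
    "2021-02-09T10:00Z alice https://expenses.contoso.com/login",
    "2021-02-09T10:01Z bob https://expenses.contoso.com/report",
    "2021-02-09T10:02Z carol https://hr.contoso.com/portal",
]
```

Hosts that surface near the top are good candidates to cross-check against the app lists gathered from business owners.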
+
+### Type of apps to migrate
+
+Once you find your apps, you will identify these types of apps in your organization:
+
+- Apps that use modern authentication protocols already
+
+- Apps that use legacy authentication protocols that you choose to modernize
+
+- Apps that use legacy authentication protocols that you choose NOT to modernize
+
+- New Line of Business (LoB) apps
+
+### Apps that use modern authentication already
+
+The already modernized apps are the most likely to be moved to Azure AD. These apps already use modern authentication protocols (such as SAML or OpenID Connect) and can be reconfigured to authenticate with Azure AD.
+
+In addition to the choices in the [Azure AD app gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps), these could be apps that already exist in your organization or any third-party apps from a vendor who is not a part of the Azure AD gallery ([non-gallery applications](/azure/active-directory/manage-apps/add-non-gallery-app)).
+
+### Legacy apps that you choose to modernize
+
+For legacy apps that you want to modernize, moving to Azure AD for core authentication and authorization unlocks all the power and data-richness that the [Microsoft Graph](https://developer.microsoft.com/graph/gallery/?filterBy=Samples,SDKs) and [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence?rtc=1) have to offer.
+
+We recommend **updating the authentication stack code** for these applications from the legacy protocol (such as Windows Integrated Authentication, Kerberos Constrained Delegation, HTTP Headers-based authentication) to a modern protocol (such as SAML or OpenID Connect).
+
+### Legacy apps that you choose NOT to modernize
+
+For certain apps using legacy authentication protocols, sometimes modernizing their authentication is not the right thing to do for business reasons. These include the following types of apps:
+
+- Apps kept on-premises for compliance or control reasons.
+
+- Apps connected to an on-premises identity or federation provider that you do not want to change.
+
+- Apps developed using on-premises authentication standards that you have no plans to move
+
+Azure AD can bring great benefits to these legacy apps, as you can enable modern Azure AD security and governance features like [Multi-Factor Authentication](/azure/active-directory/authentication/concept-mfa-howitworks), [Conditional Access](/azure/active-directory/conditional-access/overview), [Identity Protection](/azure/active-directory/identity-protection/), [Delegated Application Access](/azure/active-directory/manage-apps/access-panel-manage-self-service-access), and [Access Reviews](/azure/active-directory/governance/manage-user-access-with-access-reviews#create-and-perform-an-access-review) against these apps without touching the app at all!
+
+Start by **extending these apps into the cloud** with Azure AD [Application Proxy](/azure/active-directory/manage-apps/application-proxy-configure-single-sign-on-password-vaulting) using simple means of authentication (like Password Vaulting) to get your users migrated quickly, or via our [partner integrations](https://azure.microsoft.com/services/active-directory/sso/secure-hybrid-access/) with application delivery controllers you might have deployed already.
+
+### New Line of Business (LoB) apps
+
+You usually develop LoB apps for your organization's in-house use. If you have new apps in the pipeline, we recommend using the [Microsoft Identity Platform](/azure/active-directory/develop/about-microsoft-identity-platform) to implement OpenID Connect.
+
+### Apps to deprecate
+
+Apps without clear owners and clear maintenance and monitoring present a security risk for your organization. Consider deprecating applications when:
+
+- their **functionality is highly redundant** with other systems
+
+- there is **no business owner**
+
+- there is clearly **no usage**.
+
+Of course, **do not deprecate high impact, business-critical applications**. In those cases, work with business owners to determine the right strategy.
+
+### Exit criteria
+
+You are successful in this phase with:
+
+- A good understanding of the systems in scope for your migration (that you can retire once you have moved to Azure AD)
+
+- A list of apps that includes:
+
+ - What systems those apps connect to
+
+ - From where and on what devices users access them
+
+ - Whether they will be migrated, deprecated, or connected with [Azure AD Connect](/azure/active-directory/hybrid/whatis-azure-ad-connect).
+
+> [!NOTE]
+> You can download the [Application Discovery Worksheet](https://download.microsoft.com/download/2/8/3/283F995C-5169-43A0-B81D-B0ED539FB3DD/Application%20Discovery%20worksheet.xlsx) to record the applications that you want to migrate to Azure AD authentication, and those you want to leave but manage by using [Azure AD Connect](/azure/active-directory/hybrid/whatis-azure-ad-connect).
+
+## Phase 2: Classify apps and plan pilot
+
+Classifying the migration of your apps is an important exercise. Not every app needs to be migrated and transitioned at the same time. Once you have collected information about each of the apps, you can rationalize which apps should be migrated first and which may take added time.
+
+### Classify in-scope apps
+
+One way to think about this is along the axes of business criticality, usage, and lifespan, each of which is dependent on multiple factors.
+
+### Business criticality
+
+Business criticality will take on different dimensions for each business, but the two measures that you should consider are **features and functionality** and **user profiles**. Assign apps with unique functionality a higher point value than those with redundant or obsolete functionality.
+
+![A diagram of the spectrums of Features & Functionality and User Profiles](media/migrating-application-authentication-to-azure-active-directory-3.jpg)
+
+### Usage
+
+Applications with **high usage numbers** should receive a higher value than apps with low usage. Assign a higher value to apps with external, executive, or security team users. For each app in your migration portfolio, complete these assessments.
+
+![A diagram of the spectrums of User Volume and User Breadth](media/migrating-application-authentication-to-azure-active-directory-4.jpg)
+
+Once you have determined values for business criticality and usage, you can then determine the **application lifespan**, and create a matrix of priority. See one such matrix below:
+
+![A triangle diagram showing the relationships between Usage, Expected Lifespan, and Business Criticality](media/migrating-application-authentication-to-azure-active-directory5.jpg)
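One way to turn the matrix above into a sortable backlog is a simple weighted score. The 1-5 rating scales, the weights, and the six-month cutoff below are illustrative assumptions, not guidance from the worksheet:

```python
def migration_priority(criticality: int, usage: int, lifespan_years: float) -> int:
    """Combine business criticality and usage (each rated 1-5) into one score.
    Apps expected to retire within six months score 0 regardless, since
    deprecating them is usually cheaper than migrating them."""
    if lifespan_years < 0.5:
        return 0
    return criticality * 2 + usage  # weight criticality above raw usage

# Illustrative backlog with invented app names and ratings.
backlog = {
    "HR Portal": migration_priority(criticality=5, usage=4, lifespan_years=3),
    "Legacy Wiki": migration_priority(criticality=1, usage=1, lifespan_years=0.25),
}
```

Sorting apps by this score descending (or ascending, if you prefer to start with low-risk apps) yields the prioritized buckets described in the next section.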
+
+### Prioritize apps for migration
+
+You can choose to begin the app migration with either the lowest priority apps or the highest priority apps, based on your organization's needs.
+
+In a scenario where you may not have experience using Azure AD and identity services, consider moving your **lowest priority apps** to Azure AD first. This will minimize your business impact, and you can build momentum. Once you have successfully moved these apps and have gained the stakeholders' confidence, you can continue to migrate the other apps.
+
+If there is no clear priority, you should consider moving first the apps that are in the [Azure AD Gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps) and that support multiple identity providers (ADFS or Okta), because they are easier to integrate. It is likely that these apps are the **highest-priority apps** in your organization. To help integrate your SaaS applications with Azure AD, we have developed a collection of [tutorials](/azure/active-directory/saas-apps/tutorial-list) that walk you through configuration.
+
+When you have a deadline to migrate the apps, the highest-priority app bucket will take the major workload. You can eventually select the lower-priority apps, as they will not change the cost even if you move the deadline. Even if you must renew the license, it will be for a small amount.
+
+In addition to this classification and depending on the urgency of your migration, you may also consider putting up a **migration schedule** within which app owners must engage to have their apps migrated. At the end of this process, you should have a list of all applications in prioritized buckets for migration.
+
+### Document your apps
+
+Start by gathering key details about your applications. The [Application Discovery Worksheet](https://download.microsoft.com/download/2/8/3/283F995C-5169-43A0-B81D-B0ED539FB3DD/Application%20Discovery%20worksheet.xlsx) will help you to make your migration decisions quickly and get a recommendation out to your business group in no time at all.
+
+Information that is important to making your migration decision includes:
+
+- **App name** – what is this app known as to the business?
+
+- **App type** – is it a third-party SaaS app? A custom line-of-business web app? An API?
+
+- **Business criticality** – is it high criticality? Low? Or somewhere in between?
+
+- **User access volume** – does everyone access this app, or just a few people?
+
+- **Planned lifespan** – how long will this app be around? Less than six months? More than two years?
+
+- **Current identity provider** – what is the primary IdP for this app? Or does it rely on local storage?
+
+- **Method of authentication** – does the app authenticate using open standards?
+
+- **Whether you plan to update the app code** – is the app under planned or active development?
+
+- **Whether you plan to keep the app on-premises** – do you want to keep the app in your datacenter long-term?
+
+- **Whether the app depends on other apps or APIs** – does the app currently call into other apps or APIs?
+
+- **Whether the app is in the Azure AD gallery** – is the app already integrated with the [Azure AD Gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps)?
+
+Other data that will help you later, but that you don't need for an immediate migration decision, includes:
+
+- **App URL** – where do users go to access the app?
+
+- **App description** – what is a brief description of what the app does?
+
+- **App owner** – who in the business is the main point of contact for the app?
+
+- **General comments or notes** – any other general information about the app or business ownership
+
+Once you have classified your applications and documented the details, be sure to gain business owner buy-in to your planned migration strategy.
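If you capture this inventory in code rather than a spreadsheet, the fields above map naturally onto a small record type. This is an illustrative sketch; the field names mirror the prose but are assumptions, not the schema of the downloadable worksheet:

```python
from dataclasses import dataclass

# Illustrative record for one row of an app inventory.
@dataclass
class AppRecord:
    name: str
    app_type: str                # e.g. "SaaS", "LOB web app", "API"
    business_criticality: str    # "high" | "medium" | "low"
    user_access_volume: str
    planned_lifespan_months: int
    current_idp: str
    auth_method: str             # e.g. "SAML", "OIDC", "header-based"
    in_gallery: bool = False
    notes: str = ""

    def uses_open_standards(self) -> bool:
        """Open-standard apps are usually the easiest to migrate."""
        return self.auth_method.upper() in {"SAML", "OIDC", "OAUTH2", "WS-FED"}
```

A structured record like this makes it easy to filter and sort the inventory when you build your prioritized buckets.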
+
+### Plan a pilot
+
+The app(s) you select for the pilot should represent the key identity and security requirements of your organization, and you must have clear buy-in from the application owners. Pilots typically run in a separate test environment. See [best practices for pilots](/azure/active-directory/fundamentals/active-directory-deployment-plans#best-practices-for-a-pilot) on the deployment plans page.
+
+**Don't forget about your external partners.** Make sure that they participate in migration schedules and testing. Finally, ensure they have a way to access your helpdesk in case of breaking issues.
+
+### Plan for limitations
+
+While some apps are easy to migrate, others may take longer due to multiple servers or instances. For example, a SharePoint migration may take longer because of custom sign-in pages.
+
+Many SaaS app vendors charge for changing the SSO connection. Check with them and plan for this.
+
+Azure AD also has [service limits and restrictions](/azure/active-directory/users-groups-roles/directory-service-limits-restrictions) you should be aware of.
+
+### App owner sign-off
+
+Business critical and universally used applications may need a group of pilot users to test the app in the pilot stage. Once you have tested an app in the pre-production or pilot environment, ensure that app business owners sign off on performance prior to the migration of the app and all users to production use of Azure AD for authentication.
+
+### Plan the security posture
+
+Before you initiate the migration process, take time to fully consider the security posture you wish to develop for your corporate identity system. This posture is based on gathering three valuable sets of information: **identities and data**, **who is accessing your data**, and **devices and locations**.
+
+### Identities and data
+
+Most organizations have specific requirements about identities and data protection that vary by industry segment and by job functions within organizations. Refer to [identity and device access configurations](/microsoft-365/enterprise/microsoft-365-policies-configurations) for our recommendations including a prescribed set of [conditional access policies](/azure/active-directory/active-directory-conditional-access-azure-portal) and related capabilities.
+
+You can use this information to protect access to all services integrated with Azure AD. These recommendations are aligned with Microsoft Secure Score as well as the [identity score in Azure AD](/azure/active-directory/fundamentals/identity-secure-score). The score helps you to:
+
+- Objectively measure your identity security posture
+
+- Plan identity security improvements
+
+- Review the success of your improvements
+
+This will also help you implement the [five steps to securing your identity infrastructure](/azure/security/azure-ad-secure-steps). Use the guidance as a starting point for your organization and adjust the policies to meet your organization's specific requirements.
+
+### Who is accessing your data?
+
+There are two main categories of users of your apps and resources that Azure AD supports:
+
+- **Internal:** Employees, contractors, and vendors that have accounts within your identity provider. These users might need further segmentation, with different rules for managers or leadership versus other employees.
+
+- **External:** Vendors, suppliers, distributors, or other business partners that interact with your organization in the regular course of business with [Azure AD B2B collaboration.](/azure/active-directory/b2b/what-is-b2b)
+
+You can define groups for these users and populate these groups in diverse ways. You may require that an administrator manually add members to a group, or you can enable self-service group membership. Rules that automatically add members to groups based on specified criteria can be established using [dynamic groups](/azure/active-directory/users-groups-roles/groups-dynamic-membership).
+
+External users may also refer to customers, which require special consideration. [Azure AD B2C](/azure/active-directory-b2c/active-directory-b2c-overview), a separate product, supports customer authentication; however, it is outside the scope of this paper.
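The idea behind dynamic groups, membership computed from user attributes, can be illustrated with a toy evaluator. Azure AD's real rule syntax is richer (for example, `user.department -eq "Sales"`); the sketch below is an assumption-laden simplification that only handles exact attribute matches:

```python
# Toy sketch of attribute-based group membership, in the spirit of
# Azure AD dynamic groups. Not the real rule engine: it checks only
# exact equality of attribute values.

def members_of(rule: dict, users: list[dict]) -> list[str]:
    """Return userPrincipalNames whose attributes match every rule clause."""
    return [
        u["userPrincipalName"]
        for u in users
        if all(u.get(attr) == value for attr, value in rule.items())
    ]
```

The point is that membership is derived from attributes rather than maintained by hand, so it stays current as user records change.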
+
+### Device/location used to access data
+
+The device and location that a user uses to access an app are also important. Devices physically connected to your corporate network are more secure. Connections from outside the network over VPN may need scrutiny.
+
+![A diagram showing the relationship between User Location and Data Access](media/migrating-application-authentication-to-azure-active-directory-6.jpg)
+
+With these aspects of resource, user, and device in mind, you may choose to use [Azure AD Conditional Access](/azure/active-directory/active-directory-conditional-access-azure-portal) capabilities. Conditional access goes beyond user permissions: it is based on a combination of factors, such as the identity of a user or group, the network that the user is connected to, the device and application they are using, and the type of data they are trying to access. The access granted to the user adapts to this broader set of conditions.
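A conditional access decision combines several such signals into a single outcome. The sketch below is a simplified illustration of that idea, with hypothetical group names and signals, not Azure AD's actual policy model:

```python
# Minimal sketch of a conditional-access style decision: combine user,
# device, and location signals into grant / block / require-mfa.
# The policy shape and group names are illustrative assumptions.

def evaluate_access(user_groups: set[str], device_compliant: bool,
                    on_corporate_network: bool) -> str:
    """Decide access for one sign-in attempt from its signals."""
    if "blocked-users" in user_groups:
        return "block"
    # Off-network access from an unmanaged device gets stepped up.
    if not on_corporate_network and not device_compliant:
        return "require-mfa"
    return "grant"
```

Real policies add more dimensions (application, sign-in risk, data sensitivity), but the shape is the same: the access granted adapts to the full set of conditions rather than to user permissions alone.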
+
+### Exit criteria
+
+You are successful in this phase when you:
+
+- Know your apps
+ - Have fully documented the apps you intend to migrate
+ - Have prioritized apps based on business criticality, usage volume, and lifespan
+
+- Have selected apps that represent your requirements for a pilot
+
+- Have business-owner buy-in to your prioritization and strategy
+
+- Understand your security posture needs and how to implement them
+
+## Phase 3: Plan migration and testing
+
+Once you have gained business buy-in, the next step is to start migrating these apps to Azure AD authentication.
+
+### Migration tools and guidance
+
+Use the tools and guidance below to follow the precise steps needed to migrate your applications to Azure AD:
+
+- **General migration guidance** – Use the whitepaper, tools, email templates, and application questionnaire in the [Azure AD apps migration toolkit](https://aka.ms/migrateapps) to discover, classify, and migrate your apps.
+
+- **SaaS applications** – See our list of [hundreds of SaaS app tutorials](/azure/active-directory/active-directory-saas-tutorial-list) and the complete [Azure AD SSO deployment plan](https://aka.ms/ssodeploymentplan) to walk through the end-to-end process.
+
+- **Applications running on-premises** – Learn all [about the Azure AD Application Proxy](/azure/active-directory/manage-apps/application-proxy) and use the complete [Azure AD Application Proxy deployment plan](https://aka.ms/AppProxyDPDownload) to get going quickly.
+
+- **Apps you're developing** – Read our step-by-step [integration](/azure/active-directory/develop/active-directory-integrating-applications) and [registration](/azure/active-directory/develop/active-directory-v2-app-registration) guidance.
+
+After migration, you may choose to send communication informing the users of the successful deployment and remind them of any new steps that they need to take.
+
+### Plan testing
+
+During the migration process, your app may already have a test environment used during regular deployments. You can continue to use this environment for migration testing. If a test environment is not available, you may be able to set one up using Azure App Service or Azure Virtual Machines, depending on the architecture of the application. You may choose to set up a separate test Azure AD tenant to use as you develop your app configurations. This tenant starts in a clean state and is not configured to sync with any system.
+
+You can test each app by signing in with a test user and making sure all functionality is the same as before the migration. If you determine during testing that users will need to update their [MFA](/azure/active-directory/authentication/howto-mfa-userstates) or [SSPR](/azure/active-directory/authentication/quickstart-sspr) settings, or you are adding this functionality during the migration, be sure to add that to your end-user communication plan. See the [MFA](https://aka.ms/mfatemplates) and [SSPR](https://aka.ms/ssprtemplates) end-user communication templates.
+
+Once you have migrated the apps, go to the [Azure portal](https://aad.portal.azure.com/) to test whether the migration was a success. Follow the instructions below:
+
+- Select **Enterprise Applications &gt; All applications** and find your app from the list.
+
+- Select **Manage &gt; Users and groups** to assign at least one user or group to the app.
+
+- Select **Manage &gt; Conditional Access**. Review your list of policies and ensure that you are not blocking access to the application with a [conditional access policy](/azure/active-directory/active-directory-conditional-access-azure-portal).
+
+Depending on how you configure your app, verify that SSO works properly.
+
+| Authentication type | Testing |
+| | |
+| **OAuth / OpenID Connect** | Select **Enterprise applications &gt; Permissions** and ensure you have consented to the application being used in your organization in the user settings for your app. |
+| **SAML-based SSO** | Use the [Test SAML Settings](/azure/active-directory/develop/howto-v1-debug-saml-sso-issues) button found under **Single Sign-On.** |
+| **Password-Based SSO** | Download and install the [MyApps Secure Sign-in Extension](/azure/active-directory/user-help/active-directory-saas-access-panel-introduction#my-apps-secure-sign-in-extension). This extension helps you start any of your organization's cloud apps that require you to use an SSO process. |
+| **[Application Proxy](/azure/active-directory/manage-apps/application-proxy)** | Ensure your connector is running and assigned to your application. Visit the [Application Proxy troubleshooting guide](/azure/active-directory/manage-apps/application-proxy-troubleshoot) for further assistance. |
+
+### Troubleshoot
+
+If you run into problems, check out our [apps troubleshooting guide](https://aka.ms/troubleshoot-apps) to get help. See also [Problems signing in to a custom-developed application](/azure/active-directory/manage-apps/application-sign-in-problem-custom-dev).
+
+### Plan rollback
+
+If your migration fails, the best strategy is to roll back and test. Here are the steps you can take to mitigate migration issues:
+
+- **Take screenshots** of the existing configuration of your app so you can refer back to them if you must reconfigure the app.
+
+- You might also consider **providing links to the legacy authentication**, in case of issues with cloud authentication.
+
+- Before you complete your migration, **do not change your existing configuration** with the earlier identity provider.
+
+- Begin by migrating **the apps that support multiple IdPs**. If something goes wrong, you can always change to the preferred IdP's configuration.
+
+- Ensure that your app experience has a **Feedback button** or pointers to your **helpdesk** in case of issues.
+
+### Exit criteria
+
+You are successful in this phase when you have:
+
+- Determined how each app will be migrated
+
+- Reviewed the migration tools
+
+- Planned your testing including test environments and groups
+
+- Planned rollback
+
+## Phase 4: Plan management and insights
+
+Once apps are migrated, you must ensure that:
+
+- Users can securely access and manage their apps
+
+- You can gain the appropriate insights into usage and app health
+
+We recommend taking the following actions as appropriate to your organization.
+
+### Manage your users' app access
+
+Once you have migrated the apps, you can enrich your users' experience in many ways.
+
+**Make apps discoverable**
+
+**Point your users** to the [MyApps](/azure/active-directory/user-help/my-apps-portal-end-user-access#my-apps-secure-sign-in-extension) portal experience. Here, they can access all cloud-based apps, apps you make available by using [Azure AD Connect](/azure/active-directory/hybrid/whatis-azure-ad-connect), and apps using [Application Proxy](/azure/active-directory/manage-apps/application-proxy), provided they have permissions to access those apps.
+
+You can guide your users on how to discover their apps:
+
+- Use the [Existing Single Sign-on](/azure/active-directory/active-directory-saas-custom-apps#existing-single-sign-on) feature to **link your users to any app**
+
+- Enable [Self-Service Application Access](/azure/active-directory/application-access-self-service-how-to) to an app and **let users add apps that you curate**
+
+- [Hide applications from end-users](/azure/active-directory/manage-apps/hide-application-from-user-portal) (default Microsoft apps or other apps) to **make the apps they do need more discoverable**
+
+### Make apps accessible
+
+**Let users access apps from their mobile devices.** Users can access the MyApps portal with an Intune-managed browser on their [iOS](/azure/active-directory/manage-apps/hide-application-from-user-portal) (7.0 or later) or [Android](/azure/active-directory/manage-apps/hide-application-from-user-portal) devices.
+
+Users can download an **Intune-managed browser**:
+
+- **For Android devices**, from the [Google Play Store](https://play.google.com/store/apps/details?id=com.microsoft.intune.mam.managedbrowser)
+
+- **For Apple devices**, from the [Apple App Store](https://itunes.apple.com/us/app/microsoft-intune-managed-browser/id943264951?mt=8), or they can download the [My Apps mobile app for iOS](https://apps.apple.com/us/app/my-apps-azure-active-directory/id824048653)
+
+**Let users open their apps from a browser extension.**
+
+Users can [download the MyApps Secure Sign-in Extension](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) in [Chrome](https://chrome.google.com/webstore/detail/my-apps-secure-sign-in-ex/ggjhpefgjjfobnfoldnjipclpcfbgbhl), [Firefox](https://addons.mozilla.org/firefox/addon/access-panel-extension/), or [Microsoft Edge](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) and launch apps right from their browser bar to:
+
+- **Search for their apps and have their most-recently-used apps appear**
+
+- **Automatically convert internal URLs** that you have configured in [Application Proxy](/azure/active-directory/manage-apps/application-proxy) to the appropriate external URLs. Your users can now work with the links they are familiar with no matter where they are.
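The internal-to-external URL translation the extension performs can be pictured as a simple hostname mapping. The sketch below is illustrative; the hostnames are hypothetical examples, not real published endpoints:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical mapping from on-premises hostnames to the external
# hostnames published through Application Proxy.
URL_MAP = {
    "expenses.corp.local": "expenses-contoso.msappproxy.net",
}

def to_external(url: str) -> str:
    """Rewrite an internal URL to its published external form, if mapped."""
    parts = urlsplit(url)
    external_host = URL_MAP.get(parts.hostname)
    if external_host is None:
        return url  # not an Application Proxy app; leave untouched
    # Published endpoints are served over HTTPS.
    return urlunsplit(("https", external_host, parts.path, parts.query, parts.fragment))
```

This is why users can keep their familiar internal links: the path and query string survive the rewrite, and only the hostname (and scheme) change.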
+
+**Let users open their apps from Office.com.**
+
+Users can go to [Office.com](https://www.office.com/) to **search for their apps and have their most-recently-used apps appear** for them right from where they do work.
+
+### Secure app access
+
+Azure AD provides a centralized access location to manage your migrated apps. Go to the [Azure portal](https://portal.azure.com/) and enable the following capabilities:
+
+- **Secure user access to apps.** Enable [Conditional Access policies](/azure/active-directory/active-directory-conditional-access-azure-portal) or [Identity Protection](/azure/active-directory/active-directory-identityprotection) to secure user access to applications based on device state, location, and more.
+
+- **Automatic provisioning.** Set up [automatic provisioning of users](/azure/active-directory/manage-apps/user-provisioning) with a variety of third-party SaaS apps that users need to access. In addition to creating user identities, it includes the maintenance and removal of user identities as status or roles change.
+
+- **Delegate user access management.** As appropriate, enable self-service application access to your apps and *assign a business approver to approve access to those apps*. Use [Self-Service Group Management](/azure/active-directory/users-groups-roles/groups-self-service-management) for groups assigned to collections of apps.
+
+- **Delegate admin access.** Use **Directory roles** to assign an admin role (such as Application Administrator, Cloud Application Administrator, or Application Developer) to your users.
+
+### Audit and gain insights of your apps
+
+You can also use the [Azure portal](https://portal.azure.com/) to audit all your apps from a centralized location:
+
+- **Audit your app** using **Enterprise Applications, Audit** or access the same information from the [Azure AD Reporting API](/azure/active-directory/active-directory-reporting-api-getting-started-azure-portal) to integrate into your favorite tools.
+
+- **View the permissions for an app** using **Enterprise Applications, Permissions** for apps using OAuth / OpenID Connect.
+
+- **Get sign-in insights** using **Enterprise Applications, Sign-Ins**. Access the same information from the [Azure AD Reporting API](/azure/active-directory/active-directory-reporting-api-getting-started-azure-portal).
+
+- **Visualize your app's usage** with the [Azure AD Power BI content pack](/azure/active-directory/active-directory-reporting-power-bi-content-pack-how-to).
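The sign-in data behind these reports is exposed through the Microsoft Graph `auditLogs/signIns` endpoint. The helper below only builds a filtered query URL for one application and makes no network call; the app name is a hypothetical example, and in a real script the request would also need an OAuth bearer token:

```python
from urllib.parse import quote

# Microsoft Graph endpoint for sign-in logs.
GRAPH_SIGNINS = "https://graph.microsoft.com/v1.0/auditLogs/signIns"

def signins_url(app_display_name: str, top: int = 50) -> str:
    """Build a Graph query for recent sign-ins to one application."""
    # OData filter on the appDisplayName property of the signIn resource.
    filt = f"appDisplayName eq '{app_display_name}'"
    return f"{GRAPH_SIGNINS}?$filter={quote(filt)}&$top={top}"
```

The same URL pattern feeds whatever tooling you integrate with, whether that is a SIEM, a notebook, or a dashboard.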
+
+### Exit criteria
+
+You are successful in this phase when you:
+
+- Provide secure app access to your users
+
+- Can audit the migrated apps and gain insights into them
+
+### Do even more with deployment plans
+
+Deployment plans walk you through the business value, planning, implementation steps, and management of Azure AD solutions, including app migration scenarios. They bring together everything that you need to start deploying and getting value out of Azure AD capabilities. The deployment guides include content such as Microsoft recommended best practices, end-user communications, planning guides, implementation steps, test cases, and more.
+
+Many [deployment plans](https://aka.ms/deploymentplans) are available for your use, and we're always making more!
+
+### Contact support
+
+Visit the following support links to create or track support tickets and monitor service health.
+
+- **Azure Support:** Depending on your Enterprise Agreement with Microsoft, you can call [Microsoft Support](https://azure.microsoft.com/support) and open a ticket for any Azure Identity deployment issue.
+
+- **FastTrack**: If you have purchased Enterprise Mobility and Security (EMS) or Azure AD Premium licenses, you are eligible to receive deployment assistance from the [FastTrack program](/enterprise-mobility-security/solutions/enterprise-mobility-fasttrack-program).
+
+- **Engage the Product Engineering team:** If you are working on a major customer deployment with millions of users, you are entitled to support from the Microsoft account team or your Cloud Solutions Architect. Based on the project's deployment complexity, you can work directly with the [Azure Identity Product Engineering team](https://aad.portal.azure.com/#blade/Microsoft_Azure_Marketplace/MarketplaceOffersBlade/selectedMenuItemId/solutionProviders).
+
+- **Azure AD Identity blog:** Subscribe to the [Azure AD Identity blog](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/bg-p/Identity) to stay up to date with all the latest product announcements, deep dives, and roadmap information provided directly by the Identity engineering team.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/known-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/known-issues.md
@@ -13,7 +13,7 @@ ms.devlang:
Previously updated : 12/01/2020 Last updated : 02/04/2021
@@ -45,6 +45,10 @@ Managed Identities for Azure resources have only one of those components: A Serv
Managed identities don't have an application object in the directory, which is what is commonly used to grant app permissions for MS graph. Instead, MS graph permissions for managed identities need to be granted directly to the Service Principal.
+### Can the same managed identity be used across multiple regions?
+
+In short, yes, you can use user-assigned managed identities in more than one Azure region. The longer answer is that while user-assigned managed identities are created as regional resources, the associated [service principal](../develop/app-objects-and-service-principals.md#service-principal-object) created in Azure AD is available globally. The service principal can be used from any Azure region, and its availability depends on the availability of Azure AD. For example, if you created a user-assigned managed identity in the South Central US region and that region became unavailable, the issue would only affect [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) activities on the managed identity itself. Resources already configured to use the managed identity would not be affected.
+ ### Does managed identities for Azure resources work with Azure Cloud Services? No, there are no plans to support managed identities for Azure resources in Azure Cloud Services.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
@@ -351,6 +351,17 @@ Refer to the following list to configure managed identity for Azure SignalR Serv
- [Azure Resource Manager template](../../azure-signalr/howto-use-managed-identity.md)
+### Azure Resource Mover
+
+| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
+| | :-: | :-: | :-: | :-: |
+| System assigned | Available in the regions where Azure Resource Mover service is available | Not available | Not available | Not available |
+| User assigned | Not available | Not available | Not available | Not available |
+
+Refer to the following document to use Azure Resource Mover:
+
+- [Azure Resource Mover](../../resource-mover/overview.md)
+ ## Azure services that support Azure AD authentication The following services support Azure AD authentication, and have been tested with client services that use managed identities for Azure resources.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/groups-concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
@@ -24,7 +24,7 @@ Consider this example: Contoso has hired people across geographies to manage and
## How this feature works
-Create a new Microsoft 365 or security group with the ΓÇÿisAssignableToRoleΓÇÖ property set to ΓÇÿtrueΓÇÖ. You could also enable this property when creating a group in the Azure portal by turning on **Azure AD roles can be assigned to the group**. Either way, you can then assign the group to one or more Azure AD roles in the same way as you assign roles to users. A maximum of 200 role-assignable groups can be created in a single Azure AD organization (tenant).
+Create a new Microsoft 365 or security group with the ΓÇÿisAssignableToRoleΓÇÖ property set to ΓÇÿtrueΓÇÖ. You could also enable this property when creating a group in the Azure portal by turning on **Azure AD roles can be assigned to the group**. Either way, you can then assign the group to one or more Azure AD roles in the same way as you assign roles to users. A maximum of 250 role-assignable groups can be created in a single Azure AD organization (tenant).
If you do not want members of the group to have standing access to the role, you can use Azure AD Privileged Identity Management. Assign a group as an eligible member of an Azure AD role. Each member of the group is then eligible to have their assignment activated for the role that the group is assigned to. They can then activate their role assignment for a fixed time duration.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
@@ -85,6 +85,10 @@ Users in this role can create attack payloads but not actually launch or schedul
Users in this role can create and manage all aspects of attack simulation creation, launch/scheduling of a simulation, and the review of simulation results. Members of this role have this access for all simulations in the tenant.
+### [Azure AD Joined Device Local Administrator](#azure-ad-joined-device-local-administrator-permissions)/Device Administrators
+
+This role is available for assignment only as an additional local administrator in [Device settings](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/DeviceSettings/menuId/). Users with this role become local machine administrators on all Windows 10 devices that are joined to Azure Active Directory. They do not have the ability to manage device objects in Azure Active Directory.
+ ### [Azure DevOps Administrator](#azure-devops-administrator-permissions) Users with this role can manage the Azure DevOps policy to restrict new Azure DevOps organization creation to a set of configurable users or groups. Users in this role can manage this policy through any Azure DevOps organization that is backed by the company's Azure AD organization. This role grants no other Azure DevOps-specific permissions (for example, Project Collection Administrators) inside any of the Azure DevOps organizations backed by the company's Azure AD organization.
@@ -159,13 +163,8 @@ Manages [Customer Lockbox requests](/office365/admin/manage/customer-lockbox-req
### [Desktop Analytics Administrator](#desktop-analytics-administrator-permissions) - Users in this role can manage the Desktop Analytics and Office Customization & Policy services. For Desktop Analytics, this includes the ability to view asset inventory, create deployment plans, view deployment and health status. For Office Customization & Policy service, this role enables users to manage Office policies.
-### [Device Administrators](#device-administrators-permissions)
-
-This role is available for assignment only as an additional local administrator in [Device settings](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/DeviceSettings/menuId/). Users with this role become local machine administrators on all Windows 10 devices that are joined to Azure Active Directory. They do not have the ability to manage devices objects in Azure Active Directory.
- ### [Directory Readers](#directory-readers-permissions) Users in this role can read basic directory information. This role should be used for:
@@ -204,7 +203,7 @@ Users with this role can create and manage user flows (also called "built-in" po
Users with this role add or delete custom attributes available to all user flows in the Azure AD organization. As such, users with this role can change or add new elements to the end-user schema and impact the behavior of all user flows and indirectly result in changes to what data may be asked of end users and ultimately sent as claims to applications. This role cannot edit user flows.
-### [External IDentity Provider Administrator](#external-identity-provider-administrator-permissions)
+### [External Identity Provider Administrator](#external-identity-provider-administrator-permissions)
This administrator manages federation between Azure AD organizations and external identity providers. With this role, users can add new identity providers and configure all available settings (e.g. authentication path, service ID, assigned key containers). This user can enable the Azure AD organization to trust authentications from external identity providers. The resulting impact on end-user experiences depends on the type of organization:
@@ -444,6 +443,10 @@ Users with this role have global permissions within Microsoft Skype for Business
> [!NOTE] > In the Microsoft Graph API and Azure AD PowerShell, this role is identified as "Lync Service Administrator." It is "Skype for Business Administrator" in the [Azure portal](https://portal.azure.com/).
+### [Teams Administrator](#teams-administrator-permissions)
+
+Users in this role can manage all aspects of the Microsoft Teams workload via the Microsoft Teams & Skype for Business admin center and the respective PowerShell modules. This includes, among other areas, all management tools related to telephony, messaging, meetings, and the teams themselves. This role additionally grants the ability to create and manage all Microsoft 365 groups, manage support tickets, and monitor service health.
+ ### [Teams Communications Administrator](#teams-communications-administrator-permissions) Users in this role can manage aspects of the Microsoft Teams workload related to voice & telephony. This includes the management tools for telephone number assignment, voice and meeting policies, and full access to the call analytics toolset.
@@ -460,10 +463,6 @@ Users in this role can troubleshoot communication issues within Microsoft Teams
Users with this role can manage [Teams-certified devices](https://www.microsoft.com/microsoft-365/microsoft-teams/across-devices/devices) from the Teams Admin Center. This role allows viewing all devices at single glance, with ability to search and filter devices. The user can check details of each device including logged-in account, make and model of the device. The user can change the settings on the device and update the software versions. This role does not grant permissions to check Teams activity and call quality of the device.
-### [Teams Service Administrator](#teams-service-administrator-permissions)
-
-Users in this role can manage all aspects of the Microsoft Teams workload via the Microsoft Teams & Skype for Business admin center and the respective PowerShell modules. This includes, among other areas, all management tools related to telephony, messaging, meetings, and the teams themselves. This role additionally grants the ability to create and manage all Microsoft 365 groups, manage support tickets, and monitor service health.
- ### [Usage Summary Reports Reader](#usage-summary-reports-reader-permissions) Users with this role can access tenant level aggregated data and associated insights in Microsoft 365 Admin Center for Usage and Productivity Score but cannot access any user level details or insights. In Microsoft 365 Admin Center for the two reports, we differentiate between tenant level aggregated data and user level details. This role gives an extra layer of protection on individual user identifiable data, which was requested by both customers and legal teams.
@@ -598,6 +597,16 @@ Can create and manage all aspects of attack simulation campaigns.
> | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training. |
> | microsoft.office365.protectionCenter/attackSimulator/simulation/allProperties/allTasks | Create and manage attack simulation templates in Attack Simulator. |
+### Azure AD Joined Device Local Administrator permissions
+
+Users assigned to this role are added to the local administrators group on Azure AD-joined devices.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | ---- | ---- |
+> | microsoft.directory/groupSettings/basic/read | Read basic properties on groupSettings in Azure Active Directory. |
+> | microsoft.directory/groupSettingTemplates/basic/read | Read basic properties on groupSettingTemplates in Azure Active Directory. |
+
### Azure DevOps Administrator permissions

Can manage Azure DevOps organization policy and settings.
@@ -907,16 +916,6 @@ Can manage the Desktop Analytics and Office Customization & Policy services. For
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
-### Device Administrators permissions
-
-Users assigned to this role are added to the local administrators group on Azure AD-joined devices.
-
-> [!div class="mx-tableFixed"]
-> | Actions | Description |
-> | | |
-> | microsoft.directory/groupSettings/basic/read | Read basic properties on groupSettings in Azure Active Directory. |
-> | microsoft.directory/groupSettingTemplates/basic/read | Read basic properties on groupSettingTemplates in Azure Active Directory. |
-
### Directory Readers permissions

Can read basic directory information. For granting access to applications, not intended for users.
@@ -1842,6 +1841,36 @@ Can manage all aspects of the SharePoint service.
> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports. |
> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
+### Teams Administrator permissions
+
+Can manage the Microsoft Teams service.
+
+> [!NOTE]
+> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
++
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | ---- | ---- |
+> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
+> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
+> | microsoft.directory/groups/hiddenMembers/read | Read groups.hiddenMembers property in Azure Active Directory. |
+> | microsoft.directory/groups/unified/appRoleAssignments/update | Update groups.unified property in Azure Active Directory. |
+> | microsoft.directory/groups.unified/basic/update | Update basic properties of Microsoft 365 groups. |
+> | microsoft.directory/groups.unified/create | Create Microsoft 365 groups. |
+> | microsoft.directory/groups.unified/delete | Delete Microsoft 365 groups. |
+> | microsoft.directory/groups.unified/members/update | Update membership of Microsoft 365 groups. |
+> | microsoft.directory/groups.unified/owners/update | Update ownership of Microsoft 365 groups. |
+> | microsoft.directory/groups.unified/restore | Restore Microsoft 365 groups |
+> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant consent to delegated permissions on behalf of a group |
+> | microsoft.office365.network/performance/allProperties/read | Read network performance pages in M365 Admin Center. |
+> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
+> | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online |
+> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
+> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports. |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
+> | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams. |
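The action strings in the permissions tables above follow a consistent `namespace/entity/…/operation` pattern. As a rough illustration (a hypothetical helper, not part of any Microsoft API), they can be split programmatically:

```python
def parse_action(action: str) -> dict:
    """Split a role-permission action string such as
    'microsoft.directory/groups.unified/members/update' into its parts."""
    parts = action.split("/")
    return {
        "namespace": parts[0],             # e.g. microsoft.directory
        "entity": parts[1],                # e.g. groups.unified
        "operation": "/".join(parts[2:]),  # e.g. members/update
    }

# Examples taken from the Teams Administrator table above.
print(parse_action("microsoft.teams/allEntities/allProperties/allTasks"))
print(parse_action("microsoft.directory/groups.unified/members/update"))
```

Grouping actions by namespace this way can be handy when comparing which workloads (directory, Office 365, Teams) a role touches.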
+
### Teams Communications Administrator permissions

Can manage calling and meetings features within the Microsoft Teams service.
@@ -1909,36 +1938,6 @@ Can perform management related tasks on Teams certified devices.
> | microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in microsoft.office365.webPortal. |
> | microsoft.teams/devices/basic/read | Manage all aspects of Teams-certified devices including configuration policies. |
-### Teams Service Administrator permissions
-
-Can manage the Microsoft Teams service.
-
-> [!NOTE]
-> This role has additional permissions outside of Azure Active Directory. For more information, see role description above.
--
-> [!div class="mx-tableFixed"]
-> | Actions | Description |
-> | | |
-> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health. |
-> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets for directory-level services. |
-> | microsoft.directory/groups/hiddenMembers/read | Read groups.hiddenMembers property in Azure Active Directory. |
-> | microsoft.directory/groups/unified/appRoleAssignments/update | Update groups.unified property in Azure Active Directory. |
-> | microsoft.directory/groups.unified/basic/update | Update basic properties of Microsoft 365 groups. |
-> | microsoft.directory/groups.unified/create | Create Microsoft 365 groups. |
-> | microsoft.directory/groups.unified/delete | Delete Microsoft 365 groups. |
-> | microsoft.directory/groups.unified/members/update | Update membership of Microsoft 365 groups. |
-> | microsoft.directory/groups.unified/owners/update | Update ownership of Microsoft 365 groups. |
-> | microsoft.directory/groups.unified/restore | Restore Microsoft 365 groups |
-> | microsoft.directory/servicePrincipals/managePermissionGrantsForGroup.microsoft-all-application-permissions | Grant consent to delegated permissions on behalf of a group |
-> | microsoft.office365.network/performance/allProperties/read | Read network performance pages in M365 Admin Center. |
-> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Microsoft 365 Service Health. |
-> | microsoft.office365.skypeForBusiness/allEntities/allTasks | Manage all aspects of Skype for Business Online |
-> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Office 365 support tickets. |
-> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports. |
-> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in microsoft.office365.webPortal. |
-> | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams. |
-
### Usage Summary Reports Reader permissions

Can see only tenant level aggregates in M365 Usage Analytics and Productivity Score.
@@ -1998,7 +1997,7 @@ Application Developer | Application developer | CF1C38E5-3621-4004-A7CB-879624DC
Authentication Administrator | Authentication administrator | c4e39bd9-1100-46d3-8c65-fb160da0071f
Attack Payload Author | Attack payload author | 9c6df0f2-1e7c-4dc3-b195-66dfbd24aa8f
Attack Simulation Administrator | Attack simulation administrator | c430b396-e693-46cc-96f3-db01bf8bb62a
-Azure AD Joined Device Local Administrator | Azure AD Joined Device Local Administrator | 9f06204d-73c1-4d4c-880a-6edb90606fd8
+Azure AD Joined Device Local Administrator | Azure AD joined device local administrator | 9f06204d-73c1-4d4c-880a-6edb90606fd8
Azure DevOps Administrator | Azure DevOps administrator | e3973bdf-4987-49ae-837a-ba8e231c7286
Azure Information Protection Administrator | Azure Information Protection administrator | 7495fdc4-34c4-4d15-a289-98788ce399fd
B2C IEF Keyset Administrator | B2C IEF Keyset Administrator | aaf43236-0c0d-4d5f-883a-6955382ac081
@@ -2056,11 +2055,11 @@ Security Reader | Security reader | 5d6b6bb7-de71-4623-b4af-96380a352509
Service Support Administrator | Service support administrator | f023fd81-a637-4b56-95fd-791ac0226033
SharePoint Administrator | SharePoint administrator | f28a1f50-f6e7-4571-818b-6a12f2af6b6c
Skype for Business Administrator | Skype for Business administrator | 75941009-915a-4869-abe7-691bff18279e
+Teams Administrator | Teams administrator | 69091246-20e8-4a56-aa4d-066075b2a7a8
Teams Communications Administrator | Teams Communications Administrator | baf37b3a-610e-45da-9e62-d9d1e5e8914b
Teams Communications Support Engineer | Teams Communications Support Engineer | f70938a0-fc10-4177-9e90-2178f8765737
Teams Communications Support Specialist | Teams Communications Support Specialist | fcf91098-03e3-41a9-b5ba-6f0ec8188a12
Teams Devices Administrator | Teams Devices Administrator | 3d762c5a-1b6c-493f-843e-55a3b42923d4
-Teams Service Administrator | Teams Service Administrator | 69091246-20e8-4a56-aa4d-066075b2a7a8
Usage Summary Reports Reader | Usage summary reports reader | 75934031-6c7e-415a-99d7-48dbd49e875e
User | Not shown because it can't be used | a0b1b346-4d3e-4e8b-98f8-753987be4970
User Administrator | User administrator | fe930be7-5e62-47db-91af-98c3a49a38b1
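The template IDs listed above are well-known GUIDs that are identical in every Azure AD tenant, so automation can safely hard-code them. A minimal sketch (IDs copied from the table above; the helper name is illustrative):

```python
# Well-known directory role template IDs (the same in every tenant),
# copied from the table above.
ROLE_TEMPLATE_IDS = {
    "Teams Administrator": "69091246-20e8-4a56-aa4d-066075b2a7a8",
    "Azure AD Joined Device Local Administrator": "9f06204d-73c1-4d4c-880a-6edb90606fd8",
    "Usage Summary Reports Reader": "75934031-6c7e-415a-99d7-48dbd49e875e",
    "User Administrator": "fe930be7-5e62-47db-91af-98c3a49a38b1",
}

def template_id(role_name: str) -> str:
    """Look up the well-known template ID for a directory role name."""
    return ROLE_TEMPLATE_IDS[role_name]

print(template_id("Teams Administrator"))
```

Note that the Teams Administrator entry reuses the GUID formerly listed for Teams Service Administrator: the role was renamed, not replaced, so scripts keyed on the GUID keep working.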
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/adobeexperiencemanager-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/adobeexperiencemanager-tutorial.md
@@ -9,31 +9,27 @@
Previously updated : 01/17/2019 Last updated : 01/17/2021

# Tutorial: Azure Active Directory integration with Adobe Experience Manager
-In this tutorial, you learn how to integrate Adobe Experience Manager with Azure Active Directory (Azure AD).
-Integrating Adobe Experience Manager with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Adobe Experience Manager with Azure Active Directory (Azure AD). When you integrate Adobe Experience Manager with Azure AD, you can:
-* You can control in Azure AD who has access to Adobe Experience Manager.
-* You can enable your users to be automatically signed-in to Adobe Experience Manager (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Adobe Experience Manager.
+* Enable your users to be automatically signed-in to Adobe Experience Manager with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Adobe Experience Manager, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Adobe Experience Manager single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Adobe Experience Manager single sign-on (SSO) enabled subscription.
## Scenario description
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
+In this tutorial, you configure and test Azure AD SSO in a test environment.
* Adobe Experience Manager supports **SP and IDP** initiated SSO
@@ -43,59 +39,38 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of Adobe Experience Manager into Azure AD, you need to add Adobe Experience Manager from the gallery to your list of managed SaaS apps.
-**To add Adobe Experience Manager from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Adobe Experience Manager**, select **Adobe Experience Manager** from result panel then click **Add** button to add the application.
-
- ![Adobe Experience Manager in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with [Application name] based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in [Application name] needs to be established.
-
-To configure and test Azure AD single sign-on with [Application name], you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Adobe Experience Manager Single Sign-On](#configure-adobe-experience-manager-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Adobe Experience Manager test user](#create-adobe-experience-manager-test-user)** - to have a counterpart of Britta Simon in Adobe Experience Manager that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Adobe Experience Manager** in the search box.
+1. Select **Adobe Experience Manager** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-### Configure Azure AD single sign-on
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Adobe Experience Manager
-To configure Azure AD single sign-on with [Application name], perform the following steps:
+Configure and test Azure AD SSO with Adobe Experience Manager using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Adobe Experience Manager.
-1. In the [Azure portal](https://portal.azure.com/), on the **Adobe Experience Manager** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Adobe Experience Manager, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure Adobe Experience Manager SSO](#configure-adobe-experience-manager-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create Adobe Experience Manager test user](#create-adobe-experience-manager-test-user)** - to have a counterpart of B.Simon in Adobe Experience Manager that is linked to the Azure AD representation of the user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Adobe Experience Manager** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section, If you wish to configure the application in **IDP** initiated mode, perform the following steps:
-
- ![Screenshot that shows Basic SAML Configuration section and highlights the Identifier and Reply URL text boxes.](common/idp-intiated.png)
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
a. In the **Identifier** text box, type a unique value that you define on your AEM server as well.
@@ -107,8 +82,6 @@ To configure Azure AD single sign-on with [Application name], perform the follow
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Adobe Experience Manager Domain and URLs single sign-on information](common/metadata-upload-additional-signon.png)
-
   In the **Sign-on URL** text box, type your Adobe Experience Manager server URL.

6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
@@ -119,59 +92,77 @@ To configure Azure AD single sign-on with [Application name], perform the follow
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
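The portal steps above map to a single Microsoft Graph request (`POST https://graph.microsoft.com/v1.0/users`). As a hedged sketch, the snippet below only builds the request body locally; actually sending it requires an authenticated Graph client, which is out of scope here, and the password value is a placeholder:

```python
import json

def build_test_user_payload(display_name: str, upn: str, password: str) -> dict:
    """Build the JSON body for creating a user via Microsoft Graph,
    mirroring the portal steps above (Name, User name, password)."""
    return {
        "accountEnabled": True,
        "displayName": display_name,
        "mailNickname": upn.split("@")[0],
        "userPrincipalName": upn,
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": password,  # placeholder; use a generated secret
        },
    }

payload = build_test_user_payload("B.Simon", "B.Simon@contoso.com", "<placeholder>")
print(json.dumps(payload, indent=2))
```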
+### Assign the Azure AD test user
- b. Azure Ad Identifier
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Adobe Experience Manager.
- c. Logout URL
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Adobe Experience Manager**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure Adobe Experience Manager Single Sign-On
+## Configure Adobe Experience Manager SSO
1. In another browser window, open the **Adobe Experience Manager** admin portal.

2. Select **Settings** > **Security** > **Users**.
- ![Screenshot that shows the Users tile in the Adobe Experience Manager.](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_user.png)
+ ![Screenshot that shows the Users tile in the Adobe Experience Manager.](./media/adobe-experience-manager-tutorial/user-1.png)
3. Select **Administrator** or any other relevant user.
- ![Screenshot that highlights the Adminisrator user.](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_admin6.png)
+ ![Screenshot that highlights the Administrator user.](./media/adobe-experience-manager-tutorial/tutorial-admin-6.png)
4. Select **Account settings** > **Manage TrustStore**.
- ![Screenshot that shows Manage TrustStore under Account settings.](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_managetrust.png)
+ ![Screenshot that shows Manage TrustStore under Account settings.](./media/adobe-experience-manager-tutorial/manage-trust.png)
5. Under **Add Certificate from CER file**, click **Select Certificate File**. Browse to and select the certificate file, which you already downloaded from the Azure portal.
- ![Screenshot that highlights the Select Certificate File button.](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_user2.png)
+ ![Screenshot that highlights the Select Certificate File button.](./media/adobe-experience-manager-tutorial/user-2.png)
6. The certificate is added to the TrustStore. Note the alias of the certificate.
- ![Screenshot that shows that the certificate is added to the TrustStore.](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_admin7.png)
+ ![Screenshot that shows that the certificate is added to the TrustStore.](./media/adobe-experience-manager-tutorial/tutorial-admin-7.png)
7. On the **Users** page, select **authentication-service**.
- ![Sreenshot that highlights authentication-service on the screen.](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_admin8.png)
+ ![Screenshot that highlights authentication-service on the screen.](./media/adobe-experience-manager-tutorial/tutorial-admin-8.png)
8. Select **Account settings** > **Create/Manage KeyStore**. Create KeyStore by supplying a password.
- ![Screenshot that highlights Manage KeyStore.](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_admin9.png)
+ ![Screenshot that highlights Manage KeyStore.](./media/adobe-experience-manager-tutorial/tutorial-admin-9.png)
9. Go back to the admin screen. Then select **Settings** > **Operations** > **Web Console**.
- ![Screenshot that highlights Web Console under Operations within the Settings section.](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_admin1.png)
+ ![Screenshot that highlights Web Console under Operations within the Settings section.](./media/adobe-experience-manager-tutorial/tutorial-admin-1.png)
This opens the configuration page.
- ![Configure the single sign-on save button](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_admin2.png)
+ ![Configure the single sign-on save button](./media/adobe-experience-manager-tutorial/tutorial-admin-2.png)
10. Find **Adobe Granite SAML 2.0 Authentication Handler**. Then select the **Add** icon.
- ![Screenshot that highlights Adobe Granite SAML 2.0 Authentication Handler.](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_admin3.png)
+ ![Screenshot that highlights Adobe Granite SAML 2.0 Authentication Handler.](./media/adobe-experience-manager-tutorial/tutorial-admin-3.png)
11. Take the following actions on this page.
- ![Configure Single Sign-On Save button](./media/adobeexperiencemanager-tutorial/tutorial_adobeexperiencemanager_admin4.png)
+ ![Configure Single Sign-On Save button](./media/adobe-experience-manager-tutorial/tutorial-admin-4.png)
a. In the **Path** box, enter **/**.
@@ -193,73 +184,29 @@ To configure Azure AD single sign-on with [Application name], perform the follow
j. Select **Save**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Adobe Experience Manager.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Adobe Experience Manager**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Adobe Experience Manager**.
-
- ![The Adobe Experience Manager link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+### Create Adobe Experience Manager test user
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called Britta Simon in Adobe Experience Manager. If you selected the **Autocreate CRX Users** option, users are created automatically after successful authentication.
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+If you want to create users manually, work with the [Adobe Experience Manager support team](https://helpx.adobe.com/support/experience-manager.html) to add the users in the Adobe Experience Manager platform.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Adobe Experience Manager test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in Adobe Experience Manager. If you selected the **Autocreate CRX Users** option, users are created automatically after successful authentication.
+#### SP initiated:
-If you want to create users manually, work with the [Adobe Experience Manager support team](https://helpx.adobe.com/support/experience-manager.html) to add the users in the Adobe Experience Manager platform.
+* Click **Test this application** in the Azure portal. You'll be redirected to the Adobe Experience Manager sign-on URL, where you can initiate the login flow.
-### Test single sign-on
+* Go to the Adobe Experience Manager sign-on URL directly and initiate the login flow from there.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+#### IDP initiated:
-When you click the Adobe Experience Manager tile in the Access Panel, you should be automatically signed in to the Adobe Experience Manager for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Adobe Experience Manager instance for which you set up SSO.
-## Additional Resources
You can also use Microsoft My Apps to test the application in either mode. When you click the Adobe Experience Manager tile in My Apps, if the app is configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Adobe Experience Manager instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md) -- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
Once you configure Adobe Experience Manager you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/deskradar-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/deskradar-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 10/24/2019 Last updated : 02/04/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Deskradar with Azure Active Dire
* Enable your users to be automatically signed-in to Deskradar with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -34,29 +32,24 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. -- * Deskradar supports **SP and IDP** initiated SSO --
-## Adding Deskradar from the gallery
+## Add Deskradar from the gallery
To configure the integration of Deskradar into Azure AD, you need to add Deskradar from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Deskradar** in the search box. 1. Select **Deskradar** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for Deskradar
+## Configure and test Azure AD SSO for Deskradar
Configure and test Azure AD SSO with Deskradar using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Deskradar.
-To configure and test Azure AD SSO with Deskradar, complete the following building blocks:
+To configure and test Azure AD SSO with Deskradar, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -69,9 +62,9 @@ To configure and test Azure AD SSO with Deskradar, complete the following buildi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Deskradar** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Deskradar** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -131,18 +124,12 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Deskradar**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure Deskradar SSO
+## Configure Deskradar SSO
1. To automate the configuration within Deskradar, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
@@ -178,16 +165,20 @@ In this section, you create a user called B.Simon in Deskradar. Work with [Deskr
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Deskradar Sign-on URL, where you can initiate the login flow.
-When you click the Deskradar tile in the Access Panel, you should be automatically signed in to the Deskradar for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the Deskradar Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal; you should be automatically signed in to the Deskradar instance for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Deskradar tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the Deskradar instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
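Under the hood, the SP-initiated flow described above redirects the browser to the identity provider with a `SAMLRequest` query parameter, formed per the SAML 2.0 HTTP-Redirect binding (deflate-compress, base64-encode, URL-encode). The sketch below is illustrative only — the function name and example URLs are assumptions, and production deployments should use a vetted SAML library and sign requests where required rather than hand-building them:

```python
import base64
import datetime
import urllib.parse
import uuid
import zlib


def build_redirect_url(idp_sso_url: str, sp_entity_id: str, acs_url: str) -> str:
    """Build an SP-initiated SAML redirect URL (HTTP-Redirect binding).

    The AuthnRequest XML is deflate-compressed (raw DEFLATE, no zlib
    header), base64-encoded, and URL-encoded into the SAMLRequest query
    parameter, as the SAML 2.0 bindings specification describes.
    """
    request_id = "_" + uuid.uuid4().hex
    issue_instant = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = (
        f'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" '
        f'ID="{request_id}" Version="2.0" IssueInstant="{issue_instant}" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        f"<saml:Issuer>{sp_entity_id}</saml:Issuer>"
        f"</samlp:AuthnRequest>"
    )
    # Strip the 2-byte zlib header and 4-byte checksum to get raw DEFLATE.
    deflated = zlib.compress(authn_request.encode())[2:-4]
    saml_request = base64.b64encode(deflated).decode()
    return idp_sso_url + "?" + urllib.parse.urlencode({"SAMLRequest": saml_request})
```

The "Go to the Sign-on URL directly" test option works because the application performs exactly this kind of redirect when an unauthenticated user arrives.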
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Deskradar with Azure AD](https://aad.portal.azure.com/)
+Once you configure Deskradar you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/headerf5-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/headerf5-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 11/19/2019 Last updated : 02/09/2021
@@ -21,7 +21,8 @@ In this tutorial, you'll learn how to integrate F5 with Azure Active Directory (
* Enable your users to be automatically signed-in to F5 with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with single sign-on in Azure AD, see [Single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
+> [!NOTE]
+> F5 BIG-IP APM [Purchase Now](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5-big-ip-best?tab=Overview).
## Prerequisites
@@ -108,18 +109,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of F5 into Azure AD, you need to add F5 from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **F5** in the search box.
1. Select **F5** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for F5
+## Configure and test Azure AD SSO for F5
Configure and test Azure AD SSO with F5 using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in F5.
-To configure and test Azure AD SSO with F5, complete the following building blocks:
+To configure and test Azure AD SSO with F5, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -132,7 +133,7 @@ To configure and test Azure AD SSO with F5, complete the following building bloc
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **F5** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **F5** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
@@ -181,19 +182,10 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **F5**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-1. Click on **Conditional Access** .
-1. Click on **New Policy**.
-1. You can now see your F5 App as a resource for Conditional Access policy and apply any conditional access including Multifactor Auth, Device based access control or Identity Protection Policy.
## Configure F5 SSO
@@ -461,26 +453,24 @@ In this section, you create a user called B.Simon in F5. Work with [F5 Client s
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the F5 tile in the Access Panel, you should be automatically signed in to the F5 for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to the F5 Sign-on URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to the F5 Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal; you should be automatically signed in to the F5 instance for which you set up SSO.
-- [Try F5 with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in any mode. When you click the F5 tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the F5 instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
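One practical difference between the two flows above: an SP-initiated SAML response echoes the original request ID in its `InResponseTo` attribute, while an IdP-initiated response omits it. A minimal sketch of telling them apart, assuming an already-decoded (and, in production, signature-verified) response; `classify_flow` is a hypothetical helper, not part of any F5 or Azure AD API:

```python
import xml.etree.ElementTree as ET


def classify_flow(saml_response_xml: str) -> str:
    """Classify a decoded SAML Response as SP- or IdP-initiated.

    Only the top-level InResponseTo attribute is inspected; real code
    must first validate the response signature and conditions.
    """
    root = ET.fromstring(saml_response_xml)
    return "SP-initiated" if root.get("InResponseTo") else "IdP-initiated"
```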
-- [Configure F5 single sign-on for Kerberos application](kerbf5-tutorial.md)
-- [Configure F5 single sign-on for Advanced Kerberos application](advance-kerbf5-tutorial.md)
+> [!NOTE]
+> F5 BIG-IP APM [Purchase Now](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/f5-networks.f5-big-ip-best?tab=Overview).
-- [F5 BIG-IP APM and Azure AD integration for secure hybrid access](https://docs.microsoft.com/azure/active-directory/manage-apps/f5-aad-integration)
+## Next steps
-- [Tutorial to deploy F5 BIG-IP Virtual Edition VM in Azure IaaS for secure hybrid access](https://docs.microsoft.com/azure/active-directory/manage-apps/f5-bigip-deployment-guide)
+Once you configure F5 you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
-- [Tutorial for Azure Active Directory single sign-on integration with F5 BIG-IP for Password-less VPN](https://docs.microsoft.com/azure/active-directory/manage-apps/f5-aad-password-less-vpn)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/infor-cloud-suite-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/infor-cloud-suite-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 04/14/2019 Last updated : 02/05/2021

# Tutorial: Azure Active Directory integration with Infor CloudSuite
-In this tutorial, you learn how to integrate Infor CloudSuite with Azure Active Directory (Azure AD).
-Integrating Infor CloudSuite with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Infor CloudSuite with Azure Active Directory (Azure AD). When you integrate Infor CloudSuite with Azure AD, you can:
-* You can control in Azure AD who has access to Infor CloudSuite.
-* You can enable your users to be automatically signed-in to Infor CloudSuite (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Infor CloudSuite.
+* Enable your users to be automatically signed-in to Infor CloudSuite with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Infor CloudSuite, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Infor CloudSuite single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Infor CloudSuite single sign-on enabled subscription.
## Scenario description
@@ -38,65 +34,43 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* Infor CloudSuite supports **SP and IDP** initiated SSO
* Infor CloudSuite supports **Just In Time** user provisioning
-## Adding Infor CloudSuite from the gallery
+## Add Infor CloudSuite from the gallery
To configure the integration of Infor CloudSuite into Azure AD, you need to add Infor CloudSuite from the gallery to your list of managed SaaS apps.
-**To add Infor CloudSuite from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Infor CloudSuite**, select **Infor CloudSuite** from result panel then click **Add** button to add the application.
-
- ![Infor CloudSuite in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Infor CloudSuite** in the search box.
+1. Select **Infor CloudSuite** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Infor CloudSuite based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Infor CloudSuite needs to be established.
+## Configure and test Azure AD SSO for Infor CloudSuite
-To configure and test Azure AD single sign-on with Infor CloudSuite, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Infor CloudSuite using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Infor CloudSuite.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Infor CloudSuite Single Sign-On](#configure-infor-cloudsuite-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Infor CloudSuite test user](#create-infor-cloudsuite-test-user)** - to have a counterpart of Britta Simon in Infor CloudSuite that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Infor CloudSuite, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Infor CloudSuite SSO](#configure-infor-cloudsuite-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Infor CloudSuite test user](#create-infor-cloudsuite-test-user)** - to have a counterpart of B.Simon in Infor CloudSuite that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Infor CloudSuite, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Infor CloudSuite** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Infor CloudSuite** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
-
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type the URL using one of the following patterns:
```http
http://mingle-sso.inforcloudsuite.com
@@ -105,7 +79,7 @@ To configure Azure AD single sign-on with Infor CloudSuite, perform the followin
http://mingle-sso.se2.inforcloudsuite.com
```
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type the URL using one of the following patterns:
```http https://mingle-sso.inforcloudsuite.com:443/sp/ACS.saml2
@@ -116,9 +90,7 @@ To configure Azure AD single sign-on with Infor CloudSuite, perform the followin
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
```http https://mingle-portal.inforcloudsuite.com/Tenant-Name/
@@ -138,80 +110,54 @@ To configure Azure AD single sign-on with Infor CloudSuite, perform the followin
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Infor CloudSuite Single Sign-On
-
-To configure single sign-on on **Infor CloudSuite** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Infor CloudSuite support team](mailto:support@infor.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Infor CloudSuite.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Infor CloudSuite.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Infor CloudSuite**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Infor CloudSuite**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Infor CloudSuite SSO
-2. In the applications list, select **Infor CloudSuite**.
-
- ![The Infor CloudSuite link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+To configure single sign-on on **Infor CloudSuite** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Infor CloudSuite support team](mailto:support@infor.com). They set this setting to have the SAML SSO connection set properly on both sides.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Infor CloudSuite test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, a user called B.Simon is created in Infor CloudSuite. Infor CloudSuite supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Infor CloudSuite, a new one is created after authentication. If you need to create a user manually, contact [Infor CloudSuite support team](mailto:support@infor.com).
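The just-in-time provisioning behavior described above amounts to "create the account on first successful login". The in-memory store below is purely illustrative — real implementations map attributes from the validated SAML assertion into the application's user database, and the class and attribute names here are assumptions:

```python
class UserStore:
    """Toy user store illustrating just-in-time (JIT) provisioning."""

    def __init__(self) -> None:
        self._users: dict[str, dict] = {}

    def get_or_provision(self, assertion_attrs: dict) -> dict:
        """Return the existing user, or create one on first SSO login."""
        email = assertion_attrs["email"]
        if email not in self._users:
            # First successful authentication: provision the account
            # from the (already validated) assertion attributes.
            self._users[email] = {
                "email": email,
                "display_name": assertion_attrs.get("displayname", email),
                "provisioned_via": "JIT",
            }
        return self._users[email]
```

This is why no manual "create test user" step is needed when JIT provisioning is enabled: the first IdP-authenticated sign-in creates the counterpart account.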
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Infor CloudSuite test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, a user called Britta Simon is created in Infor CloudSuite. Infor CloudSuite supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Infor CloudSuite, a new one is created after authentication. If you need to create a user manually, contact [Infor CloudSuite support team](mailto:support@infor.com).
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in the Azure portal. This will redirect to the Infor CloudSuite Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Infor CloudSuite Sign-on URL directly and initiate the login flow from there.
-When you click the Infor CloudSuite tile in the Access Panel, you should be automatically signed in to the Infor CloudSuite for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in the Azure portal; you should be automatically signed in to the Infor CloudSuite instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Infor CloudSuite tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the Infor CloudSuite instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Infor CloudSuite you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/mondaycom-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/mondaycom-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 12/17/2019 Last updated : 02/08/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate monday.com with Azure Active Dir
* Enable your users to be automatically signed-in to monday.com with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:
@@ -37,22 +35,22 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* monday.com supports **SP and IDP** initiated SSO
* monday.com supports **Just In Time** user provisioning
-## Adding monday.com from the gallery
+## Add monday.com from the gallery
To configure the integration of monday.com into Azure AD, you need to add monday.com from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **monday.com** in the search box.
1. Select **monday.com** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for monday.com
+## Configure and test Azure AD SSO for monday.com
Configure and test Azure AD SSO with monday.com using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in monday.com.
-To configure and test Azure AD SSO with monday.com, complete the following building blocks:
+To configure and test Azure AD SSO with monday.com, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -65,9 +63,9 @@ To configure and test Azure AD SSO with monday.com, complete the following build
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **monday.com** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **monday.com** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -83,15 +81,11 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
c. After the metadata file is successfully uploaded, the **Identifier** and **Reply URL** values get auto populated in Basic SAML Configuration section.
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
   > [!Note]
   > If the **Identifier** and **Reply URL** values do not get populated automatically, then fill in the values manually. The **Identifier** and the **Reply URL** are the same, and the value is in the following pattern: `https://<your-domain>.monday.com/saml/saml_callback`

1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<YOUR_DOMAIN>.monday.com`
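The **Identifier**/**Reply URL** and the SP-initiated **Sign-on URL** described above follow fixed patterns. As a minimal, hypothetical sanity check (not part of the official procedure), you could validate the values before saving; `contoso` here is a placeholder monday.com account slug:

```python
import re

# Patterns documented above; the account slug replaces <your-domain>.
CALLBACK_RE = re.compile(r"^https://[a-z0-9-]+\.monday\.com/saml/saml_callback$")
SIGNON_RE = re.compile(r"^https://[a-z0-9-]+\.monday\.com$")

def is_valid_callback(url: str) -> bool:
    """True if the URL matches the documented Identifier/Reply URL pattern."""
    return bool(CALLBACK_RE.match(url))

def is_valid_signon(url: str) -> bool:
    """True if the URL matches the documented SP-initiated Sign-on URL pattern."""
    return bool(SIGNON_RE.match(url))

print(is_valid_callback("https://contoso.monday.com/saml/saml_callback"))  # True
print(is_valid_signon("https://contoso.monday.com"))                       # True
print(is_valid_callback("https://contoso.monday.com/saml"))                # False
```

A check like this catches the common mistake of pasting the plain tenant URL into the **Reply URL** field.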
@@ -137,15 +131,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **monday.com**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-   ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-   ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure monday.com SSO
@@ -162,15 +150,15 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Go to the **Profile** in the top-right corner of the page and click on **Admin**.
- ![Screenshot shows the Admin profile selected.](./media/mondaycom-tutorial/configuration01.png)
+ ![Screenshot shows the Admin profile selected.](./media/mondaycom-tutorial/configuration-1.png)
1. Select **Security** and make sure to click on **Open** next to SAML.
- ![Screenshot shows the Security tab with the option to Open next to SAML.](./media/mondaycom-tutorial/configuration02.png)
+ ![Screenshot shows the Security tab with the option to Open next to SAML.](./media/mondaycom-tutorial/configuration-2.png)
1. Fill in the details below from your IDP.
- ![Screenshot shows the SAML provider where you can enter information from your I D P.](./media/mondaycom-tutorial/configuration03.png)
+ ![Screenshot shows the SAML provider where you can enter information from your I D P.](./media/mondaycom-tutorial/configuration-3.png)
   > [!NOTE]
   > For more details, refer to [this](https://support.monday.com/hc/articles/360000460605-SAML-Single-Sign-on?abcb=34642) article.
@@ -181,16 +169,20 @@ In this section, a user called B.Simon is created in monday.com. monday.com supp
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the monday.com Sign-on URL, where you can initiate the login flow.
-When you click the monday.com tile in the Access Panel, you should be automatically signed in to the monday.com for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to monday.com Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the monday.com instance for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in either mode. When you click the monday.com tile in My Apps, if the app is configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the monday.com instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try monday.com with Azure AD](https://aad.portal.azure.com/)
+Once you configure monday.com, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/paylocity-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/paylocity-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 01/21/2020 Last updated : 02/08/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Paylocity with Azure Active Dire
* Enable your users to be automatically signed-in to Paylocity with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:
@@ -36,24 +34,22 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Paylocity supports **SP and IDP** initiated SSO
-* Once you configure the Paylocity you can enforce session controls, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
-
-## Adding Paylocity from the gallery
+## Add Paylocity from the gallery
To configure the integration of Paylocity into Azure AD, you need to add Paylocity from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Paylocity** in the search box.
1. Select **Paylocity** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Paylocity
+## Configure and test Azure AD SSO for Paylocity
Configure and test Azure AD SSO with Paylocity using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Paylocity.
-To configure and test Azure AD SSO with Paylocity, complete the following building blocks:
+To configure and test Azure AD SSO with Paylocity, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
    * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -66,9 +62,9 @@ To configure and test Azure AD SSO with Paylocity, complete the following buildi
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Paylocity** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Paylocity** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -76,7 +72,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
    `https://access.paylocity.com/`

1. Click **Save**.
@@ -131,15 +127,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Paylocity**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-   ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-   ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Paylocity SSO
@@ -160,20 +150,20 @@ In this section, you create a user called B.Simon in Paylocity. Work with [Payl
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Paylocity tile in the Access Panel, you should be automatically signed in to the Paylocity for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal. This will redirect to the Paylocity Sign-on URL, where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to Paylocity Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Paylocity instance for which you set up SSO.
-- [Try Paylocity with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in either mode. When you click the Paylocity tile in My Apps, if the app is configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the Paylocity instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-* [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-* [How to protect Paylocity with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure Paylocity, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/saml-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/saml-tutorial.md
@@ -9,27 +9,23 @@
Previously updated : 12/24/2018 Last updated : 02/05/2021

# Tutorial: Azure Active Directory integration with SAML 1.1 Token enabled LOB App
-In this tutorial, you learn how to integrate SAML 1.1 Token enabled LOB App with Azure Active Directory (Azure AD).
-Integrating SAML 1.1 Token enabled LOB App with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate SAML 1.1 Token enabled LOB App with Azure Active Directory (Azure AD). When you integrate SAML 1.1 Token enabled LOB App with Azure AD, you can:
-* You can control in Azure AD who has access to SAML 1.1 Token enabled LOB App.
-* You can enable your users to be automatically signed-in to SAML 1.1 Token enabled LOB App (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to SAML 1.1 Token enabled LOB App.
+* Enable your users to be automatically signed-in to SAML 1.1 Token enabled LOB App with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with SAML 1.1 Token enabled LOB App, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* SAML 1.1 Token enabled LOB App single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SAML 1.1 Token enabled LOB App single sign-on (SSO) enabled subscription.
## Scenario description
@@ -37,64 +33,45 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
* SAML 1.1 Token enabled LOB App supports **SP** initiated SSO
-## Adding SAML 1.1 Token enabled LOB App from the gallery
-
-To configure the integration of SAML 1.1 Token enabled LOB App into Azure AD, you need to add SAML 1.1 Token enabled LOB App from the gallery to your list of managed SaaS apps.
-
-**To add SAML 1.1 Token enabled LOB App from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-4. In the search box, type **SAML 1.1 Token enabled LOB App**, select **SAML 1.1 Token enabled LOB App** from result panel then click **Add** button to add the application.
+## Add SAML 1.1 Token enabled LOB App from the gallery
- ![SAML 1.1 Token enabled LOB App in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with SAML 1.1 Token enabled LOB App based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in SAML 1.1 Token enabled LOB App needs to be established.
-
-To configure and test Azure AD single sign-on with SAML 1.1 Token enabled LOB App, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure SAML 1.1 Token enabled LOB App Single Sign-On](#configure-saml-11-token-enabled-lob-app-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create SAML 1.1 Token enabled LOB App test user](#create-saml-11-token-enabled-lob-app-test-user)** - to have a counterpart of Britta Simon in SAML 1.1 Token enabled LOB App that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of SAML 1.1 Token enabled LOB App into Azure AD, you need to add SAML 1.1 Token enabled LOB App from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **SAML 1.1 Token enabled LOB App** in the search box.
+1. Select **SAML 1.1 Token enabled LOB App** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for SAML 1.1 Token enabled LOB App
-To configure Azure AD single sign-on with SAML 1.1 Token enabled LOB App, perform the following steps:
+Configure and test Azure AD SSO with SAML 1.1 Token enabled LOB App using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SAML 1.1 Token enabled LOB App.
-1. In the [Azure portal](https://portal.azure.com/), on the **SAML 1.1 Token enabled LOB App** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with SAML 1.1 Token enabled LOB App, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure SAML 1.1 Token enabled LOB App SSO](#configure-saml-11-token-enabled-lob-app-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create SAML 1.1 Token enabled LOB App test user](#create-saml-11-token-enabled-lob-app-test-user)** - to have a counterpart of B.Simon in SAML 1.1 Token enabled LOB App that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **SAML 1.1 Token enabled LOB App** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![SAML 1.1 Token enabled LOB App Domain and URLs single sign-on information](common/sp-identifier.png)
- a. In the **Sign on URL** text box, type a URL using the following pattern: `https://your-app-url`
@@ -104,7 +81,7 @@ To configure Azure AD single sign-on with SAML 1.1 Token enabled LOB App, perfor
   > [!NOTE]
   > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact the SAML 1.1 Token enabled LOB App client support team to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
![The Certificate download link](common/certificatebase64.png)
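Before forwarding the downloaded **Certificate (Base64)** file to the application side, it can be worth checking that the file is well-formed PEM. This is a minimal, optional sketch using only the Python standard library; the payload below is dummy bytes standing in for a real certificate:

```python
import base64
import ssl

# Build a PEM-shaped string; the body is dummy base64, NOT a real certificate.
dummy_der = b"dummy-der-bytes"
pem = (
    "-----BEGIN CERTIFICATE-----\n"
    + base64.encodebytes(dummy_der).decode("ascii")
    + "-----END CERTIFICATE-----\n"
)

# ssl.PEM_cert_to_DER_cert raises ValueError if the BEGIN/END framing is
# missing, so a successful call confirms the basic PEM structure is intact.
der = ssl.PEM_cert_to_DER_cert(pem)
print(der == dummy_der)  # True: the payload round-trips through the PEM framing
```

Running the same conversion on the file you downloaded (read as text) would fail fast on a truncated or wrongly exported certificate.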
@@ -112,81 +89,48 @@ To configure Azure AD single sign-on with SAML 1.1 Token enabled LOB App, perfor
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure SAML 1.1 Token enabled LOB App Single Sign-On
-
-To configure single sign-on on **SAML 1.1 Token enabled LOB App** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to SAML 1.1 Token enabled LOB App support team. They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to SAML 1.1 Token enabled LOB App.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAML 1.1 Token enabled LOB App.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **SAML 1.1 Token enabled LOB App**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SAML 1.1 Token enabled LOB App**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure SAML 1.1 Token enabled LOB App SSO
-2. In the applications list, type and select **SAML 1.1 Token enabled LOB App**.
-
- ![The SAML 1.1 Token enabled LOB App link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **SAML 1.1 Token enabled LOB App** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to SAML 1.1 Token enabled LOB App support team. They set this setting to have the SAML SSO connection set properly on both sides.
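The URLs you copy from the **Set up** section (Login URL, Azure AD Identifier) also appear in the tenant's federation metadata XML. As an illustration only, a minimal sketch of extracting them with Python's standard library; the metadata fragment below is a simplified, hypothetical stand-in for a real file:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical federation-metadata fragment; a real one comes from
# the App Federation Metadata Url shown in the Azure portal.
METADATA = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
root = ET.fromstring(METADATA)

# entityID corresponds to the "Azure AD Identifier" value;
# SingleSignOnService/@Location corresponds to the "Login URL" value.
azure_ad_identifier = root.get("entityID")
login_url = root.find(".//md:SingleSignOnService", NS).get("Location")

print(azure_ad_identifier)
print(login_url)
```

Parsing the metadata this way is one low-friction option for handing the application's support team exact values instead of hand-copied ones.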
### Create SAML 1.1 Token enabled LOB App test user

In this section, you create a user called Britta Simon in SAML 1.1 Token enabled LOB App. Work with the SAML 1.1 Token enabled LOB App support team to add the users in the SAML 1.1 Token enabled LOB App platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the SAML 1.1 Token enabled LOB App tile in the Access Panel, you should be automatically signed in to the SAML 1.1 Token enabled LOB App for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the SAML 1.1 Token enabled LOB App Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to SAML 1.1 Token enabled LOB App Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the SAML 1.1 Token enabled LOB App tile in My Apps, you are redirected to the SAML 1.1 Token enabled LOB App Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure SAML 1.1 Token enabled LOB App, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/samlssoconfluence-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/samlssoconfluence-tutorial.md
@@ -9,91 +9,65 @@
Previously updated : 12/24/2018 Last updated : 02/04/2021

# Tutorial: Azure Active Directory integration with SAML SSO for Confluence by resolution GmbH
-In this tutorial, you learn how to integrate SAML SSO for Confluence by resolution GmbH with Azure Active Directory (Azure AD).
-Integrating SAML SSO for Confluence by resolution GmbH with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate SAML SSO for Confluence by resolution GmbH with Azure Active Directory (Azure AD). When you integrate SAML SSO for Confluence by resolution GmbH with Azure AD, you can:
-* You can control in Azure AD who has access to SAML SSO for Confluence by resolution GmbH.
-* You can enable your users to be automatically signed-in to SAML SSO for Confluence by resolution GmbH (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to SAML SSO for Confluence by resolution GmbH.
+* Enable your users to be automatically signed-in to SAML SSO for Confluence by resolution GmbH with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with SAML SSO for Confluence by resolution GmbH, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* SAML SSO for Confluence by resolution GmbH single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SAML SSO for Confluence by resolution GmbH single sign-on (SSO) enabled subscription.
## Scenario description
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
+In this tutorial, you configure and test Azure AD SSO in a test environment.
-* SAML SSO for Confluence by resolution GmbH supports **SP** and **IDP** initiated SSO
+* SAML SSO for Confluence by resolution GmbH supports **SP and IDP** initiated SSO
-## Adding SAML SSO for Confluence by resolution GmbH from the gallery
+## Add SAML SSO for Confluence by resolution GmbH from the gallery
To configure the integration of SAML SSO for Confluence by resolution GmbH into Azure AD, you need to add SAML SSO for Confluence by resolution GmbH from the gallery to your list of managed SaaS apps.
-**To add SAML SSO for Confluence by resolution GmbH from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **SAML SSO for Confluence by resolution GmbH**, select **SAML SSO for Confluence by resolution GmbH** from result panel then click **Add** button to add the application.
-
- ![SAML SSO for Confluence by resolution GmbH in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with SAML SSO for Confluence by resolution GmbH based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in SAML SSO for Confluence by resolution GmbH needs to be established.
-
-To configure and test Azure AD single sign-on with SAML SSO for Confluence by resolution GmbH, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure SAML SSO for Confluence by resolution GmbH Single Sign-On](#configure-saml-sso-for-confluence-by-resolution-gmbh-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create SAML SSO for Confluence by resolution GmbH test user](#create-saml-sso-for-confluence-by-resolution-gmbh-test-user)** - to have a counterpart of Britta Simon in SAML SSO for Confluence by resolution GmbH that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **SAML SSO for Confluence by resolution GmbH** in the search box.
+1. Select **SAML SSO for Confluence by resolution GmbH** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for SAML SSO for Confluence by resolution GmbH
-To configure Azure AD single sign-on with SAML SSO for Confluence by resolution GmbH, perform the following steps:
+Configure and test Azure AD SSO with SAML SSO for Confluence by resolution GmbH using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SAML SSO for Confluence by resolution GmbH.
-1. In the [Azure portal](https://portal.azure.com/), on the **SAML SSO for Confluence by resolution GmbH** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with SAML SSO for Confluence by resolution GmbH, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+2. **[Configure SAML SSO for Confluence by resolution GmbH SSO](#configure-saml-sso-for-confluence-by-resolution-gmbh-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create SAML SSO for Confluence by resolution GmbH test user](#create-saml-sso-for-confluence-by-resolution-gmbh-test-user)** - to have a counterpart of B.Simon in SAML SSO for Confluence by resolution GmbH that is linked to the Azure AD representation of the user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **SAML SSO for Confluence by resolution GmbH** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section perform the following steps, if you wish to configure the application in **IDP** Initiated mode:
-
- ![Screenshot that shows the "Basic S A M L Configuration" with the "Identifier" and "Reply U R L" text boxes highlighted, and the "Save" action selected.](common/idp-intiated.png)
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<server-base-url>/plugins/servlet/samlsso`
@@ -101,9 +75,7 @@ To configure Azure AD single sign-on with SAML SSO for Confluence by resolution
b. In the **Reply URL** text box, type a URL using the following pattern: `https://<server-base-url>/plugins/servlet/samlsso`
- c. Click **Set additional URLs** and perform the following step if you wish to configure the application in SP initiated mode:
-
- ![SAML SSO for Confluence by resolution GmbH Domain and URLs single sign-on information](common/metadata-upload-additional-signon.png)
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://<server-base-url>/plugins/servlet/samlsso`
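All three Basic SAML Configuration values for the resolution GmbH plugin share one endpoint pattern, `https://<server-base-url>/plugins/servlet/samlsso`. A minimal sketch of building them from a base URL (the `confluence.example.com` base URL is a hypothetical placeholder, not a real server):

```python
# Build the Identifier, Reply URL, and Sign-on URL for the resolution GmbH
# SAML plugin from a Confluence base URL. The base URL used below is a
# hypothetical placeholder -- substitute your server's actual base URL.
def samlsso_urls(server_base_url: str) -> dict:
    endpoint = server_base_url.rstrip("/") + "/plugins/servlet/samlsso"
    # Identifier, Reply URL, and Sign-on URL all point at the same servlet.
    return {"identifier": endpoint, "reply_url": endpoint, "sign_on_url": endpoint}

urls = samlsso_urls("https://confluence.example.com/")
print(urls["sign_on_url"])  # https://confluence.example.com/plugins/servlet/samlsso
```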
@@ -115,51 +87,77 @@ To configure Azure AD single sign-on with SAML SSO for Confluence by resolution
![The Certificate download link](common/metadataxml.png)
-### Configure SAML SSO for Confluence by resolution GmbH Single Sign-On
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAML SSO for Confluence by resolution GmbH.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SAML SSO for Confluence by resolution GmbH**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+## Configure SAML SSO for Confluence by resolution GmbH SSO
1. In a different web browser window, log in to your **SAML SSO for Confluence by resolution GmbH admin portal** as an administrator.

2. Hover over the cog icon and click **Add-ons**.
- ![Screenshot that shows the "Cog" icon selected, and "Add-ons" selected from the drop-down.](./media/samlssoconfluence-tutorial/addon1.png)
+ ![Screenshot that shows the "Cog" icon selected, and "Add-ons" selected from the drop-down.](./media/saml-sso-confluence-tutorial/add-on-1.png)
3. You are redirected to the Administrator Access page. Enter the password and click the **Confirm** button.
- ![Screenshot that shows the "Administrator Access" page with the "Confirm" button selected.](./media/samlssoconfluence-tutorial/addon2.png)
+ ![Screenshot that shows the "Administrator Access" page with the "Confirm" button selected.](./media/saml-sso-confluence-tutorial/add-on-2.png)
4. Under the **ATLASSIAN MARKETPLACE** tab, click **Find new add-ons**.
- ![Screenshot that shows the "Attlassian Marketplace" tab with "Find new add-ons" selected.](./media/samlssoconfluence-tutorial/addon.png)
+ ![Screenshot that shows the "Attlassian Marketplace" tab with "Find new add-ons" selected.](./media/saml-sso-confluence-tutorial/add-on.png)
5. Search for **SAML Single Sign On (SSO) for Confluence** and click the **Install** button to install the new SAML plugin.
- ![Screenshot that shows the "Find new add-ons" page with "S A M L Single Sign On (S S O) for Confluence" in the search box and the "Install" button selected.](./media/samlssoconfluence-tutorial/addon7.png)
+ ![Screenshot that shows the "Find new add-ons" page with "S A M L Single Sign On (S S O) for Confluence" in the search box and the "Install" button selected.](./media/saml-sso-confluence-tutorial/add-on-7.png)
6. The plugin installation will start. Click **Close**.
- ![Screenshot that shows the "Installing" dialog.](./media/samlssoconfluence-tutorial/addon8.png)
+ ![Screenshot that shows the "Installing" dialog.](./media/saml-sso-confluence-tutorial/add-on-8.png)
- ![Screenshot that shows the "Installed and ready to go!" dialog with the "Close" action selected.](./media/samlssoconfluence-tutorial/addon9.png)
+ ![Screenshot that shows the "Installed and ready to go!" dialog with the "Close" action selected.](./media/saml-sso-confluence-tutorial/add-on-9.png)
7. Click **Manage**.
- ![Screenshot that shows the "S A M L Single Sign On (S S O) for Confluence" app page with the "Manage" button selected.](./media/samlssoconfluence-tutorial/addon10.png)
+ ![Screenshot that shows the "S A M L Single Sign On (S S O) for Confluence" app page with the "Manage" button selected.](./media/saml-sso-confluence-tutorial/add-on-10.png)
8. Click **Configure** to configure the new plugin.
- ![Screenshot that shows the "Manage" page with the "Configure" button selected.](./media/samlssoconfluence-tutorial/addon11.png)
+ ![Screenshot that shows the "Manage" page with the "Configure" button selected.](./media/saml-sso-confluence-tutorial/add-on-11.png)
9. This new plugin can also be found under the **USERS & SECURITY** tab.
- ![Screenshot that shows the "Users & Security" tab with "S A M L SingleSignOn" selected.](./media/samlssoconfluence-tutorial/addon3.png)
+ ![Screenshot that shows the "Users & Security" tab with "S A M L SingleSignOn" selected.](./media/saml-sso-confluence-tutorial/add-on-3.png)
10. On the **SAML SingleSignOn Plugin Configuration** page, click the **Add new IdP** button to configure the settings of the identity provider.
- ![Screenshot that shows the "S A M L SingleSignOn Plugin Configuration" page, with the "Add new I d P" button selected.](./media/samlssoconfluence-tutorial/addon4.png)
+ ![Screenshot that shows the "S A M L SingleSignOn Plugin Configuration" page, with the "Add new I d P" button selected.](./media/saml-sso-confluence-tutorial/add-on-4.png)
11. On the **Choose your SAML Identity Provider** page, perform the following steps:
- ![Screenshot that shows the "Choose your S A M L Identity Provider" page with the "I d P Type", "Name", and "Description" text boxes highlighted.](./media/samlssoconfluence-tutorial/addon5a.png)
+ ![Screenshot that shows the "Choose your S A M L Identity Provider" page with the "I d P Type", "Name", and "Description" text boxes highlighted.](./media/saml-sso-confluence-tutorial/add-on-5-a.png)
a. Set **Azure AD** as the IdP type.
@@ -171,11 +169,11 @@ To configure Azure AD single sign-on with SAML SSO for Confluence by resolution
12. On the **Identity provider configuration** page, click the **Next** button.
- ![Screenshot that shows the "Identity provider configuration" page with the "Next" button selected.](./media/samlssoconfluence-tutorial/addon5b.png)
+ ![Screenshot that shows the "Identity provider configuration" page with the "Next" button selected.](./media/saml-sso-confluence-tutorial/add-on-5-b.png)
13. On the **Import SAML IdP Metadata** page, perform the following steps:
- ![Screenshot that shows the "Import S A M L I d P Metadata" page with the "Import", "Load File", and "Next" buttons selected.](./media/samlssoconfluence-tutorial/addon5c.png)
+ ![Screenshot that shows the "Import S A M L I d P Metadata" page with the "Import", "Load File", and "Next" buttons selected.](./media/saml-sso-confluence-tutorial/add-on-5-c.png)
   a. Click the **Load File** button and pick the metadata XML file you downloaded in Step 5.
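Before importing, you can sanity-check that the downloaded federation metadata actually contains an entity ID and a signing certificate, since the plugin needs both. A minimal sketch (the metadata string below is a trimmed, illustrative sample with a placeholder tenant ID, not real Azure AD output):

```python
import xml.etree.ElementTree as ET

# Trimmed, illustrative federation-metadata sample -- a real file downloaded
# from the Azure portal is much larger and contains a full certificate.
SAMPLE_METADATA = """<EntityDescriptor
    xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIIC...base64...</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

MD = "{urn:oasis:names:tc:SAML:2.0:metadata}"
DS = "{http://www.w3.org/2000/09/xmldsig#}"

def check_metadata(xml_text: str) -> str:
    """Return the entityID if the metadata carries one plus a signing cert."""
    root = ET.fromstring(xml_text)
    entity_id = root.get("entityID")
    certs = []
    for kd in root.iter(MD + "KeyDescriptor"):
        if kd.get("use") == "signing":
            certs.extend(kd.iter(DS + "X509Certificate"))
    if not entity_id or not certs:
        raise ValueError("metadata is missing entityID or signing certificate")
    return entity_id

print(check_metadata(SAMPLE_METADATA))
```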
@@ -187,70 +185,20 @@ To configure Azure AD single sign-on with SAML SSO for Confluence by resolution
14. On the **User ID attribute and transformation** page, click the **Next** button.
- ![Screenshot that shows the "User ID attribute and transformation" page with the "Next" button selected.](./media/samlssoconfluence-tutorial/addon5d.png)
+ ![Screenshot that shows the "User ID attribute and transformation" page with the "Next" button selected.](./media/saml-sso-confluence-tutorial/add-on-5-d.png)
15. On the **User creation and update** page, click **Save & Next** to save the settings.
- ![Screenshot that shows the "User creation and update" page with the "Save & Next" button selected.](./media/samlssoconfluence-tutorial/addon6a.png)
+ ![Screenshot that shows the "User creation and update" page with the "Save & Next" button selected.](./media/saml-sso-confluence-tutorial/add-on-6-a.png)
16. On the **Test your settings** page, click **Skip test & configure manually** to skip the user test for now. This will be performed in the next section and requires some settings in the Azure portal.
- ![Screenshot that shows the "Test your settings" page with the "Skip test & configure manually" button selected.](./media/samlssoconfluence-tutorial/addon6b.png)
+ ![Screenshot that shows the "Test your settings" page with the "Skip test & configure manually" button selected.](./media/saml-sso-confluence-tutorial/add-on-6-b.png)
17. In the dialog that appears, reading **Skipping the test means...**, click **OK**.
- ![Configure Single Sign-On](./media/samlssoconfluence-tutorial/addon6c.png)
-
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to SAML SSO for Confluence by resolution GmbH.
+ ![Configure Single Sign-On](./media/saml-sso-confluence-tutorial/add-on-6-c.png)
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **SAML SSO for Confluence by resolution GmbH**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, type and select **SAML SSO for Confluence by resolution GmbH**.
-
- ![The SAML SSO for Confluence by resolution GmbH link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
### Create SAML SSO for Confluence by resolution GmbH test user
@@ -263,11 +211,11 @@ In SAML SSO for Confluence by resolution GmbH, provisioning is a manual task.
2. Hover over the cog icon and click **User management**.
- ![Screenshot that shows the "Cog" icon selected, and "User management" selected from the menu.](./media/samlssoconfluence-tutorial/user1.png)
+ ![Screenshot that shows the "Cog" icon selected, and "User management" selected from the menu.](./media/saml-sso-confluence-tutorial/user-1.png)
3. Under the **Users** section, click the **Add users** tab. On the **"Add a User"** dialog page, perform the following steps:
- ![Add Employee](./media/samlssoconfluence-tutorial/user2.png)
+ ![Add Employee](./media/saml-sso-confluence-tutorial/user-2.png)
   a. In the **Username** textbox, type the email of a user such as Britta Simon.
@@ -281,16 +229,22 @@ In SAML SSO for Confluence by resolution GmbH, provisioning is a manual task.
f. Click **Add** button.
-### Test single sign-on
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the SAML SSO for Confluence by resolution GmbH Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the SAML SSO for Confluence by resolution GmbH Sign-on URL directly and initiate the login flow from there.
-When you click the SAML SSO for Confluence by resolution GmbH tile in the Access Panel, you should be automatically signed in to the SAML SSO for Confluence by resolution GmbH for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the SAML SSO for Confluence by resolution GmbH for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the SAML SSO for Confluence by resolution GmbH tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the SAML SSO for Confluence by resolution GmbH for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure SAML SSO for Confluence by resolution GmbH, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/secretserver-on-premises-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/secretserver-on-premises-tutorial.md
@@ -9,7 +9,7 @@
Previously updated : 08/07/2019 Last updated : 02/05/2021
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Secret Server (On-Premises) with
* Enable your users to be automatically signed-in to Secret Server (On-Premises) with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -36,44 +34,43 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Secret Server (On-Premises) supports **SP and IDP** initiated SSO
-## Adding Secret Server (On-Premises) from the gallery
+## Add Secret Server (On-Premises) from the gallery
To configure the integration of Secret Server (On-Premises) into Azure AD, you need to add Secret Server (On-Premises) from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Secret Server (On-Premises)** in the search box.
1. Select **Secret Server (On-Premises)** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for Secret Server (On-Premises)
Configure and test Azure AD SSO with Secret Server (On-Premises) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Secret Server (On-Premises).
-To configure and test Azure AD SSO with Secret Server (On-Premises), complete the following building blocks:
+To configure and test Azure AD SSO with Secret Server (On-Premises), perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
-2. **[Configure Secret Server (On-Premises) SSO](#configure-secret-server-on-premises-sso)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-5. **[Create Secret Server (On-Premises) test user](#create-secret-server-on-premises-test-user)** - to have a counterpart of B.Simon in Secret Server (On-Premises) that is linked to the Azure AD representation of user.
-6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Secret Server (On-Premises) SSO](#configure-secret-server-on-premises-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Secret Server (On-Premises) test user](#create-secret-server-on-premises-test-user)** - to have a counterpart of B.Simon in Secret Server (On-Premises) that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Secret Server (On-Premises)** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Secret Server (On-Premises)** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
   ![Edit Basic SAML Configuration](common/edit-urls.png)

1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Identifier** text box, enter the user chosen value as an example:
+ a. In the **Identifier** text box, type the URL:
   `https://secretserveronpremises.azure`

   b. In the **Reply URL** text box, type a URL using the following pattern:
@@ -106,10 +103,6 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)
-### Configure Secret Server (On-Premises) SSO
-
-To configure single sign-on on the **Secret Server (On-Premises)** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [Secret Server (On-Premises) support team](https://thycotic.force.com/support/s/). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
@@ -129,31 +122,35 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Secret Server (On-Premises)**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
+## Configure Secret Server (On-Premises) SSO
+
+To configure single sign-on on the **Secret Server (On-Premises)** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [Secret Server (On-Premises) support team](https://thycotic.force.com/support/s/). They set this setting to have the SAML SSO connection set properly on both sides.
+ ### Create Secret Server (On-Premises) test user In this section, you create a user called Britta Simon in Secret Server (On-Premises). Work with [Secret Server (On-Premises) support team](https://thycotic.force.com/support/s/) to add the users in the Secret Server (On-Premises) platform. Users must be created and activated before you use single sign-on.
-### Test SSO
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Secret Server (On-Premises) Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Secret Server (On-Premises) Sign-on URL directly and initiate the login flow from there.
-When you click the Secret Server (On-Premises) tile in the Access Panel, you should be automatically signed in to the Secret Server (On-Premises) for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Secret Server (On-Premises) for which you set up SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Secret Server (On-Premises) tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the Secret Server (On-Premises) for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Secret Server (On-Premises) you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-account-change-password-page https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-account-change-password-page.md
@@ -10,7 +10,7 @@
Previously updated : 07/29/2020 Last updated : 01/19/2021
@@ -19,7 +19,12 @@
The **Change password** page of the **My Account** portal helps you to update an existing password for your work or school account, assuming you remember the password and that you're not locked out of your account. If you don't remember your password, if you're locked out of your account, or if you never got a password from your organization, you can use your security info and your mobile device to reset your password. >[!Important]
->This article is intended for users trying to update a known password for an existing work or school account. If you're a user trying to get into a personal account, such as for Xbox, Hotmail, or Outlook.com, try the suggestions in the [When you can't sign in to your Microsoft account](https://support.microsoft.com/help/12429/microsoft-account-sign-in-cant) article. If you're an administrator trying to find more information about how to test up self-service password reset for your employees or other users, see [Self-service password reset](../authentication/tutorial-enable-sspr.md).
+>This article is intended for users trying to update a known password for an existing work or school account. If you're a user trying to get into a personal account, such as for Xbox, Hotmail, or Outlook.com, try the suggestions in the [When you can't sign in to your Microsoft account](https://support.microsoft.com/help/12429/microsoft-account-sign-in-cant) article. If you see an error while signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myaccount.microsoft.com?tenantId=*your_domain_name*
+> - https://myaccount.microsoft.com?tenant=*your_tenant_ID*
+>
+>If you're an administrator trying to find more information about how to set up self-service password reset for your employees or other users, see [Self-service password reset](../authentication/tutorial-enable-sspr.md).
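The tenant-scoped URL pattern above can be assembled in a shell. This is only a sketch with a placeholder domain; substitute your organization's actual domain name (or tenant ID from your administrator):

```shell
# Hypothetical values for illustration only; "contoso.com" is a placeholder,
# not a real organization domain you should use.
DOMAIN="contoso.com"
URL="https://myaccount.microsoft.com?tenantId=${DOMAIN}"
echo "$URL"
```

The same pattern works with `?tenant=` and your tenant ID instead of the domain name.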
## Update a password from the Change password page
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-account-portal-devices-page https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-account-portal-devices-page.md
@@ -10,7 +10,7 @@
Previously updated : 07/29/2020 Last updated : 01/19/2021
@@ -23,7 +23,12 @@ The **Devices** page of the **My Account** portal helps you to manage the device
- Disable any devices you no longer own, have lost, or that have been stolen. >[!Important]
->This article is intended for users trying to update the device info connected to a work or school account. If you're an administrator looking for information about device management for your employees and other uses, see the [Device Identities Documentation](../devices/index.yml).
+>This article is intended for users trying to update the device info connected to a work or school account. If you see an error while signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myaccount.microsoft.com?tenantId=*your_domain_name*
+> - https://myaccount.microsoft.com?tenant=*your_tenant_ID*
+>
+>If you're an administrator looking for information about device management for your employees and other users, see the [Device Identities Documentation](../devices/index.yml).
## View your connected devices
@@ -58,7 +63,7 @@ If you're locked out of your device or have a fatal error, you can go to another
![Device page with BitLocker key option](media/my-account-portal/my-account-portal-devices-bitlocker.png)
-2. Select **View Bitlocker Keys** for the locked out device and write down the BitLocker key for your locked device.
+2. Select **View BitLocker Keys** for the locked out device and write down the BitLocker key for your locked device.
## Next steps
@@ -82,4 +87,4 @@ After viewing your connected devices, you can:
- [Go to the Office **My installs** page](https://portal.office.com/account/#installs) -- [Go to the Office **Subscriptions** page](https://portal.office.com/account/#subscriptions)
+- [Go to the Office **Subscriptions** page](https://portal.office.com/account/#subscriptions)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-account-portal-organizations-page https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-account-portal-organizations-page.md
@@ -10,7 +10,7 @@
Previously updated : 09/10/2020 Last updated : 01/19/2021
@@ -26,21 +26,26 @@ The **Organizations** page of the **My Account** portal helps you to manage the
- **Other organizations.** The other organizations are any group that you've signed in to previously using your work or school account. You can leave any of these organizations at any time. >[!Important]
->This article is intended for users trying to update the organization info accessed by a work or school account. If you're an administrator looking for information about group and user management for your employees and other uses, see the [Enterprise user management documentation](../enterprise-users/index.yml).
+>This article is intended for users trying to update the organization info accessed by a work or school account. If you see an error while signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myaccount.microsoft.com?tenantId=*your_domain_name*
+> - https://myaccount.microsoft.com?tenant=*your_tenant_ID*
+>
+>If you're an administrator looking for information about group and user management for your employees and other users, see the [Enterprise user management documentation](../enterprise-users/index.yml).
## View your organizations
-1. Sign in to your work or school account, and then go to the **My Account** (https://myaccount.microsoft.com/) page.
+1. Sign in to your work or school account, and then go to the **My Account** (https://myaccount.microsoft.com/) page.
-2. Select **Organizations** from the left navigation pane or select the **Manage organizations** link from the **Organizations** block.
+1. Select **Organizations** from the left navigation pane or select the **Manage organizations** link from the **Organizations** block.
![My Account page, showing highlighted Organizations links](media/my-account-portal/my-account-portal-organizations.png)
-3. Review the information for your **Home organization**.
+1. Review the information for your **Home organization**.
![Organizations page](media/my-account-portal/my-account-portal-organization-page.png)
-4. Review your other organizations, making sure you recognize all of the organizations that you have access to.
+1. Review your other organizations, making sure you recognize all of the organizations that you have access to.
## Leave an organization
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-account-portal-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-account-portal-overview.md
@@ -10,7 +10,7 @@
Previously updated : 07/29/2020 Last updated : 01/19/2021
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-account-portal-privacy-page https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-account-portal-privacy-page.md
@@ -10,7 +10,7 @@
Previously updated : 10/28/2019 Last updated : 01/19/2021
@@ -18,9 +18,18 @@
You can view how your organization uses your data from the **Settings & Privacy** page of the **My Account** portal.
+>[!Note]
+> If you see an error while signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myaccount.microsoft.com?tenantId=*your_domain_name*
+> - https://myaccount.microsoft.com?tenant=*your_tenant_ID*
+ ## View your privacy-related info
-1. Sign in to your work or school account and then go to your https://myaccount.microsoft.com/ page.
+1. Sign in to your work or school account and then go to your https://myaccount.microsoft.com/ page. If you are signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+
+ - https://myaccount.microsoft.com?tenantId=*your_domain_name*
+ - https://myaccount.microsoft.com?tenant=*your_tenant_ID*
2. Select **Settings & Privacy** from the left navigation pane or select the **View Settings and Privacy** link from the **Settings & Privacy** block.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-account-portal-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-account-portal-settings.md
@@ -10,7 +10,7 @@
Previously updated : 07/29/2020 Last updated : 01/19/2021
@@ -18,9 +18,18 @@
You can view or change your account settings in the My Account portal, such as language or time zone, from the **Settings & Privacy** page of the **My Account** portal.
+>[!Note]
+> If you see an error while signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myaccount.microsoft.com?tenantId=*your_domain_name*
+> - https://myaccount.microsoft.com?tenant=*your_tenant_ID*
+ ## View and manage your language and regional settings
-1. Sign in to your work or school account and then go to your https://myaccount.microsoft.com/ page.
+1. Sign in to your work or school account and then go to your https://myaccount.microsoft.com/ page. If you are signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+
+ - https://myaccount.microsoft.com?tenantId=*your_domain_name*
+ - https://myaccount.microsoft.com?tenant=*your_tenant_ID*
1. Select **Settings & Privacy** from the left navigation pane or select the **View Settings And Privacy** link from the **Settings & Privacy** block.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-account-portal-sign-ins-page https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-account-portal-sign-ins-page.md
@@ -10,7 +10,7 @@
Previously updated : 08/03/2020 Last updated : 01/19/2021
@@ -22,6 +22,12 @@ You can view all of your recent work or school account sign-in activity, from th
- If an attacker successfully signed in to your account, and from what location. - What apps the attacker tried to access.
+>[!Note]
+> If you see an error while signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myaccount.microsoft.com?tenantId=*your_domain_name*
+> - https://myaccount.microsoft.com?tenant=*your_tenant_ID*
+ ## View your recent sign-in activity 1. Sign in to your work or school account and then go to your https://myaccount.microsoft.com/ page.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-apps-portal-end-user-access-reviews https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-apps-portal-end-user-access-reviews.md
@@ -8,7 +8,7 @@
Previously updated : 10/19/2020 Last updated : 01/19/2021
@@ -24,6 +24,11 @@ If you don't have access to the **My Apps** portal, contact your Helpdesk for
>[!Important] >This content is intended for **My Apps** users. If you're an administrator, you can find more information about how to set up and manage your cloud-based apps in the [Application Management Documentation](../manage-apps/index.yml).
+>
+> If you see an error signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myapplications.microsoft.com?tenantId=*your_domain_name*
+> - https://myapplications.microsoft.com?tenant=*your_tenant_ID*
## Manage access reviews
@@ -36,17 +41,17 @@ If your administrator has given you permission to perform your own access review
1. Sign in to your work or school account.
-2. Open your web browser and go to https://myapps.microsoft.com, or use the link provided by your organization. For example, you might be directed to a customized page for your organization, such as https://myapps.microsoft.com/contoso.com.
+1. Open your web browser and go to https://myapps.microsoft.com, or use the link provided by your organization. For example, you might be directed to a customized page for your organization, such as https://myapps.microsoft.com/contoso.com.
The **Apps** page appears, showing all the cloud-based apps owned by your organization and available for you to use. ![Apps page in the My Apps portal](media/my-apps-portal/my-apps-home.png)
-3. Select the **Access reviews** tile to see a list of access reviews waiting for your approval.
+1. Select the **Access reviews** tile to see a list of access reviews waiting for your approval.
![Access reviews page with pending access reviews for the organization](media/my-apps-portal/my-apps-portal-access-reviews-page.png)
-4. Select **Begin review** to start your access review.
+1. Select **Begin review** to start your access review.
5. Review your access and determine whether it's still necessary.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-apps-portal-end-user-access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-apps-portal-end-user-access.md
@@ -8,7 +8,7 @@
Previously updated : 10/19/2020 Last updated : 01/19/2021
@@ -26,6 +26,11 @@ If you don't have access to the **My Apps** portal, contact your organization'
> [!IMPORTANT] > This content is intended for **My Apps** users. If you're an administrator, you can find more information about how to set up and manage your cloud-based apps in the [Application Management Documentation](../manage-apps/index.yml).
+>
+> If you see an error signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myapplications.microsoft.com?tenantId=*your_domain_name*
+> - https://myapplications.microsoft.com?tenant=*your_tenant_ID*
## Supported browsers
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-apps-portal-end-user-groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-apps-portal-end-user-groups.md
@@ -8,7 +8,7 @@
Previously updated : 10/19/2020 Last updated : 01/19/2021
@@ -22,6 +22,11 @@ You can use your work or school account with the web-based **My Apps** portal to
>[!Important] >This content is intended for users. If you're an administrator, you can find more information about how to set up and manage your cloud-based apps in the [Application Management Documentation](../manage-apps/index.yml).
+>
+> If you see an error signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myapplications.microsoft.com?tenantId=*your_domain_name*
+> - https://myapplications.microsoft.com?tenant=*your_tenant_ID*
## View your Groups information
@@ -35,7 +40,11 @@ If your administrator has given you permission to view the **Groups** tile, you
1. Sign in to your work or school account.
-2. Open your web browser and go to https://myapps.microsoft.com, or use the link provided by your organization. For example, you might be directed to a customized page for your organization, such as https://myapps.microsoft.com/contoso.com.
+2. Open your web browser and go to https://myapps.microsoft.com, or use the link provided by your organization. For example, you might be directed to a customized page for your organization, such as https://myapps.microsoft.com/contoso.com. If you are signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+
+ - https://myapplications.microsoft.com?tenantId=*your_domain_name*
+ - https://myapplications.microsoft.com?tenant=*your_tenant_ID*
+ The **Apps** page appears, showing all the cloud-based apps owned by your organization and available for you to use.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-apps-portal-end-user-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-apps-portal-end-user-troubleshoot.md
@@ -9,7 +9,7 @@
Previously updated : 03/21/2019 Last updated : 01/19/2021
@@ -27,9 +27,9 @@ If you're having problems installing the My Apps Secure Sign-in Extension:
- **Microsoft Edge.** Running on Windows 10 Anniversary Edition or later.
- - **Google Chrome.** Running on Windows 7 or later, and on Mac OS X or later.
+ - **Google Chrome.** Running on Windows 7 or later, and on macOS.
- - **Mozilla Firefox 26.0 or later.** Running on Windows XP SP2 or later, and on Mac OS X 10.6 or later.
+ - **Mozilla Firefox 26.0 or later.** Running on Windows XP SP2 or later, and on Mac OS X 10.6 or later.
- **Internet Explorer 11.** Running on Windows 7 or later (limited support).
@@ -43,6 +43,11 @@ If you're having problems installing the My Apps Secure Sign-in Extension:
If you're having trouble signing into the **My Apps** portal, you can try the following:
+- If you see an error signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+
+ - https://myapplications.microsoft.com?tenantId=*your_domain_name*
+ - https://myapplications.microsoft.com?tenant=*your_tenant_ID*
+ - Make sure you're using the right URL. It should be https://myapps.microsoft.com or a customized page for your organization, such as https://myapps.microsoft.com/contoso.com. - Make sure your password is correct and hasn't expired. For more info, see [Reset your work or school password](active-directory-passwords-update-your-own-password.md).
@@ -87,4 +92,4 @@ After you sign in to the **My Apps** portal, you can also update your profile an
- [View and update your groups-related information](my-apps-portal-end-user-groups.md). -- [Perform your own access reviews](my-apps-portal-end-user-access-reviews.md).
+- [Perform your own access reviews](my-apps-portal-end-user-access-reviews.md).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/user-help/my-apps-portal-user-collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-apps-portal-user-collections.md
@@ -8,7 +8,7 @@
Previously updated : 11/20/2020 Last updated : 01/19/2021
@@ -30,6 +30,12 @@ In this article, you'll learn how to:
- Show hidden collections - Delete collections
+>[!Note]
+>If you see an error while signing in with a personal Microsoft account, you can still sign in by using the domain name for your organization (such as contoso.com) or the **Tenant ID** of your organization from your administrator in one of the following URLs:
+>
+> - https://myapplications.microsoft.com?tenantId=*your_domain_name*
+> - https://myapplications.microsoft.com?tenant=*your_tenant_ID*
+ ## Create a collection 1. Go to [My Apps collections](https://myapplications.microsoft.com/?endUserCollections) and sign in using your work or school account.
@@ -99,6 +105,7 @@ To hide a collection:
To make a hidden collection visible: 1. Go to [My Apps collections](https://myapplications.microsoft.com/?endUserCollections) and sign in using your work or school account. 1. Open the page menu :::image type="content" source="media/my-apps-portal-user-collections/17-ellipsis-icon.png" alt-text="Select the ellipsis icon for the page-level menu":::, and then select **Manage**. :::image type="content" source="media/my-apps-portal-user-collections/13-manage-apps-again.png" alt-text="The page menu contains the Manage command to manage your apps":::
aks https://docs.microsoft.com/en-us/azure/aks/howto-deploy-java-liberty-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/howto-deploy-java-liberty-app.md
@@ -11,7 +11,14 @@ keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty
# Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
-This guide demonstrates how to run your Java, Java EE, [Jakarta EE](https://jakarta.ee/), or [MicroProfile](https://microprofile.io/) application on the Open Liberty or WebSphere Liberty runtime and then deploy the containerized application to an AKS cluster using the Open Liberty Operator. The Open Liberty Operator simplifies the deployment and management of applications running on Open Liberty Kubernetes clusters. You can also perform more advanced operations such as gathering traces and dumps using the operator. This article will walk you through preparing a Liberty application, building the application Docker image and running the containerized application on an AKS cluster. For more details on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more details on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
+This article demonstrates how to:
+* Run your Java, Java EE, Jakarta EE, or MicroProfile application on the Open Liberty or WebSphere Liberty runtime.
+* Build the application Docker image using Open Liberty container images.
+* Deploy the containerized application to an AKS cluster using the Open Liberty Operator.
+
+The Open Liberty Operator simplifies the deployment and management of applications running on Kubernetes clusters. With Open Liberty Operator, you can also perform more advanced operations, such as gathering traces and dumps.
+
+For more details on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more details on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
@@ -19,17 +26,20 @@ This guide demonstrates how to run your Java, Java EE, [Jakarta EE](https://jaka
* This article requires the latest version of Azure CLI. If using Azure Cloud Shell, the latest version is already installed. * If running the commands in this guide locally (instead of Azure Cloud Shell):
- * Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, macOS).
+ * Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, macOS, Windows Subsystem for Linux).
* Install a Java SE implementation (for example, [AdoptOpenJDK OpenJDK 8 LTS/OpenJ9](https://adoptopenjdk.net/?variant=openjdk8&jvmVariant=openj9)). * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher. * Install [Docker](https://docs.docker.com/get-docker/) for your OS. ## Create a resource group
-An Azure resource group is a logical group in which Azure resources are deployed and managed. Create a resource group, *java-liberty-project* using the [az group create](/cli/azure/group#az_group_create) command in the *eastus* location. It will be used for creating the Azure Container Registry (ACR) instance and the AKS cluster later.
+An Azure resource group is a logical group in which Azure resources are deployed and managed.
+
+Create a resource group called *java-liberty-project* using the [az group create](/cli/azure/group#az_group_create) command in the *eastus* location. This resource group will be used later for creating the Azure Container Registry (ACR) instance and the AKS cluster.
```azurecli-interactive
-az group create --name java-liberty-project --location eastus
+RESOURCE_GROUP_NAME=java-liberty-project
+az group create --name $RESOURCE_GROUP_NAME --location eastus
``` ## Create an ACR instance
@@ -37,7 +47,8 @@ az group create --name java-liberty-project --location eastus
Use the [az acr create](/cli/azure/acr#az_acr_create) command to create the ACR instance. The following example creates an ACR instance named *youruniqueacrname*. Make sure *youruniqueacrname* is unique within Azure. ```azurecli-interactive
-az acr create --resource-group java-liberty-project --name youruniqueacrname --sku Basic --admin-enabled
+REGISTRY_NAME=youruniqueacrname
+az acr create --resource-group $RESOURCE_GROUP_NAME --name $REGISTRY_NAME --sku Basic --admin-enabled
``` After a short time, you should see a JSON output that contains:
@@ -50,10 +61,9 @@ After a short time, you should see a JSON output that contains:
### Connect to the ACR instance
-To push an image to the ACR instance, you need to log into it first. Run the following commands to verify the connection:
+You will need to sign in to the ACR instance before you can push an image to it. Run the following commands to verify the connection:
```azurecli-interactive
-REGISTRY_NAME=youruniqueacrname
LOGIN_SERVER=$(az acr show -n $REGISTRY_NAME --query 'loginServer' -o tsv) USER_NAME=$(az acr credential show -n $REGISTRY_NAME --query 'username' -o tsv) PASSWORD=$(az acr credential show -n $REGISTRY_NAME --query 'passwords[0].value' -o tsv)
@@ -68,7 +78,8 @@ You should see `Login Succeeded` at the end of command output if you have logged
Use the [az aks create](/cli/azure/aks#az_aks_create) command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This will take several minutes to complete. ```azurecli-interactive
-az aks create --resource-group java-liberty-project --name myAKSCluster --node-count 1 --generate-ssh-keys --enable-managed-identity
+CLUSTER_NAME=myAKSCluster
+az aks create --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME --node-count 1 --generate-ssh-keys --enable-managed-identity
``` After a few minutes, the command completes and returns JSON-formatted information about the cluster, including the following:
@@ -91,7 +102,7 @@ az aks install-cli
To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az_aks_get_credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them. ```azurecli-interactive
-az aks get-credentials --resource-group java-liberty-project --name myAKSCluster --overwrite-existing
+az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME --overwrite-existing
``` > [!NOTE]
@@ -139,6 +150,7 @@ To deploy and run your Liberty application on the AKS cluster, containerize your
1. Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks). 1. Change directory to `javaee-app-simple-cluster` of your local clone. 1. Run `mvn clean package` to package the application.
+1. Run `mvn liberty:dev` to test the application. You should see `The defaultServer server is ready to run a smarter planet.` in the command output if successful. Use `CTRL-C` to stop the application.
1. Run one of the following commands to build the application image and push it to the ACR instance. * Build with the Open Liberty base image if you prefer to use Open Liberty as a lightweight open source Java™ runtime:
@@ -201,12 +213,12 @@ To monitor progress, use the [kubectl get service](https://kubernetes.io/docs/re
kubectl get service javaee-app-simple-cluster --watch NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-javaee-app-simple-cluster LoadBalancer 10.0.251.169 52.152.189.57 9080:31732/TCP 68s
+javaee-app-simple-cluster LoadBalancer 10.0.251.169 52.152.189.57 80:31732/TCP 68s
```
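As a sketch, the *EXTERNAL-IP* value can also be pulled out of the service listing with standard shell tools. The hard-coded sample line below mirrors the example output above; in practice you would pipe the `kubectl get service` command itself (or use `kubectl get service ... -o jsonpath`) instead of a fixed string:

```shell
# Sketch only: parse the EXTERNAL-IP column (4th field) from sample output.
SAMPLE="javaee-app-simple-cluster   LoadBalancer   10.0.251.169   52.152.189.57   80:31732/TCP   68s"
EXTERNAL_IP=$(echo "$SAMPLE" | awk '{print $4}')
echo "http://${EXTERNAL_IP}"
```

Once the address is no longer *pending*, that URL is the one to open in your browser.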
-Wait until the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
+Once the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
-Open a web browser to the external IP address and port of your service (`52.152.189.57:9080` for the above example) to see the application home page. You should see the pod name of your application replicas displayed at the top-left of the page. Wait for a few minutes and refresh the page, you will probably see a different pod name displayed due to load balancing provided by the AKS cluster.
+Open a web browser to the external IP address of your service (`52.152.189.57` for the above example) to see the application home page. You should see the pod name of your application replicas displayed at the top-left of the page. Wait for a few minutes and refresh the page to see a different pod name displayed due to load balancing provided by the AKS cluster.
:::image type="content" source="./media/howto-deploy-java-liberty-app/deploy-succeeded.png" alt-text="Java liberty application successfully deployed on AKS":::
@@ -215,10 +227,10 @@ Open a web browser to the external IP address and port of your service (`52.152.
## Clean up the resources
-To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, container registry, and all related resources.
+To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, container registry, and all related resources.
```azurecli-interactive
-az group delete --name java-liberty-project --yes --no-wait
+az group delete --name $RESOURCE_GROUP_NAME --yes --no-wait
``` ## Next steps
aks https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/intro-kubernetes.md
@@ -3,30 +3,37 @@ Title: Introduction to Azure Kubernetes Service
description: Learn the features and benefits of Azure Kubernetes Service to deploy and manage container-based applications in Azure. Previously updated : 05/06/2019 Last updated : 02/09/2021 # Azure Kubernetes Service (AKS)
-Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes masters are managed by Azure. You only manage and maintain the agent nodes. As a managed Kubernetes service, AKS is free - you only pay for the agent nodes within your clusters, not for the masters.
+Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading much of the complexity and operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks for you, like health monitoring and maintenance.
-You can create an AKS cluster in the Azure portal, with the Azure CLI, or template driven deployment options such as Resource Manager templates and Terraform. When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you. Additional features such as advanced networking, Azure Active Directory integration, and monitoring can also be configured during the deployment process. Windows Server containers are supported in AKS.
+Since the Kubernetes masters are managed by Azure, you only manage and maintain the agent nodes. Thus, as a managed Kubernetes service, AKS is free; you only pay for the agent nodes within your clusters, not for the masters.
+
+You can create an AKS cluster using the Azure portal, the Azure CLI, Azure PowerShell, or template-driven deployment options, such as Resource Manager templates and Terraform. When you deploy an AKS cluster, the Kubernetes master and all nodes are deployed and configured for you.
+Additional features such as advanced networking, Azure Active Directory integration, and monitoring can also be configured during the deployment process. Windows Server containers are supported in AKS.
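As a minimal sketch of that deployment flow (the resource group name, cluster name, and node count below are illustrative, not taken from this article), a cluster can be created with the Azure CLI:

```azurecli-interactive
# Create a resource group to hold the cluster (names are examples)
az group create --name myResourceGroup --location eastus

# Create an AKS cluster with a two-node pool and monitoring enabled
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --enable-addons monitoring \
    --generate-ssh-keys
```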
For more information on Kubernetes basics, see [Kubernetes core concepts for AKS][concepts-clusters-workloads].
-To get started, complete the AKS quickstart [in the Azure portal][aks-portal] or [with the Azure CLI][aks-cli].
+To get started, complete the AKS Quickstart [in the Azure portal][aks-portal] or [with the Azure CLI][aks-cli].
[!INCLUDE [azure-lighthouse-supported-service](../../includes/azure-lighthouse-supported-service.md)]

## Access, security, and monitoring
-For improved security and management, AKS lets you integrate with Azure Active Directory and use Kubernetes role-based access control (Kubernetes RBAC). You can also monitor the health of your cluster and resources.
+For improved security and management, AKS lets you integrate with Azure Active Directory (Azure AD) and:
+* Use Kubernetes role-based access control (Kubernetes RBAC).
+* Monitor the health of your cluster and resources.
### Identity and security management
-To limit access to cluster resources, AKS supports [Kubernetes role-based access control (Kubernetes RBAC)][kubernetes-rbac]. Kubernetes RBAC lets you control access to Kubernetes resources and namespaces, and permissions to those resources. You can also configure an AKS cluster to integrate with Azure Active Directory (AD). With Azure AD integration, Kubernetes access can be configured based on existing identity and group membership. Your existing Azure AD users and groups can be provided access to AKS resources and with an integrated sign-on experience.
+To limit access to cluster resources, AKS supports [Kubernetes RBAC][kubernetes-rbac]. Kubernetes RBAC lets you control access and permissions to Kubernetes resources and namespaces.
+
+You can also configure an AKS cluster to integrate with Azure AD. With Azure AD integration, you can configure Kubernetes access based on existing identity and group membership. Your existing Azure AD users and groups can be provided with an integrated sign-on experience and access to AKS resources.
For more information on identity, see [Access and identity options for AKS][concepts-identity].
@@ -34,13 +41,15 @@ To secure your AKS clusters, see [Integrate Azure Active Directory with AKS][aks
### Integrated logging and monitoring
-To understand how your AKS cluster and deployed applications are performing, Azure Monitor for container health collects memory and processor metrics from containers, nodes, and controllers. Container logs are available, and you can also [review the Kubernetes master logs][aks-master-logs]. This monitoring data is stored in an Azure Log Analytics workspace, and is available through the Azure portal, Azure CLI, or a REST endpoint.
+Azure Monitor for Container Health collects memory and processor performance metrics from containers, nodes, and controllers within your AKS cluster and deployed applications. You can review both the container logs and [the Kubernetes master logs][aks-master-logs]. This monitoring data is stored in an Azure Log Analytics workspace and is available through the Azure portal, Azure CLI, or a REST endpoint.
For more information, see [Monitor Azure Kubernetes Service container health][container-health].

## Clusters and nodes
-AKS nodes run on Azure virtual machines. You can connect storage to nodes and pods, upgrade cluster components, and use GPUs. AKS supports Kubernetes clusters that run multiple node pools to support mixed operating systems and Windows Server containers. Linux nodes run a customized Ubuntu OS image, and Windows Server nodes run a customized Windows Server 2019 OS image.
+AKS nodes run on Azure virtual machines (VMs). With AKS nodes, you can connect storage to nodes and pods, upgrade cluster components, and use GPUs. AKS supports Kubernetes clusters that run multiple node pools to support mixed operating systems and Windows Server containers.
+
+For more information regarding Kubernetes cluster, node, and node pool capabilities, see [Kubernetes core concepts for AKS][concepts-clusters-workloads].
### Cluster node and pod scaling
@@ -50,7 +59,7 @@ For more information, see [Scale an Azure Kubernetes Service (AKS) cluster][aks-
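As a sketch of manual node scaling (the cluster and resource group names are placeholders), the node count can be changed with the Azure CLI:

```azurecli-interactive
# Scale the cluster's node pool to three nodes
az aks scale \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3
```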
### Cluster node upgrades
-Azure Kubernetes Service offers multiple Kubernetes versions. As new versions become available in AKS, your cluster can be upgraded using the Azure portal or Azure CLI. During the upgrade process, nodes are carefully cordoned and drained to minimize disruption to running applications.
+AKS offers multiple Kubernetes versions. As new versions become available in AKS, your cluster can be upgraded using the Azure portal or Azure CLI. During the upgrade process, nodes are carefully cordoned and drained to minimize disruption to running applications.
To learn more about lifecycle versions, see [Supported Kubernetes versions in AKS][aks-supported versions]. For steps on how to upgrade, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
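For example, an upgrade might look like the following sketch (names and the target version are placeholders; check the available versions for your cluster first):

```azurecli-interactive
# List the Kubernetes versions available for this cluster
az aks get-upgrades \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --output table

# Upgrade to an available version; nodes are cordoned and drained during the process
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version 1.19.7
```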
@@ -62,13 +71,13 @@ For more information, see [Using GPUs on AKS][aks-gpu].
### Confidential computing nodes (public preview)
-AKS supports the creation of Intel SGX based confidential computing node pools (DCSv2 VMs). Confidential computing nodes allow containers to run in a hardware-based trusted execution environment (enclaves). Isolation between containers, combined with code integrity through attestation, can help with your defense-in-depth container security strategy. Confidential computing nodes supports both confidential containers (existing Docker apps) and enclave-aware containers.
+AKS supports the creation of Intel SGX-based, confidential computing node pools (DCSv2 VMs). Confidential computing nodes allow containers to run in a hardware-based, trusted execution environment (enclaves). Isolation between containers, combined with code integrity through attestation, can help with your defense-in-depth container security strategy. Confidential computing nodes support both confidential containers (existing Docker apps) and enclave-aware containers.
For more information, see [Confidential computing nodes on AKS][conf-com-node].

### Storage volume support
-To support application workloads, you can mount storage volumes for persistent data. Both static and dynamic volumes can be used. Depending on how many connected pods are to share the storage, you can use storage backed by either Azure Disks for single pod access, or Azure Files for multiple concurrent pod access.
+To support application workloads, you can mount storage volumes for persistent data. You can use both static and dynamic volumes. Depending on the number of connected pods expected to share the storage volumes, you can use storage backed by either Azure Disks for single pod access, or Azure Files for multiple concurrent pod access.
For more information, see [Storage options for applications in AKS][concepts-storage].
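For instance, a dynamically provisioned volume backed by Azure Disks can be requested with a persistent volume claim like this sketch (the claim name and size are illustrative; `managed-premium` is one of the storage classes AKS provides by default):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
    - ReadWriteOnce           # Azure Disks support access from a single pod
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
```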
@@ -76,25 +85,31 @@ Get started with dynamic persistent volumes using [Azure Disks][azure-disk] or [
## Virtual networks and ingress
-An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network, and can directly communicate with other pods in the cluster, and other nodes in the virtual network. Pods can also connect to other services in a peered virtual network, and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.
+An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network, and can directly communicate with other pods in the cluster and other nodes in the virtual network. Pods can also connect to other services in a peered virtual network and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.
For more information, see the [Network concepts for applications in AKS][aks-networking].
-To get started with ingress traffic, see [HTTP application routing][aks-http-routing].
- ### Ingress with HTTP application routing
-The HTTP application routing add-on makes it easy to access applications deployed to your AKS cluster. When enabled, the HTTP application routing solution configures an ingress controller in your AKS cluster. As applications are deployed, publicly accessible DNS names are auto configured. The HTTP application routing configures a DNS zone and integrates it with the AKS cluster. You can then deploy Kubernetes ingress resources as normal.
+The HTTP application routing add-on makes it easy to access applications deployed to your AKS cluster. When enabled, the HTTP application routing solution configures an ingress controller in your AKS cluster.
+
+As applications are deployed, publicly accessible DNS names are autoconfigured. The HTTP application routing sets up a DNS zone and integrates it with the AKS cluster. You can then deploy Kubernetes ingress resources as normal.
To get started with ingress traffic, see [HTTP application routing][aks-http-routing].

## Development tooling integration
-Kubernetes has a rich ecosystem of development and management tools such as Helm and the Kubernetes extension for Visual Studio Code. These tools work seamlessly with AKS.
+Kubernetes has a rich ecosystem of development and management tools that work seamlessly with AKS. These tools include Helm and the Kubernetes extension for Visual Studio Code.
+
+Additionally, Azure provides several tools that help streamline Kubernetes, such as Azure Dev Spaces and DevOps Starter.
-Additionally, Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams. With minimal configuration, you can run and debug containers directly in AKS. To get started, see [Azure Dev Spaces][azure-dev-spaces].
+Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams. With minimal configuration, you can run and debug containers directly in AKS. To get started, see [Azure Dev Spaces][azure-dev-spaces].
-DevOps Starter provides a simple solution for bringing existing code and Git repositories into Azure. DevOps Starter automatically creates Azure resources such as AKS, a release pipeline in Azure DevOps Services that includes a build pipeline for CI, sets up a release pipeline for CD, and then creates an Azure Application Insights resource for monitoring.
+DevOps Starter provides a simple solution for bringing existing code and Git repositories into Azure. DevOps Starter automatically:
+* Creates Azure resources (such as AKS).
+* Configures a release pipeline in Azure DevOps Services that includes a build pipeline for CI.
+* Sets up a release pipeline for CD.
+* Generates an Azure Application Insights resource for monitoring.
For more information, see [DevOps Starter][azure-devops].
@@ -106,15 +121,15 @@ To create a private image store, see [Azure Container Registry][acr-docs].
## Kubernetes certification
-Azure Kubernetes Service (AKS) has been CNCF certified as Kubernetes conformant.
+AKS has been CNCF-certified as Kubernetes conformant.
## Regulatory compliance
-Azure Kubernetes Service (AKS) is compliant with SOC, ISO, PCI DSS, and HIPAA. For more information, see [Overview of Microsoft Azure compliance][compliance-doc].
+AKS is compliant with SOC, ISO, PCI DSS, and HIPAA. For more information, see [Overview of Microsoft Azure compliance][compliance-doc].
## Next steps
-Learn more about deploying and managing AKS with the Azure CLI quickstart.
+Learn more about deploying and managing AKS with the Azure CLI Quickstart.
> [!div class="nextstepaction"]
> [AKS quickstart][aks-cli]
aks https://docs.microsoft.com/en-us/azure/aks/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
aks https://docs.microsoft.com/en-us/azure/aks/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
api-management https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-mutual-certificates-for-clients https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
@@ -91,6 +91,18 @@ The following example shows how to check the thumbprint of a client certificate
> Client certificate deadlock issue described in this [article](https://techcommunity.microsoft.com/t5/Networking-Blog/HTTPS-Client-Certificate-Request-freezes-when-the-Server-is/ba-p/339672) can manifest itself in several ways, e.g. requests freeze, requests result in `403 Forbidden` status code after timing out, `context.Request.Certificate` is `null`. This problem usually affects `POST` and `PUT` requests with content length of approximately 60KB or larger.
> To prevent this issue from occurring, turn on "Negotiate client certificate" setting for desired hostnames on the "Custom domains" blade as shown in the first image of this document. This feature is not available in the Consumption tier.
+## Certificate validation in self-hosted gateway
+
+The default API Management [self-hosted gateway](self-hosted-gateway-overview.md) image doesn't support validating server and client certificates using [CA root certificates](api-management-howto-ca-certificates.md) uploaded to an API Management instance. Clients presenting a custom certificate to the self-hosted gateway may experience slow responses, because certificate revocation list (CRL) validation can take a long time to time out on the gateway.
+
+As a workaround when running the gateway, you may configure the PKI IP address to point to the localhost address (127.0.0.1) instead of the API Management instance. This causes the CRL validation to fail quickly when the gateway attempts to validate the client certificate. To configure the gateway, add a DNS entry for the API Management instance to resolve to the localhost in the `/etc/hosts` file in the container. You can add this entry during gateway deployment:
+
+* For Docker deployment - Add the `--add-host <hostname>:127.0.0.1` parameter to the `docker run` command. For more information, see [Add entries to container hosts file](https://docs.docker.com/engine/reference/commandline/run/#add-entries-to-container-hosts-fileadd-host).
+
+* For Kubernetes deployment - Add a `hostAliases` specification to the `myGateway.yaml` configuration file. For more information, see [Adding entries to Pod /etc/hosts with Host Aliases](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/).
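As an illustrative sketch of the Docker variant of this workaround (the published ports, container name, environment file, and API Management hostname here are placeholders, not values from this article), the hosts-file entry is added at `docker run` time:

```console
docker run -d \
    -p 80:8080 -p 443:8081 \
    --name self-hosted-gateway \
    --env-file env.conf \
    --add-host contoso.management.azure-api.net:127.0.0.1 \
    mcr.microsoft.com/azure-api-management/gateway:latest
```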
+
+
+
## Next steps
api-management https://docs.microsoft.com/en-us/azure/api-management/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
api-management https://docs.microsoft.com/en-us/azure/api-management/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
api-management https://docs.microsoft.com/en-us/azure/api-management/self-hosted-gateway-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/self-hosted-gateway-overview.md
@@ -9,7 +9,7 @@ editor: ''
Previously updated : 04/26/2020 Last updated : 01/25/2021
@@ -39,13 +39,13 @@ Deploying self-hosted gateways into the same environments where the backend API
## Packaging and features
-The self-hosted gateway is a containerized, functionally-equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container](https://aka.ms/apim/sputnik/dhub) from the Microsoft Container Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer.
+The self-hosted gateway is a containerized, functionally equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container](https://aka.ms/apim/sputnik/dhub) from the Microsoft Container Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer.
The following functionality found in the managed gateways is **not available** in the self-hosted gateways:

- Azure Monitor logs
- Upstream (backend side) TLS version and cipher management
-- Validation of server and client certificates using [CA root certificates](api-management-howto-ca-certificates.md) uploaded to API Management service. To add support for custom CA, add a layer to the self-hosted gateway container image that installs the CA's root certificate.
+- Validation of server and client certificates using [CA root certificates](api-management-howto-ca-certificates.md) uploaded to API Management service. For more information, see [Certificate validation in self-hosted gateway](api-management-howto-mutual-certificates-for-clients.md#certificate-validation-in-self-hosted-gateway).
- Integration with the [Service Fabric](../service-fabric/service-fabric-api-management-overview.md)
- TLS session resumption
- Client certificate renegotiation. This means that for [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) to work, API consumers must present their certificates as part of the initial TLS handshake. To ensure that, enable the negotiate client certificate setting when configuring a self-hosted gateway custom hostname.
app-service https://docs.microsoft.com/en-us/azure/app-service/app-service-key-vault-references https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-key-vault-references.md
@@ -4,7 +4,7 @@ description: Learn how to set up Azure App Service and Azure Functions to use Az
Previously updated : 10/09/2019 Last updated : 02/05/2021
@@ -37,24 +37,24 @@ A Key Vault reference is of the form `@Microsoft.KeyVault({referenceString})`, w
> [!div class="mx-tdBreakAll"] > | Reference string | Description | > |--||
-> | SecretUri=_secretUri_ | The **SecretUri** should be the full data-plane URI of a secret in Key Vault, including a version, e.g., https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931 |
-> | VaultName=_vaultName_;SecretName=_secretName_;SecretVersion=_secretVersion_ | The **VaultName** should the name of your Key Vault resource. The **SecretName** should be the name of the target secret. The **SecretVersion** should be the version of the secret to use. |
+> | SecretUri=_secretUri_ | The **SecretUri** should be the full data-plane URI of a secret in Key Vault, optionally including a version, e.g., `https://myvault.vault.azure.net/secrets/mysecret/` or `https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931` |
+> | VaultName=_vaultName_;SecretName=_secretName_;SecretVersion=_secretVersion_ | The **VaultName** is required and should be the name of your Key Vault resource. The **SecretName** is required and should be the name of the target secret. The **SecretVersion** is optional; if present, it indicates the version of the secret to use. |
-> [!NOTE]
-> Versions are currently required. When rotating secrets, you will need to update the version in your application configuration.
For example, a complete reference would look like the following:
-
```
-@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/ec96f02080254f109c51a1f14cdb1931)
+@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret)
```

Alternatively:

```
-@Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret;SecretVersion=ec96f02080254f109c51a1f14cdb1931)
+@Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret)
```
+## Rotation
+
+If a version is not specified in the reference, then the app will use the latest version that exists in Key Vault. When newer versions become available, such as with a rotation event, the app will automatically update and begin using the latest version within one day. Any configuration changes made to the app will cause an immediate update to the latest versions of all referenced secrets.
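For example, a version-less reference (which therefore picks up rotated secrets automatically) can be set as an app setting with the Azure CLI; the app, group, and vault names here are placeholders:

```azurecli-interactive
az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name myWebApp \
    --settings MySecret="@Microsoft.KeyVault(VaultName=myvault;SecretName=mysecret)"
```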
## Source Application Settings from Key Vault
app-service https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-custom-domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-custom-domain.md
@@ -305,17 +305,20 @@ If you receive an HTTP 404 (Not Found) error when you browse to the URL of your
- The custom domain configured is missing an A record or a CNAME record.
- The browser client has cached the old IP address of your domain. Clear the cache, and test DNS resolution again. On a Windows machine, you clear the cache with `ipconfig /flushdns`.
-<a name="virtualdir" aria-hidden="true"></a>
-
## Migrate an active domain

To migrate a live site and its DNS domain name to App Service with no downtime, see [Migrate an active DNS name to Azure App Service](manage-custom-dns-migrate-domain.md).
+<a name="virtualdir" aria-hidden="true"></a>
+
## Redirect to a custom directory

By default, App Service directs web requests to the root directory of your app code. But certain web frameworks don't start in the root directory. For example, [Laravel](https://laravel.com/) starts in the `public` subdirectory. To continue the `contoso.com` DNS example, such an app is accessible at `http://contoso.com/public`, but you want to direct `http://contoso.com` to the `public` directory instead. This step doesn't involve DNS resolution but is about customizing the virtual directory.
-To do customize a virtual directory, select **Application settings** in the left pane of your web app page.
+To customize a virtual directory for Windows apps, select **Application settings** in the left pane of your web app page.
+
+> [!NOTE]
+> Linux apps don't have this page. To change the site root for Linux apps, see the language-specific configuration guides ([PHP](configure-language-php.md?pivots=platform-linux#change-site-root), for example).
At the bottom of the page, the root virtual directory `/` points to `site\wwwroot` by default, which is the root directory of your app code. Change it to point to `site\wwwroot\public` instead, for example, and save your changes.
app-service https://docs.microsoft.com/en-us/azure/app-service/faq-app-service-linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/faq-app-service-linux.md
@@ -105,7 +105,7 @@ Yes, during a Git deployment, Kudu should detect that you're deploying a PHP app
**I'm using my own custom container. I want the platform to mount an SMB share to the `/home/` directory.**
-If `WEBSITES_ENABLE_APP_SERVICE_STORAGE` setting is **unspecified** or set to *true*, the `/home/` directory **will be shared** across scale instances, and files written **will persist** across restarts. Explicitly setting `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to *false* will disable the mount.
+If `WEBSITES_ENABLE_APP_SERVICE_STORAGE` setting is **unspecified** or set to *false*, the `/home/` directory **will not be shared** across scale instances, and files written **will not persist** across restarts. Explicitly setting `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to *true* will enable the mount.
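As a sketch (the app and resource group names are placeholders), the mount can be enabled explicitly with the Azure CLI:

```azurecli-interactive
# Persist /home across restarts and share it across scale instances
az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name myWebApp \
    --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
```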
**My custom container takes a long time to start, and the platform restarts the container before it finishes starting up.**
app-service https://docs.microsoft.com/en-us/azure/app-service/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
app-service https://docs.microsoft.com/en-us/azure/app-service/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
automation https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management-remove https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-solution-vm-management-remove.md
@@ -0,0 +1,114 @@
+
+ Title: Remove Azure Automation Start/Stop VMs during off-hours overview
+description: This article describes how to remove the Start/Stop VMs during off-hours feature and unlink an Automation account from the Log Analytics workspace.
++ Last updated : 02/04/2021+++
+# Remove Start/Stop VMs during off-hours from Automation account
+
+After you enable the Start/Stop VMs during off-hours feature to manage the running state of your Azure VMs, you may decide to stop using it. Removing this feature can be done using one of the following methods based on the supported deployment models:
+
+* Delete the resource group containing the Automation account and linked Azure Monitor Log Analytics workspace, each dedicated to support this feature.
+* Unlink the Log Analytics workspace from the Automation account and delete the Automation account dedicated for this feature.
+* Delete the feature from an Automation account and linked workspace that are supporting other management and monitoring objectives.
+
+Deleting this feature only removes the associated runbooks; it doesn't delete the schedules or variables that were created during deployment or any custom-defined ones created after.
+
+## Delete the dedicated resource group
+
+1. Sign in to Azure at [https://portal.azure.com](https://portal.azure.com).
+
+2. Navigate to your Automation account, and select **Linked workspace** under **Related resources**.
+
+3. Select **Go to workspace**.
+
+4. Click **Solutions** under **General**.
+
+5. On the Solutions page, select **Start-Stop-VM[Workspace]**.
+
+6. On the **VMManagementSolution[Workspace]** page, select **Delete** from the menu.
+
+ ![Delete VM management feature](media/automation-solution-vm-management/vm-management-solution-delete.png)
+
+7. To delete the resource group created to only support Start/Stop VMs during off-hours, follow the steps outlined in the [Azure Resource Manager resource group and resource deletion](../azure-resource-manager/management/delete-resource-group.md) article.
+
+## Delete the Automation account
+
+To delete your Automation account dedicated to Start/Stop VMs during off-hours, perform the following steps.
+
+1. Sign in to Azure at [https://portal.azure.com](https://portal.azure.com).
+
+2. Navigate to your Automation account, and select **Linked workspace** under **Related resources**.
+
+3. Select **Go to workspace**.
+
+4. Click **Solutions** under **General**.
+
+5. On the Solutions page, select **Start-Stop-VM[Workspace]**.
+
+6. On the **VMManagementSolution[Workspace]** page, select **Delete** from the menu.
+
+7. While the information is verified and the feature is deleted, you can track the progress under **Notifications**, chosen from the menu. You're returned to the Solutions page after the removal process.
+
+### Unlink workspace from Automation account
+
+There are two options for unlinking the Log Analytics workspace from your Automation account. You can perform this process from the Automation account or from the linked workspace.
+
+To unlink from your Automation account, perform the following steps.
+
+1. In the Azure portal, select **Automation Accounts**.
+
+2. Open your Automation account and select **Linked workspace** under **Related Resources** on the left.
+
+3. On the **Unlink workspace** page, select **Unlink workspace** and respond to prompts.
+
+ ![Unlink workspace page](media/automation-solution-vm-management-remove/automation-unlink-workspace-blade.png)
+
+ While it attempts to unlink the Log Analytics workspace, you can track the progress under **Notifications** from the menu.
+
+To unlink from the workspace, perform the following steps.
+
+1. In the Azure portal, select **Log Analytics workspaces**.
+
+2. From the workspace, select **Automation Account** under **Related Resources**.
+
+3. On the Automation Account page, select **Unlink account** and respond to prompts.
+
+While it attempts to unlink the Automation account, you can track the progress under **Notifications** from the menu.
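+
+If you prefer to script the unlink, the Azure CLI's Log Analytics commands can remove the linked service as well. This is a sketch, not part of the original steps: it assumes the linked service is named `Automation` and uses placeholder resource names.
+
+```azurecli
+# Placeholder names; replace with your resource group and workspace.
+az monitor log-analytics workspace linked-service delete \
+    --resource-group myResourceGroup \
+    --workspace-name myWorkspace \
+    --name Automation
+```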
+
+### Delete Automation account
+
+1. In the Azure portal, select **Automation Accounts**.
+
+2. Open your Automation account and select **Delete** from the menu.
+
+While the information is verified and the account is deleted, you can track the progress under **Notifications** on the menu.
+
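+As an alternative to the portal steps, the Automation account can be deleted with the Azure CLI. This is a sketch, not part of the original article: it assumes the `automation` CLI extension is installed and uses placeholder names.
+
+```azurecli
+# Requires the automation extension; names are placeholders.
+az extension add --name automation
+az automation account delete \
+    --name myAutomationAccount \
+    --resource-group myResourceGroup
+```
+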
+## Delete the feature
+
+To delete Start/Stop VMs during off-hours from your Automation account, perform the following steps. The Automation account and Log Analytics workspace aren't deleted as part of this process. If you don't want to keep the Log Analytics workspace, you must manually delete it. For more information about deleting your workspace, see [Delete and recover Azure Log Analytics workspace](../azure-monitor/platform/delete-workspace.md).
+
+1. Navigate to your Automation account, and select **Linked workspace** under **Related resources**.
+
+2. Select **Go to workspace**.
+
+3. Click **Solutions** under **General**.
+
+4. On the Solutions page, select **Start-Stop-VM[Workspace]**.
+
+5. On the **VMManagementSolution[Workspace]** page, select **Delete** from the menu.
+
+ ![Delete VM management feature](media/automation-solution-vm-management/vm-management-solution-delete.png)
+
+6. In the Delete Solution window, confirm that you want to delete the feature.
+
+7. While the information is verified and the feature is deleted, you can track the progress under **Notifications** on the menu. You're returned to the Solutions page after the removal process.
+
+8. If you don't want to keep the [resources](automation-solution-vm-management.md#components) created by the feature, or resources that you created afterwards (such as variables and schedules), you must delete them manually from the account.
+
+## Next steps
+
+To re-enable this feature, see [Enable Start/Stop during off-hours](automation-solution-vm-management-enable.md).
automation https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-solution-vm-management.md
@@ -1,15 +1,15 @@
Title: Azure Automation Start/Stop VMs during off-hours overview
-description: This article describes the Start/Stop VMs during off-hours feature, which starts or stops VMs on a schedule and proactively monitors them from Azure Monitor logs.
+description: This article describes the Start/Stop VMs during off-hours feature, which starts or stops VMs on a schedule and proactively monitors them from Azure Monitor Logs.
Previously updated : 09/22/2020 Last updated : 02/04/2020 # Start/Stop VMs during off-hours overview
-The Start/Stop VMs during off-hours feature start or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/platform/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
+The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/platform/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
This feature uses the [Start-AzVM](/powershell/module/az.compute/start-azvm) cmdlet to start VMs and the [Stop-AzVM](/powershell/module/az.compute/stop-azvm) cmdlet to stop them.
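These are ordinary Az PowerShell cmdlets, so the same operations can be run manually. The resource names below are placeholders, not values from the article.

```powershell
# Start a VM, then stop (deallocate) it; resource names are illustrative.
Start-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'
Stop-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM' -Force
```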
@@ -34,9 +34,9 @@ The following are limitations with the current feature:
- The runbooks for the Start/Stop VMs during off hours feature work with an [Azure Run As account](./automation-security-overview.md#run-as-accounts). The Run As account is the preferred authentication method because it uses certificate authentication instead of a password that might expire or change frequently. -- The linked Automation account and Log Analytics workspace need to be in the same resource group.
+- An [Azure Monitor Log Analytics workspace](../azure-monitor/platform/design-logs-deployment.md) that stores the runbook job logs and job stream results for querying and analysis. The Automation account can be linked to a new or existing Log Analytics workspace, and both resources need to be in the same resource group.
-- We recommend that you use a separate Automation account for working with VMs enabled for the Start/Stop VMs during off-hours feature. Azure module versions are frequently upgraded, and their parameters might change. The feature isn't upgraded on the same cadence and it might not work with newer versions of the cmdlets that it uses. You're recommended to test module updates in a test Automation account before importing them into your production Automation account(s).
+We recommend that you use a separate Automation account for working with VMs enabled for the Start/Stop VMs during off-hours feature. Azure module versions are frequently upgraded, and their parameters might change. The feature isn't upgraded on the same cadence and might not work with newer versions of the cmdlets that it uses. Before importing updated modules into your production Automation account(s), we recommend that you import them into a test Automation account to verify there aren't any compatibility issues.
## Permissions
@@ -143,7 +143,7 @@ The following table lists the variables created in your Automation account. Only
|Internal_ResourceGroupName | The Automation account resource group name.| >[!NOTE]
->For the variable `External_WaitTimeForVMRetryInSeconds`, the default value has been updated from 600 to 2100.
+>For the variable `External_WaitTimeForVMRetryInSeconds`, the default value has been updated from 600 to 2100.
Across all scenarios, the variables `External_Start_ResourceGroupNames`, `External_Stop_ResourceGroupNames`, and `External_ExcludeVMNames` are necessary for targeting VMs, except when you supply comma-separated VM lists to the **AutoStop_CreateAlert_Parent**, **SequencedStartStop_Parent**, and **ScheduledStartStop_Parent** runbooks. That is, your VMs must belong to target resource groups for start and stop actions to occur. The logic works similarly to Azure Policy, in that you can target the subscription or resource group and have actions inherited by newly created VMs. This approach avoids having to maintain a separate schedule for every VM and lets you manage starts and stops at scale.
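For illustration, this targeting could be scripted by updating the variables with the Az PowerShell module. This is a sketch, not part of the original article; the account, resource group, and VM names are placeholders.

```powershell
# Point the feature at two resource groups and exclude one VM (placeholder values).
Set-AzAutomationVariable -AutomationAccountName 'myAutomationAccount' `
    -ResourceGroupName 'myResourceGroup' `
    -Name 'External_Start_ResourceGroupNames' `
    -Value 'rg-app1,rg-app2' -Encrypted $false
Set-AzAutomationVariable -AutomationAccountName 'myAutomationAccount' `
    -ResourceGroupName 'myResourceGroup' `
    -Name 'External_ExcludeVMNames' `
    -Value 'vm-db01' -Encrypted $false
```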
@@ -169,8 +169,8 @@ For use of the feature with classic VMs, you need a Classic Run As account, whic
If you have more than 20 VMs per cloud service, here are some recommendations:
-* Create multiple schedules with the parent runbook **ScheduledStartStop_Parent** and specifying 20 VMs per schedule.
-* In the schedule properties, use the `VMList` parameter to specify VM names as a comma-separated list (no whitespaces).
+* Create multiple schedules with the parent runbook **ScheduledStartStop_Parent**, specifying 20 VMs per schedule.
+* In the schedule properties, use the `VMList` parameter to specify VM names as a comma-separated list (no whitespace).
Otherwise, if the Automation job for this feature runs more than three hours, it's temporarily unloaded or stopped per the [fair share](automation-runbook-execution.md#fair-share) limit.
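A hedged sketch of linking one of those schedules with the Az PowerShell module follows. The schedule and VM names are placeholders, and the `Action` and `VMList` parameter names are assumptions based on the runbook's documented parameters.

```powershell
# Link an existing schedule to the parent runbook, passing a list of up to 20 VMs.
Register-AzAutomationScheduledRunbook -AutomationAccountName 'myAutomationAccount' `
    -ResourceGroupName 'myResourceGroup' `
    -RunbookName 'ScheduledStartStop_Parent' `
    -ScheduleName 'StartVMs-Group1' `
    -Parameters @{ Action = 'Start'; VMList = 'vm01,vm02,vm03' }
```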
@@ -178,10 +178,6 @@ Azure CSP subscriptions support only the Azure Resource Manager model. Non-Azure
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../includes/azure-monitor-log-analytics-rebrand.md)]
-## Enable the feature
-
-To begin using the feature, follow the steps in [Enable Start/Stop VMs during off-hours](automation-solution-vm-management-enable.md).
- ## View the feature Use one of the following mechanisms to access the enabled feature:
@@ -190,7 +186,7 @@ Use one of the following mechanisms to access the enabled feature:
* Navigate to the Log Analytics workspace linked to your Automation account. After selecting the workspace, choose **Solutions** from the left pane. On the Solutions page, select **Start-Stop-VM[workspace]** from the list.
-Selecting the feature displays the Start-Stop-VM[workspace] page. Here you can review important details, such as the information in the **StartStopVM** tile. As in your Log Analytics workspace, this tile displays a count and a graphical representation of the runbook jobs for the feature that have started and have finished successfully.
+Selecting the feature displays the **Start-Stop-VM[workspace]** page. Here you can review important details, such as the information in the **StartStopVM** tile. As in your Log Analytics workspace, this tile displays a count and a graphical representation of the runbook jobs for the feature that have started and have finished successfully.
![Automation Update Management page](media/automation-solution-vm-management/azure-portal-vmupdate-solution-01.png)
@@ -198,37 +194,7 @@ You can perform further analysis of the job records by clicking the donut tile.
## Update the feature
-If you've deployed a previous version of Start/Stop VMs during off-hours, delete it from your account before deploying an updated release. Follow the steps to [remove the feature](#remove-the-feature) and then follow the steps to [enable it](automation-solution-vm-management-enable.md).
-
-## Remove the feature
-
-If you no longer need to use the feature, you can delete it from the Automation account. Deleting the feature only removes the associated runbooks. It doesn't delete the schedules or variables that were created when the feature was added.
-
-To delete Start/Stop VMs during off-hours:
-
-1. From your Automation account, select **Linked workspace** under **Related resources**.
-
-2. Select **Go to workspace**.
-
-3. Click **Solutions** under **General**.
-
-4. On the Solutions page, select **Start-Stop-VM[Workspace]**.
-
-5. On the VMManagementSolution[Workspace] page, select **Delete** from the menu.<br><br> ![Delete VM management feature](media/automation-solution-vm-management/vm-management-solution-delete.png)
-
-6. In the Delete Solution window, confirm that you want to delete the feature.
-
-7. While the information is verified and the feature is deleted, you can track the progress under **Notifications**, chosen from the menu. You're returned to the Solutions page after the removal process.
-
-8. The Automation account and Log Analytics workspace aren't deleted as part of this process. If you don't want to keep the Log Analytics workspace, you must manually delete it from the Azure portal:
-
- 1. Search for and select **Log Analytics workspaces**.
-
- 2. On the Log Analytics workspace page, select the workspace.
-
- 3. Select **Delete** from the menu.
-
- 4. If you don't want to keep the Azure Automation account [feature components](#components), you can manually delete each.
+If you've deployed a previous version of Start/Stop VMs during off-hours, delete it from your account before deploying an updated release. Follow the steps to [remove the feature](automation-solution-vm-management-remove.md#delete-the-feature) and then follow the steps to [enable it](automation-solution-vm-management-enable.md).
## Next steps
automation https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-windows-hrw-install.md
@@ -32,8 +32,8 @@ The Hybrid Runbook Worker role requires the [Log Analytics agent](../azure-monit
The Hybrid Runbook Worker feature supports the following operating systems:
-* Windows Server 2019
-* Windows Server 2016, version 1709 and 1803
+* Windows Server 2019 (including Server Core)
+* Windows Server 2016, version 1709 and 1803 (excluding Server Core)
* Windows Server 2012, 2012 R2 * Windows Server 2008 SP2 (x64), 2008 R2 * Windows 10 Enterprise (including multi-session) and Pro
automation https://docs.microsoft.com/en-us/azure/automation/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
automation https://docs.microsoft.com/en-us/azure/automation/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
automation https://docs.microsoft.com/en-us/azure/automation/troubleshoot/onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/onboarding.md
@@ -143,7 +143,7 @@ Remove the resources for the following features from your workspace if you're us
After you remove the feature resources, you can unlink your workspace. It's important to clean up any existing artifacts from these features from your workspace and your Automation account: * For Update Management, remove **Update Deployments (Schedules)** from your Automation account.
-* For Start/Stop VMs during off-hours, remove any locks on feature components in your Automation account under **Settings** > **Locks**. For more information, see [Remove the feature](../automation-solution-vm-management.md#remove-the-feature).
+* For Start/Stop VMs during off-hours, remove any locks on feature components in your Automation account under **Settings** > **Locks**. For more information, see [Remove the feature](../automation-solution-vm-management-remove.md).
## <a name="mma-extension-failures"></a>Log Analytics for Windows extension failures
automation https://docs.microsoft.com/en-us/azure/automation/update-management/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
@@ -68,7 +68,7 @@ The following table lists the supported operating systems for update assessments
|Operating system |Notes | |||
-|Windows Server 2019 (Datacenter/Datacenter Core/Standard)<br>Windows Server 2016 (Datacenter/Datacenter Core/Standard)<br>Windows Server 2012 R2(Datacenter/Standard)<br>Windows Server 2012 |
+|Windows Server 2019 (Datacenter/Standard including Server Core)<br><br>Windows Server 2016 (Datacenter/Standard excluding Server Core)<br><br>Windows Server 2012 R2 (Datacenter/Standard)<br><br>Windows Server 2012 | |
|Windows Server 2008 R2 (RTM and SP1 Standard)| Update Management supports assessments and patching for this operating system. The [Hybrid Runbook Worker](../automation-windows-hrw-install.md) is supported for Windows Server 2008 R2. | |CentOS 6 and 7 (x64) | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). | |Red Hat Enterprise 6 and 7 (x64) | Linux agents require access to an update repository. |
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/connect-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/connect-cluster.md
@@ -14,34 +14,34 @@
# Connect an Azure Arc-enabled Kubernetes cluster (Preview)
-This document covers the process of connecting any Cloud Native Computing Foundation (CNCF) certified Kubernetes cluster such as AKS-engine on Azure, AKS-engine on Azure Stack Hub, GKE, EKS and VMware vSphere cluster to Azure Arc.
+This article covers the process of connecting any Cloud Native Computing Foundation (CNCF) certified Kubernetes cluster, such as AKS-engine on Azure, AKS-engine on Azure Stack Hub, GKE, EKS, and VMware vSphere clusters, to Azure Arc.
## Before you begin
-Verify you have the following requirements ready:
+Verify you have prepared the following prerequisites:
-* A Kubernetes cluster that is up and running. If you do not have an existing Kubernetes cluster, you can use one of the following guides to create a test cluster:
- * Create a Kubernetes cluster using [Kubernetes in Docker (kind)](https://kind.sigs.k8s.io/)
- * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes)
-* You'll need a kubeconfig file to access the cluster and cluster-admin role on the cluster for deployment of Arc enabled Kubernetes agents.
+* An up-and-running Kubernetes cluster. If you do not have an existing Kubernetes cluster, you can use one of the following guides to create a test cluster:
+ * Create a Kubernetes cluster using [Kubernetes in Docker (kind)](https://kind.sigs.k8s.io/).
+ * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes).
+* A kubeconfig file to access the cluster and cluster-admin role on the cluster for deployment of Arc-enabled Kubernetes agents.
* The user or service principal used with `az login` and `az connectedk8s connect` commands must have the 'Read' and 'Write' permissions on the 'Microsoft.Kubernetes/connectedclusters' resource type. The "Kubernetes Cluster - Azure Arc Onboarding" role has these permissions and can be used for role assignments on the user or service principal.
-* Helm 3 is required for the onboarding the cluster using connectedk8s extension. [Install the latest release of Helm 3](https://helm.sh/docs/intro/install) to meet this requirement.
-* Azure CLI version 2.15+ is required for installing the Azure Arc enabled Kubernetes CLI extensions. [Install Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest&preserve-view=true) or update to the latest version to ensure that you have Azure CLI version 2.15+.
-* Install the Arc enabled Kubernetes CLI extensions:
+* Helm 3, for onboarding the cluster using the connectedk8s extension. [Install the latest release of Helm 3](https://helm.sh/docs/intro/install) to meet this requirement.
+* Azure CLI version 2.15+ for installing the Azure Arc-enabled Kubernetes CLI extensions. [Install Azure CLI](/cli/azure/install-azure-cli?view=azure-cli-latest&preserve-view=true) or update to the latest version.
+* Install the Arc-enabled Kubernetes CLI extensions:
- Install the `connectedk8s` extension, which helps you connect Kubernetes clusters to Azure:
+ * Install the `connectedk8s` extension, which helps you connect Kubernetes clusters to Azure:
```azurecli az extension add --name connectedk8s ```
- Install the `k8sconfiguration` extension:
+ * Install the `k8sconfiguration` extension:
```azurecli az extension add --name k8sconfiguration ```
-
- If you want to update these extensions later, run the following commands:
+
+ * If you want to update these extensions later, run the following commands:
```azurecli az extension update --name connectedk8s
@@ -55,20 +55,20 @@ Verify you have the following requirements ready:
## Network requirements
-Azure Arc agents require the following protocols/ports/outbound URLs to function.
+Azure Arc agents require the following protocols/ports/outbound URLs to function:
-* TCP on port 443 --> `https://:443`
-* TCP on port 9418 --> `git://:9418`
+* TCP on port 443: `https://:443`
+* TCP on port 9418: `git://:9418`
| Endpoint (DNS) | Description | | | |
-| `https://management.azure.com` | Required for the agent to connect to Azure and register the cluster |
-| `https://eastus.dp.kubernetesconfiguration.azure.com`, `https://westeurope.dp.kubernetesconfiguration.azure.com` | Data plane endpoint for the agent to push status and fetch configuration information |
-| `https://login.microsoftonline.com` | Required to fetch and update Azure Resource Manager tokens |
-| `https://mcr.microsoft.com` | Required to pull container images for Azure Arc agents |
-| `https://eus.his.arc.azure.com`, `https://weu.his.arc.azure.com` | Required to pull system-assigned managed identity certificates |
+| `https://management.azure.com` | Required for the agent to connect to Azure and register the cluster. |
+| `https://eastus.dp.kubernetesconfiguration.azure.com`, `https://westeurope.dp.kubernetesconfiguration.azure.com` | Data plane endpoint for the agent to push status and fetch configuration information. |
+| `https://login.microsoftonline.com` | Required to fetch and update Azure Resource Manager tokens. |
+| `https://mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
+| `https://eus.his.arc.azure.com`, `https://weu.his.arc.azure.com` | Required to pull system-assigned managed identity certificates. |
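+
+Before onboarding, you can spot-check outbound reachability to these endpoints from a machine on the cluster's network. This is an illustrative check, not part of the original article; a timeout suggests the endpoint is blocked.
+
+```console
+curl -sSI https://management.azure.com | head -n 1
+curl -sSI https://mcr.microsoft.com | head -n 1
+```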
-## Register the two providers for Azure Arc enabled Kubernetes:
+## Register the two providers for Azure Arc-enabled Kubernetes:
```console az provider register --namespace Microsoft.Kubernetes
@@ -76,7 +76,7 @@ az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration ```
-Registration is an asynchronous process. Registration may take approximately 10 minutes. You can monitor the registration process with the following commands:
+Registration is an asynchronous process and may take approximately 10 minutes. You can monitor the registration process with the following commands:
```console az provider show -n Microsoft.Kubernetes -o table
@@ -106,10 +106,13 @@ eastus AzureArcTest
## Connect a cluster
-Next, we will connect our Kubernetes cluster to Azure. The workflow for `az connectedk8s connect` is as follows:
+Next, we will connect our Kubernetes cluster to Azure using `az connectedk8s connect`:
-1. Verify connectivity to your Kubernetes cluster: via `KUBECONFIG`, `~/.kube/config`, or `--kube-config`
-1. Deploy Azure Arc Agents for Kubernetes using Helm 3, into the `azure-arc` namespace
+1. Verify connectivity to your Kubernetes cluster via one of the following:
+ 1. `KUBECONFIG`
+ 1. `~/.kube/config`
+ 1. `--kube-config`
+1. Deploy Azure Arc agents for Kubernetes using Helm 3 into the `azure-arc` namespace:
```console az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
@@ -147,7 +150,7 @@ Helm release deployment succeeded
## Verify connected cluster
-List your connected clusters:
+Use the following command to list your connected clusters:
```console az connectedk8s list -g AzureArcTest -o table
@@ -162,22 +165,22 @@ Name Location ResourceGroup
AzureArcTest1 eastus AzureArcTest ```
-You can also view this resource on the [Azure portal](https://portal.azure.com/). Once you have the portal open in your browser, navigate to the resource group and the Azure Arc enabled Kubernetes resource based on the resource name and resource group name inputs used earlier in the `az connectedk8s connect` command.
+You can also view this resource on the [Azure portal](https://portal.azure.com/). Open the portal in your browser and navigate to the resource group and the Azure Arc-enabled Kubernetes resource, based on the resource name and resource group name inputs used earlier in the `az connectedk8s connect` command.
> [!NOTE]
-> After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes) to surface on the overview page of the Azure Arc enabled Kubernetes resource in Azure portal.
+> After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes, etc.) to surface on the overview page of the Azure Arc-enabled Kubernetes resource in Azure portal.
## Connect using an outbound proxy server
-If your cluster is behind an outbound proxy server, Azure CLI and the Arc enabled Kubernetes agents need to route their requests via the outbound proxy server. The following configuration enables that:
+If your cluster is behind an outbound proxy server, Azure CLI and the Arc-enabled Kubernetes agents need to route their requests via the outbound proxy server:
-1. Check the version of `connectedk8s` extension installed on your machine by running this command:
+1. Check the version of `connectedk8s` extension installed on your machine:
```console az -v ```
- You need `connectedk8s` extension version >= 0.2.5 to set up agents with outbound proxy. If you have version < 0.2.3 on your machine, follow the [update steps](#before-you-begin) to get the latest version of extension on your machine.
You need `connectedk8s` extension version 0.2.5+ to set up agents with outbound proxy. If you have an older version on your machine, follow the [update steps](#before-you-begin) to get the latest version of the extension on your machine.
2. Set the environment variables needed for Azure CLI to use the outbound proxy server:
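   These are the standard proxy environment variables that Azure CLI honors; a sketch with a placeholder proxy address, not values from the original article:

   ```shell
   # Placeholder proxy address; include NO_PROXY so local traffic bypasses the proxy.
   export HTTP_PROXY=http://proxy.example.com:3128
   export HTTPS_PROXY=http://proxy.example.com:3128
   export NO_PROXY=localhost,127.0.0.1
   ```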
@@ -204,13 +207,13 @@ If your cluster is behind an outbound proxy server, Azure CLI and the Arc enable
``` > [!NOTE]
-> 1. Specifying excludedCIDR under --proxy-skip-range is important to ensure in-cluster communication is not broken for the agents.
-> 2. While --proxy-http, --proxy-https and --proxy-skip-range are expected for most outbound proxy environments, --proxy-cert is only required if there are trusted certificates from proxy that need to be injected into trusted certificate store of agent pods.
-> 3. The above proxy specification is currently applied only for Arc agents and not for the flux pods used in sourceControlConfiguration. The Arc enabled Kubernetes team is actively working on this feature and it will be available soon.
+> 1. Specifying `excludedCIDR` under `--proxy-skip-range` is important to ensure in-cluster communication is not broken for the agents.
+> 2. While `--proxy-http`, `--proxy-https`, and `--proxy-skip-range` are expected for most outbound proxy environments, `--proxy-cert` is only required if trusted certificates from proxy need to be injected into trusted certificate store of agent pods.
+> 3. The above proxy specification is currently applied only for Arc agents and not for the flux pods used in sourceControlConfiguration. The Arc-enabled Kubernetes team is actively working on this feature and it will be available soon.
## Azure Arc agents for Kubernetes
-Azure Arc enabled Kubernetes deploys a few operators into the `azure-arc` namespace. You can view these deployments and pods here:
+Azure Arc-enabled Kubernetes deploys a few operators into the `azure-arc` namespace. You can view these deployments and pods using:
```console kubectl -n azure-arc get deployments,pods
@@ -238,28 +241,32 @@ pod/metrics-agent-58b765c8db-n5l7k 2/2 Running 0 16h
pod/resource-sync-agent-5cf85976c7-522p5 3/3 Running 0 16h ```
-Azure Arc enabled Kubernetes consists of a few agents (operators) that run in your cluster deployed to the `azure-arc` namespace.
+Azure Arc-enabled Kubernetes consists of a few agents (operators) that run in your cluster, deployed to the `azure-arc` namespace.
-* `deployment.apps/config-agent`: watches the connected cluster for source control configuration resources applied on the cluster and updates compliance state
-* `deployment.apps/controller-manager`: is an operator of operators and orchestrates interactions between Azure Arc components
-* `deployment.apps/metrics-agent`: collects metrics of other Arc agents to ensure that these agents are exhibiting optimal performance
-* `deployment.apps/cluster-metadata-operator`: gathers cluster metadata - cluster version, node count, and Azure Arc agent version
-* `deployment.apps/resource-sync-agent`: syncs the above mentioned cluster metadata to Azure
-* `deployment.apps/clusteridentityoperator`: Azure Arc enabled Kubernetes currently supports system assigned identity. clusteridentityoperator maintains the managed service identity (MSI) certificate used by other agents for communication with Azure.
-* `deployment.apps/flux-logs-agent`: collects logs from the flux operators deployed as a part of source control configuration
+| Agents (Operators) | Description |
+| | |
+| `deployment.apps/config-agent` | Watches the connected cluster for source control configuration resources applied on the cluster and updates compliance state. |
+| `deployment.apps/controller-manager` | An operator of operators that orchestrates interactions between Azure Arc components. |
+| `deployment.apps/metrics-agent` | Collects performance metrics of other Arc agents. |
+| `deployment.apps/cluster-metadata-operator` | Gathers cluster metadata, such as cluster version, node count, and Azure Arc agent version. |
+| `deployment.apps/resource-sync-agent` | Syncs the above-mentioned cluster metadata to Azure. |
+| `deployment.apps/clusteridentityoperator` | Azure Arc-enabled Kubernetes currently supports system-assigned identity. `clusteridentityoperator` maintains the managed service identity (MSI) certificate used by other agents for communication with Azure. |
+| `deployment.apps/flux-logs-agent` | Collects logs from the flux operators deployed as a part of source control configuration. |
## Delete a connected cluster You can delete a `Microsoft.Kubernetes/connectedcluster` resource using the Azure CLI or Azure portal.
-* **Deletion using Azure CLI**: The following Azure CLI command can be used to initiate deletion of the Azure Arc enabled Kubernetes resource.
+* **Deletion using Azure CLI**: Use the following Azure CLI command to initiate deletion of the Azure Arc-enabled Kubernetes resource.
```console az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest ```
- This command removes the `Microsoft.Kubernetes/connectedCluster` resource and any associated `sourcecontrolconfiguration` resources in Azure. The Azure CLI uses helm uninstall to remove the agents running on the cluster as well.
+ This command removes the `Microsoft.Kubernetes/connectedCluster` resource and any associated `sourcecontrolconfiguration` resources in Azure. The Azure CLI uses `helm uninstall` to remove the agents running on the cluster as well.
+
+* **Deletion on Azure portal**: Deletion of the Azure Arc-enabled Kubernetes resource on Azure portal deletes the `Microsoft.Kubernetes/connectedcluster` resource and any associated `sourcecontrolconfiguration` resources in Azure, but it *does not* remove the agents running on the cluster.
-* **Deletion on Azure portal**: Deletion of the Azure Arc enabled Kubernetes resource on Azure portal deletes the `Microsoft.Kubernetes/connectedcluster` resource and any associated `sourcecontrolconfiguration` resources in Azure, but it doesn't delete the agents running on the cluster. To delete the agents running on the cluster, run the following command.
+ To remove the agents running on the cluster, run the following command:
    ```console
    az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest
    ```
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Arc enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021 #
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/use-gitops-connected-cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/use-gitops-connected-cluster.md
@@ -144,17 +144,17 @@ To customize the configuration, here are more parameters you can use:
`--helm-operator-params` : *Optional* chart values for Helm operator (if enabled). For example, '--set helm.versions=v3'.
-`--helm-operator-chart-version` : *Optional* chart version for Helm operator (if enabled). Default: '1.2.0'.
+`--helm-operator-version` : *Optional* chart version for Helm operator (if enabled). Use '1.2.0' or greater. Default: '1.2.0'.
`--operator-namespace` : *Optional* name for the operator namespace. Default: 'default'. Max 23 characters.
-`--operator-params` : *Optional* parameters for operator. Must be given within single quotes. For example, ```--operator-params='--git-readonly --git-path=releases --sync-garbage-collection' ```
+`--operator-params` : *Optional* parameters for operator. Must be given within single quotes. For example, ```--operator-params='--git-readonly --sync-garbage-collection --git-branch=main' ```
Options supported in --operator-params | Option | Description | | - | - |
-| --git-branch | Branch of Git repo to use for Kubernetes manifests. Default is 'master'. |
+| --git-branch | Branch of Git repo to use for Kubernetes manifests. Default is 'master'. Newer repositories have root branch named 'main', in which case you need to set --git-branch=main. |
| --git-path | Relative path within the Git repo for Flux to locate Kubernetes manifests. | | --git-readonly | Git repo will be considered read-only; Flux will not attempt to write to it. | | --manifest-generation | If enabled, Flux will look for .flux.yaml and run Kustomize or other manifest generators. |
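Putting the options above together, a complete configuration command might look like the following. This is an illustrative sketch: the configuration name, cluster name, resource group, and repository URL are placeholders you would replace with your own values.

```console
az k8sconfiguration create \
    --name cluster-config \
    --resource-group AzureArcTest \
    --cluster-name AzureArcTest1 \
    --cluster-type connectedClusters \
    --operator-instance-name cluster-config \
    --operator-namespace cluster-config \
    --repository-url https://github.com/<your-org>/<your-repo> \
    --scope cluster \
    --operator-params '--git-readonly --sync-garbage-collection --git-branch=main'
```

Note the single quotes around `--operator-params`, and `--git-branch=main` for repositories whose root branch is not `master`.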
@@ -222,16 +222,13 @@ Command group 'k8sconfiguration' is in preview. It may be changed/removed in a f
} ```
-When the `sourceControlConfiguration` is created, a few things happen under the hood:
+When a `sourceControlConfiguration` is created or updated, a few things happen under the hood:
-1. The Azure Arc `config-agent` monitors Azure Resource Manager for new or updated configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations`)
-1. `config-agent` notices the new `Pending` configuration
-1. `config-agent` reads the configuration properties and prepares to deploy a managed instance of `flux`
- * `config-agent` creates the destination namespace
- * `config-agent` prepares a Kubernetes Service Account with the appropriate permission (`cluster` or `namespace` scope)
- * `config-agent` deploys an instance of `flux`
- * `flux` generates an SSH key and logs the public key (if using the option of SSH with Flux-generated keys)
-1. `config-agent` reports status back to the `sourceControlConfiguration` resource in Azure
+1. The Azure Arc `config-agent` is monitoring Azure Resource Manager for new or updated configurations (`Microsoft.KubernetesConfiguration/sourceControlConfigurations`) and notices the new `Pending` configuration.
+1. The `config-agent` reads the configuration properties and creates the destination namespace.
+1. The Azure Arc `controller-manager` prepares a Kubernetes Service Account with the appropriate permission (`cluster` or `namespace` scope) and then deploys an instance of `flux`.
+1. If using the option of SSH with Flux-generated keys, `flux` generates an SSH key and logs the public key.
+1. The `config-agent` reports status back to the `sourceControlConfiguration` resource in Azure.
While the provisioning process happens, the `sourceControlConfiguration` will move through a few state changes. Monitor progress with the `az k8sconfiguration show ...` command above:
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Arc enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Arc enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-cache-for-redis https://docs.microsoft.com/en-us/azure/azure-cache-for-redis/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-rabbitmq-output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-rabbitmq-output.md
@@ -203,7 +203,7 @@ def main(req: func.HttpRequest, outputMessage: func.Out[str]) -> func.HttpRespon
# [Java](#tab/java)
-The following example shows a Java function that sends a message to RabbitMQ queue when triggered by a TimerTrigger every 5 minutes.
+The following Java function uses the `@RabbitMQOutput` annotation from the [Java RabbitMQ types](https://mvnrepository.com/artifact/com.microsoft.azure.functions/azure-functions-java-library-rabbitmq) to describe the configuration for a RabbitMQ queue output binding. The function sends a message to the RabbitMQ queue when triggered by a TimerTrigger every 5 minutes.
```java @FunctionName("RabbitMQOutputExample")
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/java-troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-troubleshoot.md
@@ -165,7 +165,8 @@ Assuming you [set up your app for Application Insights][java], click Browse, sel
Yes, provided your server can send telemetry to the Application Insights portal through the public internet.
-In your firewall, you might have to open TCP ports 80 and 443 for outgoing traffic to dc.services.visualstudio.com and f5.services.visualstudio.com.
+You may need to [open some outgoing ports in your server's firewall](./ip-addresses.md#outgoing-ports)
+to allow the SDK to send data to the portal.
## Data retention **How long is data retained in the portal? Is it secure?**
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/sla-report https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/sla-report.md
@@ -0,0 +1,70 @@
+
+ Title: Downtime, SLA, and outage workbook - Application Insights
+description: Calculate and report SLA for Web Test through a single pane of glass across your Application Insights resources and Azure subscriptions.
+ Last updated : 02/08/2021
+# Downtime, SLA, and outages workbook
+
+This workbook introduces a simple way to calculate and report SLA (service-level agreement) for Web Tests through a single pane of glass across your Application Insights resources and Azure subscriptions. The Downtime and Outage report provides powerful pre-built queries and data visualizations to enhance your understanding of your customer's connectivity, typical application response time, and experienced downtime.
+
+The SLA workbook template is accessible through the workbook gallery in your Application Insights resource or through the availability tab by selecting **SLA Reports** at the top.
+## Parameter flexibility
+
+The parameters set in the workbook influence the rest of your report.
+`Subscriptions`, `App Insights Resources`, and `Web Test` parameters determine your high-level resource options. These parameters are based on Log Analytics queries and are used in every report query.
+
+`Failure Threshold` and `Outage Window` allow you to determine your own criteria for a service outage: for example, the criteria for an App Insights availability alert based on the failed location count over a chosen period. The typical threshold is three locations over a five-minute window.
+
+`Maintenance Period` enables you to select your typical maintenance frequency and `Maintenance Window` is a datetime selector for an example maintenance period. All data that occurs during the identified period will be ignored in your results.
+
+`Availability Target 9s` specifies your Target 9s objective from two 9s to five 9s.
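As an aside on what those targets mean in practice, here is a minimal sketch (the function name, the 30-day window, and the rounding are my own illustration, not the workbook's query):

```python
# Illustrative sketch: converting an "Availability Target 9s" choice into an
# allowed downtime budget per 30-day period. Not the workbook's internal logic.

def allowed_downtime_minutes(nines: int, period_days: int = 30) -> float:
    """Allowed downtime in minutes for a target of `nines` nines (3 -> 99.9%)."""
    availability = 1 - 10 ** (-nines)   # 2 nines -> 0.99, 5 nines -> 0.99999
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - availability)

print(round(allowed_downtime_minutes(3), 1))  # 99.9% over 30 days -> 43.2 minutes
```

For example, a three-9s (99.9%) target allows roughly 43 minutes of downtime in a 30-day month, while five 9s allows under half a minute.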
+
+## Overview page
+
+The overview page contains high-level information about your total SLA (excluding maintenance periods if defined), end-to-end outage instances, and application downtime. Outage instances are defined from when a test starts to fail until it is successful again, based on your outage parameters. If a test starts failing at 8:00 am and succeeds again at 10:00 am, then that entire period of data is considered the same outage.
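That outage definition — first failure through the next success — can be sketched as follows (the helper function and the sample timestamps are hypothetical, not the report's actual query logic):

```python
# Illustrative sketch: collapsing consecutive failing test results into single
# outage instances, per the definition above. Sample data is hypothetical.

def outages(results):
    """results: list of (timestamp, ok) sorted by time.
    Returns (start, end) spans from the first failing result to the next success."""
    spans, start = [], None
    for ts, ok in results:
        if not ok and start is None:
            start = ts                         # outage begins at first failure
        elif ok and start is not None:
            spans.append((start, ts))          # outage ends when a test succeeds
            start = None
    if start is not None:
        spans.append((start, results[-1][0]))  # still failing at end of window
    return spans

sample = [(7, True), (8, False), (9, False), (10, True), (11, True)]
print(outages(sample))  # [(8, 10)] -> one outage from 8:00 to 10:00
```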
+You can also investigate your longest outage that occurred over your reporting period.
+
+Some tests are linkable back to their Application Insights resource for further investigation but that is only possible in the [Workspace-based Application Insights resource](create-workspace-resource.md).
+
+## Downtime, outages, and failures
+
+The **Outages and Downtime** tab has information on total outage instances and total downtime broken down by test. The **Failures by Location** tab has a geo-map of failed testing locations to help identify potential problem connection areas.
+## Edit the report
+
+You can edit the report like any other [Azure Monitor Workbook](../platform/workbooks-overview.md). You can customize the queries or visualizations based on your team's needs.
+### Log Analytics
+
+The queries can all be run in [Log Analytics](../log-query/log-analytics-overview.md) and used in other reports or dashboards. Remove the parameter restriction and reuse the core query.
+## Access and sharing
+
+The report can be shared with your teams and leadership, or pinned to a dashboard for further use. The user needs to have read permission/access to the Application Insights resource where the actual workbook is stored.
+## Next steps
+
+- [Log Analytics query optimization tips](../log-query/query-optimization.md).
+- Learn how to [create a chart in workbooks](../platform/workbooks-chart-visualizations.md).
+- Learn how to monitor your website with [availability tests](monitor-web-app-availability.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/app/work-item-integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/work-item-integration.md
@@ -0,0 +1,56 @@
+
+ Title: Work Item Integration (preview) - Application Insights
+description: Learn how to create work items in GitHub or Azure DevOps with Application Insights data embedded in them.
+ Last updated : 02/09/2021
+# Work Item Integration (preview)
+
+Work item integration functionality allows you to easily create work items in GitHub or Azure DevOps that have relevant Application Insights data embedded in them.
+
+> [!IMPORTANT]
+> Work Item integration is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Create and configure a work item template
+
+1. To create a work item template, go to your Application Insights resource and on the left under *Configure* select **Work Items**, then at the top select **Create a new template**.
+
+ :::image type="content" source="./media/work-item-integration/create-work-item-template.png" alt-text=" Screenshot of the Work Items tab with create a new template selected." lightbox="./media/work-item-integration/create-work-item-template.png":::
+
+ You can also create a work item template from the End-to-end transaction details tab, if no template currently exists. Select an event and on the right select **Create a work item**, then **Start with a workbook template**.
+
+ :::image type="content" source="./media/work-item-integration/create-template-from-transaction-details.png" alt-text=" Screenshot of end-to-end transaction details tab with create a work item, start with a workbook template selected." lightbox="./media/work-item-integration/create-template-from-transaction-details.png":::
+
+2. After you select **create a new template**, you can choose your tracking systems, name your workbook, link to your selected tracking system, and choose a region in which to store the template (the default is the region where your Application Insights resource is located). The URL parameters are the default URL for your repository, for example, `https://github.com/myusername/reponame` or `https://mydevops.visualstudio.com/myproject`.
+
+ :::image type="content" source="./media/work-item-integration/create-workbook.png" alt-text=" Screenshot of create a new work item workbook template.":::
+
+## Create a work item
+
+ You can access your new template from any End-to-end transaction details view, which you can open from the Performance, Failures, Availability, or other tabs.
+
+1. To create a work item, go to End-to-end transaction details, select an event, select **Create work item**, and choose your work item template.
+
+ :::image type="content" source="./media/work-item-integration/create-work-item.png" alt-text=" Screenshot of end to end transaction details tab with create work item selected." lightbox="./media/work-item-integration/create-work-item.png":::
+
+1. A new tab in your browser will open to your selected tracking system. In Azure DevOps, you can create a bug or task, and in GitHub you can create a new issue in your repository. A new work item is automatically created with contextual information provided by Application Insights.
+
+ :::image type="content" source="./media/work-item-integration/github-work-item.png" alt-text=" Screenshot of automatically created GitHub issue" lightbox="./media/work-item-integration/github-work-item.png":::
+
+ :::image type="content" source="./media/work-item-integration/azure-devops-work-item.png" alt-text=" Screenshot of automatically created bug in Azure DevOps." lightbox="./media/work-item-integration/azure-devops-work-item.png":::
+
+## Edit a template
+
+To edit your template, go to the **Work Items** tab under *Configure* and select the pencil icon next to the workbook you would like to update.
+Select edit ![edit icon](./medi). The work item information is generated using the Kusto Query Language. You can modify the queries to add more context essential to your team. When you are done editing, save the workbook by selecting the save icon ![save icon](./media/work-item-integration/save-icon.png) in the top toolbar.
+You can create more than one work item configuration and have a custom workbook to meet each scenario. The workbooks can also be deployed via Azure Resource Manager, ensuring standard implementations across your environments.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/faq.md
@@ -77,10 +77,12 @@ Azure Data Explorer is a fast and highly scalable data exploration service for l
### How do I retrieve log data? All data is retrieved from a Log Analytics workspace using a log query written using Kusto Query Language (KQL). You can write your own queries or use solutions and insights that include log queries for a particular application or service. See [Overview of log queries in Azure Monitor](log-query/log-query-overview.md).
- p
+ ### Can I delete data from a Log Analytics workspace? Data is removed from a workspace according to its [retention period](platform/manage-cost-storage.md#change-the-data-retention-period). You can delete specific data for privacy or compliance reasons. See [How to export and delete private data](platform/personal-data-mgmt.md#how-to-export-and-delete-private-data) for more information.
+### Is Log Analytics storage immutable?
+Data in database storage cannot be altered once ingested but can be deleted via [*purge* API path for deleting private data](platform/personal-data-mgmt.md#delete). Although data cannot be altered, some certifications require that data is kept immutable and cannot be changed or deleted in storage. Data immutability can be achieved using [data export](platform/logs-data-export.md) to a storage account that is configured as [immutable storage](../storage/blobs/storage-blob-immutability-policies-manage.md).
### What is a Log Analytics workspace? All log data collected by Azure Monitor is stored in a Log Analytics workspace. A workspace is essentially a container where log data is collected from a variety of sources. You may have a single Log Analytics workspace for all your monitoring data or may have requirements for multiple workspaces. See [Designing your Azure Monitor Logs deployment](platform/design-logs-deployment.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/data-security.md
@@ -170,6 +170,8 @@ The Log Analytics service ensures that incoming data is from a trusted source by
The retention period of collected data stored in the database depends on the selected pricing plan. For the *Free* tier, collected data is available for seven days. For the *Paid* tier, collected data is available for 31 days by default, but can be extended to 730 days. Data is stored encrypted at rest in Azure storage, to ensure data confidentiality, and the data is replicated within the local region using locally redundant storage (LRS). The last two weeks of data are also stored in SSD-based cache and this cache is encrypted.
+Data in database storage cannot be altered once ingested but can be deleted via [*purge* API path](personal-data-mgmt.md#delete). Although data cannot be altered, some certifications require that data is kept immutable and cannot be changed or deleted in storage. Data immutability can be achieved using [data export](logs-data-export.md) to a storage account that is configured as [immutable storage](../../storage/blobs/storage-blob-immutability-policies-manage.md).
+ ## 4. Use Log Analytics to access the data To access your Log Analytics workspace, you sign into the Azure portal using the organizational account or Microsoft account that you set up previously. All traffic between the portal and Log Analytics service is sent over a secure HTTPS channel. When using the portal, a session ID is generated on the user client (web browser) and data is stored in a local cache until the session is terminated. When terminated, the cache is deleted. Client-side cookies, which do not contain personally identifiable information, are not automatically removed. Session cookies are marked HTTPOnly and are secured. After a pre-determined idle period, the Azure portal session is terminated.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-connections-cherwell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-connections-cherwell.md
@@ -14,7 +14,8 @@ Last updated 12/21/2020
This article provides information about how to configure the connection between your Cherwell instance and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items. > [!NOTE]
-> We propose our Cherwell and Provance customers to use [Webhook action](./action-groups.md#webhook) to Cherwell and Provance endpoint as another solution to the integration.
+> As of 1-Oct-2020 Cherwell ITSM integration with Azure Alert will no longer be enabled for new customers. New ITSM Connections will not be supported.
+> Existing ITSM connections will be supported.
The following sections provide details about how to connect your Cherwell product to ITSMC in Azure.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-connections-provance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-connections-provance.md
@@ -14,7 +14,8 @@ Last updated 12/21/2020
This article provides information about how to configure the connection between your Provance instance and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items. > [!NOTE]
-> We propose our Cherwell and Provance customers to use [Webhook action](./action-groups.md#webhook) to Cherwell and Provance endpoint as another solution to the integration.
+> As of 1-Oct-2020 Provance ITSM integration with Azure Alert will no longer be enabled for new customers. New ITSM Connections will not be supported.
+> Existing ITSM connections will be supported.
The following sections provide details about how to connect your Provance product to ITSMC in Azure.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-overview.md
@@ -28,7 +28,8 @@ ITSMC supports connections with the following ITSM tools:
- Cherwell >[!NOTE]
-> We propose our Cherwell and Provance customers to use [Webhook action](./action-groups.md#webhook) to Cherwell and Provance endpoint as another solution to the integration.
+> As of 1-Oct-2020 Cherwell and Provance ITSM integrations with Azure Alert will no longer be enabled for new customers. New ITSM Connections will not be supported.
+> Existing ITSM connections will be supported.
With ITSMC, you can:
@@ -46,4 +47,4 @@ You can start using ITSMC by completing the following steps:
## Next steps
-* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
+* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md)
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/manage-cost-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/manage-cost-storage.md
@@ -11,7 +11,7 @@
na Previously updated : 12/24/2020 Last updated : 01/31/2021
@@ -35,7 +35,7 @@ The default pricing for Log Analytics is a **Pay-As-You-Go** model based on data
In addition to the Pay-As-You-Go model, Log Analytics has **Capacity Reservation** tiers which enable you to save as much as 25% compared to the Pay-As-You-Go price. The capacity reservation pricing enables you to buy a reservation starting at 100 GB/day. Any usage above the reservation level will be billed at the Pay-As-You-Go rate. The Capacity Reservation tiers have a 31-day commitment period. During the commitment period, you can change to a higher level Capacity Reservation tier (which will restart the 31-day commitment period), but you cannot move back to Pay-As-You-Go or to a lower Capacity Reservation tier until after the commitment period is finished. Billing for the Capacity Reservation tiers is done on a daily basis. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about Log Analytics Pay-As-You-Go and Capacity Reservation pricing.
-In all pricing tiers, an event's data size is calculated from a string representation of the properties which are stored in Log Analytics for this event, whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the AzureActivity, Heartbeat and Usage types. To determine whether an event was excluded from billing for data ingestion, you can use the `_IsBillable` property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (1.0E9 bytes).
+In all pricing tiers, an event's data size is calculated from a string representation of the properties which are stored in Log Analytics for this event, whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the AzureActivity, Heartbeat and Usage types. To determine whether an event was excluded from billing for data ingestion, you can use the `_IsBillable` property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (1.0E9 bytes).
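The size rule above can be sketched roughly as follows. The excluded property list matches the article, but the byte accounting is a simplification of the service's actual calculation, and the function name and sample event are my own illustration:

```python
# Rough sketch of the billed-size rule: sum the string lengths of an event's
# stored properties, skipping the standard columns excluded from billing.
# Simplified illustration only, not the service's exact algorithm.

EXCLUDED = {"_ResourceId", "_SubscriptionId", "_ItemId",
            "_IsBillable", "_BilledSize", "Type"}

def billed_size_bytes(event: dict) -> int:
    return sum(len(str(k)) + len(str(v))
               for k, v in event.items() if k not in EXCLUDED)

event = {"Type": "Heartbeat", "_ResourceId": "/subscriptions/abc", "Computer": "vm01"}
print(billed_size_bytes(event))  # only "Computer" + "vm01" count: 8 + 4 = 12
```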
Also, note that some solutions, such as [Azure Security Center](https://azure.microsoft.com/pricing/details/security-center/), [Azure Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/) and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
@@ -61,11 +61,11 @@ If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor prici
If you're using Azure Monitor Logs now, it's easy to understand what the costs are likely be based on recent usage patterns. To do this, use **Log Analytics Usage and Estimated Costs** to review and analyze data usage. This shows how much data is collected by each solution, how much data is being retained and an estimate of your costs based on the amount of data ingested and any additional retention beyond the included amount.
-![Usage and estimated costs](media/manage-cost-storage/usage-estimated-cost-dashboard-01.png)
To explore your data in more detail, click on the icon at the top right of either of the charts on the **Usage and Estimated Costs** page. Now you can work with this query to explore more details of your usage.
-![Logs view](media/manage-cost-storage/logs.png)
From the **Usage and Estimated Costs** page you can review your data volume for the month. This includes all the billable data received and retained in your Log Analytics workspace.
@@ -86,8 +86,8 @@ To change the Log Analytics pricing tier of your workspace,
2. Review the estimated costs for each of the pricing tiers. This estimate is based on the last 31 days of usage, so this cost estimate relies on the last 31 days being representative of your typical usage. In the example below you can see how, based on the data patterns from the last 31 days, this workspace would cost less in the Pay-As-You-Go tier (#1) compared to the 100 GB/day Capacity Reservation tier (#2).
- ![Pricing tiers](media/manage-cost-storage/pricing-tier-estimated-costs.png)
-
+
3. After reviewing the estimated costs based on the last 31 days of usage, if you decide to change the pricing tier, click **Select**. You can also [set the pricing tier via Azure Resource Manager](../samples/resource-manager-workspace.md) using the `sku` parameter (`pricingTier` in the Azure Resource Manager template).
@@ -128,7 +128,7 @@ None of the legacy pricing tiers has regional-based pricing.
## Change the data retention period
-The following steps describe how to configure how long log data is kept by in your workspace. Data retention at the workspace level can be configured from 30 to 730 days (2 years) for all workspaces unless they are using the legacy Free pricing tier.[Learn more](https://azure.microsoft.com/pricing/details/monitor/) about pricing for longer data retention. Retention for individual data types can be set as low as 4 days.
+The following steps describe how to configure how long log data is kept in your workspace. Data retention at the workspace level can be configured from 30 to 730 days (2 years) for all workspaces unless they are using the legacy Free pricing tier. Retention for individual data types can be set as low as 4 days. [Learn more](https://azure.microsoft.com/pricing/details/monitor/) about pricing for longer data retention. To retain data longer than 730 days, consider using [Log Analytics workspace data export](logs-data-export.md).
### Workspace level default retention
@@ -138,11 +138,11 @@ To set the default retention for your workspace,
2. On the **Usage and estimated costs** page, click **Data Retention** from the top of the page. 3. On the pane, move the slider to increase or decrease the number of days and then click **OK**. If you are on the *free* tier, you will not be able to modify the data retention period and you need to upgrade to the paid tier in order to control this setting.
- ![Change workspace data retention setting](media/manage-cost-storage/manage-cost-change-retention-01.png)
When the retention is lowered, there is a several day grace period before the data older than the new retention setting is removed.
-The retention can also be [set via Azure Resource Manager](../samples/resource-manager-workspace.md) using the `retentionInDays` parameter. When you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter (eliminating the several day grace period). This may be useful for compliance-related scenarios where immediate data removal is imperative. This immediate purge functionality is only exposed via Azure Resource Manager.
+The **Data Retention** page allows retention settings of 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. If another setting is required, it can be configured with [Azure Resource Manager](../samples/resource-manager-workspace.md) using the `retentionInDays` parameter. When you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter (eliminating the several day grace period). This may be useful for compliance-related scenarios where immediate data removal is imperative. This immediate purge functionality is only exposed via Azure Resource Manager.
Workspaces with 30 days retention may actually retain data for 31 days. If it is imperative that data be kept for only 30 days, use Azure Resource Manager to set the retention to 30 days and set the `immediatePurgeDataOn30Days` parameter.
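A sketch of the Resource Manager settings described above, in a workspace resource fragment (property placement is an assumption based on the workspace template schema; values are examples):

```json
{
  "type": "Microsoft.OperationalInsights/workspaces",
  "apiVersion": "2020-08-01",
  "name": "[parameters('workspaceName')]",
  "location": "[parameters('location')]",
  "properties": {
    "retentionInDays": 30,
    "features": {
      "immediatePurgeDataOn30Days": true
    }
  }
}
```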
@@ -226,7 +226,7 @@ The following steps describe how to configure a limit to manage the volume of da
2. On the **Usage and estimated costs** page for the selected workspace, click **Data Cap** from the top of the page.
3. Daily cap is **OFF** by default; click **ON** to enable it, and then set the data volume limit in GB/day.
- ![Log Analytics configure data limit](media/manage-cost-storage/set-daily-volume-cap-01.png)
The daily cap can be configured via ARM by setting the `dailyQuotaGb` parameter under `WorkspaceCapping` as described at [Workspaces - Create Or Update](/rest/api/loganalytics/workspaces/createorupdate#workspacecapping).
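For illustration, the `dailyQuotaGb` setting might appear in a workspace resource like this (a sketch based on the `WorkspaceCapping` structure named above; the 10 GB/day value is an example assumption):

```json
"properties": {
  "workspaceCapping": {
    "dailyQuotaGb": 10
  }
}
```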
@@ -241,9 +241,11 @@ Usage
| extend TimeGenerated=datetime_add("hour",-1*DailyCapResetHour,TimeGenerated) | where TimeGenerated > startofday(ago(31d)) | where IsBillable
-| summarize IngestedGbBetweenDailyCapResets=sum(_BilledSize)/1000. by day=bin(TimeGenerated, 1d) | render areachart
+| summarize IngestedGbBetweenDailyCapResets=sum(Quantity)/1000. by day=bin(TimeGenerated, 1d) | render areachart
```
+(In the Usage data type, the units of `Quantity` are in MB.)
+
### Alert when Daily Cap reached

While we present a visual cue in the Azure portal when your data limit threshold is met, this behavior doesn't necessarily align to how you manage operational issues requiring immediate attention. To receive an alert notification, you can create a new alert rule in Azure Monitor. To learn more, see [how to create, view, and manage alerts](alerts-metric.md).
@@ -417,9 +419,10 @@ find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillabl
For data from nodes hosted in Azure, you can get the **size** of ingested data __per Azure subscription__ by using the `_SubscriptionId` property as:

```kusto
-find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable, _SubscriptionId
+find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true
-| summarize BillableDataBytes = sum(_BilledSize) by _SubscriptionId | sort by BillableDataBytes nulls last
+| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
+| summarize BillableDataBytes = sum(BillableDataBytes) by _SubscriptionId | sort by BillableDataBytes nulls last
``` To get data volume by resource group, you can parse `_ResourceId`:
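The resource-group query itself is not shown in this digest; a sketch of one way to do the parsing (assuming the standard `_ResourceId` path format `/subscriptions/{id}/resourcegroups/{name}/...`, where the resource group is the fifth path segment):

```kusto
find where TimeGenerated > ago(24h) project _ResourceId, _BilledSize, _IsBillable
| where _IsBillable == true
| summarize BillableDataBytes = sum(_BilledSize) by _ResourceId
| extend resourceGroup = tostring(split(_ResourceId, "/")[4])
| summarize BillableDataBytes = sum(BillableDataBytes) by resourceGroup
| sort by BillableDataBytes nulls last
```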
@@ -482,6 +485,9 @@ Some suggestions for reducing the volume of logs collected include:
| Syslog | Change [syslog configuration](data-sources-syslog.md) to: <br> - Reduce the number of facilities collected <br> - Collect only required event levels. For example, do not collect *Info* and *Debug* level events | | AzureDiagnostics | Change [resource log collection](./diagnostic-settings.md#create-in-azure-portal) to: <br> - Reduce the number of resources sending logs to Log Analytics <br> - Collect only required logs | | Solution data from computers that don't need the solution | Use [solution targeting](../insights/solution-targeting.md) to collect data from only required groups of computers. |
+| Application Insights | Review options for [managing Application Insights data volume](https://docs.microsoft.com/azure/azure-monitor/app/pricing#managing-your-data-volume) |
+| [SQL Analytics](https://docs.microsoft.com/azure/azure-monitor/insights/azure-sql) | Use [Set-AzSqlServerAudit](https://docs.microsoft.com/powershell/module/az.sql/set-azsqlserveraudit) to tune the auditing settings. |
+| Azure Sentinel | Review any [Sentinel data sources](https://docs.microsoft.com/azure/sentinel/connect-data-sources) that you recently enabled as sources of additional data volume. |
### Getting nodes as billed in the Per Node pricing tier
@@ -663,4 +669,4 @@ There are some additional Log Analytics limits, some of which depend on the Log
- Change [performance counter configuration](data-sources-performance-counters.md). - To modify your event collection settings, review [event log configuration](data-sources-windows-events.md). - To modify your syslog collection settings, review [syslog configuration](data-sources-syslog.md).-- To modify your syslog collection settings, review [syslog configuration](data-sources-syslog.md).
+- To modify your syslog collection settings, review [syslog configuration](data-sources-syslog.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/samples/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/samples/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-portal https://docs.microsoft.com/en-us/azure/azure-portal/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/custom-providers/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-applications/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/region-move-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/region-move-support.md
@@ -146,7 +146,7 @@ Jump to a resource provider namespace:
> - [Microsoft.Portal](#microsoftportal) > - [Microsoft.PowerBI](#microsoftpowerbi) > - [Microsoft.PowerBIDedicated](#microsoftpowerbidedicated)
-> - [Microsoft.ProjectBabylon](#microsoftprojectbabylon)
+> - [Microsoft.Purview](#microsoftpurview)
> - [Microsoft.ProviderHub](#microsoftproviderhub) > - [Microsoft.Quantum](#microsoftquantum) > - [Microsoft.RecoveryServices](#microsoftrecoveryservices)
@@ -1513,7 +1513,7 @@ Jump to a resource provider namespace:
> | - | -- | > | capacities | No |
-## Microsoft.ProjectBabylon
+## Microsoft.Purview
> [!div class="mx-tableFixed"] > | Resource type | Region move |
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure SQL Database description: Lists Azure Policy built-in policy definitions for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/resource-limits-vcore-single-databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
@@ -206,7 +206,7 @@ The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Max log size (TB)|Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |Unlimited |Unlimited | |TempDB max data size (GB)|512|576|640|768|1024|1280|2560| |Storage type| [Note 1](#notes) |[Note 1](#notes)|[Note 1](#notes)|[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) |[Note 1](#notes) |
-|Max local SSD IOPS *|64000 |72000 |80000 |96000 |160000 |192000 |204800 |
+|Max local SSD IOPS *|64000 |72000 |80000 |96000 |128000 |160000 |204800 |
|Max log rate (MBps)|100 |100 |100 |100 |100 |100 |100 | |IO latency (approximate)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)|[Note 2](#notes)| |Max concurrent workers (requests)|1600|1800|2000|2400|3200|4000|8000|
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/availability-group-load-balancer-portal-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-load-balancer-portal-configure.md
@@ -73,7 +73,7 @@ First, create the load balancer.
| **Virtual network** |Select the virtual network that the SQL Server instances are in. | | **Subnet** |Select the subnet that the SQL Server instances are in. | | **IP address assignment** |**Static** |
- | **Private IP address** |Specify an available IP address from the subnet. Use this IP address when you create a listener on the cluster. In a PowerShell script, later in this article, use this address for the `$ILBIP` variable. |
+ | **Private IP address** |Specify an available IP address from the subnet. Use this IP address when you create a listener on the cluster. In a PowerShell script, later in this article, use this address for the `$ListenerILBIP` variable. |
| **Subscription** |If you have multiple subscriptions, this field might appear. Select the subscription that you want to associate with this resource. It's normally the same subscription as all the resources for the availability group. | | **Resource group** |Select the resource group that the SQL Server instances are in. | | **Location** |Select the Azure location that the SQL Server instances are in. |
@@ -317,4 +317,4 @@ If you have an Azure Network Security Group to restrict access, make sure that t
## Next steps -- [Configure a SQL Server Always On availability group on Azure virtual machines in different regions](availability-group-manually-configure-multiple-regions.md)
+- [Configure a SQL Server Always On availability group on Azure virtual machines in different regions](availability-group-manually-configure-multiple-regions.md)
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-upgrades https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-upgrades.md
@@ -48,7 +48,7 @@ In addition to making updates, Azure VMware Solution takes a configuration backu
At times of failure, Azure VMware Solution can restore these from the configuration backup.
-For more information on VMware software versions, see the [private clouds and clusters concept article](concepts-private-clouds-clusters.md) and the [FAQ](faq.md).
+For more information on VMware software versions, see the [private clouds and clusters concept article](concepts-private-clouds-clusters.md) and the [FAQ](faq.yml).
## Next steps
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/deploy-traffic-manager-balance-workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-traffic-manager-balance-workloads.md
@@ -2,7 +2,7 @@
Title: Deploy Traffic Manager to balance Azure VMware Solution workloads description: Learn how to integrate Traffic Manager with Azure VMware Solution to balance application workloads across multiple endpoints in different regions. Previously updated : 12/29/2020 Last updated : 02/08/2021 # Deploy Traffic Manager to balance Azure VMware Solution workloads
@@ -125,9 +125,9 @@ The following steps verify the configuration of the NSX-T segment in the Azure V
## Next steps
-Learn more about:
+Now that you've covered integrating Azure Traffic Manager with Azure VMware Solution, you may want to learn about:
-- [Using Azure Application Gateway on Azure VMware Solution](protect-azure-vmware-solution-with-application-gateway.md)-- [Traffic Manager routing methods](../traffic-manager/traffic-manager-routing-methods.md)-- [Combining load-balancing services in Azure](../traffic-manager/traffic-manager-load-balancing-azure.md)-- [Measuring Traffic Manager performance](../traffic-manager/traffic-manager-performance-considerations.md)
+- [Using Azure Application Gateway on Azure VMware Solution](protect-azure-vmware-solution-with-application-gateway.md).
+- [Traffic Manager routing methods](../traffic-manager/traffic-manager-routing-methods.md).
+- [Combining load-balancing services in Azure](../traffic-manager/traffic-manager-load-balancing-azure.md).
+- [Measuring Traffic Manager performance](../traffic-manager/traffic-manager-performance-considerations.md).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/faq.md
@@ -1,332 +0,0 @@
- Title: Frequently asked questions
-description: Provides answers to some of the common questions about Azure VMware Solution.
- Previously updated : 1/27/2021--
-# Frequently asked questions about Azure VMware Solution
-
-In this article, we'll answer frequently asked questions about Azure VMware Solution.
-
-## General
-
-### What is Azure VMware Solution?
-
-As enterprises pursue IT modernization strategies to improve business agility, reduce costs, and accelerate innovation, hybrid cloud platforms have emerged as key enablers of customers' digital transformation. Azure VMware Solution combines VMware's Software-Defined Data Center (SDDC) software with Microsoft's Azure global cloud service ecosystem. Azure VMware Solution is managed to meet performance, availability, security, and compliance requirements.
-
-## Azure VMware Solution Service
-
-### Where is Azure VMware Solution available today?
-
-The service is continuously being added to new regions, so view the [latest service availability information](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware) for more details.
-
-### Can workloads running in an Azure VMware Solution instance consume or integrate with Azure services?
-
-All Azure services will be available to Azure VMware Solution customers. Performance and availability limitations for specific services will need to be addressed on a case-by-case basis.
-
-### What guest operating systems are compatible with Azure VMware Solution?
-
-You can find information about guest operating system compatibility with vSphere by using the [VMware Compatibility Guide](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software&details=1&releases=485&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc&testConfig=16). To identify the version of vSphere running in Azure VMware Solution, see [VMware software versions](concepts-private-clouds-clusters.md#vmware-software-versions).
-
-### Do I use the same tools that I use now to manage private cloud resources?
-
-Yes. The Azure portal is used for deployment and several management operations. vCenter and NSX Manager are used to manage vSphere and NSX-T resources.
-
-### Can I manage a private cloud with my on-premises vCenter?
-
-At launch, Azure VMware Solution won't support a single management experience across on-premises and private cloud environments. Private cloud clusters will be managed with vCenter and NSX Manager local to a private cloud.
-
-### Can I use vRealize Suite running on-premises?
-
-Specific integrations and use cases may be evaluated on a case-by-case basis.
-
-### Can I migrate vSphere VMs from on-premises environments to Azure VMware Solution private clouds?
-
-Yes. VM migration and vMotion can be used to move VMs to a private cloud if standard cross vCenter [vMotion requirements](https://kb.vmware.com/s/article/2106952?lang=en_US&queryTerm=2106952) are met.
-
-### Is a specific version of vSphere required in on-premises environments?
-
-All private clouds come with VMware HCX; vSphere 5.5 or later is required in on-premises environments for vMotion.
-
-### What does the change control process look like?
-
-Updates made to the service itself follow Microsoft Azure's standard change management process. Customers are responsible for any workload administration tasks and the associated change management processes.
-
-### How is this different from Azure VMware Solution by CloudSimple?
-
-With the new Azure VMware Solution, Microsoft and VMware have a direct cloud provider partnership. The new solution is entirely designed, built, and supported by Microsoft, and endorsed by VMware. Architecturally, the solutions are consistent, with the VMware technology stack running on a dedicated Azure infrastructure.
-
-### Can Azure VMware Solution VMs be managed by VMRC?
-Yes, provided the system it's installed on can access the private cloud vCenter and is using public DNS to resolve ESXi hostnames.
-
-### Are there special instructions for installing and using VMRC with Azure VMware Solution VMs?
-No. To meet the VM prerequisites follow the [instructions provided by VMware](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vm_admin.doc/GUID-89E7E8F0-DB2B-437F-8F70-BA34C505053F.html).
-
-### Is VMware HCX supported on VPNs?
-No, because of bandwidth and latency requirements.
-
-### Can Azure Bastion be used for connecting to Azure VMware Solution VMs?
-Azure Bastion is the service recommended to connect to the jump box to prevent exposing Azure VMware Solution to the internet. You can't use Azure Bastion to connect to Azure VMware Solution VMs since they aren't Azure IaaS objects.
-
-### Can Azure Load Balancer internal be used for Azure VMware Solution VMs?
-No. The internal Azure Load Balancer supports only Azure IaaS VMs. Azure Load Balancer doesn't support IP-based backend pools, only Azure VMs or virtual machine scale set objects, and Azure VMware Solution VMs aren't Azure objects.
-
-### Can an existing ExpressRoute Gateway be used to connect to Azure VMware Solution?
-Yes. Use an existing ExpressRoute Gateway to connect to Azure VMware Solution as long as it doesn't exceed the limit of four ExpressRoute circuits per virtual network. To access Azure VMware Solution from on-premises through ExpressRoute, you must have ExpressRoute Global Reach since the ExpressRoute Gateway doesn't provide transitive routing between its connected circuits.
-
-### Why does Azure VMware Solution use a Public 4-byte Autonomous System Number (ASN)?
-Azure VMware Solution uses the officially registered Public 4-byte ASNs to ensure there is never a conflict with your on-premises use of Private ASNs in the customer's routing path to Azure VMware Solution.
-
-### How can I use ExpressRoute to connect to Azure VMware Solution if the on-premises ExpressRoute-carrier partners/ISPs don't support 4-byte ASN?
-The only way to connect to Azure VMware Solution through ExpressRoute is for your environment and the on-premises ExpressRoute-carrier partners/ISPs to support 4-byte ASNs, or to have backward compatibility from 4-byte to 2-byte ASNs in the BGP prefix ASN path advertisement.
-
-## Compute, network, storage, and backup
-
-### Is there more than one type of host available?
-
-There's only one type of host available.
-
-### What are the CPU specifications in each type of host?
-
-The servers have dual 18 core 2.3 GHz Intel CPUs.
-
-### How much memory is in each host?
-
-The servers have 576 GB of RAM.
-
-### What is the storage capacity of each host?
-
-Each ESXi host has two vSAN diskgroups with a capacity tier of 15.2 TB and a 3.2-TB NVMe cache tier (1.6 TB in each diskgroup).
-
-### How much network bandwidth is available in each ESXi host?
-
-Each ESXi host in Azure VMware Solution is configured with four 25-Gbps NICs, two NICs provisioned for ESXi system traffic, and two NICs provisioned for workload traffic.
-
-### Is data stored on the vSAN datastores encrypted at rest?
-
-Yes, all vSAN data is encrypted by default using keys stored in Azure Key Vault.
-
-### What independent software vendors (ISVs) backup solutions work with Azure VMware Solution?
-
-Commvault, Veritas, and Veeam have extended their backup solutions to work with Azure VMware Solution. However, any backup solution that uses VMware VADP with the HotAdd transport mode would work right out of the box on Azure VMware Solution.
-
-### What about support for ISV backup solutions?
-
-As these backup solutions are installed and managed by customers, they can reach out to the respective ISV for support.
-
-### What is the correct storage policy for the dedupe setup?
-
-Use the *thin_provision* storage policy for your VM template. The default is *thick_provision*.
-
-### Are the SNMP infrastructure logs shared?
-
-No.
-
-## Hosts, clusters, and private clouds
-
-### Is the underlying infrastructure shared?
-
-No, private cloud hosts and clusters are dedicated and securely erased before and after use.
-
-### What are the minimum and maximum number of hosts per cluster?
-
-Clusters can scale between 3 and 16 ESXi hosts. Trial clusters are limited to three hosts.
-
-### Can I scale my private cloud clusters?
-
-Yes, clusters scale between the minimum and the maximum number of ESXi hosts. Trial clusters are limited to three hosts.
-
-### What are trial clusters?
-
-Trial clusters are three host clusters used for one-month evaluations of Azure VMware Solution private clouds.
-
-### Can I use High-end hosts for trial clusters?
-
-No. High-end ESXi hosts are reserved for use in production clusters.
-
-## Azure VMware Solution and VMware software
-
-### What versions of VMware software are used in private clouds?
---
-### Do private clouds use VMware NSX?
-
-Yes, NSX-T 2.5 is used for the software-defined networking in Azure VMware Solution private clouds.
-
-### Can I use VMware NSX-V in a private cloud?
-
-No. NSX-T is the only supported version of NSX.
-
-### Is NSX required in on-premises environments or networks that connect to a private cloud?
-
-No, you aren't required to use NSX on-premises.
-
-### What is the upgrade and update schedule for VMware software in a private cloud?
-
-The private cloud software bundle upgrades keep the software within one version of the most recent software bundle release from VMware. The private cloud software versions may differ from the most recent versions of the individual software components (ESXi, NSX-T, vCenter, vSAN).
-
-### How often will the private cloud software stack be updated?
-
-The private cloud software is upgraded on a schedule that tracks the software bundle's release from VMware. Your private cloud doesn't require downtime for upgrades.
-
-## Connectivity
-
-### What network IP address planning is required to incorporate private clouds with on-premises environments?
-
-A private network /22 address space is required to deploy an Azure VMware Solution private cloud. This private address space shouldn't overlap with other virtual networks in a subscription or with on-premises networks.
-
-### How do I connect from on-premises environments to an Azure VMware Solution private cloud?
-
-You can connect to the service in one of two methods:
-
-- With a VM or application gateway deployed on an Azure virtual network that is peered through ExpressRoute to the private cloud.
-- Through ExpressRoute Global Reach from your on-premises data center to an Azure ExpressRoute circuit.
-
-### How do I connect a workload VM to the internet or an Azure service endpoint?
-
-In the Azure portal, enable internet connectivity for a private cloud. With NSX-T manager, create an NSX-T T1 router and a logical switch. You then use vCenter to deploy a VM on the network segment defined by the logical switch. That VM will have network access to the internet and Azure services.
-
-### Do I need to restrict access from the internet to VMs on logical networks in a private cloud?
-
-No. Network traffic inbound from the internet directly to private clouds isn't allowed by default. However, you're able to expose Azure VMware Solution VMs to the internet through the [Public IP](public-ip-usage.md) option in your Azure portal for your Azure VMware Solution private cloud.
-
-### Do I need to restrict internet access from VMs on logical networks to the internet?
-
-Yes. You'll need to use NSX-T manager to create a firewall to restrict VM access to the internet.
--
-### Can Azure VMware Solution use Azure Virtual WAN hosted ExpressRoute Gateways?
-Yes.
-
-### Can transit connectivity be established between on-premises and Azure VMware Solution through Azure Virtual WAN over ExpressRoute Global Reach?
-Azure Virtual WAN doesn't provide transitive routing between two connected ExpressRoute circuits and non-virtual WAN ExpressRoute Gateway. Using ExpressRoute Global Reach allows connectivity between on-premises and Azure VMware Solution, but goes through Microsoft's global network instead of the Virtual WAN Hub.
-
-### Could I use HCX through public Internet communications as a workaround for the non-supportability of HCX when using VPN S2S with vWAN for on-premises communications?
-
-Currently, the only supported method for VMware HCX is through ExpressRoute.
-
-## Accounts and privileges
-
-### What accounts and privileges will I get with my new Azure VMware Solution private cloud?
-
-You're provided credentials for a cloudadmin user in vCenter and admin access on NSX-T Manager. There's also a CloudAdmin group that can be used to incorporate Azure Active Directory. For more information, see [Access and Identity Concepts](concepts-identity.md).
-
-### Can I have administrator access to ESXi hosts?
-
-No, administrator access to ESXi is restricted to meet the security requirements of the solution.
-
-### What privileges and permissions will I have in vCenter?
-
-You'll have CloudAdmin group privileges. For more information, see [Access and Identity Concepts](concepts-identity.md).
-
-### What privileges and permissions will I have on the NSX-T manager?
-
-You'll have full administrator privileges on NSX-T and can manage vSphere role-based access control as you would with NSX-T Data Center on-premises. For more information, see [Access and Identity Concepts](concepts-identity.md).
-
-> [!NOTE]
-> A T0 router is created and configured as part of a private cloud deployment. Any modification to that logical router or the NSX-T edge node VMs could affect connectivity to your private cloud.
-
-## Billing and Support
-
-### How will pricing be structured for Azure VMware Solution?
-
-For general questions on pricing, see the Azure VMware Solution [pricing](https://azure.microsoft.com/pricing/details/azure-vmware) page.
-
-### Can Azure VMware Solution be purchased through a Microsoft CSP?
-
-Yes, customers can deploy Azure VMware Solution within an Azure subscription managed by a CSP.
-
-### Who supports Azure VMware Solution?
-
-Microsoft delivers support for Azure VMware Solution. You can submit a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-For CSP-managed subscriptions, the first level of support is provided by the Solution Provider, in the same fashion as CSP support for other Azure services.
-
-### What accounts do I need to create an Azure VMware Solution private cloud?
-
-You'll need an Azure account in an Azure subscription.
-
-### Are Red Hat solutions supported on Azure VMware Solution?
-
-Microsoft and Red Hat share an integrated, colocated support team that provides a unified contact point for Red Hat ecosystems running on the Azure platform. Like other Azure platform services that work with Red Hat Enterprise Linux, Azure VMware Solution falls under the Cloud Access and integrated support umbrella. Red Hat Enterprise Linux is supported for running on top of Azure VMware Solution within Azure.
-
-### Is VMware HCX Enterprise available, and if so, how much does it cost?
-
-VMware HCX Enterprise is available with Azure VMware Solution as a *Preview* service. While VMware HCX Enterprise for Azure VMware Solution is in Preview, it's free and subject to the Preview service terms and conditions. Once the VMware HCX Enterprise service is generally available, you'll get a 30-day notice that billing will switch over. At that point, you can switch it off or opt out of the service.
-
-### How do I request a host quota increase for Azure VMware Solution?
-
-For CSP-managed subscriptions, the customer must submit the request to the partner. The partner team then engages with Microsoft to get the quota increased for the subscription. For more information, see [How to enable Azure VMware Solution resource](enable-azure-vmware-solution.md).
-
-For EA subscriptions, use the following procedure. First, you'll need:
-
-* An [Azure Enterprise Agreement (EA)](../cost-management-billing/manage/ea-portal-agreements.md) with Microsoft.
-* An Azure account in an Azure subscription.
-
-Before you can create your Azure VMware Solution resource, you'll submit a support ticket to have your hosts allocated. It takes up to five business days to confirm and fulfill your request. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll go through the same process.
-
-1. In your Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information for the ticket:
- - **Issue type:** Technical
- - **Subscription:** Select your subscription
- - **Service:** All services > Azure VMware Solution
- - **Resource:** General question
- - **Summary:** Need capacity
- - **Problem type:** Capacity Management Issues
- - **Problem subtype:** Customer Request for Additional Host Quota/Capacity
-
-1. In the **Description** of the support ticket, on the **Details** tab, provide:
-
- - POC or Production
- - Region Name
- - Number of hosts
- - Any other details
-
- >[!NOTE]
- >Azure VMware Solution recommends a minimum of three hosts to spin up your private cloud, plus one more host (N+1) for redundancy.
-
-1. Select **Review + Create** to submit the request.
-
- It will take up to five business days for a support representative to confirm your request.
-
- >[!IMPORTANT]
- >If you already have an existing Azure VMware Solution private cloud and request additional hosts, allow five business days for the new hosts to be allocated.
-
-1. Before you can provision your hosts, make sure that you register the **Microsoft.AVS** resource provider in the Azure portal.
-
- ```azurecli-interactive
- az provider register -n Microsoft.AVS --subscription <your subscription ID>
- ```
-
- For more ways to register the resource provider, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
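
The register-and-verify flow above can be sketched as a dry run. This composes the commands rather than executing them; the subscription ID is a placeholder, and `az provider show --query registrationState` is the usual way to confirm that registration completed:

```shell
# Dry-run sketch: compose the registration and verification commands.
# The subscription ID below is a placeholder, not a real subscription.
SUBSCRIPTION_ID="${SUBSCRIPTION_ID:-00000000-0000-0000-0000-000000000000}"

REGISTER_CMD="az provider register -n Microsoft.AVS --subscription $SUBSCRIPTION_ID"
VERIFY_CMD="az provider show -n Microsoft.AVS --query registrationState -o tsv"

# Printed instead of executed; drop the echoes to run them for real.
echo "$REGISTER_CMD"
echo "$VERIFY_CMD"
```

Registration is asynchronous, so the verification command may report `Registering` for a short while before `Registered`.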
-
-### Are Reserved Instances available for purchasing through the Cloud Solution Provider (CSP) program?
-
-Yes. CSP can purchase reserved instances for their customers. For more information, see [Save costs with a reserved instance](reserved-instance.md).
-
-### Does Azure VMware Solution offer multi-tenancy for hosting CSP partners?
-
-No. Currently, Azure VMware Solution doesn't offer multi-tenancy.
-
-### Will traffic between on-premises and Azure VMware Solution over ExpressRoute incur any outbound data transfer charge in the metered data plan?
-
-Traffic within the Azure VMware Solution ExpressRoute circuit isn't metered. Traffic from your on-premises ExpressRoute circuit to Azure is charged according to ExpressRoute pricing plans.
--
-## Customer communication
-
-### How can I receive an alert when Azure sends service health notifications to my Azure subscription?
-
-Service issue, planned maintenance, health advisory, and security advisory notifications are published through **Service Health** in the Azure portal. You can take timely action when you set up activity log alerts for these notifications. For more information, see [Create service health alerts using the Azure portal](../service-health/alerts-activity-log-service-notifications-portal.md#create-service-health-alert-using-azure-portal).
----
-<!-- LINKS - external -->
-[kb2106952]: https://kb.vmware.com/s/article/2106952?lang=en_US&queryTerm=21069522
-
-<!-- LINKS - internal -->
-[Access and Identity Concepts]: concepts-identity.md
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/lifecycle-management-of-azure-vmware-solution-vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/lifecycle-management-of-azure-vmware-solution-vms.md
@@ -2,7 +2,7 @@
Title: Lifecycle management of Azure VMware Solution VMs description: Learn to manage all aspects of the lifecycle of your Azure VMware Solution VMs with Microsoft Azure native tools. Previously updated : 09/11/2020 Last updated : 02/08/2021 # Lifecycle management of Azure VMware Solution VMs
@@ -104,4 +104,12 @@ Azure Monitor is a comprehensive solution for collecting, analyzing, and acting
- [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/platform/alerts-metric.md). - [Create, view, and manage log alerts using Azure Monitor](../azure-monitor/platform/alerts-log.md). - [Action rules](../azure-monitor/platform/alerts-action-rules.md) to set automated actions and notifications.
- - [Connect Azure to ITSM tools using IT Service Management Connector](../azure-monitor/platform/itsmc-overview.md).
+ - [Connect Azure to ITSM tools using IT Service Management Connector](../azure-monitor/platform/itsmc-overview.md).
+
+ ## Next steps
+
+Now that you've covered using Azure's native tools to manage your Azure VMware Solution VMs throughout their lifecycle, you may want to learn about:
+
+- [Protecting your Azure VMware Solution VMs with Azure Security Center](azure-security-integration.md).
+- [Setting up Azure Backup Server for Azure VMware Solution](set-up-backup-server-for-azure-vmware-solution.md).
+- [Integrating Azure VMware Solution in a hub and spoke architecture](concepts-hub-and-spoke.md).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/netapp-files-with-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
@@ -2,7 +2,7 @@
Title: Azure NetApp Files with Azure VMware Solution description: Use Azure NetApp Files with Azure VMware Solution VMs to migrate and sync data across on-premises servers, Azure VMware Solution VMs, and cloud infrastructures. Previously updated : 02/01/2021 Last updated : 02/08/2021 # Azure NetApp Files with Azure VMware Solution
@@ -11,7 +11,7 @@ In this article, we'll walk through the steps of integrating Azure NetApp Files
## Azure NetApp Files overview
-[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure first-party service for migration and running the most demanding enterprise file-workloads in the cloud, including databases, SAP, and high-performance computing applications, with no code changes.
+[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure service for migration and running the most demanding enterprise file-workloads in the cloud. This includes databases, SAP, and high-performance computing applications, with no code changes.
### Features (Services where Azure NetApp Files are used.)
@@ -26,11 +26,11 @@ Azure NetApp Files is available in many Azure regions and supports cross-region
## Reference architecture
-The following diagram illustrates a connection via Azure ExpressRoute to an Azure VMware Solution private cloud. It shows the usage of an Azure NetApp Files share, mounted on Azure VMware Solution VMs, being accessed by the Azure VMware Solution environment.
+The following diagram illustrates a connection via Azure ExpressRoute to an Azure VMware Solution private cloud. The Azure VMware Solution environment accesses the Azure NetApp Files share, which is mounted on Azure VMware Solution VMs.
![Diagram showing NetApp Files for Azure VMware Solution architecture.](media/net-app-files/net-app-files-topology.png)
-This article covers instructions to set up, test, and verify the Azure NetApp Files volume as a file share for Azure VMware Solution VMs. In this scenario, we have used the NFS protocol. Azure NetApp Files and Azure VMware Solution are created in the same Azure region.
+This article covers instructions to set up, test, and verify the Azure NetApp Files volume as a file share for Azure VMware Solution VMs. In this scenario, we've used the NFS protocol. Azure NetApp Files and Azure VMware Solution are created in the same Azure region.
## Prerequisites
@@ -78,11 +78,11 @@ The following steps include verification of the pre-configured Azure NetApp File
:::image type="content" source="media/net-app-files/configuration-of-volume.png" alt-text="Screenshot showing configuration details of a volume.":::
- You can see that the volume anfvolume, with a size of 200 GiB, was created in capacity pool anfpool1 and exported as an NFS file share via 10.22.3.4:/ANFVOLUME. One private IP from the Azure Virtual Network (VNet) was created for Azure NetApp Files and the NFS path to mount on the VM. For information on Azure NetApp Files volume performance relative to size ("Quota"), see [Performance considerations for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-performance-considerations.md).
+ You can see that the volume anfvolume has a size of 200 GiB and is in capacity pool anfpool1. It's exported as an NFS file share via 10.22.3.4:/ANFVOLUME. One private IP from the Azure Virtual Network (VNet) was created for Azure NetApp Files and the NFS path to mount on the VM. To learn about Azure NetApp Files volume performance by size or "Quota," see [Performance considerations for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-performance-considerations.md).
## Verify pre-configured Azure VMware Solution VM share mapping
-Before showcasing the accessibility of Azure NetApp Files share to an Azure VMware Solution VM, it's important to understand SMB and NFS share mapping. Only after configuring the SMB or NFS volumes, can they be mounted as documented here.
+To make an Azure NetApp Files share accessible to an Azure VMware Solution VM, it's important to understand SMB and NFS share mapping. Only after configuring the SMB or NFS volumes, can they be mounted as documented here.
- SMB share: Create an Active Directory connection before deploying an SMB volume. The specified domain controllers must be accessible by the delegated subnet of Azure NetApp Files for a successful connection. Once the Active Directory is configured within the Azure NetApp Files account, it will appear as a selectable item while creating SMB volumes.
@@ -98,7 +98,7 @@ The following are just a few compelling Azure NetApp Files use cases.
## Next steps
-Once you've integrated Azure NetApp Files with your Azure VMware Solution workloads, you may want to learn more about:
+Now that you've covered integrating Azure NetApp Files with your Azure VMware Solution workloads, you may want to learn about:
- [Resource limits for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-resource-limits.md#resource-limits). - [Guidelines for Azure NetApp Files network planning](../azure-netapp-files/azure-netapp-files-network-topologies.md).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/protect-azure-vmware-solution-with-application-gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
@@ -2,7 +2,7 @@
Title: Use Azure Application Gateway to protect your web apps on Azure VMware Solution description: Configure Azure Application Gateway to securely expose your web apps running on Azure VMware Solution. Previously updated : 11/13/2020 Last updated : 02/08/2021 # Use Azure Application Gateway to protect your web apps on Azure VMware Solution
@@ -183,4 +183,8 @@ This procedure shows you how to define backend address pools using VMs running o
## Next Steps
-Review the [Azure Application Gateway documentation](../application-gateway/index.yml) for more configuration examples.
+Now that you've covered using Application Gateway to protect a web app running on Azure VMware Solution, you may want to learn about:
+
+- [Configuring Azure Application Gateway for different scenarios](../application-gateway/configuration-overview.md).
+- [Deploying Traffic Manager to balance Azure VMware Solution workloads](deploy-traffic-manager-balance-workloads.md).
+- [Integrating Azure NetApp Files with Azure VMware Solution-based workloads](netapp-files-with-azure-vmware-solution.md).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/tutorial-create-private-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-create-private-cloud.md
@@ -70,25 +70,6 @@ Provide a name for the resource group and the private cloud, a location, and the
az vmware private-cloud create -g myResourceGroup -n myPrivateCloudName --location eastus --cluster-size 3 --network-block xx.xx.xx.xx/22 --sku AV36 ```
-## Delete an Azure VMware Solution private cloud
-
-If you have an Azure VMware Solution private cloud that you no longer need, you can delete it. An Azure VMware Solution private cloud includes an isolated network domain, one or more provisioned vSphere clusters on dedicated server hosts, and several virtual machines. When a private cloud is deleted, all of the virtual machines, their data, and clusters are deleted. The dedicated bare-metal hosts are securely wiped and returned to the free pool. The network domain provisioned for the customer is deleted.
-
-> [!CAUTION]
-> Deleting the private cloud is an irreversible operation. Once the private cloud is deleted, the data cannot be recovered, as it terminates all running workloads and components and destroys all private cloud data and configuration settings, including public IP addresses.
-
-### Prerequisites
-
-Once a private cloud is deleted, there's no way to recover the virtual machines and their data. If the virtual machine data will be required later, the admin must first back up all of the data before deleting the private cloud.
-
-### Steps to delete an Azure VMware Solution private cloud
-
-1. Access the Azure VMware Solutions page in the Azure portal.
-
-2. Select the private cloud to be deleted.
-
-3. Enter the name of the private cloud and select **Yes**. In a few hours, the deletion process completes.
- ## Azure VMware commands For a list of commands you can use with Azure VMware Solution, see [Azure VMware commands](/cli/azure/ext/vmware/vmware).
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/tutorial-delete-private-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-delete-private-cloud.md
@@ -0,0 +1,29 @@
+
+ Title: Tutorial - Delete an Azure VMware Solution private cloud
+description: Learn how to delete an Azure VMware Solution private cloud that you no longer need.
+ Last updated : 02/09/2021++
+# Tutorial: Delete an Azure VMware Solution private cloud
+
+If you have an Azure VMware Solution private cloud that you no longer need, you can delete it. The private cloud includes an isolated network domain, one or more provisioned vSphere clusters on dedicated server hosts, and several virtual machines (VMs). When you delete a private cloud, all of the VMs, their data, and clusters are deleted. The dedicated hosts are securely wiped and returned to the free pool. The network domain provisioned for the customer is also deleted.
+
+> [!CAUTION]
+> Deleting the private cloud is an irreversible operation. Once the private cloud is deleted, the data cannot be recovered, as it terminates all running workloads and components and destroys all private cloud data and configuration settings, including public IP addresses.
+
+## Prerequisites
+
+If you require the VMs and their data later, make sure to back up the data before you delete the private cloud. There's no way to recover the VMs and their data.
++
+## Delete the private cloud
+
+1. Access the Azure VMware Solutions console in the [Azure portal](https://portal.azure.com).
+
+2. Select the private cloud to be deleted.
+
+3. Enter the name of the private cloud and select **Yes**.
+
+>[!NOTE]
+>The deletion process takes a few hours to complete.
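
The portal steps above can also be scripted with the Azure CLI `vmware` extension. A minimal dry-run sketch — the resource names are placeholders, `az vmware private-cloud delete` assumes the extension is installed, and because deletion is irreversible the command is printed here rather than executed:

```shell
# Dry-run sketch of the deletion, mirroring the portal steps above.
# RESOURCE_GROUP and PRIVATE_CLOUD are placeholder names.
RESOURCE_GROUP="${RESOURCE_GROUP:-myResourceGroup}"
PRIVATE_CLOUD="${PRIVATE_CLOUD:-myPrivateCloudName}"

# Deletion is irreversible; print the command instead of running it.
# Remove the echo (with the vmware extension installed) to delete.
DELETE_CMD="az vmware private-cloud delete -g $RESOURCE_GROUP -n $PRIVATE_CLOUD"
echo "$DELETE_CMD"
```

As with the portal flow, expect the actual deletion to take a few hours to complete once triggered.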
backup https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-automation.md
@@ -523,6 +523,53 @@ A user can selectively restore few disks instead of the entire backed up set. Pr
Once you restore the disks, go to the next section to create the VM.
+#### Restore disks to a secondary region
+
+If cross-region restore is enabled on the vault with which you've protected your VMs, the backup data is replicated to the secondary region. You can use the backup data to perform a restore. Perform the following steps to trigger a restore in the secondary region:
+
+1. [Fetch the vault ID](#fetch-the-vault-id) with which your VMs are protected.
+1. Select the [correct backup item to restore](#select-the-vm-when-restoring-files).
+1. Select the appropriate recovery point in the secondary region that you want to use to perform the restore.
+
+ To complete this step, run this command:
+
+ ```powershell
+ $rp=Get-AzRecoveryServicesBackupRecoveryPoint -UseSecondaryRegion -Item $backupitem -VaultId $targetVault.ID
+ $rp=$rp[0]
+ ```
+
+1. Execute the [Restore-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/restore-azrecoveryservicesbackupitem) cmdlet with the `-RestoreToSecondaryRegion` parameter to trigger a restore in the secondary region.
+
+ To complete this step, run this command:
+
+ ```powershell
+ $restorejob = Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -StorageAccountName "DestAccount" -StorageAccountResourceGroupName "DestRG" -TargetResourceGroupName "DestRGforManagedDisks" -VaultId $targetVault.ID -VaultLocation $targetVault.Location -RestoreToSecondaryRegion -RestoreOnlyOSDisk
+ ```
+
+ The output will be similar to the following example:
+
+ ```output
+ WorkloadName Operation Status StartTime EndTime JobID
+    ------------ ------------------ ----------- --------------------- ---------- ------------------------------------
+ V2VM CrossRegionRestore InProgress 4/23/2016 5:00:30 PM cf4b3ef5-2fac-4c8e-a215-d2eba4124f27
+ ```
+
+1. Execute the [Get-AzRecoveryServicesBackupJob](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupjob) cmdlet with the `-UseSecondaryRegion` parameter to monitor the restore job.
+
+ To complete this step, run this command:
+
+ ```powershell
+ Get-AzRecoveryServicesBackupJob -From (Get-Date).AddDays(-7).ToUniversalTime() -To (Get-Date).ToUniversalTime() -UseSecondaryRegion -VaultId $targetVault.ID
+ ```
+
+ The output will be similar to the following example:
+
+ ```output
+ WorkloadName Operation Status StartTime EndTime JobID
+    ------------ ------------------ ----------- --------------------- ---------- ------------------------------------
+ V2VM CrossRegionRestore InProgress 2/8/2021 4:24:57 PM 2d071b07-8f7c-4368-bc39-98c7fb2983f7
+ ```
+ ## Replace disks in Azure VM To replace the disks and configuration information, perform the following steps:
backup https://docs.microsoft.com/en-us/azure/backup/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
backup https://docs.microsoft.com/en-us/azure/backup/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
backup https://docs.microsoft.com/en-us/azure/backup/tutorial-backup-sap-hana-db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-backup-sap-hana-db.md
@@ -94,6 +94,46 @@ You can also use the following FQDNs to allow access to the required services fr
When you back up an SAP HANA database running on an Azure VM, the backup extension on the VM uses the HTTPS APIs to send management commands to Azure Backup and data to Azure Storage. The backup extension also uses Azure AD for authentication. Route the backup extension traffic for these three services through the HTTP proxy. Use the list of IPs and FQDNs mentioned above for allowing access to the required services. Authenticated proxy servers aren't supported.
+## Understanding backup and restore throughput performance
+
+The backups (log and non-log) of SAP HANA databases in Azure VMs provided via Backint are streamed to Azure Recovery Services vaults, so it's important to understand this streaming methodology.
+
+The Backint component of HANA provides the 'pipes' (a pipe to read from and a pipe to write into), connected to the underlying disks where the database files reside. The Azure Backup service reads these streams and transports them to the Azure Recovery Services vault. The service also performs a checksum to validate the streams, in addition to the Backint native validation checks. These validations make sure that the data present in the Azure Recovery Services vault is reliable and recoverable.
+
+Since the streams primarily deal with disks, you need to understand disk performance to gauge backup and restore performance. Refer to [this article](https://docs.microsoft.com/azure/virtual-machines/disks-performance) for an in-depth understanding of disk throughput and performance in Azure VMs. The same considerations apply to backup and restore performance.
+
+**The Azure Backup service attempts to achieve up to ~420 MBps for non-log backups (such as full, differential, and incremental) and up to 100 MBps for log backups for HANA.** These speeds aren't guaranteed and depend on the following factors:
+
+* Max uncached disk throughput of the VM
+* Underlying disk type and its throughput
+* The number of processes trying to read from and write to the same disk at the same time.
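
One way to reason about these factors: the effective backup speed is bounded by the smallest of the limits involved. A minimal sketch with illustrative numbers — the VM and disk throughput values are assumptions for the example, not measurements:

```shell
# Illustrative arithmetic only: the effective non-log backup speed is
# capped by the smallest of the service cap, the VM's max uncached
# disk throughput, and the underlying disk's throughput.
SERVICE_CAP_MBPS=420     # Azure Backup non-log target described above
VM_UNCACHED_MBPS=768     # assumption: example VM limit
DISK_MBPS=250            # assumption: example disk limit

EFFECTIVE=$SERVICE_CAP_MBPS
if [ "$VM_UNCACHED_MBPS" -lt "$EFFECTIVE" ]; then EFFECTIVE=$VM_UNCACHED_MBPS; fi
if [ "$DISK_MBPS" -lt "$EFFECTIVE" ]; then EFFECTIVE=$DISK_MBPS; fi

echo "expected throughput ceiling: ${EFFECTIVE} MBps"
```

With these example numbers, the disk is the bottleneck, so the printed ceiling is 250 MBps; on a smaller VM the uncached VM limit can just as easily be the smallest term.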
+
+> [!IMPORTANT]
+> On smaller VMs, where the uncached disk throughput is very close to or less than 400 MBps, you may be concerned that the backup service consumes all of the disk IOPS, which may affect SAP HANA's read/write operations on the disks. In that case, if you wish to throttle or limit the backup service consumption to a maximum limit, refer to the next section.
+
+### Limiting backup throughput performance
+
+If you want to throttle the backup service's disk IOPS consumption to a maximum value, perform the following steps.
+
+1. Go to the `/opt/msawb/bin` folder.
+2. Create a new JSON file named `ExtensionSettingsOverrides.json`.
+3. Add a key-value pair to the JSON file as follows:
+
+ ```json
+ {
+ "MaxUsableVMThroughputInMBPS": 200
+ }
+ ```
+
+4. Change the permissions and ownership of the file as follows:
+
+ ```bash
+ chmod 750 ExtensionSettingsOverrides.json
+ chown root:msawb ExtensionSettingsOverrides.json
+ ```
+
+5. You don't need to restart any service. The Azure Backup service will attempt to limit the throughput to the value specified in this file.
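
Steps 1 through 4 can be scripted. A sketch that writes the override file and sets its permissions; it falls back to a temporary directory so it can be tried anywhere, whereas on the actual HANA VM the target folder is `/opt/msawb/bin` and the `chown` needs root plus the `msawb` group:

```shell
# Sketch of steps 1-4: write ExtensionSettingsOverrides.json and set
# its permissions. MSAWB_DIR falls back to a temp dir for safety;
# on the HANA VM it would be /opt/msawb/bin.
MSAWB_DIR="${MSAWB_DIR:-$(mktemp -d)}"
FILE="$MSAWB_DIR/ExtensionSettingsOverrides.json"

cat > "$FILE" <<'EOF'
{
    "MaxUsableVMThroughputInMBPS": 200
}
EOF

chmod 750 "$FILE"
# Needs root and the msawb group on the real VM; failure tolerated here.
chown root:msawb "$FILE" 2>/dev/null || true

echo "wrote $FILE"
```

Run it as root on the HANA VM with `MSAWB_DIR=/opt/msawb/bin` so the backup extension picks the file up from its expected location.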
+ ## What the pre-registration script does Running the pre-registration script performs the following functions:
batch https://docs.microsoft.com/en-us/azure/batch/batch-customer-managed-key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-customer-managed-key.md
@@ -18,11 +18,6 @@ There are two types of managed identities: [*system-assigned* and *user-assigned
You can either create your Batch account with system-assigned managed identity, or create a separate user-assigned managed identity that will have access to the customer-managed keys. Review the [comparison table](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) to understand the differences and consider which option works best for your solution. For example, if you want to use the same managed identity to access multiple Azure resources, a user-assigned managed identity will be needed. If not, a system-assigned managed identity associated with your Batch account may be sufficient. Using a user-assigned managed identity also gives you the option to enforce customer-managed keys at Batch account creation, as shown [in the example below](#create-a-batch-account-with-user-assigned-managed-identity-and-customer-managed-keys).
-> [!IMPORTANT]
-> Support for customer-managed keys in Azure Batch is currently in public preview for the West Europe, North Europe, Switzerland North, Central US, South Central US, West Central US, East US, East US 2, West US 2, US Gov Virginia, and US Gov Arizona regions.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Create a Batch account with system-assigned managed identity If you don't need a separate user-assigned managed identity, you can enable system-assigned managed identity when you create your Batch account.
batch https://docs.microsoft.com/en-us/azure/batch/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
batch https://docs.microsoft.com/en-us/azure/batch/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/computer-vision-how-to-install-containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
@@ -87,7 +87,7 @@ Container images for Read are available.
| Container | Container Registry / Repository / Image Name | |--|| | Read 2.0-preview | `mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview` |
-| Read 3.2-preview | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.1` |
+| Read 3.2-preview | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.2` |
Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image.
@@ -96,7 +96,7 @@ Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pul
# [Version 3.2-preview](#tab/version-3-2) ```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.1
+docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.2
``` # [Version 2.0-preview](#tab/version-2)
@@ -126,7 +126,7 @@ Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
```bash docker run --rm -it -p 5000:5000 --memory 18g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.1 \
+mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-preview.2 \
Eula=accept \ Billing={ENDPOINT_URI} \ ApiKey={API_KEY}
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Computer-vision/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/whats-new.md
@@ -19,13 +19,14 @@ Learn what's new in the service. These items may be release notes, videos, blog
## February 2021 ### Read API v3.2 Public Preview with OCR support for 73 languages
-Computer Vision's Read API v3.2 public preview includes these capabilities:
+Computer Vision's Read API v3.2 public preview, available as a cloud service and Docker container, includes these updates:
* [OCR for 73 languages](./language-support.md#optical-character-recognition-ocr) including Simplified and Traditional Chinese, Japanese, Korean, and Latin languages.
-* Output the text lines in the natural reading order.
-* Classify text lines as handwriting style or not along with a confidence score (Latin languages only).
-* For a multi-page document extract text only for selected pages or page range.
+* Natural reading order for the text line output.
+* Handwriting style classification for text lines along with a confidence score (Latin languages only).
+* Extract text only for selected pages for a multi-page document.
+* Available as a [Distroless container](./computer-vision-how-to-install-containers.md?tabs=version-3-2) for on-premises deployment.
-See [Read preview features](concept-recognizing-text.md#natural-reading-order-output) for more information.
+[Learn more](concept-recognizing-text.md) about the Read API.
> [!div class="nextstepaction"] > [Use the Read API v3.2 Public Preview](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-preview-2/operations/5d986960601faab4bf452005)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/getting-started-improving-your-classifier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/getting-started-improving-your-classifier.md
@@ -1,7 +1,7 @@
Title: Improving your classifier - Custom Vision Service
+ Title: Improving your model - Custom Vision Service
-description: In this article you'll learn how the amount, quality and variety of data can improve the quality of your classifier in the Custom Vision service.
+description: In this article you'll learn how the amount, quality and variety of data can improve the quality of your model in the Custom Vision service.
@@ -9,16 +9,16 @@
Previously updated : 03/21/2019 Last updated : 02/09/2021
-# How to improve your classifier
+# How to improve your Custom Vision model
-In this guide you will learn how to improve the quality of your Custom Vision Service classifier. The quality of your classifier depends on the amount, quality, and variety of the labeled data you provide it and how balanced the overall dataset is. A good classifier has a balanced training dataset that is representative of what will be submitted to the classifier. The process of building such a classifier is iterative; it's common to take a few rounds of training to reach expected results.
+In this guide, you'll learn how to improve the quality of your Custom Vision Service model. The quality of your [classifier](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier) or [object detector](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/get-started-build-detector) depends on the amount, quality, and variety of the labeled data you provide it and how balanced the overall dataset is. A good model has a balanced training dataset that is representative of what will be submitted to it. The process of building such a model is iterative; it's common to take a few rounds of training to reach expected results.
-The following is a general pattern to help you build a more accurate classifier:
+The following is a general pattern to help you train a more accurate model:
1. First-round training 1. Add more images and balance data; retrain
@@ -28,15 +28,15 @@ The following is a general pattern to help you build a more accurate classifier:
## Prevent overfitting
-Sometimes, a classifier will learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you are creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
+Sometimes, a model will learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you are creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
![Image of unexpected classification](./media/getting-started-improving-your-classifier/unexpected.png)
-To correct this problem, use the following guidance on training with more varied images: provide images with different angles, backgrounds, object size, groups, and other variations.
+To correct this problem, provide images with different angles, backgrounds, object size, groups, and other variations. The following sections expand upon these concepts.
## Data quantity
-The number of training images is the most important factor. We recommend using at least 50 images per label as a starting point. With fewer images, there's a higher risk of overfitting, and while your performance numbers may suggest good quality, your model may struggle with real-world data.
+The number of training images is the most important factor for your dataset. We recommend using at least 50 images per label as a starting point. With fewer images, there's a higher risk of overfitting, and while your performance numbers may suggest good quality, your model may struggle with real-world data.
## Data balance
@@ -44,11 +44,11 @@ It's also important to consider the relative quantities of your training data. F
## Data variety
-Be sure to use images that are representative of what will be submitted to the classifier during normal use. Otherwise, your classifier could learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you are creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
+Be sure to use images that are representative of what will be submitted to the classifier during normal use. Otherwise, your model could learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you are creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
![Image of unexpected classification](./media/getting-started-improving-your-classifier/unexpected.png)
-To correct this problem, include a variety of images to ensure that your classifier can generalize well. Below are some ways you can make your training set more diverse:
+To correct this problem, include a variety of images to ensure that your model can generalize well. Below are some ways you can make your training set more diverse:
* __Background:__ Provide images of your object in front of different backgrounds. Photos in natural contexts are better than photos in front of neutral backgrounds as they provide more information for the classifier.
@@ -70,30 +70,39 @@ To correct this problem, include a variety of images to ensure that your classif
![Image of style samples](./media/getting-started-improving-your-classifier/style.png)
-## Negative images
+## Negative images (classifiers only)
-At some point in your project, you may need to add _negative samples_ to help make your classifier more accurate. Negative samples are those which do not match any of the other tags. When you upload these images, apply the special **Negative** label to them.
+If you're using an image classifier, you may need to add _negative samples_ to help make your classifier more accurate. Negative samples are images which do not match any of the other tags. When you upload these images, apply the special **Negative** label to them.
+
+Object detectors handle negative samples automatically, because any image areas outside of the drawn bounding boxes are considered negative.
> [!NOTE] > The Custom Vision Service supports some automatic negative image handling. For example, if you are building a grape vs. banana classifier and submit an image of a shoe for prediction, the classifier should score that image as close to 0% for both grape and banana. > > On the other hand, in cases where the negative images are just a variation of the images used in training, it is likely that the model will classify the negative images as a labeled class due to the great similarities. For example, if you have an orange vs. grapefruit classifier, and you feed in an image of a clementine, it may score the clementine as an orange because many features of the clementine resemble those of oranges. If your negative images are of this nature, we recommend you create one or more additional tags (such as **Other**) and label the negative images with this tag during training to allow the model to better differentiate between these classes.
+## Consider occlusion and truncation (object detectors only)
+
+If you want your object detector to detect truncated objects (object is partially cut out of the image) or occluded objects (object is partially blocked by another object in the image), you'll need to include training images that cover those cases.
+
+> [!NOTE]
+> The issue of objects being occluded by other objects is not to be confused with **Overlap Threshold**, a parameter for rating model performance. The **Overlap Threshold** slider on the [Custom Vision website](https://customvision.ai) deals with how much a predicted bounding box must overlap with the true bounding box to be considered correct.
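The Overlap Threshold described above compares a predicted bounding box with the true box by their intersection over union (IoU). A minimal Python sketch of that comparison (an editorial illustration, not part of the Custom Vision SDK), assuming boxes in normalized (left, top, width, height) form:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (left, top, width, height)."""
    # Convert each box to (left, top, right, bottom) corners.
    ax1, ay1 = box_a[0], box_a[1]
    ax2, ay2 = ax1 + box_a[2], ay1 + box_a[3]
    bx1, by1 = box_b[0], box_b[1]
    bx2, by2 = bx1 + box_b[2], by1 + box_b[3]
    # Width and height of the intersection rectangle (zero if disjoint).
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union else 0.0

# A prediction counts as correct when its IoU with the true box
# meets the Overlap Threshold slider value (for example, 0.5).
print(iou((0.0, 0.0, 0.5, 0.5), (0.0, 0.0, 0.5, 0.5)))  # identical boxes -> 1.0
```

A detector prediction whose IoU with the labeled box falls below the slider value is scored as incorrect, which is why occlusion and truncation in training data are a separate concern from this metric.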
+ ## Use prediction images for further training
-When you use or test the image classifier by submitting images to the prediction endpoint, the Custom Vision service stores those images. You can then use them to improve the model.
+When you use or test the model by submitting images to the prediction endpoint, the Custom Vision service stores those images. You can then use them to improve the model.
-1. To view images submitted to the classifier, open the [Custom Vision web page](https://customvision.ai), go to your project, and select the __Predictions__ tab. The default view shows images from the current iteration. You can use the __Iteration__ drop down menu to view images submitted during previous iterations.
+1. To view images submitted to the model, open the [Custom Vision web page](https://customvision.ai), go to your project, and select the __Predictions__ tab. The default view shows images from the current iteration. You can use the __Iteration__ drop-down menu to view images submitted during previous iterations.
![screenshot of the predictions tab, with images in view](./media/getting-started-improving-your-classifier/predictions.png)
-2. Hover over an image to see the tags that were predicted by the classifier. Images are sorted so that the ones which can bring the most improvements to the classifier are listed the top. To use a different sorting method, make a selection in the __Sort__ section.
+2. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones that can bring the most improvements to the model are listed at the top. To use a different sorting method, make a selection in the __Sort__ section.
To add an image to your existing training data, select the image, set the correct tag(s), and click __Save and close__. The image will be removed from __Predictions__ and added to the set of training images. You can view it by selecting the __Training Images__ tab. ![Image of the tagging page](./media/getting-started-improving-your-classifier/tag.png)
-3. Then use the __Train__ button to retrain the classifier.
+3. Then use the __Train__ button to retrain the model.
## Visually inspect predictions
@@ -105,7 +114,7 @@ Sometimes a visual inspection can identify patterns that you can then correct by
## Next steps
-In this guide, you learned several techniques to make your custom image classification model more accurate. Next, learn how to test images programmatically by submitting them to the Prediction API.
+In this guide, you learned several techniques to make your custom image classification model or object detector model more accurate. Next, learn how to test images programmatically by submitting them to the Prediction API.
> [!div class="nextstepaction"] > [Use the prediction API](use-prediction-api.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/choose-training-images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/choose-training-images.md
@@ -24,4 +24,4 @@ Additionally, make sure all of your training images meet the following criteria:
* no less than 256 pixels on the shortest edge; any images shorter than this will be automatically scaled up by the Custom Vision Service > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
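The shortest-edge criterion above can be sketched as a quick pre-check in Python (illustrative only; the service performs the scaling itself, and the helper name is hypothetical):

```python
def upscale_factor(width, height, min_edge=256):
    """Factor by which an image whose shortest edge is below min_edge
    would need to be scaled up (1.0 means it already meets the minimum)."""
    shortest = min(width, height)
    return max(1.0, min_edge / shortest)

print(upscale_factor(512, 200))  # shortest edge 200 px -> scaled up by 1.28
print(upscale_factor(800, 600))  # already large enough -> 1.0
```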
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial-od https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial-od.md
@@ -139,7 +139,7 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device. > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. The following code associates each of the sample images with its tagged region.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/csharp-tutorial.md
@@ -144,7 +144,7 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device. > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
Then define a helper method to upload the images in this directory. You may need to edit the **GetFiles** argument to point to the location where your images are saved.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial-od https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial-od.md
@@ -150,7 +150,7 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device. > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. The following code associates each of the sample images with its tagged region.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/java-tutorial.md
@@ -152,7 +152,7 @@ This method defines the tags that you will train the model on.
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device. > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
[!code-java[](~/cognitive-services-quickstart-code/java/CustomVision/src/main/java/com/microsoft/azure/cognitiveservices/vision/customvision/samples/CustomVisionSamples.java?name=snippet_upload)]
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial-object-detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial-object-detection.md
@@ -121,7 +121,7 @@ Start a new function to contain all of your Custom Vision function calls. Add th
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device. > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag. When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. For this tutorial, the regions are hardcoded inline with the code. The regions specify the bounding box in normalized coordinates, and the coordinates are given in the order: left, top, width, height. You can upload up to 64 images in a single batch.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/node-tutorial.md
@@ -126,7 +126,7 @@ To create classification tags to your project, add the following code to your fu
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device. > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial-od https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial-od.md
@@ -108,7 +108,7 @@ To create object tags in your project, add the following code:
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ObjectDetection/Images) to your local device. > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
When you tag images in object detection projects, you need to specify the region of each tagged object using normalized coordinates. The following code associates each of the sample images with its tagged region. The regions specify the bounding box in normalized coordinates, and the coordinates are given in the order: left, top, width, height.
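If your labels start out in pixel coordinates, the conversion to the normalized (left, top, width, height) form described above is a division by the image dimensions. A small Python helper as a sketch (the function name is hypothetical, not part of the SDK):

```python
def normalize_region(left_px, top_px, width_px, height_px, image_width, image_height):
    """Convert a pixel-space bounding box to the normalized
    (left, top, width, height) form used when tagging regions."""
    return (left_px / image_width,
            top_px / image_height,
            width_px / image_width,
            height_px / image_height)

# A 128x96 px box at (64, 32) in a 640x480 image:
print(normalize_region(64, 32, 128, 96, 640, 480))
```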
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/python-tutorial.md
@@ -105,7 +105,7 @@ To add classification tags to your project, add the following code:
First, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device. > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
To add the sample images to the project, insert the following code after the tag creation. This code uploads each image with its corresponding tag. You can upload up to 64 images in a single batch.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/includes/quickstarts/rest-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/includes/quickstarts/rest-tutorial.md
@@ -99,7 +99,7 @@ You'll get a JSON response like the following. Save the `"id"` value of each tag
Next, download the sample images for this project. Save the contents of the [sample Images folder](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/CustomVision/ImageClassification/Images) to your local device. > [!NOTE]
-> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/en-us/ai/trove?activetab=pivot1:primaryr3) to learn more.
+> Do you need a broader set of images to complete your training? Trove, a Microsoft Garage project, allows you to collect and purchase sets of images for training purposes. Once you've collected your images, you can download them and then import them into your Custom Vision project in the usual way. Visit the [Trove page](https://www.microsoft.com/ai/trove?activetab=pivot1:primaryr3) to learn more.
Use the following command to upload the images and apply tags; once for the "Hemlock" images, and separately for the "Japanese Cherry" images. See the [Create Images From Data](https://southcentralus.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb5) API for more options.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/test-your-model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/test-your-model.md
@@ -55,4 +55,4 @@ To use the image submitted previously for training, use the following steps:
## Next steps
-[Improve your classifier](getting-started-improving-your-classifier.md)
+[Improve your model](getting-started-improving-your-classifier.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-how-to-azure-subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-azure-subscription.md
@@ -231,6 +231,10 @@ For automated processes like CI/CD pipelines, you might want to automate the ass
1. Get an Azure Resource Manager token from [this website](https://resources.azure.com/api/token?plaintext=true). This token does expire, so use it right away. The request returns an Azure Resource Manager token.
+ ```azurecli
+ az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv
+ ```
+
![Screenshot that shows the website for requesting an Azure Resource Manager token.](./media/luis-manage-keys/get-arm-token.png) 1. Use the token to request the LUIS runtime resources across subscriptions. Use the [Get LUIS Azure accounts API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5be313cec181ae720aa2b26c), which your user account has access to.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Concepts/best-practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/best-practices.md
@@ -136,7 +136,7 @@ For example, you might have two separate QnAs with the following questions:
Since these two QnAs are phrased with very similar words, this similarity could cause very similar scores for many user queries that are phrased like *"where is the `<x>` location"*. Instead, try to clearly differentiate with queries like *"where is the parking lot"* and *"where is the ATM"*, by avoiding words like "location" that could be in many questions in your KB. ## Collaborate
-QnA Maker allows users to [collaborate](../index.yml) on a knowledge base. Users need access to the Azure QnA Maker resource group in order to access the knowledge bases. Some organizations may want to outsource the knowledge base editing and maintenance, and still be able to protect access to their Azure resources. This editor-approver model is done by setting up two identical [QnA Maker services](../How-to/set-up-qnamaker-service-azure.md) in different subscriptions and selecting one for the edit-testing cycle. Once testing is finished, the knowledge base contents are transferred with an [import-export](../Tutorials/migrate-knowledge-base.md) process to the QnA Maker service of the approver that will finally publish the knowledge base and update the endpoint.
+QnA Maker allows users to collaborate on a knowledge base. Users need access to the Azure QnA Maker resource group in order to access the knowledge bases. Some organizations may want to outsource the knowledge base editing and maintenance, and still be able to protect access to their Azure resources. This editor-approver model is done by setting up two identical [QnA Maker services](../How-to/set-up-qnamaker-service-azure.md) in different subscriptions and selecting one for the edit-testing cycle. Once testing is finished, the knowledge base contents are transferred with an [import-export](../Tutorials/migrate-knowledge-base.md) process to the QnA Maker service of the approver that will finally publish the knowledge base and update the endpoint.
@@ -147,4 +147,4 @@ QnA Maker allows users to [collaborate](../index.yml) on a knowledge base. Users
## Next steps > [!div class="nextstepaction"]
-> [Edit a knowledge base](../How-to/edit-knowledge-base.md)
+> [Edit a knowledge base](../How-to/edit-knowledge-base.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Concepts/plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/plan.md
@@ -119,17 +119,17 @@ You should design your conversational flow with a loop in mind so that a user kn
Collaborators may be other developers who share the full development stack of the knowledge base application or may be limited to just authoring the knowledge base.
-Knowledge base authoring supports several [role-based access permissions](../reference-role-based-access-control.md) you apply in the Azure portal to limit the scope of a collaborator's abilities.
+Knowledge base authoring supports several role-based access permissions you apply in the Azure portal to limit the scope of a collaborator's abilities.
## Integration with client applications
-Integration with [client applications](../index.yml) is accomplished by sending a query to the prediction runtime endpoint. A query is sent to your specific knowledge base with an SDK or REST-based request to your QnA Maker's web app endpoint.
+Integration with client applications is accomplished by sending a query to the prediction runtime endpoint. A query is sent to your specific knowledge base with an SDK or REST-based request to your QnA Maker's web app endpoint.
To authenticate a client request correctly, the client application must send the correct credentials and knowledge base ID. If you're using an Azure Bot Service, configure these settings as part of the bot configuration in the Azure portal. ### Conversation flow in a client application
-Conversation flow in a [client application](../index.yml), such as an Azure bot, may require functionality before and after interacting with the knowledge base.
+Conversation flow in a client application, such as an Azure bot, may require functionality before and after interacting with the knowledge base.
Does your client application support conversation flow, either by providing alternate means to handle follow-up prompts or including chit-chat? If so, design these early and make sure the client application query is handled correctly by another service or when sent to your knowledge base.
@@ -143,7 +143,7 @@ In such a [shared architecture](../choose-natural-language-processing-service.md
### Active learning from a client application
-QnA Maker uses _active learning_ to improve your knowledge base by suggesting alternate questions to an answer. The client application is responsible for a part of this [active learning](active-learning-suggestions.md). Through conversational prompts, the client application can determine that the knowledge base returned an answer that's not useful to the user, and it can determine a better answer. The client application needs to [send that information back to the knowledge base](active-learning-suggestions.md#how-you-give-explicit-feedback-with-the-train-api) to improve the prediction quality.
+QnA Maker uses _active learning_ to improve your knowledge base by suggesting alternate questions to an answer. The client application is responsible for a part of this [active learning](../How-To/use-active-learning.md). Through conversational prompts, the client application can determine that the knowledge base returned an answer that's not useful to the user, and it can determine a better answer. The client application needs to send that information back to the knowledge base to improve the prediction quality.
### Providing a default answer
@@ -203,16 +203,16 @@ The [development lifecycle](development-lifecycle-knowledge-base.md) of a knowle
### Knowledge base development of QnA Maker pairs
-Your [QnA pairs](question-answer-set.md) should be designed and developed based on your client application usage.
+Your QnA pairs should be designed and developed based on your client application usage.
Each pair can contain: * Metadata - filterable when querying to allow you to tag your QnA pairs with additional information about the source, content, format, and purpose of your data. * Follow-up prompts - helps to determine a path through your knowledge base so the user arrives at the correct answer.
-* Alternate questions - important to allow search to match to your answer from different forms of the question. [Active learning suggestions](active-learning-suggestions.md) turn into alternate questions.
+* Alternate questions - important to allow search to match to your answer from different forms of the question. [Active learning suggestions](../How-To/use-active-learning.md) turn into alternate questions.
### DevOps development
-Developing a knowledge base to insert into a DevOps pipeline requires that the knowledge base is isolated during [batch testing](../index.yml).
+Developing a knowledge base to insert into a DevOps pipeline requires that the knowledge base is isolated during batch testing.
A knowledge base shares the Cognitive Search index with all other knowledge bases on the QnA Maker resource. While the knowledge base is isolated by partition, sharing the index can cause a difference in the score when compared to the published knowledge base.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Concepts/role-based-access-control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/role-based-access-control.md
@@ -41,7 +41,7 @@ If you intend to call the [authoring APIs](../index.yml), learn more about how t
## Authenticate by QnA Maker portal
-If you author and collaborate using the QnA Maker portal, after you [add the appropriate role to the resource for a collaborator](../index.yml), the QnA Maker portal manages all the access permissions.
+If you author and collaborate using the QnA Maker portal, after you add the appropriate role to the resource for a collaborator, the QnA Maker portal manages all the access permissions.
## Authenticate by QnA Maker APIs and SDKs
@@ -49,4 +49,4 @@ If you author and collaborate using the APIs, either through REST or the SDKs, y
## Next step
-* Design a knowledge base for [languages](../index.yml) and for [client applications](../index.yml)
+* Design a knowledge base for languages and for client applications
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/How-To/manage-qna-maker-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/manage-qna-maker-app.md
@@ -12,10 +12,6 @@ Last updated 11/09/2020
QnA Maker allows you to collaborate with different authors and content editors by offering a capability to restrict collaborator access based on the collaborator's role. Learn more about [QnA Maker collaborator authentication concepts](../Concepts/role-based-access-control.md).
-You can also improve the quality of your knowledge base by suggesting alternative questions through [active learning](../Concepts/active-learning-suggestions.md). User-submissions are taken into consideration and appear as suggestions in the alternate questions list. You have the flexibility to either add those suggestions as alternate questions or reject them.
-
-Your knowledge base doesn't change automatically. In order for any change to take effect, you must accept the suggestions. These suggestions add questions but don't change or remove existing questions.
- ## Add Azure role-based access control (Azure RBAC) QnA Maker allows multiple people to collaborate on all knowledge bases in the same QnA Maker resource. This feature is provided with [Azure role-based access control (Azure RBAC)](../../../role-based-access-control/role-assignments-portal.md).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/Quickstarts/create-publish-knowledge-base https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Quickstarts/create-publish-knowledge-base.md
@@ -208,4 +208,4 @@ If you are not continuing to the next quickstart, delete the QnA Maker and Bot f
For more information: * [Markdown format in answers](../reference-markdown-format.md)
-* QnA Maker [data sources](../index.yml).
+* QnA Maker [data sources](../Concepts/data-sources-and-content.md).
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/how-to-custom-voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
@@ -56,15 +56,28 @@ Once you've created an Azure account and a Speech service subscription, you'll n
## How to create a project
-Content like data, models, tests, and endpoints are organized into **Projects** in the Custom Voice portal. Each project is specific to a country/language and the gender of the voice you want to create. For example, you may create a project for a female voice for your call center's chat bots that use English in the United States (en-US).
+Content like data, models, tests, and endpoints are organized into **Projects** in the Custom Voice portal. Each project is specific to a country/language and the gender of the voice you want to create. For example, you may create a project for a female voice for your call center's chat bots that use English in the United States ('en-US').
To create your first project, select the **Text-to-Speech/Custom Voice** tab, then click **New Project**. Follow the instructions provided by the wizard to create your project. After you've created a project, you will see four tabs: **Data**, **Training**, **Testing**, and **Deployment**. Use the links provided in [Next steps](#next-steps) to learn how to use each tab. > [!IMPORTANT] > The [Custom Voice portal](https://aka.ms/custom-voice) was recently updated! If you created previous data, models, tests, and published endpoints in the CRIS.ai portal or with APIs, you need to create a new project in the new portal to connect to these old entities.
+## How to migrate to Custom Neural Voice
+
+If you are using the non-neural (or standard) Custom Voice, consider migrating to Custom Neural Voice by following the steps below. Moving to Custom Neural Voice will help you develop more realistic voices for more natural conversational interfaces, and enable your customers and end users to benefit from the latest Text-to-Speech technology in a responsible way.
+
+1. Learn more about our [limited access policy](https://aka.ms/gating-overview) and [apply here](https://aka.ms/customneural). Note that access to the Custom Neural Voice service is subject to Microsoft's sole discretion based on our eligibility criteria. Customers may gain access to the technology only after their application is reviewed and they have committed to using it in alignment with our [Responsible AI principles](https://microsoft.com/ai/responsible-ai) and the [code of conduct](https://aka.ms/custom-neural-code-of-conduct).
+2. Once your application is approved, you will be given access to the "neural" training feature. Make sure you log in to the [Custom Voice portal](https://speech.microsoft.com/customvoice) using the same Azure subscription that you provided in your application.
+ > [!IMPORTANT]
+ > To protect voice talent and prevent the training of voice models with unauthorized recordings, or without acknowledgement from the voice talent, we require the customer to upload a recorded statement of the voice talent giving his or her consent. When preparing your recording script, make sure you include this sentence:
+ > "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
+ > This sentence must be uploaded to the **Voice Talent** tab as a verbal consent file. It will be used to verify that the recordings in your training datasets were made by the same person who gave the consent.
+3. After the Custom Neural Voice model is created, deploy the voice model to a new endpoint. To create a new custom voice endpoint with your neural voice model, go to **Text-to-Speech > Custom Voice > Deployment**. Select **Deploy model** and enter a **Name** and **Description** for your custom endpoint. Then select the custom neural voice model you would like to associate with this endpoint and confirm the deployment.
+4. Update your code in your apps if you have created a new endpoint with a new model.
+ ## Next steps - [Prepare Custom Voice data](how-to-custom-voice-prepare-data.md) - [Create a Custom Voice](how-to-custom-voice-create-voice.md)-- [Guide: Record your voice samples](record-custom-voice-samples.md)
+- [Guide: Record your voice samples](record-custom-voice-samples.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/long-audio-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/long-audio-api.md
@@ -1,5 +1,5 @@
Title: Long Audio API (Preview) - Speech service
+ Title: Long Audio API - Speech service
description: Learn how the Long Audio API is designed for asynchronous synthesis of long-form text to speech.
@@ -12,9 +12,9 @@ Last updated 08/11/2020
-# Long Audio API (Preview)
+# Long Audio API
-The Long Audio API is designed for asynchronous synthesis of long-form text to speech (for example: audio books, news articles and documents). This API doesn't return synthesized audio in real-time, instead the expectation is that you will poll for the response(s) and consume the output(s) as they are made available from the service. Unlike the text to speech API that's used by the Speech SDK, the Long Audio API can create synthesized audio longer than 10 minutes, making it ideal for publishers and audio content platforms.
+The Long Audio API is designed for asynchronous synthesis of long-form text to speech (for example: audio books, news articles, and documents). This API doesn't return synthesized audio in real time; instead, the expectation is that you will poll for the response(s) and consume the output(s) as they are made available from the service. Unlike the text to speech API that's used by the Speech SDK, the Long Audio API can create synthesized audio longer than 10 minutes, making it ideal for publishers and audio content platforms to create long audio content, such as audio books, in a batch.
Additional benefits of the Long Audio API:
@@ -42,53 +42,41 @@ When preparing your text file, make sure it:
* Contains more than 400 characters for plain text or 400 [billable characters](./text-to-speech.md#pricing-note) for SSML text, and less than 10,000 paragraphs * For plain text, each paragraph is separated by hitting **Enter/Return** - View [plain text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/en-US.txt) * For SSML text, each SSML piece is considered a paragraph. SSML pieces shall be separated by different paragraphs - View [SSML text input example](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/SSMLTextInputSample.txt)
-> [!NOTE]
-> For Chinese (Mainland), Chinese (Hong Kong SAR), Chinese (Taiwan), Japanese, and Korean, one word will be counted as two characters.
## Python example
-This section contains Python examples that show the basic usage of the Long Audio API. Create a new Python project using your favorite IDE or editor. Then copy this code snippet into a file named `voice_synthesis_client.py`.
+This section contains Python examples that show the basic usage of the Long Audio API. Create a new Python project using your favorite IDE or editor. Then copy this code snippet into a file named `long_audio_synthesis_client.py`.
```python
-import argparse
import json import ntpath
-import urllib3
import requests
-import time
-from json import dumps, loads, JSONEncoder, JSONDecoder
-import pickle
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
```
-These libraries are used to parse arguments, construct the HTTP request, and call the text-to-speech long audio REST API.
+These libraries are used to construct the HTTP request, and call the text-to-speech long audio synthesis REST API.
### Get a list of supported voices
-This code allows you to get a full list of voices for a specific region/endpoint that you can use. Add the code to `voice_synthesis_client.py`:
+To get a list of supported voices, send a GET request to `https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/voices`.
+
+This code allows you to get a full list of voices for a specific region/endpoint that you can use.
```python
-parser = argparse.ArgumentParser(description='Text-to-speech client tool to submit voice synthesis requests.')
-parser.add_argument('--voices', action="store_true", default=False, help='print voice list')
-parser.add_argument('-key', action="store", dest="key", required=True, help='the speech subscription key, like fg1f763i01d94768bda32u7a******** ')
-parser.add_argument('-region', action="store", dest="region", required=True, help='the region information, could be centralindia, canadacentral or uksouth')
-args = parser.parse_args()
-baseAddress = 'https://%s.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0-beta1/' % args.region
-
-def getVoices():
- response=requests.get(baseAddress+"voicesynthesis/voices", headers={"Ocp-Apim-Subscription-Key":args.key}, verify=False)
- voices = json.loads(response.text)
- return voices
-
-if args.voices:
- voices = getVoices()
- print("There are %d voices available:" % len(voices))
- for voice in voices:
- print ("Name: %s, Description: %s, Id: %s, Locale: %s, Gender: %s, PublicVoice: %s, Created: %s" % (voice['name'], voice['description'], voice['id'], voice['locale'], voice['gender'], voice['isPublicVoice'], voice['created']))
+def get_voices():
+ region = '<region>'
+ key = '<your_key>'
+ url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/voices'.format(region)
+ header = {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+
+ response = requests.get(url, headers=header)
+ print(response.text)
+
+get_voices()
```
-Run the script using the command `python voice_synthesis_client.py --voices -key <your_key> -region <region>`, and replace the following values:
+Replace the following values:
* Replace `<your_key>` with your Speech service subscription key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal). * Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
@@ -96,164 +84,321 @@ Run the script using the command `python voice_synthesis_client.py --voices -key
You'll see an output that looks like this: ```console
-There are xx voices available:
-
-Name: Microsoft Server Speech Text to Speech Voice (en-US, xxx), Description: xxx , Id: xxx, Locale: en-US, Gender: Male, PublicVoice: xxx, Created: 2019-07-22T09:38:14Z
-Name: Microsoft Server Speech Text to Speech Voice (zh-CN, xxx), Description: xxx , Id: xxx, Locale: zh-CN, Gender: Female, PublicVoice: xxx, Created: 2019-08-26T04:55:39Z
+{
+ "values": [
+ {
+ "locale": "en-US",
+ "voiceName": "en-US-AriaNeural",
+ "description": "",
+ "gender": "Female",
+ "createdDateTime": "2020-05-21T05:57:39.123Z",
+ "properties": {
+ "publicAvailable": true
+ }
+ },
+ {
+      "id": "8fafd8cd-5f95-4a27-a0ce-59260f873141",
+ "locale": "en-US",
+ "voiceName": "my custom neural voice",
+ "description": "",
+ "gender": "Male",
+ "createdDateTime": "2020-05-21T05:25:40.243Z",
+ "properties": {
+ "publicAvailable": false
+ }
+ }
+ ]
+}
```
-If **PublicVoice** parameter is **True**, the voice is public neural voice. Otherwise, it's custom neural voice.
+If **properties.publicAvailable** is **true**, the voice is a public neural voice. Otherwise, it's a custom neural voice.
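As a quick sketch (not part of the original sample), the voices response above can be split into public and custom neural voices by checking that flag; the `split_voices` helper name is illustrative:

```python
import json

def split_voices(response_text):
    """Split a /voices response body into public neural voices and
    custom neural voices, based on properties.publicAvailable."""
    values = json.loads(response_text)['values']
    public = [v for v in values if v['properties']['publicAvailable']]
    custom = [v for v in values if not v['properties']['publicAvailable']]
    return public, custom
```

Custom neural voices carry an `id` property, which is needed later when submitting a synthesis request.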
### Convert text to speech
-Prepare an input text file, in either plain text or SSML text, then add the following code to `voice_synthesis_client.py`:
+Prepare an input text file, in either plain text or SSML text, then add the following code to `long_audio_synthesis_client.py`:
> [!NOTE]
-> 'concatenateResult' is an optional parameter. If this parameter isn't set, the audio outputs will be generated per paragraph. You can also concatenate the audios into 1 output by setting the parameter.
-> By default, the audio output is set to riff-16khz-16bit-mono-pcm. For more information about supported audio outputs, see [Audio output formats](#audio-output-formats).
+> `concatenateResult` is an optional parameter. If this parameter isn't set, the audio outputs will be generated per paragraph. You can also concatenate the audio into one output by setting the parameter.
+> `outputFormat` is also optional. By default, the audio output is set to `riff-16khz-16bit-mono-pcm`. For more information about supported audio output formats, see [Audio output formats](#audio-output-formats).
```python
-parser.add_argument('--submit', action="store_true", default=False, help='submit a synthesis request')
-parser.add_argument('--concatenateResult', action="store_true", default=False, help='If concatenate result in a single wave file')
-parser.add_argument('-file', action="store", dest="file", help='the input text script file path')
-parser.add_argument('-voiceId', action="store", nargs='+', dest="voiceId", help='the id of the voice which used to synthesis')
-parser.add_argument('-locale', action="store", dest="locale", help='the locale information like zh-CN/en-US')
-parser.add_argument('-format', action="store", dest="format", default='riff-16khz-16bit-mono-pcm', help='the output audio format')
-
-def submitSynthesis():
- modelList = args.voiceId
- data={'name': 'simple test', 'description': 'desc...', 'models': json.dumps(modelList), 'locale': args.locale, 'outputformat': args.format}
- if args.concatenateResult:
- properties={'ConcatenateResult': 'true'}
- data['properties'] = json.dumps(properties)
- if args.file is not None:
- scriptfilename=ntpath.basename(args.file)
- files = {'script': (scriptfilename, open(args.file, 'rb'), 'text/plain')}
- response = requests.post(baseAddress+"voicesynthesis", data, headers={"Ocp-Apim-Subscription-Key":args.key}, files=files, verify=False)
- if response.status_code == 202:
- location = response.headers['Location']
- id = location.split("/")[-1]
- print("Submit synthesis request successful")
- return id
- else:
- print("Submit synthesis request failed")
- print("response.status_code: %d" % response.status_code)
- print("response.text: %s" % response.text)
- return 0
-
-def getSubmittedSynthesis(id):
- response=requests.get(baseAddress+"voicesynthesis/"+id, headers={"Ocp-Apim-Subscription-Key":args.key}, verify=False)
- synthesis = json.loads(response.text)
- return synthesis
-
-if args.submit:
- id = submitSynthesis()
- if (id == 0):
- exit(1)
-
- while(1):
- print("\r\nChecking status")
- synthesis=getSubmittedSynthesis(id)
- if synthesis['status'] == "Succeeded":
- r = requests.get(synthesis['resultsUrl'])
- filename=id + ".zip"
- with open(filename, 'wb') as f:
- f.write(r.content)
- print("Succeeded... Result file downloaded : " + filename)
- break
- elif synthesis['status'] == "Failed":
- print("Failed...")
- break
- elif synthesis['status'] == "Running":
- print("Running...")
- elif synthesis['status'] == "NotStarted":
- print("NotStarted...")
- time.sleep(10)
+def submit_synthesis():
+ region = '<region>'
+ key = '<your_key>'
+ input_file_path = '<input_file_path>'
+ locale = '<locale>'
+ url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis'.format(region)
+ header = {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+
+ voice_identities = [
+ {
+ 'voicename': '<voice_name>'
+ }
+ ]
+
+ payload = {
+ 'displayname': 'long audio synthesis sample',
+ 'description': 'sample description',
+ 'locale': locale,
+ 'voices': json.dumps(voice_identities),
+ 'outputformat': 'riff-16khz-16bit-mono-pcm',
+ 'concatenateresult': True,
+ }
+
+ filename = ntpath.basename(input_file_path)
+ files = {
+ 'script': (filename, open(input_file_path, 'rb'), 'text/plain')
+ }
+
+ response = requests.post(url, payload, headers=header, files=files)
+ print('response.status_code: %d' % response.status_code)
+ print(response.headers['Location'])
+
+submit_synthesis()
```
-Run the script using the command `python voice_synthesis_client.py --submit -key <your_key> -region <region> -file <input> -locale <locale> -voiceId <voice_guid>`, and replace the following values:
+Replace the following values:
* Replace `<your_key>` with your Speech service subscription key. This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal). * Replace `<region>` with the region where your Speech resource was created (for example: `eastus` or `westus`). This information is available in the **Overview** tab for your resource in the [Azure portal](https://aka.ms/azureportal).
-* Replace `<input>` with the path to the text file you've prepared for text-to-speech.
+* Replace `<input_file_path>` with the path to the text file you've prepared for text-to-speech.
* Replace `<locale>` with the desired output locale. For more information, see [language support](language-support.md#neural-voices).
-* Replace `<voice_guid>` with the desired output voice. Use one of the voices returned by your previous call to the `/voicesynthesis/voices` endpoint.
+
+Use one of the voices returned by your previous call to the `/voices` endpoint.
+
+* If you are using a public neural voice, replace `<voice_name>` with the desired output voice.
+* To use a custom neural voice, replace the `voice_identities` variable with the following, and replace `<voice_id>` with the `id` of your custom neural voice.
+```Python
+voice_identities = [
+ {
+ 'id': '<voice_id>'
+ }
+]
+```
You'll see an output that looks like this: ```console
-Submit synthesis request successful
+response.status_code: 202
+https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/<guid>
+```
+
+> [!NOTE]
+> If you have more than one input file, you will need to submit multiple requests. There are some limitations to be aware of:
+> * The client is allowed to submit up to **5** requests to the server per second for each Azure subscription account. If this limit is exceeded, the client will get a 429 error code (too many requests). Reduce the number of requests per second.
+> * The server is allowed to run and queue up to **120** requests for each Azure subscription account. If this limit is exceeded, the server will return a 429 error code (too many requests). Wait, and avoid submitting new requests until some existing requests have completed.
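A minimal client-side guard against those **429** limits can be sketched as follows; `submit` and `submit_with_backoff` are illustrative names, where `submit` stands in for any callable that performs the POST shown earlier and returns its HTTP status code:

```python
import time

def submit_with_backoff(submit, max_retries=5, wait_seconds=30):
    """Call `submit` and back off on a 429 (too many requests)
    status code before retrying. Returns the last status code."""
    status = submit()
    for _ in range(max_retries):
        if status != 429:
            break
        time.sleep(wait_seconds)
        status = submit()
    return status
```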
-Checking status
-NotStarted...
+The URL in the output can be used to get the request status.
-Checking status
-Running...
+### Get information of a submitted request
-Checking status
-Running...
+To get the status of a submitted synthesis request, send a GET request to the URL returned in the previous step.
+```Python
-Checking status
-Succeeded... Result file downloaded : xxxx.zip
+def get_synthesis():
+ url = '<url>'
+ key = '<your_key>'
+ header = {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+ response = requests.get(url, headers=header)
+ print(response.text)
+
+get_synthesis()
+```
+The output will look like this:
+```console
+response.status_code: 200
+{
+ "models": [
+ {
+ "voiceName": "en-US-AriaNeural"
+ }
+ ],
+ "properties": {
+ "outputFormat": "riff-16khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "totalDuration": "PT5M57.252S",
+ "billableCharacterCount": 3048
+ },
+ "id": "eb3d7a81-ee3e-4e9a-b725-713383e71677",
+ "lastActionDateTime": "2021-01-14T11:12:27.240Z",
+ "status": "Succeeded",
+ "createdDateTime": "2021-01-14T11:11:02.557Z",
+ "locale": "en-US",
+ "displayName": "long audio synthesis sample",
+ "description": "sample description"
+}
```
-The result contains the input text and the audio output files that are generated by the service. You can download these files in a zip.
+From the `status` property, you can read the status of this request. The request starts in the `NotStarted` status, then changes to `Running`, and finally becomes `Succeeded` or `Failed`. You can use a loop to poll this API until the status becomes `Succeeded`.
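That polling loop can be sketched like this (an illustration, not part of the original sample); `get_status` stands in for an authenticated GET on the request URL that returns the parsed JSON body:

```python
import time

def poll_until_done(get_status, interval_seconds=10):
    """Poll a synthesis request until it reaches a terminal state.
    Returns the final status string, 'Succeeded' or 'Failed'."""
    while True:
        status = get_status()['status']
        if status in ('Succeeded', 'Failed'):
            return status
        # Still 'NotStarted' or 'Running': wait before polling again.
        time.sleep(interval_seconds)
```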
-> [!NOTE]
-> If you have more than 1 input files, you will need to submit multiple requests. There are some limitations that needs to be aware.
-> * The client is allowed to submit up to **5** requests to server per second for each Azure subscription account. If it exceeds the limitation, client will get a 429 error code(too many requests). Please reduce the request amount per second
-> * The server is allowed to run and queue up to **120** requests for each Azure subscription account. If it exceeds the limitation, server will return a 429 error code(too many requests). Please wait and avoid submitting new request until some requests are completed
+### Download audio result
-### Remove previous requests
+Once a synthesis request succeeds, you can download the audio result by calling GET `/files` API.
-The service will keep up to **20,000** requests for each Azure subscription account. If your request amount exceeds this limitation, please remove previous requests before making new ones. If you don't remove existing requests, you'll receive an error notification.
+```python
+def get_files():
+ id = '<request_id>'
+ region = '<region>'
+ key = '<your_key>'
+ url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/{}/files'.format(region, id)
+ header = {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+
+ response = requests.get(url, headers=header)
+ print('response.status_code: %d' % response.status_code)
+ print(response.text)
+
+get_files()
+```
+Replace `<request_id>` with the ID of the request whose result you want to download. It can be found in the response of the previous step.
-Add the following code to `voice_synthesis_client.py`:
+The output will look like this:
+```console
+response.status_code: 200
+{
+ "values": [
+ {
+ "name": "2779f2aa-4e21-4d13-8afb-6b3104d6661a.txt",
+ "kind": "LongAudioSynthesisScript",
+ "properties": {
+ "size": 4200
+ },
+ "createdDateTime": "2021-01-14T11:11:02.410Z",
+ "links": {
+ "contentUrl": "https://customvoice-usw.blob.core.windows.net/artifacts/input.txt?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
+ }
+ },
+ {
+ "name": "voicesynthesis_waves.zip",
+ "kind": "LongAudioSynthesisResult",
+ "properties": {
+ "size": 9290000
+ },
+ "createdDateTime": "2021-01-14T11:12:27.226Z",
+ "links": {
+ "contentUrl": "https://customvoice-usw.blob.core.windows.net/artifacts/voicesynthesis_waves.zip?st=2018-02-09T18%3A07%3A00Z&se=2018-02-10T18%3A07%3A00Z&sp=rl&sv=2017-04-17&sr=b&sig=e05d8d56-9675-448b-820c-4318ae64c8d5"
+ }
+ }
+ ]
+}
+```
+The output contains information about two files. The one with `"kind": "LongAudioSynthesisScript"` is the input script submitted. The other, with `"kind": "LongAudioSynthesisResult"`, is the result of this request.
+The result is a zip file that contains the generated audio output files, along with a copy of the input text.
+
+Both files can be downloaded from the URL in their `links.contentUrl` property.
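As a sketch, the result zip's download URL can be picked out of the `/files` response body like this (the `get_result_url` helper name is illustrative):

```python
import json

def get_result_url(files_response_text):
    """Return the contentUrl of the synthesis result zip from a
    /files response body, or None if no result entry is present."""
    for entry in json.loads(files_response_text)['values']:
        if entry['kind'] == 'LongAudioSynthesisResult':
            return entry['links']['contentUrl']
    return None
```

The returned URL can then be fetched, for example with `requests.get`, and the content written to a local `.zip` file.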
+
+### Get all synthesis requests
+
+You can get a list of all submitted requests with the following code:
```python
-parser.add_argument('--syntheses', action="store_true", default=False, help='print synthesis list')
-parser.add_argument('--delete', action="store_true", default=False, help='delete a synthesis request')
-parser.add_argument('-synthesisId', action="store", nargs='+', dest="synthesisId", help='the id of the voice synthesis which need to be deleted')
-
-def getSubmittedSyntheses():
- response=requests.get(baseAddress+"voicesynthesis", headers={"Ocp-Apim-Subscription-Key":args.key}, verify=False)
- syntheses = json.loads(response.text)
- return syntheses
-
-def deleteSynthesis(ids):
- for id in ids:
- print("delete voice synthesis %s " % id)
- response = requests.delete(baseAddress+"voicesynthesis/"+id, headers={"Ocp-Apim-Subscription-Key":args.key}, verify=False)
- if (response.status_code == 204):
- print("delete successful")
- else:
- print("delete failed, response.status_code: %d, response.text: %s " % (response.status_code, response.text))
-
-if args.syntheses:
- synthese = getSubmittedSyntheses()
- print("There are %d synthesis requests submitted:" % len(synthese))
- for synthesis in synthese:
- print ("ID : %s , Name : %s, Status : %s " % (synthesis['id'], synthesis['name'], synthesis['status']))
-
-if args.delete:
- deleteSynthesis(args.synthesisId)
+def get_synthesis():
+ region = '<region>'
+ key = '<your_key>'
+ url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/'.format(region)
+ header = {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+
+ response = requests.get(url, headers=header)
+ print('response.status_code: %d' % response.status_code)
+ print(response.text)
+
+get_synthesis()
```
-Run `python voice_synthesis_client.py --syntheses -key <your_key> -region <region>` to get a list of synthesis requests that you've made. You'll see an output like this:
+The output will look like this:
+```console
+response.status_code: 200
+{
+ "values": [
+ {
+ "models": [
+ {
+ "id": "8fafd8cd-5f95-4a27-a0ce-59260f873141",
+ "voiceName": "my custom neural voice"
+ }
+ ],
+ "properties": {
+ "outputFormat": "riff-16khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "totalDuration": "PT1S",
+ "billableCharacterCount": 5
+ },
+ "id": "f9f0bb74-dfa5-423d-95e7-58a5e1479315",
+ "lastActionDateTime": "2021-01-05T07:25:42.433Z",
+ "status": "Succeeded",
+ "createdDateTime": "2021-01-05T07:25:13.600Z",
+ "locale": "en-US",
+ "displayName": "Long Audio Synthesis",
+ "description": "Long audio synthesis sample"
+ },
+ {
+ "models": [
+ {
+ "voiceName": "en-US-AriaNeural"
+ }
+ ],
+ "properties": {
+ "outputFormat": "riff-16khz-16bit-mono-pcm",
+ "concatenateResult": false,
+ "totalDuration": "PT5M57.252S",
+ "billableCharacterCount": 3048
+ },
+ "id": "eb3d7a81-ee3e-4e9a-b725-713383e71677",
+ "lastActionDateTime": "2021-01-14T11:12:27.240Z",
+ "status": "Succeeded",
+ "createdDateTime": "2021-01-14T11:11:02.557Z",
+ "locale": "en-US",
+ "displayName": "long audio synthesis sample",
+ "description": "sample description"
+ }
+ ]
+}
+```
+
+The `values` property contains a list of synthesis requests. The list is paginated, with a maximum page size of 100. If there are more than 100 requests, a `"@nextLink"` property is provided to get the next page of the paginated list.
```console
-There are <number> synthesis requests submitted:
-ID : xxx , Name : xxx, Status : Succeeded
-ID : xxx , Name : xxx, Status : Running
-ID : xxx , Name : xxx : Succeeded
+ "@nextLink": "https://<endpoint>/api/texttospeech/v3.0/longaudiosynthesis/?top=100&skip=100"
+```
+
+You can also customize the page size and skip number by providing the `skip` and `top` URL parameters.
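A hedged sketch of collecting every submitted request by following `"@nextLink"` page by page (the function name is illustrative; region and key are placeholders for your own values):

```python
import requests


def list_all_syntheses(region, key):
    # Walk every page of submitted requests, following "@nextLink".
    url = ('https://{}.customvoice.api.speech.microsoft.com'
           '/api/texttospeech/v3.0/longaudiosynthesis/').format(region)
    header = {'Ocp-Apim-Subscription-Key': key}
    all_values = []
    while url:
        response = requests.get(url, headers=header)
        response.raise_for_status()
        page = response.json()
        all_values.extend(page.get('values', []))
        url = page.get('@nextLink')  # absent on the last page
    return all_values
```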
+
+### Remove previous requests
+
+The service keeps up to **20,000** requests for each Azure subscription account. If your number of requests exceeds this limit, remove previous requests before making new ones. If you don't remove existing requests, you'll receive an error notification.
+
+The following code shows how to remove a specific synthesis request.
+```python
+def delete_synthesis():
+ id = '<request_id>'
+ region = '<region>'
+ key = '<your_key>'
+ url = 'https://{}.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/longaudiosynthesis/{}/'.format(region, id)
+ header = {
+ 'Ocp-Apim-Subscription-Key': key
+ }
+
+ response = requests.delete(url, headers=header)
+ print('response.status_code: %d' % response.status_code)
```
-To delete a request, run `python voice_synthesis_client.py --delete -key <your_key> -region <Region> -synthesisId <synthesis_id>` and replace `<synthesis_id>` with a request ID value returned from the previous request.
+If the request is successfully removed, the response status code will be HTTP 204 (No Content).
+
+```console
+response.status_code: 204
+```
> [!NOTE]
> Requests with a status of 'Running'/'Waiting' cannot be removed or deleted.
+> Requests with a status of `NotStarted` or `Running` cannot be removed or deleted.
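Combining the list and delete calls above, a hypothetical cleanup helper (the name and logic are illustrative, not part of the API) could remove every request that has already finished, skipping `NotStarted` and `Running` ones:

```python
import requests


def prune_finished_syntheses(region, key):
    # Delete requests whose status is terminal (Succeeded or Failed).
    # Only handles the first page; follow "@nextLink" for more.
    base = ('https://{}.customvoice.api.speech.microsoft.com'
            '/api/texttospeech/v3.0/longaudiosynthesis/').format(region)
    header = {'Ocp-Apim-Subscription-Key': key}
    listing = requests.get(base, headers=header).json()
    removed = []
    for synthesis in listing.get('values', []):
        if synthesis.get('status') in ('Succeeded', 'Failed'):
            response = requests.delete(base + synthesis['id'] + '/',
                                       headers=header)
            if response.status_code == 204:
                removed.append(synthesis['id'])
    return removed
```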
-The completed `voice_synthesis_client.py` is available on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Python/voiceclient.py).
+The completed `long_audio_synthesis_client.py` is available on [GitHub](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Python/voiceclient.py).
## HTTP status codes
@@ -312,4 +457,4 @@ Sample code for Long Audio API is available on GitHub.
* [Sample code: Python](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/Python) * [Sample code: C#](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/CustomVoice-API-Samples/CSharp)
-* [Sample code: Java](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/)
+* [Sample code: Java](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice-API-Samples/Java/)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/speech-container-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-faq.md
@@ -1,629 +0,0 @@
- Title: Speech service containers frequently asked questions (FAQ)-
-description: Install and run speech containers. speech-to-text transcribes audio streams to text in real time that your applications, tools, or devices can consume or display. Text-to-speech converts input text into human-like synthesized speech.
------ Previously updated : 11/12/2020----
-# Speech service containers frequently asked questions (FAQ)
-
-When using the Speech service with containers, rely on this collection of frequently asked questions before escalating to support. This article captures questions of varying degree, from general to technical. To expand an answer, select the question.
-
-## General questions
-
-<details>
-<summary>
-<b>How do Speech containers work and how do I set them up?</b>
-</summary>
-
-**Answer:** When setting up a production cluster, there are several things to consider. First, setting up a single language with multiple containers on the same machine should not be a large issue. If you are experiencing problems, it may be a hardware-related issue, so we would first look at resources, that is, CPU and memory specifications.
-
-Consider, for a moment, the `ja-JP` container and its latest model. The acoustic model is the most demanding piece CPU-wise, while the language model demands the most memory. When we benchmarked usage, it took about 0.6 CPU cores to process a single speech-to-text request with audio flowing in at real time (like from a microphone). If you are feeding audio faster than real time (like from a file), that usage can double (1.2 cores). Meanwhile, the memory listed below is operating memory for decoding speech. It does *not* account for the actual full size of the language model, which resides in the file cache. For `ja-JP` that's an additional 2 GB; for `en-US`, it may be more (6-7 GB).
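A back-of-the-envelope sizing check based on those figures (the per-request core costs are the benchmark numbers quoted above, not guarantees):

```python
# Rough capacity estimate for one host, using the benchmark figures above.
cores = 8                # e.g. a DS13_v2-class machine
realtime_cores = 0.6     # per request, audio arriving at real time
file_cores = 1.2         # per request, audio arriving faster than real time

print(int(cores // realtime_cores))  # concurrent microphone-style requests
print(int(cores // file_cores))      # concurrent file-style requests
```

This is CPU only; remember to budget file-cache memory for each language model on top of the operating memory.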
-
-If you have a machine where memory is scarce, and you are trying to deploy multiple languages on it, it is possible that file cache is full, and the OS is forced to page models in and out. For a running transcription, that could be disastrous, and may lead to slowdowns and other performance implications.
-
-Furthermore, we pre-package executables for machines with the [advanced vector extension (AVX2)](speech-container-howto.md#advanced-vector-extension-support) instruction set. A machine with the AVX512 instruction set will require code generation for that target, and starting 10 containers for 10 languages may temporarily exhaust CPU. A message like this one will appear in the docker logs:
-
-```console
-2020-01-16 16:46:54.981118943
-[W:onnxruntime:Default, tvm_utils.cc:276 LoadTVMPackedFuncFromCache]
-Cannot find Scan4_llvm__mcpu_skylake_avx512 in cache, using JIT...
-```
-
-You can set the number of decoders you want inside a *single* container using the `DECODER_MAX_COUNT` variable. So, basically, we should start with your SKU (CPU/memory), and we can suggest how to get the best out of it. A great starting point is the recommended host machine resource specifications.
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Could you help with capacity planning and cost estimation of on-prem Speech-to-text containers?</b>
-</summary>
-
-**Answer:** For container capacity in batch processing mode, each decoder can handle audio at 2-3x real time, with two CPU cores, for a single recognition. We do not recommend keeping more than two concurrent recognitions per container instance, but we do recommend running more instances of containers for reliability/availability reasons, behind a load balancer.
-
-That said, each container instance could run with more decoders. For example, we may be able to set up 7 decoders per container instance on an eight-core machine (at more than 2x each), yielding 15x throughput. There is a parameter, `DECODER_MAX_COUNT`, to be aware of. In the extreme case, reliability and latency issues arise, though throughput increases significantly. For a microphone, it will be at 1x real time. The overall usage should be about one core for a single recognition.
-
-For a scenario of processing 1,000 hours/day in batch processing mode, in an extreme case 3 VMs could handle it within 24 hours, but that is not guaranteed. To handle spike days, failover, and updates, and to provide minimum backup/BCP, we recommend 4-5 machines instead of 3 per cluster, and 2+ clusters.
-
-For hardware, we use standard Azure VM `DS13_v2` as a reference (each core must be 2.6 GHz or better, with AVX2 instruction set enabled).
-
-| Instance | vCPU(s) | RAM | Temp storage | Pay-as-you-go with AHB | 1-year reserve with AHB (% Savings) | 3-year reserved with AHB (% Savings) |
-|--||--|--||-|--|
-| `DS13 v2` | 8 | 56 GiB | 112 GiB | $0.598/hour | $0.3528/hour (~41%) | $0.2333/hour (~61%) |
-
-Based on the design reference (two clusters of 5 VMs to handle 1,000 hours/day of audio batch processing), the 1-year hardware cost will be:
-
-> 2 (clusters) * 5 (VMs per cluster) * $0.3528/hour * 365 (days) * 24 (hours) = $31K / year
-
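That figure checks out arithmetically, using only the numbers quoted above:

```python
# Yearly cost of two clusters of five DS13_v2 VMs at the
# 1-year reserved rate quoted in the table above.
clusters = 2
vms_per_cluster = 5
hourly_rate = 0.3528  # USD/hour, 1-year reserved with AHB

yearly_cost = clusters * vms_per_cluster * hourly_rate * 365 * 24
print(round(yearly_cost))  # 30905, i.e. roughly $31K/year
```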
-When mapping to a physical machine, a general estimation is 1 vCPU = 1 physical CPU core. In reality, 1 vCPU is more powerful than a single core.
-
-For on-prem, all of these additional factors come into play:
-- The type of the physical CPU and how many cores it has
-- How many CPUs run together on the same box/machine
-- How the VMs are set up
-- How hyper-threading / multi-threading is used
-- How memory is shared
-- The OS, etc.
-Normally, an on-premises environment is not as well tuned as the Azure environment. Considering other overhead, a safe estimation is 10 physical CPU cores = 8 Azure vCPUs, though popular CPUs only have eight cores. With on-premises deployment, the cost will be higher than using Azure VMs. Also, consider the depreciation rate.
-
-Service cost is the same as the online service: $1/hour for speech-to-text. The Speech service cost is:
-
-> $1 * 1000 * 365 = $365K
-
-Maintenance cost paid to Microsoft depends on the service level and the content of the service. It varies from $29.99/month for the basic level to hundreds of thousands if onsite service is involved. A rough number is $300/hour for service/maintenance. People cost is not included. Other infrastructure costs (such as storage, networks, and load balancers) are not included.
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Why is punctuation missing from the transcription?</b>
-</summary>
-
-**Answer:** The `speech_recognition_language=<YOUR_LANGUAGE>` should be explicitly configured in the request if you are using the Carbon client.
-
-For example:
-
-```python
-if not recognize_once(
- speechsdk.SpeechRecognizer(
- speech_config=speechsdk.SpeechConfig(
- endpoint=template.format("interactive"),
- speech_recognition_language="ja-JP"),
- audio_config=audio_config)):
-
- print("Failed interactive endpoint")
- exit(1)
-```
-Here is the output:
-
-```cmd
-RECOGNIZED: SpeechRecognitionResult(
- result_id=2111117c8700404a84f521b7b805c4e7,
- text="まだ早いまだ早いは猫である名前はまだないどこで生まれたかとんと見当を検討をなつかぬ。
- 何でも薄暗いじめじめした所でながら泣いていた事だけは記憶している。
- まだは今ここで初めて人間と言うものを見た。
- しかも後で聞くと、それは書生という人間中で一番同額同額。",
- reason=ResultReason.RecognizedSpeech)
-```
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Can I use a custom acoustic model and language model with Speech container?</b>
-</summary>
-
-We are currently only able to pass one model ID, either custom language model or custom acoustic model.
-
-**Answer:** The decision to *not* support both acoustic and language models concurrently was made. This will remain in effect, until a unified identifier is created to reduce API breaks. So, unfortunately this is not supported right now.
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Could you explain these errors from the custom speech-to-text container?</b>
-</summary>
-
-**Error 1:**
-
-```cmd
-Failed to fetch manifest: Status: 400 Bad Request Body:
-{
- "code": "InvalidModel",
- "message": "The specified model is not supported for endpoint manifests."
-}
-```
-
-**Answer 1:** If you're training with the latest custom model, we currently don't support that. If you train with an older version, it should be possible to use. We are still working on supporting the latest versions.
-
-Essentially, the custom containers do not support Halide or ONNX-based acoustic models (which are the default in the custom training portal). This is because custom models are not encrypted and we don't want to expose ONNX models; language models, however, are fine. The customer will need to explicitly select an older, non-ONNX model for custom training. Accuracy will not be affected. The model size may be larger (by 100 MB).
-
-> Support model > 20190220 (v4.5 Unified)
-
-**Error 2:**
-
-```cmd
-HTTPAPI result code = HTTPAPI_OK.
-HTTP status code = 400.
-Reason: Synthesis failed.
-StatusCode: InvalidArgument,
-Details: Voice does not match.
-```
-
-**Answer 2:** You need to provide the correct voice name in the request, which is case-sensitive. Refer to the full service name mapping.
-
-**Error 3:**
-
-```json
-{
- "code": "InvalidProductId",
- "message": "The subscription SKU \"CognitiveServices.S0\" is not supported in this service instance."
-}
-```
-
-**Answer 3:** You need to create a Speech resource, not a Cognitive Services resource.
--
-<br>
-</details>
-
-<details>
-<summary>
-<b>What API protocols are supported, REST or WS?</b>
-</summary>
-
-**Answer:** For speech-to-text and custom speech-to-text containers, we currently only support the WebSocket-based protocol. The SDK only supports calling over WebSockets, not REST. There's a plan to add REST support, but no ETA for the moment. Always refer to the official documentation; see [query prediction endpoints](speech-container-howto.md#query-the-containers-prediction-endpoint).
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Is CentOS supported for Speech containers?</b>
-</summary>
-
-**Answer:** CentOS 7 is not supported by the Python SDK yet; Ubuntu 19.04 is also not supported.
-
-The Python Speech SDK package is available for these operating systems:
-- **Windows** - x64 and x86
-- **Mac** - macOS X version 10.12 or later
-- **Linux** - Ubuntu 16.04, Ubuntu 18.04, Debian 9 on x64
-For more information on environment setup, see [Python platform setup](quickstarts/setup-platform.md?pivots=programming-language-python). For now, Ubuntu 18.04 is the recommended version.
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Why am I getting errors when attempting to call LUIS prediction endpoints?</b>
-</summary>
-
-I am using the LUIS container in an IoT Edge deployment and am attempting to call the LUIS prediction endpoint from another container. The LUIS container is listening on port 5001, and the URL I'm using is this:
-
-```csharp
-var luisEndpoint =
- $"ws://192.168.1.91:5001/luis/prediction/v3.0/apps/{luisAppId}/slots/production/predict";
-var config = SpeechConfig.FromEndpoint(new Uri(luisEndpoint));
-```
-
-The error I'm getting is:
-
-```cmd
-WebSocket Upgrade failed with HTTP status code: 404 SessionId: 3cfe2509ef4e49919e594abf639ccfeb
-```
-
-I see the request in the LUIS container logs and the message says:
-
-```cmd
-The request path /luis//predict" does not match a supported file type.
-```
-
-What does this mean? What am I missing? I was following the example for the Speech SDK, from [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk). The scenario is that we are detecting the audio directly from the PC microphone and trying to determine the intent, based on the LUIS app we trained. The example I linked to does exactly that. And it works well with the LUIS cloud-based service. Using the Speech SDK seemed to save us from having to make a separate explicit call to the speech-to-text API and then a second call to LUIS.
-
-So, all I am attempting to do is switch from the scenario of using LUIS in the cloud to using the LUIS container. I can't imagine if the Speech SDK works for one, it won't work for the other.
-
-**Answer:**
-The Speech SDK should not be used against a LUIS container. For using the LUIS container, the LUIS SDK or LUIS REST API should be used. Speech SDK should be used against a speech container.
-
-A cloud is different than a container. A cloud can be composed of multiple aggregated containers (sometimes called micro services). So there is a LUIS container and there is a Speech container - two separate containers. The Speech container only does speech. The LUIS container only does LUIS.
-
-In the cloud, because both containers are known to be deployed, and it is bad performance for a remote client to go to the cloud, do speech, come back, then go to the cloud again and do LUIS, we provide a feature that allows the client to go to Speech, stay in the cloud, go to LUIS, then come back to the client. Thus, even in this scenario, the Speech SDK goes to the Speech cloud container with audio, and then the Speech cloud container talks to the LUIS cloud container with text. The LUIS container has no concept of accepting audio (it would not make sense for the LUIS container to accept streaming audio - LUIS is a text-based service).
-
-With on-prem, we have no certainty that our customer has deployed both containers, and we don't presume to orchestrate between containers on our customers' premises. If both containers are deployed on-prem, given they are more local to the client, it is not a burden to go to speech recognition first, return to the client, and have the customer then take that text to LUIS.
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Why are we getting errors with macOS, Speech container and the Python SDK?</b>
-</summary>
-
-When we send a *.wav* file to be transcribed, the result comes back with:
-
-```cmd
-recognition is running....
-Speech Recognition canceled: CancellationReason.Error
-Error details: Timeout: no recognition result received.
-When creating a websocket connection from the browser as a test, we get:
-wb = new WebSocket("ws://localhost:5000/speech/recognition/dictation/cognitiveservices/v1")
-WebSocket
-{
- url: "ws://localhost:5000/speech/recognition/dictation/cognitiveservices/v1",
- readyState: 0,
- bufferedAmount: 0,
- onopen: null,
- onerror: null,
- ...
-}
-```
-
-We know the websocket is set up correctly.
-
-**Answer:**
-If that is the case, then see [this GitHub issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/310). We have a work-around, [proposed here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/310#issuecomment-527542722).
-
-Carbon fixed this at version 1.8.
--
-<br>
-</details>
-
-<details>
-<summary>
-<b>What are the differences in the Speech container endpoints?</b>
-</summary>
-
-Could you help fill the following test metrics, including what functions to test, and how to test the SDK and REST APIs? Especially, differences in "interactive" and "conversation", which I did not see from existing doc/sample.
-
-| Endpoint | Functional test | SDK | REST API |
-||-|--|-|
-| `/speech/synthesize/cognitiveservices/v1` | Synthesize Text (text-to-speech) | | Yes |
-| `/speech/recognition/dictation/cognitiveservices/v1` | Cognitive Services on-prem dictation v1 websocket endpoint | Yes | No |
-| `/speech/recognition/interactive/cognitiveservices/v1` | The Cognitive Services on-prem interactive v1 websocket endpoint | | |
-| `/speech/recognition/conversation/cognitiveservices/v1` | The cognitive services on-prem conversation v1 websocket endpoint | | |
-
-**Answer:**
-This is a fusion of:
-- People trying the dictation endpoint for containers (I'm not sure how they got that URL).
-- The 1st party endpoint being the one in a container.
-- The 1st party endpoint returning `speech.fragment` messages instead of the `speech.hypothesis` messages the 3rd party endpoints return for the dictation endpoint.
-- The Carbon quickstarts all using `RecognizeOnce` (interactive mode).
-- Carbon having an assert requiring that `speech.fragment` messages aren't returned in interactive mode.
-- Carbon having the asserts fire in release builds (killing the process).
-The workaround is either to switch to using continuous recognition in your code, or (quicker) to connect to either the interactive or continuous endpoints in the container.
-For your code, set the endpoint to `host:port/speech/recognition/interactive/cognitiveservices/v1`.
-
-For the various modes, see Speech modes - see below:
-
-## Speech modes - Interactive, conversation, dictation
--
-The proper fix is coming with SDK 1.8, which has on-prem support (it will pick the right endpoint, so we will be no worse than the online service). In the meantime, there is a sample for continuous recognition:
-
-https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/6805d96bf69d9e95c9137fe129bc5d81e35f6309/samples/python/console/speech_sample.py#L196
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Which mode should I use for various audio files?</b>
-</summary>
-
-**Answer:** Here's a [quickstart using Python](./get-started-speech-to-text.md?pivots=programming-language-python). You can find the other languages linked on the docs site.
-
-Just to clarify interactive, conversation, and dictation: these are an advanced way of specifying exactly how our service will handle the speech request. Unfortunately, for the on-prem containers we have to specify the full URI (since it includes the local machine), so this information leaked from the abstraction. We are working with the SDK team to make this more usable in the future.
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>How can we benchmark a rough measure of transactions/second/core?</b>
-</summary>
-
-**Answer:** Here are some rough numbers to expect from the existing model (they will change for the better in the one we ship at GA):
-- For files, the throttling will be in the Speech SDK, at 2x. The first five seconds of audio are not throttled. The decoder is capable of doing about 3x real time; for this, the overall CPU usage will be close to 2 cores for a single recognition.
-- For a microphone, it will be at 1x real time. The overall usage should be about 1 core for a single recognition.
-This can all be verified from the docker logs. We actually dump the line with session and phrase/utterance statistics, and that includes the RTF numbers.
--
-<br>
-</details>
-
-<details>
-<summary>
-<b>Is it common to split audio files into chunks for Speech container usage?</b>
-</summary>
-
-My current plan is to take an existing audio file and split it up into 10 second chunks and send those through the container. Is that an acceptable scenario? Is there a better way to process larger audio files with the container?
-
-**Answer:** Just use the Speech SDK and give it the file; it will do the right thing. There is no need to chunk the file.
--
-<br>
-</details>
-
-<details>
-<summary>
-<b>How do I make multiple containers run on the same host?</b>
-</summary>
-
-The doc says to expose a different port, which I do, but the LUIS container is still listening on port 5000?
-
-**Answer:** Try `-p <outside_unique_port>:5000`. For example, `-p 5001:5000`.
--
-<br>
-</details>
-
-## Technical questions
-
-<details>
-<summary>
-<b>How can I get non-batch APIs to handle audio &lt;15 seconds long?</b>
-</summary>
-
-**Answer:** `RecognizeOnce()` in interactive mode only processes up to 15 seconds of audio, as the mode is intended for Speech Commanding where utterances are expected to be short. If you use `StartContinuousRecognition()` for dictation or conversation, there is no 15 second limit.
--
-<br>
-</details>
-
-<details>
-<summary>
-<b>What are the recommended resources, CPU and RAM; for 50 concurrent requests?</b>
-</summary>
-
-How many concurrent requests will a 4 core, 4 GB RAM handle? If we have to serve for example, 50 concurrent requests, how many Core and RAM is recommended?
-
-**Answer:**
-At real time, about 8 concurrent requests with our latest `en-US` model, so we recommend using more Docker containers beyond 6 concurrent requests. Beyond 16 cores it becomes non-uniform memory access (NUMA) node sensitive. The following table describes the minimum and recommended allocation of resources for each Speech container.
-
-# [Speech-to-text](#tab/stt)
-
-| Container | Minimum | Recommended |
-|-|||
-| Speech-to-text | 2 core, 2-GB memory | 4 core, 4-GB memory |
-
-# [Custom Speech-to-text](#tab/cstt)
-
-| Container | Minimum | Recommended |
-|--|||
-| Custom Speech-to-text | 2 core, 2-GB memory | 4 core, 4-GB memory |
-
-# [Text-to-speech](#tab/tts)
-
-| Container | Minimum | Recommended |
-|-|||
-| Text-to-speech | 1 core, 2-GB memory | 2 core, 3-GB memory |
-
-# [Custom Text-to-speech](#tab/ctts)
-
-| Container | Minimum | Recommended |
-|--|||
-| Custom Text-to-speech | 1 core, 2-GB memory | 2 core, 3-GB memory |
-
-***
-- Each core must be at least 2.6 GHz or faster.
-- For files, the throttling will be in the Speech SDK, at 2x (the first 5 seconds of audio are not throttled).
-- The decoder is capable of doing about 2-3x real time. For this, the overall CPU usage will be close to two cores for a single recognition. That's why we do not recommend keeping more than two active connections per container instance. The extreme side would be to put about 10 decoders at 2x real time in an eight-core machine like `DS13_V2`. For container version 1.3 and later, there's a parameter you could try setting: `DECODER_MAX_COUNT=20`.
-- For a microphone, it will be at 1x real time. The overall usage should be about one core for a single recognition.
-Consider the total number of hours of audio you have. If the number is large, to improve reliability/availability, we suggest running more instances of containers, either on a single box or on multiple boxes, behind a load balancer. Orchestration could be done using Kubernetes (K8S) and Helm, or with Docker compose.
-
-As an example, to handle 1000 hours/24 hours, we have tried setting up 3-4 VMs, with 10 instances/decoders per VM.
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Does the Speech container support punctuation?</b>
-</summary>
-
-**Answer:** We have capitalization (ITN) available in the on-prem container. Punctuation is language-dependent, and not supported for some languages, including Chinese and Japanese.
-
-We *do* have implicit and basic punctuation support for the existing containers, but it is `off` by default. What that means is that you can get the `.` character in your example, but not the `。` character. To enable this implicit logic, here's an example of how to do so in Python using our Speech SDK (it would be similar in other languages):
-
-```python
-speech_config.set_service_property(
- name='punctuation',
- value='implicit',
- channel=speechsdk.ServicePropertyChannel.UriQueryParameter
-)
-```
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>Why am I getting 404 errors when attempting to POST data to speech-to-text container?</b>
-</summary>
-
-Here is an example HTTP POST:
-
-```http
-POST /speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1
-Accept: application/json;text/xml
-Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000
-Transfer-Encoding: chunked
-User-Agent: PostmanRuntime/7.18.0
-Cache-Control: no-cache
-Postman-Token: xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
-Host: 10.0.75.2:5000
-Accept-Encoding: gzip, deflate
-Content-Length: 360044
-Connection: keep-alive
-HTTP/1.1 404 Not Found
-Date: Tue, 22 Oct 2019 15:42:56 GMT
-Server: Kestrel
-Content-Length: 0
-```
-
-**Answer:** We do not support REST API in either speech-to-text container, we only support WebSockets through the Speech SDK. Always refer to the official documentation, see [query prediction endpoints](speech-container-howto.md#query-the-containers-prediction-endpoint).
-
-<br>
-</details>
--
-<details>
-<summary>
-<b> Why is the container running as a non-root user? What issues might occur because of this?</b>
-</summary>
-
-**Answer:** Note that the default user inside the container is a non-root user. This provides protection against processes escaping the container and obtaining escalated permissions on the host node. By default, some platforms like the OpenShift Container Platform already do this by running containers using an arbitrarily assigned user ID. For these platforms, the non-root user will need to have permissions to write to any externally mapped volume that requires writes. For example a logging folder, or a custom model download folder.
-<br>
-</details>
-
-<details>
-<summary>
-<b>When using the speech-to-text service, why am I getting this error?</b>
-</summary>
-
-```cmd
-Error in STT call for file 9136835610040002161_413008000252496:
-{
- "reason": "ResultReason.Canceled",
- "error_details": "Due to service inactivity the client buffer size exceeded. Resetting the buffer. SessionId: xxxxx..."
-}
-```
-
-**Answer:** This typically happens when you feed the audio faster than the Speech recognition container can take it. Client buffers fill up, and the cancellation is triggered. You need to control the concurrency and the RTF at which you send the audio.
-
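One way to avoid this, sketched below in plain Python (independent of the Speech SDK, whose streaming APIs you would plug this into; the function name is illustrative), is to pace the audio you push so the stream never runs ahead of real time:

```python
import time


def pace_chunks(chunks, bytes_per_second, sleep=time.sleep, now=time.monotonic):
    """Yield audio chunks, sleeping so the stream stays at ~1x real time."""
    start = now()
    sent = 0
    for chunk in chunks:
        # How far ahead of real time we would be if we sent this immediately.
        ahead = sent / bytes_per_second - (now() - start)
        if ahead > 0:
            sleep(ahead)
        sent += len(chunk)
        yield chunk
```

For 16 kHz, 16-bit mono PCM, `bytes_per_second` is 32000. The `sleep` and `now` parameters are injectable only so the pacing logic is testable.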
-<br>
-</details>
-
-<details>
-<summary>
-<b>Could you explain these text-to-speech container errors from the C++ examples?</b>
-</summary>
-
-**Answer:** If the container version is older than 1.3, then this code should be used:
-
-```cpp
-const auto endpoint = "http://localhost:5000/speech/synthesize/cognitiveservices/v1";
-auto config = SpeechConfig::FromEndpoint(endpoint);
-auto synthesizer = SpeechSynthesizer::FromConfig(config);
-auto result = synthesizer->SpeakTextAsync("{{{text1}}}").get();
-```
-
-Older containers don't have the required endpoint for Carbon to work with the `FromHost` API. If the containers used for version 1.3, then this code should be used:
-
-```cpp
-const auto host = "http://localhost:5000";
-auto config = SpeechConfig::FromHost(host);
-config->SetSpeechSynthesisVoiceName(
- "Microsoft Server Speech Text to Speech Voice (en-US, AriaRUS)");
-auto synthesizer = SpeechSynthesizer::FromConfig(config);
-auto result = synthesizer->SpeakTextAsync("{{{text1}}}").get();
-```
-
-Below is an example of using the `FromEndpoint` API:
-
-```cpp
-const auto endpoint = "http://localhost:5000/cognitiveservices/v1";
-auto config = SpeechConfig::FromEndpoint(endpoint);
-config->SetSpeechSynthesisVoiceName(
- "Microsoft Server Speech Text to Speech Voice (en-US, AriaRUS)");
-auto synthesizer = SpeechSynthesizer::FromConfig(config);
-auto result = synthesizer->SpeakTextAsync("{{{text2}}}").get();
-```
-
- The `SetSpeechSynthesisVoiceName` function is called because the containers with an updated text-to-speech engine require the voice name.
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>How can I use v1.7 of the Speech SDK with a Speech container?</b>
-</summary>
-
-**Answer:** There are three endpoints on the Speech container for different usages, they're defined as Speech modes - see below:
-
-## Speech modes
--
-They are for different purposes and are used differently.
-
-Python [samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py):
-- For single recognition (interactive mode) with a custom endpoint (that is, `SpeechConfig` with an endpoint parameter), see `speech_recognize_once_from_file_with_custom_endpoint_parameters()`.
-- For continuous recognition (conversation mode), modify the same sample to use a custom endpoint as above; see `speech_recognize_continuous_from_file()`.
-- To enable dictation in samples like the above (only if you really need it), add `speech_config.enable_dictation()` right after you create `speech_config`.
-
-In C# to enable dictation, invoke the `SpeechConfig.EnableDictation()` function.
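To make the three modes concrete, here is a small sketch that builds the mode-specific endpoint URLs. The path strings are assumptions based on the public speech-to-text container endpoints, not values confirmed by this article; verify them against your container version.

```python
# Illustrative sketch only: the mode-specific endpoint paths below are
# assumptions; check them against your container's documentation.
BASE = "ws://localhost:5000"

SPEECH_MODE_PATHS = {
    "interactive": "/speech/recognition/interactive/cognitiveservices/v1",
    "conversation": "/speech/recognition/conversation/cognitiveservices/v1",
    "dictation": "/speech/recognition/dictation/cognitiveservices/v1",
}

def endpoint_for(mode: str, base: str = BASE) -> str:
    """Return the full container endpoint URL for a given Speech mode."""
    return base + SPEECH_MODE_PATHS[mode]

print(endpoint_for("interactive"))
```

Each mode maps to its own path under the same host, which is why the later `FromHost` API only needs the base `protocol://hostname:port` string.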
-
-### `FromEndpoint` APIs
-| Language | API details |
-|-|:|
-| C++ | <a href="https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#fromendpoint" target="_blank">`SpeechConfig::FromEndpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| C# | <a href="https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.fromendpoint?view=azure-dotnet" target="_blank">`SpeechConfig.FromEndpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| Java | <a href="https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechconfig.fromendpoint" target="_blank">`SpeechConfig.fromendpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| Objective-C | <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithendpoint" target="_blank">`SPXSpeechConfiguration:initWithEndpoint:` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| Python | <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig?view=azure-python" target="_blank">`SpeechConfig` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| JavaScript | Not currently supported, nor is it planned. |
-
-<br>
-</details>
-
-<details>
-<summary>
-<b>How can I use v1.8 of the Speech SDK with a Speech container?</b>
-</summary>
-
-**Answer:** There's a new `FromHost` API. This does not replace or modify any existing APIs. It just adds an alternative way to create a speech config using a custom host.
-
-### `FromHost` APIs
-
-| Language | API details |
-|--|:-|
-| C# | <a href="https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.fromhost?view=azure-dotnet" target="_blank">`SpeechConfig.FromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| C++ | <a href="https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#fromhost" target="_blank">`SpeechConfig::FromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| Java | <a href="https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechconfig.fromhost" target="_blank">`SpeechConfig.fromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| Objective-C | <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithhost" target="_blank">`SPXSpeechConfiguration:initWithHost:` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| Python | <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig?view=azure-python" target="_blank">`SpeechConfig` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
-| JavaScript | Not currently supported |
-
-> Parameters: host (mandatory) and subscription key (optional, if the service can be used without a key).
-
-Format for host is `protocol://hostname:port` where `:port` is optional (see below):
-- If the container is running locally, the hostname is `localhost`.
-- If the container is running on a remote server, use the hostname or IPv4 address of that server.
-
-Host parameter examples for speech-to-text:
-- `ws://localhost:5000` - non-secure connection to a local container using port 5000
-- `ws://some.host.com:5000` - non-secure connection to a container running on a remote server
-
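As a quick sanity check, the `protocol://hostname:port` host format decomposes cleanly with Python's standard `urllib.parse`; this is only an illustrative sketch of the format, not SDK code.

```python
from urllib.parse import urlparse

def split_host(host: str):
    """Split a host string of the form protocol://hostname:port into
    (protocol, hostname, port); port is None when omitted."""
    parts = urlparse(host)
    return parts.scheme, parts.hostname, parts.port

print(split_host("ws://localhost:5000"))
print(split_host("ws://some.host.com"))
```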
-The same Python samples as above, but using the `host` parameter instead of `endpoint`:
-
-```python
-speech_config = speechsdk.SpeechConfig(host="ws://localhost:5000")
-```
-
-<br>
-</details>
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Cognitive Services containers](speech-container-howto.md)
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-virtual-networks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-virtual-networks.md
@@ -7,7 +7,7 @@
Previously updated : 01/27/2021 Last updated : 02/09/2021
@@ -54,7 +54,7 @@ Virtual networks (VNETs) are supported in [regions where Cognitive Services are
> [!NOTE]
-> If you're using LUIS, the **CognitiveServicesManagement** tag only enables you use the service using the SDK or REST API. To access and use LUIS portal from a virtual network, you will need to use the following tags:
+> If you're using LUIS or Speech Services, the **CognitiveServicesManagement** tag only enables you to use the service via the SDK or REST API. To access and use the LUIS portal and/or Speech Studio from a virtual network, you will need to use the following tags:
> * **AzureActiveDirectory**
> * **AzureFrontDoor.Frontend**
> * **AzureResourceManager**
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 01/27/2021 Last updated : 02/09/2021
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md
@@ -30,6 +30,16 @@ Before you use the Text Analytics API, you will need to create a Azure resource
3. Create the Text Analytics resource and go to the "Keys and Endpoint" blade on the left of the page. Copy the key to use later when you call the APIs; you'll add it as the value for the `Ocp-Apim-Subscription-Key` header.
+## Change your pricing tier
+
+If you have an existing Text Analytics resource using the S0 through S4 pricing tier, you can update it to use the Standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/):
+
+1. Navigate to your Text Analytics resource in the [Azure portal](https://portal.azure.com/).
+2. Select **Pricing tier** in the left navigation menu. It will be below **RESOURCE MANAGEMENT**.
+3. Choose the Standard (S) pricing tier. Then click **Select**.
+
+You can also create a new Text Analytics resource with the Standard (S) pricing tier, and migrate your applications to use the credentials for the new resource.
+
## Using the API synchronously

You can call Text Analytics synchronously (for low-latency scenarios). When using the synchronous API, you have to call each API (feature) separately. If you need to call multiple features, see the section below on how to call Text Analytics asynchronously.
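As a rough sketch of what one synchronous call looks like, the snippet below assembles the URL, headers, and JSON body. The endpoint, key, and `v3.0/sentiment` path are placeholder assumptions for illustration; the `Ocp-Apim-Subscription-Key` header carries the key copied from the resource blade above.

```python
import json

# Hypothetical values; substitute your resource's endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
PATH = "/text/analytics/v3.0/sentiment"  # one feature per synchronous call

def build_request(documents):
    """Assemble URL, headers, and JSON body for one synchronous call."""
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-key>",
        "Content-Type": "application/json",
    }
    body = {"documents": [
        {"id": str(i + 1), "language": "en", "text": text}
        for i, text in enumerate(documents)
    ]}
    return ENDPOINT + PATH, headers, json.dumps(body)

url, headers, payload = build_request(["The docs were easy to follow."])
print(url)
```

Because each feature has its own path, calling several features synchronously means repeating a request like this once per feature.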
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/whats-new-docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/whats-new-docs.md
@@ -1,69 +1,61 @@
Title: "Cognitive
-description: "What's new in the Cognitive Services docs for December 1, 2020 - December 31, 2020."
+description: "What's new in the Cognitive Services docs for January 1, 2021 - January 31, 2021."
Previously updated : 01/05/2021 Last updated : 02/08/2021
-# Cognitive Services docs: What's new for December 1, 2020 - December 31, 2020
+# Cognitive Services docs: What's new for January 1, 2021 - January 31, 2021
-Welcome to what's new in the Cognitive Services docs from December 1, 2020 through December 31, 2020. This article lists some of the major changes to docs during this period.
+Welcome to what's new in the Cognitive Services docs from January 1, 2021 through January 31, 2021. This article lists some of the major changes to docs during this period.
## Cognitive Services
-### New articles
+**Updated articles**
- [Plan and manage costs for Azure Cognitive Services](plan-manage-costs.md)
+- [Azure Cognitive Services containers](cognitive-services-container-support.md)
-### Updated articles
-
-- [Configure Azure Cognitive Services virtual networks](cognitive-services-virtual-networks.md)
-
-## Anomaly Detector
-
-### Updated articles
+## Form Recognizer
-- [Anomaly Detector REST API quickstart](./anomaly-detector/quickstarts/client-libraries.md?pivots=rest-api&tabs=windows)
+**New articles**
-## Bing Visual Search
+- [Tutorial: Extract form data in bulk using Azure Data Factory](/azure/cognitive-services/form-recognizer/tutorial-bulk-processing.md)
-### Updated articles
+**Updated articles**
-- [Use an insights token to get insights for an image](./bing-visual-search/use-insights-token.md)
+- [What is Form Recognizer?](/azure/cognitive-services/form-recognizer/overview.md)
-## Containers
+## Immersive Reader
-### Updated articles
+**Updated articles**
-- [Deploy and run container on Azure Container Instance](./containers/azure-container-instance-recipe.md)
+- [Create an Immersive Reader resource and configure Azure Active Directory authentication](/azure/cognitive-services/immersive-reader/how-to-create-immersive-reader.md)
-## Form Recognizer
+## Personalizer
-### Updated articles
+**Updated articles**
-- [Form Recognizer landing page](./form-recognizer/index.yml)
-- [Quickstart: Use the Form Recognizer client library](./form-recognizer/quickstarts/client-library.md)
+- [Features are information about actions and context](/azure/cognitive-services/personalizer/concepts-features.md)
## Text Analytics
-### Updated articles
+**Updated articles**
-- [Text Analytics API v3 language support](./text-analytics/language-support.md)
-- [How to call the Text Analytics REST API](./text-analytics/how-tos/text-analytics-how-to-call-api.md)
-- [How to use Named Entity Recognition in Text Analytics](./text-analytics/how-tos/text-analytics-how-to-entity-linking.md)
-- [Example: How to extract key phrases using Text Analytics](./text-analytics/how-tos/text-analytics-how-to-keyword-extraction.md)
-- [Text Analytics API Documentation - Tutorials, API Reference - Azure Cognitive Services | Microsoft Docs](./text-analytics/index.yml)
-- [Quickstart: Use the Text Analytics client library and REST API](./text-analytics/quickstarts/client-libraries-rest-api.md)
+- [Text Analytics API v3 language support](/azure/cognitive-services/text-analytics/language-support.md)
+- [Migrate to version 3.x of the Text Analytics API](/azure/cognitive-services/text-analytics/migration-guide.md)
+- [What's new in the Text Analytics API?](/azure/cognitive-services/text-analytics/whats-new.md)
## Community contributors
-The following people contributed to the Cognitive Services docs during this period. Thank you!
+The following people contributed to the Cognitive Services docs during this period. Thank you! Learn how to contribute by following the links under "Get involved" in the [what's new landing page](index.yml).
-- [hyoshioka0128](https://github.com/hyoshioka0128) - Hiroshi Yoshioka (1)
-- [pymia](https://github.com/pymia) - Mia // Huai-Wen Chang (1)
+- [AnweshGangula](https://github.com/AnweshGangula) - Anwesh Gangula (1)
+- [cdglasz](https://github.com/cdglasz) - Christopher Glasz (1)
+- [huybuidac](https://github.com/huybuidac) - Bui Dac Huy (1)
[!INCLUDE [Service specific updates](./includes/service-specific-updates.md)]
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/teams-interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
@@ -0,0 +1,44 @@
+
+ Title: Teams meeting interoperability
+
+description: Join Teams meetings
+Last updated : 10/10/2020
+# Teams interoperability
++
+Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, and screen sharing.
+
+This interoperability allows you to create custom Azure applications that connect users to Teams meetings. Users of your custom applications don't need to have Azure Active Directory identities or Teams licenses to experience this capability. This is ideal for bringing employees (who may be familiar with Teams) and external users (using a custom application experience) together into a seamless meeting experience. This allows you to build experiences similar to the following:
+
+1. Employees use Teams to schedule a meeting
+2. Your custom Communication Services application uses the Microsoft Graph APIs to access meeting details
+3. Meeting details are shared with external users through your custom application
+4. External users use your custom application to join the Teams meeting (via the Communication Services Calling client library)
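The steps above can be sketched roughly in code. This is a hedged illustration of step 2 only: the Graph URL pattern and the `joinWebUrl` field are assumptions standing in for the real Microsoft Graph online-meeting contract, and the canned payload replaces a live Graph call.

```python
# Hedged sketch of step 2: fetching meeting details so the join link can
# be shared with external users. URL pattern and field names are
# illustrative assumptions, not a definitive Graph contract.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def meeting_details_url(meeting_id: str) -> str:
    """Build the (assumed) Graph URL for one online meeting."""
    return f"{GRAPH_BASE}/me/onlineMeetings/{meeting_id}"

def extract_join_link(graph_response: dict) -> str:
    """Pull the join URL out of a meeting payload."""
    return graph_response["joinWebUrl"]

# Canned payload standing in for a real Graph response:
sample = {"id": "meeting123", "joinWebUrl": "https://teams.microsoft.com/l/meetup-join/..."}
print(meeting_details_url("meeting123"))
print(extract_join_link(sample))
```

The extracted join link is what your custom application passes to the Communication Services Calling client library in step 4.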
+
+The high-level architecture for this use-case looks like this:
+
+![Architecture for Teams interop](./media/call-flows/teams-interop.png)
+
+While certain Teams meeting features such as raised hand, together mode, and breakout rooms will only be available for Teams users, your custom application will have access to the meeting's core audio, video, and screen sharing capabilities.
+
+When a Communication Services user joins the Teams meeting, the display name provided through the Calling client library will be shown to Teams users. The Communication Services user will otherwise be treated like an anonymous user in Teams. Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
+
+Communication Services users can join scheduled Teams meetings as long as anonymous joins are enabled in the [meeting settings](/microsoftteams/meeting-settings-in-teams).
+
+## Teams in Government Clouds (GCC)
+Azure Communication Services interoperability isn't currently available for Teams deployments using [Microsoft 365 government clouds (GCC)](/MicrosoftTeams/plan-for-government-gcc).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Join your calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)
communication-services https://docs.microsoft.com/en-us/azure/communication-services/concepts/ui-framework/ui-sdk-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/ui-sdk-overview.md
@@ -43,7 +43,7 @@ Understanding these requirements will help you choose the right client library:
Details about feature availability in the varied [UI SDKs is available here](ui-sdk-features.md), but key trade-offs are summarized below.
-|Client library / SDK|Implementation Complexity| Customization Ability| Calling| Chat| [Teams Interop](./../voice-video-calling/teams-interop.md)
+|Client library / SDK|Implementation Complexity| Customization Ability| Calling| Chat| [Teams Interop](./../teams-interop.md)
|---|---|---|---|---|---|
|Composite Components|Low|Low|✔|✔|✕|
|Base Components|Medium|Medium|✔|✔|✕|
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication-managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-authentication-managed-identity.md
@@ -48,7 +48,7 @@ This article assumes you have the `aci-helloworld:v1` container image stored in
## Create a Docker-enabled VM
+Create a Docker-enabled Ubuntu virtual machine. You also need to install the [Azure CLI](/cli/azure/install-azure-cli) on the virtual machine. If you already have an Azure virtual machine, skip this creation step.
+Create a Docker-enabled Ubuntu virtual machine. You also need to install the [Azure CLI](/cli/azure/install-azure-cli) on the virtual machine. If you already have an Azure virtual machine, skip this step to create the virtual machine.
Deploy a default Ubuntu Azure virtual machine with [az vm create][az-vm-create]. The following example creates a VM named *myDockerVM* in an existing resource group named *myResourceGroup*:
@@ -81,7 +81,7 @@ sudo apt install docker.io -y
After installation, run the following command to verify that Docker is running properly on the VM:

```bash
-sudo docker run -it hello-world
+sudo docker run -it mcr.microsoft.com/hello-world
```

Output:
@@ -94,7 +94,7 @@ This message shows that your installation appears to be working correctly.
### Install the Azure CLI
-Follow the steps in [Install Azure CLI with apt](/cli/azure/install-azure-cli-apt?view=azure-cli-latest) to install the Azure CLI on your Ubuntu virtual machine. For this article, ensure that you install version 2.0.55 or later.
+Follow the steps in [Install Azure CLI with apt](/cli/azure/install-azure-cli-apt) to install the Azure CLI on your Ubuntu virtual machine. For this article, ensure that you install version 2.0.55 or later.
Exit the SSH session.
@@ -102,7 +102,7 @@ Exit the SSH session.
### Create an identity
-Create an identity in your subscription using the [az identity create](/cli/azure/identity?view=azure-cli-latest#az-identity-create) command. You can use the same resource group you used previously to create the container registry or virtual machine, or a different one.
+Create an identity in your subscription using the [az identity create](/cli/azure/identity#az-identity-create) command. You can use the same resource group you used previously to create the container registry or virtual machine, or a different one.
```azurecli-interactive
az identity create --resource-group myResourceGroup --name myACRId
```
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-concepts.md
@@ -76,7 +76,30 @@ To provide secure isolation and protection from potential layer manipulation, la
### Manifest
-Each container image or artifact pushed to a container registry is associated with a *manifest*. The manifest, generated by the registry when the image is pushed, uniquely identifies the image and specifies its layers. You can list the manifests for a repository with the Azure CLI command [az acr repository show-manifests][az-acr-repository-show-manifests]:
+Each container image or artifact pushed to a container registry is associated with a *manifest*. The manifest, generated by the registry when the image is pushed, uniquely identifies the image and specifies its layers.
+
+A basic manifest for a Linux `hello-world` image looks similar to the following:
+
+ ```json
+ {
+ "schemaVersion": 2,
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "config": {
+ "mediaType": "application/vnd.docker.container.image.v1+json",
+ "size": 1510,
+ "digest": "sha256:fbf289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e"
+ },
+ "layers": [
+ {
+ "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
+ "size": 977,
+ "digest": "sha256:2c930d010525941c1d56ec53b97bd057a67ae1865eebf042686d2a2d18271ced"
+ }
+ ]
+ }
+ ```
+
+You can list the manifests for a repository with the Azure CLI command [az acr repository show-manifests][az-acr-repository-show-manifests]:
```azurecli
az acr repository show-manifests --name <acrName> --repository <repositoryName>
```
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-get-started-docker-cli.md
@@ -1,16 +1,16 @@
Title: Push & pull Docker image
-description: Push and pull Docker images to a private container registry in Azure using the Docker CLI
+ Title: Push & pull container image
+description: Push and pull Docker images to your private container registry in Azure using the Docker CLI
Last updated 01/23/2019
-# Push your first image to a private Docker container registry using the Docker CLI
+# Push your first image to your Azure container registry using the Docker CLI
-An Azure container registry stores and manages private [Docker](https://hub.docker.com) container images, similar to the way [Docker Hub](https://hub.docker.com/) stores public Docker images. You can use the [Docker command-line interface](https://docs.docker.com/engine/reference/commandline/cli/) (Docker CLI) for [login](https://docs.docker.com/engine/reference/commandline/login/), [push](https://docs.docker.com/engine/reference/commandline/push/), [pull](https://docs.docker.com/engine/reference/commandline/pull/), and other operations on your container registry.
+An Azure container registry stores and manages private container images and other artifacts, similar to the way [Docker Hub](https://hub.docker.com/) stores public Docker container images. You can use the [Docker command-line interface](https://docs.docker.com/engine/reference/commandline/cli/) (Docker CLI) for [login](https://docs.docker.com/engine/reference/commandline/login/), [push](https://docs.docker.com/engine/reference/commandline/push/), [pull](https://docs.docker.com/engine/reference/commandline/pull/), and other container image operations on your container registry.
-In the following steps, you download an official [Nginx image](https://store.docker.com/images/nginx) from the public Docker Hub registry, tag it for your private Azure container registry, push it to your registry, and then pull it from the registry.
+In the following steps, you download a public [Nginx image](https://store.docker.com/images/nginx), tag it for your private Azure container registry, push it to your registry, and then pull it from the registry.
## Prerequisites
@@ -19,9 +19,10 @@ In the following steps, you download an official [Nginx image](https://store.doc
## Log in to a registry
-There are [several ways to authenticate](container-registry-authentication.md) to your private container registry. The recommended method when working in a command line is with the Azure CLI command [az acr login](/cli/azure/acr?view=azure-cli-latest#az-acr-login). For example, to log in to a registry named *myregistry*:
+There are [several ways to authenticate](container-registry-authentication.md) to your private container registry. The recommended method when working in a command line is with the Azure CLI command [az acr login](/cli/azure/acr#az-acr-login). For example, to log in to a registry named *myregistry*, first sign in to the Azure CLI and then authenticate to your registry:
```azurecli
+az login
az acr login --name myregistry
```
@@ -38,20 +39,20 @@ Both commands return `Login Succeeded` once completed.
> [!TIP] > Always specify the fully qualified registry name (all lowercase) when you use `docker login` and when you tag images for pushing to your registry. In the examples in this article, the fully qualified name is *myregistry.azurecr.io*.
-## Pull the official Nginx image
+## Pull a public Nginx image
-First, pull the public Nginx image to your local computer.
+First, pull a public Nginx image to your local computer. This example pulls an image from Microsoft Container Registry.
```
-docker pull nginx
+docker pull mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
```

## Run the container locally
-Execute following [docker run](https://docs.docker.com/engine/reference/run/) command to start a local instance of the Nginx container interactively (`-it`) on port 8080. The `--rm` argument specifies that the container should be removed when you stop it.
+Execute the following [docker run](https://docs.docker.com/engine/reference/run/) command to start a local instance of the Nginx container interactively (`-it`) on port 8080. The `--rm` argument specifies that the container should be removed when you stop it.
```
-docker run -it --rm -p 8080:80 nginx
+docker run -it --rm -p 8080:80 mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
```

Browse to `http://localhost:8080` to view the default web page served by Nginx in the running container. You should see a page similar to the following:
@@ -67,7 +68,7 @@ To stop and remove the container, press `Control`+`C`.
Use [docker tag](https://docs.docker.com/engine/reference/commandline/tag/) to create an alias of the image with the fully qualified path to your registry. This example specifies the `samples` namespace to avoid clutter in the root of the registry.

```
-docker tag nginx myregistry.azurecr.io/samples/nginx
+docker tag mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine myregistry.azurecr.io/samples/nginx
```

For more information about tagging with namespaces, see the [Repository namespaces](container-registry-best-practices.md#repository-namespaces) section of [Best practices for Azure Container Registry](container-registry-best-practices.md).
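The fully qualified reference used by `docker tag` decomposes into registry, optional namespace, repository, and tag. A minimal sketch of that composition (the helper name is illustrative):

```python
def qualified_reference(registry: str, namespace: str, repo: str, tag: str = "latest") -> str:
    """Compose a fully qualified image reference; the registry part
    must be all lowercase, as noted earlier in this article."""
    prefix = f"{namespace}/" if namespace else ""
    return f"{registry.lower()}/{prefix}{repo}:{tag}"

print(qualified_reference("myregistry.azurecr.io", "samples", "nginx"))
```

Omitting the tag implicitly targets `latest`, which is why docs examples often pin an explicit tag instead.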
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-image-formats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-image-formats.md
@@ -15,11 +15,11 @@ The following Docker container image formats are supported:
* [Docker Image Manifest V2, Schema 1](https://docs.docker.com/registry/spec/manifest-v2-1/)
-* [Docker Image Manifest V2, Schema 2](https://docs.docker.com/registry/spec/manifest-v2-2/) - includes Manifest Lists which allow registries to store multiplatform images under a single "image:tag" reference
+* [Docker Image Manifest V2, Schema 2](https://docs.docker.com/registry/spec/manifest-v2-2/) - includes Manifest Lists which allow registries to store [multi-architecture images](push-multi-architecture-images.md) under a single `image:tag` reference
## OCI images
-Azure Container Registry supports images that meet the [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md). Packaging formats include [Singularity Image Format (SIF)](https://github.com/sylabs/sif).
+Azure Container Registry supports images that meet the [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md), including the optional [image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md) specification. Packaging formats include [Singularity Image Format (SIF)](https://github.com/sylabs/sif).
## OCI artifacts
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-import-images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-import-images.md
@@ -63,13 +63,15 @@ az acr repository show-manifests \
  --repository hello-world
```
-The following example imports a public image from the `tensorflow` repository in Docker Hub:
+If you have a [Docker Hub account](https://www.docker.com/pricing), we recommend that you use those credentials when importing an image from Docker Hub. Pass the Docker Hub user name and the password or a [personal access token](https://docs.docker.com/docker-hub/access-tokens/) as parameters to `az acr import`. The following example imports a public image from the `tensorflow` repository in Docker Hub, using Docker Hub credentials:
```azurecli
az acr import \
  --name myregistry \
  --source docker.io/tensorflow/tensorflow:latest-gpu \
  --image tensorflow:latest-gpu \
+  --username <Docker Hub user name> \
+  --password <Docker Hub token>
```

### Import from Microsoft Container Registry
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-oci-artifacts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-oci-artifacts.md
@@ -4,7 +4,7 @@ description: Push and pull Open Container Initiative (OCI) artifacts using a pri
Previously updated : 08/12/2020 Last updated : 02/03/2021
@@ -41,7 +41,7 @@ To read the password from Stdin, use `--password-stdin`.
[Sign in](/cli/azure/authenticate-azure-cli) to the Azure CLI with your identity to push and pull artifacts from the container registry.
-Then, use the Azure CLI command [az acr login](/cli/azure/acr?view=azure-cli-latest#az-acr-login) to access the registry. For example, to authenticate to a registry named *myregistry*:
+Then, use the Azure CLI command [az acr login](/cli/azure/acr#az-acr-login) to access the registry. For example, to authenticate to a registry named *myregistry*:
```azurecli az login
@@ -56,12 +56,12 @@ az acr login --name myregistry
Create a text file in a local working directory with some sample text. For example, in a bash shell:

```bash
-echo "Here is an artifact!" > artifact.txt
+echo "Here is an artifact" > artifact.txt
```

Use the `oras push` command to push this text file to your registry. The following example pushes the sample text file to the `samples/artifact` repo. The registry is identified with the fully qualified registry name *myregistry.azurecr.io* (all lowercase). The artifact is tagged `1.0`. The artifact has an undefined type by default, identified by the *media type* string following the filename `artifact.txt`. See [OCI Artifacts](https://github.com/opencontainers/artifacts) for additional types.
-**Linux**
+**Linux or macOS**
```bash
oras push myregistry.azurecr.io/samples/artifact:1.0 \
@@ -132,7 +132,7 @@ Verify that the pull was successful:
```bash
$ cat artifact.txt
-Here is an artifact!
+Here is an artifact
```

## Remove the artifact (optional)
@@ -152,7 +152,7 @@ Source code and binaries to build a container image can be stored as OCI artifac
For example, create a one-line Dockerfile:

```bash
-echo "FROM hello-world" > hello-world.dockerfile
+echo "FROM mcr.microsoft.com/hello-world" > hello-world.dockerfile
```

Log in to the destination container registry.
@@ -165,14 +165,15 @@ az acr login --name myregistry
Create and push a new OCI artifact to the destination registry by using the `oras push` command. This example sets the default media type for the artifact.

```bash
-oras push myregistry.azurecr.io/hello-world:1.0 hello-world.dockerfile
+oras push myregistry.azurecr.io/dockerfile:1.0 hello-world.dockerfile
```

Run the [az acr build](/cli/azure/acr#az-acr-build) command to build the hello-world image using the new artifact as build context:

```azurecli
-az acr build --registry myregistry --file hello-world.dockerfile \
- oci://myregistry.azurecr.io/hello-world:1.0
+az acr build --registry myregistry --image builds/hello-world:v1 \
+ --file hello-world.dockerfile \
+ oci://myregistry.azurecr.io/dockerfile:1.0
``` ## Next steps
container-registry https://docs.microsoft.com/en-us/azure/container-registry/container-registry-repository-scoped-permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-repository-scoped-permissions.md
@@ -2,12 +2,12 @@
Title: Permissions to repositories in Azure Container Registry description: Create a token with permissions scoped to specific repositories in a Premium registry to pull or push images, or perform other actions Previously updated : 05/27/2020 Last updated : 02/04/2021 # Create a token with repository-scoped permissions
-This article describes how to create tokens and scope maps to manage repository-scoped permissions in your container registry. By creating tokens, a registry owner can provide users or services with scoped, time-limited access to repositories to pull or push images or perform other actions. A token provides more fine-grained permissions than other registry [authentication options](container-registry-authentication.md), which scope permissions to an entire registry.
+This article describes how to create tokens and scope maps to manage access to specific repositories in your container registry. By creating tokens, a registry owner can provide users or services with scoped, time-limited access to repositories to pull or push images or perform other actions. A token provides more fine-grained permissions than other registry [authentication options](container-registry-authentication.md), which scope permissions to an entire registry.
Scenarios for creating a token include:
@@ -56,7 +56,7 @@ The following image shows the relationship between tokens and scope maps.
## Prerequisites
-* **Azure CLI** - Azure CLI commands to create and manage tokens are available in Azure CLI version 2.0.76 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* **Azure CLI** - The Azure CLI command examples in this article require Azure CLI version 2.17.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
* **Docker** - To authenticate with the registry to pull or push images, you need a local Docker installation. Docker provides installation instructions for [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms) systems. * **Container registry** - If you don't have one, create a Premium container registry in your Azure subscription, or upgrade an existing registry. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).
@@ -74,7 +74,7 @@ az acr token create --name MyToken --registry myregistry \
content/write content/read ```
-The output shows details about the token. By default, two passwords are generated. It's recommended to save the passwords in a safe place to use later for authentication. The passwords can't be retrieved again, but new ones can be generated.
+The output shows details about the token. By default, two passwords are generated that don't expire, but you can optionally set an expiration date. It's recommended to save the passwords in a safe place to use later for authentication. The passwords can't be retrieved again, but new ones can be generated.
```console {
@@ -108,7 +108,7 @@ The output shows details about the token. By default, two passwords are generate
``` > [!NOTE]
-> If you want to regenerate token passwords and set password expiration periods, see [Regenerate token passwords](#regenerate-token-passwords) later in this article.
+> To regenerate token passwords and expiration periods, see [Regenerate token passwords](#regenerate-token-passwords) later in this article.
The output includes details about the scope map the command created. You can use the scope map, here named `MyToken-scope-map`, to apply the same repository actions to other tokens. Or, update the scope map later to change the permissions of the associated tokens.
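For example, a sketch of reusing the generated scope map for a second token, so both tokens carry identical repository permissions. The token and registry names are illustrative, and the command needs an authenticated Azure CLI session.

```shell
# Create a second token that attaches to an existing scope map instead of
# defining repository actions inline (names are illustrative).
TOKEN_NAME=MyToken2
SCOPE_MAP=MyToken-scope-map
if command -v az >/dev/null 2>&1; then
  az acr token create \
    --name "$TOKEN_NAME" \
    --registry myregistry \
    --scope-map "$SCOPE_MAP" || true
fi
```

Updating `MyToken-scope-map` later changes the permissions of both tokens at once.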
@@ -136,7 +136,7 @@ az acr token create --name MyToken \
The output shows details about the token. By default, two passwords are generated. It's recommended to save the passwords in a safe place to use later for authentication. The passwords can't be retrieved again, but new ones can be generated. > [!NOTE]
-> If you want to regenerate token passwords and set password expiration periods, see [Regenerate token passwords](#regenerate-token-passwords) later in this article.
+> To regenerate token passwords and expiration periods, see [Regenerate token passwords](#regenerate-token-passwords) later in this article.
## Create token - portal
@@ -193,13 +193,13 @@ The following examples use the token created earlier in this article to perform
### Pull and tag test images
-For the following examples, pull the `hello-world` and `alpine` images from Docker Hub, and tag them for your registry and repository.
+For the following examples, pull public `hello-world` and `nginx` images from Microsoft Container Registry, and tag them for your registry and repository.
```bash
-docker pull hello-world
-docker pull alpine
-docker tag hello-world myregistry.azurecr.io/samples/hello-world:v1
-docker tag alpine myregistry.azurecr.io/samples/alpine:v1
+docker pull mcr.microsoft.com/hello-world
+docker pull mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+docker tag mcr.microsoft.com/hello-world myregistry.azurecr.io/samples/hello-world:v1
+docker tag mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine myregistry.azurecr.io/samples/nginx:v1
``` ### Authenticate using token
@@ -229,17 +229,17 @@ After successful login, attempt to push the tagged images to the registry. Becau
docker push myregistry.azurecr.io/samples/hello-world:v1 ```
-The token doesn't have permissions to the `samples/alpine` repo, so the following push attempt fails with an error similar to `requested access to the resource is denied`:
+The token doesn't have permissions to the `samples/nginx` repo, so the following push attempt fails with an error similar to `requested access to the resource is denied`:
```bash
-docker push myregistry.azurecr.io/samples/alpine:v1
+docker push myregistry.azurecr.io/samples/nginx:v1
``` ### Update token permissions To update the permissions of a token, update the permissions in the associated scope map. The updated scope map is applied immediately to all associated tokens.
-For example, update `MyToken-scope-map` with `content/write` and `content/read` actions on the `samples/alpine` repository, and remove the `content/write` action on the `samples/hello-world` repository.
+For example, update `MyToken-scope-map` with `content/write` and `content/read` actions on the `samples/nginx` repository, and remove the `content/write` action on the `samples/hello-world` repository.
To use the Azure CLI, run [az acr scope-map update][az-acr-scope-map-update] to update the scope map:
@@ -247,21 +247,21 @@ To use the Azure CLI, run [az acr scope-map update][az-acr-scope-map-update] to
az acr scope-map update \ --name MyScopeMap \ --registry myregistry \
- --add samples/alpine content/write content/read \
- --remove samples/hello-world content/write
+ --add-repository samples/nginx content/write content/read \
+ --remove-repository samples/hello-world content/write
``` In the Azure portal: 1. Navigate to your container registry. 1. Under **Repository permissions**, select **Scope maps (Preview)**, and select the scope map to update.
-1. Under **Repositories**, enter `samples/alpine`, and under **Permissions**, select `content/read` and `content/write`. Then select **+Add**.
+1. Under **Repositories**, enter `samples/nginx`, and under **Permissions**, select `content/read` and `content/write`. Then select **+Add**.
1. Under **Repositories**, select `samples/hello-world` and under **Permissions**, deselect `content/write`. Then select **Save**. After updating the scope map, the following push succeeds: ```bash
-docker push myregistry.azurecr.io/samples/alpine:v1
+docker push myregistry.azurecr.io/samples/nginx:v1
``` Because the scope map only has the `content/read` permission on the `samples/hello-world` repository, a push attempt to the `samples/hello-world` repo now fails:
@@ -273,12 +273,12 @@ docker push myregistry.azurecr.io/samples/hello-world:v1
Pulling images from both repos succeeds, because the scope map provides `content/read` permissions on both repositories: ```bash
-docker pull myregistry.azurecr.io/samples/alpine:v1
+docker pull myregistry.azurecr.io/samples/nginx:v1
docker pull myregistry.azurecr.io/samples/hello-world:v1 ``` ### Delete images
-Update the scope map by adding the `content/delete` action to the `alpine` repository. This action allows deletion of images in the repository, or deletion of the entire repository.
+Update the scope map by adding the `content/delete` action to the `nginx` repository. This action allows deletion of images in the repository, or deletion of the entire repository.
For brevity, we show only the [az acr scope-map update][az-acr-scope-map-update] command to update the scope map:
@@ -286,16 +286,16 @@ For brevity, we show only the [az acr scope-map update][az-acr-scope-map-update]
az acr scope-map update \ --name MyScopeMap \ --registry myregistry \
- --add samples/alpine content/delete
+ --add-repository samples/nginx content/delete
``` To update the scope map using the portal, see the [previous section](#update-token-permissions).
-Use the following [az acr repository delete][az-acr-repository-delete] command to delete the `samples/alpine` repository. To delete images or repositories, pass the token's name and password to the command. The following example uses the environment variables created earlier in the article:
+Use the following [az acr repository delete][az-acr-repository-delete] command to delete the `samples/nginx` repository. To delete images or repositories, pass the token's name and password to the command. The following example uses the environment variables created earlier in the article:
```azurecli az acr repository delete \
- --name myregistry --repository samples/alpine \
+ --name myregistry --repository samples/nginx \
--username $TOKEN_NAME --password $TOKEN_PWD ```
@@ -309,7 +309,7 @@ For brevity, we show only the [az acr scope-map update][az-acr-scope-map-update]
az acr scope-map update \ --name MyScopeMap \ --registry myregistry \
- --add samples/hello-world metadata/read
+ --add-repository samples/hello-world metadata/read
``` To update the scope map using the portal, see the [previous section](#update-token-permissions).
@@ -377,7 +377,7 @@ The following example generates a new value for password1 for the *MyToken* toke
```azurecli TOKEN_PWD=$(az acr token credential generate \
- --name MyToken --registry myregistry --days 30 \
+ --name MyToken --registry myregistry --expiration-in-days 30 \
--password1 --query 'passwords[0].value' --output tsv) ```
container-registry https://docs.microsoft.com/en-us/azure/container-registry/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
container-registry https://docs.microsoft.com/en-us/azure/container-registry/push-multi-architecture-images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/push-multi-architecture-images.md
@@ -0,0 +1,169 @@
+
+ Title: Multi-architecture images in your registry
+description: Use your Azure container registry to build, import, store, and deploy multi-architecture (multi-arch) images
+ Last updated : 02/07/2021+++
+# Multi-architecture images in your Azure container registry
+
+This article introduces *multi-architecture* (*multi-arch*) images and how you can use Azure Container Registry features to help create, store, and use them.
+
+A multi-arch image is a type of container image that may combine variants for different architectures, and sometimes for different operating systems. When you run an image with multi-architecture support, container clients automatically select an image variant that matches your OS and architecture.
+
+## Manifests and manifest lists
+
+Multi-arch images are based on image manifests and manifest lists.
+
+### Manifest
+
+Each container image is represented by a [manifest](container-registry-concepts.md#manifest). A manifest is a JSON file that uniquely identifies the image, referencing its layers and their corresponding sizes.
+
+A basic manifest for a Linux `hello-world` image looks similar to the following:
+
+ ```json
+ {
+ "schemaVersion": 2,
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "config": {
+ "mediaType": "application/vnd.docker.container.image.v1+json",
+ "size": 1510,
+ "digest": "sha256:fbf289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e"
+ },
+ "layers": [
+ {
+ "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
+ "size": 977,
+ "digest": "sha256:2c930d010525941c1d56ec53b97bd057a67ae1865eebf042686d2a2d18271ced"
+ }
+ ]
+ }
+ ```
+
+You can view a manifest in Azure Container Registry using the Azure portal or tools such as the [az acr repository show-manifests](/cli/azure/acr/repository#az_acr_repository_show_manifests) command in the Azure CLI.
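The `digest` values in a manifest are content addresses: each one is `sha256:` followed by the SHA-256 hex hash of the referenced content's exact bytes. A minimal local sketch (the sample file contents are illustrative):

```shell
# A registry digest is "sha256:" plus the SHA-256 hash of the exact bytes of
# the referenced manifest or layer (sample JSON here is illustrative).
printf '{"schemaVersion":2}' > sample-manifest.json
DIGEST="sha256:$(sha256sum sample-manifest.json | cut -d' ' -f1)"
echo "$DIGEST"
```

Because the digest is derived from the bytes, any change to the manifest produces a different digest, which is why digests pin an image immutably while tags can move.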
+
+### Manifest list
+
+A *manifest list* for a multi-arch image (known more generally as an [image index](https://github.com/opencontainers/image-spec/blob/master/image-index.md) for OCI images) is a collection (index) of images, and you create one by specifying one or more image names. It includes details about each of the images such as the supported OS and architecture, size, and manifest digest. The manifest list can be used in the same way as an image name in `docker pull` and `docker run` commands.
+
+The `docker` CLI manages manifests and manifest lists using the [docker manifest](https://docs.docker.com/engine/reference/commandline/manifest/) command.
+
+> [!NOTE]
+> Currently, the `docker manifest` command and subcommands are experimental. See the Docker documentation for details about using experimental commands.
+
+You can view a manifest list using the `docker manifest inspect` command. The following is the output for the multi-arch image `mcr.microsoft.com/mcr/hello-world:latest`, which has three manifests: two for Linux OS architectures and one for a Windows architecture.
+```json
+{
+ "schemaVersion": 2,
+ "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
+ "manifests": [
+ {
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "size": 524,
+ "digest": "sha256:83c7f9c92844bbbb5d0a101b22f7c2a7949e40f8ea90c8b3bc396879d95e899a",
+ "platform": {
+ "architecture": "amd64",
+ "os": "linux"
+ }
+ },
+ {
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "size": 525,
+ "digest": "sha256:873612c5503f3f1674f315c67089dee577d8cc6afc18565e0b4183ae355fb343",
+ "platform": {
+ "architecture": "arm64",
+ "os": "linux"
+ }
+ },
+ {
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "size": 1124,
+ "digest": "sha256:b791ad98d505abb8c9618868fc43c74aa94d08f1d7afe37d19647c0030905cae",
+ "platform": {
+ "architecture": "amd64",
+ "os": "windows",
+ "os.version": "10.0.17763.1697"
+ }
+ }
+ ]
+}
+```
+
+When a multi-arch manifest list is stored in Azure Container Registry, you can also view the manifest list using the Azure portal or with tools such as the [az acr repository show-manifests](/cli/azure/acr/repository#az_acr_repository_show_manifests) command.
+
+## Import a multi-arch image
+
+An existing multi-arch image can be imported to an Azure container registry using the [az acr import](/cli/azure/acr#az_acr_import) command. The image import syntax is the same as with a single-architecture image. Like import of a single-architecture image, import of a multi-arch image doesn't use Docker commands.
+
+For details, see [Import container images to a container registry](container-registry-import-images.md).
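For example, a sketch of importing the public multi-arch `hello-world` image shown earlier; the target registry name is illustrative, and the command needs an authenticated Azure CLI session. The manifest list and all referenced architecture variants are copied in one operation.

```shell
# Import a multi-arch image by its manifest list; no docker pull/push needed
# (registry and tag names are illustrative).
SOURCE_IMAGE=mcr.microsoft.com/mcr/hello-world:latest
if command -v az >/dev/null 2>&1; then
  az acr import \
    --name myregistry \
    --source "$SOURCE_IMAGE" \
    --image hello-world:latest || true
fi
```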
+
+## Push a multi-arch image
+
+When you have build workflows to create container images for different architectures, follow these steps to push a multi-arch image to your Azure container registry.
+
+1. Tag and push each architecture-specific image to your container registry. The following example assumes two Linux architectures: arm64 and amd64.
+
+ ```console
+ docker tag myimage:arm64 \
+ myregistry.azurecr.io/multi-arch-samples/myimage:arm64
+
+ docker push myregistry.azurecr.io/multi-arch-samples/myimage:arm64
+
+ docker tag myimage:amd64 \
+ myregistry.azurecr.io/multi-arch-samples/myimage:amd64
+
+ docker push myregistry.azurecr.io/multi-arch-samples/myimage:amd64
+ ```
+
+1. Run `docker manifest create` to create a manifest list to combine the preceding images into a multi-arch image.
+
+ ```console
+ docker manifest create myregistry.azurecr.io/multi-arch-samples/myimage:multi \
+ myregistry.azurecr.io/multi-arch-samples/myimage:arm64 \
+ myregistry.azurecr.io/multi-arch-samples/myimage:amd64
+ ```
+
+1. Push the manifest to your container registry using `docker manifest push`:
+
+ ```console
+ docker manifest push myregistry.azurecr.io/multi-arch-samples/myimage:multi
+ ```
+
+1. Use the `docker manifest inspect` command to view the manifest list. An example of command output is shown in a preceding section.
+
+After you push the multi-arch manifest to your registry, work with the multi-arch image the same way that you do with a single-architecture image. For example, pull the image using `docker pull`, and use [az acr repository](/cli/azure/acr/repository#az_acr_repository) commands to view tags, manifests, and other properties of the image.
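For example, a sketch of listing the stored manifests for the pushed repository, which shows the manifest list alongside each architecture-specific variant; names are illustrative and an authenticated Azure CLI session is assumed.

```shell
# List manifests (manifest list plus per-architecture images) for the repo
# pushed in the preceding steps (names are illustrative).
REPO=multi-arch-samples/myimage
if command -v az >/dev/null 2>&1; then
  az acr repository show-manifests \
    --name myregistry \
    --repository "$REPO" \
    --detail || true
fi
```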
+
+## Build and push a multi-arch image
+
+Using features of [ACR Tasks](container-registry-tasks-overview.md), you can build and push a multi-arch image to your Azure container registry. For example, define a [multi-step task](container-registry-tasks-multi-step.md) in a YAML file that builds a Linux multi-arch image.
+
+The following example assumes that you have separate Dockerfiles for two architectures, arm64 and amd64. It builds and pushes the architecture-specific images, then creates and pushes a multi-arch manifest that has the `latest` tag:
+
+```yml
+version: v1.1.0
+
+steps:
+- build: -t {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-amd64 -f dockerfile.amd64 .
+- build: -t {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-arm64 -f dockerfile.arm64 .
+- push:
+ - {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-arm64
+ - {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-amd64
+- cmd: >
+ docker manifest create
+ {{.Run.Registry}}/multi-arch-samples/myimage:latest
+ {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-arm64
+ {{.Run.Registry}}/multi-arch-samples/myimage:{{.Run.ID}}-amd64
+- cmd: docker manifest push --purge {{.Run.Registry}}/multi-arch-samples/myimage:latest
+- cmd: docker manifest inspect {{.Run.Registry}}/multi-arch-samples/myimage:latest
+```
+
+## Next steps
+
+* Use [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) to build container images for different architectures.
+* Learn about building multi-platform images using the experimental Docker [buildx](https://docs.docker.com/buildx/working-with-buildx/) plug-in.
+
container-registry https://docs.microsoft.com/en-us/azure/container-registry/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/connect-mongodb-account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/connect-mongodb-account.md
@@ -11,7 +11,7 @@
adobe-target: true adobe-target-activity: DocsExp-A/B-384740-MongoDB-2.8.2021 adobe-target-experience: Experience B
-adobe-target-content: connect-mongodb-account-experimental.md
+adobe-target-content: ./connect-mongodb-account-experimental
# Connect a MongoDB application to Azure Cosmos DB
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/graph-partitioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph-partitioning.md
@@ -14,7 +14,7 @@
One of the key features of the Gremlin API in Azure Cosmos DB is the ability to handle large-scale graphs through horizontal scaling. The containers can scale independently in terms of storage and throughput. You can create containers in Azure Cosmos DB that can be automatically scaled to store a graph data. The data is automatically balanced based on the specified **partition key**.
-Partitioning is done internally if the container is expected to store more than 20 GB in size or if you want to allocate more than 10,000 request units per second (RUs). Data is automatically partitioned based on the partition key you specify. Partition key is required if you create graph containers from the Azure portal or the 3.x or higher versions of Gremlin drivers. Partition key is not required if you use 2.x or lower versions of Gremlin drivers.
+Partitioning is done internally if the container is expected to store more than 20 GB in size or if you want to allocate more than 10,000 request units per second (RUs). Data is automatically partitioned based on the partition key you specify. Partition key is required if you create graph containers from the Azure portal or the 3.x or higher versions of Gremlin drivers. Partition key is not required if you use 2.x or lower versions of Gremlin drivers.
The same general principles from the [Azure Cosmos DB partitioning mechanism](partitioning-overview.md) apply with a few graph-specific optimizations described below.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/table-storage-design-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-design-guide.md
@@ -637,7 +637,7 @@ For this option, use index entities that store the following data:
:::image type="content" source="./media/storage-table-design-guide/storage-table-design-IMAGE15.png" alt-text="Screenshot that shows the Employee index entity that contains a list of employee IDs for employees with the last name stored in the RowKey and PartitionKey.":::
-The `EmployeeIDs` property contains a list of employee IDs for employees with the last name stored in the `RowKey` and `PartitionKey`.
+The `EmployeeDetails` property contains a list of employee IDs and department name pairs for employees with the last name stored in the `RowKey`.
You can't use EGTs to maintain consistency, because the index entities are in a separate partition from the employee entities. Ensure that the index entities are eventually consistent with the employee entities.
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables.md
@@ -79,6 +79,8 @@ A single resource can incur several charges for different services. For example,
[!INCLUDE [Transform data before using large usage files](../../../includes/cost-management-billing-transform-data-before-using-large-usage-files.md)] + ## Next steps - [Explore and analyze costs with cost analysis](../costs/quick-acm-cost-analysis.md).
cost-management-billing https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/troubleshoot-ea-billing-issues-usage-file-pivot-tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/troubleshoot-ea-billing-issues-usage-file-pivot-tables.md
@@ -76,6 +76,8 @@ A single resource can incur several charges for different services. For example,
[!INCLUDE [Transform data before using large usage files](../../../includes/cost-management-billing-transform-data-before-using-large-usage-files.md)] + ## Next steps - [Explore and analyze costs with cost analysis](../costs/quick-acm-cost-analysis.md).
data-lake-analytics https://docs.microsoft.com/en-us/azure/data-lake-analytics/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
data-lake-analytics https://docs.microsoft.com/en-us/azure/data-lake-analytics/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
data-lake-store https://docs.microsoft.com/en-us/azure/data-lake-store/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
data-lake-store https://docs.microsoft.com/en-us/azure/data-lake-store/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-arc-data-controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-arc-data-controller.md
@@ -0,0 +1,341 @@
+
+ Title: Deploy an Azure Arc Data Controller on your Azure Stack Edge Pro GPU device| Microsoft Docs
+description: Describes how to deploy an Azure Arc Data Controller and Azure Data Services on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 02/08/2021++
+# Deploy Azure Data Services on your Azure Stack Edge Pro GPU device
++
+This article describes the process of creating an Azure Arc Data Controller and then deploying Azure Data Services on your Azure Stack Edge Pro GPU device.
+
+Azure Arc Data Controller is the local control plane that enables Azure Data Services in customer-managed environments. Once you have created the Azure Arc Data Controller on the Kubernetes cluster that runs on your Azure Stack Edge Pro device, you can deploy Azure Data Services such as SQL Managed Instance (Preview) on that data controller.
+
+The procedure to create a Data Controller and then deploy a SQL Managed Instance uses PowerShell and `kubectl`, a native tool that provides command-line access to the Kubernetes cluster on the device.
++
+## Prerequisites
+
+Before you begin, make sure that:
+
+1. You have access to an Azure Stack Edge Pro device, and you've activated the device as described in [Activate Azure Stack Edge Pro](azure-stack-edge-gpu-deploy-activate.md).
+
+1. You've enabled the compute role on the device. A Kubernetes cluster was also created on the device when you configured compute on the device as per the instructions in [Configure compute on your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-configure-compute.md).
+
+1. You have the Kubernetes API endpoint from the **Device** page of your local web UI. For more information, see the instructions in [Get Kubernetes API endpoint](azure-stack-edge-gpu-deploy-configure-compute.md#get-kubernetes-endpoints).
+
+1. You have access to a client that will connect to your device.
+ 1. This article uses a Windows client system running PowerShell 5.0 or later to access the device. You can use any other client with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device).
+ 1. Install `kubectl` on your client. For the client version:
+ 1. Identify the Kubernetes server version installed on the device. In the local UI of the device, go to the **Software updates** page and note the **Kubernetes server version** shown there.
+ 1. Download a client whose version is skewed by no more than one minor version from the master; the client version may lead the master by up to one minor version. For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and should work with v1.2, v1.3, and v1.4 clients. For more information on Kubernetes client version, see [Kubernetes version and version skew support policy](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-version-skew).
+
+1. Optionally, [Install client tools for deploying and managing Azure Arc enabled data services](../azure-arc/dat). These tools are not required but recommended.
+1. Make sure you have enough resources available on your device to provision a data controller and one SQL Managed Instance. Together, they need a minimum of 16 GB of RAM and 4 CPU cores. For detailed guidance, go to [Minimum requirements for Azure Arc enabled data services deployment](../azure-arc/dat#minimum-deployment-requirements).
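A quick way to check the client side of the version skew described above, assuming `kubectl` is already installed on the client:

```shell
# Print the kubectl client version to compare with the device's Kubernetes
# server version; prints a note instead if kubectl is absent.
if command -v kubectl >/dev/null 2>&1; then
  kubectl version --client
else
  echo "kubectl is not installed on this client"
fi
```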
++
+## Configure Kubernetes external service IPs
+
+1. Go to the local web UI of the device, and then go to **Compute**.
+1. Select the network enabled for compute.
+
+ ![Compute page in local UI 2](./media/azure-stack-edge-gpu-deploy-arc-data-controller/compute-network-1.png)
+
+1. Make sure that you provide three additional Kubernetes external service IPs (in addition to the IPs that you've already configured for other external services or containers). The data controller uses two of these service IPs, and the third IP is used when you create a SQL Managed Instance. You'll need one more IP for each additional data service that you deploy.
+
+ ![Compute page in local UI 3](./media/azure-stack-edge-gpu-deploy-arc-data-controller/compute-network-2.png)
+
+1. Apply the settings. These new IPs take effect immediately on the existing Kubernetes cluster.
++
+## Deploy Azure Arc Data Controller
+
+Before you deploy a data controller, you'll need to create a namespace.
+
+### Create namespace
+
+Create a new, dedicated namespace where you'll deploy the data controller. You'll also create a user and then grant that user access to the namespace that you created.
+
+> [!NOTE]
+> For both namespace and user names, the [DNS subdomain naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names) apply.
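A name can be sanity checked against these rules before running the commands that follow. This is an illustrative sketch (the `is_dns_name` helper is hypothetical, not part of the device tooling) covering the single-label form used in this article's examples:

```shell
# Hypothetical helper: validate a proposed namespace or user name against
# the DNS naming rules (lowercase alphanumerics and '-', must begin and end
# with an alphanumeric, 253 characters or fewer).
is_dns_name() {
  name="$1"
  [ "${#name}" -le 253 ] || return 1
  printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

is_dns_name "myadstest" && echo "valid"     # prints "valid"
is_dns_name "My_Admin" || echo "rejected"   # prints "rejected"
```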
+
+1. [Connect to the PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
+1. Create a namespace. Type:
+
+ `New-HcsKubernetesNamespace -Namespace <Name of namespace>`
+
+1. Create a user. Type:
+
+ `New-HcsKubernetesUser -UserName <User name>`
+
+1. A config file is displayed in plain text. Copy this file and save it as a *config* file.
+
+ > [!IMPORTANT]
+ > Do not save the config file as a *.txt* file; save the file without any file extension.
+
+1. The config file should live in the `.kube` folder of your user profile on the local machine. Copy the file to that folder in your user profile.
+
+ ![Location of config file on client](media/azure-stack-edge-j-series-create-kubernetes-cluster/location-config-file.png)
+1. Grant the user access to the namespace that you created. Type:
+
+ `Grant-HcsKubernetesNamespaceAccess -Namespace <Name of namespace> -UserName <User name>`
+
+ Here is a sample output of the preceding commands. In this example, we create a `myadstest` namespace and a `myadsuser` user, and then grant the user access to the namespace.
+
+ ```powershell
+ [10.100.10.10]: PS>New-HcsKubernetesNamespace -Namespace myadstest
+ [10.100.10.10]: PS>New-HcsKubernetesUser -UserName myadsuser
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data: LS0tLS1CRUdJTiBD=======//snipped//=======VSVElGSUNBVEUtLS0tLQo=
+ server: https://compute.myasegpudev.wdshcsso.com:6443
+ name: kubernetes
+ contexts:
+ - context:
+ cluster: kubernetes
+ user: myadsuser
+ name: myadsuser@kubernetes
+ current-context: myadsuser@kubernetes
+ kind: Config
+ preferences: {}
+ users:
+ - name: myadsuser
+ user:
+ client-certificate-data: LS0tLS1CRUdJTiBDRV=========//snipped//=====EE9PQotLS0kFURSBLRVktLS0tLQo=
+
+ [10.100.10.10]: PS>Grant-HcsKubernetesNamespaceAccess -Namespace myadstest -UserName myadsuser
+ [10.100.10.10]: PS>Set-HcsKubernetesAzureArcDataController -SubscriptionId db4e2fdb-6d80-4e6e-b7cd-736098270664 -ResourceGroupName myasegpurg -Location "EastUS" -UserName myadsuser -Password "Password1" -DataControllerName "arctestcontroller" -Namespace myadstest
+ [10.100.10.10]: PS>
+ ```
+1. Add a DNS entry to the hosts file on your system.
+
+ 1. Run Notepad as administrator and open the `hosts` file located at `C:\windows\system32\drivers\etc\hosts`.
+ 2. Use the information that you saved from the **Device** page in the local UI (prerequisite) to create the entry in the hosts file.
+
+ For example, copy this endpoint `https://compute.myasegpudev.microsoftdatabox.com/[10.100.10.10]` to create the following entry with device IP address and DNS domain:
+
+ `10.100.10.10 compute.myasegpudev.microsoftdatabox.com`
+
+1. To verify that you can connect to the Kubernetes pods, start a cmd prompt or a PowerShell session. Type:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get pods -n "myadstest"
+ No resources found.
+ PS C:\WINDOWS\system32>
+ ```
+You can now deploy your data controller and data services applications in the namespace, then view the applications and their logs.
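The hosts-file entry from the DNS step above can also be derived by script from the endpoint string copied from the **Device** page. A sketch using the sample values from this article (on Windows, the resulting line still goes into `C:\windows\system32\drivers\etc\hosts`):

```shell
# Derive "<IP> <hostname>" from the endpoint string copied from the Device
# page. The endpoint below is the sample value used in this article.
endpoint='https://compute.myasegpudev.microsoftdatabox.com/[10.100.10.10]'
host=$(printf '%s' "$endpoint" | sed -E 's#^https://([^/]+)/.*#\1#')
ip=$(printf '%s' "$endpoint" | sed -E 's#.*\[([0-9.]+)\].*#\1#')
printf '%s %s\n' "$ip" "$host"   # prints "10.100.10.10 compute.myasegpudev.microsoftdatabox.com"
```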
+
+### Create data controller
+
+The data controller is a collection of pods that are deployed to your Kubernetes cluster to provide an API, the controller service, the bootstrapper, and the monitoring databases and dashboards. Follow these steps to create a data controller on the Kubernetes cluster that exists on your Azure Stack Edge device in the namespace that you created earlier.
+
+1. Gather the following information that you'll need to create a data controller:
+
+
+ |Parameter |Description |
+ |||
+ |Data controller name |A descriptive name for your data controller. For example, `arctestdatacontroller`. |
+ |Data controller username |Any username for the data controller administrator user. The data controller username and password are used to authenticate to the data controller API to perform administrative functions. |
+ |Data controller password |A password for the data controller administrator user. Choose a secure password and share it with only those that need to have cluster administrator privileges. |
+ |Name of your Kubernetes namespace |The name of the Kubernetes namespace that you want to create the data controller in. |
+ |Azure subscription ID |The Azure subscription GUID for where you want the data controller resource in Azure to be created. |
+ |Azure resource group name |The name of the resource group where you want the data controller resource in Azure to be created. |
+ |Azure location |The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see Azure global infrastructure / Products by region.|
++
+1. Connect to the PowerShell interface. To create the data controller, type:
+
+ ```powershell
+ Set-HcsKubernetesAzureArcDataController -SubscriptionId <Subscription ID> -ResourceGroupName <Resource group name> -Location <Location without spaces> -UserName <User you created> -Password <Password to authenticate to Data Controller> -DataControllerName <Data Controller Name> -Namespace <Namespace you created>
+ ```
+ Here is a sample output of the preceding commands.
+
+ ```powershell
+ [10.100.10.10]: PS>Set-HcsKubernetesAzureArcDataController -SubscriptionId db4e2fdb-6d80-4e6e-b7cd-736098270664 -ResourceGroupName myasegpurg -Location "EastUS" -UserName myadsuser -Password "Password1" -DataControllerName "arctestcontroller" -Namespace myadstest
+ [10.100.10.10]: PS>
+ ```
+
+ The deployment may take approximately 5 minutes to complete.
+
+ > [!NOTE]
+ > The data controller created on Kubernetes cluster on your Azure Stack Edge Pro device works only in the disconnected mode in the current release.
+
+### Monitor data controller creation status
+
+1. Open another PowerShell window.
+1. Use the following `kubectl` command to monitor the creation status of the data controller.
+
+ ```powershell
+ kubectl get datacontroller/<Data controller name> --namespace <Name of your namespace>
+ ```
+ When the controller is created, the status should be `Ready`.
+ Here is a sample output of the preceding command:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get datacontroller/arctestcontroller --namespace myadstest
+ NAME STATE
+ arctestcontroller Ready
+ PS C:\WINDOWS\system32>
+ ```
+1. To identify the IPs assigned to the external services running on the data controller, use the `kubectl get svc -n <namespace>` command. Here is a sample output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> kubectl get svc -n myadstest
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ controldb-svc ClusterIP 172.28.157.130 <none> 1433/TCP,8311/TCP,8411/TCP 3d21h
+ controller-svc ClusterIP 172.28.123.251 <none> 443/TCP,8311/TCP,8301/TCP,8411/TCP,8401/TCP 3d21h
+ controller-svc-external LoadBalancer 172.28.154.30 10.57.48.63 30080:31090/TCP 3d21h
+ logsdb-svc ClusterIP 172.28.52.196 <none> 9200/TCP,8300/TCP,8400/TCP 3d20h
+ logsui-svc ClusterIP 172.28.85.97 <none> 5601/TCP,8300/TCP,8400/TCP 3d20h
+ metricsdb-svc ClusterIP 172.28.255.103 <none> 8086/TCP,8300/TCP,8400/TCP 3d20h
+ metricsdc-svc ClusterIP 172.28.208.191 <none> 8300/TCP,8400/TCP 3d20h
+ metricsui-svc ClusterIP 172.28.158.163 <none> 3000/TCP,8300/TCP,8400/TCP 3d20h
+ mgmtproxy-svc ClusterIP 172.28.228.229 <none> 443/TCP,8300/TCP,8311/TCP,8400/TCP,8411/TCP 3d20h
+ mgmtproxy-svc-external LoadBalancer 172.28.166.214 10.57.48.64 30777:30621/TCP 3d20h
+ sqlex-svc ClusterIP None <none> 1433/TCP 3d20h
+ PS C:\WINDOWS\system32>
+ ```
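Instead of re-running the status check in step 1 manually, you can poll until the controller reports `Ready`. This is a sketch, not part of the product: the `wait_for_ready` helper is hypothetical, it assumes `kubectl` is configured with the config file from the earlier steps, and the `.status.state` field name is an assumption based on the STATE column shown above.

```shell
# Poll the data controller every 30 seconds until its state reports Ready.
# The names in the commented call are the sample values from this article.
wait_for_ready() {
  name="$1"; ns="$2"
  until kubectl get "datacontroller/$name" --namespace "$ns" \
        -o jsonpath='{.status.state}' 2>/dev/null | grep -q 'Ready'; do
    sleep 30
  done
  echo "data controller $name is Ready"
}

# wait_for_ready arctestcontroller myadstest
```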
+
+## Deploy SQL managed instance
+
+After you have successfully created the data controller, you can use a template to deploy a SQL Managed Instance on the data controller.
+
+### Deployment template
+
+Use the following deployment template to deploy a SQL Managed Instance on the data controller on your device.
+
+```yml
+apiVersion: v1
+data:
+ password: UGFzc3dvcmQx
+ username: bXlhZHN1c2Vy
+kind: Secret
+metadata:
+ name: sqlex-login-secret
+type: Opaque
+---
+apiVersion: sql.arcdata.microsoft.com/v1alpha1
+kind: sqlmanagedinstance
+metadata:
+ name: sqlex
+spec:
+ limits:
+ memory: 4Gi
+ vcores: "4"
+ requests:
+ memory: 2Gi
+ vcores: "1"
+ service:
+ type: LoadBalancer
+ storage:
+ backups:
+ className: ase-node-local
+ size: 5Gi
+ data:
+ className: ase-node-local
+ size: 5Gi
+ datalogs:
+ className: ase-node-local
+ size: 5Gi
+ logs:
+ className: ase-node-local
+ size: 1Gi
+```
++
+#### Metadata name
+
+The metadata name is the name of the SQL Managed Instance. The associated pod in the preceding `deployment.yaml` will be named `sqlex-n` (`n` is the number of pods associated with the application).
+
+#### Password and username data
+
+The data controller username and password are used to authenticate to the data controller API to perform administrative functions. The Kubernetes secret for the data controller username and password in the deployment template are base64 encoded strings.
+
+You can use an online tool or your platform's built-in CLI tools to Base64 encode the desired username and password. When using an online Base64 encode tool, provide the username and password strings (the ones that you entered while creating the data controller), and the tool generates the corresponding Base64 encoded strings.
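For example, the encoded values in the preceding template correspond to the sample credentials used earlier in this article (`myadsuser` / `Password1`). Note that Base64 is an encoding, not encryption, so treat the resulting manifest as sensitive:

```shell
# Base64-encode the data controller credentials for the Kubernetes secret.
# printf (not echo) avoids encoding a trailing newline into the value.
printf '%s' 'myadsuser' | base64   # prints "bXlhZHN1c2Vy"
printf '%s' 'Password1' | base64   # prints "UGFzc3dvcmQx"
```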
+
+#### Service type
+
+Service type should be set to `LoadBalancer`.
+
+#### Storage class name
+
+You can identify the storage class on your Azure Stack Edge device that the deployment will use for data, backups, data logs and logs. Use the `kubectl get storageclass` command to get the storage class deployed on your device.
+
+```powershell
+PS C:\WINDOWS\system32> kubectl get storageclass
+NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+ase-node-local rancher.io/local-path Delete WaitForFirstConsumer false 5d23h
+PS C:\WINDOWS\system32>
+```
+In the preceding sample output, the storage class on the device is `ase-node-local`; specify this class in the template.
+
+#### Spec
+
+To create a SQL Managed Instance on your Azure Stack Edge device, specify your memory and CPU requirements in the spec section of the `deployment.yaml`. Each SQL Managed Instance must request a minimum of 2 GB of memory and 1 CPU core, as shown in the following example.
+
+```yml
+spec:
+ limits:
+ memory: 4Gi
+ vcores: "4"
+ requests:
+ memory: 2Gi
+ vcores: "1"
+```
+
+For detailed sizing guidance for the data controller and one SQL Managed Instance, review [SQL managed instance sizing details](../azure-arc/dat#sql-managed-instance-sizing-details).
+
+### Run deployment template
+
+Run the `deployment.yaml` using the following command:
+
+```powershell
+kubectl create -n <Name of namespace that you created> -f <Path to the deployment yaml>
+```
+
+Here is a sample output of the preceding command:
+
+```powershell
+PS C:\WINDOWS\system32> kubectl get pods -n "myadstest"
+No resources found.
+PS C:\WINDOWS\system32>
+PS C:\WINDOWS\system32> kubectl create -n myadstest -f C:\azure-arc-data-services\sqlex.yaml secret/sqlex-login-secret created
+sqlmanagedinstance.sql.arcdata.microsoft.com/sqlex created
+PS C:\WINDOWS\system32> kubectl get pods --namespace myadstest
+NAME READY STATUS RESTARTS AGE
+bootstrapper-mv2cd 1/1 Running 0 83m
+control-w9s9l 2/2 Running 0 78m
+controldb-0 2/2 Running 0 78m
+controlwd-4bmc5 1/1 Running 0 64m
+logsdb-0 1/1 Running 0 64m
+logsui-wpmw2 1/1 Running 0 64m
+metricsdb-0 1/1 Running 0 64m
+metricsdc-fb5r5 1/1 Running 0 64m
+metricsui-56qzs 1/1 Running 0 64m
+mgmtproxy-2ckl7 2/2 Running 0 64m
+sqlex-0 3/3 Running 0 13m
+PS C:\WINDOWS\system32>
+```
+
+The `sqlex-0` pod in the sample output indicates the status of the SQL Managed Instance.
+
+## Remove data controller
+
+To remove the data controller, delete the dedicated namespace in which you deployed it.
++
+```powershell
+kubectl delete ns <Name of your namespace>
+```
++
+## Next steps
+
+- [Deploy a stateless application on your Azure Stack Edge Pro](azure-stack-edge-j-series-deploy-stateless-application-kubernetes.md).
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-migrate-fpga-gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-migrate-fpga-gpu.md
@@ -0,0 +1,202 @@
+
+ Title: Migration guide for Azure Stack Edge Pro FPGA to GPU physical device
+description: This guide contains instructions to migrate workloads from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device.
++++++ Last updated : 02/09/2021++
+# Migrate workloads from an Azure Stack Edge Pro FPGA to an Azure Stack Edge Pro GPU
+
+This article describes how to migrate workloads and data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device. The migration procedure covers an overview and comparison of the two devices, migration considerations, detailed migration steps, and verification, followed by cleanup.
+
+<!--Azure Stack Edge Pro FPGA devices will reach end-of-life in February 2024. If you are considering new deployments, we recommend that you explore Azure Stack Edge Pro GPU devices for your workloads.-->
+
+## About migration
+
+Migration is the process of moving workloads and application data from one storage location to another. This entails making an exact copy of an organization's current data from one storage device to another storage device, preferably without disrupting or disabling active applications, and then redirecting all input/output (I/O) activity to the new device.
+
+This migration guide provides a step-by-step walkthrough of the steps required to migrate data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro GPU device. This document is intended for information technology (IT) professionals and knowledge workers who are responsible for operating, deploying, and managing Azure Stack Edge devices in the datacenter.
+
+In this article, the Azure Stack Edge Pro FPGA device is referred to as the *source* device and the Azure Stack Edge Pro GPU device is the *target* device.
+
+## Comparison summary
+
+This section provides a comparative summary of capabilities between the Azure Stack Edge Pro GPU and the Azure Stack Edge Pro FPGA devices. The hardware in the source and the target device is largely identical; it differs only with respect to the hardware acceleration card and the storage capacity.
+
+| Capability | Azure Stack Edge Pro GPU (Target device) | Azure Stack Edge Pro FPGA (Source device)|
+|-|--||
+| Hardware | Hardware acceleration: 1 or 2 Nvidia T4 GPUs <br> Compute, memory, network interface, power supply unit, power cord specifications are identical to the device with FPGA. | Hardware acceleration: Intel Arria 10 FPGA <br> Compute, memory, network interface, power supply unit, power cord specifications are identical to the device with GPU. |
+| Usable storage | 4.19 TB <br> After reserving space for parity resiliency and internal use | 12.5 TB <br> After reserving space for internal use |
+| Security | Certificates | |
+| Workloads | IoT Edge workloads <br> VM workloads <br> Kubernetes workloads| IoT Edge workloads |
+| Pricing | [Pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/) | [Pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/)|
+
+## Migration plan
+
+To create your migration plan, consider the following information:
+
+- Develop a schedule for migration.
+- When you migrate data, you may experience downtime. We recommend that you schedule migration during a downtime maintenance window because the process is disruptive. You'll set up and restore configurations in this downtime, as described later in this document.
+- Understand the total length of downtime and communicate it to all the stakeholders.
+- Identify the local data that needs to be migrated from the source device. As a precaution, ensure that all the data on the existing storage has a recent backup.
++
+## Migration considerations
+
+Before you proceed with the migration, consider the following information:
+
+- An Azure Stack Edge Pro GPU device can't be activated against an Azure Stack Edge Pro FPGA resource. Create a new resource for the Azure Stack Edge Pro GPU device as described in [Create an Azure Stack Edge Pro GPU order](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).
+- The Machine Learning models deployed on the source device that used the FPGA will need to be changed for the target device with GPU. For help with the models, you can contact Microsoft Support. The custom models deployed on the source device that did not use the FPGA (used CPU only) should work as-is on the target device (using CPU).
+- The IoT Edge modules deployed on the source device may require changes before these can be successfully deployed on the target device.
+- The source device supports NFS 3.0 and 4.1 protocols. The target device only supports NFS 3.0 protocol.
+- The source device supports SMB and NFS protocols. The target device supports storage via the REST protocol using storage accounts, in addition to SMB and NFS protocols for shares.
+- The share access on the source device is via the IP address whereas the share access on the target device is via the device name.
+
+## Migration steps at-a-glance
+
+This table summarizes the overall flow for migration, describing the required steps and where to perform them.
+
+| In this phase | Do this step| On this device |
+||-|-|
+| Prepare source device* | 1. Record configuration data <br>2. Back up share data <br>3. Prepare IoT Edge workloads| Source device |
+| Prepare target device* |1. Create a new order <br>2. Configure and activate| Target device |
+| Migrate data | 1. Migrate data from shares <br>2. Redeploy IoT Edge workloads| Target device |
+| Verify data |Verify migrated data |Target device |
+| Clean up, return |Erase data and return| Source device |
+
+**The source and target devices can be prepared in parallel.*
+
+## Prepare source device
+
+Preparation includes identifying the Edge cloud shares, Edge local shares, and the IoT Edge modules deployed on the device.
+
+### 1. Record configuration data
+
+Do these steps on your source device via the local UI.
+
+Record the configuration data on the *source* device. Use the [Deployment checklist](azure-stack-edge-gpu-deploy-checklist.md) to help you record the device configuration. During migration, you'll use this configuration information to configure the new target device.
+
+### 2. Back up share data
+
+The device data can be of one of the following types:
+
+- Data in Edge cloud shares
+- Data in local shares
+
+#### Data in Edge cloud shares
+
+Edge cloud shares tier data from your device to Azure. Do these steps on your *source* device via the Azure portal.
+
+- Make a list of all the Edge cloud shares and users that you have on the source device.
+- Make a list of all the bandwidth schedules that you have. You will recreate these bandwidth schedules on your target device.
+- Depending on the network bandwidth available, configure bandwidth schedules on your device to maximize the data tiered to the cloud. Doing so minimizes the local data on the device.
+- Ensure that the shares are fully tiered to the cloud. This can be confirmed by checking the share status in the Azure portal.
+
+#### Data in Edge local shares
+
+Data in Edge local shares stays on the device. Do these steps on your *source* device via the Azure portal.
+
+- Make a list of the Edge local shares that you have on the device.
+- Because this is a one-time migration of the data, create a copy of the Edge local share data on another on-premises server. You can use copy tools such as `robocopy` (SMB) or `rsync` (NFS) to copy the data. Optionally, you may have already deployed a third-party data protection solution to back up the data in your local shares. The following third-party solutions are supported for use with Azure Stack Edge Pro FPGA devices:
+
+ | Third-party software | Reference to the solution |
+ |--||
+ | Cohesity | [https://www.cohesity.com/solution/cloud/azure/](https://www.cohesity.com/solution/cloud/azure/) <br> For details, contact Cohesity. |
+ | Commvault | [https://www.commvault.com/azure](https://www.commvault.com/azure) <br> For details, contact Commvault. |
+ | Veritas | [http://veritas.com/azure](http://veritas.com/azure) <br> For details, contact Veritas. |
+ | Veeam | [https://www.veeam.com/kb4041](https://www.veeam.com/kb4041) <br> For details, contact Veeam. |
++
+### 3. Prepare IoT Edge workloads
+
+- If you have deployed IoT Edge modules and are using FPGA acceleration, you may need to modify the modules before they can run on the GPU device. Follow the instructions in [Modify IoT Edge modules](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).
+
+<!-- If you have deployed IoT Edge workloads, the configuration data is shared on a share on the device. Back up the data in these shares.-->
++
+## Prepare target device
+
+### 1. Create new order
+
+You need to create a new order (and a new resource) for your *target* device. The target device must be activated against the GPU resource and not against the FPGA resource.
+
+To place an order, [Create a new Azure Stack Edge resource](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource) in the Azure portal.
++
+### 2. Set up, activate
+
+You need to set up and activate the *target* device against the new resource you created earlier.
+
+Follow these steps to configure the *target* device via the Azure portal:
+
+1. Gather the information required in the [Deployment checklist](azure-stack-edge-gpu-deploy-checklist.md). You can use the information that you saved from the source device configuration.
+1. [Unpack](azure-stack-edge-gpu-deploy-install.md#unpack-the-device), [rack mount](azure-stack-edge-gpu-deploy-install.md#rack-the-device) and [cable your device](azure-stack-edge-gpu-deploy-install.md#cable-the-device).
+1. [Connect to the local UI of the device](azure-stack-edge-gpu-deploy-connect.md).
+1. Configure the network using a different set of IP addresses (if using static IPs) than the ones that you used for your old device. See how to [configure network settings](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
+1. Assign the same device name as your old device and provide a DNS domain. See how to [configure device setting](azure-stack-edge-gpu-deploy-set-up-device-update-time.md).
+1. Configure certificates on the new device. See how to [configure certificates](azure-stack-edge-gpu-deploy-configure-certificates.md).
+1. Get the activation key from the Azure portal and activate the new device. See how to [activate the device](azure-stack-edge-gpu-deploy-activate.md).
+
+You are now ready to restore the share data and deploy the workloads that you were running on the old device.
+
+## Migrate data
+
+You will now copy data from the source device to the Edge cloud shares and Edge local shares on your *target* device.
+
+### 1. From Edge cloud shares
+
+Follow these steps to sync the data on the Edge cloud shares on your target device:
+
+1. [Add shares](azure-stack-edge-j-series-manage-shares.md#add-a-share) corresponding to the share names created on the source device. While creating the shares, make sure that **Select blob container** is set to the **Use existing** option, and then select the container that was used with the previous device.
+1. [Add users](azure-stack-edge-j-series-manage-users.md#add-a-user) that had access to the previous device.
+1. [Refresh the share](azure-stack-edge-j-series-manage-shares.md#refresh-shares) data from Azure. This pulls down all the cloud data from the existing container to the shares.
+1. Recreate the bandwidth schedules to be associated with your shares. See [Add a bandwidth schedule](azure-stack-edge-j-series-manage-bandwidth-schedules.md#add-a-schedule) for detailed steps.
++
+### 2. From Edge local shares
+
+You may have deployed a third-party backup solution to protect the local shares data for your IoT workloads. You will now need to restore that data.
+
+After the replacement device is fully configured, enable the device for local storage.
+
+Follow these steps to recover the data from local shares:
+
+1. [Configure compute on the device](azure-stack-edge-gpu-deploy-configure-compute.md).
+1. Add all the local shares on the target device. See the detailed steps in [Add a local share](azure-stack-edge-j-series-manage-shares.md#add-a-local-share).
+1. On the source device, you access SMB shares by using IP addresses, whereas on the target device, you access them by using the device name. See [Connect to an SMB share on Azure Stack Edge Pro GPU](azure-stack-edge-j-series-deploy-add-shares.md#connect-to-an-smb-share). To connect to NFS shares on the target device, you'll need to use the new IP addresses associated with the device. See [Connect to an NFS share on Azure Stack Edge Pro GPU](azure-stack-edge-j-series-deploy-add-shares.md#connect-to-an-nfs-share).
+
+ If you copied over your share data to an intermediate server over SMB/NFS, you can copy this data over to shares on the target device. You can also copy the data over directly from the source device if both the source and the target device are *online*.
+
+ If you used third-party software to back up the data in the local shares, you'll need to run the recovery procedure provided by your data protection solution. See the references in the following table.
+
+ | Third-party software | Reference to the solution |
+ |--||
+ | Cohesity | [https://www.cohesity.com/solution/cloud/azure/](https://www.cohesity.com/solution/cloud/azure/) <br> For details, contact Cohesity. |
+ | Commvault | [https://www.commvault.com/azure](https://www.commvault.com/azure) <br> For details, contact Commvault. |
+ | Veritas | [http://veritas.com/azure](http://veritas.com/azure) <br> For details, contact Veritas. |
+ | Veeam | [https://www.veeam.com/kb4041](https://www.veeam.com/kb4041) <br> For details, contact Veeam. |
+
+### 3. Redeploy IoT Edge workloads
+
+Once the IoT Edge modules are prepared, you will need to deploy IoT Edge workloads on your target device. If you face any errors in deploying IoT Edge modules, see:
+
+- [Common issues and resolutions for Azure IoT Edge](../iot-edge/troubleshoot-common-errors.md)
+- [IoT Edge runtime errors](azure-stack-edge-gpu-troubleshoot.md#troubleshoot-iot-edge-errors)
+
+## Verify data
+
+After migration, verify that all the data has migrated and the workloads have been deployed on the target device.
+
+## Erase data, return
+
+After the data migration is complete, erase local data and return the source device. Follow the steps in [Return your Azure Stack Edge Pro device](azure-stack-edge-return-device.md).
++
+## Next steps
+
+[Learn how to deploy IoT Edge workloads on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-compute-module-simple.md)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
databox https://docs.microsoft.com/en-us/azure/databox/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
databox https://docs.microsoft.com/en-us/azure/databox/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
event-grid https://docs.microsoft.com/en-us/azure/event-grid/managed-service-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/managed-service-identity.md
@@ -232,7 +232,7 @@ az eventgrid event-subscription create
``` ### Use the Azure CLI - Azure Storage queue
-In this section, you learn how to use the Azure CLI to enable the use of a system-assigned identity to deliver events to an Azure Storage queue. The identity must be a member of the **Storage Blob Data Contributor** role on the storage account.
+In this section, you learn how to use the Azure CLI to enable the use of a system-assigned identity to deliver events to an Azure Storage queue. The identity must be a member of the **Storage Queue Data Message Sender** role on the storage account. It must also be a member of the **Storage Blob Data Contributor** role on the storage account that's used for dead-lettering.
#### Define variables
event-grid https://docs.microsoft.com/en-us/azure/event-grid/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
event-grid https://docs.microsoft.com/en-us/azure/event-grid/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/policy-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/policy-reference.md
@@ -1,7 +1,7 @@
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
event-hubs https://docs.microsoft.com/en-us/azure/event-hubs/security-controls-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-controls-policy.md
@@ -1,7 +1,7 @@
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/04/2021 Last updated : 02/09/2021
governance https://docs.microsoft.com/en-us/azure/governance/management-groups/create-management-group-go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/create-management-group-go.md
@@ -0,0 +1,157 @@
+
+ Title: "Quickstart: Create a management group with Go"
+description: In this quickstart, you use Go to create a management group to organize your resources into a resource hierarchy.
Last updated : 09/30/2020+++
+# Quickstart: Create a management group with Go
+
+Management groups are containers that help you manage access, policy, and compliance across multiple
+subscriptions. Create these containers to build an effective and efficient hierarchy that can be
+used with [Azure Policy](../policy/overview.md) and [Azure Role Based Access
+Controls](../../role-based-access-control/overview.md). For more information on management groups,
+see [Organize your resources with Azure management groups](overview.md).
+
+Creation of the first management group in the directory could take up to 15 minutes to complete. There
+are processes that run the first time to set up the management groups service within Azure for your
+directory. You receive a notification when the process is complete. For more information, see
+[initial setup of management groups](./overview.md#initial-setup-of-management-groups).
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)
+ account before you begin.
+
+- An Azure service principal, including the _clientId_ and _clientSecret_. If you don't have a
+ service principal for use with Azure Policy or want to create a new one, see
+ [Azure management libraries for .NET authentication](/dotnet/azure/sdk/authentication#mgmt-auth).
+ Skip the step to install the .NET Core packages as we'll do that in the next steps.
+
+- Any Azure AD user in the tenant can create a management group without the management group write
+ permission assigned to that user if
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ isn't enabled. This new management group becomes a child of the Root Management Group or the
+ [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ and the creator is given an "Owner" role assignment. The management group service allows this
+ so that role assignments aren't needed at the root level. No users have access to the Root
+ Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ start using management groups, the creation of the initial management groups at the root level
+ is allowed.
++
+## Add the management group package
+
+To enable Go to manage management groups, the package must be added. This package works wherever Go
+can be used, including [bash on Windows 10](/windows/wsl/install-win10) or a local installation.
+
+1. Check that the latest Go is installed (at least **1.15**). If it isn't yet installed, download it
+ at [Golang.org](https://golang.org/dl/).
+
+1. Check that the latest Azure CLI is installed (at least **2.5.1**). If it isn't yet installed, see
+ [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+ > [!NOTE]
+ > Azure CLI is required to enable Go to use the `auth.NewAuthorizerFromCLI()` method in the
+ > following example. For information about other options, see
+ > [Azure SDK for Go - More authentication details](https://github.com/Azure/azure-sdk-for-go#more-authentication-details).
+
+1. Authenticate through Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+
+1. In your Go environment of choice, install the required packages for management groups:
+
+ ```bash
+ # Add the management group package for Go
+ go get -u github.com/Azure/azure-sdk-for-go/services/preview/resources/mgmt/2018-03-01-preview/managementgroups
+
+ # Add the Azure auth package for Go
+ go get -u github.com/Azure/go-autorest/autorest/azure/auth
+ ```
+
+## Application setup
+
+With the Go packages added to your environment of choice, it's time to set up the Go application
+that can create a management group.
+
+1. Create the Go application and save the following source as `mgCreate.go`:
+
+ ```Go
+ package main
+
+ import (
+ "context"
+ "fmt"
+ "os"
+
+ mg "github.com/Azure/azure-sdk-for-go/services/preview/resources/mgmt/2018-03-01-preview/managementgroups"
+ "github.com/Azure/go-autorest/autorest/azure/auth"
+ )
+
+ func main() {
+ // Get variables from command line arguments
+ var mgName = os.Args[1]
+
+ // Create and authorize a client
+ mgClient := mg.NewClient()
+ authorizer, err := auth.NewAuthorizerFromCLI()
+ if err != nil {
+ fmt.Println(err.Error())
+ return
+ }
+ mgClient.Authorizer = authorizer
+
+ // Create the request
+ request := mg.CreateManagementGroupRequest{
+ Name: &mgName,
+ }
+
+ // Run the query and get the results
+ results, queryErr := mgClient.CreateOrUpdate(context.Background(), mgName, request, "no-cache")
+ if queryErr != nil {
+ fmt.Println(queryErr.Error())
+ return
+ }
+ fmt.Println("Results: " + fmt.Sprint(results))
+ }
+ ```
+
+1. Build the Go application:
+
+ ```bash
+ go build mgCreate.go
+ ```
+
+1. Create a management group using the compiled Go application. Replace `<Name>` with the name of
+ your new management group:
+
+ ```bash
+ mgCreate "<Name>"
+ ```
+
+The result is a new management group in the root management group.
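+
+To confirm the group was created, you can query it with the Azure CLI (a sketch; `<Name>` is the
+same placeholder as above):
+
+```azurecli
+az account management-group show --name "<Name>"
+```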
+
+## Clean up resources
+
+If you wish to remove the installed packages from your Go environment, you can do so by using
+the following commands:
+
+```bash
+# Remove the installed packages from the Go environment
+go clean -i github.com/Azure/azure-sdk-for-go/services/preview/resources/mgmt/2018-03-01-preview/managementgroups
+go clean -i github.com/Azure/go-autorest/autorest/azure/auth
+```
+
+## Next steps
+
+In this quickstart, you created a management group to organize your resource hierarchy. The
+management group can hold subscriptions or other management groups.
+
+To learn more about management groups and how to manage your resource hierarchy, continue to:
+
+> [!div class="nextstepaction"]
+> [Manage your resources with management groups](./manage.md)
governance https://docs.microsoft.com/en-us/azure/governance/management-groups/create-management-group-javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/create-management-group-javascript.md
@@ -41,7 +41,7 @@ directory. You receive a notification when the process is complete. For more inf
## Application setup
-To enable JavaScript to query Azure Resource Graph, the environment must be set up. This setup works
+To enable JavaScript to manage management groups, the environment must be set up. This setup works
wherever JavaScript can be used, including [bash on Windows 10](/windows/wsl/install-win10). 1. Set up a new Node.js project by running the following command.
governance https://docs.microsoft.com/en-us/azure/governance/management-groups/create-management-group-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/create-management-group-python.md
@@ -0,0 +1,151 @@
+
+ Title: "Quickstart: Create a management group with Python"
+description: In this quickstart, you use Python to create a management group to organize your resources into a resource hierarchy.
Last updated : 01/29/2021+++
+# Quickstart: Create a management group with Python
+
+Management groups are containers that help you manage access, policy, and compliance across multiple
+subscriptions. Create these containers to build an effective and efficient hierarchy that can be
+used with [Azure Policy](../policy/overview.md) and [Azure Role Based Access
+Controls](../../role-based-access-control/overview.md). For more information on management groups,
+see [Organize your resources with Azure management groups](overview.md).
+
+Creation of the first management group in the directory could take up to 15 minutes to complete. There
+are processes that run the first time to set up the management groups service within Azure for your
+directory. You receive a notification when the process is complete. For more information, see
+[initial setup of management groups](./overview.md#initial-setup-of-management-groups).
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)
+ account before you begin.
+
+- Any Azure AD user in the tenant can create a management group without the management group write
+ permission assigned to that user if
+ [hierarchy protection](./how-to/protect-resource-hierarchy.md#settingrequire-authorization)
+ isn't enabled. This new management group becomes a child of the Root Management Group or the
+ [default management group](./how-to/protect-resource-hierarchy.md#settingdefault-management-group)
+ and the creator is given an "Owner" role assignment. The management group service allows this
+ so that role assignments aren't needed at the root level. No users have access to the Root
+ Management Group when it's created. To avoid the hurdle of finding the Azure AD Global Admins to
+ start using management groups, the creation of the initial management groups at the root level
+ is allowed.
++
+## Add the management groups library
+
+To enable Python to manage management groups, the library must be added. This library works wherever
+Python can be used, including [bash on Windows 10](/windows/wsl/install-win10) or a local installation.
+
+1. Check that the latest Python is installed (at least **3.8**). If it isn't yet installed, download
+ it at [Python.org](https://www.python.org/downloads/).
+
+1. Check that the latest Azure CLI is installed (at least **2.5.1**). If it isn't yet installed, see
+ [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+ > [!NOTE]
+ > Azure CLI is required to enable Python to use the **CLI-based authentication** in the following
+ > examples. For information about other options, see
+ > [Authenticate using the Azure management libraries for Python](/azure/developer/python/azure-sdk-authenticate).
+
+1. Authenticate through Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+
+1. In your Python environment of choice, install the required libraries for management groups:
+
+ ```bash
+ # Add the management groups library for Python
+ pip install azure-mgmt-managementgroups
+
+ # Add the Resources library for Python
+ pip install azure-mgmt-resource
+
+ # Add the CLI Core library for Python for authentication (development only!)
+ pip install azure-cli-core
+ ```
+
+ > [!NOTE]
+ > If Python is installed for all users, these commands must be run from an elevated console.
+
+1. Validate that the libraries have been installed. `azure-mgmt-managementgroups` should be
+ **0.2.0** or higher, `azure-mgmt-resource` should be **9.0.0** or higher, and `azure-cli-core`
+ should be **2.5.0** or higher.
+
+ ```bash
+ # Check each installed library
+ pip show azure-mgmt-managementgroups azure-mgmt-resource azure-cli-core
+ ```
+
+## Create the management group
+
+1. Create the Python script and save the following source as `mgCreate.py`:
+
+ ```python
+ # Import management group classes
+ from azure.mgmt.managementgroups import ManagementGroupsAPI
+
+ # Import specific methods and models from other libraries
+ from azure.common.client_factory import get_client_from_cli_profile
+ from azure.mgmt.resource import SubscriptionClient
+
+ # Wrap all the work in a function
+ def createmanagementgroup( strName ):
+ # Get your credentials from Azure CLI (development only!) and get your subscription list
+ subsClient = get_client_from_cli_profile(SubscriptionClient)
+ subsRaw = []
+ for sub in subsClient.subscriptions.list():
+ subsRaw.append(sub.as_dict())
+ subsList = []
+ for sub in subsRaw:
+ subsList.append(sub.get('subscription_id'))
+
+ # Create management group client and set options
+ mgClient = get_client_from_cli_profile(ManagementGroupsAPI)
+ mg_request = {'name': strName, 'display_name': strName}
+
+ # Create management group
+ mg = mgClient.management_groups.create_or_update(group_id=strName,create_management_group_request=mg_request)
+
+ # Show results
+ print(mg)
+
+ createmanagementgroup("MyNewMG")
+ ```
+
+1. Authenticate with Azure CLI with `az login`.
+
+1. Enter the following command in the terminal:
+
+ ```bash
+ py mgCreate.py
+ ```
+
+The result of creating the management group is output to the console as an `LROPoller` object.
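+
+To verify the result, you can query the new group with the Azure CLI (a sketch, using the
+`MyNewMG` name hard-coded in the script above):
+
+```azurecli
+az account management-group show --name "MyNewMG"
+```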
+
+## Clean up resources
+
+If you wish to remove the installed libraries from your Python environment, you can do so by using
+the following command:
+
+```bash
+# Remove the installed libraries from the Python environment
+pip uninstall azure-mgmt-managementgroups azure-mgmt-resource azure-cli-core
+```
+
+## Next steps
+
+In this quickstart, you created a management group to organize your resource hierarchy. The
+management group can hold subscriptions or other management groups.
+
+To learn more about management groups and how to manage your resource hierarchy, continue to:
+
+> [!div class="nextstepaction"]
+> [Manage your resources with management groups](./manage.md)
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/azure-security-benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmark.md
@@ -1,7 +1,7 @@
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/04/2021 Last updated : 02/09/2021
@@ -59,7 +59,7 @@ initiative definition.
|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. 
This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |
-|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premise clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) | |[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
@@ -71,7 +71,7 @@ initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) |
-|[Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d092e0a-7acd-40d2-a975-dca21cae48c4) |Azure Virtual Network (VNet) deployment provides enhanced security and isolation for your Azure Cache for Redis, as well as subnets, access control policies, and other features to further restrict access.When an Azure Cache for Redis instance is configured with a VNet, it is not publicly addressable and can only be accessed from virtual machines and applications within the VNet. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_CacheInVnet_Audit.json) |
+|[Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d092e0a-7acd-40d2-a975-dca21cae48c4) |Azure Virtual Network deployment provides enhanced security and isolation for your Azure Cache for Redis, as well as subnets, access control policies, and other features to further restrict access. When an Azure Cache for Redis instance is configured with a virtual network, it is not publicly addressable and can only be accessed from virtual machines and applications within the virtual network. |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_CacheInVnet_Audit.json) |
|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your Event Grid domains instead of the entire service, you'll also be protected against data leakage risks.Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/EventGridDomains_EnablePrivateEndpoint_Audit.json) | |[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your topics instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). 
|Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/EventGridTopics_EnablePrivateEndpoint_Audit.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Machine Learning workspaces instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/azureml-workspaces-privatelink](https://aka.ms/azureml-workspaces-privatelink). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateLinkEnabled_Audit.json) |
@@ -125,7 +125,7 @@ initiative definition.
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[RDP access from the Internet should be blocked](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe372f825-a257-4fb8-9175-797a8a8627d6) |This policy audits any network security rule that allows RDP access from Internet |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_RDPAccess_Audit.json) | |[SSH access from the Internet should be blocked](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c89a2e5-7285-40fe-afe0-ae8654b92fab) |This policy audits any network security rule that allows SSH access from Internet |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_SSHAccess_Audit.json) |
-|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premise clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | |[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) | |[Web Application Firewall (WAF) should be enabled for Azure Front Door Service service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. 
Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
@@ -285,18 +285,18 @@ initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
-|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, deny, disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
-|[Azure Machine Learning workspaces should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of your Azure Machine Learning workspace data with customer-managed keys (CMK). By default, customer data is encrypted with service-managed keys, but CMKs are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) |
-|[Bring your own key data protection should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) |
-|[Bring your own key data protection should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
-|[Cognitive Services accounts should enable data encryption with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, deny, disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
+|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) |
+|[Bring your own key data protection should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) |
+|[Bring your own key data protection should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed key encryption at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
|[Cognitive Services accounts should use customer owned storage or enable data encryption.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11566b39-f7f7-4b82-ab06-68d8700eb0a4) |This policy audits any Cognitive Services account not using customer owned storage nor data encryption. For each Cognitive Services account with storage, use either customer owned storage or enable data encryption. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_BYOX_Audit.json) |
-|[Container registries should be encrypted with a customer-managed key (CMK)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys (CMK) are commonly required to meet regulatory compliance standards. CMKs enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about CMK encryption at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
+|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Disk encryption should be applied on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |Virtual machines without an enabled disk encryption will be monitored by Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
|[Service Fabric clusters should have the ClusterProtectionLevel property set to EncryptAndSign](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F617c02be-7f02-4efd-8836-3180d47b6c68) |Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditClusterProtectionLevel_Audit.json) |
|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F048248b0-55cd-46da-b1ff-39efd52db260) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0d134df8-db83-46fb-ad72-fe0c9428c8dd) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Audit.json) |
-|[Storage accounts should use customer-managed key (CMK) for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your storage account with greater flexibility using customer-managed keys (CMKs). When you specify a CMK, that key is used to protect and control access to the key that encrypts your data. Using CMKs provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
+|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |

## Asset Management
@@ -376,17 +376,17 @@ initiative definition.
|---|---|---|---|
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Diagnostic logs in App Services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb607c5de-e7d9-4eee-9e5c-83f1bcee4fa0) |Audit enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditLoggingMonitoring_Audit.json) |
-|[Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F428256e6-1fac-4f48-a757-df34c2b3336d) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Event Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83a214f7-d01a-484b-91a9-ed54470c9a6a) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F383856f8-de7f-44a2-81fc-e5135b5c2aa4) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTHub_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4330a05-a843-4bc8-bf9a-cacce50c67f4) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable Logs so that activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
+|[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F428256e6-1fac-4f48-a757-df34c2b3336d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Event Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83a214f7-d01a-484b-91a9-ed54470c9a6a) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F383856f8-de7f-44a2-81fc-e5135b5c2aa4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTHub_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4330a05-a843-4bc8-bf9a-cacce50c67f4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable Logs so that activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
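The "Resource logs" definitions above all share the same AuditIfNotExists shape: flag a resource unless a diagnostic setting with logs enabled exists on it. The following is a simplified, hypothetical sketch of that rule pattern, using the Batch account resource type as the example; the linked JSON files contain the exact conditions for each service:

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Batch/batchAccounts"
  },
  "then": {
    "effect": "AuditIfNotExists",
    "details": {
      "type": "Microsoft.Insights/diagnosticSettings",
      "existenceCondition": {
        "field": "Microsoft.Insights/diagnosticSettings/logs.enabled",
        "equals": "true"
      }
    }
  }
}
```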
### Centralize security log management and analysis
@@ -593,7 +593,7 @@ initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
-|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidently deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) |
+|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) |
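The Effect(s) column in these tables corresponds to an `effect` parameter inside each policy's JSON. As a rough illustration of how the soft-delete check above could be parameterized (a simplified sketch; the alias and conditions are approximations, not a copy of the linked definition):

```json
{
  "parameters": {
    "effect": {
      "type": "String",
      "allowedValues": [ "Audit", "Deny", "Disabled" ],
      "defaultValue": "Audit"
    }
  },
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.KeyVault/vaults" },
        { "field": "Microsoft.KeyVault/vaults/enableSoftDelete", "notEquals": "true" }
      ]
    },
    "then": {
      "effect": "[parameters('effect')]"
    }
  }
}
```

Assigning the policy with `effect` set to `Deny` would block creation of non-compliant vaults, while `Audit` only reports them.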
> [!NOTE]
> Availability of specific Azure Policy definitions may vary in Azure Government and other national clouds.
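The Audit, Deny, and Disabled values in the Effect(s) column correspond to the `allowedValues` of the definition's `effect` parameter in the linked GitHub JSON. A minimal sketch of reading that parameter, using a trimmed, hypothetical stand-in dictionary (not the actual linked definition file):

```python
# Hypothetical, trimmed stand-in for a built-in policy definition's JSON,
# following the standard Azure Policy definition schema.
definition = {
    "properties": {
        "displayName": "Key vaults should have soft delete enabled",
        "parameters": {
            "effect": {
                "type": "String",
                "allowedValues": ["Audit", "Deny", "Disabled"],
                "defaultValue": "Audit",
            }
        },
    }
}

def allowed_effects(policy):
    """Return the effect values an assignment of this policy may choose from."""
    params = policy["properties"].get("parameters", {})
    return params.get("effect", {}).get("allowedValues", [])

print(allowed_effects(definition))  # ['Audit', 'Deny', 'Disabled']
```

The same lookup works against the raw definition files linked in the Version column, since built-in definitions expose their effect as a parameter with `allowedValues`.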
governance https://docs.microsoft.com/en-us/azure/governance/policy/samples/azure-security-benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmarkv1.md
@@ -1,7 +1,7 @@
Title: Regulatory Compliance details for Azure Security Benchmark v1
description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 02/04/2021
Last updated : 02/09/2021
@@ -62,7 +62,7 @@ This built-in initiative is deployed as part of the
|[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) |
|[Service Bus should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F235359c5-7c52-4b82-9055-01c75cf9f60e) |This policy audits any Service Bus not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ServiceBus_AuditIfNotExists.json) |
|[SQL Server should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5d2f14-d830-42b6-9899-df6cfe9c71a3) |This policy audits any SQL Server not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_SQLServer_AuditIfNotExists.json) |
-|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premise clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
|[Storage Accounts should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60d21c4f-21a3-4d94-85f4-b924e6aeeda4) |This policy audits any Storage Account not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_StorageAccount_Audit.json) |
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
|[Virtual machines should be connected to an approved virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd416745a-506c-48b6-8ab1-83cb814bcaa3) |This policy audits any virtual machine connected to a virtual network that is not approved. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ApprovedVirtualNetwork_Audit.json) |
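Several of the rows above use the AuditIfNotExists effect, which flags a resource only when a related child resource or setting is missing, rather than auditing the resource's own fields. A trimmed, illustrative `policyRule` fragment showing the shape of that effect (hypothetical, not copied from any of the linked definitions):

```json
{
  "policyRule": {
    "if": {
      "field": "type",
      "equals": "Microsoft.Sql/servers"
    },
    "then": {
      "effect": "AuditIfNotExists",
      "details": {
        "type": "Microsoft.Sql/servers/virtualNetworkRules",
        "existenceCondition": {
          "field": "Microsoft.Sql/servers/virtualNetworkRules/virtualNetworkSubnetId",
          "exists": "true"
        }
      }
    }
  }
}
```

If no matching child resource satisfies the `existenceCondition`, the SQL server is reported as non-compliant; the exact fields used vary per definition, so consult the JSON linked in each row's Version column.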
@@ -154,17 +154,17 @@ This built-in initiative is deployed as part of the
|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Diagnostic logs in App Services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb607c5de-e7d9-4eee-9e5c-83f1bcee4fa0) |Audit enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditLoggingMonitoring_Audit.json) |
-|[Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F428256e6-1fac-4f48-a757-df34c2b3336d) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Event Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83a214f7-d01a-484b-91a9-ed54470c9a6a) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F383856f8-de7f-44a2-81fc-e5135b5c2aa4) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTHub_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4330a05-a843-4bc8-bf9a-cacce50c67f4) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of diagnostic logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) |
-|[Diagnostic logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable Logs so that activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
+|[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F428256e6-1fac-4f48-a757-df34c2b3336d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Event Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83a214f7-d01a-484b-91a9-ed54470c9a6a) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F383856f8-de7f-44a2-81fc-e5135b5c2aa4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTHub_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4330a05-a843-4bc8-bf9a-cacce50c67f4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of reso