Updates from: 07/04/2022 01:04:11
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Workday Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-integration-reference.md
This article explains how the integration works and how you can customize the pr
### Restricting Workday API access to Azure AD endpoints

Azure AD provisioning service uses basic authentication to connect to Workday Web Services API endpoints.
-To further secure the connectivity between Azure AD provisioning service and Workday, you can restrict access so that the designated integration system user only accesses the Workday APIs from allowed Azure AD IP ranges. Please engage your Workday administrator to complete the following configuration in your Workday tenant.
+To further secure the connectivity between Azure AD provisioning service and Workday, you can restrict access so that the designated integration system user only accesses the Workday APIs from allowed Azure AD IP ranges. Engage your Workday administrator to complete the following configuration in your Workday tenant.
1. Download the [latest IP Ranges](https://www.microsoft.com/download/details.aspx?id=56519) for the Azure Public Cloud.
1. Open the file and search for tag **AzureActiveDirectory**.
To further secure the connectivity between Azure AD provisioning service and Wor
>![Azure AD IP range](media/sap-successfactors-integration-reference/azure-active-directory-ip-range.png)
1. Copy all IP address ranges listed within the element *addressPrefixes* and use the ranges to build your IP address list (a sketch of the file structure follows these steps).
-1. Log in to Workday admin portal.
+1. Sign in to the Workday admin portal.
1. Access the **Maintain IP Ranges** task to create a new IP range for Azure data centers. Specify the IP ranges (using CIDR notation) as a comma-separated list.
1. Access the **Manage Authentication Policies** task to create a new authentication policy. In the authentication policy, use the authentication allow list to specify the Azure AD IP range and the security group that will be allowed access from this IP range. Save the changes.
1. Access the **Activate All Pending Authentication Policy Changes** task to confirm changes.
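For reference, the downloaded service tags file is a JSON document. The following is a minimal sketch of the **AzureActiveDirectory** entry; the change number and address prefixes shown here are illustrative only, so always copy the values from the file you downloaded.

```json
{
  "changeNumber": 160,
  "cloud": "Public",
  "values": [
    {
      "name": "AzureActiveDirectory",
      "id": "AzureActiveDirectory",
      "properties": {
        "platform": "Azure",
        "systemService": "AzureAD",
        "addressPrefixes": [
          "20.190.128.0/18",
          "40.126.0.0/18"
        ]
      }
    }
  ]
}
```

The comma-separated list you enter in the **Maintain IP Ranges** task is built from the *addressPrefixes* array, for example `20.190.128.0/18,40.126.0.0/18`.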
To further secure the connectivity between Azure AD provisioning service and Wor
The default steps to [configure the Workday integration system user](../saas-apps/workday-inbound-tutorial.md#configure-integration-system-user-in-workday) grant access to retrieve all users in your Workday tenant. In certain integration scenarios, you may want to limit the access so that only users belonging to certain supervisory organizations are returned by the Get_Workers API call and processed by the Workday Azure AD connector.
-You can fulfill this requirement by working with your Workday admin and configuring constrained integration system security groups. For more information on how this is done, please refer to [this Workday community article](https://community.workday.com/forums/customer-questions/620393) (*Workday Community login credentials are required to access this article*)
+You can fulfill this requirement by working with your Workday admin and configuring constrained integration system security groups. For more information, refer to [this Workday community article](https://community.workday.com/forums/customer-questions/620393) (*Workday Community access required for this article*).
-This strategy of limiting access using constrained ISSG (Integration System Security Groups) is particularly useful in the following scenarios:
-* **Phased rollout scenario**: You have a large Workday tenant and plan to perform a phased rollout of Workday to Azure AD automated provisioning. In this scenario, rather than excluding users who are not in scope of the current phase with Azure AD scoping filters, we recommend configuring constrained ISSG so that only in-scope workers are visible to Azure AD.
+This strategy of limiting access using constrained ISSG (Integration System Security Groups) is useful in the following scenarios:
+* **Phased rollout scenario**: You have a large Workday tenant and plan to perform a phased rollout of Workday to Azure AD automated provisioning. In this scenario, rather than excluding users who aren't in scope of the current phase with Azure AD scoping filters, we recommend configuring constrained ISSG so that only in-scope workers are visible to Azure AD.
* **Multiple provisioning jobs scenario**: You have a large Workday tenant and multiple AD domains, each supporting a different business unit/division/company. To support this topology, you would like to run multiple Workday to Azure AD provisioning jobs, with each job provisioning a specific set of workers. In this scenario, rather than using Azure AD scoping filters to exclude worker data, we recommend configuring constrained ISSG so that only the relevant worker data is visible to Azure AD.

### Workday test connection query
Azure AD sends the following *Get_Workers* Workday Web Services request to retri
</p1:Response_Group> </Get_Workers_Request> ```
-The *Response_Group* node is used to specify which worker attributes to fetch from Workday. For a description of each flag in the *Response_Group* node, please refer to the Workday [Get_Workers API documentation](https://community.workday.com/sites/default/files/file-hosting/productionapi/Human_Resources/v35.2/Get_Workers.html#Worker_Response_GroupType).
+The *Response_Group* node is used to specify which worker attributes to fetch from Workday. For a description of each flag in the *Response_Group* node, refer to the Workday [Get_Workers API documentation](https://community.workday.com/sites/default/files/file-hosting/productionapi/Human_Resources/v35.2/Get_Workers.html#Worker_Response_GroupType).
Certain flag values specified in the *Response_Group* node are calculated based on the attributes configured in the Workday Azure AD provisioning application. Refer to the section on *Supported entities* for the criteria used to set the flag values.
The table below provides guidance on mapping configuration to use to retrieve a
| \# | Workday Entity | Included by default | XPATH pattern to specify in mapping to fetch non-default entities |
|-|-|-|-|
-| 1 | Personal Data | Yes | wd:Worker\_Data/wd:Personal\_Data |
-| 2 | Employment Data | Yes | wd:Worker\_Data/wd:Employment\_Data |
-| 3 | Additional Job Data | Yes | wd:Worker\_Data/wd:Employment\_Data/wd:Worker\_Job\_Data\[@wd:Primary\_Job=0\]|
-| 4 | Organization Data | Yes | wd:Worker\_Data/wd:Organization\_Data |
-| 5 | Management Chain Data | Yes | wd:Worker\_Data/wd:Management\_Chain\_Data |
-| 6 | Supervisory Organization | Yes | 'SUPERVISORY' |
-| 7 | Company | Yes | 'COMPANY' |
-| 8 | Business Unit | No | 'BUSINESS\_UNIT' |
-| 9 | Business Unit Hierarchy | No | 'BUSINESS\_UNIT\_HIERARCHY' |
-| 10 | Company Hierarchy | No | 'COMPANY\_HIERARCHY' |
-| 11 | Cost Center | No | 'COST\_CENTER' |
-| 12 | Cost Center Hierarchy | No | 'COST\_CENTER\_HIERARCHY' |
-| 13 | Fund | No | 'FUND' |
-| 14 | Fund Hierarchy | No | 'FUND\_HIERARCHY' |
-| 15 | Gift | No | 'GIFT' |
-| 16 | Gift Hierarchy | No | 'GIFT\_HIERARCHY' |
-| 17 | Grant | No | 'GRANT' |
-| 18 | Grant Hierarchy | No | 'GRANT\_HIERARCHY' |
-| 19 | Business Site Hierarchy | No | 'BUSINESS\_SITE\_HIERARCHY' |
-| 20 | Matrix Organization | No | 'MATRIX' |
-| 21 | Pay Group | No | 'PAY\_GROUP' |
-| 22 | Programs | No | 'PROGRAMS' |
-| 23 | Program Hierarchy | No | 'PROGRAM\_HIERARCHY' |
-| 24 | Region | No | 'REGION\_HIERARCHY' |
-| 25 | Location Hierarchy | No | 'LOCATION\_HIERARCHY' |
-| 26 | Account Provisioning Data | No | wd:Worker\_Data/wd:Account\_Provisioning\_Data |
-| 27 | Background Check Data | No | wd:Worker\_Data/wd:Background\_Check\_Data |
-| 28 | Benefit Eligibility Data | No | wd:Worker\_Data/wd:Benefit\_Eligibility\_Data |
-| 29 | Benefit Enrollment Data | No | wd:Worker\_Data/wd:Benefit\_Enrollment\_Data |
-| 30 | Career Data | No | wd:Worker\_Data/wd:Career\_Data |
-| 31 | Compensation Data | No | wd:Worker\_Data/wd:Compensation\_Data |
-| 32 | Contingent Worker Tax Authority Data | No | wd:Worker\_Data/wd:Contingent\_Worker\_Tax\_Authority\_Form\_Type\_Data |
-| 33 | Development Item Data | No | wd:Worker\_Data/wd:Development\_Item\_Data |
-| 34 | Employee Contracts Data | No | wd:Worker\_Data/wd:Employee\_Contracts\_Data |
-| 35 | Employee Review Data | No | wd:Worker\_Data/wd:Employee\_Review\_Data |
-| 36 | Feedback Received Data | No | wd:Worker\_Data/wd:Feedback\_Received\_Data |
-| 37 | Worker Goal Data | No | wd:Worker\_Data/wd:Worker\_Goal\_Data |
-| 38 | Photo Data | No | wd:Worker\_Data/wd:Photo\_Data |
-| 39 | Qualification Data | No | wd:Worker\_Data/wd:Qualification\_Data |
-| 40 | Related Persons Data | No | wd:Worker\_Data/wd:Related\_Persons\_Data |
-| 41 | Role Data | No | wd:Worker\_Data/wd:Role\_Data |
-| 42 | Skill Data | No | wd:Worker\_Data/wd:Skill\_Data |
-| 43 | Succession Profile Data | No | wd:Worker\_Data/wd:Succession\_Profile\_Data |
-| 44 | Talent Assessment Data | No | wd:Worker\_Data/wd:Talent\_Assessment\_Data |
-| 45 | User Account Data | No | wd:Worker\_Data/wd:User\_Account\_Data |
-| 46 | Worker Document Data | No | wd:Worker\_Data/wd:Worker\_Document\_Data |
+| 1 | Personal Data | Yes | `wd:Worker_Data/wd:Personal_Data` |
+| 2 | Employment Data | Yes | `wd:Worker_Data/wd:Employment_Data` |
+| 3 | Additional Job Data | Yes | `wd:Worker_Data/wd:Employment_Data/wd:Worker_Job_Data[@wd:Primary_Job=0]`|
+| 4 | Organization Data | Yes | `wd:Worker_Data/wd:Organization_Data` |
+| 5 | Management Chain Data | Yes | `wd:Worker_Data/wd:Management_Chain_Data` |
+| 6 | Supervisory Organization | Yes | `SUPERVISORY` |
+| 7 | Company | Yes | `COMPANY` |
+| 8 | Business Unit | No | `BUSINESS_UNIT` |
+| 9 | Business Unit Hierarchy | No | `BUSINESS_UNIT_HIERARCHY` |
+| 10 | Company Hierarchy | No | `COMPANY_HIERARCHY` |
+| 11 | Cost Center | No | `COST_CENTER` |
+| 12 | Cost Center Hierarchy | No | `COST_CENTER_HIERARCHY` |
+| 13 | Fund | No | `FUND` |
+| 14 | Fund Hierarchy | No | `FUND_HIERARCHY` |
+| 15 | Gift | No | `GIFT` |
+| 16 | Gift Hierarchy | No | `GIFT_HIERARCHY` |
+| 17 | Grant | No | `GRANT` |
+| 18 | Grant Hierarchy | No | `GRANT_HIERARCHY` |
+| 19 | Business Site Hierarchy | No | `BUSINESS_SITE_HIERARCHY` |
+| 20 | Matrix Organization | No | `MATRIX` |
+| 21 | Pay Group | No | `PAY_GROUP` |
+| 22 | Programs | No | `PROGRAMS` |
+| 23 | Program Hierarchy | No | `PROGRAM_HIERARCHY` |
+| 24 | Region | No | `REGION_HIERARCHY` |
+| 25 | Location Hierarchy | No | `LOCATION_HIERARCHY` |
+| 26 | Account Provisioning Data | No | `wd:Worker_Data/wd:Account_Provisioning_Data` |
+| 27 | Background Check Data | No | `wd:Worker_Data/wd:Background_Check_Data` |
+| 28 | Benefit Eligibility Data | No | `wd:Worker_Data/wd:Benefit_Eligibility_Data` |
+| 29 | Benefit Enrollment Data | No | `wd:Worker_Data/wd:Benefit_Enrollment_Data` |
+| 30 | Career Data | No | `wd:Worker_Data/wd:Career_Data` |
+| 31 | Compensation Data | No | `wd:Worker_Data/wd:Compensation_Data` |
+| 32 | Contingent Worker Tax Authority Data | No | `wd:Worker_Data/wd:Contingent_Worker_Tax_Authority_Form_Type_Data` |
+| 33 | Development Item Data | No | `wd:Worker_Data/wd:Development_Item_Data` |
+| 34 | Employee Contracts Data | No | `wd:Worker_Data/wd:Employee_Contracts_Data` |
+| 35 | Employee Review Data | No | `wd:Worker_Data/wd:Employee_Review_Data` |
+| 36 | Feedback Received Data | No | `wd:Worker_Data/wd:Feedback_Received_Data` |
+| 37 | Worker Goal Data | No | `wd:Worker_Data/wd:Worker_Goal_Data` |
+| 38 | Photo Data | No | `wd:Worker_Data/wd:Photo_Data` |
+| 39 | Qualification Data | No | `wd:Worker_Data/wd:Qualification_Data` |
+| 40 | Related Persons Data | No | `wd:Worker_Data/wd:Related_Persons_Data` |
+| 41 | Role Data | No | `wd:Worker_Data/wd:Role_Data` |
+| 42 | Skill Data | No | `wd:Worker_Data/wd:Skill_Data` |
+| 43 | Succession Profile Data | No | `wd:Worker_Data/wd:Succession_Profile_Data` |
+| 44 | Talent Assessment Data | No | `wd:Worker_Data/wd:Talent_Assessment_Data` |
+| 45 | User Account Data | No | `wd:Worker_Data/wd:User_Account_Data` |
+| 46 | Worker Document Data | No | `wd:Worker_Data/wd:Worker_Document_Data` |
>[!NOTE]
>Each Workday entity listed in the table is protected by a **Domain Security Policy** in Workday. If you are unable to retrieve any attribute associated with the entity after setting the right XPATH, check with your Workday admin to ensure that the appropriate domain security policy is configured for the integration system user associated with the provisioning app. For example, to retrieve *Skill data*, *Get* access is required on the Workday domain *Worker Data: Skills and Experience*.
Let's say you want to retrieve the following data sets from Workday and use them
* Cost center hierarchy
* Pay group
-The above data sets are not included by default.
+The above data sets aren't included by default.
To retrieve these data sets:
-1. Login to the Azure portal and open your Workday to AD/Azure AD user provisioning app.
+1. Sign in to the Azure portal and open your Workday to AD/Azure AD user provisioning app.
1. In the Provisioning blade, edit the mappings and open the Workday attribute list from the advanced section.
1. Add the following attribute definitions and mark them as "Required". These attributes won't be mapped to any attribute in AD or Azure AD. They just serve as signals to the connector to retrieve the Cost Center, Cost Center Hierarchy, and Pay Group information.
This section covers how you can customize the provisioning app for the following
### Support for worker conversions
-When a worker converts from employee to contingent worker or from contingent worker to employee, the Workday connector automatically detects this change and links the AD account to the active worker profile so that all AD attributes are in sync with the active worker profile. No configuration changes are required to enable this functionality. Here is the description of the provisioning behavior when a conversion happens.
+This section describes the Azure AD provisioning service support for scenarios when a worker converts from full-time employee (FTE) to contingent worker (CW) or vice versa. Depending on how worker conversions are processed in Workday, there may be different implementation aspects to consider.
-* Let's say John Smith joins as a contingent worker in January. As there is no AD account associated with John's *WorkerID* (matching attribute), the provisioning service creates a new AD account for the user and links John's contingent worker *WID (WorkdayID)* to his AD account.
-* Three months later, John converts to a full-time employee. In Workday, a new worker profile is created for John. Though John's *WorkerID* in Workday stays the same, John now has two *WID*s in Workday, one associated with the contingent worker profile and another associated with the employee worker profile.
-* During incremental sync, when the provisioning service detects two worker profiles for the same WorkerID, it automatically transfers ownership of the AD account to the active worker profile. In this case, it de-links the contingent worker profile from the AD account and establishes a new link between John's active employee worker profile and his AD account.
+* [Scenario 1: Backdated conversion from FTE to CW or vice versa](#scenario-1-backdated-conversion-from-fte-to-cw-or-vice-versa)
+* [Scenario 2: Worker employed as CW/FTE today, will change to FTE/CW today](#scenario-2-worker-employed-as-cwfte-today-will-change-to-ftecw-today)
+* [Scenario 3: Worker employed as CW/FTE is terminated, rejoins as FTE/CW after a significant gap](#scenario-3-worker-employed-as-cwfte-is-terminated-rejoins-as-ftecw-after-a-significant-gap)
+* [Scenario 4: Future-dated conversion, when worker is an active CW/FTE](#scenario-4-future-dated-conversion-when-worker-is-an-active-cwfte)
+
+#### Scenario 1: Backdated conversion from FTE to CW or vice versa
+Your HR team may backdate a worker conversion transaction in Workday for valid business reasons, such as payroll processing, budget compliance, legal requirements or benefits management. Here's an example to illustrate how provisioning is handled for this scenario.
+
+* It's January 15, 2022 and Jane Doe is employed as a contingent worker. HR offers Jane a full-time position.
+* The terms of Jane's contract change require backdating the transaction so that it aligns with the start of the current month. HR initiates a backdated worker conversion transaction in Workday on January 15, 2022, with an effective date of January 1, 2022. Now there are two worker profiles in Workday for Jane. The CW profile is inactive, while the FTE profile is active.
+* The Azure AD provisioning service will detect this change in the Workday transaction log on January 15, 2022 and automatically provision attributes of the new FTE profile in the next sync cycle.
+* No changes are required in the provisioning app configuration to handle this scenario.
+
+#### Scenario 2: Worker employed as CW/FTE today, will change to FTE/CW today
+This scenario is similar to the previous one, except that instead of backdating the transaction, HR performs a worker conversion that is effective immediately. The Azure AD provisioning service will detect this change in the Workday transaction log and automatically provision attributes associated with the active FTE profile in the next sync cycle. No changes are required in the provisioning app configuration to handle this scenario.
+
+#### Scenario 3: Worker employed as CW/FTE is terminated, rejoins as FTE/CW after a significant gap
+It's common for workers to start work at a company as a contingent worker, leave the company and then rejoin after several months as a full-time employee. Here's an example to illustrate how provisioning is handled for this scenario.
+
+* It's January 1, 2022, and John Smith starts work at the company as a contingent worker. As there's no AD account associated with John's *WorkerID* (matching attribute), the provisioning service creates a new AD account and links John's contingent worker *WID (WorkdayID)* to John's AD account.
+* John's contract ends on January 31, 2022. In the provisioning cycle that runs after end of day January 31, John's AD account is disabled.
+* John applies for another position and decides to rejoin the company as a full-time employee effective May 1, 2022. HR enters John's information as a pre-hire on April 15, 2022. Now there are two worker profiles in Workday for John. The CW profile is inactive, while the FTE profile is active. The two records have the same *WorkerID* but different *WID*s.
+* On April 15, during incremental cycle, the Azure AD provisioning service automatically transfers ownership of the AD account to the active worker profile. In this case, it de-links the contingent worker profile from the AD account and establishes a new link between John's active employee worker profile and John's AD account.
+* No changes are required in the provisioning app configuration to handle this scenario.
+
+#### Scenario 4: Future-dated conversion, when worker is an active CW/FTE
+Sometimes, a worker may already be an active contingent worker when HR initiates a future-dated worker conversion transaction. Here's an example to illustrate how provisioning is handled for this scenario and what configuration changes are required to support it.
+
+* It's January 1, 2022, and John Smith starts work at the company as a contingent worker. As there's no AD account associated with John's *WorkerID* (matching attribute), the provisioning service creates a new AD account and links John's contingent worker *WID (WorkdayID)* to John's AD account.
+* On January 15, HR initiates a transaction to convert John from contingent worker to full-time employee effective February 1, 2022.
+* Because the Azure AD provisioning service automatically processes future-dated hires, it will process John's new full-time employee worker profile on January 15 and update John's profile in AD with full-time employment details, even though he is still a contingent worker.
+* To avoid this behavior and ensure that John's FTE details get provisioned on February 1, 2022, perform the following configuration changes.
+
+ **Configuration changes**
+ 1. Engage your Workday admin to create a provisioning group called "Future-dated conversions".
+ 1. Implement logic in Workday to add employee/contingent worker records with future dated conversions to this provisioning group.
+ 1. Update the Azure AD provisioning app to read this provisioning group. For instructions, see how to [retrieve the provisioning group](#example-3-retrieving-provisioning-group-assignments).
+ 1. Create a [scoping filter](define-conditional-rules-for-provisioning-user-accounts.md) in Azure AD to exclude worker profiles that are part of this provisioning group.
+ 1. In Workday, implement logic so that when the date of conversion is effective, Workday removes the relevant employee/contingent worker record from the provisioning group in Workday.
+ 1. With this configuration, the existing employee/contingent worker record will continue to be effective and the provisioning change will happen only on the day of conversion.
>[!NOTE]
>During initial full sync, you may notice a behavior where the attribute values associated with the previous inactive worker profile flow to the AD account of converted workers. This is temporary, and as full sync progresses, they'll eventually be overwritten by attribute values from the active worker profile. Once the full sync is complete and the provisioning job reaches steady state, it will always pick the active worker profile during incremental sync.
Use the steps below to retrieve attributes associated with international job ass
1. Ensure that the Workday connection URL uses Workday Web Services API version 30.0 or later, and set the [correct XPATH values](workday-attribute-reference.md#xpath-values-for-workday-web-services-wws-api-v30) in your Workday provisioning app accordingly.
1. Use the selector `@wd:Primary_Job=0` on the `Worker_Job_Data` node to retrieve the correct attribute.
- * **Example 1:** To get `SecondaryBusinessTitle` use the XPATH `wd:Worker/wd:Worker_Data/wd:Employment_Data/wd:Worker_Job_Data[@wd:Primary_Job=0]/wd:Position_Data/wd:Business_Title/text()`
- * **Example 2:** To get `SecondaryBusinessLocation` use the XPATH `wd:Worker/wd:Worker_Data/wd:Employment_Data/wd:Worker_Job_Data[@wd:Primary_Job=0]/wd:Position_Data/wd:Business_Site_Summary_Data/wd:Location_Reference/@wd:Descriptor`
+ * **Example 1:** To get `SecondaryBusinessTitle`, use the XPATH `wd:Worker/wd:Worker_Data/wd:Employment_Data/wd:Worker_Job_Data[@wd:Primary_Job=0]/wd:Position_Data/wd:Business_Title/text()`
+ * **Example 2:** To get `SecondaryBusinessLocation`, use the XPATH `wd:Worker/wd:Worker_Data/wd:Employment_Data/wd:Worker_Job_Data[@wd:Primary_Job=0]/wd:Position_Data/wd:Business_Site_Summary_Data/wd:Location_Reference/@wd:Descriptor`
active-directory How To Gmsa Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-gmsa-cmdlets.md
Previously updated : 2022-07-01 Last updated : 07/01/2022
The following prerequisites are required to use these cmdlets.
`Set-AADCloudSyncPermissions -EACredential $credential`
-6. To restrict Active Directory permissions set by default on the cloud provisioning agent account, you can use the following cmdlet. This will increase the security of the service account by disabling permission inheritance and removing all existing permissions, except SELF and Full Control for administrators. See [Using Set-AADCloudSyncRestrictedPermission](#using-set-aadcloudsyncrestrictedpermission) below for examples on restricting the permissions.
+6. To restrict Active Directory permissions set by default on the cloud provisioning agent account, you can use the following cmdlet. This will increase the security of the service account by disabling permission inheritance and removing all existing permissions, except SELF and Full Control for administrators. See [Using Set-AADCloudSyncRestrictedPermission](#using-set-aadcloudsyncrestrictedpermissions) below for examples on restricting the permissions.
`Set-AADCloudSyncRestrictedPermission -Credential $credential`
active-directory Credential Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/credential-design.md
Title: How to customize your Microsoft Entra Verified ID (preview)
-description: This article shows you how to create your own custom verifiable credential
+ Title: Customize your Microsoft Entra Verified ID (preview)
+description: This article shows you how to create your own custom verifiable credential.
Last updated 06/22/2022
-# Customer intent: As a developer I am looking for information on how to enable my users to control their own information
+# Customer intent: As a developer, I am looking for information about how to enable my users to control their own information.
-# How to customize your verifiable credentials (preview)
+# Customize your verifiable credentials (preview)
-Verifiable credentials are made up of two components, the rules and display definitions. The rules definition determines what the user needs to provide before they receive a verifiable credential. The display definition controls the branding of the credential and styling of the claims. In this guide, we'll explain how to modify both files to meet the requirements of your organization.
+Verifiable credentials are made up of two components, *rules* definitions and *display* definitions. A rules definition determines what users need to provide before they receive a verifiable credential. A display definition controls the branding of the credential and styling of the claims.
+
+This article explains how to modify both types of files to meet the requirements of your organization.
> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

## Rules definition: Requirements from the user

The rules definition is a simple JSON document that describes important properties of verifiable credentials. In particular, it describes how claims are used to populate your verifiable credential.
-There are currently four input types that are available to configure in the rules definition. These types are used by the verifiable credential issuing service to insert claims into a verifiable credential and attest to that information with your DID. The following are the four types with explanations.
+### User-input types
+
+The following four user-input types are currently available to be configured in the rules definition. They're used by the verifiable credential issuing service to insert claims into a verifiable credential and attest to that information with your decentralized identifier (DID).
-- ID Token-- ID Token Hint-- Verifiable credentials via a verifiable presentation.-- Self-Attested Claims
+* **ID token**: When this option is configured, you'll need to provide an OpenID Connect configuration URI and specify the claims that should be included in the verifiable credential. Users are prompted to 'Sign in' on the Authenticator app to meet this requirement and add the associated claims from their account.
-**ID token:** When this option is configured, you'll need to provide an Open ID Connect configuration URI and include the claims that should be included in the VC. The user will be prompted to 'Sign in' on the Authenticator app to meet this requirement and add the associated claims from their account.
+* **ID token hint**: The sample app and tutorial use the ID token hint. When this option is configured, the relying party app needs to provide the claims that should be included in the verifiable credential in the Request Service API issuance request. Where the relying party app gets the claims from is up to the app; they can come from the current sign-in session, from back-end CRM systems, or even from self-asserted user input.
-**ID token hint:** The sample App and Tutorial use the ID Token Hint. When this option is configured, the relying party app will need to provide claims that should be included in the VC in the Request Service API issuance request. Where the relying party app gets the claims from is up to the app, but it can come from the current sign-in session, from backend CRM systems or even from self asserted user input.
+* **Verifiable credentials**: The end result of an issuance flow is to produce a verifiable credential, but you may also ask the user to present a verifiable credential in order to issue one. The rules definition can take specific claims from the presented verifiable credential and include those claims in the newly issued verifiable credential from your organization.
-**Verifiable credentials:** The end result of an issuance flow is to produce a Verifiable Credential but you may also ask the user to Present a Verifiable Credential in order to issue one. The rules definition is able to take specific claims from the presented Verifiable Credential and include those claims in the newly issued Verifiable Credential from your organization.
+* **Self-attested claims**: When this option is selected, the user can type information directly into Authenticator. At this time, strings are the only supported input for self-attested claims.
-**Self attested claims:** When this option is selected, the user will be able to directly type information into Authenticator. At this time, strings are the only supported input for self attested claims.
+ ![Detailed view of a verifiable credential card.](media/credential-design/issuance-doc.png)
-![detailed view of verifiable credential card](media/credential-design/issuance-doc.png)
+### Static claims
-**Static claims:** Additionally we can declare a static claim in the rules definition, however this input doesn't come from the user. The Issuer defines a static claim in the rules definition and would look like any other claim in the Verifiable Credential. Add a credentialSubject after vc.type and declare the attribute and the claim.
+Additionally, you can declare a static claim in the rules definition, but this input doesn't come from the user. The issuer defines a static claim in the rules definition, and it looks like any other claim in the verifiable credential. You add `credentialSubject` after `vc.type` and declare the attribute and the claim.
```json "vc": {
There are currently four input types that are available to configure in the rule
## Input type: ID token
-To get ID Token as input, the rules definition needs to configure the well-known endpoint of the OIDC compatible Identity system. In that system you need to register an application with the correct information from [Issuer service communication examples](issuer-openid.md). Additionally, the client_id needs to be put in the rules definition, and a scope parameter needs to be filled in with the correct scopes. For example, Azure Active Directory needs the email scope if you want to return an email claim in the ID token.
+To get an ID token as input, the rules definition needs to configure the well-known endpoint of the OpenID Connect (OIDC)-compatible identity system. In that system you need to register an application with the correct information from the [Issuer service communication examples](issuer-openid.md). Additionally, you need to put client_id in the rules definition and fill in a scope parameter with the correct scopes. For example, Azure Active Directory needs the email scope if you want to return an email claim in the ID token.
```json {
To get ID Token as input, the rules definition needs to configure the well-known
} ```
-See [idToken attestation](rules-and-display-definitions-model.md#idtokenattestation-type) for reference of properties.
+For more information about properties, see [idTokenAttestation type](rules-and-display-definitions-model.md#idtokenattestation-type).
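The following is a minimal sketch of a rules definition that uses the idTokens attestation. The property names are assumed from the [idTokenAttestation type](rules-and-display-definitions-model.md#idtokenattestation-type) reference; the client ID, tenant placeholder, claim mappings, and credential type are illustrative only.

```json
{
  "attestations": {
    "idTokens": [
      {
        "clientId": "00000000-0000-0000-0000-000000000000",
        "configuration": "https://login.microsoftonline.com/<your-tenant-id>/v2.0/.well-known/openid-configuration",
        "redirectUri": "vcclient://openid",
        "scope": "openid profile email",
        "mapping": [
          {
            "outputClaim": "displayName",
            "required": true,
            "inputClaim": "$.name",
            "indexed": false
          },
          {
            "outputClaim": "mail",
            "required": true,
            "inputClaim": "$.email",
            "indexed": true
          }
        ],
        "required": false
      }
    ]
  },
  "validityInterval": 2592000,
  "vc": {
    "type": [ "VerifiedEmployee" ]
  }
}
```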
## Input type: ID token hint
-To get ID Token hint as input, the rules definition shouldn't contain configuration for and OIDC Identity system but instead have the special value `https://self-issued.me` for the configuration property. The claims mappings are the same as for the ID token type, but the difference is that the claim values need to be provided by the issuance relying party app in the Request Service API issuance request.
+To get an ID token hint as input, the rules definition shouldn't contain configuration for an OIDC identity system. Instead, it should have the special value `https://self-issued.me` for the configuration property. The claims mappings are the same as for the ID token type, but the difference is that the claim values need to be provided by the issuance relying party app in the Request Service API issuance request.
```json {
To get ID Token hint as input, the rules definition shouldn't contain configurat
} ```
-See [idTokenHint attestation](rules-and-display-definitions-model.md#idtokenhintattestation-type) for reference of properties.
+For more information about properties, see [idTokenHintAttestation type](rules-and-display-definitions-model.md#idtokenhintattestation-type).
-### vc.type: Choose credential type(s)
+### vc.type: Choose credential types
-All verifiable credentials must declare their "type" in their rules definition. The type of a credential distinguishes your verifiable credentials from credentials issued by other organizations and ensures interoperability between issuers and verifiers. To indicate a credential type, you must provide one or more credential types that the credential satisfies. Each type is represented by a unique string - often a URI will be used to ensure global uniqueness. The URI doesn't need to be addressable; it's treated as a string.
+All verifiable credentials must declare their *type* in their rules definition. The credential type distinguishes your verifiable credentials from credentials that are issued by other organizations, and it ensures interoperability between issuers and verifiers.
+
+To indicate a credential type, provide one or more credential types that the credential satisfies. Each type is represented by a unique string. Often, a URI is used to ensure global uniqueness. The URI doesn't need to be addressable. It's treated as a string.
As an example, a diploma credential issued by Contoso University might declare the following types:

| Type | Purpose |
| - | - |
-| `https://schema.org/EducationalCredential` | Declares that diplomas issued by Contoso University contain attributes defined by schema.org's `EducationaCredential` object. |
-| `https://schemas.ed.gov/universityDiploma2020` | Declares that diplomas issued by Contoso University contain attributes defined by the United States department of education. |
+| `https://schema.org/EducationalCredential` | Declares that diplomas issued by Contoso University contain attributes defined by the schema.org `EducationalCredential` object. |
+| `https://schemas.ed.gov/universityDiploma2020` | Declares that diplomas issued by Contoso University contain attributes defined by the U.S. Department of Education. |
| `https://schemas.contoso.edu/diploma2020` | Declares that diplomas issued by Contoso University contain attributes defined by Contoso University. |
-Contoso declaring three types of diplomas, allows them to issue credentials that satisfy different requests from verifiers. A bank can request a set of `EducationCredential`s from a user, and the diploma can be used to satisfy the request. But the Contoso University Alumni Association can request a credential of type `https://schemas.contoso.edu/diploma2020`, and the diploma will also satisfy the request.
+By declaring three types of diplomas, Contoso can issue credentials that satisfy different requests from verifiers. A bank can request a set of `EducationalCredential`s from a user, and the diploma can be used to satisfy the request. Or the Contoso University Alumni Association can request a credential of type `https://schemas.contoso.edu/diploma2020`, and the diploma can also satisfy the request.
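As a sketch, the three example types above could be declared together in the credential's `vc.type` array in the rules definition:

```json
"vc": {
  "type": [
    "https://schema.org/EducationalCredential",
    "https://schemas.ed.gov/universityDiploma2020",
    "https://schemas.contoso.edu/diploma2020"
  ]
}
```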
-To ensure interoperability of your credentials, it's recommended that you work closely with related organizations to define credential types, schemas, and URIs for use in your industry. Many industry bodies provide guidance on the structure of official documents that can be repurposed for defining the contents of verifiable credentials. You should also work closely with the verifiers of your credentials to understand how they intend to request and consume your verifiable credentials.
+To ensure interoperability of your credentials, we recommend that you work closely with related organizations to define credential types, schemas, and URIs for use in your industry. Many industry bodies provide guidance on the structure of official documents that can be repurposed for defining the contents of verifiable credentials. You should also work closely with the verifiers of your credentials to understand how they intend to request and consume your verifiable credentials.
## Input type: Verifiable credential
->[!NOTE]
->rules definitions that ask for a verifiable credential do not use the presentation exchange format for requesting credentials. This will be updated when the Issuing Service supports the standard, Credential Manifest.
+> [!NOTE]
+> Rules definitions that ask for a verifiable credential don't use the presentation exchange format for requesting credentials. This approach will be updated when the issuing service supports the standard, Credential Manifest.
```json {
To ensure interoperability of your credentials, it's recommended that you work c
} ```
-See [verifiablePresentation attestation](rules-and-display-definitions-model.md#verifiablepresentationattestation-type) for reference of properties.
+For more information about properties, see [verifiablePresentationAttestation type](rules-and-display-definitions-model.md#verifiablepresentationattestation-type).
-## Input type: Selfattested claims
+## Input type: Self-attested claims
-During the issuance flow, the user can be asked to input some self-attested information. As of now, the only input type is a 'string'.
+During the issuance flow, users can be asked to input some self-attested information. As of now, the only input type is 'string'.
```json {
During the issuance flow, the user can be asked to input some self-attested info
} ```
-See [selfIssued attestation](rules-and-display-definitions-model.md#selfissuedattestation-type) for reference of properties.
+For more information about properties, see [selfIssuedAttestation type](rules-and-display-definitions-model.md#selfissuedattestation-type).
## Display definition: Verifiable credentials in Microsoft Authenticator
-Verifiable credentials offer a limited set of options that can be used to reflect your brand. This article provides instructions how to customize your credentials, and best practices for designing credentials that look great once issued to users.
+Verifiable credentials offer a limited set of options that can be used to reflect your brand. This article provides instructions on how to customize your credentials and best practices for designing credentials that look great after they're issued to users.
-Verifiable credentials issued to users are displayed as cards in Microsoft Authenticator. As the administrator, you may choose card color, icon, and text strings to match your organization's brand.
+Authenticator displays verifiable credentials that are issued to users as cards. As an administrator, you can choose card colors, icons, and text strings to match your organization's brand.
-![issuance documentation](media/credential-design/detailed-view.png)
+![Image of a verified credential card in Authenticator, calling out key elements.](media/credential-design/detailed-view.png)
Cards also contain customizable fields. You can use these fields to let users know the purpose of the card, the attributes it contains, and more.

## Create a credential display definition
-Much like the rules definition, the display definition is a simple JSON document that describes how the contents of your verifiable credentials should be displayed in the Microsoft Authenticator app.
+Much like the rules definition, the display definition is a simple JSON document that describes how the Authenticator app should display the contents of your verifiable credentials.
>[!NOTE]
-> At this time, this display model is only used by Microsoft Authenticator.
+> This display model is currently used only by Microsoft Authenticator.
-The display definition has the following structure.
+The display definition has the following structure:
```json {
The display definition has the following structure.
} ```
-See [Display definition model](rules-and-display-definitions-model.md#displaymodel-type) for reference of properties.
+For more information about properties, see [displayModel type](rules-and-display-definitions-model.md#displaymodel-type).
## Next steps
-Now you have a better understanding of verifiable credential design and how you can create your own to meet your needs.
+Now that you have a better understanding of verifiable credential design and how to create your own, see:
- [Issuer service communication examples](issuer-openid.md)-- Reference for [Rules and Display definitions](rules-and-display-definitions-model.md)
+- [Rules and display definition reference](rules-and-display-definitions-model.md)
active-directory How To Create A Free Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-create-a-free-developer-account.md
Title: How to create a free Azure Active Directory developer tenant
-description: This article shows you how to create a developer account
+ Title: Create a free Azure Active Directory developer tenant
+description: This article shows you how to create a developer account.
Last updated 04/01/2021
-# Customer intent: As a developer I am looking to create a developer Azure Active Directory account so I can participate in the Preview with a P2 license.
+# Customer intent: As a developer, I want to learn how to create a developer Azure Active Directory account so I can participate in the preview with a P2 license.
# Microsoft Entra Verified ID developer information
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

> [!IMPORTANT]
-> Microsoft Entra verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

> [!NOTE]
-> The requirement of an Azure AD P2 license was removed in early May 2002. Azure AD Free tier is now supported.
+> The requirement of an Azure Active Directory (Azure AD) P2 license was removed in early May 2022. The Azure AD Free tier is now supported.
-## Creating an Azure AD tenant for development
+## Create an Azure AD tenant for development
-There are two easy ways to create a free Azure Active Directory so you can onboard the Verifiable Credential service and test issuing and verifying Verifiable Credentials:
+ With a free Azure Active Directory account, you can onboard the verifiable credential service and test issuing and verifying verifiable credentials. Create a free account in either of two ways:
-- [Join](https://aka.ms/o365devprogram) the free Microsoft 365 Developer Program and get a free sandbox, tools, and other resources like an Azure Active Directory with P2 licenses. Configured Users, Groups, mailboxes etc.-- Create a new [tenant](../develop/quickstart-create-new-tenant.md) and activate a [free trial](https://azure.microsoft.com/trial/get-started-active-directory/) of Azure AD Premium P1 or P2 in your new tenant.
+- [Join the free Microsoft 365 Developer Program](https://aka.ms/o365devprogram), and get a free sandbox, tools, and other resources (for example, an Azure AD account with P2 licenses, configured users, groups, and mailboxes).
+- [Create a new tenant](../develop/quickstart-create-new-tenant.md) and [activate a free trial of Azure AD Premium P1 or P2](https://azure.microsoft.com/trial/get-started-active-directory/) in your new tenant.
If you decide to sign up for the free Microsoft 365 developer program, you need to follow a few easy steps:
-1. Click on the **Join Now** button on the screen.
+1. On the [Join the free Microsoft 365 Developer Program](https://aka.ms/o365devprogram) page, select **Join now**.
-2. Sign in with a new Microsoft Account or use an existing (work) account you already have.
+1. Sign in with a new Microsoft account or use an existing (work) account.
-3. On the sign-up page select your region, enter a company name and accept the terms and conditions of the program before you click **Next**.
+1. On the sign-up page, select your region, enter a company name, and accept the terms and conditions of the program.
-4. Click on **set up subscription**. Specify the region where you want to create your new tenant, create a username, domain, and enter a password. This will create a new tenant and the first administrator of the tenant.
+1. Select **Next**.
-5. Enter the security information needed to protect the administrator account of your new tenant. This will setup MFA authentication for the account.
+1. Select **Set up subscription**. Specify the region where you want to create your new tenant, create a username and domain, and enter a password. This creates a new tenant and the first administrator of the tenant.
+1. Enter the security information needed to protect the administrator account of your new tenant. This sets up multifactor authentication for the account.
-At this point, you have created a tenant with 25 E5 user licenses. The E5 licenses include Azure AD P2 licenses. Optionally, you can add sample data packs with users, groups, mail, and SharePoint to help you test in your development environment. For the Verifiable Credential Issuing service, they are not required.
-For your convenience, you could add your own work account as [guest](../external-identities/b2b-quickstart-add-guest-users-portal.md) in the newly created tenant and use that account to administer the tenant. If you want the guest account to be able to manage the Verifiable Credential Service you need to assign the role 'Global Administrator' to that user.
+At this point, you've created a tenant with 25 E5 user licenses. The E5 licenses include Azure AD P2 licenses. Optionally, you can add sample data packs with users, groups, mail, and SharePoint to help you test in your development environment. For the verifiable credential issuing service, they're not required.
+
+For your convenience, you could add your own work account as [guest](../external-identities/b2b-quickstart-add-guest-users-portal.md) in the newly created tenant and use that account to administer the tenant. If you want the guest account to be able to manage the verifiable credential service, you need to assign the *Global Administrator* role to that user.
## Next steps
-Now that you have a developer account you can try our [first tutorial](get-started-verifiable-credentials.md) to learn more about verifiable credentials.
+Now that you have a developer account, try our [first tutorial](get-started-verifiable-credentials.md) to learn more about verifiable credentials.
active-directory How To Use Quickstart Idtoken https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-idtoken.md
Title: How to create verifiable credentials for idTokens
-description: Learn how to use the QuickStart to create custom credentials for idTokens
+ Title: Create verifiable credentials for ID tokens
+description: Learn how to use a quickstart to create custom credentials for ID tokens
documentationCenter: ''
Last updated 06/22/2022
-#Customer intent: As an administrator, I am looking for information to help me disable
+#Customer intent: As an administrator, I am looking for information to help me create verifiable credentials for ID tokens.
-# How to create verifiable credentials for idTokens
+# Create verifiable credentials for ID tokens
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]

> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) using the [idTokens attestation](rules-and-display-definitions-model.md#idtokenattestation-type) will produce an issuance flow where the user will be required to do an interactive sign-in to an OIDC identity provider in the Authenticator. Claims in the id_token the identity provider returns can be used to populate the issued verifiable credential. The claims mapping section in the rules definition specifies which claims are used.
+A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [idTokens attestation](rules-and-display-definitions-model.md#idtokenattestation-type) produces an issuance flow where you're required to do an interactive sign-in to an OpenID Connect (OIDC) identity provider in Microsoft Authenticator. Claims in the ID token that the identity provider returns can be used to populate the issued verifiable credential. The claims mapping section in the rules definition specifies which claims are used.
-## Create a Custom credential with the idTokens attestation type
+## Create a custom credential with the idTokens attestation type
-When you select + Add credential in the portal, you get the option to launch two Quickstarts. Select [x] Custom credential and select Next.
+In the Azure portal, when you select **Add credential**, you get the option to launch two quickstarts. Select **custom credential**, and then select **Next**.
-![Screenshot of VC quickstart](media/how-to-use-quickstart/quickstart-startscreen.png)
+![Screenshot of the "Issue credentials" quickstart for creating a custom credential.](media/how-to-use-quickstart/quickstart-startscreen.png)
-In the next screen, you enter JSON for the Display and the Rules definitions and give the credential a type name. Select Create to create the credential.
+On the **Create a new credential** page, enter the JSON code for the display and the rules definitions. In the **Credential name** box, give the credential a type name. To create the credential, select **Create**.
-![screenshot of create new credential section with JSON sample](media/how-to-use-quickstart/quickstart-create-new.png)
+![Screenshot of the "Create a new credential" page, displaying JSON samples for the display and rules files.](media/how-to-use-quickstart/quickstart-create-new.png)
-## Sample JSON Display definitions
+## Sample JSON display definitions
-The Display JSON definition is very much the same regardless of attestation type. You just have to adjust the labels depending on what claims your VC have. The Display JSON definition is the same regardless of attestation type. The expected JSON for the Display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, you add multiple entries with a comma as separator.
+The JSON display definition is nearly the same, regardless of attestation type. You only have to adjust the labels according to the claims that your verifiable credential has. The expected JSON for the display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, add multiple entries with a comma as separator.
```json {
The Display JSON definition is very much the same regardless of attestation type
} ```
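As an illustrative sketch, a single locale entry might look like the following. The property names are assumed from the [displayModel type](rules-and-display-definitions-model.md#displaymodel-type) reference, and the colors, logo URI, and claim labels are placeholders only.

```json
{
  "locale": "en-US",
  "card": {
    "title": "University Diploma",
    "issuedBy": "Contoso University",
    "backgroundColor": "#507090",
    "textColor": "#ffffff",
    "description": "This card certifies that the holder earned a diploma from Contoso University.",
    "logo": {
      "uri": "https://contoso.edu/images/logo.png",
      "description": "Contoso University logo"
    }
  },
  "consent": {
    "title": "Do you want to get your digital diploma?",
    "instructions": "Sign in with your account to get your card."
  },
  "claims": [
    {
      "claim": "vc.credentialSubject.firstName",
      "label": "First name",
      "type": "String"
    },
    {
      "claim": "vc.credentialSubject.lastName",
      "label": "Last name",
      "type": "String"
    }
  ]
}
```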
-## Sample JSON Rules definitions
+## Sample JSON rules definitions
-The JSON attestation definition should contain the **idTokens** name, the [OIDC configuration details](rules-and-display-definitions-model.md#idtokenattestation-type) and the claims mapping section. The expected JSON for the Rules definitions is the inner content of the rules attribute, which starts with the attestation attribute. The claims mapping in the below example will require that you do the token configuration as explained below in the section [Claims in id_token from Identity Provider](#claims-in-id_token-from-identity-provider).
+The JSON attestation definition should contain the **idTokens** name, the [OIDC configuration details](rules-and-display-definitions-model.md#idtokenattestation-type) and the claims mapping section. The expected JSON for the rules definitions is the inner content of the rules attribute, which starts with the attestation attribute.
+
+The claims mapping in the following example requires that you configure the token as explained in the [Claims in the ID token from the identity provider](#claims-in-the-id-token-from-the-identity-provider) section.
```json {
The JSON attestation definition should contain the **idTokens** name, the [OIDC
} ```
-## Application Registration
+## Application registration
+
+The clientId attribute is the application ID of a registered application in the OIDC identity provider. For Azure Active Directory, you create the application by doing the following:
+
+1. In the Azure portal, go to [Azure Active Directory](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps).
+
+1. Select **App registrations**, select **New registration**, and then give the app a name.
-The **clientId** attribute is the AppId of a registered application in the OIDC identity provider. For **Azure Active Directory**, you create the application via these steps.
+ If you want only accounts in your tenant to be able to sign in, keep the **Accounts in this directory only** checkbox selected.
-1. Navigate to [Azure Active Directory in portal.azure.com](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps).
-1. Select **App registrations** and select on **+New registration** and give the app a name
-1. Let the selection of **Accounts in this directory only** if you only want accounts in your tenant to be able to sign in
-1. In **Redirect URI (optional)**, select **Public client/native (mobile & desktop)** and enter value **vcclient://openid**
+1. In **Redirect URI (optional)**, select **Public client/native (mobile & desktop)**, and then enter **vcclient://openid**.
-If you want to be able to test what claims are in the token, do the following
-1. Select **Authentication** in the left hand menu and do
-1. **+Add platform**
-1. **Web**
-1. Enter **https://jwt.ms** as **Redirect URI** and select **ID Tokens (used for implicit and hybrid flows)**
-1. Select on **Configure**
+If you want to be able to test what claims are in the token, do the following:
-Once you finish testing your id_token, you should consider removing **https://jwt.ms** and the support for **implicit and hybrid flows**.
+1. On the left pane, select **Authentication**> **Add platform** > **Web**.
-For **Azure Active Directory**, you can test your app registration and that you get an id_token via running the following in the browser if you have enabled support for redirecting to jwt.ms.
+1. For **Redirect URI**, enter **https://jwt.ms**, and then select **ID Tokens (used for implicit and hybrid flows)**.
+
+1. Select **Configure**.
+
+After you've finished testing your ID token, consider removing **https://jwt.ms** and the support for **implicit and hybrid flows**.
+
+**For Azure Active Directory**: You can test your app registration and, if you've enabled support for redirecting to **https://jwt.ms**, you can get an ID token by running the following in your browser:
```http https://login.microsoftonline.com/<your-tenantId>/oauth2/v2.0/authorize?client_id=<your-appId>&nonce=defaultNonce&redirect_uri=https%3A%2F%2Fjwt.ms&scope=openid%20profile&response_type=id_token&prompt=login ```
-Replace the \<your-tenantId>. Note that you need to have **profile** as part of the **scope** in order to get the extra claims.
+In the code, replace \<your-tenantId> with your tenant ID. To get the extra claims, you need to have **profile** as part of the **scope**.
+
+**For Azure Active Directory B2C**: The app registration process is the same, but B2C has built-in support in the Azure portal for testing your B2C policies via the **Run user flow** functionality.
-For **Azure Active Directory B2C**, the app registration process is the same but B2C has built in support in the portal for testing your B2C policies via the **Run user flow** functionality.
+## Claims in the ID token from the identity provider
-## Claims in id_token from Identity Provider
+Claims must exist in the token that's returned by the identity provider so that they can successfully populate your verifiable credential.
-Claims must exist in the returned identity provider so that they can successfully populate your VC.
-If the claims don't exist, there will be no value in the issued VC. Most OIDC identity providers don't issue a claim in an id_token if the claim has a null value in the user's profile. Make sure you include the claim in the id_token definition and that the user has a value for the claim in the user profile.
+If the claims don't exist, there's no value in the issued verifiable credential. Most OIDC identity providers don't issue a claim in an ID token if the claim has a null value in your profile. Be sure to include the claim in the ID token definition, and ensure that you've entered a value for the claim in your user profile.
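For example, a decoded ID token that can populate first-name and last-name claims might look like the following sketch; which claims actually appear depends on your optional-claims configuration and on whether the user's profile has values for them (all values here are placeholders):

```json
{
  "aud": "00000000-0000-0000-0000-000000000000",
  "iss": "https://login.microsoftonline.com/<your-tenantId>/v2.0",
  "preferred_username": "megan@contoso.com",
  "given_name": "Megan",
  "family_name": "Bowen"
}
```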
-For **Azure Active Directory**, see documentation [Provide optional claims to your app](../../active-directory/develop/active-directory-optional-claims.md) on how to configure what claims to include in your token. The configuration is per application, so the configuration you make should be for the app with AppId specified in the **clientId** in the rules definition.
+**For Azure Active Directory**: To configure the claims to include in your token, see [Provide optional claims to your app](../../active-directory/develop/active-directory-optional-claims.md). The configuration is per application, so this configuration should be for the app that has the application ID specified in the client ID in the rules definition.
-To match the above Display & Rules definition, you should have your application manifest having its **optionalClaims** looking like below.
+To match the display and rules definitions, you should make your application's optionalClaims JSON look like the following:
```json "optionalClaims": {
To match the above Display & Rules definition, you should have your application
}, ```
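A hedged sketch of an **optionalClaims** section that emits `given_name` and `family_name` in the ID token follows; the claim names are assumptions chosen to match the ID token claims discussed above:

```json
"optionalClaims": {
  "idToken": [
    {
      "name": "given_name",
      "source": null,
      "essential": false,
      "additionalProperties": []
    },
    {
      "name": "family_name",
      "source": null,
      "essential": false,
      "additionalProperties": []
    }
  ],
  "accessToken": [],
  "saml2Token": []
}
```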
-For **Azure Active Directory B2C**, configuring other claims in your id_token depends on if your B2C policy is a **User Flow** or a **Custom Policy**. For documentation on User Flows, see [Set up a sign-up and sign-in flow in Azure Active Directory B2C](../../active-directory-b2c/add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow) and for Custom Policy, see documentation [Provide optional claims to your app](../../active-directory-b2c/configure-tokens.md?pivots=b2c-custom-policy#provide-optional-claims-to-your-app).
+**For Azure Active Directory B2C**: Configuring other claims in your ID token depends on whether your B2C policy is a *user flow* or a *custom policy*. For information about user flows, see [Set up a sign-up and sign-in flow in Azure Active Directory B2C](../../active-directory-b2c/add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow). For information about custom policy, see [Provide optional claims to your app](../../active-directory-b2c/configure-tokens.md?pivots=b2c-custom-policy#provide-optional-claims-to-your-app).
For other identity providers, see the relevant documentation.
-## Configure the samples to issue and verify your Custom credential
+## Configure the samples to issue and verify your custom credential
-To configure your sample code to issue and verify using custom credentials, you need:
+To configure your sample code to issue and verify your custom credentials, you need:
-- Your tenant's issuer DID
+- Your tenant's issuer decentralized identifier (DID)
- The credential type
-- The manifest url to your credential.
+- The manifest URL to your credential
-The easiest way to find this information for a Custom Credential is to go to your credential in the portal, select **Issue credential** and switch to Custom issue.
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**, and then switch to the custom issue.
-![Screenshot of QuickStart issue credential screen.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
+![Screenshot of the quickstart "Issue credential" page.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
-After switching to custom issue, you have access to a textbox with a JSON payload for the Request Service API. Replace the place holder values with your environment's information. The issuer’s DID is the authority value.
+After you've switched to the custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer’s DID is the authority value.
-![Screenshot of Quickstart custom issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
+![Screenshot of the quickstart custom credential issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
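The exact payload depends on the Request Service API version, but as a rough, hedged sketch it shows where the three values above go; every value here is a placeholder:

```json
{
  "authority": "<your-issuer-DID>",
  "registration": {
    "clientName": "Verified Credential Expert Sample"
  },
  "callback": {
    "url": "https://contoso.com/api/issuer/issuanceCallback",
    "state": "<your-state-value>",
    "headers": {
      "api-key": "<your-api-key>"
    }
  },
  "issuance": {
    "type": "<your-credential-type>",
    "manifest": "<your-manifest-URL>"
  }
}
```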
## Next steps
-- Reference for [Rules and Display definitions model](rules-and-display-definitions-model.md)
+See the [Rules and display definitions reference](rules-and-display-definitions-model.md).
active-directory How To Use Quickstart Selfissued https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-selfissued.md
Title: How to create verifiable credentials for self-asserted claims
-description: Learn how to use the QuickStart to create custom credentials for self-issued
+ Title: Create verifiable credentials for self-asserted claims
+description: Learn how to use a quickstart to create custom credentials for self-issued claims
documentationCenter: ''
Last updated 06/22/2022
-#Customer intent: As a verifiable credentials Administrator, I want to create a verifiable credential for self-asserted claims scenario
+#Customer intent: As a verifiable credentials administrator, I want to create a verifiable credential for self-asserted claims scenario.
-# How to create verifiable credentials for self-asserted claims
+# Create verifiable credentials for self-asserted claims
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) using the [selfIssued attestation](rules-and-display-definitions-model.md#selfissuedattestation-type) will produce an issuance flow where the user will be required to manually enter values for the claims in the Authenticator.
+A [rules definition](rules-and-display-definitions-model.md#rulesmodel-type) that uses the [selfIssued attestation](rules-and-display-definitions-model.md#selfissuedattestation-type) type produces an issuance flow where you're required to manually enter values for the claims in Microsoft Authenticator.
-## Create a Custom credential with the selfIssued attestation type
+## Create a custom credential with the selfIssued attestation type
-When you select + Add credential in the portal, you get the option to launch two QuickStarts. Select [x] Custom credential and select Next.
+In the Azure portal, when you select **Add credential**, you get the option to launch two quickstarts. Select **custom credential**, and then select **Next**.
-![Screenshot of VC quickstart](media/how-to-use-quickstart/quickstart-startscreen.png)
+![Screenshot of the "Issue credentials" quickstart for creating a custom credential.](media/how-to-use-quickstart/quickstart-startscreen.png)
-In the next screen, you enter JSON for the Display and the Rules definitions and give the credential a type name. Select Create to create the credential.
+On the **Create a new credential** page, enter the JSON code for the display and the rules definitions. In the **Credential name** box, give the credential a type name. To create the credential, select **Create**.
-![screenshot of create new credential section with JSON sample](media/how-to-use-quickstart/quickstart-create-new.png)
+![Screenshot of the "Create a new credential" page, displaying JSON samples for the display and rules files.](media/how-to-use-quickstart/quickstart-create-new.png)
-## Sample JSON Display definitions
+## Sample JSON display definitions
-The Display JSON definition is very much the same regardless of attestation type. You just have to adjust the labels depending on what claims your VC have. The expected JSON for the Display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, you add multiple entries with a comma as separator.
+The JSON display definition is nearly the same, regardless of attestation type. You only have to adjust the labels according to the claims that your verifiable credential has. The expected JSON for the display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, add multiple entries with a comma as separator.
```json {
The Display JSON definition is very much the same regardless of attestation type
} ```
-## Sample JSON Rules definitions
+## Sample JSON rules definitions
-The JSON attestation definition should contain the **selfIssued** name and the claims mapping section. Since the claims are selfIssued, the value will be the same for the **outputClaim** and the **inputClaim**. The expected JSON for the Rules definitions is the inner content of the rules attribute, which starts with the attestation attribute.
+The JSON attestation definition should contain the **selfIssued** name and the claims mapping section. Because the claims are self issued, the value is the same for **outputClaim** and **inputClaim**. The expected JSON for the rules definitions is the inner content of the rules attribute, which starts with the attestation attribute.
```json {
The JSON attestation definition should contain the **selfIssued** name and the c
} ```
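As a hedged sketch of that inner content, a selfIssued attestation with two self-entered claims might look like the following; the claim names are placeholders, and the input and output claim names match because the values are self issued:

```json
{
  "attestations": {
    "selfIssued": {
      "mapping": [
        {
          "outputClaim": "displayName",
          "required": true,
          "inputClaim": "displayName",
          "indexed": false
        },
        {
          "outputClaim": "companyName",
          "required": true,
          "inputClaim": "companyName",
          "indexed": false
        }
      ],
      "required": false
    }
  }
}
```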
## Claims input during issuance
-During issuance, the Microsoft Authenticator will prompt the user to enter values for the specified claims. There's no validation of user input.
+During issuance, Authenticator prompts you to enter values for the specified claims. User input isn't validated.
-![selfIssued claims input](media/how-to-use-quickstart-selfissued\selfIssued-claims-input.png)
+![Screenshot of selfIssued claims input.](media/how-to-use-quickstart-selfissued\selfIssued-claims-input.png)
-## Configure the samples to issue and verify your Custom credential
+## Configure the samples to issue and verify your custom credential
-To configure your sample code to issue and verify using custom credentials, you need:
+To configure your sample code to issue and verify your custom credential, you need:
-- Your tenant's issuer DID
+- Your tenant's issuer decentralized identifier (DID)
- The credential type
-- The manifest url to your credential.
+- The manifest URL to your credential
-The easiest way to find this information for a Custom Credential is to go to your credential in the portal, select **Issue credential** and switch to Custom issue.
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**, and then switch to the custom issue.
-![Screenshot of QuickStart issue credential screen.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
+![Screenshot of the quickstart "Issue credential" page.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
-After switching to custom issue, you have access to a textbox with a JSON payload for the Request Service API. Replace the place holder values with your environment's information. The issuer’s DID is the authority value.
+After you've switched to the custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer’s DID is the authority value.
-![Screenshot of Quickstart custom issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
+![Screenshot of the quickstart custom credential issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
## Next steps
-- Reference for [Rules and Display definitions model](rules-and-display-definitions-model.md)
+See the [Rules and display definitions reference](rules-and-display-definitions-model.md).
active-directory How To Use Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart.md
Title: How to create verifiable credentials for ID token hint
-description: Learn how to use the QuickStart to create custom verifiable credential for ID token hint
+ Title: Create verifiable credentials for an ID token hint
+description: In this article, you learn how to use a quickstart to create a custom verifiable credential for an ID token hint.
documentationCenter: ''
Last updated 06/16/2022
-#Customer intent: As a verifiable credentials Administrator, I want to create a verifiable credential for the ID token hint scenario
+#Customer intent: As a verifiable credentials administrator, I want to create a verifiable credential for the ID token hint scenario.
-# How to create verifiable credentials for ID token hint
+# Create verifiable credentials for ID token hint
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
> [!IMPORTANT]
-> Microsoft Entra Verified ID is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> Microsoft Entra Verified ID is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
-To use the Microsoft Entra Verified ID QuickStart, you only need to complete verifiable credentials onboarding.
+To use the Microsoft Entra Verified ID quickstart, you need only to complete the verifiable credentials onboarding process.
-## What is the QuickStart?
+## What is the quickstart?
-Azure AD verifiable Credentials now come with a QuickStart in the portal for creating custom credentials. When using the QuickStart, you don't need to edit and upload of display and rules files to Azure Storage. Instead you enter all details in the portal and create the credential in one page.
+Azure Active Directory verifiable credentials now come with a quickstart in the Azure portal for creating custom credentials. When you use the quickstart, you don't need to edit and upload rules and display files to Azure Storage. Instead, you enter all details in the Azure portal and create the credential on a single page.
>[!NOTE]
->When working with custom credentials, you provide display and rules definitions in JSON documents. These definitions are now stored together with the credential's details.
+>When you work with custom credentials, you provide display definitions and rules definitions in JSON documents. These definitions are stored with the credential details.
-## Create a Custom credential
+## Create a custom credential
-When you select + Add credential in the portal, you get the option to launch two Quickstarts. Select [x] Custom credential and select Next.
+In the Azure portal, when you select **Add credential**, you get the option to launch two quickstarts. Select **custom credential**, and then select **Next**.
-![Screenshot of VC quickstart](media/how-to-use-quickstart/quickstart-startscreen.png)
+![Screenshot of the "Issue credentials" quickstart for creating a custom credential.](media/how-to-use-quickstart/quickstart-startscreen.png)
-In the next screen, you enter JSON for the Display and the Rules definitions and give the credential a type name. Select Create to create the credential.
+On the **Create a new credential** page, enter the JSON code for the rules and display definitions. In the **Credential name** box, give the credential a type name. To create the credential, select **Create**.
-![screenshot of create new credential section with JSON sample](media/how-to-use-quickstart/quickstart-create-new.png)
+![Screenshot of the "Create a new credential" page, displaying JSON samples for the rules and display files.](media/how-to-use-quickstart/quickstart-create-new.png)
-## Sample JSON Display definitions
+## Sample JSON display definitions
-The expected JSON for the Display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, you add multiple entries with a comma as separator.
+The expected JSON for the display definitions is the inner content of the displays collection. The JSON is a collection, so if you want to support multiple locales, you add multiple entries, with a comma as a separator.
```json {
The expected JSON for the Display definitions is the inner content of the displa
} ```
-## Sample JSON Rules definitions
+## Sample JSON rules definitions
-The expected JSON for the Rules definitions is the inner content of the rules attribute, which starts with the attestation attribute.
+The expected JSON for the rules definitions is the inner content of the rules attribute, which starts with the attestation attribute.
```json {
The expected JSON for the Rules definitions is the inner content of the rules at
} ```
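As a hedged sketch only (the article's own sample is collapsed above), an ID token hint attestation with a claims mapping might look like the following; the claim names are placeholders:

```json
{
  "attestations": {
    "idTokenHints": [
      {
        "mapping": [
          {
            "outputClaim": "firstName",
            "required": true,
            "inputClaim": "$.given_name",
            "indexed": false
          },
          {
            "outputClaim": "lastName",
            "required": true,
            "inputClaim": "$.family_name",
            "indexed": true
          }
        ],
        "required": false
      }
    ]
  }
}
```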
-## Configure the samples to issue and verify your Custom credential
+## Configure the samples to issue and verify your custom credential
-To configure your sample code to issue and verify using custom credentials, you need:
+To configure your sample code to issue and verify by using custom credentials, you need:
-- Your tenant's issuer DID
+- Your tenant's issuer decentralized identifier (DID)
- The credential type
-- The manifest url to your credential.
+- The manifest URL to your credential
-The easiest way to find this information for a Custom Credential is to go to your credential in the portal, select **Issue credential** and switch to Custom issue.
+The easiest way to find this information for a custom credential is to go to your credential in the Azure portal. Select **Issue credential**, and then switch to the custom issue.
-![Screenshot of QuickStart issue credential screen.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
+![Screenshot of the quickstart "Issue credential" page.](media/how-to-use-quickstart/quickstart-config-sample-1.png)
-After switching to custom issue, you have access to a textbox with a JSON payload for the Request Service API. Replace the place holder values with your environment's information. The issuer’s DID is the authority value.
+After you've switched to the custom issue, you have access to a text box with a JSON payload for the Request Service API. Replace the placeholder values with your environment's information. The issuer’s DID is the authority value.
-![Screenshot of Quickstart custom issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
+![Screenshot of the quickstart custom credential issue.](media/how-to-use-quickstart/quickstart-config-sample-2.png)
## Next steps
-- Reference for [Rules and Display definitions model](rules-and-display-definitions-model.md)
-- Reference for creating a credential using the [idToken] attestation (idtoken-reference.md)
+For more information, see:
+- [Rules and display definitions reference](rules-and-display-definitions-model.md)
active-directory Verifiable Credentials Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-standards.md
Title: Current and upcoming standards
+ Title: Microsoft Entra Verified ID-supported standards
description: This article outlines current and upcoming standards
Last updated 06/22/2022
-# Customer intent: As a developer I am looking for information around the open standards supported by Microsoft Entra verified ID
+# Customer intent: As a developer, I'm looking for information about the open standards that are supported by Microsoft Entra Verified ID.
-# Entra Verified ID supported standards
+# Microsoft Entra Verified ID-supported standards
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
-Microsoft is actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. We’ve worked with these groups to identify and develop critical standards, and have implemented the open standards in our services. This page outlines currently supported open standards for Microsoft Entra Verified ID.
+Microsoft is actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. We’re working with these groups to identify and develop critical standards, and we've implemented the open standards in our services.
-## Standard bodies
+In this article, you'll find the currently supported open standards for Microsoft Entra Verified ID.
+
+## Standards bodies
- [OpenID Foundation (OIDF)](https://openid.net/foundation/) - [Decentralized Identity Foundation (DIF)](https://identity.foundation/)
Microsoft is actively collaborating with members of the Decentralized Identity F
Entra Verified ID supports the following open standards:
-| Component in a Tech Stack | Open Standard | Standard Body |
+| Technology stack component | Open standard | Standard body |
|:|:--|:--|
-| Data Model | [Verifiable Credentials Data Model v1.1](https://www.w3.org/TR/vc-data-model) | W3C VC WG |
-| Credential Format | [JSON Web Token VC (JWT-VC)](https://www.w3.org/TR/vc-data-model/#json-web-token) - encoded as JSON and signed as a JWS ([RFC7515](https://datatracker.ietf.org/doc/html/rfc7515)) | W3C VC WG /IETF |
-| Entity Identifier (Issuer, Verifier) | [did:web](https://github.com/w3c-ccg/did-method-web) | W3C CCG |
-| Entity Identifier (Issuer, Verifier, User) | [did:ion](https://github.com/decentralized-identity/ion)| DIF |
-| User Authentication | [Self-Issued OpenID Provider v2](https://openid.net/specs/openid-connect-self-issued-v2-1_0.html)| OIDF |
+| Data model | [Verifiable Credentials Data Model v1.1](https://www.w3.org/TR/vc-data-model) | W3C VC WG |
+| Credential format | [JSON Web Token VC (JWT-VC)](https://www.w3.org/TR/vc-data-model/#json-web-token) - encoded as JSON and signed as a JWS ([RFC7515](https://datatracker.ietf.org/doc/html/rfc7515)) | W3C VC WG /IETF |
+| Entity identifier (issuer, verifier) | [did:web](https://github.com/w3c-ccg/did-method-web) | W3C CCG |
+| Entity identifier (issuer, verifier, user) | [did:ion](https://github.com/decentralized-identity/ion)| DIF |
+| User authentication | [Self-Issued OpenID Provider v2](https://openid.net/specs/openid-connect-self-issued-v2-1_0.html)| OIDF |
| Presentation | [OpenID for Verifiable Credentials](https://openid.net/specs/openid-connect-4-verifiable-presentations-1_0.html) | OIDF|
| Query language | [Presentation Exchange v1.0](https://identity.foundation/presentation-exchange/spec/v1.0.0/)| DIF |
-| User Authentication | [Self-Issued OpenID Provider v2](https://openid.net/specs/openid-connect-self-issued-v2-1_0.html)| OIDF |
-| Trust in DID Owner | [Well Known DID Configuration](https://identity.foundation/.well-known/resources/did-configuration)| DIF |
+| User authentication | [Self-Issued OpenID Provider v2](https://openid.net/specs/openid-connect-self-issued-v2-1_0.html)| OIDF |
+| Trust in DID (decentralized identifier) owner | [Well Known DID Configuration](https://identity.foundation/.well-known/resources/did-configuration)| DIF |
| Revocation |[Verifiable Credential Status List 2021](https://github.com/w3c-ccg/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e)| W3C CCG |
## Supported algorithms
-Entra Verified ID supports the following Key Types for the JWS signature verification:
+Microsoft Entra Verified ID supports the following key types for the JSON Web Signature (JWS) signature verification:
-|Key Type|JWT Algorithm|
+|Key type|JWT algorithm|
|--|-|
|secp256k1|ES256K|
|Ed25519|EdDSA|
## Interoperability
-Microsoft is collaborating with organizations members of Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. Our collaboration efforts aim to build a Verifiable Credentials Interoperability profile to support standards based issuance, revocation, presentation and wallet portability.
+Microsoft is collaborating with member organizations of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. Our collaboration efforts aim to build a Verifiable Credentials Interoperability profile to support standards-based issuance, revocation, presentation, and wallet portability.
-Today, we have a working JWT VC presentation profile that supports interoperable presentation of Verifiable Credentials between Wallets and Verifiers/RPs. Join us at DIF Claims and Credentials working group: [aka.ms/vcinterop](https://aka.ms/vcinterop)
+Today, we have a working JWT verifiable credentials presentation profile that supports the interoperable presentation of verifiable credentials between wallets and verifiers/relying parties. Join us at the DIF Claims and Credentials working group, [aka.ms/vcinterop](https://aka.ms/vcinterop).
## Next steps
-- [Get started with verifiable credentials](verifiable-credentials-configure-tenant.md)
+- [Get started with verifiable credentials](verifiable-credentials-configure-tenant.md)
applied-ai-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-create-immersive-reader.md
Previously updated : 11/11/2021 Last updated : 7/1/2022
The script is designed to be flexible. It will first look for existing Immersive
[Parameter(Mandatory=$true)] [String] $ResourceLocation, [Parameter(Mandatory=$true)] [String] $ResourceGroupName, [Parameter(Mandatory=$true)] [String] $ResourceGroupLocation,
- [Parameter(Mandatory=$true)] [String] $AADAppDisplayName="ImmersiveReaderAAD",
+ [Parameter(Mandatory=$true)] [String] $AADAppDisplayName,
[Parameter(Mandatory=$true)] [String] $AADAppIdentifierUri,
- [Parameter(Mandatory=$true)] [String] $AADAppClientSecret,
[Parameter(Mandatory=$true)] [String] $AADAppClientSecretExpiration ) {
The script is designed to be flexible. It will first look for existing Immersive
$clientId = az ad app show --id $AADAppIdentifierUri --query "appId" -o tsv if (-not $clientId) { Write-Host "Creating new Azure Active Directory app"
- $clientId = az ad app create --password $AADAppClientSecret --end-date "$AADAppClientSecretExpiration" --display-name $AADAppDisplayName --identifier-uris $AADAppIdentifierUri --query "appId" -o tsv
-
+ $clientId = az ad app create --display-name $AADAppDisplayName --identifier-uris $AADAppIdentifierUri --query "appId" -o tsv
if (-not $clientId) {
- throw "Error: Failed to create Azure Active Directory app"
+ throw "Error: Failed to create Azure Active Directory application"
+ }
+ Write-Host "Azure Active Directory application created successfully."
+
+ $clientSecret = az ad app credential reset --id $clientId --end-date "$AADAppClientSecretExpiration" --query "password" | % { $_.Trim('"') }
+ if (-not $clientSecret) {
+ throw "Error: Failed to create Azure Active Directory application client secret"
}
- Write-Host "Azure Active Directory app created successfully."
- Write-Host "NOTE: To manage your Active Directory app client secrets after this Immersive Reader Resource has been created please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> $AADAppDisplayName -> Certificates and Secrets blade -> Client Secrets section" -ForegroundColor Yellow
+ Write-Host "Azure Active Directory application client secret created successfully."
+
+ Write-Host "NOTE: To manage your Active Directory application client secrets after this Immersive Reader Resource has been created please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section" -ForegroundColor Yellow
} # Create a service principal if it doesn't already exist
- $principalId = az ad sp show --id $AADAppIdentifierUri --query "objectId" -o tsv
+ $principalId = az ad sp show --id $AADAppIdentifierUri --query "id" -o tsv
if (-not $principalId) { Write-Host "Creating new service principal" az ad sp create --id $clientId | Out-Null
- $principalId = az ad sp show --id $AADAppIdentifierUri --query "objectId" -o tsv
+ $principalId = az ad sp show --id $AADAppIdentifierUri --query "id" -o tsv
if (-not $principalId) { throw "Error: Failed to create new service principal" } Write-Host "New service principal created successfully"
- }
- # Sleep for 5 seconds to allow the new service principal to propagate
- Write-Host "Sleeping for 5 seconds"
- Start-Sleep -Seconds 5
+ # Sleep for 5 seconds to allow the new service principal to propagate
+ Write-Host "Sleeping for 5 seconds"
+ Start-Sleep -Seconds 5
+ }
Write-Host "Granting service principal access to the newly created Immersive Reader resource" $accessResult = az role assignment create --assignee $principalId --scope $resourceId --role "Cognitive Services Immersive Reader User"
The script is designed to be flexible. It will first look for existing Immersive
$result = @{} $result.TenantId = $tenantId $result.ClientId = $clientId
- $result.ClientSecret = $AADAppClientSecret
+ $result.ClientSecret = $clientSecret
$result.Subdomain = $ResourceSubdomain
- Write-Host "Success! " -ForegroundColor Green -NoNewline
- Write-Host "Save the following JSON object to a text file for future reference:"
+ Write-Host "`nSuccess! " -ForegroundColor Green -NoNewline
+ Write-Host "Save the following JSON object to a text file for future reference."
+ Write-Host "*****"
+ if($clientSecret -ne $null) {
+
+ Write-Host "This function has created a client secret (password) for you. This secret is used when calling Azure Active Directory to fetch access tokens."
+ Write-Host "This is the only time you will ever see the client secret for your Azure Active Directory application, so save it now." -ForegroundColor Yellow
+ }
+ else{
+ Write-Host "You will need to retrieve the ClientSecret from your original run of this function that created it. If you don't have it, you will need to go create a new client secret for your Azure Active Directory application. Please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section." -ForegroundColor Yellow
+ }
+ Write-Host "*****`n"
Write-Output (ConvertTo-Json $result) } ```
The script is designed to be flexible. It will first look for existing Immersive
1. Run the function `Create-ImmersiveReaderResource`, supplying the '<PARAMETER_VALUES>' placeholders below with your own values as appropriate. ```azurepowershell-interactive
- Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<AAD_APP_DISPLAY_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' -AADAppClientSecret '<AAD_APP_CLIENT_SECRET>' -AADAppClientSecretExpiration '<AAD_APP_CLIENT_SECRET_EXPIRATION>'
+ Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<AAD_APP_DISPLAY_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' -AADAppClientSecretExpiration '<AAD_APP_CLIENT_SECRET_EXPIRATION>'
```
- The full command will look something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. Do not copy or use this command as-is. Copy and use the command above with your own values. This example has dummy values for the '<PARAMETER_VALUES>' above. Yours will be different, as you will come up with your own names for these values.
+ The full command will look something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. __Do not copy or use this command as-is.__ Copy and use the command above with your own values. This example has dummy values for the '<PARAMETER_VALUES>' above. Yours will be different, as you will come up with your own names for these values.
``` Create-ImmersiveReaderResource
The script is designed to be flexible. It will first look for existing Immersive
-ResourceGroupLocation 'westus2' -AADAppDisplayName 'MyOrganizationImmersiveReaderAADApp' -AADAppIdentifierUri 'api://MyOrganizationImmersiveReaderAADApp'
- -AADAppClientSecret 'SomeStrongPassword'
-AADAppClientSecretExpiration '2021-12-31' ```
The script is designed to be flexible. It will first look for existing Immersive
| ResourceName | Must be alphanumeric, and may contain '-', as long as the '-' is not the first or last character. Length may not exceed 63 characters.| | ResourceSubdomain |A custom subdomain is needed for your Immersive Reader resource. The subdomain is used by the SDK when calling the Immersive Reader service to launch the Reader. The subdomain must be globally unique. The subdomain must be alphanumeric, and may contain '-', as long as the '-' is not the first or last character. Length may not exceed 63 characters. This parameter is optional if the resource already exists. | | ResourceSKU |Options: `S0` (Standard tier) or `S1` (Education/Nonprofit organizations). Visit our [Cognitive Services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/) to learn more about each available SKU. This parameter is optional if the resource already exists. |
- | ResourceLocation |Options: `eastus`, `eastus2`, `southcentralus`, `westus`, `westus2`, `australiaeast`, `southeastasia`, `centralindia`, `japaneast`, `northeurope`, `uksouth`, `westeurope`. This parameter is optional if the resource already exists. |
+ | ResourceLocation |Options: `australiaeast`, `brazilsouth`, `canadacentral`, `centralindia`, `centralus`, `eastasia`, `eastus`, `eastus2`, `francecentral`, `germanywestcentral`, `japaneast`, `japanwest`, `jioindiawest`, `koreacentral`, `northcentralus`, `northeurope`, `norwayeast`, `southafricanorth`, `southcentralus`, `southeastasia`, `swedencentral`, `switzerlandnorth`, `switzerlandwest`, `uaenorth`, `uksouth`, `westcentralus`, `westeurope`, `westus`, `westus2`, `westus3`. This parameter is optional if the resource already exists. |
| ResourceGroupName |Resources are created in resource groups within subscriptions. Supply the name of an existing resource group. If the resource group does not already exist, a new one with this name will be created. | | ResourceGroupLocation |If your resource group doesn't exist, you need to supply a location in which to create the group. To find a list of locations, run `az account list-locations`. Use the *name* property (without spaces) of the returned result. This parameter is optional if your resource group already exists. | | AADAppDisplayName |The Azure Active Directory application display name. If an existing Azure AD application is not found, a new one with this name will be created. This parameter is optional if the Azure AD application already exists. |
- | AADAppIdentifierUri |The URI for the Azure AD app. If an existing Azure AD app is not found, a new one with this URI will be created. For example, `api://MyOrganizationImmersiveReaderAADApp`. Here we are using the default Azure AD URI scheme prefix of `api://` for compatibility with the [Azure AD policy of using verified domains](../../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains). |
- | AADAppClientSecret |A password you create that will be used later to authenticate when acquiring a token to launch the Immersive Reader. The password must be at least 16 characters long, contain at least 1 special character, and contain at least 1 numeric character. To manage Azure AD application client secrets after you've created this resource please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> `[AADAppDisplayName]` -> Certificates and Secrets blade -> Client Secrets section (as shown in the "Manage your Azure AD application secrets" screenshot below). |
- | AADAppClientSecretExpiration |The date or datetime after which your `[AADAppClientSecret]` will expire (e.g. '2020-12-31T11:59:59+00:00' or '2020-12-31'). |
+ | AADAppIdentifierUri |The URI for the Azure AD application. If an existing Azure AD application is not found, a new one with this URI will be created. For example, `api://MyOrganizationImmersiveReaderAADApp`. Here we are using the default Azure AD URI scheme prefix of `api://` for compatibility with the [Azure AD policy of using verified domains](../../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains). |
+ | AADAppClientSecretExpiration |The date or datetime after which your AAD Application Client Secret (password) will expire (e.g. '2020-12-31T11:59:59+00:00' or '2020-12-31'). This function will create a client secret for you. To manage Azure AD application client secrets after you've created this resource, please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) `[AADAppDisplayName]` -> Certificates and Secrets blade -> Client Secrets section (as shown in the "Manage your Azure AD application secrets" screenshot below).|
Manage your Azure AD application secrets
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md
Previously updated : 03/01/2022 Last updated : 06/17/2022
The Retention period for an Azure Arc-enabled SQL managed instance can be reconf
> [!WARNING] > If you reduce the current retention period, you lose the ability to restore to points in time older than the new retention period. Backups that are no longer needed to provide PITR within the new retention period are deleted. If you increase the current retention period, you do not immediately gain the ability to restore to older points in time within the new retention period. You gain that ability over time, as the system starts to retain backups for longer.
-### Change Retention period for **Direct** connected SQL managed instance
++
+The ```--retention-days``` property can be changed for an Azure Arc-enabled SQL Managed Instance as follows. The following command applies to both ```direct``` and ```indirect``` connected modes.
+ ```azurecli
-az sql mi-arc edit --name <SQLMI name> --custom-location dn-octbb-cl --resource-group dn-testdc --location eastus --retention-days 10
-#Example
-az sql mi-arc edit --name sqlmi --custom-location dn-octbb-cl --resource-group dn-testdc --location eastus --retention-days 10
+az sql mi-arc update --name <SQLMI name> --k8s-namespace <namespace> --use-k8s --retention-days <retentiondays>
```
-### Change Retention period for **Indirect** connected SQL managed instance
-
+For example:
```azurecli
-az sql mi-arc edit --name <SQLMI name> --k8s-namespace <namespace> --use-k8s --retention-days <retentiondays>
-#Example
-az sql mi-arc edit --name sqlmi --k8s-namespace arc --use-k8s --retention-days 10
+az sql mi-arc update --name sqlmi --k8s-namespace arc --use-k8s --retention-days 10
``` ## Disable Automatic backups
-You can disable the automated backups for a specific instance of Azure Arc-enabled SQL managed instance by setting the `--retention-days` property to 0, as follows.
+You can disable the built-in automated backups for a specific instance of Azure Arc-enabled SQL managed instance by setting the `--retention-days` property to 0, as follows. The following command applies to both ```direct``` and ```indirect``` modes.
> [!WARNING] > If you disable Automatic Backups for an Azure Arc-enabled SQL managed instance, then any Automatic Backups configured will be deleted and you lose the ability to do a point-in-time restore. You can change the `retention-days` property to re-initiate automatic backups if needed.
-### Disable Automatic backups for **Direct** connected SQL managed instance
```azurecli
-az sql mi-arc edit --name <SQLMI name> --custom-location dn-octbb-cl --resource-group dn-testdc --location eastus --retention-days 0
-#Example
-az sql mi-arc edit --name sqlmi --custom-location dn-octbb-cl --resource-group dn-testdc --location eastus --retention-days 0
+az sql mi-arc update --name <SQLMI name> --k8s-namespace <namespace> --use-k8s --retention-days 0
```
-### Disable Automatic backups for **Indirect** connected SQL managed instance
-
+For example:
```azurecli
-az sql mi-arc edit --name <SQLMI name> --k8s-namespace <namespace> --use-k8s --retention-days 0
-#Example
-az sql mi-arc edit --name sqlmi --k8s-namespace arc --use-k8s --retention-days 0
+az sql mi-arc update --name sqlmi --k8s-namespace arc --use-k8s --retention-days 0
``` ## Monitor backups
azure-fluid-relay Version Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/version-compatibility.md
npx install-peerdeps @fluidframework/azure-client
> supported with the General Availability of Azure Fluid Relay. With this upgrade, you’ll make use of our new multi-region routing capability where > Azure Fluid Relay will host your session closer to your end users to improve customer experience. In the latest package, you will need to update your > serviceConfig object to the new Azure Fluid Relay service endpoint instead of the storage and orderer endpoints:
-> If your Azure Fluid Relay resource is in West US 2, please use **https://us.fluidrelay.azure.com**. If it is West Europe,
-> use **https://eu.fluidrelay.azure.com**. If it is in Southeast Asia, use **https://global.fluidrelay.azure.com**.
+> If your Azure Fluid Relay resource is in West US 2, please use **`https://us.fluidrelay.azure.com`**. If it is West Europe,
+> use **`https://eu.fluidrelay.azure.com`**. If it is in Southeast Asia, use **`https://global.fluidrelay.azure.com`**.
> These values can also be found in the "Access Key" section of the Fluid Relay resource in the Azure portal. The orderer and storage endpoints will be deprecated soon.
azure-functions Durable Functions Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-error-handling.md
Title: Handling errors in Durable Functions - Azure description: Learn how to handle errors in the Durable Functions extension for Azure Functions. Previously updated : 05/09/2022 Last updated : 07/01/2022 ms.devlang: csharp, javascript, powershell, python, java
main = df.Orchestrator.create(orchestrator_function)
``` # [PowerShell](#tab/powershell)
+By default, cmdlets in PowerShell do not raise exceptions that can be caught using try/catch blocks. You have two options for changing this behavior:
+
+1. Use the `-ErrorAction Stop` flag when invoking cmdlets, such as `Invoke-DurableActivity`.
+2. Set the [`$ErrorActionPreference`](/powershell/module/microsoft.powershell.core/about/about_preference_variables#erroractionpreference) preference variable to `"Stop"` in the orchestrator function before invoking cmdlets.
+ ```powershell param($Context)+
+$ErrorActionPreference = "Stop"
+ $transferDetails = $Context.Input
Invoke-DurableActivity -FunctionName 'DebitAccount' -Input @{ account = $transferDetails.sourceAccount; amount = $transferDetails.amount }
try {
} ```
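As a minimal sketch of option 1, the same call can pass `-ErrorAction Stop` directly so that a failure in the activity surfaces as a catchable exception; the compensating activity name here is hypothetical:

```powershell
param($Context)

$transferDetails = $Context.Input

try {
    # -ErrorAction Stop turns the cmdlet's non-terminating error into a
    # terminating error that the catch block below can handle.
    Invoke-DurableActivity -FunctionName 'DebitAccount' -Input @{ account = $transferDetails.sourceAccount; amount = $transferDetails.amount } -ErrorAction Stop
}
catch {
    # Hypothetical compensation step; 'CreditAccount' is a placeholder activity name.
    Invoke-DurableActivity -FunctionName 'CreditAccount' -Input @{ account = $transferDetails.sourceAccount; amount = $transferDetails.amount } -ErrorAction Stop
    throw
}
```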
+For more information on error handling in PowerShell, see the [Try-Catch-Finally](/powershell/module/microsoft.powershell.core/about/about_try_catch_finally) PowerShell documentation.
+ # [Java](#tab/java) ```java
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Data is exported without a filter. For example, when you configure a data export
Log Analytics workspace data export continuously exports data that is sent to your Log Analytics workspace. There are other options to export data for particular scenarios:
- Configure Diagnostic Settings in Azure resources. Logs are sent to the destination directly and have lower latency compared to data export in Log Analytics.
- Scheduled export from a log query using a Logic App. This is similar to the data export feature, but allows you to export historical data from your workspace, using filters and aggregation. This method is subject to [log query limits](../service-limits.md#log-analytics-workspaces) and not intended for scale. See [Archive data from Log Analytics workspace to Azure Storage Account using Logic App](logs-export-logic-app.md).
+- Schedule export of data based on a log query you define with the [Log Analytics query API](/rest/api/loganalytics/dataaccess/query/execute). Use services such as Azure Data Factory, Azure Functions, or Azure Logic App to orchestrate queries in your workspace and export data to a destination. This is similar to the data export feature, but allows you to export historical data from your workspace, using filters and aggregation. This method is subject to [log query limits](../service-limits.md#log-analytics-workspaces) and not intended for scale. See [Archive data from Log Analytics workspace to Azure Storage Account using Logic App](logs-export-logic-app.md).
- One time export to local machine using PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
## Limitations
azure-monitor Tutorial Custom Logs Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-custom-logs-api.md
Ensure that you have the correct permissions for your application to the DCR. Yo
The message is too large. The maximum message size is currently 1MB per call.
### Script returns error code 429
-API limits have been exceeded. The limits are currently set to 500MB of data/minute for both compressed and uncompressed data, as well as 300,000 requests/minute. Retry after the duration listed in the `Retry-After` header in the response.
+API limits have been exceeded. Refer to [service limits for custom logs](../service-limits.md#custom-logs) for details on the current limits.
+
### Script returns error code 503
Ensure that you have the correct permissions for your application to the DCR. You may also need to wait up to 30 minutes for permissions to propagate.
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
There are many options for teams to build and deploy cloud native and containeri
- [Azure Container Instances](#azure-container-instances)
- [Azure Kubernetes Service](#azure-kubernetes-service)
- [Azure Functions](#azure-functions)
-- [Azure Spring Cloud](#azure-spring-cloud)
+- [Azure Spring Cloud](#azure-spring-apps)
- [Azure Red Hat OpenShift](#azure-red-hat-openshift)
There's no perfect solution for every use case and every team. The following explanation provides general guidance and recommendations as a starting point to help find the best fit for your team and your requirements.
cosmos-db Materialized Views Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/materialized-views-cassandra.md
+
+ Title: Materialized Views for Azure Cosmos DB API for Cassandra (Preview)
+description: This documentation is provided as a resource for participants in the preview of Azure Cosmos DB Cassandra API Materialized View.
++++ Last updated : 01/06/2022+++
+# Enable materialized views for Azure Cosmos DB API for Cassandra operations (Preview)
+
+> [!IMPORTANT]
+> Materialized Views for Azure Cosmos DB API for Cassandra is currently in gated preview. Please send an email to mv-preview@microsoft.com to try this feature.
+> Materialized View preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Feature overview
+
+A materialized view, once defined, provides a means to efficiently query a base table (a container in Cosmos DB) by using filters that aren't on the primary key. When users write to the base table, the materialized view is built automatically in the background. The view can have a different primary key for lookups, contains only the columns that are projected from the base table, and is read-only.
+
+You can query a column store without specifying a partition key by using secondary indexes. However, such queries aren't effective for columns with high cardinality (they scan through all the data for a small result set) or for columns with low cardinality. These queries are expensive because they run as cross-partition queries.
+
+With materialized views, you can:
+- Use them as global secondary indexes and avoid expensive cross-partition scans
+- Provide a SQL-based conditional predicate to populate only certain columns and only the data that meets the precondition
+- Build real-time materialized views that simplify event-based scenarios where customers today use a change feed trigger for precondition checks to populate new collections
+
+## Main benefits
+
+- With Materialized View (Server side denormalization), you can avoid multiple independent tables and client side denormalization.
+- Materialized view feature takes on the responsibility of updating views in order to keep them consistent with the base table. With this feature, you can avoid dual writes to the base table and the view.
+- Materialized views help optimize read performance
+- Ability to specify throughput for the materialized view independently
+- Based on the requirements to hydrate the view, you can configure the MV builder layer appropriately.
+- Write operations are faster because data needs to be written only to the base table.
+- Additionally, this implementation on Cosmos DB is based on a pull model, which doesn't affect writer performance.
+++
+## How to get started?
+
+New Cassandra API accounts with Materialized Views enabled can be provisioned on your subscription by using REST API calls from az CLI.
+
+### Log in to the Azure command line interface
+
+Install Azure CLI as mentioned at [How to install the Azure CLI | Microsoft Docs](https://docs.microsoft.com/cli/azure/install-azure-cli) and log on using the below:
+ ```azurecli-interactive
+ az login
+ ```
+
+### Create an account
+
+To create an account with support for customer managed keys and materialized views, skip to the **Create an account with support for customer managed keys and materialized views** section later in this article.
+
+To create an account, use the following command after creating body.txt with the below content, replacing {{subscriptionId}} with your subscription ID, {{resourceGroup}} with a resource group name that you should have created in advance, and {{accountName}} with a name for your Cassandra API account.
+
+ ```azurecli-interactive
+ az rest --method PUT --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-11-15-preview --body @body.txt
+ body.txt content:
+ {
+ "location": "East US",
+ "properties":
+ {
+ "databaseAccountOfferType": "Standard",
+ "locations": [ { "locationName": "East US" } ],
+ "capabilities": [ { "name": "EnableCassandra" }, { "name": "CassandraEnableMaterializedViews" }],
+ "enableMaterializedViews": true
+ }
+ }
+ ```
+
+ Wait for a few minutes, and then check for completion by using the following command. The provisioningState in the output should have changed to Succeeded:
+ ```
+ az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-11-15-preview
+ ```
+### Create an account with support for customer managed keys and materialized views
+
+This step is optional – you can skip it if you don't want to use customer-managed keys for your Cosmos DB account.
+
+To use the customer-managed keys feature and materialized views together on a Cosmos DB account, you must first configure managed identities with Azure Active Directory for your account and then enable support for materialized views.
+
+You can use the documentation [here](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-cmk) to configure your Cosmos DB Cassandra account with customer managed keys and set up managed identity access to the key vault. Make sure you follow all the steps in [Using a managed identity in Azure key vault access policy](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-managed-identity). The next step is to enable materialized views on the account.
+
+Once your account is set up with CMK and managed identity, you can enable materialized views on the account by enabling the “enableMaterializedViews” property in the request body.
+
+ ```azurecli-interactive
+ az rest --method PATCH --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview --body @body.txt
++
+body.txt content:
+{
+ "properties":
+ {
+ "enableMaterializedViews": true
+ }
+}
+ ```
++
+ Wait a few minutes, and then check the status of the operation by using the following command. The provisioningState in the output should be Succeeded:
+ ```
+az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview
+```
+
+Perform another PATCH request to set the `CassandraEnableMaterializedViews` capability, and wait for it to succeed:
+
+```
+az rest --method PATCH --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}?api-version=2021-07-01-preview --body @body.txt
+
+body.txt content:
+{
+ "properties":
+ {
+ "capabilities":
+[{"name":"EnableCassandra"},
+ {"name":"CassandraEnableMaterializedViews"}]
+ }
+}
+```
+
+### Create materialized view builder
+
+Following this step, you'll also need to provision a Materialized View Builder:
+
+```
+az rest --method PUT --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview --body @body.txt
+
+body.txt content:
+{
+ "properties":
+ {
+ "serviceType": "materializedViewsBuilder",
+ "instanceCount": 1,
+ "instanceSize": "Cosmos.D4s"
+ }
+}
+```
+
+Wait a couple of minutes, and then check the status by using the following command. The status in the output should be Running:
+
+```
+az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview
+```
+
+## Caveats and current limitations
+
+Once your account and materialized view builder are set up, you can create materialized views by using the CQL syntax described in the [Apache Cassandra documentation](https://cassandra.apache.org/doc/latest/cql/mvs.html), as sketched below.
+
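+For illustration only, the following sketch assumes a hypothetical `uprofile` keyspace and a `user` base table created after the account was onboarded. It connects with `cqlsh` and defines a view keyed by a different partition key; the account name, key, and table names are placeholders:
+
+```
+# Placeholder credentials; the TLS settings below are typically required for the Cassandra API.
+export SSL_VERSION=TLSv1_2
+export SSL_VALIDATE=false
+cqlsh <accountName>.cassandra.cosmos.azure.com 10350 -u <accountName> -p <accountKey> --ssl -e "
+  CREATE MATERIALIZED VIEW uprofile.user_by_city AS
+    SELECT * FROM uprofile.user
+    WHERE user_city IS NOT NULL AND user_id IS NOT NULL
+    PRIMARY KEY (user_city, user_id);"
+```
+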
+However, there are a few caveats with the Cosmos DB Cassandra API's preview implementation of materialized views:
+- Materialized views can't be created on a table that existed before the account was onboarded to support materialized views. After the account is onboarded, create a new table on which materialized views can be defined.
+- For the materialized view definition's WHERE clause, only `IS NOT NULL` filters are currently allowed.
+- After a materialized view is created against a base table, `ALTER TABLE ADD` operations aren't allowed on the base table's schema. They're allowed only if none of the materialized views have `SELECT *` in their definition.
+
+In addition to the above, note the following limitations:
+
+### Availability zones limitations
+
+- Materialized views can't be enabled on an account that has availability zone-enabled regions.
+- Adding a new region with availability zones isn't supported once `enableMaterializedViews` is set to true on the account.
+
+### Periodic backup and restore limitations
+
+Materialized views aren't automatically restored with the restore process. You need to re-create the materialized views after the restore process is complete. Before creating the materialized views, enable `enableMaterializedViews` on the restored account and provision the builders so the materialized views can be built.
+
+Other limitations are similar to **open-source Apache Cassandra** behavior:
+
+- Defining a conflict resolution policy on materialized views isn't allowed.
+- Write operations from the client aren't allowed on materialized views.
+- Cross-document queries and the use of aggregate functions aren't supported on materialized views.
+- Modifying `MaterializedViewDefinitionString` after the materialized view is created isn't supported.
+- Deleting the base table isn't allowed if at least one materialized view is defined on it. All the materialized views must be deleted first, and then the base table can be deleted.
+- Defining materialized views on containers with static columns isn't allowed.
+
+## Under the hood
+
+Azure Cosmos DB Cassandra API uses a materialized view builder compute layer to maintain the views. You have the flexibility to configure the view builder compute instances depending on the latency and lag requirements to hydrate the views. The compute containers are shared among all materialized views within the database account. Each provisioned compute container spawns multiple tasks that read the change feed from the base table partitions and, for every materialized view in the database account, write the transformed data to the view (which is just another table) per the view's definition.
+
+## Frequently asked questions (FAQs)
++
+### What transformations/actions are supported?
+
+- Specifying a partition key that is different from the base table partition key.
+- Support for projecting a selected subset of columns from the base table.
+- Determining whether a row from the base table can be part of the materialized view, based on conditions evaluated on the primary key columns of the base table row. Supported filters: equalities, inequalities, and contains. (Planned for GA)
+
+### What consistency levels will be supported?
+
+Data in a materialized view is eventually consistent. Users might read stale rows when compared to the data on the base table, because some operations are redone on the views. This behavior is acceptable since only eventual consistency is guaranteed on the materialized view. You can scale the view builder layer up or down depending on the latency requirement for the view to be consistent with the base table.
+
+### Will there be an autoscale layer for the MV builder instances?
+
+Autoscaling for the materialized view builder isn't available right now. The builder instances can be manually scaled by modifying the instance count (scale out) or the instance size (scale up).
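+
+As an illustration only, the following sketch reuses the placeholder names from earlier in this article and updates the builder to three `Cosmos.D8s` instances; check the pricing page for the instance sizes available to you:
+
+```
+az rest --method PUT --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview --body @body.txt
+
+body.txt content:
+{
+  "properties":
+  {
+    "serviceType": "materializedViewsBuilder",
+    "instanceCount": 3,
+    "instanceSize": "Cosmos.D8s"
+  }
+}
+```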
+
+### Details on the billing model
+
+The proposed billing model is to charge customers for:
+
+**Materialized view builder compute nodes**: The view builder compute, a single-tenant layer.
+
+**Storage**: The OLTP storage of the base table and materialized views, based on the existing storage meter for containers. LogStore won't be charged.
+
+**Request units**: The provisioned RUs for the base container and the materialized views.
+
+### What are the different SKUs that will be available?
+Refer to [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/) and check the instances listed under Dedicated Gateway.
+
+### What type of TTL support do we have?
+
+Setting a table-level TTL on a materialized view isn't allowed. The TTL from base table rows is applied to the materialized view as well.
++
+### Initial troubleshooting if materialized views aren't up to date
+- Check whether the materialized view builder instances are provisioned.
+- Check whether enough RUs are provisioned on the base table.
+- Check for unavailability on the base table or the materialized view.
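+
+For example, the first two checks can be scripted as follows. The placeholder names are the same as earlier in this article, the keyspace and table names are hypothetical, and the throughput command applies only if the base table has dedicated throughput:
+
+```
+# Check that the materialized view builder is provisioned (status should be Running)
+az rest --method GET --uri https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resourceGroup}}/providers/Microsoft.DocumentDb/databaseAccounts/{{accountName}}/services/materializedViewsBuilder?api-version=2021-07-01-preview
+
+# Check the RUs provisioned on the base table
+az cosmosdb cassandra table throughput show --account-name {{accountName}} --resource-group {{resourceGroup}} --keyspace-name uprofile --name user
+```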
+
+### What type of monitoring is available in addition to the existing monitoring for Cassandra API?
+
+- **Max Materialized View Catchup Gap in Minutes**: a value *t* indicates that rows written to the base table in the last *t* minutes are yet to be propagated to the materialized view.
+- Metrics related to RUs consumed on the base table for the view build (change feed read cost).
+- Metrics related to RUs consumed on the materialized view for the view build (write cost).
+- Metrics related to resource consumption on the materialized view builders (CPU and memory usage).
++
+### What are the restore options available for MVs?
+Materialized views can't be restored. They need to be re-created after the base table is restored.
+
+### Can you create more than one view on a base table?
+
+Multiple views can be created on the same base table. A limit of five views is enforced.
+
+### How is uniqueness enforced on the materialized view? What does the mapping between records in the base table and records in the materialized view look like?
+
+The partition and clustering keys of the base table are always part of the primary key of any materialized view defined on it, and they enforce the uniqueness of the primary key after the data is repartitioned.
+
+### Can we add or remove columns on the base table once a materialized view is defined?
+
+You can add a column to the base table, but you can't remove one. After a materialized view is created against a base table, `ALTER TABLE ADD` operations are allowed on the base table only if none of the views have `SELECT *` in their definition. Cassandra doesn't support dropping columns on a base table that has a materialized view defined on it.
+
+### Can we create a materialized view on an existing base table?
+
+No. Materialized views can't be created on a table that existed before the account was onboarded to support materialized views. After the account is onboarded, create a new table on which materialized views can be defined. Support for materialized views on existing tables is planned for the future.
+
+### Under what conditions can records fail to make it to the materialized view, and how can such records be identified?
+
+Below are some of the identified cases where data from the base table can't be written to the materialized view because it violates constraints on the view table:
+- Rows that don't satisfy the partition key size limit in the materialized view.
+- Rows that don't satisfy the clustering key size limit in the materialized view.
+
+Currently these rows are dropped. We plan to expose details related to dropped rows in the future so that you can reconcile the missing data.
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
Property | Description | Allowed values | Required
-- | -- | -- | --
name | Name of the web activity | String | Yes
type | Must be set to **WebActivity**. | String | Yes
-method | REST API method for the target endpoint. | String. <br/><br/>Supported Types: "GET", "POST", "PUT" | Yes
+method | REST API method for the target endpoint. | String. <br/><br/>Supported Types: "GET", "POST", "PUT", "PATCH", "DELETE" | Yes
url | Target endpoint and path | String (or expression with resultType of string). The activity will timeout at 1 minute with an error if it does not receive a response from the endpoint. You can increase this response timeout up to 10 mins by updating the httpRequestTimeout property | Yes
httpRequestTimeout | Response timeout duration | hh:mm:ss with the max value as 00:10:00. If not explicitly specified defaults to 00:01:00 | No
-headers | Headers that are sent to the request. For example, to set the language and type on a request: `"headers" : { "Accept-Language": "en-us", "Content-Type": "application/json" }`. | String (or expression with resultType of string) | Yes, Content-type header is required. `"headers":{ "Content-Type":"application/json"}`
-body | Represents the payload that is sent to the endpoint. | String (or expression with resultType of string). <br/><br/>See the schema of the request payload in [Request payload schema](#request-payload-schema) section. | Required for POST/PUT methods.
-authentication | Authentication method used for calling the endpoint. Supported Types are "Basic, or ClientCertificate." For more information, see [Authentication](#authentication) section. If authentication is not required, exclude this property. | String (or expression with resultType of string) | No
+headers | Headers that are sent to the request. For example, to set the language and type on a request: `"headers" : { "Accept-Language": "en-us", "Content-Type": "application/json" }`. | String (or expression with resultType of string) | No
+body | Represents the payload that is sent to the endpoint. | String (or expression with resultType of string). <br/><br/>See the schema of the request payload in [Request payload schema](#request-payload-schema) section. | Required for POST/PUT/PATCH methods.
+authentication | Authentication method used for calling the endpoint. Supported Types are "Basic, Client Certificate, System-assigned Managed Identity, User-assigned Managed Identity, Service Principal." For more information, see [Authentication](#authentication) section. If authentication is not required, exclude this property. | String (or expression with resultType of string) | No
+turnOffAsync | Option to disable invoking HTTP GET on the location field in the response header of an HTTP 202 response. If set to true, it stops invoking HTTP GET on the location given in the response header. If set to false, it continues to invoke the HTTP GET call on the location given in the HTTP response headers. | Allowed values are false (default) and true. | No
+disableCertValidation | Removes server-side certificate validation (not recommended unless you are connecting to a trusted server that does not use a standard CA cert). | Allowed values are false (default) and true. | No
datasets | List of datasets passed to the endpoint. | Array of dataset references. Can be an empty array. | Yes
linkedServices | List of linked services passed to endpoint. | Array of linked service references. Can be an empty array. | Yes
connectVia | The [integration runtime](./concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | The integration runtime reference. | No
frontdoor Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-template.md
+
+ Title: 'Quickstart: Create an Azure Front Door Standard/Premium by using an Azure Resource Manager template (ARM template)'
+description: This quickstart describes how to create an Azure Front Door Standard/Premium by using Azure Resource Manager template (ARM template).
+
+documentationcenter:
++
+editor:
Last updated : 07/12/2022+++
+ na
+
+#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
++
+# Quickstart: Create a Front Door Standard/Premium using an ARM template
+
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure Front Door Standard/Premium with a Web App as the origin.
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.cdn%2Ffront-door-standard-premium-app-service-public%2Fazuredeploy.json)
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* IP or FQDN of a website or web application.
+
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/front-door-standard-premium-app-service-public/).
+
+In this quickstart, you'll create a Front Door Standard/Premium and an App Service, and configure the App Service to validate that traffic comes through the Front Door origin.
++
+One Azure resource is defined in the template:
+
+* [**Microsoft.Network/frontDoors**](/azure/templates/microsoft.network/frontDoors)
+
+## Deploy the template
+
+1. Select **Try it** from the following code block to open Azure Cloud Shell, and then follow the instructions to sign in to Azure.
+
+> [!NOTE]
+> If you want to deploy Azure Front Door Premium instead of Standard, substitute the value of the sku parameter with `Premium_AzureFrontDoor`. For a detailed comparison, see [Azure Front Door tier comparison](standard-premium/tier-comparison.md).
++
+```azurepowershell-interactive
+$projectName = Read-Host -Prompt "Enter a project name that is used for generating resource names"
+$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
+$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.cdn/front-door-standard-premium-app-service-public/azuredeploy.json"
+
+$resourceGroupName = "${projectName}rg"
+
+New-AzResourceGroup -Name $resourceGroupName -Location "$location"
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri -frontDoorSkuName Standard_AzureFrontDoor
+
+Read-Host -Prompt "Press [ENTER] to continue ..."
+```
+
+Wait until you see the prompt from the console.
+
+2. Select **Copy** from the previous code block to copy the PowerShell script.
+
+3. Right-click the shell console pane and then select **Paste**.
+
+4. Enter the values.
+
+    The template deployment creates a Front Door with a web app as the origin.
+
+ The resource group name is the project name with **rg** appended.
+
+ > [!NOTE]
+ > **frontDoorName** needs to be a globally unique name in order for the template to deploy successfully. If deployment fails, start over with Step 1.
+
+ It takes a few minutes to deploy the template. When completed, the output is similar to:
+
+ :::image type="content" source="./media/quickstart-create-front-door-template/front-door-standard-premium-template-deployment-powershell-output.png" alt-text="Front Door Resource Manager template PowerShell deployment output":::
+
+Azure PowerShell is used to deploy the template. In addition to Azure PowerShell, you can also use the Azure portal, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-portal.md).
+
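+For illustration, an equivalent deployment with the Azure CLI might look like the following sketch. The project name and location are placeholders, and the `frontDoorSkuName` parameter name is taken from the PowerShell example above:
+
+```azurecli-interactive
+projectName="myfrontdoordemo"    # placeholder project name used to build resource names
+location="centralus"
+templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.cdn/front-door-standard-premium-app-service-public/azuredeploy.json"
+
+az group create --name "${projectName}rg" --location "$location"
+az deployment group create --resource-group "${projectName}rg" --template-uri "$templateUri" --parameters frontDoorSkuName=Standard_AzureFrontDoor
+```
+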
+## Validate the deployment
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Select **Resource groups** from the left pane.
+
+3. Select the resource group that you created in the previous section. The default resource group name is the project name with **rg** appended.
+
+4. Select the Front Door you created previously, and you'll be able to see the endpoint hostname. Copy the hostname and paste it into the address bar of a browser. Press Enter, and your request will automatically be routed to the web app.
+
+ :::image type="content" source="./media/create-front-door-portal/front-door-web-app-origin-success.png" alt-text="Screenshot of the message: Your web app is running and waiting for your content.":::
+++
+## Clean up resources
+
+When you no longer need the Front Door service, delete the resource group. This will remove the Front Door and all the related resources.
+
+To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name <your resource group name>
+```
+
+## Next steps
+
+In this quickstart, you created a Front Door.
+
+To learn how to add a custom domain to your Front Door, continue to the Front Door tutorials.
+
+> [!div class="nextstepaction"]
+> [Front Door tutorials](front-door-custom-domain.md)
iot-hub-device-update Device Update Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-overview.md
-# Device Update for IoT Hub Agent Overview
+# Device Update for IoT Hub agent overview
-The Device Update Agent consists of two conceptual layers:
+The Device Update agent consists of two conceptual layers:
-* The Interface Layer builds on top of [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md)
-allowing for messaging to flow between the Device Update Agent and Device Update Services.
-* The Platform Layer is responsible for the high-level update actions of Download, Install, and Apply that may be platform, or device specific.
+* The *interface layer* builds on top of [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), allowing for messaging to flow between the Device Update agent and Device Update service.
+* The *platform layer* is responsible for the high-level update actions of download, install, and apply that may be platform- or device-specific.
:::image type="content" source="media/understand-device-update/client-agent-reference-implementations.png" alt-text="Agent Implementations." lightbox="media/understand-device-update/client-agent-reference-implementations.png":::
-## The Interface Layer
+## The interface layer
-The Interface layer is made up of the [Device Update Core Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/adu_core_interface) and the [Device Information Interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/device_info_interface).
+The interface layer is made up of the [Device Update core interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/adu_core_interface) and the [Device information interface](https://github.com/Azure/iot-hub-device-update/tree/main/src/agent/device_info_interface).
-These interfaces rely on a configuration file for the device specific values that need to be reported to the Device Update services. [Learn More](device-update-configuration-file.md) about the configuration file.
+These interfaces rely on a configuration file for the device specific values that need to be reported to the Device Update services. For more information, see [Device Update configuration file](device-update-configuration-file.md).
-### Device Update Core Interface
+### Device Update core interface
-The 'DeviceUpdate Core' interface is the primary communication channel between Device Update Agent and Services. [Learn More](device-update-plug-and-play.md#device-update-core-interface) about this interface.
+The *Device Update core interface* is the primary communication channel between the Device Update agent and services. For more information, see [Device Update core interface](device-update-plug-and-play.md#device-update-core-interface).
-### Device Information Interface
+### Device information interface
-The Device Information Interface is used to implement the `Azure IoT PnP DeviceInformation` interface. [Learn More](device-update-plug-and-play.md#device-information-interface) about this interface.
+The *device information interface* is used to implement the `Azure IoT PnP DeviceInformation` interface. For more information, see [Device information interface](device-update-plug-and-play.md#device-information-interface).
-## The Platform Layer
+## The platform layer
-The Linux Platform Layer integrates with [Delivery Optimization](https://github.com/microsoft/do-client) for
-downloads and is used in our Raspberry Pi reference image, and all clients that run on Linux systems.
+The Linux *platform layer* integrates with [Delivery Optimization](https://github.com/microsoft/do-client) for downloads and is used in our Raspberry Pi reference image, and all clients that run on Linux systems.
-### Linux Platform Layer
+The Linux platform layer implementation can be found in the `src/platform_layers/linux_platform_layer` and it integrates with the [Delivery Optimization client](https://github.com/microsoft/do-client/releases) for downloads.
-The Linux Platform Layer implementation can be found in the
-`src/platform_layers/linux_platform_layer` and it integrates with the [Delivery Optimization Client](https://github.com/microsoft/do-client/releases) for downloads.
+This layer can integrate with different update handlers to implement the
+installers, such as the `SWUpdate`, `Apt`, and `Script` update handlers.
-This layer can integrate with different Update Handlers to implement the
-installers. For instance, the `SWUpdate` update handler, 'Apt' update handler, and 'Script' update handler.
+## Update handlers
-## Update Handlers
+Update handlers are used to invoke installers or commands to do an over-the-air update. You can either use [existing update content handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or [implement a custom content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md) that can invoke any installer and execute the over-the-air update needed for your use case.
-Update Handlers used to invoke installers or commands to do an over-the-air update. You can either use [existing update content handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or [implement a custom Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md) which can invoke any installer and execute the over-the-air update needed for your use case.
+## Updating to latest Device Update agent
-## Updating to latest Device update agent
+We have added many new capabilities to the Device Update agent in the latest public preview refresh agent (version 0.8.0). For more information, see the [list of new capabilities](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/whats-new.md).
-We have added many new capabilities to the Device Update agent in the latest Public Preview Refresh agent (version 0.8.0). See [list of new capabilities](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/whats-new.md) for details.
+If you're using the Device Update agent versions 0.6.0 or 0.7.0, please migrate to the latest agent version 0.8.0. For more information, see [Migrate devices and groups to public preview refresh](migration-pp-to-ppr.md).
-If you are using the Device Update agent versions 0.6.0 or 0.7.0 please migrate to the latest agent version 0.8.0. See [Public Preview Refresh agent for changes and how to upgrade](migration-pp-to-ppr.md)
-
-You can check installed version of the Device Update agent and the Delivery Optimization agent in the Device Properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). [Learn more about device properties under Device Update Core Interface](device-update-plug-and-play.md#device-properties).
+You can check the installed version of the Device Update agent and the Delivery Optimization agent in the device properties section of your [IoT device twin](../iot-hub/iot-hub-devguide-device-twins.md). For more information, see [device properties of the Device Update core interface](device-update-plug-and-play.md#device-properties).
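For example, assuming the Azure CLI with the `azure-iot` extension is installed, a sketch like the following retrieves the reported properties of the device twin so you can inspect them (the hub and device names are placeholders):

```azurecli-interactive
# Requires the IoT extension: az extension add --name azure-iot
az iot hub device-twin show --hub-name <your-iot-hub-name> --device-id <your-device-id> --query properties.reported
```
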
## Next Steps

[Understand Device Update agent configuration file](device-update-configuration-file.md)

You can use the following tutorials for a simple demonstration of Device Update for IoT Hub:

-- [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build you own images for other architecture as needed.
-
-- [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
-
-- [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
-
-- [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
+* [Image Update: Getting Started with Raspberry Pi 3 B+ Reference Yocto Image](device-update-raspberry-pi.md) extensible via open source to build your own images for other architecture as needed.
+
+* [Package Update: Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md)
+
+* [Proxy Update: Getting Started using Device Update binary agent for downstream devices](device-update-howto-proxy-updates.md)
+
+* [Getting Started Using Ubuntu (18.04 x64) Simulator Reference Agent](device-update-simulator.md)
-- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
+* [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
iot-hub-device-update Device Update Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-networking.md
-# Ports Used With Device Update for IoT Hub
-ADU uses a variety of network ports for different purposes.
+# Ports used with Device Update for IoT Hub
+
+Device Update for IoT Hub uses a variety of network ports for different purposes.
## Default Ports

Purpose|Port Number
---|---
-Download from Networks/CDNs | 80 (HTTP Protocol)
-Download from MCC/CDNs | 80 (HTTP Protocol)
-ADU Agent Connection to Azure IoT Hub | 8883 (MQTT Protocol)
+Download from networks/CDNs | 80 (HTTP protocol)
+Download from MCC/CDNs | 80 (HTTP protocol)
+Device Update agent connection to IoT Hub | 8883 (MQTT protocol)
+
+## Use IoT Hub supported protocols
-## Use Azure IoT Hub supported protocols
-The ADU agent can be modified to use any of the supported Azure IoT Hub protocols.
+The Device Update agent can be modified to use any of the supported Azure IoT Hub protocols.
-[Learn more](../iot-hub/iot-hub-devguide-protocols.md) about the current list of supported protocols.
+For more information, see [Choose a device communication protocol](../iot-hub/iot-hub-devguide-protocols.md).
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
If you're an owner of a workspace, you can add and remove roles for the workspac
You can use Azure AD security groups to manage access to workspaces. This approach has following benefits: * Team or project leaders can manage user access to workspace as security group owners, without needing Owner role on the workspace resource directly. * You can organize, manage and revoke users' permissions on workspace and other resources as a group, without having to manage permissions on user-by-user basis.
- * Using Azure AD groups helps you to avoid reaching the [subscription limit](https://docs.microsoft.com/azure/role-based-access-control/troubleshooting#azure-role-assignments-limit) on role assignments.
+ * Using Azure AD groups helps you to avoid reaching the [subscription limit](../role-based-access-control/troubleshooting.md#azure-role-assignments-limit) on role assignments.
To use Azure AD security groups (an Azure CLI sketch of these steps follows the list):
- 1. [Create a security group](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-groups-create-azure-portal).
- 2. [Add a group owner](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners). This user has permissions to add or remove group members. Note that the group owner is not required to be group member, or have direct RBAC role on the workspace.
+ 1. [Create a security group](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+ 2. [Add a group owner](../active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners.md). This user has permissions to add or remove group members. Note that the group owner is not required to be group member, or have direct RBAC role on the workspace.
3. Assign the group an RBAC role on the workspace, such as AzureML Data Scientist, Reader or Contributor.
- 4. [Add group members](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-groups-members-azure-portal). The members consequently gain access to the workspace.
+ 4. [Add group members](../active-directory/fundamentals/active-directory-groups-members-azure-portal.md). The members consequently gain access to the workspace.
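
The following Azure CLI sketch illustrates these steps. The group name, object IDs, and workspace path are placeholders for illustration only:

```azurecli-interactive
# 1. Create the security group
az ad group create --display-name "ml-project-team" --mail-nickname "ml-project-team"

# 2. Add a group owner (by object ID)
az ad group owner add --group "ml-project-team" --owner-object-id <owner-object-id>

# 3. Assign the group a role on the workspace
az role assignment create --assignee <group-object-id> --role "AzureML Data Scientist" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>

# 4. Add group members
az ad group member add --group "ml-project-team" --member-id <member-object-id>
```
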
## Create custom role
network-function-manager Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/requirements.md
The Azure Network Function Manager service consists of Network Function Manager
## <a name="port-firewall"></a>Port requirements and firewall rules
-Network Function Manager (NFM) services running on the Azure Stack Edge require outbound connectivity to the NFM cloud service for management traffic to deploy network functions. NFM is fully integrated with the Azure Stack Edge service. Review the networking port requirements and firewall rules for the [Azure Stack Edge](../databox-online/azure-stack-edge-gpu-system-requirements.md#networking-port-requirements) device.
+Network Function Manager (NFM) services running on the Azure Stack Edge require outbound connectivity to the NFM cloud service for management traffic to deploy network functions. NFM is fully integrated with the Azure Stack Edge service. Review the networking port requirements and firewall rules for the [Azure Stack Edge](../databox-online/azure-stack-edge-gpu-system-requirements.md#networking-port-requirements) device.
+
+Your firewall rules must allow outbound HTTPS connections to:
+
+* *.blob.storage.azure.net
+* *.mecdevice.azure.com
Network Function partners will have different requirements for firewall and port configuration rules to manage traffic to the partner management portal. Check with your network function partner for specific requirements.
security Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/double-encryption.md
na Previously updated : 12/28/2021 Last updated : 07/01/2022 # Double encryption
Azure provides double encryption for data at rest and data in transit.
## Data at rest MicrosoftΓÇÖs approach to enabling two layers of encryption for data at rest is: -- **Disk encryption using customer-managed keys**. You provide your own key for disk encryption. You can bring your own keys to your Key Vault (BYOK ΓÇô Bring Your Own Key), or generate new keys in Azure Key Vault to encrypt the desired resources.-- **Infrastructure encryption using platform-managed keys**. By default, disks are automatically encrypted at rest using platform-managed encryption keys.
+- **Encryption at rest using customer-managed keys**. You provide your own key for data encryption at rest. You can bring your own keys to your Key Vault (BYOK – Bring Your Own Key), or generate new keys in Azure Key Vault to encrypt the desired resources.
+- **Infrastructure encryption using platform-managed keys**. By default, data is automatically encrypted at rest using platform-managed encryption keys.
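
As an illustration only, the following sketch enables the second, platform-managed layer (infrastructure encryption) when creating a storage account; configuring the first layer with customer-managed keys is a separate step, and all names here are placeholders:

```azurecli-interactive
# Placeholder names; --require-infrastructure-encryption adds a second layer of encryption at rest.
az storage account create \
  --name <storage-account-name> \
  --resource-group <resource-group> \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2 \
  --require-infrastructure-encryption true
```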
## Data in transit

Microsoft's approach to enabling two layers of encryption for data in transit is:
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md
Title: Create a codeless connector for Microsoft Sentinel description: Learn how to create a codeless connector in Microsoft Sentinel using the Codeless Connector Platform (CCP).--++ Previously updated : 01/24/2022 Last updated : 06/30/2022 # Create a codeless connector for Microsoft Sentinel (Public preview)
This section describes the configuration for how data is polled from your data s
The following code shows the syntax of the `pollingConfig` section of the [CCP configuration](#create-a-connector-json-configuration-file) file.
-```rest
+```json
"pollingConfig": { auth": { "authType": <string>,
The following code shows the syntax of the `pollingConfig` section of the [CCP c
The `pollingConfig` section includes the following properties:
-|Name |Type |Description |
-||||
-|**id** | String | Mandatory. Defines a unique identifier for a rule or configuration entry, using one of the following values: <br><br>- A GUID (recommended) <br>- A document ID, if the data source resides in a Cosmos DB |
-|**auth** | String | Describes the authentication properties for polling the data. For more information, see [auth configuration](#auth-configuration). |
-|<a name="authtype"></a>**auth.authType** | String | Mandatory. Defines the type of authentication, nested inside the `auth` object, as one of the following values: `Basic`, `APIKey`, `Session` |
-|**request** | Nested JSON | Mandatory. Describes the request payload for polling the data, such as the API endpoint. For more information, see [request configuration](#request-configuration). |
-|**response** | Nested JSON | Mandatory. Describes the response object and nested message returned from the API when polling the data. For more information, see [response configuration](#response-configuration). |
-|**paging** | Nested JSON. | Optional. Describes the pagination payload when polling the data. For more information, see [paging configuration](#paging-configuration). |
-
+| Name | Type | Description |
+| | -- | |
+| **id** | String | Mandatory. Defines a unique identifier for a rule or configuration entry, using one of the following values: <br><br>- A GUID (recommended) <br>- A document ID, if the data source resides in a Cosmos DB |
+| **auth** | String | Describes the authentication properties for polling the data. For more information, see [auth configuration](#auth-configuration). |
+| <a name="authtype"></a>**auth.authType** | String | Mandatory. Defines the type of authentication, nested inside the `auth` object, as one of the following values: `Basic`, `APIKey`, `OAuth2`, `Session`, `CiscoDuo` |
+| **request** | Nested JSON | Mandatory. Describes the request payload for polling the data, such as the API endpoint. For more information, see [request configuration](#request-configuration). |
+| **response** | Nested JSON | Mandatory. Describes the response object and nested message returned from the API when polling the data. For more information, see [response configuration](#response-configuration). |
+| **paging** | Nested JSON | Optional. Describes the pagination payload when polling the data. For more information, see [paging configuration](#paging-configuration). |
For more information, see [Sample pollingConfig code](#sample-pollingconfig-code).
For more information, see [Sample pollingConfig code](#sample-pollingconfig-code
The `auth` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters, depending on the type defined in the [authType](#authtype) element:
+#### Basic authType parameters
+
+| Name | Type | Description |
+| - | - | -- |
+| **Username** | String | Mandatory. Defines user name. |
+| **Password** | String | Mandatory. Defines user password. |
+ #### APIKey authType parameters
-|Name |Type |Description |
-||||
+| Name | Type | Description |
+| - | - | -- |
|**APIKeyName** |String | Optional. Defines the name of your API key, as one of the following values: <br><br>- `XAuthToken` <br>- `Authorization` |
|**IsAPIKeyInPostPayload** |Boolean | Determines where your API key is defined. <br><br>True: API key is defined in the POST request payload <br>False: API key is defined in the header |
|**APIKeyIdentifier** | String | Optional. Defines the name of the identifier for the API key. <br><br>For example, where the authorization is defined as `"Authorization": "token <secret>"`, this parameter is defined as: `{APIKeyIdentifier: "token"})` |
+#### OAuth2 authType parameters
+
+The Codeless Connector Platform supports OAuth 2.0 authorization code grant.
+
+The Authorization Code grant type is used by confidential and public clients to exchange an authorization code for an access token.
+
+After the user returns to the client via the redirect URL, the application will get the authorization code from the URL and use it to request an access token.
++
+| Name | Type | Description |
+| - | - | -- |
+| **FlowName** | String | Mandatory. Defines an OAuth2 flow.<br><br>Supported values:<br>- `AuthCode` - requires an authorization flow<br>- `ClientCredentials` - doesn't require authorization flow. |
+| **AccessToken** | String | Optional. Defines an OAuth2 access token, relevant when the access token doesn't expire. |
+| **AccessTokenPrepend** | String | Optional. Defines an OAuth2 access token prepend. Default is `Bearer`. |
+| **RefreshToken** | String | Mandatory for OAuth2 auth types. Defines the OAuth2 refresh token. |
+| **TokenEndpoint** | String | Mandatory for OAuth2 auth types. Defines the OAuth2 token service endpoint. |
+| **AuthorizationEndpoint** | String | Optional. Defines the OAuth2 authorization service endpoint. Used only during onboarding or when renewing a refresh token. |
+| **RedirectionEndpoint** | String | Optional. Defines a redirection endpoint during onboarding. |
+| **AccessTokenExpirationDateTimeInUtc** | String | Optional. Defines an access token expiration datetime, in UTC format. Relevant for when the access token doesn't expire, and therefore has a large datetime in UTC, or when the access token has a large expiration datetime. |
+| **RefreshTokenExpirationDateTimeInUtc** | String | Mandatory for OAuth2 auth types. Defines the refresh token expiration datetime in UTC format. |
+| **TokenEndpointHeaders** | Dictionary<string, object> | Optional. Defines the headers when calling an OAuth2 token service endpoint.<br><br>Define a string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
+| **AuthorizationEndpointHeaders** | Dictionary<string, object> | Optional. Defines the headers when calling an OAuth2 authorization service endpoint. Used only during onboarding or when renewing a refresh token.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>, ... }` |
+| **AuthorizationEndpointQueryParameters** | Dictionary<string, object> | Optional. Defines query parameters when calling an OAuth2 authorization service endpoint. Used only during onboarding or when renewing a refresh token.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>, ... }` |
+| **TokenEndpointQueryParameters** | Dictionary<string, object> | Optional. Define query parameters when calling OAuth2 token service endpoint.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>, ... }` |
+| **IsTokenEndpointPostPayloadJson** | Boolean | Optional, default is false. Determines whether query parameters are in JSON format and set in the request POST payload. |
+| **IsClientSecretInHeader** | Boolean | Optional, default is false. Determines whether the `client_id` and `client_secret` values are defined in the header, as is done in the Basic authentication schema, instead of in the POST payload. |
+| **RefreshTokenLifetimeinSecAttributeName** | String | Optional. Defines the attribute name from the token endpoint response, specifying the lifetime of the refresh token, in seconds. |
+| **IsJwtBearerFlow** | Boolean | Optional, default is false. Determines whether you are using JWT. |
+| **JwtHeaderInJson** | Dictionary<string, object> | Optional. Define the JWT headers in JSON format.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>...}` |
+| **JwtClaimsInJson** | Dictionary<string, object> | Optional. Defines JWT claims in JSON format.<br><br>Define a string in the serialized `dictionary<string, object>` format: `{'<attr_name>': <serialized val>, '<attr_name>': <serialized val>, ...}` |
+| **JwtPem** | String | Optional. Defines a secret key, in PEM Pkcs1 format: `'--BEGIN RSA PRIVATE KEY--\r\n{privatekey}\r\n--END RSA PRIVATE KEY--\r\n'`<br><br>Make sure to keep the `'\r\n'` code in place. |
+| **RequestTimeoutInSeconds** | Integer | Optional. Determines timeout in seconds when calling token service endpoint. Default is 180 seconds |
+
+Here's an example of how an OAuth2 configuration might look:
-#### Session authType parameters
-
-|Name |Type |Description |
-||||
-|**QueryParameters** | String | Optional. A list of query parameters, in the serialized `dictionary<string, string>` format: <br><br>`{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
-|**IsPostPayloadJson** | Boolean | Optional. Determines whether the query parameters are in JSON format. |
-|**Headers** | String. | Optional. Defines the header used when calling the endpoint to get the session ID, and when calling the endpoint API. <br><br> Define the string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
-|**SessionTimeoutInMinutes** | String | Optional. Defines a session timeout, in minutes. |
-|**SessionIdName** | String | Optional. Defines an ID name for the session. |
-|**SessionLoginRequestUri** | String | Optional. Defines a session login request URI. |
--
+```json
+"pollingConfig": {
+ "auth": {
+ "authType": "OAuth2",
+ "authorizationEndpoint": "https://accounts.google.com/o/oauth2/v2/auth?access_type=offline&prompt=consent",
+ "redirectionEndpoint": "https://portal.azure.com/TokenAuthorize",
+ "tokenEndpoint": "https://oauth2.googleapis.com/token",
+ "authorizationEndpointQueryParameters": {},
+ "tokenEndpointHeaders": {
+ "Accept": "application/json"
+ },
+ "TokenEndpointQueryParameters": {},
+ "isClientSecretInHeader": false,
+ "scope": "https://www.googleapis.com/auth/admin.reports.audit.readonly",
+ "grantType": "authorization_code",
+ "contentType": "application/x-www-form-urlencoded",
+ "FlowName": "AuthCode"
+ },
+```
+#### Session authType parameters
+| Name | Type | Description |
+| | -- | |
+| **QueryParameters** | Dictionary<string, object> | Optional. A list of query parameters, in the serialized `dictionary<string, string>` format: <br><br>`{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
+| **IsPostPayloadJson** | Boolean | Optional. Determines whether the query parameters are in JSON format. |
+| **Headers** | Dictionary<string, object> | Optional. Defines the header used when calling the endpoint to get the session ID, and when calling the endpoint API. <br><br> Define the string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
+| **SessionTimeoutInMinutes** | String | Optional. Defines a session timeout, in minutes. |
+| **SessionIdName** | String | Optional. Defines an ID name for the session. |
+| **SessionLoginRequestUri** | String | Optional. Defines a session login request URI. |
-### request configuration
+### Request configuration
The `request` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters:
-|Name |Type |Description |
-||||
-|**apiEndpoint** | String | Mandatory. Defines the endpoint to pull data from. |
-|**httpMethod** |String | Mandatory. Defines the API method: `GET` or `POST` |
-|**queryTimeFormat** | String, or *UnixTimestamp* or *UnixTimestampInMills* | Mandatory. Defines the format used to define the query time. <br><br>This value can be a string, or in *UnixTimestamp* or *UnixTimestampInMills* format to indicate the query start and end time in the UnixTimestamp. |
-|**startTimeAttributeName** | String | Optional. Defines the name of the attribute that defines the query start time. |
-|**endTimeAttributeName** | String | Optional. Defines the name of the attribute that defines the query end time. |
-|**queryTimeIntervalAttributeName** | String. | Optional. Defines the name of the attribute that defines the query time interval. |
-|**queryTimeIntervalDelimiter** | String | Optional. Defines the query time interval delimiter. |
-|**queryWindowInMin** | String | Optional. Defines the available query window, in minutes. <br><br>Minimum value: `5` |
-|**queryParameters** | String | Optional. Defines the parameters passed in the query in the [`eventsJsonPaths`](#eventsjsonpaths) path. <br><br>Define the string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
-|**queryParametersTemplate** | String object | Optional. Defines the query parameters template to use when passing query parameters in advanced scenarios. <br><br>For example: `"queryParametersTemplate": "{'cid': 1234567, 'cmd': 'reporting', 'format': 'siem', 'data': { 'from': '{_QueryWindowStartTime}', 'to': '{_QueryWindowEndTime}'}, '{_APIKeyName}': '{_APIKey}'}"` |
-|**isPostPayloadJson** | Boolean | Optional. Determines whether the POST payload is in JSON format. |
-|**rateLimitQPS** | Double | Optional. Defines the number of calls or queries allowed in a second. |
-|**timeoutInSeconds** | Integer | Optional. Defines the request timeout, in seconds. |
-|**retryCount** | Integer | Optional. Defines the number of request retries to try if needed. |
-|**headers** | String | Optional. Defines the request header value, in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
---
-### response configuration
+| Name | Type | Description |
+| - | - | -- |
+| **apiEndpoint** | String | Mandatory. Defines the endpoint to pull data from. |
+| **httpMethod** | String | Mandatory. Defines the API method: `GET` or `POST` |
+| **queryTimeFormat** | String, or *UnixTimestamp* or *UnixTimestampInMills* | Mandatory. Defines the format used to define the query time. <br><br>This value can be a string, or in *UnixTimestamp* or *UnixTimestampInMills* format to indicate the query start and end time in the UnixTimestamp. |
+| **startTimeAttributeName** | String | Optional. Defines the name of the attribute that defines the query start time. |
+| **endTimeAttributeName** | String | Optional. Defines the name of the attribute that defines the query end time. |
+| **queryTimeIntervalAttributeName** | String | Optional. Defines the name of the attribute that defines the query time interval. |
+| **queryTimeIntervalDelimiter** | String | Optional. Defines the query time interval delimiter. |
+| **queryWindowInMin** | Integer | Optional. Defines the available query window, in minutes. <br><br>Minimum value: `5` |
+| **queryParameters** | Dictionary<string, object> | Optional. Defines the parameters passed in the query in the [`eventsJsonPaths`](#eventsjsonpaths) path. <br><br>Define the string in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }`. |
+| **queryParametersTemplate** | String | Optional. Defines the query parameters template to use when passing query parameters in advanced scenarios. <br><br>For example: `"queryParametersTemplate": "{'cid': 1234567, 'cmd': 'reporting', 'format': 'siem', 'data': { 'from': '{_QueryWindowStartTime}', 'to': '{_QueryWindowEndTime}'}, '{_APIKeyName}': '{_APIKey}'}"` <br><br>`{_QueryWindowStartTime}` and `{_QueryWindowEndTime}` are only supported in the `queryParameters` and `queryParametersTemplate` request parameters. <br><br>`{_APIKeyName}` and `{_APIKey}` are only supported in the `queryParametersTemplate` request parameter. |
+| **isPostPayloadJson** | Boolean | Optional. Determines whether the POST payload is in JSON format. |
+| **rateLimitQPS** | Double | Optional. Defines the number of calls or queries allowed in a second. |
+| **timeoutInSeconds** | Integer | Optional. Defines the request timeout, in seconds. |
+| **retryCount** | Integer | Optional. Defines the number of request retries to try if needed. |
+| **headers** | Dictionary<string, object> | Optional. Defines the request header value, in the serialized `dictionary<string, object>` format: `{'<attr_name>': '<serialized val>', '<attr_name>': '<serialized val>'... }` |
+
+### Response configuration
The `response` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters:
The following code shows an example of the [eventsJsonPaths](#eventsjsonpaths) v
```
-### paging configuration
+### Paging configuration
The `paging` section of the [pollingConfig](#configure-your-connectors-polling-settings) configuration includes the following parameters:
The `paging` section of the [pollingConfig](#configure-your-connectors-polling-s
The following code shows an example of the `pollingConfig` section of the [CCP configuration](#create-a-connector-json-configuration-file) file:
-```rest
+```json
"pollingConfig": { "auth": { "authType": "APIKey",
After creating your [JSON configuration file](#create-a-connector-json-configura
If you're using a [template configuration file with placeholder data](#add-placeholders-to-your-connectors-json-configuration-file), send the data together with the `placeHolderValue` attributes that hold the user data. For example:
- ```rest
+ ```json
"requestConfigUserInputValues": [ { "displayText": "<A display name>",
Use one of the following methods:
- **API**: Use the [DISCONNECT](/rest/api/securityinsights/preview/data-connectors/disconnect) API to send a PUT call with an empty body to the following URL:
- ```rest
+ ```http
    https://management.azure.com/subscriptions/{{SUB}}/resourceGroups/{{RG}}/providers/Microsoft.OperationalInsights/workspaces/{{WS-NAME}}/providers/Microsoft.SecurityInsights/dataConnectors/{{Connector_Id}}/disconnect?api-version=2021-03-01-preview
    ```
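
    For example, with the Azure CLI, the same call might be issued as follows (the subscription, resource group, workspace, and connector ID values are placeholders):

    ```azurecli-interactive
    az rest --method PUT \
      --uri "https://management.azure.com/subscriptions/{{SUB}}/resourceGroups/{{RG}}/providers/Microsoft.OperationalInsights/workspaces/{{WS-NAME}}/providers/Microsoft.SecurityInsights/dataConnectors/{{Connector_Id}}/disconnect?api-version=2021-03-01-preview" \
      --body "{}"
    ```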
Use one of the following methods:
If you haven't yet, share your new codeless data connector with the Microsoft Sentinel community! Create a solution for your data connector and share it in the Microsoft Sentinel Marketplace.
-For more information, see [About Microsoft Sentinel solutions](sentinel-solutions.md).
+For more information, see [About Microsoft Sentinel solutions](sentinel-solutions.md).
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
The following fields are used to represent that inspection which a security devi
| Field | Class | Type | Description |
| --- | --- | --- | --- |
-| **NetworkRuleName** | Optional | String | The name or ID of the rule by which [DvcAction](#dvcaction) was decided upon.<br><br> Example: `AnyAnyDrop` |
-| **NetworkRuleNumber** | Optional | Integer | The number of the rule by which [DvcAction](#dvcaction) was decided upon.<br><br>Example: `23` |
-| **Rule** | Mandatory | Alias | Either `NetworkRuleName` or `NetworkRuleNumber`. |
+| <a name="networkrulename"></a>**NetworkRuleName** | Optional | String | The name or ID of the rule by which [DvcAction](#dvcaction) was decided upon.<br><br> Example: `AnyAnyDrop` |
+| <a name="networkrulenumber"></a>**NetworkRuleNumber** | Optional | Integer | The number of the rule by which [DvcAction](#dvcaction) was decided upon.<br><br>Example: `23` |
+| **Rule** | Mandatory | String | Either the value of [NetworkRuleName](#networkrulename) or the value of [NetworkRuleNumber](#networkrulenumber). Note that if the value of [NetworkRuleNumber](#networkrulenumber) is used, the type should be converted to string. |
| **ThreatId** | Optional | String | The ID of the threat or malware identified in the network session.<br><br>Example: `Tr.124` |
| **ThreatName** | Optional | String | The name of the threat or malware identified in the network session.<br><br>Example: `EICAR Test File` |
| **ThreatCategory** | Optional | String | The category of the threat or malware identified in the network session.<br><br>Example: `Trojan` |
virtual-wan Howto Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-firewall.md
A **secured hub** is an Azure Virtual WAN hub with Azure Firewall. This article
## Before you begin
-The steps in this article assume that you have already deployed a virtual WAN with one or more hubs.
+The steps in this article assume that you've already deployed a virtual WAN with one or more hubs.
To create a new virtual WAN and a new hub, use the steps in the following articles:

* [Create a virtual WAN](virtual-wan-site-to-site-portal.md#openvwan)
* [Create a hub](virtual-wan-site-to-site-portal.md#hub)
+> [!IMPORTANT]
+> Virtual WAN is a collection of hubs and services made available inside the hub. You can have as many virtual WANs as you need. In a Virtual WAN hub, there are multiple services, such as VPN and ExpressRoute. If the region supports Availability Zones, each of these services is automatically deployed across **Availability Zones**, *except* Azure Firewall. To deploy an Azure Firewall with Availability Zones (recommended) in a secured virtual WAN hub, use [this article](https://docs.microsoft.com/azure/firewall-manager/secure-cloud-network).
+
## View virtual hubs

The **Overview** page for your virtual WAN shows a list of virtual hubs and secured hubs. The following figure shows a virtual WAN with no secured hubs.

## Convert to secured hub
-1. On the **Overview** page for your virtual WAN, select the hub that you want to convert to a secured hub. On the virtual hub page, you see two options to deploy Azure Firewall into this hub. Select either option.
+1. On the **Overview** page for your virtual WAN, select the hub that you want to convert to a secured hub.
+
+2. Once in the hub properties, select **Azure Firewall and Firewall Manager** under the "Security" section on the left:
+
+ :::image type="content" source="./media/howto-firewall/vwan-convert-firewall-start.png" alt-text="Screenshot showing Azure Virtual WAN Hub properties." lightbox="./media/howto-firewall/vwan-convert-firewall-start.png":::
+
+3. Select the **Next: Azure Firewall** button at the bottom of the screen:
- :::image type="content" source="./media/howto-firewall/security.png" alt-text="Screenshot shows the Overview page for your virtual WAN where you can select either Convert to secure hub or Azure Firewall." lightbox="./media/howto-firewall/security.png":::
+ :::image type="content" source="./media/howto-firewall/vwan-select-hub.png" alt-text="Screenshot showing [Select virtual hubs] step in the conversion flow" lightbox="./media/howto-firewall/vwan-select-hub.png":::
-1. After you select one of the options, you see the **Convert to secure hub** page. Select a hub to convert, and then select **Next: Azure Firewall** at the bottom of the page.
+4. Select the desired Azure Firewall properties and status, and then complete the wizard up to the **Review + confirm** tab:
- :::image type="content" source="./media/howto-firewall/select-hub.png" alt-text="Screenshot of Convert to secure hub with a hub selected." lightbox="./media/howto-firewall/select-hub.png":::
-1. After completing the workflow, select **Confirm**.
+ :::image type="content" source="./media/howto-firewall/vwan-firewall-properties-conversion.png" alt-text="Screenshot showing the [Azure Firewall] step in the conversion flow." lightbox="./media/howto-firewall/vwan-firewall-properties-conversion.png":::
- :::image type="content" source="./media/howto-firewall/confirm.png" alt-text="Screenshot shows the Convert to secure hub pane with Confirm selected." lightbox="./media/howto-firewall/confirm.png":::
-1. After the hub has been converted to a secured hub, you can view it on the virtual WAN **Overview** page.
+> [!NOTE]
+> As noted at the beginning of this article, the procedure described here doesn't enable Availability Zones for Azure Firewall.
- :::image type="content" source="./media/howto-firewall/secured-hub.png" alt-text="Screenshot of view secured hub." lightbox="./media/howto-firewall/secured-hub.png":::
+5. After the hub has been converted to a secured hub, the Azure Firewall status is reported as shown in the following image:
+
+ :::image type="content" source="./media/howto-firewall/vwan-firewall-secured-final.png" alt-text="Screenshot showing end result of the conversion flow." lightbox="./media/howto-firewall/vwan-firewall-secured-final.png":::
## View hub resources
From the virtual WAN **Overview** page, select the secured hub. On the hub page, you can view all the virtual hub resources, including Azure Firewall.
-To view Azure Firewall settings from the secured hub, under **Security**, select **Secured virtual hub settings**.
+To view Azure Firewall settings from the secured hub, select **Azure Firewall and Firewall Manager** under the **Security** section on the left:
++
+You can check whether Azure Firewall in the Virtual WAN hub uses Availability Zones by accessing the security properties of the hub, as shown in the following screenshot:
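If you'd rather verify this programmatically than in the portal, a minimal sketch is shown below. It assumes you've exported the firewall's resource definition to a hypothetical `firewall.json` file (for example, from the portal's JSON view) and simply inspects its `zones` property:

```python
import json

# "firewall.json" is a placeholder file name for the exported resource definition.
with open("firewall.json") as f:
    firewall = json.load(f)

zones = firewall.get("zones") or []
if zones:
    print(f"Azure Firewall is deployed across Availability Zones: {', '.join(zones)}")
else:
    print("Azure Firewall is not using Availability Zones.")
```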
+
## Configure additional settings
To configure additional Azure Firewall settings for the virtual hub, select the link to **Azure Firewall Manager**. For information about firewall policies, see [Azure Firewall Manager](../firewall-manager/secure-cloud-network.md#create-a-firewall-policy-and-secure-your-hub). To return to the hub **Overview** page, you can navigate back by clicking the path, as shown by the arrow in the following figure.
## Upgrade to Azure Firewall Premium
-At any time, it is possible to upgrade from Azure Firewall Standard to Premium following these [instructions](../firewall/premium-migrate.md#migrate-a-secure-hub-firewall). This operation will require a maintenance windows since some minimal downtime will be generated.
+At any time, you can upgrade from Azure Firewall Standard to Premium by following these [instructions](../firewall/premium-migrate.md#migrate-a-secure-hub-firewall). Plan a maintenance window for this operation, because it causes a brief period of downtime.
## Next steps